asp.net mvc, blogging, code

When someone says they want to write a technical book, I take a hammer, slam it on the aspiring author’s thumb, and ask “How do you like that?” If the answer is, “Very much! May I have another,” this person is ready to write a technical book.

Sure, writing a book always starts off full of exciting possibilities. But it soon devolves into drudgery and pain with the constant question of whether the time spent is worth it when weighed against all your other obligations and opportunities in life.

But no matter how much I sometimes hate the process of writing a book, I do love this moment. The moment the fruits of your labor are delivered in a cardboard box and you open it up to see your name on a glossy cover with a speedy looking bobsled!

mvc4-books

But soon after, the big question sets in: “what the hell am I going to do with all these books?!” (I’ll bring some to MonkeySpace in October! And maybe I’ll give a few away via the internet if I can think of a good criterion for giving them out.)

At long last, the Wrox Professional ASP.NET MVC 4 book written by me and my esteemed coauthors, Jon Galloway, K. Scott Allen, and Brad Wilson, is available (in Kindle and Print formats) on Amazon.

As Jon points out in his blog post, the Kindle version looks nice and is in color (on devices that support color of course).

Real World Development

This book includes a new chapter that I really did enjoy writing. The focus of the chapter is building a real world ASP.NET MVC application using the NuGet Gallery as a case study. The NuGet Gallery is the code behind http://nuget.org/, the online public host for thousands of NuGet packages with millions of package installs.

This is a great case study because it’s easy to visit the site while you read the chapter and you can easily grab the source code. The chapter covers the packages used to build the gallery as well as some of the trade-offs the NuGet Gallery team made. For example, there are some places where we rolled our own features rather than using the built-in defaults.

Ironically, because we haven’t upgraded the NuGet Gallery yet, the content of the chapter describes an ASP.NET MVC 3 project. But all of the concepts and packages apply equally well to an ASP.NET MVC 4 project.

If you’re building real web applications with ASP.NET MVC, I think you’ll find at least one thing useful in this chapter. Hopefully more!

Credits

I really must give a lot of credit to my coauthors who really knocked it out of the park. These are some smart folks I have the pleasure of working with. Extra kudos to Jon Galloway who was more or less the project manager and chief cheerleader in organizing the rest of us to get this book out the door.

I also want to give a big thanks to Eilon Lipton, our technical reviewer. He’s still the development lead responsible for the team that works on ASP.NET MVC and Web API, and he’s a meticulous reviewer.

What’s Next?

ASP.NET MVC 4 is the last version of ASP.NET MVC that I had any involvement with. I don’t plan to be a coauthor on the ASP.NET MVC 5/Web API 2 book at this point. I’m just not likely to be involved enough to have the level of insight that I’ve had in the past. Also, it’s really hard work.

So you should buy this book because it’ll be a collector’s item! Or, you can take the advice of another former coauthor of mine who said Don’t Buy This Book right after we finished one together. Either way, I won’t hold it against you. (But do buy it anyways just to spite Atwood.)

Will I write another book again? Maybe, but if I do it won’t be like any of the books I’ve worked on in the past. I have some ideas to write a fiction piece about a shimmery vampire wizard who goes to a school named PigPimples under the control of a spoiled brat king named Jeffrey who hates his midget uncle Teeryon Bannister. I even have a catchy phrase, a Bannister always pays his dates.

If writing fiction somehow doesn’t work out, I might be interested in writing a book that explores more of the people aspects of software development, or its core principles: something that doesn’t change and get outdated every year.

Or perhaps I’ll just keep randomly slamming keys on my keyboard and publish the result on my blog and call it a blog post. At least I still have fun doing that.

code

I was once accused of primitive obsession. Especially when it comes to strings. Guilty as charged!

There’s a lot of reasons to be obsessed with string primitives. Many times, the data really is just a string and encapsulating it in some custom type is just software “designerbation.” Also, strings are special and the .NET Framework heavily optimizes strings through techniques like string interning and providing classes like the StringBuilder.

But in many cases, a strongly typed class that better represents the domain is the right thing to do. I think System.Uri and its corresponding UriBuilder is a prime example. When you work with URLs, there are security implications that very few people get right if you treat them just as strings.

But there’s a third scenario that I often run into now that I build client applications with the Model View ViewModel (MVVM) pattern. Many properties on a view model correspond to user input. Often these properties are populated via data binding with input controls on the view. As such, these properties often need to be able to hold invalid values that represent the input the user entered.

For example, suppose I want a user to type in a URL. I might have a property like:

public string YourBlogUrl { get; set; }

I can’t change the type of that property to a Uri because as the user types, the intermediate values bound to the property are not valid instances of Uri. For example, as the user types the first “h” in “http”, trying to bind the input value to the Uri property would fail.
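
You can see the failure for yourself by trying to construct a Uri from a partial value:

// Throws a UriFormatException because "h" is not a valid absolute URI yet.
var uri = new Uri("h");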

But this sucks. Suppose I want to display the host name on the screen as soon as one can (more or less) be determined from the input. I’d love to be able to just bind another control to YourBlogUrl.Host. But alas, the string type does not have a Host property.

Ideally I would have some middle ground where the type of that property has structure, but still allows me to hold invalid values. Perhaps it has methods to convert it to a stricter type once we validate the value. In this case, a ToUri method would make sense.

But string is sealed, so we can’t derive from it. What’s a hapless coder to do?

Custom string types through implicit conversions

Well you could use the StringOr<T> class written as an April Fool’s joke. It was a joke, but it might be useful in cases like this! But that’s not the approach I’ll take.

Or you can follow the advice of Jimmy Bogard in his post on primitive obsession that I linked to at the beginning (I’m sure he’ll love that I dragged out a post he wrote five years ago) and write a custom class that’s implicitly convertible to string.

In his post, he shows a ZipCodeString example which I will include below, but with one change. The very last method is a conversion overload and I changed it from explicit to implicit.

public class ZipCodeString
{
    private readonly string _value;

    public ZipCodeString(string value)
    {
        // perform regex matching to verify XXXXX or XXXXX-XXXX format
        _value = value;
    }

    public string Value
    {
        get { return _value; }
    }

    public override string ToString()
    {
        return _value;
    }

    public static implicit operator string(ZipCodeString zipCode)
    {
        return zipCode.Value;
    }

    public static implicit operator ZipCodeString(string value)
    {
        return new ZipCodeString(value);
    }
}

This allows you to write code like:

ZipCodeString zip = "98008";

This provides the ease of a string to initialize a ZipCodeString type, while at the same time it provides access to the structure of a zip code.

In the interest of full disclosure, many people have a strong feeling against implicit conversions. I asked Jon Skeet, Number one dude on StackOverflow and perhaps as well versed in C# as just about anybody in the world, to review a draft of this post as I didn’t want to propagate bad practices without due warning. Here’s what he said:

Personally I really dislike implicit conversions. I don’t even like explicit conversions much - I prefer methods. So if I’m writing XML serialization code, I’ll usually have a FromXElement static method, and a ToXElement instance method. It definitely makes the code longer, but I reckon it’s ultimately clearer. (It also allows for several conversions from the same type with different meanings - e.g. Duration.FromHours, Duration.FromMinutes etc.)

I don’t think I’d ever expose an implicit conversion like this in a public API that’s meant to share with others. But within my own code, I like it so far. If I get bitten by it, maybe I’ll change my tune and Skeet can tell me, “I told you so!”
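
For contrast, here’s a rough sketch of the method-based approach Jon describes, applied to the zip code example. The names here are my own illustration, not his code.

public class ZipCode
{
    private readonly string _value;

    private ZipCode(string value)
    {
        _value = value;
    }

    // An explicit, named factory method instead of an implicit conversion from string.
    public static ZipCode FromString(string value)
    {
        // perform regex matching to verify XXXXX or XXXXX-XXXX format
        return new ZipCode(value);
    }

    // An explicit instance method instead of an implicit conversion to string.
    public override string ToString()
    {
        return _value;
    }
}

The trade-off is exactly the one Jon calls out: the calling code gets longer (ZipCode.FromString("98008") instead of a plain assignment), but the conversion is impossible to miss when reading it.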

Taking it further

I like Jimmy’s approach, but it doesn’t go far enough for my needs. For example, this works great when you employ this approach from the start. But what if you already shipped version 1 of a property as a string? And now you want to change that property to a ZipCodeString. But you have existing values serialized to disk. Or maybe you need to pass this ZipCodeString to a JSON endpoint. Is that going to serialize ok?

In my case, I often want these types to act as much like strings as possible. That way, if I change a property from string to one of these types, it’ll break as little of my code as possible (if any).

What this means is we need to write a lot more boilerplate code. For example, override the Equals method and operator. In other cases, you may want to override the addition operator. I did this with a PathString class that represents file paths so I could write code like this:

// The code I recommend writing.
PathString somePath = @"c:\fake\path";
somePath = somePath.Combine("subfolder");
// somePath == @"c:\fake\path\subfolder";

// But if you did this by accident:
PathString somePath = @"c:\fake\path";
somePath += "subfolder";
// somePath == @"c:\fake\path\subfolder";

PathString has a proper Combine method, but I see code where people attempt to concatenate paths all the time. PathString overrides the addition operator, creating an idiom where concatenation is equivalent to path combination. This may end up being a bad idea; we’ll see. My feeling is that if you’re already concatenating paths, this can only make it better.

I also implemented ISerializable and IXmlSerializable to make sure that, for example, the serialized representation of PathString looks exactly like a string.

Since I have multiple types like this, I tried to push as much of the boilerplate into a base class. But it takes some tricky tricky tricks that might be a little bit evil.

Here’s the signature of the base class I wrote:

[Serializable]
public abstract class StringEquivalent<T> 
  : ISerializable, IXmlSerializable where T : StringEquivalent<T>
{
    protected StringEquivalent(string value);
    protected StringEquivalent();
    public abstract T Combine(string addition);
    public static T operator +(StringEquivalent<T> a, string b);
    public static bool operator ==(StringEquivalent<T> a, StringEquivalent<T> b);
    public static bool operator !=(StringEquivalent<T> a, StringEquivalent<T> b);
    public override bool Equals(Object obj);
    public bool Equals(T stringEquivalent);
    public virtual bool Equals(string other);
    public override int GetHashCode();
    public override string ToString();
    // Implementations of ISerializable and IXmlSerializable
}

The full implementation is available in my CodeHaacks repo on GitHub with full unit tests and examples.
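
To give a feel for how a derived type plugs into this base class, here’s a minimal sketch of a PathString built against that signature. This is my own simplification and assumes ToString() returns the underlying string value; the actual implementation in the repo may differ.

[Serializable]
public class PathString : StringEquivalent<PathString>
{
    public PathString(string value) : base(value)
    {
    }

    protected PathString() // parameterless constructor for XML serialization
    {
    }

    // The + operator defined in the base class delegates to Combine,
    // so somePath + "subfolder" behaves like a path combine.
    public override PathString Combine(string addition)
    {
        return new PathString(System.IO.Path.Combine(ToString(), addition));
    }

    public static implicit operator PathString(string value)
    {
        return value == null ? null : new PathString(value);
    }
}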

Self Referencing Generic Constraints

There’s some stuff in here that just seemed crazy to me at first. For example, taking out the interfaces, did you notice the generic type declaration?

public abstract class StringEquivalent<T> where T : StringEquivalent<T>

Notice that the generic constraint is self-referencing. This is a pattern that Eric Lippert discourages:

Yes it is legal, and it does have some legitimate uses. I see this pattern rather a lot(**). However, I personally don’t like it and I discourage its use.

This is a C# variation on what’s called the Curiously Recurring Template Pattern in C++, and I will leave it to my betters to explain its uses in that language. Essentially the pattern in C# is an attempt to enforce the usage of the CRTP.

…snip…

So that’s one good reason to avoid this pattern: because it doesn’t actually enforce the constraint you think it does.

…snip…

The second reason to avoid this is simply because it bakes the noodle of anyone who reads the code.

Again, Jon Skeet provided an example of the problem Lippert warns about: the constraint doesn’t actually enforce what I might wish it to enforce.

While you’re not fully enforcing a constraint, the constraint which you have got doesn’t prevent some odd stuff. For example, it would be entirely legitimate to write:

public class ZipCodeString : StringEquivalent<ZipCodeString>

public class WeirdAndWacky : StringEquivalent<ZipCodeString>

That’s legal, and we don’t really want it to be. That’s the kind of thing Eric was trying to avoid, I believe.

The reason I chose to go against the recommendation of someone much smarter than me in this case is that my goal isn’t to enforce these constraints at all. It’s to enable a scenario. This is the only way to implement these various operator overloads in a base class. Without these constraints, I’d have to reimplement them for every class. If you know a better approach, I’m all ears.

WPF Value Converter and Markup Extension Examples

As a bonus divergence, I thought I’d throw in one more example of a self-referencing generic constraint. In WPF, there’s a concept of a value converter, IValueConverter, used to convert values from XAML to your view model and vice versa. However, the mechanism to declare and use value converters is really clunky.

Josh Twist provides a nice example that cleans up the syntax with value converters that are also MarkupExtension. I decided to take it further and write a base class that does it.

public abstract class ValueConverterMarkupExtension<T> 
    : MarkupExtension, IValueConverter where T : class, IValueConverter, new()
{
  static T converter;

  public override object ProvideValue(IServiceProvider serviceProvider)
  {
    return converter ?? (converter = new T());
  }

  public abstract object Convert(object value, Type targetType
    , object parameter, CultureInfo culture);

  // Only override this if this converter might be used with 2-way data binding.
  public virtual object ConvertBack(object value
    , Type targetType, object parameter, CultureInfo culture)
  {
    return DependencyProperty.UnsetValue;
  }
}

I’m sure I’m not the first to do something like this.

Now all my value converters inherit from this base class.
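
For example, a derived converter might look something like the following. This is a hypothetical converter of my own for illustration, and it assumes the usual WPF namespaces (System, System.Globalization, and System.Windows for Visibility).

// A hypothetical converter to illustrate usage of the base class.
public class BoolToVisibilityConverter
    : ValueConverterMarkupExtension<BoolToVisibilityConverter>
{
    public override object Convert(object value, Type targetType,
        object parameter, CultureInfo culture)
    {
        return value is bool && (bool)value
            ? Visibility.Visible
            : Visibility.Collapsed;
    }
}

Because the base class is a MarkupExtension, the XAML doesn’t need a separate resource declaration; you can write Converter={local:BoolToVisibilityConverter} directly in the binding (where local maps to the converter’s namespace).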

Back to Primitives

Back to the original topic, I used to supplement primitives with loads of extension methods. I have a set of extension methods on string I use quite a bit. But more and more, I’m starting to prefer dialing that back a bit in cases like this where I need something to be a string with structure.

code, git, github, community, tech

Next week my wife and I celebrate our tenth anniversary in Oahu with the kids. It’s been a great ten years and I’m just so lucky to have such a wonderful woman and partner in my life along with two deviously great kids.

And what better way to celebrate an anniversary than to give a talk on Git and GitHub for Windows Developers!

UPDATE: Immediately after the talk we’re going to have a drinkup!

git-github-logo

Before I go further, I need you to soak in that logo for a minute. At first glance, it looks like it was drawn by a five-year-old phoning in a homework assignment. But let it wash over you and the awesomeness starts to make itself apparent.

It’s a commission from http://www.horriblelogos.com/ where you can spend…

$5 for a logo guaranteed to suck

You might even learn something from these logos. For example, maybe you’ve heard the term “Map Reduce” and know it’s probably useful, but don’t understand what it means. You can thank me later for the following:

horrible-logos-map-reduce

But I digress.

This is my first time speaking in Hawaii and I’m excited. I hope this begins a trend of being invited to speak in lush tropical islands.

code

If you look hard enough at our industry (really at all industries), you’ll find many implicit quotas in play. For example, some companies demand a minimum number of hours worked per week.

This reminds me of an apocryphal story of the “know-where man”. Here’s one variant of this famous legend as described on Snopes:

Nikola Tesla visited Henry Ford at his factory, which was having some kind of difficulty. Ford asked Tesla if he could help identify the problem area. Tesla walked up to a wall of boilerplate and made a small X in chalk on one of the plates. Ford was thrilled, and told him to send an invoice.

The bill arrived, for $10,000. Ford asked for a breakdown. Tesla sent another invoice, indicating a $1 charge for marking the wall with an X, and $9,999 for knowing where to put it.

In this variant, Ford is surprised by the price because $10,000 is a lot to pay for a few minutes of work. But as Tesla points out, he’s not paying for Tesla’s time, he’s paying for a solution to an expensive problem.

Another example is the idea of measuring a developer’s productivity by lines of code. Unless you sell code by the line, this is also pointless as Bill Gates once pointed out:

Measuring programming progress by lines of code is like measuring aircraft building progress by weight

Set working hours is another example of a poor quota. Developers aren’t paid for lines of code, number of hours in the office, or being in the office at certain hours. They’re paid to create value!

I got to thinking about this after reading an article completely unrelated to software: this heart-wrenching and infuriating account of young offenders being enlisted as confidential informants and placed in extremely dangerous situations far out of proportion to the gravity of their alleged crimes.

One thing in particular caught my attention:

Mitchell McLean has come to see his son’s death as the result of an equally cynical and utilitarian calculation. “The cops, they get federal funding by the number of arrests they make—to get the money, you need the numbers,” he explained, alluding to, among other things, asset-forfeiture laws that allow police departments to keep a hefty portion of cash and other resources seized during drug busts.

Notice the incentive here. The focus is on number of arrests. This focuses on a symptom, but not on the actual desired outcome.

That’s the problem with quotas. They rarely lead to the actual outcome you want. They simply reward gaming the quota by any means necessary.

This is not to say that all quotas are useless. Perhaps there are cases where they are called for. But they have to overcome the dreaded law of unintended consequences.

The law of unintended consequences, often cited but rarely defined, is that actions of people—and especially of government—always have effects that are unanticipated or unintended.

I imagine a good quota would be one that brings the system closer to the desired outcome while avoiding unintended consequences that would leave the overall system in worse shape than before. For example, perhaps if the gulf between your current state and the desired outcome is huge, a quota might help make small gains.

If you have examples where you think quotas produce the desired outcome with negligible unintended consequences, please do comment.

community, code, company culture, github

Back in March of this year I had the honor and delight to give the opening keynote at CodeMania, a new conference in New Zealand.

This conference was something special. I mean, just look at their beautiful lucha libre inspired site design.

codemania

Although inexplicably, they switched to a pirate theme when it came to my profile image. Even so, it’s fun and the Twitter integration is a nice touch. It’s time for me to tweet something very inappropriate.

phil-at-codemania

On a personal level, this was a particularly special conference for me as it was the first time I’ve been asked to deliver a keynote. The topic I chose was about the love of coding and some of the barriers that can dampen that love.

I touched upon some themes that I’ve written about here, such as why we should care about the lack of women in our industry, as well as the benefits of a work environment where employees feel trusted and fulfilled. I also riffed a bit about the GitHub work environment based on my brief experience there as well as the blog posts by Zach Holman on How GitHub Works. It was a privilege and a lot of fun to give a talk that’s very different from the ones I usually give.

Not only that, but apparently the talk touched a nerve for at least one person who tweeted that this talk made him leave his job in search of a better one!

You can watch the talk on Youtube. Note that the title of my talk includes a swear word (I explain why). I know some of you are sensitive to that so I thought I’d warn you in case you’re watching this out loud with children around and would rather they not hear such language.

You can view my “slides” here on GitHub. The slides are actually a web page using Impress.js. You’ll probably need to use Safari or Chrome to view them. As I mentioned before, I’ve been posting my recent talks on GitHub. The GitHub project page for my talks is here if you want to clone them locally.

I should also reiterate that this talk was delivered to a New Zealand audience. Hence the crack at the expense of Aussies. I have nothing against Australians. Some of my close friends are Aussies. But when you’re in the land of Kiwis, you do what you need to in order to get the message across. Apparently the relationship between Kiwis and Aussies is not unlike that of Canadians and Americans. No real hate back and forth, but a lot of mutual ribbing.

company culture

Today on Twitter, I noticed this tweet from Dare Obasanjo (aka @carnage4life on Twitter) critical of a blog post by Rand Fishkin, co-founder and CEO of SEOMoz.

Why you shouldnt take lessons from inexperienced managers. Replaces to-the-point email with lengthy BS no one’ll read -http://t.co/u1vu8sUS

Dare is one of those folks who is very thoughtful in what he blogs and tweets. Most of what he posts is worthwhile, so naturally I clicked through and read the post. In the post, Rand poses a hypothetical scenario (emphasis mine):

For example, let’s say Company X has been having trouble with abuse of work-from-home privileges. Managers are finding that more and more people are getting less accomplished and *a primary suspect is a lack of coming into the office*. The problem is circulated at the executive team meeting and a decision is made to change the work-from-home policy to provide greater analytics and visibility. An email is sent to the team that looks like this:

Rand supplies the “typical” corporate response:

To: Allhands@CompanyX.com
From: JoeTheHRManager@CompanyX.com
Subject: New Work-From-Home Policy

Hi Everyone,

Starting next week, we’re making a change in policy around time working out of the office. Employees wishing to work from home must send an explanatory writeup to their manager. It will be at managers’ discretion whether these requests will be accepted.

If you have feedback, please email HR@CompanyX.com

Thanks very much,

Joe

And here’s his “improved” response. To be fair, he makes it clear that he doesn’t think it’s perfect and he’d spend more time on it if he were actually sending such mail.

To: Allhands@CompanyX.com
From: RandtheCEO@CompanyX.com
Subject: Productivity & Working Out of Office

Hi Everyone,

Over the last month, several managers have been concerned about our ability to get collaboration-dependent projects completed. We need a way to better track in-office vs. out-of-office work to help prevent frustration and lost productivity. If you’re planning to work from home or from the road, please email your manager letting them know. If the time doesn’t work, they might ask you to come in.

I know many of you are putting in a ton of effort and a lot of hours, and that this extra layer of communication may be a pain. I’m sorry for that. But, as we’ve learned with all sorts of things growing this company, we can’t improve what we don’t measure, so please help us out and, hopefully, we can make things better for everyone (more work-from-wherever time for those who need it, more in-office collaboration so communication delays don’t hold you back, etc).

If you’ve got any feedback, ideas or feel that we’re being knuckleheads and missing the real problem, please talk to your manager and/or me!

Thanks gang,

Rand

The Golden Rule

Before I comment, I should point out that while I have managed some poor souls in the distant past, I’ve never been a CEO or an HR director. I don’t have years of experience in those fields.

But I do have years of experience being an employee. This makes me especially qualified to critique these emails. Let’s face it. They absolutely reek of manager speak.

I’ve had great managers in the past, but there’s one common trait I’ve noticed from many managers. They’re typically self-centered and frame everything from their scope of influence.

The email could be made so much better by practicing a very simple thing. Put yourself in the audience’s shoes. In other words, practice the Golden Rule. How would you react to such an email if the tables were turned and you were the employee and you got this email from management?

I’d imagine you’d prefer to be spoken to as a peer and an adult, not as a child who needs to be controlled. These emails feel like classic examples of “Theory X,” which Dan Ostlund highlighted in his Fog Creek blog post, Why do we pay sales commissions, where he addresses the dual theories on how management views workers:

The tension between these views of workers was described in the 1960s by Douglas MacGregor in his book The Human Side of Enterprise. He suggested that managers had two views of motivation, and that a manager’s theory of motivation determined company culture. The first view he called Theory X which assumes that people are lazy, want to avoid work and need to be controlled, coerced, punished, and lavishly rewarded in order to perform. Sounds like some sort of S&M dungeon to me. Theory X demands a lot of managerial control and tends to demotivate, generate hostility, and generally make people into sour pusses.

The second he called Theory Y which assumes that people are self-motivated, derive satisfaction from their work, are creative, and thrive when given autonomy.

As you can tell, I strongly subscribe to Theory Y. Perhaps if I was a CEO, I’d change my mind and subscribe to Theory X in a sadistic desire to exert my will over others. Perhaps using a bullwhip.

But I’m not. I’m an employee and I like being treated like an adult. I’m very fortunate to work at a place that is so camped out in Theory Y it’s crazy.

If you do have employees who act like children and only respond to command and control, maybe it’s time to get rid of them.

Attack the Root Problem

So back to these emails. Putting myself in the employee’s shoes, here’s how I might react to them.

If I were working from home productively, I’d be annoyed by the fact that more process is being added to my work day due to the lack of productivity of others.

But maybe I’m one of the people whose productivity has declined. Well I’d probably still be annoyed because the letter misses the point and doesn’t address the real problem.

Note that in the original scenario, I put some emphasis on a phrase:

more and more people are getting less accomplished and *a primary suspect is a lack of coming into the office*.

Both of the proposed responses immediately commit a logical fallacy. Now please, repeat after me: correlation does not imply causation!

The problem is not that people are working from home. The problem is the decline in productivity!

Working from home is only a potential suspect as the cause of this decline. But management runs with this and puts more constraints in place that only serve to annoy employees. That’s putting a band-aid on a problem they admit they don’t yet understand!

The solution is to attack the root problem. Find out what the real cause is and enlist the help of your employees to solve the issue. If I were sending out the email, I’d probably start by sending it to just the managers first (this assumes my company even has managers in the first place):

To: Overhead <managers@CompanyX.com>
From: HaackTheAllPowerfulCEO@CompanyX.com
Subject: Productivity & Working Out of Office

Hi Everyone,

Over the last month, several managers have been concerned about our ability to get collaboration-dependent projects completed. We need to better understand the root cause of why our productivity has declined.

I recommend talking to your reports, clearly stating the problem, and gathering their ideas on how we can improve overall collaboration and productivity. I’m especially interested in what we can do as management to remove any roadblocks that prevent them from being as productive as they’d like.

If you’ve got any feedback, ideas or feel that we’re being knuckleheads and missing the real problem, please talk to me!

Also, whoever used the executive bathroom last, please light a candle next time.

Thanks gang,

Phil

You just might find out that the reason Fred’s productivity declined is that he has a sick child and needs to be able to help out at home during the day. But because your company’s culture is so focused on synchronous collaboration, he can’t really make up the work at night. Asking Fred to come into work more often doesn’t solve anything. But improving your collaboration tools and helping foster a culture that can thrive with asynchronous communication just might!

Strike at the root problem my friends and treat each other with respect. That’s how you talk to employees (and ideally, everyone).

asp.net, asp.net mvc, code

In one mailing list I’m on, someone ran into a problem where they renamed a controller, but ASP.NET MVC could not for the life of it find it. They double-checked everything. But ASP.NET MVC simply reported a 404.

This is usually where I tell folks to run the following NuGet command:

Install-Package RouteDebugger

RouteDebugger is a great way to find out why a route isn’t matching a controller.

In this particular case, the culprit was that the person renamed the controller and forgot to append the “Controller” suffix. So from something like HomeController to Default.

In this post, I’ll talk a little bit about what makes a controller a controller and a potential solution so this mistake is caught automatically.

What’s with the convention anyways?

A question that often comes up is why require a name based convention for controllers in the first place? After all, controllers also must implement the IController interface and that should be enough to indicate the class is a controller.

Great question! That decision was made before I joined Microsoft and I never bothered to ask why. What I did do was make up my own retroactive reasoning, which is as follows.

Suppose we simply took the class name to be the controller name. Also suppose you’re building a page to display a category and you want the URL to be something like /category/unicorns. So you write a controller named Category. Chances are, you also have an entity class named Category that you want to use within the Category controller. So now, a common default situation becomes painful, as illustrated below.
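
To make that pain concrete, here’s a hypothetical illustration. With the suffix convention, the entity and the controller get distinct names and can coexist without constant qualification.

namespace MyApp.Models
{
    public class Category
    {
        public string Name { get; set; }
    }
}

namespace MyApp.Controllers
{
    using System.Web.Mvc;
    using MyApp.Models;

    // If this class were simply named "Category", every reference to the
    // Category entity inside it would need to be fully qualified.
    public class CategoryController : Controller
    {
        public ActionResult Index(string name)
        {
            var category = new Category { Name = name };
            return View(category);
        }
    }
}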

If I could get in a time machine and revisit this decision, would I? Hell no! As my former co-worker Eilon always says, if you have a time machine, there are probably a lot better things you can do with it than fix bad design decisions in ASP.NET MVC.

But if I were to do this again, I’m not so sure I’d require the “Controller” suffix. Instead, I’d suggest using plural controllers (and URLs) to better avoid conflicts. So the controller would be Categories and the URL would be /categories/unicorns. And perhaps I’d still allow the “Controller” suffix as a way to resolve conflicts. So CategoryController would still work fine (as a controller named “category”) if that’s the way you roll.

How do I detect Controller Problems?

Since I didn’t use my time machine to fix this issue (I would rather use it to go back in time and fix line endings and text encodings), what can we do about it?

The simplest solution is to do nothing. How often will you make this mistake?

Then again, when you do, it’s kind of maddening because there’s no error message. The controller just isn’t there. Also, there are other mistakes you could make, though many are unlikely. All of the following look like they were intended to be controllers, but aren’t.

public class Category : Controller
{}

public abstract class CategoryController : Controller
{}

public class Foo 
{
    public class CategoryController : Controller 
    { }
}

class BarController : Controller
{}

internal class BarController : Controller
{}

public class BuzzController
{}

Controllers in ASP.NET MVC must be public, non-abstract, non-nested, and implement IController, or derive from a class that does.

So the list of possible mistakes you might make are:

  • Forget to add the “Controller” suffix (happens more than you think)
  • Make the class abstract (probably not likely)
  • Nest the class (could happen by accident like you thought you were pasting in a namespace, but very very unlikely)
  • Forget to make the class public (not likely if you use the Add Controller dialog. But if you use the Add Class dialog, could happen)
  • Forget to derive from Controller or ControllerBase or implement IController. (Again, probably not very likely)

How to detect these cases

As I mentioned before, it might not be that important to do this, but one thing you could consider is writing a unit test that detects these various conditions. Well how would you do that?

You’re in luck. Whether this is super useful or not, I still found this to be an interesting problem to solve. I wrote a ControllerValidator class with some methods for finding all controllers that match one of these conditions.

It builds on the extension method to retrieve all types I blogged about the other day. First, I wrote extension methods for the various controller conditions:

static bool IsPublicClass(this Type type)
{
    return (type != null && type.IsPublic && type.IsClass && !type.IsAbstract);
}

static bool IsControllerType(this Type t)
{
    return typeof (IController).IsAssignableFrom(t);
}

static bool MeetsConvention(this Type t)
{
    return t.Name.EndsWith("Controller", StringComparison.OrdinalIgnoreCase);
}

With these methods, it became a simple case of writing methods that checked for two out of these three conditions.

For example, to get all the controllers that don’t have the “Controller” suffix:

public static IEnumerable<Type> GetUnconventionalControllers
  (this Assembly assembly)
{
  return from t in assembly.GetLoadableTypes()
      where t.IsPublicClass() && t.IsControllerType() && !t.MeetsConvention()
      select t;
}

With these methods, it’s a simple matter to write automated tests that look for these mistakes.
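
For example, a test along these lines fails the build whenever someone forgets the suffix. This sketch uses xUnit and a hypothetical HomeController as an anchor into the web assembly; any test framework and any type from your web project would do.

using System.Linq;
using Xunit;

public class ControllerConventionTests
{
    [Fact]
    public void AllControllersFollowTheNamingConvention()
    {
        var assembly = typeof(HomeController).Assembly;

        var offenders = assembly.GetUnconventionalControllers().ToList();

        Assert.Empty(offenders);
    }
}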

Source code and Demo

I added these methods as part of the ControllerInspector library which is available on NuGet. You can also grab the source code from my CodeHaacks repository on GitHub (click the Clone in Windows button!).

If you get the source code, check out the following projects:

  • ControllerInspectorTests.csproj – Unit tests of these new methods show you how you might write your own unit tests.
  • MvcHaack.ControllerInspector.csproj – Contains the ControllerValidator class.
  • MvcHaack.ControllerInspector.DemoWeb.csproj – Has a website that demonstrates this class too.

The demo website’s homepage uses these methods to show a list of bad controllers.

bad-controllers

The code I wrote is based on looking at how ASP.NET MVC locates and determines controllers. It turns out, because of the performance optimizations, it takes a bit of digging to find the right code.

If you’re interested in looking at the source, check out TypeCacheUtil.cs on CodePlex. It’s really nice that ASP.NET MVC is not only open source, but also accepts contributions. I highly recommend digging through the source as there’s a lot of interesting useful code in there, especially around reflection.

If you don’t find this useful, I hope you at least found it illuminating.

asp.net mvc, code

Sometimes, you need to scan all the types in an assembly for a certain reason. For example, ASP.NET MVC does this to look for potential controllers.

One naïve implementation is to simply call Assembly.GetTypes() and hope for the best. But there’s a problem with this. As Suzanne Cook points out,

If a type can’t be loaded for some reason during a call to Module.GetTypes(), ReflectionTypeLoadException will be thrown. Assembly.GetTypes() also throws this because it calls Module.GetTypes().

In other words, if any type can’t be loaded, the entire method call blows up and you get zilch.

There are multiple reasons why a type can’t be loaded. Here’s one example:

public class Foo : Bar // Bar defined in another unavailable assembly
{
}

The class Foo derives from a class Bar, but Bar is defined in another assembly. Here’s a non-exhaustive list of reasons why loading Foo might fail:

  • The assembly containing Bar does not exist on disk.
  • The current user does not have permission to load the assembly containing Bar.
  • The assembly containing Bar is corrupted and not a valid assembly.

Once again, for more details check out Suzanne’s blog post on Debugging Assembly Loading Failures.

Solution

As you might expect, being able to get a list of types, even if you don’t plan on instantiating instances of them, is a common and important task. Fortunately, the ReflectionTypeLoadException thrown when a type can’t be loaded contains all the information you need. Here’s an example of ASP.NET MVC taking advantage of this within the internal TypeCacheUtil class (there are a lot of other great code nuggets if you look around the source code).

Type[] typesInAsm;
try
{
    typesInAsm = assembly.GetTypes();
}
catch (ReflectionTypeLoadException ex)
{
    typesInAsm = ex.Types;
}

This would be more useful as a generic extension method. Well the estimable Jon Skeet has you covered in this StackOverflow answer (slightly edited to add in parameter validation):

public static IEnumerable<Type> GetLoadableTypes(this Assembly assembly)
{
    if (assembly == null) throw new ArgumentNullException(nameof(assembly));
    try
    {
        return assembly.GetTypes();
    }
    catch (ReflectionTypeLoadException e)
    {
        return e.Types.Where(t => t != null);
    }
}
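
For example, scanning an assembly for controller types (the scenario mentioned at the top of this post) becomes a short query. This is just a sketch; SomeTypeInYourAssembly is a stand-in for any type in the assembly you want to scan, and IController comes from System.Web.Mvc.

var controllerTypes = typeof(SomeTypeInYourAssembly).Assembly
    .GetLoadableTypes()
    .Where(t => typeof(IController).IsAssignableFrom(t)
        && t.IsClass && !t.IsAbstract);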

I’ve found this code to be extremely useful many times.

code, personal, tech

As a kid, I was an impatient little brat. On any occasion that required waiting, I became Squirmy Wormy until I pushed my dad to make the demand parents so often make of fidgety kids, “Sit still!”

Recent evidence suggests a rejoinder to kids today in response to this command, “What!? Are you trying to kill me?!”

There is compelling evidence that modern workers’ propensity to sit for prolonged periods every day makes them fat and shortens their lives. Hmmm, you wouldn’t happen to know any professions where sitting limply at a desk for long periods of time is common, would you?

Yeah, me too.

This spurred me to learn more, which led me to The Daily Infographic’s great summary of this research. Seriously, click on the image below. I could have stopped there and called it a post. But as always, I don’t know when to stop.

sitting-is-killing-you

Much has been written about the detrimental health effects of inactivity. According to Marc Hamilton, a biomedical researcher, sitting down shuts off your fat burning.

Physiologists analyzing obesity, heart disease, and diabetes found that the act of sitting shuts down the circulation of a fat-absorbing enzyme called lipase.

The same Hamilton goes into more details in this interesting NY Times article on the potential lethality of prolonged sitting,

This is your body on chairs: Electrical activity in the muscles drops — “the muscles go as silent as those of a dead horse,” Hamilton says — leading to a cascade of harmful metabolic effects. Your calorie-burning rate immediately plunges to about one per minute, a third of what it would be if you got up and walked.

In other words, sitting down is the off button.

This LifeHacker article points out that the long-term health effects of sitting multiple hours a day go way beyond weight gain.

After 10-20 Years of Sitting More Than Six Hours a Day

Sitting for over six hours a day for a decade or two can cut away about seven quality adjusted life years (the kind you want). It increases your risk of dying of heart disease by 64 percent and your overall risk of prostate or breast cancer increases 30 percent.

I want all kinds of life years, but the “quality adjusted” variety sounds extra special.

You might think that you’ll be just fine because you exercise the recommended 30 minutes a day, but a study from the British Journal of Sports Medicine notes that’s not the case.

Even if people meet the current recommendation of 30 minutes of physical activity on most days each week, there may be significant adverse metabolic and health effects from prolonged sitting, the activity that dominates most people’s remaining “non-exercise” waking hours.

That’s particularly disheartening. All that other exercise you do might not counteract all the prolonged sitting.

Get up, stand up! Stand up for your code!

With apologies to Bob Marley

So what’s a developer to do? Note that these studies put an emphasis on prolonged. The simple solution is to stop sitting for prolonged periods at a time. Get up at least once an hour and move!

But developers are interesting creatures. We easily get in the zone on a problem and focus so deeply that three hours pass in a blink. Ironically this wasn’t a problem I faced as a Program Manager since I was moving from meeting to meeting nearly every hour.

But in my new job, writing code at home, I knew I needed more than an egg timer to tell me to move every hour. I want to move constantly if I can, the way you do when you stand. So I looked into adjustable desks.

According to Alan Hedge, director of Cornell’s Human Factors and Ergonomics laboratory, workers fare better when using adjustable tables (emphasis mine).

We found that the computer workers who had access to the adjustable work surfaces also reported significantly less musculoskeletal upper-body discomfort, lower afternoon discomfort scores and significantly more productivity,” said Alan Hedge, professor of design and environmental analysis in the College of Human Ecology at Cornell and director of Cornell’s Human Factors and Ergonomics Laboratory.

So I went on a quest to find the perfect adjustable desk. How did I choose which desk to purchase?

dodecahedron

Critical hit! Photo by disoculated from Flickr.

Ok, not quite. I might have put a little more research into it than that.

I asked around the interwebs and received a lot of feedback on various options. I found two desk companies that stood out: Ergotron and GeekDesk.

Initially, I really liked the Ergotron approach. Rather than a motorized system for moving the desk up and down, it has a clever quick release lever system that makes it easy to adjust the desk’s height quickly without requiring any tools or electricity.

For this reason, I initially settled on the Workfit-D Sit Stand Desk. Unfortunately, Ergotron is a victim of its own success in this particular case. They were backordered until our sun grows into a red giant and engulfs the planet and I couldn’t wait that long.

So I ended up ordering the GeekDesk Max. This desk uses a motor to adjust to specific heights, but has four presets. This is important because without the presets, you’re sitting there holding the button until it reaches the height you want. While the motor is slower than the Ergotron approach, with the presets, I can just hit the button and go get a coffee. To be fair, it’s not all that slow. Did I mention I’m impatient?

I’m very happy with this desk. Here’s a photo of my workspace that I sent to CoderWall with the desk in a standing configuration.

my-workspace

If you are looking for a less expensive option, I recently learned about this Adjustable Keyboard Podium that seems like a good option. Jarrod, a StackOverflow developer, uses it in his office.

As far as I can find, there’s only one study that points to a potential negative health impact from standing at work. http://www.ncbi.nlm.nih.gov/pubmed/10901115

Significant relationships were found between the amount of standing at work and atherosclerotic progression.

However, as you might expect, this one study is not conclusive and doesn’t focus solely on the work habits of office workers who stand. From what I can tell so far, the health benefits far outweigh the detriments assuming you don’t overdo it.

If you do stand at work, I highly recommend getting some sort of gel or foam padding to stand on. Especially if you’re not wearing shoes. The hard floor might seem fine at first because you’re a tough guy or girl, but over the course of a day, it’ll feel like someone’s taken a bat to your soles.

Also, vary it up throughout the day. Don’t stand all day. Take breaks where you work sitting down and alternate.

Fight malaise!

Not every developer is the same, clearly. Some are fit, but many, well, let’s just say that the health benefits mentioned in this post might not factor into their decision making.

James Levine, a researcher at the Mayo Clinic, had a more philosophical point to make about sitting all day that goes beyond just the physical health benefits. He also sees a mental health benefit.

For all of the hard science against sitting, he admits that his campaign against what he calls “the chair-based lifestyle” is not limited to simply a quest for better physical health. His is a war against inertia itself, which he believes sickens more than just our body. “Go into cubeland in a tightly controlled corporate environment and you immediately sense that there is a malaise about being tied behind a computer screen seated all day,” he said. “The soul of the nation is sapped, and now it’s time for the soul of the nation to rise.”

http://www.nytimes.com/2011/04/17/magazine/mag-17sitting-t.html

In other words, stop sitting and write better code! Go forth and conquer.

code

Take a look at the following code.

const string input = "interesting";
bool comparison = input.ToUpper() == "INTERESTING";
Console.WriteLine("These things are equal: " + comparison);
Console.ReadLine();

Let’s imagine that input is actually user input or some value we get from an API. That’s going to print out These things are equal: True right? Right?!

Well not if you live in Turkey. Or more accurately, not if the current culture of your operating system is tr-TR (which is likely if you live in Turkey).

To prove this to ourselves, let’s force this application to run using the Turkish locale. Here’s the full source code for a console application that does this.

using System;
using System.Globalization;
using System.Threading;
internal class Program
{
    private static void Main(string[] args)
    {      
        Thread.CurrentThread.CurrentCulture = new CultureInfo("tr-TR");
        const string input = "interesting";
        
        bool comparison = input.ToUpper() == "INTERESTING";

        Console.WriteLine("These things are equal: " + comparison);
        Console.ReadLine();
    }
}

Now we’re seeing this print out These things are equal: False.

To understand why this is the case, I recommend reading some of the much more detailed treatments of this topic.

The tl;dr summary is that the uppercase for i in English is I (note the lack of a dot), but in Turkish it’s dotted, İ. So while we have two i’s (upper and lower), they have four.

My app is English only. AMURRICA!

Even if you have no plans to translate your application into other languages, your application can be affected by this. After all, the sample I posted is English only.

Perhaps there aren’t going to be that many Turkish folks using your app, but why subject the ones that do to easily preventable bugs? If you don’t pay attention to this, it’s very easy to end up with a costly security bug as a result.

The solution is simple. In most cases, when you compare strings, you want to compare them using StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase. It just turns out there are so many ways to compare strings. It’s not just String.Equals.
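
For example, the comparison from the top of this post becomes culture-safe like so:

const string input = "interesting";

// Ordinal comparisons compare the character values directly and ignore culture rules,
// so this is true even when the current culture is tr-TR.
bool comparison = String.Equals(input, "INTERESTING", StringComparison.OrdinalIgnoreCase);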

Code Analysis to the rescue

I’ve always been a fan of FxCop. At times it can seem to be a nagging nanny constantly warning you about crap you don’t care about. But hidden among all those warnings are some important rules that can prevent some of these stupid bugs.

If you have the good fortune to start a project from scratch in Visual Studio 2010 or later, I highly recommend enabling Code Analysis (FxCop has been integrated into Visual Studio and is now called Code Analysis). My recommendation is to pick a set of rules you care about and make sure that the build breaks if any of the rules are broken. Don’t turn them on as warnings because warnings are pointless noise. If it’s not important enough to break the build, it’s not important enough to add it.

Of course, many of us are dealing with existing code bases that haven’t enforced these rules from the start. Adding in code analysis after the fact is a daunting task. Here’s an approach I took recently that helped me retain my sanity. At least what’s left of it.

First, I manually created a file with the following contents:

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="PickAName" Description="Important Rules" ToolsVersion="10.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
      RuleNamespace="Microsoft.Rules.Managed">

    <Rule Id="CA1309" Action="Error" />    
  
  </Rules>
</RuleSet>

You could create one per project, but I decided to create one for my solution. It’s just a pain to maintain multiple rule sets. I named this file SolutionName.ruleset and put it in the root of my solution (the name doesn’t matter; just make the extension .ruleset).

I then configured each project that I cared about in my solution (I ignored the unit test project) to enable code analysis using this ruleset file. Just go to the project properties and select the Code Analysis tab.

CodeAnalysisRuleSet

I changed the selected Configuration to “All Configurations”. I also checked the “Enable Code Analysis…” checkbox. I then clicked “Open” and selected my ruleset file.

At this point, Code Analysis will only run the one rule, CA1309, every time I build. This way, adding more rules becomes manageable. Every time I fixed a warning, I’d add its rule to this file, one rule at a time. I went through the following lists looking for important rules.

I didn’t add every rule from each of these lists, only the ones I thought were important.

At some point, I was including such a large number of rules that it made sense to invert the list: rather than listing all the rules I wanted to include, I only listed the ones I wanted to exclude.

<?xml version="1.0" encoding="utf-8"?>
<RuleSet Name="PickAName" Description="Important Rules" ToolsVersion="10.0">
  <IncludeAll Action="Error" />
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
      RuleNamespace="Microsoft.Rules.Managed">

    <Rule Id="CA1704" Action="None" />    
  
  </Rules>
</RuleSet>

Notice the IncludeAll element now makes every code analysis warning into an error, but then I turn CA1704 off in the list.

Note that you don’t have to edit this file by hand. If you open the ruleset in Visual Studio it’ll provide a GUI editor. I prefer to simply edit the file.

RuleSetEditor

One other thing I did: for really important rules where there were too many issues to fix in a timely manner, I would simply use Visual Studio to suppress all of them and commit that. At least that ensured no new violations of the rule would be committed, and it allowed me to fix the existing ones at my leisure.
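
For reference, the suppressions Visual Studio generates look something like this and typically land in a GlobalSuppressions.cs file. The target below is a hypothetical placeholder.

[assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Microsoft.Globalization",
    "CA1309:UseOrdinalStringComparison",
    Scope = "member",
    Target = "MyApp.SomeClass.#SomeMethod()",
    Justification = "Existing violation; to be fixed at leisure.")]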

I’ve found this approach makes using code analysis way more useful and less painful than simply turning on every rule and hoping for the best. Hope you find this helpful as well. May you never ship a bug with the Turkish I problem again!

git, github, nuget, code

A couple of weeks ago I had the great pleasure of speaking at the Norwegian Developers Conference (NDC). This is my second time speaking at NDC. The first time was back in 2009, and it was a blast!

I gave two talks this year. My slides and a video of each presentation are available as well.

Git and GitHub for Developers on Windows

GitHub.com is the place for open source developers to collaborate on their projects. But there’s a perception that GitHub and Git are the domain of Mac and *nix users. Not so! In this talk, Phil Haack, a GitHub employee, will show how GitHub makes open source collaboration fun and tools and techniques for using Git with GitHub on Windows.

NuGet: Zero to DONE in no time

Developers are known for “scratching their own itch,” producing thousands of libraries to handle every imaginable task. NuGet is an open source package manager from the Outercurve Foundation for .NET (and Windows) developers that brings these libraries together in one gallery and accelerates getting started with a project. It makes it easy to incorporate these libraries in a solution. In this talk, Phil Haack, project coordinator for NuGet, will describe the problem that NuGet solves, how to make effective use of it, and some tips and tricks around the newer features of NuGet.

My “Slides”

Speaking of slides, a lot of people asked me what PowerPoint themes I used for my slides. All of my “slide decks” are actually HTML pages with CSS and JavaScript. I’ve become a bit enamored of the Impress.js library.

If you go this route, it’s a lot more work to create a presentation, but if done well, I think it creates a nice effect of an infinite canvas for your thoughts. But it’s very easy to abuse it, which has caused a bit of a backlash against this approach from some people I know. I’m still experimenting with it because I think with a moderate approach, it provides a nice backdrop to a presentation.

Note that the slides might not work well on IE. Take that up with the impress.js author, not me.

If you want to see my decks for other presentations, they’re all posted on http://talks.haacked.com/.

Oslo, Norway

As always, Norway is beautiful in June. The speakers were treated to a boat ride on the Fjords.

ndc-boat-ride

At the conference, I was flattered by some folks who created some swag based on my blog!

ndc-swag

Collect them all!

And of course, the best part of any conference is the people. Here a large group of us spontaneously gathered at a Jazz bar. The mixture of accents from around the world was music to my ears.

ndc-jazz-bar

Once again, NDC does not disappoint.

The only low point of the conference was the incredibly awkward Windows Azure event punctuated with offensive lyrics. Though I don’t think the conference organizers had anything to do with that, so I don’t hold it against NDC as I do against the local Norwegian Microsoft offices.

github, git

In my last blog post, I mentioned that GitHub for Windows (GHfW) works with non-GitHub repositories, but I didn’t go into details on how to do that. GHfW is optimized for GitHub.com of course, but using it with non-GitHub repositories is quite easy.

All you need to do is drag and drop the HTTPS clone URL into the dashboard of the application.

For example, suppose you want to work on a project hosted on CodePlex.com. In my case, I’ll choose NuGet. The first thing you need to find is the Clone URL. In CodePlex, click on the Source Code tab and then click on the sidebar Git link to get the remote URL. If there is no Git link, then you are out of luck.

nuget-codeplex-git-url

Next, select the text of the clone URL, then drag the selection into the GitHub for Windows dashboard. Pretty easy!

ghfw-drag-drop-repo

You’ll see the repository listed in the list of local repositories. Double click the repository (or click on the blue arrow) to navigate to the repository.

ghfw-nuget-local-repo

The first time you navigate to the repository, GHfW prompts you for your credentials to the Git host, in this case, CodePlex.com. This probably goes without saying, but do not enter your GitHub.com credentials here.

ghfw-codeplex-credentials

GHfW securely stores the credentials for this repository so that you only need to enter them once. GHfW acts as a credential provider for Git, so the credentials you enter here will also work with the command line as long as you launch it from the Git Shell shortcut that GHfW installs. That means you won’t have to enter the credentials every time you push or pull commits from the server.

With that, you’re all set. Work on your project, make local commits, and when you’re ready to push your changes to the server, click on the sync button.

ghfw-nuget-sync-codeplex

While we think you’ll have the best experience on GitHub.com, we also think GitHub for Windows is a great client for any Git host.

Tags: git, github, ghfw, gh4w

github, git, code comments suggest edit

For the past several months I’ve been working on a project with my amazing cohorts at GitHub: Paul, Tim, Adam, and Cameron. I’ve had the joy of learning new technologies and digging deep into the inner workings of Git while lovingly crafting code.

But today is a good day. We’ve called the shipit squirrel into action once again! We all know that the stork delivers babies and the squirrel delivers software. In our case, we are shipping GitHub for Windows! Check out the official announcement on the GitHub Blog. GitHub for Windows is the easiest and best way to get Git on your Windows box.

gh4w-app

If you’re not familiar with Git, it’s a distributed version control system created by Linus Torvalds and his merry Linux hacking crew. If you are familiar with Git, you’ll know that Git has historically been a strange and uninviting land for developers on the Windows platform. I call this land Torvaldsia, replete with strange incantations required to make things work.

Better Git on Windows

In recent history, this has started to change due to the heroic efforts of the MSysGit maintainers who’ve worked hard to provide a distribution of Git that works well on Windows.

GitHub for Windows (or GH4W for short) builds on those efforts to provide a client to Git and GitHub that’s friendly, approachable, and inviting. If you’re a Git noob, this is a good place to start. If you’re a Git expert on Windows, at the very least, GitHub for Windows can still be a useful part of your workflow. Just visit http://windows.github.com/ and click the big green download button.

In this post, I’ll give a brief rundown of what gets installed and how to customize the shell for you advanced users of Git.

As the GitHub blog post shows, you can easily access and clone repositories on GitHub either by clicking the Clone in Windows link from a repository on GitHub.com itself, or by cloning a repository associated with your account directly from the application.

The application allows you to browse, make, revert, and rollback commits. You can also find, create, publish, merge, and delete branches. I’ll go into more details about this sort of thing in future blog posts. In this post, I want to talk about what gets installed and then cover customizing the Git shell we include for you advanced Git users.

Installation

If you’ve ever read the old guide to installing msysgit for Windows on the GitHub help page, you’d know there are a lot of configuration steps involved. We use ClickOnce to install the application and to provide Google Chrome-style silent, automated updates that install in the background to keep it up to date.

GH4W is a sandboxed installation of Git and the GitHub application that takes care of all that configuration. Please note, it will not mess with your existing Git environment if you have one. There will be two shortcuts installed on your machine, one for the GH4W application and another labeled “Git Shell”.

The Git Shell shortcut launches the shell of your choice as configured within the GH4W application’s options menu. You can also launch the shell from within the application for any given repository.

gh4w-default-shells

By default, this is PowerShell but you can change it to Bash, Cmd, or even a custom option, which I’ll cover in a second.

Posh-Git and PowerShell

When you launch the shell, you’ll notice that the PowerShell option includes Posh-Git by Keith Dahlby. I’ve written about Posh-Git before and we love it so much we included it in the box. This is an even easier way to get Posh-Git on your machine and stay up to date with the latest version.

You might notice that our PowerShell icon doesn’t execute your existing PowerShell profile. We were worried about conflicts with existing Posh-Git installs or other customizations you might have. Instead, we execute a custom profile script if it exists: GitHub.PowerShell_profile.ps1.

Just create one in the same directory as your $profile script. In my case, it’s in the C:\Users\Haacked\Documents\WindowsPowerShell directory.

Custom Shell

I’m a huge fan of pimping out my command line shell with Console2. As the previous screenshot shows, you can specify a custom shell like Console2. However, when you launch a custom shell, it won’t load our profile script, and it also won’t load the version of Posh-Git that we include. To work around that, we added an environment variable you can check within your Microsoft.PowerShell_profile.ps1 script.

# If Posh-Git environment is defined, load it.
if (test-path env:posh_git) {
    . $env:posh_git
}

The benefit here, as I mentioned earlier, is that you won’t have to worry about keeping Posh-Git up to date, since we’ll do it for you as part of GH4W updates.

What’s Next?

I’ll try and cover a few other topics later. For example, GH4W works with local Git repositories as well as those from other hosts. I’ll also try and cover how I fit GitHub for Windows into my Git workflow developing with Visual Studio. If you have other ideas for topics you’d like me to cover, let me know.

In the meanwhile, try it out!

If you have feedback, mention @github on Twitter (hashtag #gh4w). We make sure to read every mention on Twitter. If you find a bug, submit it to support@github.com. Every email is read by a real person.

But of course, I expect many of you will comment right here and I’ll do my best to keep up with responses because I love you all.

personal, tech, code comments suggest edit

Around eight years ago I wrote a blog post about Repetitive Strain Injury entitled The Real Pain of Software Development [part 1]. I soon learned the lesson that it’s a bad idea to have “Part 1” in any blog post unless you’ve already written part 2. But here I am, eight years later, finally getting around to part 2.

But better late than never!

The original reason that led me to write about this topic was a period of debilitating pain I went through when coding. Too many long hours at the keyboard took their toll on me so that even placing my fingers on the keyboard would cause me pain. I experienced numbness in my fingers, pain in my wrists, back and shoulders, and lots of headaches. In short, I was a mess.

Road to Recovery

Fortunately, my employer at the time was supportive of me filing a Worker’s Compensation claim. I know that has a negative connotation for some, but keep in mind it’s insurance you pay into specifically for cases of injury, so it makes sense to use it if you’re legitimately injured on the job. Per Wikipedia:

Workers’ compensation is a form of insurance providing wage replacement and medical benefits to employees injured in the course of employment in exchange for mandatory relinquishment of the employee’s right to sue his or her employer for the tort of negligence.

The insurance covered several things for me:

  • Doctor visits
  • Physical Therapy (PT)
  • Occupational Therapy (OT)
  • An ergonomic chair (Neutral Posture)

I am extremely grateful for these measures as they’ve taught me the means to care for myself and deal with ongoing pain in a productive manner. I love to code and the thought of switching careers at the time was depressing.

One thing that’s important to understand is that every person is different. Some folks can work 16 hours a day slouching the whole time and have no problems, while others can work 8 hours a day in perfect posture and have tons of pain. It’s important to listen to what your own body is telling you.

Therapy

Have a fully grown friend lie down on the floor and relax. No funny business here, I promise. Then lift their head up gently with your two hands. Notice how heavy that is? A human head (without hair) weighs around 8 to 12 pounds. And I’ve been told some noggins are larger than others.

Ok, you can put it down now. Gently! A head is a pretty heavy thing, even when engaging your arms to lift it. Now consider the fact that you have only your neck muscles to hold it up all day.

So unless you’re built like this guy (photo from The NFL’s Widest Necks article on Slate)

big-neck

Holding your head up all day can be a literal pain in the neck. The trick, of course, is to balance the head well so your neck isn’t constantly engaged.

What I learned in PT was how all these systems are connected. Pain in the neck and shoulders can impinge on nerves that run through the arm, elbow, and into your hands.

So a lot of Physical Therapy involved strengthening these muscles to better handle the stresses of the day combined with various massages and stretches to release tension in these muscles.

A lot of Occupational Therapy was focused on habits and behaviors so that these muscles weren’t overused in the first place. No matter how good your posture is, you need to take regular breaks. The body doesn’t respond well to being overly static. Even sitting in place with perfect posture for hours on end takes its toll. The body needs movement.

During my therapy, I bought a foam roller and would bring it to the office. I didn’t care how silly I looked, regular stress breaks with the roller helped me out a lot.

Dvorak Keyboard Layout

Another change I made at the same time was to switch to a Dvorak Simplified Keyboard Layout:

Because the Dvorak layout concentrates the vast majority of key strokes to the home row, the Dvorak layout uses about 63% of the finger motion required by QWERTY, thus making the Dvorak layout more ergonomic. Because the Dvorak layout requires less finger motion from the typist compared to QWERTY, many users with repetitive strain injuries have reported that switching from QWERTY to Dvorak alleviated or even eliminated their repetitive strain injuries.

I hoped that reducing finger motion would result in less strain on my hands over all.

There’s some controversy around whether Dvorak is really better than QWERTY. A reason.com article on QWERTY vs Dvorak pointed out that the idea that QWERTY was designed to slow down typists is a myth. It goes on to provide evidence that there’s no reason to believe Dvorak is superior to QWERTY.

While the part about QWERTY is true, the evidence in the Reason article that QWERTY is superior to Dvorak is also suspect.

The fact is that there’s too little research to make any claims. And all these studies focused on typing speed and not on impact to repetitive stress injuries.

And I’m not sure my experience can lend credence either way because it was not a controlled experiment. I switched to Dvorak while also adopting new habits meant to improve my condition, so it’s hard to say whether Dvorak itself helped. I do subjectively feel that it’s more comfortable given how much my fingers stay on the home row.

The Right Chair

In his blog post, The Programmer’s Bill of Rights, Jeff Atwood calls out the need for a comfortable chair.

Let’s face it. We make our livings largely by sitting on our butts for 8 hours a day. Why not spend that 8 hours in a comfortable, well-designed chair? Give developers chairs that make sitting for 8 hours not just tolerable, but enjoyable. Sure, you hire developers primarily for their giant brains, but don’t forget your developers’ other assets.

He also has a great follow-up blog post, Investing in a Quality Programming Chair.

I mentioned earlier that Workman’s Comp paid for a chair. I also bought another one with my own money so I’d have a good one both at home and at work. It’s that important!

For many, the Herman Miller Aeron chair is synonymous with “ergonomic chair.” But it’s very important to note that, as good as it is, it’s not necessarily the right chair for everybody. I found that for whatever reason, it just wasn’t very comfortable with my body type. I felt the seat pan was too long and pushed against the back of my knees more than I liked.

I tried a bunch of chairs and settled on the Neutral Posture series with a Tempurpedic seat cushion so my ass is cradled like a newborn. Be sure to get a chair that works for you and not simply select one because you heard about it.

Today

One thing a doctor told me when I was dealing with this was that it’s very likely that I’ll always have pain. The question is how well will I deal with it when it happens?

And it’s true. The pain has subsided for the most part, but it’s never totally gone away. The good news is that I’ve been able to have a productive career in software because I took the pain seriously and worked to address it immediately. On days when I do have pain, I deal with it with stretches, exercise, and taking breaks. I also work to reduce my stress level as I’ve found that my pain level seems to be correlated to the amount of stress I feel. I think I tend to carry my stress in my shoulders.

If you’re dealing with pain due to coding, please know that it’s not because you are deficient in some manner. Or because you’re a wimp. There’s really no value judgment to be made. You’re not alone. It’s pretty common. Don’t ignore it! You wouldn’t (or shouldn’t) ignore a searing pain in your abdomen, so why ignore this?

With the right treatment and regimen, it can get better. Good luck!

code, rx comments suggest edit

For a long time, good folks like Matt Podwysocki have extolled the virtues of Reactive Extensions (aka Rx) to me. It piqued my interest enough for me to write a post about it, but that was the extent of it. It sounded interesting, but it didn’t have any relevance to any projects I had at the time.

Fortunately, now that I work at GitHub I have the pleasure to work with an Rx Guru, Paul Betts, on a project that actively uses Rx. And man, is my mind blown by Rx.

Hits Me Like A Hurricane

What really blew me away about Rx is how it allows you to handle complex async interactions declaratively. No need to chain callbacks together or worry about race conditions. With Rx, you can easily compose multiple async operations together. It’s powerful.

The way I describe it to folks is to think of how the IEnumerable and IEnumerator interfaces are involved when iterating over an enumerable. Now take those and reverse the polarity. That’s Rx. The IObservable and IObserver interfaces take their place, and rather than enumerating over existing sequences, you write queries against sequences of future events.
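
To make that duality concrete, here are the two interfaces as they’re defined in the .NET Framework (they live in the System namespace, so you don’t define them yourself). An enumerator lets you pull the next value; an observer has values pushed at it:

public interface IObservable<out T>
{
    // An observer subscribes and gets values pushed to it
    // until it disposes of the subscription.
    IDisposable Subscribe(IObserver<T> observer);
}

public interface IObserver<in T>
{
    void OnNext(T value);          // a new item arrived
    void OnError(Exception error); // the sequence ended with an error
    void OnCompleted();            // the sequence ended normally
}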

Hear that? That’s the sound of my head asploding again.

the-future

Rx has a tendency to twist and contort the mind in strange ways. But it’s really not all that complicated. It only hurts the head at first because, for many folks, it’s a new way to think about async, sequences, and queries.

Here’s a simple example that helps demonstrate the power of Rx. Say you’re writing a client app (such as a WPF application) and want the application to persist its window’s position and size. That way, the next time the app starts, the position is restored.

How you save the position isn’t so important, but if you’re curious, I found this post, Saving window size and location in WPF and WinForms, helpful.

I modified it in two ways for my needs. First, I replaced the Settings object with an asynchronous cache as the storage for the placement info.

I then changed it to save the placement info when the window is resized, rather than when the application exits. That way, if the app crashes, it won’t forget its last position.

Handling Resize Events

So let’s think about this a bit. When you resize a window, the resize event might be fired a large number of times. We probably don’t want to save the position on every one of those calls. It’s not just a performance problem; it could be a data corruption problem if I’m using an async method to save the placement, because a later call might complete before an earlier one when so many happen so close together.

What we really want to do is save the setting when there’s a pause during a resize operation. For example, a user starts to resize the window, then stops. Five seconds later, if there’s been no other resize operation, only then do we save the setting.

How would you do this with traditional code? You could probably figure it out, but it’d be ugly. Perhaps have the resize event start a timer for five seconds, if it isn’t started already. Each subsequent event would reset the timer. When the timer finishes, it saves the setting and turns itself off. The code is going to be a bit gnarly and spread all over the place.
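
To be fair, here’s a rough sketch of what that imperative approach might look like in a WPF window. It assumes a hypothetical OnSizeChanged handler wired up to the SizeChanged event and the same SavePlacement method used in the Rx version below. It works, but the logic ends up smeared across a field, an event handler, and a timer callback.

// Requires System.Windows.Threading for DispatcherTimer.
// Assumes the window's constructor does: SizeChanged += OnSizeChanged;
DispatcherTimer _saveTimer;

void OnSizeChanged(object sender, SizeChangedEventArgs e)
{
    if (_saveTimer == null)
    {
        _saveTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
        _saveTimer.Tick += (s, args) =>
        {
            _saveTimer.Stop();    // turn the timer off...
            this.SavePlacement(); // ...and save after a five second lull
        };
    }

    // Every resize restarts the five second countdown.
    _saveTimer.Stop();
    _saveTimer.Start();
}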

Here’s what it looks like with Rx.

Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>
    (h => SizeChanged += h, h => SizeChanged -= h)
    .Throttle(TimeSpan.FromSeconds(5), RxApp.DeferredScheduler)
    .Subscribe(_ => this.SavePlacement());

That’s it! Nice and self contained in a single expression.

Let’s break it down a bit.

Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>
    (h => SizeChanged += h, h => SizeChanged -= h)

This first part of the expression converts the SizeChangedEvent into an observable. The specific type of this observable is IObservable<EventPattern<SizeChangedEventArgs>>. This is analogous to an IEnumerable<EventPattern<SizeChangedEventArgs>>, but with its polarity reversed. Having an observable will allow us to subscribe to a stream of size changed events. But first:

.Throttle(TimeSpan.FromSeconds(5), RxApp.DeferredScheduler)

This next part of the expression uses the Throttle method to throttle the sequence of events coming from the observable. It will ignore events in the sequence if a newer one arrives within the specified time span. In other words, this observable won’t return any item until there’s a five second lull in events.

The RxApp.DeferredScheduler comes from the ReactiveUI framework and is equivalent to new DispatcherScheduler(Application.Current.Dispatcher). It indicates which scheduler to run the throttle timers on. In this case, we indicate the dispatcher scheduler which runs the throttle timer on the UI thread.

.Subscribe(_ => this.SavePlacement());

And we end with the Subscribe call. This method takes in an Action to run for each item in the observable sequence when it arrives. This is where we do the work to actually save the window placement.

Putting it all together, every time a resize event is succeeded by a five second lull, we save the placement of the window.

But wait, compose more

Ok, that’s pretty cool. But writing imperative code to do this would be only slightly ugly and not all that hard. So let’s up the stakes a bit, shall we?

We forgot something. You don’t just want to save the placement of the window when it’s resized. You also want to save it when it’s moved.

So we really need to observe two sequences of events, but still throttle both of them as if they were one sequence. In other words, when either a resize or move event occurs, the timer is restarted. And only when five seconds have passed since either event has occurred, do we save the window placement.

The traditional way to code this is going to be very ugly.

This is where Rx shines. Rx provides ways to compose observables in very interesting ways. In this case we’ll deal with two observables, the one we already created that handles SizeChanged events, and a new one that handles LocationChanged events.

Here’s the code for the LocationChanged observable. I’ll save the observable into an intermediate variable for clarity. It’s exactly what you’d expect.

var locationChanges = Observable.FromEventPattern<EventHandler, EventArgs>
  (h => LocationChanged += h, h => LocationChanged -= h);

I’ll do the same for the SizeChanged event.

var sizeChanges = Observable.FromEventPattern
    <SizeChangedEventHandler, SizeChangedEventArgs>
    (h => SizeChanged += h, h => SizeChanged -= h);

We can use the Observable.Merge method to merge these sequences into a single sequence. But going back to the IEnumerable analogy, these are both sequences of different types. If you had two enumerables of different types and wanted to combine them into a single enumerable, what would you do? You’d apply a transformation with the Select method! And that’s what we do here too.

Since I don’t care what the event arguments are, just when they arrive, I’ll transform each sequence into an IObservable<Unit> by calling Select(_ => Unit.Default) on each observable. Unit is an Rx type that indicates there’s no information. It’s like returning void.

var merged = Observable.Merge(
    sizeChanges.Select(_ => Unit.Default), 
    locationChanges.Select(_ => Unit.Default)
);

The Observable.Merge call combines those two transformed sequences into a single sequence.

Now, with this combined sequence, I can simply apply the same throttle and subscription I did before.

merged
    .Throttle(TimeSpan.FromSeconds(5), RxApp.DeferredScheduler)
    .Subscribe(_ => this.SavePlacement());

Think about that for a second. I was able to compose multiple sequences of events into a single observable, and I didn’t have to change the code that throttles the events or subscribes to them.

As you get more familiar with Rx, the code gets easier to read and you tend to use fewer intermediate variables. Here’s the full, more idiomatic expression.

Observable.Merge(
    Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>
        (h => SizeChanged += h, h => SizeChanged -= h)
        .Select(e => Unit.Default),
    Observable.FromEventPattern<EventHandler, EventArgs>
        (h => LocationChanged += h, h => LocationChanged -= h)
        .Select(e => Unit.Default)
).Throttle(TimeSpan.FromSeconds(5), RxApp.DeferredScheduler)
.Subscribe(_ => this.SavePlacement());

That single declarative expression handles so much crazy logic. Very powerful stuff.

Even if you don’t write WPF apps, there’s still probably something useful here for you. This same powerful approach is also available for JavaScript.

See it in action

I put together a really rough sample app that demonstrates this concept. It’s not using the async cache, but it is using Rx to throttle resize and move events and then save the placement of the window after five seconds.

Just grab the WindowPlacementRxDemo project from my CodeHaacks GitHub repository.

More Info

For more info on Reactive Extensions, I recommend the following:

Tags: Rx, Reactive-Extensions, RxUI, Reactive-UI, WPF

code, open source, asp.net, asp.net mvc comments suggest edit

Changing a big organization is a slow endeavor. But when people are passionate and persistent, change does happen.

Three years ago, the ASP.NET MVC source code was released under an open source license. But at the time, the team could not accept any code contributions. In my blog post talking about that release, I said the following (emphasis added):

Personally (and this is totally my own opinion), I’d like to reach the point where we could accept patches. There are many hurdles in the way, but if you went back in time several years and told people that Microsoft would release several open source projects (Ajax Control Toolkit, MEF, DLR, IronPython and IronRuby, etc….) you’d have been laughed back to the present. Perhaps if we could travel to the future a few years, we’ll see a completely different landscape from today.

Well my friends, we have travelled to the future! Albeit slowly, one day at a time.

As everyone and their mother knows by now, yesterday Scott Guthrie announced that the entire ASP.NET MVC stack is being released under an open source license (Apache v2) and will be developed under an open and collaborative model:

  • ASP.NET MVC 4
  • ASP.NET Web API
  • ASP.NET Web Pages with Razor Syntax

Note that ASP.NET MVC and Web API have been open source for a long time now. The change that Scott announced is that ASP.NET Web Pages and Razor, which until now was not open source, will also be released under an open source license.

Additionally, the entire stack of products will be developed in the open in a Git repository on CodePlex, and the team will accept external contributions. This is indeed exciting news!

Hard Work

It’s easy to underestimate the hard work that the ASP.NET MVC team and Web API team did to pull this off. In the middle of an aggressive schedule, they had to completely re-work their build systems, workflow, etc… to move to a new source control system and host. Not to mention integrate two different teams and products together into a single team and product. It’s a real testament to the quality people that work on this stack that this happened so quickly!

I also want to take a moment and credit the lawyers, who are often vilified, for their work in making this happen.

One of my favorite bits of wisdom Scott Guthrie taught me is that the lawyers’ job is to protect the company and reduce risk. If lawyers had their way, we wouldn’t do anything because that’s the safest choice.

But it turns out that the biggest threat to a company’s long term well-being is doing nothing. Or being paralyzed by fear. And fortunately, there are some lawyers at Microsoft who get that. And rather than looking for reasons to say NO, they looked for reasons to say YES! And looked for ways to convince their colleagues.

I spent a lot of time with these lawyers poring over tons of legal documents and such. Learning more about copyright and patent law than I ever wanted to. But united with a goal of making this happen.

These are the type of lawyers you want to work with.

Submitting Contributions

For those of you new to open source, keep in mind that this doesn’t mean open season on contributing to the project. Your chances of having a contribution accepted are only slightly better than before.

Like any good open source project, I expect submissions to be reviewed carefully. To increase the odds of your pull request being accepted, don’t submit unsolicited requests. Read the contributor guidelines (I was happy to see their similarity to the NuGet guidelines) first and start a discussion about the feature. It’s not that an unsolicited pull request won’t ever be accepted, but the more you’re communicating with the team, the more likely it will be.

Although their guidelines don’t state this, I highly recommend you do your work in a feature branch. That way it’s very easy to pull upstream changes into your local master branch without disturbing your feature work.

Many kudos to the ASP.NET team for this great step forward, as well as to the CodePlex team for adding Git support. I think Git has a bright future for .NET and Windows developers.

code, personal, open source comments suggest edit

Disclaimer: these opinions are my own and don’t necessarily represent the opinion of any person or institution who are not me.

The topic of sexism in the software industry has flared up recently. This post by Katie Cunningham (aka The Real Katie), entitled Lighten Up, caught my attention. As a father of a delightful little girl, I hope someday my daughter feels welcomed as a developer should she choose that profession.

In general, I try to avoid discussions of politics, religion, and racism/sexism on my blog, not because I don’t have strong feelings about these things, but because I doubt I will change anyone’s mind.

If you don’t think there’s an institutionalized subtle sexism problem in our industry, I probably won’t change your mind.

So I won’t try.

Instead, I want to attempt an empirical look at some problems that probably do affect you today that just happen to be related to sexism. Maybe you’ll want to do something about it.

But first, some facts.

The Facts

Whether we agree on the existence of institutional sexism in our industry, I think we can all agree that our industry is overwhelmingly male.

It wasn’t always like this. Ada Lovelace is widely credited as the world’s first programmer. So there was at least a brief time in the 1840s when 100% of developers were women. As late as the 1960s, computing was seen as women’s work, emphasis mine:

“You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.

The same site where I found that quote has a link to this great Life Magazine archive photo of IBM computer operators.

ibm-60s

But the percentage of women declined steadily from that point. According to this Girls Go Geek post, in 1987, 42% of software developers were women. But then:

From 1984 to 2006, the number of women majoring in computer science dropped from 37% to 20% — just as the percentages of women were increasing steadily in all other fields of science, technology, engineering, and math, with the possible exception of physics.

The post goes on to state that the number of CS grads at Harvard is on the increase, but overall numbers are still low.

So why is there this decline? That’s not an easy question to answer, but I think we can rule out the idea that women are somehow inherently not suited for software development. History proves that idea wrong.

Ok fine, there are fewer women in software for whatever reasons. Maybe they don’t want to be developers. Hard for me to believe as I think it’s the best goddamn profession ever. But let’s humor that argument just for a moment. Suppose that were true. Why is it a problem for our industry? I’ll name two reasons.

The OSS Contributor Problem

If you’re involved in an open source project, you’ve probably noticed that it’s really hard to find good contributors. So many projects are solitary labors of love. Well, it turns out, according to this post, Sexism: Open Source Software’s Dirty Little Secret:

Asked to guess what percentage of FOSS developers are women, mostly people guess a number between 30-45%. A few, either more observant or anticipating a trick question after hearing the proprietary figure, guess 12-16%. The exact figure, though, is even lower: 1.5%

In other words, women’s participation in FOSS development is over seventeen times lower than it is in proprietary software development.

HWHAT!? That is insane!

From a purely selfish standpoint, that’s a lot of potential developers who could be contributing to your project. Even if you don’t believe there’s rampant institutionalized sexism, why wouldn’t you want to remove barriers and create an environment that makes more contributors feel welcome to your project?

Oh, and just making your logo pink isn’t the way to go about it. Not that I have anything against pink, but simple stereotypical approaches won’t cut it. Really listen to the concerns of folks like Katie and try and address them.

I don’t mean to suggest you will get legions of female contributors overnight. This is a very complex problem and I have no clue how to fix it. I’m probably just as guilty as I can’t name a single female contributor to any of my projects, though I’ve tried my best to cajole some to contribute (you know who you are!). But a good first step is to remove ignorance and indifference to the topic.

The Employment Problem

We all know how hard it is to find good developers. In fact, while the recession saw high overall unemployment, that time was marked by a labor shortage of developers. So it comes as a surprise to me that employers tolerate a work environment that makes a large percentage of the potential workforce feel unwelcome.

According to this New York Times article written in 2010,

The share of women in the Silicon Valley-based work force was 33 percent, dropping down from 37 percent in 1999.

Note that it’s not just a gender issue.

It’s an issue I’ve covered over the years, so I was interested to see that while the collective work force of those 10 companies grew by 16 percent between 1999 and 2005, the proportion of Hispanic workers declined by 11 percent, to about 2,200; they now make up about 7 percent of the total work force. Black workers declined to 2 percent of the work force, down from 3 percent.

Again, my point here isn’t to say “You should be ashamed of yourself for being sexist and racist!” Though if you are, you should be.

No, the point here is to shift your perspective and look at the reality of the current situation we’re in, despite the reasons why it is the way it is. For whatever reasons, there are a lot of people who might be great developers, but feel that our industry doesn’t welcome them. That’s a problem! And an opportunity!

It’s an opportunity to improve our industry! If we make the software industry a place where women and minorities want to work, we’ve increased the available pool of software developers. That not only means more quality developers to hire, it also means more diverse perspectives, which is important to creative thought and benefits the bottom line:

So a sociologist called Cedric Herring has just completed a very interesting study that obtained data from 250 representative companies in the United States that looked at both their diversity levels as well as various measures of business performance there. And he finds that with every successive level of increased diversity, companies actually appear to do better on all those measures of business performance.

That’s a pretty compelling argument.

So, what are brogrammers afraid of?

For the uninitiated, the term “brogrammer” is a recent term that describes a new breed of frat boy software developers that are representative of those who don’t see the need to attract more women and minorities to our industry.

Given the benefits we enjoy when we attract a more diverse workforce into software development, why is the attitude that we shouldn’t do anything to increase the numbers of women and minorities in our industry still prevalent?

It’s not an easy question to answer, but I did have one idea that came to mind I wanted to bounce off of you. Suppose we were successful at attracting women and minorities in numbers proportional to the make-up of the country. That would increase the pool of available developers. Would that also lower overall salaries? Supply and demand, after all.

I can see how that belief might lead to fear and the attitude that we’re fine as we are; we don’t need more of you.

But at the same time, when you consider the talent shortage, I don’t believe this for one second. I don’t have any studies to point to, and I would welcome any links to evidence you can provide, but my intuition tells me it would simply reduce our talent shortage; a shortage would still remain.

What would happen is we’d see the shakeout of bad programmers from the ranks.

Let’s face it, because of the talent shortage, there’s a lot of folks who are programmers who probably shouldn’t be. But for the majority of developers, I don’t think we have anything to fear. We should welcome the influx of new ideas and the overall improvement of our industry that more developers (and thus more better developers) bring. A rising tide lifts all boats as they say.

Now, I’m not sure this is the real reason these attitudes prevail. It sure seems awful calculating. I’m inclined to think it’s simple cluelessness. But it’s possible this is a subconscious factor.

Or perhaps it’s the fear that the influx of people from diverse backgrounds will require that they grow up, leave the trappings of their college behind, and become adults who know how to relate to people different than them.

Conclusion

I know this is a touchy subject. I want to make one thing very clear. My focus in this post was on arguments that don’t require one to believe there’s rampant sexism in the software industry. The arguments were mostly self-interest arguments in favor of changing the status quo.

I don’t claim there isn’t sexism. I believe there is. You can find lots of arguments that make a compelling case that institutionalized sexism exists and that it’s wrong. The point of this post is to provide food for thought for those who don’t believe there’s sexism. If we change the status quo, I believe attitudes will follow. They tend to follow one another with each leading the other at times.

In the end, it’s a complex problem and I certainly don’t claim to have the answers on solving it. But I think a good start is leaving behind the fear, acknowledging the issue, recognizing the opportunity to improve, and embracing the concrete benefits that diversification brings.

What do you think?

git, github, code comments suggest edit

I recently gave my first talk on Git and GitHub, to the Dot Net Startup Group. I was a little nervous about how I would present Git. At its core, Git is based on a simple structure, but that simplicity is easily lost when you start digging into the myriad of confusing command switches.

I wanted a visual aid that showed off the structure of a git repository in real time while I issued commands against the repository. So I hacked one together in a couple afternoons. SeeGit is an open source instructive visual aid for teaching people about git. Point it to a directory and start issuing git commands, and it automatically updates itself with a nice graph of the git repository.

seegit

During my talk, I docked SeeGit to the right and my Console2 prompt to the left so they were side by side. As I issued git commands, the graph came alive and illustrated changes to my repository. It updates itself when new commits occur, when you switch branches, and when you merge commits.

It doesn’t handle rebases well yet due to a bug, but I’m hoping to add that as well as a lot of other useful features that make it clear what’s going on.

Part of the reason I was able to write a useful, albeit buggy, tool so quickly was due to the fantastic packages available on NuGet such as LibGit2Sharp, GraphSharp, and QuickGraph among others. Installing those got me up and running in no time.

I hope to add a nice visual illustration of a rebase soon as well as the ability to toggle the display of unreachable commits. I hope to use this in many future talks as a nice way of teaching git. Who knows, it might become useful in its own right as a tool for developers using Git on real repositories.

But it’s not quite there yet. If you would like to contribute, I would love to have some help. And let me know if you make use of this!

If you want to try it out and don’t want to deal with downloading the source and compiling it, I put together a zip package with the application. I’ve only tested it on Windows 7 so it might break if you run on XP.

asp.net mvc, asp.net, code comments suggest edit

Conway’s Law states,

…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

Up until recently, there was probably no better demonstration of this law than the fact that Microsoft had two ways of shipping angle brackets (and curly braces for that matter) over HTTP – ASP.NET MVC and WCF Web API.

The reorganization of these two teams under Scott Guthrie (aka “The GU”, which I’m contractually bound to tack on) led to an intense effort to consolidate these technologies in a coherent manner. It’s an effort that’s led to what we see in the recently released ASP.NET MVC 4 Beta, which includes ASP.NET Web API.

For this reason, this is an exciting release. I can tell you it was not a small effort to get these two teams, with their different philosophies and ideas, to come together and start to share a single vision. That vision may take more than one version to realize fully, but the ASP.NET MVC 4 Beta is a great start!

For me personally, this is also exciting as this is the last release I had any part in and it’s great to see the effort everyone put in come to light. So many congrats to the team for this release!

Some Small Things

small-things

If you take a look at Jon Galloway’s post on ASP.NET MVC 4, he points to a lot of resources and descriptions of the BIG features in this release. I highly recommend reading that post.

I wanted to take a different approach and highlight some of the small touches that might get missed in the glare of the big features.

Custom IActionInvoker Injection

I’ve written several posts that add interesting cross cutting behavior when calling actions via the IActionInvoker interface.

Ironically, the first two posts are made mostly irrelevant now that ASP.NET MVC 4 includes ASP.NET Web API.

However, the concept is still interesting. Prior to ASP.NET MVC 4, the only way to switch out the action invoker was to write a custom controller factory. In ASP.NET MVC 4, you can now simply inject an IActionInvoker using the dependency resolver.

The same thing applies to the ITempDataProvider interface. There’s almost no need to write a custom IControllerFactory any longer. It’s a minor thing, but it was a friction that’s now been buffed out for those who like to get their hands dirty and extend ASP.NET MVC in deep ways.
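
As a quick illustration, here’s a hedged sketch of what that might look like with the Ninject.Mvc3 package covered in the next section. LoggingActionInvoker is a hypothetical invoker I made up for this example; it just logs each action before delegating to the default behavior, and the binding would live in the same kind of RegisterServices method shown below.

// Uses System.Web.Mvc and System.Diagnostics.
// Hypothetical invoker that logs every action call.
public class LoggingActionInvoker : ControllerActionInvoker
{
    public override bool InvokeAction(ControllerContext controllerContext, string actionName)
    {
        Debug.WriteLine("Invoking action: " + actionName);
        return base.InvokeAction(controllerContext, actionName);
    }
}

// In RegisterServices:
kernel.Bind<IActionInvoker>().To<LoggingActionInvoker>();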

Two DependencyResolvers

I’ve been a big fan of using the Ninject.Mvc3 package to inject dependencies into my ASP.NET MVC controllers.

ninject.mvc3

However, your Ninject bindings do not apply to ApiController instances. For example, suppose you have the following binding in the NinjectMVC3.cs file that the Ninject.MVC3 package adds to your project’s App_Start folder.

private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<ISomeService>().To<SomeService>();
}

Now create an ApiController that accepts an ISomeService in its constructor.

public class MyApiController : ApiController
{
  public MyApiController(ISomeService service)
  {
  }
  // Other code...
}

That’s not going to work out of the box. You need to configure a dependency resolver for Web API via a call to GlobalConfiguration.Configuration.ServiceResolver.SetResolver.

However, you can’t pass in the instance of the ASP.NET MVC dependency resolver, because their interfaces are different types, even though the methods on the interfaces look exactly the same.

This is why I wrote a small adapter class and convenient extension method. Took me all of five minutes.
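
I won’t reproduce my exact code here, but conceptually the adapter is tiny. The sketch below is from memory, so treat the Web API resolver interface’s namespace (System.Web.Http.Services in the Beta bits) as an assumption and check it against whatever build you’re using; DependencyResolverAdapter is just a name I made up.

// Sketch: adapts the ASP.NET MVC dependency resolver to the Web API resolver interface.
public class DependencyResolverAdapter : System.Web.Http.Services.IDependencyResolver
{
    readonly System.Web.Mvc.IDependencyResolver _resolver;

    public DependencyResolverAdapter(System.Web.Mvc.IDependencyResolver resolver)
    {
        _resolver = resolver;
    }

    public object GetService(Type serviceType)
    {
        return _resolver.GetService(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return _resolver.GetServices(serviceType);
    }
}

public static class DependencyResolverExtensions
{
    // The extension method mentioned above: wraps the MVC resolver for Web API.
    public static System.Web.Http.Services.IDependencyResolver ToServiceResolver(
        this System.Web.Mvc.IDependencyResolver resolver)
    {
        return new DependencyResolverAdapter(resolver);
    }
}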

In the case of the Ninject.MVC3 package, I added the following line to the Start method.

public static void Start()
{
  // ...Pre-existing lines of code...

  GlobalConfiguration.Configuration.ServiceResolver
  .SetResolver(DependencyResolver.Current.ToServiceResolver());
}

With that in place, the registrations work for both regular controllers and API controllers.

I’ve been too busy with my new job to dig deeply into ASP.NET MVC 4, but at some point I plan to spend more time with it. I figure we may eventually upgrade NuGet.org to run on MVC 4, which will let me get my hands really dirty with it.

Have you tried it out yet? What hidden gems have you found?

personal comments suggest edit

Recently I’ve been tweeting photos of my kids playing with a new toy my wife bought them that I’m (okay, they are) totally enthralled with. It’s called the Bildopolis Big Bilder Kit.

This is the creation of a family friend of ours who used to be an industrial designer at IDEO. He left a while ago to start on his own thing and came up with this. We bought a set immediately, in part to support his efforts, but also because it looked cool. We were not disappointed. This thing is fun.

mia-cody-and-bildopolis

The concept is really simple. It’s a set of cardboard squares, triangles, and rectangles that have “female” velcro semicircles attached. Those are the white parts in the photos. The kit also comes with a bag of colorful “male” velcro “dots”.

Apply the dots to attach the cardboard pieces together to make all sorts of interesting structures for your kids to play in. If your kids are older, they’ll probably want to build their own structures.

rocket-ship

I want to make it clear that I don’t get any kickbacks or anything and we paid full price for our set. I’m just pimping this out because I had so much fun building stuff with it and my kids really love it. Maybe yours will too.

tunnels

In one evening, we built about four different structures. They’re quick and flexible to build.

long-house

There are a few downsides though. The Haackerdome below is probably going to be a permanent fixture in our living room, which takes up a bit of space. Also, these structures are meant for playing inside of, not on top of. They wouldn’t be sturdy enough to support the weight of kids climbing on top of them.

haackerdome

In any case, if you want to learn more, check out the Bildopolis website. They have a video showing off the kit. There’s also a gallery where folks have sent in photos of what they’ve built. That’s where I got the idea for the dome. The kit sells for $80 not including shipping.