code, personal, open source

Disclaimer: these opinions are my own and don’t necessarily represent the opinion of any person or institution who are not me.

The topic of sexism in the software industry has flared up recently. This post by Katie Cunningham (aka The Real Katie), entitled Lighten Up, caught my attention. As a father of a delightful little girl, I hope someday my daughter feels welcomed as a developer should she choose that profession.

In general, I try to avoid discussions of politics, religion, and racism/sexism on my blog, not because I don’t have strong feelings about these things, but because I doubt I will change anyone’s mind.

If you don’t think there’s an institutionalized subtle sexism problem in our industry, I probably won’t change your mind.

So I won’t try.

Instead, I want to attempt an empirical look at some problems that probably do affect you today that just happen to be related to sexism. Maybe you’ll want to do something about it.

But first, some facts.

The Facts

Whether we agree on the existence of institutional sexism in our industry, I think we can all agree that our industry is overwhelmingly male.

It wasn’t always like this. Ada Lovelace is widely credited as the world’s first programmer. So there was at least a brief time in the 1840s when 100% of developers were women. As late as the 1960s, computing was seen as women’s work, emphasis mine:

“You have to plan ahead and schedule everything so it’s ready when you need it. Programming requires patience and the ability to handle detail. Women are ‘naturals’ at computer programming.

The same site where I found that quote has a link to this great Life Magazine archive photo of IBM computer operators.


But the percentage of women declined steadily from that point. According to this Girls Go Geek post, in 1987, 42% of software developers were women. But then:

From 1984 to 2006, the number of women majoring in computer science dropped from 37% to 20% — just as the percentages of women were increasing steadily in all other fields of science, technology, engineering, and math, with the possible exception of physics.

The post goes on to state that the number of CS grads at Harvard is on the increase, but overall numbers are still low.

So why is there this decline? That’s not an easy question to answer, but I think we can rule out the idea that women are somehow inherently not suited for software development. History proves that idea wrong.

Ok fine, there are fewer women in software, for whatever reasons. Maybe they don’t want to be developers. That’s hard for me to believe, as I think it’s the best goddamn profession ever. But let’s humor that argument for a moment. Suppose it were true. Why is it a problem for our industry? I’ll name two reasons.

The OSS Contributor Problem

If you’re involved in an open source project, you’ve probably noticed that it’s really hard to find good contributors. So many projects are solitary labors of love. Well, it turns out, according to this post, Sexism: Open Source Software’s Dirty Little Secret:

Asked to guess what percentage of FOSS developers are women, most people guess a number between 30-45%. A few, either more observant or anticipating a trick question after hearing the proprietary figure, guess 12-16%. The exact figure, though, is even lower: 1.5%

In other words, women’s participation in FOSS development is over seventeen times lower than it is in proprietary software development.

HWHAT!? That is insane!

From a purely selfish standpoint, that’s a lot of potential developers who could be contributing to your project. Even if you don’t believe there’s rampant institutionalized sexism, why wouldn’t you want to remove barriers and create an environment that makes more contributors feel welcome to your project?

Oh, and just making your logo pink isn’t the way to go about it. Not that I have anything against pink, but simple stereotypical approaches won’t cut it. Really listen to the concerns of folks like Katie and try to address them.

I don’t mean to suggest you’ll get legions of female contributors overnight. This is a very complex problem, and I have no clue how to fix it. I’m probably just as guilty: I can’t name a single female contributor to any of my projects, though I’ve tried my best to cajole some to contribute (you know who you are!). But a good first step is to remove ignorance of, and indifference to, the topic.

The Employment Problem

We all know how hard it is to find good developers. In fact, while the recession saw high overall unemployment, that time was marked by a labor shortage of developers. So it comes as a surprise to me that employers tolerate a work environment that makes a large percentage of the potential workforce feel unwelcome.

According to this New York Times article written in 2010,

The share of women in the Silicon Valley-based work force was 33 percent, dropping down from 37 percent in 1999.

Note that it’s not just a gender issue.

It’s an issue I’ve covered over the years, so I was interested to see that while the collective work force of those 10 companies grew by 16 percent between 1999 and 2005, the proportion of Hispanic workers declined by 11 percent, to about 2,200; they now make up about 7 percent of the total work force. Black workers declined to 2 percent of the work force, down from 3 percent.

Again, my point here isn’t to say “You should be ashamed of yourself for being sexist and racist!” Though if you are, you should be.

No, the point is to shift your perspective and look at the reality of the current situation we’re in, setting aside the reasons why it is the way it is. For whatever reasons, there are a lot of people who might be great developers but feel that our industry doesn’t welcome them. That’s a problem! And an opportunity!

It’s an opportunity to improve our industry! If we make the software industry a place where women and minorities want to work, we’ve increased the available pool of software developers. That not only means more quality developers to hire, it also means more diverse perspectives, which is important to creative thought and benefits the bottom line:

So a sociologist called Cedric Herring has just completed a very interesting study that obtained data from 250 representative companies in the United States that looked at both their diversity levels as well as various measures of business performance there. And he finds that with every successive level of increased diversity, companies actually appear to do better on all those measures of business performance.

That’s a pretty compelling argument.

So, what are brogrammers afraid of?

For the uninitiated, “brogrammer” is a recent term for a new breed of frat-boy software developers, representative of those who don’t see the need to attract more women and minorities to our industry.

Given the benefits we enjoy when we attract a more diverse workforce into software development, why is the attitude that we shouldn’t do anything to increase the numbers of women and minorities in our industry still prevalent?

It’s not an easy question to answer, but I did have one idea that came to mind I wanted to bounce off of you. Suppose we were successful at attracting women and minorities in numbers proportional to the make-up of the country. That would increase the pool of available developers. Would that also lower overall salaries? Supply and demand, after all.

I can see how that belief might lead to fear and to the attitude that we’re fine as is, we don’t need more of you.

But when you consider the talent shortage, I don’t believe this for one second. I don’t have any studies to point to, and I would welcome any links to evidence you can provide. But my intuition tells me it would simply shrink our talent shortage; a shortage would still remain.

What would happen is we’d see the shakeout of bad programmers from the ranks.

Let’s face it: because of the talent shortage, there are a lot of folks who are programmers who probably shouldn’t be. But the majority of developers have nothing to fear. We should welcome the influx of new ideas and the overall improvement that more developers (and thus more great developers) bring to our industry. A rising tide lifts all boats, as they say.

Now, I’m not sure this is the real reason these attitudes prevail. It sure seems awfully calculating. I’m inclined to think it’s simple cluelessness. But it’s possible this is a subconscious factor.

Or perhaps it’s the fear that the influx of people from diverse backgrounds will require that they grow up, leave the trappings of college life behind, and become adults who know how to relate to people different from them.


I know this is a touchy subject. I want to make one thing very clear. My focus in this post was on arguments that don’t require one to believe there’s rampant sexism in the software industry. The arguments were mostly self-interest arguments in favor of changing the status quo.

I don’t claim there isn’t sexism. I believe there is. You can find lots of arguments that make a compelling case that institutionalized sexism exists and that it’s wrong. The point of this post is to provide food for thought for those who don’t believe there’s sexism. If we change the status quo, I believe attitudes will follow; the two tend to reinforce one another, each leading the other at times.

In the end, it’s a complex problem and I certainly don’t claim to have the answers on solving it. But I think a good start is leaving behind the fear, acknowledging the issue, recognizing the opportunity to improve, and embracing the concrete benefits that diversification brings.

What do you think?

git, github, code

I recently gave my first talk on Git and GitHub, to the Dot Net Startup Group. I was a little nervous about how I would present Git. At its core, Git is based on a simple structure, but that simplicity is easily lost when you start digging into the myriad of confusing command switches.

I wanted a visual aid that showed off the structure of a git repository in real time while I issued commands against the repository. So I hacked one together in a couple afternoons. SeeGit is an open source instructive visual aid for teaching people about git. Point it to a directory and start issuing git commands, and it automatically updates itself with a nice graph of the git repository.


During my talk, I docked SeeGit to the right and my Console2 prompt to the left so they were side by side. As I issued git commands, the graph came alive and illustrated changes to my repository. It updates itself when new commits occur, when you switch branches, and when you merge commits.
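For example, a quick session like the following (assuming git is installed and on your PATH; the repository and file names here are made up) exercises each of the cases the graph illustrates:

```shell
# Create a throwaway repository for SeeGit to watch.
mkdir demo-repo && cd demo-repo
git init
git config "Demo User"
git config demo@example.com

# First commit: a new node appears in the graph.
echo "hello" > readme.txt
git add readme.txt
git commit -m "Initial commit"

# Branch and commit: a second node and a branch label appear.
git checkout -b experiment
echo "world" >> readme.txt
git commit -am "Tweak readme"

# Switch back and merge: the graph shows the branches converging.
git checkout -
git merge experiment
```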

It doesn’t handle rebases well yet due to a bug, but I’m hoping to add that as well as a lot of other useful features that make it clear what’s going on.

Part of the reason I was able to write a useful, albeit buggy, tool so quickly was due to the fantastic packages available on NuGet such as LibGit2Sharp, GraphSharp, and QuickGraph among others. Installing those got me up and running in no time.

I hope to add a nice visual illustration of a rebase soon as well as the ability to toggle the display of unreachable commits. I hope to use this in many future talks as a nice way of teaching git. Who knows, it might become useful in its own right as a tool for developers using Git on real repositories.

But it’s not quite there yet. If you would like to contribute, I would love to have some help. And let me know if you make use of this!

If you want to try it out and don’t want to deal with downloading the source and compiling it, I put together a zip package with the application. I’ve only tested it on Windows 7, so it might break if you run it on XP.

mvc, code

Conway’s Law states,

…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

Up until recently, there was probably no better demonstration of this law than the fact that Microsoft had two ways of shipping angle brackets (and curly braces for that matter) over HTTP – ASP.NET MVC and WCF Web API.

The reorganization of these two teams under Scott Guthrie (aka “The GU” which I’m contractually bound to tack on) led to an intense effort to consolidate these technologies in a coherent manner. It’s an effort that’s led to what we see in the recently released ASP.NET MVC 4 Beta, which includes ASP.NET Web API.

For this reason, ASP.NET MVC 4 is an exciting release. I can tell you it was not a small effort to get these two teams, with different philosophies and ideas, to come together and start to share a single vision. That vision may take more than one version to realize fully, but the ASP.NET MVC 4 Beta is a great start!

For me personally, this is also exciting as this is the last release I had any part in and it’s great to see the effort everyone put in come to light. So many congrats to the team for this release!

Some Small Things


If you take a look at Jon Galloway’s post on ASP.NET MVC 4, he points to a lot of resources and descriptions of the BIG features in this release. I highly recommend reading that post.

I wanted to take a different approach and highlight some of the small touches that might get missed in the glare of the big features.

Custom IActionInvoker Injection

I’ve written several posts that add interesting cross cutting behavior when calling actions via the IActionInvoker interface.

Ironically, the first two posts are made mostly irrelevant now that ASP.NET MVC 4 includes ASP.NET Web API.

However, the concept is still interesting. Prior to ASP.NET MVC 4, the only way to switch out the action invoker was to write a custom controller factory. In ASP.NET MVC 4, you can now simply inject an IActionInvoker using the dependency resolver.

The same thing applies to the ITempDataProvider interface. There’s almost no need to write a custom IControllerFactory any longer. It’s a minor thing, but it was a friction that’s now been buffed out for those who like to get their hands dirty and extend ASP.NET MVC in deep ways.
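As a rough sketch (the invoker class below is my own invention for illustration, not framework code), a cross-cutting invoker and its registration through the dependency resolver might look something like this:

```csharp
using System.Diagnostics;
using System.Linq;
using System.Web.Mvc;

// A custom invoker that logs every action invocation before delegating
// to the default behavior.
public class LoggingActionInvoker : ControllerActionInvoker
{
    public override bool InvokeAction(ControllerContext controllerContext, string actionName)
    {
        Debug.WriteLine("Invoking action: " + actionName);
        return base.InvokeAction(controllerContext, actionName);
    }
}

// In Application_Start, registering it through the dependency resolver
// is enough in MVC 4 -- no custom controller factory required:
DependencyResolver.SetResolver(
    type => type == typeof(IActionInvoker) ? new LoggingActionInvoker() : null,
    type => Enumerable.Empty<object>());
```

Returning null for every other type tells MVC to fall back to its defaults, so this resolver only affects action invocation.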

Two DependencyResolvers

I’ve been a big fan of using the Ninject.Mvc3 package to inject dependencies into my ASP.NET MVC controllers.


However, your Ninject bindings do not apply to ApiController instances. For example, suppose you have the following binding in the NinjectMVC3.cs file that the Ninject.MVC3 package adds to your project’s App_Start folder.

private static void RegisterServices(IKernel kernel)
{
    // Illustrative binding: SomeService stands in for your own implementation.
    kernel.Bind<ISomeService>().To<SomeService>();
}

Now create an ApiController that accepts an ISomeService in its constructor.

public class MyApiController : ApiController
{
  private readonly ISomeService _service;

  public MyApiController(ISomeService service)
  {
    _service = service;
  }

  // Other code...
}

That’s not going to work out of the box. You need to configure a dependency resolver for Web API via a call to GlobalConfiguration.Configuration.ServiceResolver.SetResolver.

However, you can’t pass in the instance of the ASP.NET MVC dependency resolver, because their interfaces are different types, even though the methods on the interfaces look exactly the same.

This is why I wrote a small adapter class and convenient extension method. Took me all of five minutes.
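The adapter itself is tiny. Here’s a sketch of roughly what it looks like (the class and extension method names are mine, and the exact Web API resolver interface moved around between pre-release builds, so treat this as illustrative):

```csharp
using System;
using System.Collections.Generic;

// Wraps the ASP.NET MVC resolver so Web API can use it.
public class DependencyResolverAdapter : System.Web.Http.Services.IDependencyResolver
{
    readonly System.Web.Mvc.IDependencyResolver resolver;

    public DependencyResolverAdapter(System.Web.Mvc.IDependencyResolver resolver)
    {
        if (resolver == null) throw new ArgumentNullException("resolver");
        this.resolver = resolver;
    }

    public object GetService(Type serviceType)
    {
        return resolver.GetService(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return resolver.GetServices(serviceType);
    }
}

public static class DependencyResolverExtensions
{
    // Convenience method so the registration call reads nicely.
    public static System.Web.Http.Services.IDependencyResolver ToServiceResolver(
        this System.Web.Mvc.IDependencyResolver resolver)
    {
        return new DependencyResolverAdapter(resolver);
    }
}
```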

In the case of the Ninject.MVC3 package, I added the following line to the Start method.

public static void Start()
{
  // ...Pre-existing lines of code...

  // The added line; ToServiceResolver is the adapter extension method
  // mentioned above (the name is illustrative):
  GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
      DependencyResolver.Current.ToServiceResolver());
}


With that in place, the registrations work for both regular controllers and API controllers.

I’ve been too busy with my new job to dig deeply into ASP.NET MVC 4, but at some point I plan to spend more time with it. I figure we may eventually upgrade to run on MVC 4, which will let me get my hands really dirty with it.

Have you tried it out yet? What hidden gems have you found?

personal

Recently I’ve been tweeting photos of my kids playing with a new toy my wife bought them that they are totally enthralled with. It’s called the Bildopolis Big Bilder Kit.

This is a creation of a family friend of ours who used to be an industrial designer at IDEO. He left a while ago to start on his own thing and came up with this. We bought a set immediately, in part to support his efforts, but also because it looked cool. We were not disappointed. This thing is fun.


The concept is really simple. It’s a set of cardboard squares, triangles, and rectangles that have “female” velcro semicircles attached. Those are the white parts in the photos. The kit also comes with a bag of colorful “male” velcro “dots”.

Apply the dots to attach the cardboard pieces together to make all sorts of interesting structures for your kids to play in. If your kids are older, they’ll probably want to build their own structures.


I want to make it clear that I don’t get any kickbacks or anything and we paid full price for our set. I’m just pimping this out because I had so much fun building stuff with it and my kids really love it. Maybe yours will too.


In one evening, we built about four different structures. They’re quick and flexible to build.


There are a few downsides though. The Haackerdome below is probably going to be a permanent fixture in our living room, which takes up a bit of space. Also, these structures are meant for playing inside of, not on top of. They wouldn’t be sturdy enough to support the weight of kids climbing on top of them.


In any case, if you want to learn more, check out the Bildopolis website. They have a video showing off the kit. There’s also a gallery where folks have sent in photos of what they’ve built. That’s where I got the idea for the dome. The kit sells for $80 not including shipping.

personal, community, github

Next week Microsoft hosts its annual MVP Summit. So what better time for me to host my first GitHub Drinkup – MVP Edition at the Tap House Grill!

Not an MVP? Nonsense! You are in my book, so show up! If you are an MVP, you’re still welcome to slum it with the rest of us schlubs.


All the details are posted over at the GitHub Blog post.

What is a “Drinkup” you ask?

It’s pretty simple. It’s a meetup where we drink and share stories of valor in the face of code complexity. Or jabber on about whatever else software developers want to geek out about.

Getting together and sharing a brew or two is deeply ingrained in the GitHub culture. After all, GitHub was conceived over a beer at a bar.

If you’re not in the Seattle/Bellevue area, GitHub hosts a monthly (more or less) drinkup in San Francisco where the GitHub HQ is located. We also host the occasional drinkup all over the world when we attend conferences in various locations.

So come on out, unwind, and let me buy you a beer. Perhaps this drinkup will be the one where you’ll meet your next cofounder. At the very least, you’ll enjoy some good drink and good conversation.

.NET Startup Group March 8

And if you can’t come out on Tuesday, I’ll also be speaking at the .NET Startup group on March 8 about GitHub, Git, and making it all work on Windows. I might throw in a dash of NuGet since I can’t help myself. There are still plenty of seats available.

code, open source

In my previous post, I attempted to make a distinction between Open Source and Open Source Software. Some folks took issue with the post and that’s great! I love a healthy debate. It’s an opportunity to learn. One minor request though. If you disagree with me, I do humbly ask that you read the whole post first before you go and rip me a new one.

It was interesting to me that critics fell into two opposing camps. There were those who felt it was disingenuous for me to use the term “open source software” to describe a Microsoft project that doesn’t accept contributions and is developed under a closed model, even if it is licensed under an open source license. Many of them accepted that, yes, ASP.NET MVC is OSS, but felt I still shouldn’t use the term.

Others felt that the license is the sole determining factor for open source and that I wasn’t helping anybody by trying to expand the definition of “open source.” In my defense, I wasn’t trying to expand it so much as describe how I think a lot of people use the term today, but they have a good point.

Going back to the first camp, a common refrain I heard was that software that meets the Open Source Definition might “meet the letter of the law, but not the spirit of the law” when it comes to open source.

Interesting. But what is the “spirit of open source” that they speak of? What is the essential ingredient?

Looking For The Spirit

I assume they mean developing in the open and accepting contributions to be necessary ingredients to qualify a project as being in the spirit of open source. I started to dig into it. I expected that we should probably see references to these things all over the place when we look up the term “open source”.

Oddly enough, Wikipedia doesn’t really talk about those things in its article on Open Source, but hey, it’s Wikipedia.

And there’s no mention of accepting contributions in the Open Source Definition, or pretty much anywhere else that I could find. It doesn’t really address it.

Neither does any open source license have anything to say about the way the software is developed or whether the project accepts contributions.

Let’s take a look at the Free Software Foundation, which is opposed to the term “Open Source” and has its own definition of free software. I’m going to quote a small portion of it.

Thus, “free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer”.

A program is free software if the program’s users have the four essential freedoms:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

I looked for interviews with the pioneers of open source for more information. Richard Stallman reiterates the same points in this interview. What about Eric Raymond? He’s the President Emeritus of the Open Source Initiative (OSI), which created the OSD that I’ve been using as my definition.

I then asked Miguel De Icaza for his thoughts. Miguel is a developer with a long history in open source. He started the GNOME and Mono projects and has more open source experience in his pinky than I have in my entirety. He had some interesting insights.

In general, I am not sure where the idea came from that for something to be open source, the upstream maintainer had to take patches, that has never been the case.   Some maintainers are just too protective (qmail I believe for a long time did not take patches, or even engage in public discussions).   Others are just effectively too hard for average developers to get patches in (Linux kernel, C compilers) that they are effectively closed.

What gives?

Isn’t it odd that these concepts that we hold so dearly to as being part of the spirit of open source aren’t mentioned by these stewards of open source?

Maybe it’s because the essential ingredient to open source, the spirit of open source, is not accepting contributions, it’s freedom.

The freedom to look at, remix, and redistribute source code without fear of recrimination.

Even so, I still think accepting contributions and developing in the open are hugely important to any open source project. If I didn’t believe that, I wouldn’t be working at GitHub.

I started to think that perhaps a more apt term to describe that process is crowd sourcing. Crowd sourcing can provide many benefits, according to the article I linked to:

  • Problems can be explored at comparatively little cost, and often very quickly.
  • Payment is by results or even omitted.
  • The organization can tap a wider range of talent than might be present in its own organization.
  • By listening to the crowd, organizations gain first-hand insight on their customers’ desires.
  • The community may feel a brand-building kinship with the crowdsourcing organization, which is the result of an earned sense of ownership through contribution and collaboration.

But Miguel thought a better term is “open and collaborative development.” That’s the process that is so closely associated with developing open source software that it’s become synonymous with open source in the minds of many people. But it’s not the same thing because it’s possible to conduct open and collaborative development on a non-open source project.

Splitting Hairs?

I know some folks will continue to think I’m splitting hairs, which is an impressive feat if you think about it as I definitely don’t have the hands of a surgeon and hairs are so thin.

One counter argument might be that perhaps the original framing of “open source” was focused on freedoms, but what we refer to as open source today has evolved to include crowd sourcing as an essential component.

I can see that. It seems obvious to me that collaborative development in the open is a huge part of the culture surrounding open source. But being a core part of the culture doesn’t necessarily mean it’s in the spirit. A lot of people feel that drugs are a big part of the Burning Man culture, for example, but I don’t think it’s an essential part of its spirit. It’s the creativity and expression that forms the spirit of Burning Man.

Who Cares About The License?

Another point someone made is that the community of contributors is more beneficial than having an open source license. I addressed that point in my last post, but Miguel had a great take on this, emphasis mine.

The reason why the OSI definition was important is because it provided a foundation that allowed unlimited redistribution and serviceability for the code for the future, with the knowledge that there were no legal restrictions.  This is the foundation for Debian’s policies and in general, the test that must be passed for a project to be adopted.  Serviceability does not require talking to an upstream maintainer, it means having the means and rights to do so, even if the upstream distribution goes away, vanishes, dies, or moves on to greener pastures.

You could build the most open and collaborative project ever, but if the source code isn’t under a license that meets the open source definition, it may be possible for the project to close shop and withdraw all your rights to the code.

With OSS code, that’s just not possible. A copyright holder of the source can close shop and stop giving you access to new additions they write, but they can’t retroactively withdraw the license to code they’ve already released under an OSI-certified license.

And this is a real benefit, even with otherwise “closed” projects that are open source. Miguel gave me one example in relation to Mono.

A perfect example is all the code that we have taken from Microsoft that they open sourced and ran with it. In some cases we modify it/tweak it (DLR, BigInteger), in others we use as-is, but without it being open source, we would be years behind and would have never been able to build a lot of extra features that we now depend on.

Prove me wrong?

By the way, I’m open to being proven wrong and changing my mind. Heck, I’ve done it twice this week, as Miguel convinced me that “Open Source” only requires the license. But if you do disagree with me, I’d love to see references that back up your point, as opposed to unsubstantiated name-calling.

I think this topic is tricky because it’s very easy to discuss whether software is licensed under an open source license. If we agree on the open source definition, or the free software definition, you can easily evaluate that yes, the software gives you the rights and freedoms mentioned earlier.

But it’s trickier to create strict definitions of what encompasses the spirit of anything, because people have different ideas about it. I know some folks feel that commercial software goes against the spirit of open source. I tend to disagree, pointing out that it’s the license that matters; whether or not you make money from the software is tangential. But I digress.

I know I won’t convince everyone of my points. That’s fine. I enjoy a healthy debate. The only thing I hope to convince you of is that even if you disagree with me, that you can see I’ve provided good reasons for why I believe what I do and it’s not out of being disingenuous.

Collaboration is Good

In the end, I think it’s a huge benefit for any open source project to develop in an open collaborative manner. When I think of open source software, that’s the development model that comes first in my mind. But, if you don’t follow that model (perhaps for good reasons, maybe for bad reasons) but do license the software under an open source license, I will still recognize your project as an open source project.

code, mvc, open source, community

UPDATE: I have a follow-up post that addresses a few criticisms of this post.

It all started with an innocent tweet asking whether ASP.NET MVC 3 is “open source” or not. I jumped in with my usual answer, “of course it is!” The source code is released under the Ms-PL, a license the OSI legally reviewed to ensure it meets the Open Source Definition (OSD). The Free Software Foundation (FSF) recognizes it as a “free software license,” making it not only OSS but FOSS (Free and Open Source Software) by that definition.

Afterwards, a healthy debate ensued on Twitter. No seriously! “Healthy” debates on Twitter do happen sometimes. And many times, I learn something.

Many countered with objections. How can the project be open source if development is done in private and the code is “tossed over the wall” at the end of the product cycle under an open source license? How can it be open source if it doesn’t accept contributions?

This is when it occurred to me:

  1. I’ve been using these terms interchangeably and imprecisely.
  2. Many people have different concepts of what “open source” is.

“Open source” is not the same thing as “open source software”. The first defines an approach to building software. The second is the end product. Saying they are the same thing is like saying that a car that Toyota makes and Kanban are the same thing. Obvious, isn’t it?

The Importance of Definitions

I’m a big fan of open source software and the open source model of developing software. I don’t necessarily think all software should be open source, but I do find a lot of benefits to this approach. I’m also not arguing that writing OSS in a closed off manner is a good or bad thing. What I care about is having clear definitions so we’re talking about the same things when we use these terms. Or at least making it clear what I mean when I use these terms.

Definitions are important! If you allow any code to define itself as “open source,” you get monstrosities like the MS-LPL. If you don’t remember this license, it looked a lot like an open source license, but it had a nasty platform restriction.

Microsoft coined the term “Shared Source” to describe licenses that allow you to look at the code, but aren’t open source by most common definitions of the term.

Open Source Software

Going back to my realization, when I talk about “open source” I often really mean “open source software”. In my mind, open source software is source code that is licensed under a license that meets the Open Source Definition.

Thus this phrase is completely defined in terms of properties of the source code. More specifically, it’s defined in terms of the source code’s license and what that license allows.

So when we discuss what it means for software to be open source software, I try to frame things in terms of the software and not who or how the software is made.

This is why I believe ASP.NET MVC is OSS despite the fact that the team doesn’t currently accept outside contributions. After all, source code can’t accept contributions. People do. In fact, open source licenses have nothing to say about contributions at all.

What defines OSS is the right to modify and freely redistribute the code. It doesn’t give anyone the right to force the authors to accept contributions.

Open Source

So that’s the definition that’s often in my mind when I’ve used the term “open source” in the past. From now on, I’ll try to use “open source software” for that.

When most people I talk to use the term “open source”, they’re talking about something different. They are usually talking about the culture, process, and philosophy that surrounds building open source products such as open source software. Characteristics typically include:

  • Developed in the open with community involvement
  • The team accepts contributions that meet its standards
  • The end product has an open source license. This encompasses open source software, open source hardware, etc.

There are probably others I’m forgetting, but those three stand out to me. So while I completely believe that a team can develop open source software in private without accepting contributions, I do believe they’re not complying with the culture and spirit of open source. Or as my co-worker Paul puts it (slightly modified by me):

I would put a more positive spin, that they’re losing out on many of the benefits of open source.

And as Bertrand points out in my comments, “open source” applies to many more domains than just software, such as open source hardware, recipes, etc.

Why does it matter?

So am I simply quibbling over semantics? Why does it matter? If a project does it in private and doesn’t accept contributions, why does anyone care if the product is licensed as open source software?

I think it’s very important. In fact, I think the most important part is the freedom that an open source license allows. As important and wonderful as I think developing in the open and accepting contributions is, I think the freedom the license gives you is even more important.

Why? Maybe a contrived, but not too far fetched, example will help illustrate.

Imagine a project building a JavaScript library that develops completely in the open and accepts contributions. They’ve built up a nice healthy ecosystem and community around their project. Some of the code is really whiz bang and would make my website super great. I can submit all the contributions I want. Awesome, right?

But there’s one teensy problem. The license for the code has a platform restriction. Let’s say the code may only run on Windows. That’d be pretty horrible, right? The software is useless to me. I can’t even use a tiny part of it in my own website because I know browsers running on macs will visit my site and thus I’d be distributing it to non-Windows machines in violation of the license.

But if it were a library developed in private, and once in a while they produce an open source licensed release, so many doors are opened. I can choose to fork it and create a separate open community around the fork. I can distribute it via my website however I like. Others can redistribute it.

Again, just to clarify. In the best case, I’d have it both ways. An open source license and the ability to submit contributions and see the software developed in the open. My main point is that the license of a piece of useful software, despite its origins, is very important.

In the end?

So in the end, is ASP.NET MVC 3 “open source”? As Drew Miller aptly wrote on Twitter,

How I like to describe it: ASP.NET MVC 3 is open source software, but not an open source project.

At least not yet anyways.

UPDATE: After some discussion, I’m not so sure even this last statement is correct. Read my follow-up post, What is the spirit of open source?

^1^Although the Free Software Foundation recognizes the MS-PL (and MS-RL) as a free software license, they don’t recommend it because of its incompatibility with the GNU GPL.

open source, nuget, code 0 comments suggest edit

Recently, the Log4Net team released log4net 1.2.11 (congrats by the way!). The previous version of log4Net was 1.2.10.

Regardless of which versioning philosophy you subscribe to, we can all agree that incrementing only the third part of a version number indicates the new release is a minor update, one that hopefully has no breaking changes. Perhaps a bug fix release.

This is especially true if you subscribe to Semantic Versioning (SemVer) as NuGet does. As I wrote previously,

SemVer is a convention for versioning your public APIs that gives meaning to the version number. Each version has three parts, Major.Minor.Patch.

In brief, these correspond to:

  • Major: Breaking changes.
  • Minor: New features, but backwards compatible.
  • Patch: Backwards compatible bug fixes only.

Given that the Patch number is supposed to represent bug fixes only, NuGet chooses the minimum Major and Minor version of a package to meet the dependency constraint, but the maximum Patch version. David Ebbo describes the algorithm and rationale in part 2 of his three part series on NuGet Versioning.
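To make that rule concrete, here’s a small sketch of the selection logic as described above. This is my own illustration, not NuGet’s actual code; `System.Version`’s third component (`Build`) stands in for the SemVer Patch number.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DependencyResolution
{
    // Among candidate versions satisfying the minimum constraint, take the
    // lowest Major.Minor, then the highest Patch within that pair.
    // (Hypothetical sketch of the rule, not NuGet's implementation.)
    public static Version Resolve(Version minimum, IEnumerable<Version> available)
    {
        return available
            .Where(v => v >= minimum)
            .OrderBy(v => v.Major)
            .ThenBy(v => v.Minor)
            .ThenByDescending(v => v.Build) // Build plays the role of Patch
            .First();
    }

    static void Main()
    {
        var available = new[]
        {
            new Version(1, 2, 10),
            new Version(1, 2, 11),
            new Version(2, 0, 0)
        };
        // Lowest satisfying Major.Minor is 1.2; highest Patch within 1.2 is 11.
        Console.WriteLine(Resolve(new Version(1, 2, 10), available)); // 1.2.11
    }
}
```

So a dependency on “1.2.10 or greater” resolves to 1.2.11 rather than 2.0.0, which is exactly the behavior that matters in the log4net story below.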

Strong Names and Versioning

The consequence of this is as follows. With the new log4Net release, if you have a package that has log4net 1.2.10 or greater as a dependency:

<dependency id="log4net" version="1.2.10" />

Installing that package would give you log4net 1.2.11. In most cases, this is what you want because the newer release might have important bug fixes such as security fixes.

However, in this case, Log4Net changed the strong name for their assembly for 1.2.11. Whatever your feelings about using strong names or not (that’s a separate discussion), the fact is that if you choose to use them, changing the strong name is changing the identity of your assembly. That’s a major breaking change.

And man, were a lot of people affected! We heard from tons of folks who were broken by this and unsure how to fix it.

NuGet does support a workaround so that you can prevent inadvertent upgrades. You can constrain the allowed versions of an installed package by manually modifying packages.config. Sadly, we don’t yet have a UI for this, so it’s a bit of a pain.
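For example, adding an `allowedVersions` attribute to the package entry in packages.config constrains the range using interval notation. The surrounding file contents here are illustrative; the key part is the attribute:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- [1.2.10] pins the package to exactly 1.2.10;
       [1.2,1.3) would allow any 1.2.x patch release. -->
  <package id="log4net" version="1.2.10" allowedVersions="[1.2.10]" />
</packages>
```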

The Solution

Apart from never changing your strong name, the solution in this case is to treat this change as a major breaking change and increment the major version number of the assembly.

I don’t anticipate the Log4Net team will change the version of their assembly, but I reached out to the maintainer of the Log4Net package (no connection to the Log4Net team so please don’t give him grief about this) and he graciously incremented the major version of the Log4Net package to solve the problem.

Just to be clear, the log4net 2.0 NuGet package contains the log4net 1.2.11 assembly.

While it’s generally good form to have the package and assembly version match to avoid confusion, it’s not necessary. This is a good example of a case where they need to differ. I do suggest having the “Title” and “Description” note this fact to help avoid further confusion.

I want to thank Jiri for maintaining the Log4Net package and being responsive to the need out there! It’s much appreciated.

nuget, open source 0 comments suggest edit

I’ve seen a few recent tweets asking about what’s going on with NuGet since I left Microsoft. The fact is that the NuGet team has been hard at work on the release and have been discussing it in various public forums. I think the feeling of “quiet” might be due to the lack of blogging, which I can easily correct right now!

In this post, I want to highlight a few things:

  • What the NuGet team has been working on
  • How you can track what we’re doing
  • And how you can get involved in the discussion

Just to clarify, I have not left the NuGet project. Until my name is removed from this page, I will be involved. However, as I’ve been ramping up in my new job at GitHub (loving it!), I have been less involved than I would like simply because there’s only so much of me to go around. Once we get the project I’m working on at work shipped, I hope to divide my time a little better so that I don’t neglect NuGet. But at the same time, everyone else has stepped up so much that I don’t think they’ve missed me much.

The team and I are working through how to best keep me involved and we are starting to improve lines of communication.

NuGet Status Page

The NuGet team is currently working on NuGet 1.7, but in the meantime, we’ve shipped a status page at

This site is designed to provide the community with information about the overall state of NuGet, as well as information about the future direction and plans of the NuGet team. Notifications about planned maintenance and outages will be posted in the “State of NuGet address” section and can be followed via the RSS feed. During these times the team will use this section to communicate any pertinent information.

NuGet is more than an add-in to Visual Studio. It’s an important service to multiple different clients and partners and we’re working on ways to improve that communication. The status page is a start, but we’re open to other ideas for improving communication.

NuGet Issue Tracker

As always, if you’re curious about the progress being made towards NuGet 1.7, just visit the issue tracker. The link I just provided shows a filtered view of issues that are still open for NuGet 1.7. You can select the Fixed or Closed status to see what issues have already been implemented for 1.7.

So even if our blogs get a little quiet, the issue tracker is the source of truth about the activity.

NuGet JabbR

The NuGet team is moving more and more of our design discussions to our JabbR room. Don’t know what JabbR is? It’s a real-time chat site built on top of ASP.NET and SignalR with a lot of nice features. It’s similar to CampFire, but has great features such as tab expansions for user names as well as emojis! JabbR itself has a very active community surrounding it and they accept pull requests!

In fact, last night starting at around 11 PM we had a big design discussion around capability filtering. For example, if you are on a client that doesn’t support a feature of the package (such as PowerShell scripts), can we filter that out for you? If you have something to say about this, don’t respond here, go to the JabbR room!

What’s Next?

A big focus for us is getting the community more and more involved in NuGet. We hope the move towards leveraging JabbR more helps in that regard. We’re still hashing out some of the details in how we do this. For example, what should be discussed in JabbR vs our discussions site? I think JabbR is a great place to hash out a design and then perhaps summarize the results in a discussion item for posterity. What do you think?

code 0 comments suggest edit

Back in November, someone asked a question on StackOverflow about converting arbitrary binary data (in the form of a byte array) to a string. I know this because I make it a habit to read randomly selected questions in StackOverflow written in November 2011. Questions about text encodings in particular really turn me on.

In this case, the person posing the question was encrypting data into a byte array and converting that data into a string. The conversion code he used was similar to the following:

string text = System.Text.Encoding.UTF8.GetString(data);

That isn’t exactly their code, but this is a pattern I’ve seen in the past. In fact, I have a story about this I want to tell you in a future blog post. But I digress.

The infamous Jon Skeet answers:

You should absolutely not use an Encoding to convert arbitrary binary data to text. Encoding is for when you’ve got binary data which genuinely is encoded text - this isn’t.

Instead, use Convert.ToBase64String to encode the binary data as text, then decode using Convert.FromBase64String.

Yes! Absolutely. Totally agree. As a general rule of thumb, agreeing with Jon Skeet is a good bet.

Not to give you the impression that I’m stalking Skeet, but I did notice that this wasn’t the first time Skeet answered a question about using encodings to convert binary data to text. In response to an earlier question he states:

Basically, treating arbitrary binary data as if it were encoded text is a quick way to lose data. When you need to represent binary data in a string, you should use base64, hex or something similar.

This piqued my curiosity. I’ve always known that if you need to send binary data in text format, base64 encoding is the safe way to do so. But I didn’t really understand why the other encodings were unsafe. What are the cases in which you might lose data?
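For reference, here’s what the safe approach looks like; a quick illustration of a lossless base64 round trip:

```csharp
using System;

class Base64RoundTrip
{
    static void Main()
    {
        var data = new byte[] { 128, 0, 255 }; // arbitrary binary data
        string text = Convert.ToBase64String(data);
        var roundTripped = Convert.FromBase64String(text);

        // Base64 only emits a small set of printable ASCII characters,
        // so every byte comes back unchanged.
        Console.WriteLine(text);
        Console.WriteLine(string.Join(", ", roundTripped)); // 128, 0, 255
    }
}
```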

Round Tripping UTF-8 Encoded Strings

Well let’s look at one example. Imagine you’re receiving a stream of bytes and you store it as a UTF-8 string and pop it in the database. Later on, you need to relay that data so you take it out, encode it back to bytes, and send it on its merry way.

The following code simulates that scenario with a byte array containing a single byte, 128.

var data = new byte[] { 128 };
string text = Encoding.UTF8.GetString(data);
var bytes = Encoding.UTF8.GetBytes(text);

Console.WriteLine("Original:\t" + String.Join(", ", data));
Console.WriteLine("Round Tripped:\t" + String.Join(", ", bytes));

The first line of code creates a byte array with a single byte. The second line converts it to a UTF-8 string. The third line takes the string and converts it back to a byte array.

If you drop that code into the Main method of a Console app, you’ll get the following output.

Original:      128
Round Tripped: 239, 191, 189

WTF?! The data was changed and the original value is lost!

If you try it with 127 or less, it round trips just fine. What’s going on here?

UTF-8 Variable Width Encoding

To understand this, it’s helpful to understand what UTF-8 is in the first place. UTF-8 is a format that encodes each character in a string with one to four bytes. It can represent every unicode character, but is also backwards compatible with ASCII.

ASCII is an encoding that represents each character with seven bits of a single byte, and thus consists of 128 possible characters. The high order bit in standard ASCII is always zero. Why only 7-bits and not the full eight?

Because seven bits ought to be enough for anybody:

When you counted all possible alphanumeric characters (A to Z, lower and upper case, numeric digits 0 to 9, special characters like “% * / ?” etc.) you ended up a value of 90-something. It was therefore decided to use 7 bits to store the new ASCII code, with the eighth bit being used as a parity bit to detect transmission errors.

UTF-8 takes advantage of this decision to create a scheme that’s both backwards compatible with the ASCII characters, but also able to represent all unicode characters by leveraging the high order bit that ASCII ignores. Going back to Wikipedia:

UTF-8 is a variable-width encoding, with each character represented by one to four bytes. If the character is encoded by just one byte, the high-order bit is 0 and the other bits give the code value (in the range 0..127).

This explains why bytes 0 through 127 all round trip correctly. Those are simply ASCII characters.

But why does 128 expand into multiple bytes when round tripped?

If the character is encoded by a sequence of more than one byte, the first byte has as many leading “1” bits as the total number of bytes in the sequence, followed by a “0” bit, and the succeeding bytes are all marked by a leading “10” bit pattern.

How do you represent 128 in binary? 10000000

Notice that it’s marked with a leading 10 bit pattern which means it’s a continuation character. Continuation of what?

the first byte never has 10 as its two most-significant bits. As a result, it is immediately obvious whether any given byte anywhere in a (valid) UTF‑8 stream represents the first byte of a byte sequence corresponding to a single character, or a continuation byte of such a byte sequence.

So in answer to the question of why 128 expands into multiple bytes when round tripped: a single byte of 128 isn’t a valid UTF-8 sequence on its own, so the decoder substitutes the Unicode Replacement Character, which is used for invalid data (thanks to RichB for the answer in the comments!).
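Those three bytes, 239, 191, 189, are exactly the UTF-8 encoding of the replacement character U+FFFD, which is easy to verify:

```csharp
using System;
using System.Text;

class ReplacementCharacter
{
    static void Main()
    {
        // U+FFFD is the Unicode replacement character that decoders
        // substitute for invalid input. Its UTF-8 encoding is EF BF BD.
        byte[] bytes = Encoding.UTF8.GetBytes("\uFFFD");
        Console.WriteLine(string.Join(", ", bytes)); // 239, 191, 189
    }
}
```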

I’ve noticed a lot of invalid UTF-8 byte sequences expand into these same three bytes. But that’s beside the point. The point is that using UTF-8 encoding to store binary data is a recipe for data loss and heartache.

What about Windows-1252?

Going back to the original question, you’ll note that the code didn’t use UTF-8 encoding. I took some liberties in describing his approach. What he did was use System.Text.Encoding.Default. This could be different things on different machines, but on my machine it’s the Windows-1252 character encoding, also known as “Western European Latin”.

This is a single byte encoding and when I ran the same round trip code against this encoding, I could not find a data-loss scenario. Wait, could Jon be wrong?

To prove this to myself, I wrote a little program that cycles through every possible byte and round trips it.

using System;
using System.Linq;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        var encoding = Encoding.GetEncoding(1252);
        for (int b = Byte.MinValue; b <= Byte.MaxValue; b++)
        {
            var data = new[] { (byte)b };
            string text = encoding.GetString(data);
            var roundTripped = encoding.GetBytes(text);

            if (!roundTripped.SequenceEqual(data))
            {
                Console.WriteLine("Round trip failed at: " + b);
                return;
            }
        }

        Console.WriteLine("Round trip successful!");
    }
}

The output of this program shows that you can encode every byte, then decode it, and get the same result every time.

So in theory, it could be safe to use Windows-1252 encoding of binary data, despite what Jon said.

But I still wouldn’t do it. Not just because I believe Jon more than my own eyes and code. If it were me, I’d still use Base64 encoding because it’s known to be safe.

There are five unmapped code points in Windows-1252. You never know if those might change in the future. Also, there’s just too much risk of corruption. If you were to store this string in a file that converted its encoding to Unicode or some other encoding, you’d lose data (as we saw earlier).
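Those five code points can be listed explicitly. As a side note, my understanding is that .NET’s codepage 1252 decodes these undefined bytes to the matching C1 control characters, which is precisely why the round trip experiment above happens to succeed:

```csharp
using System;
using System.Text;

class UnmappedCodePoints
{
    static void Main()
    {
        // The five byte values Windows-1252 leaves undefined. .NET decodes
        // them to the corresponding C1 control characters (e.g. 0x81 ->
        // U+0081), even though the standard assigns them no character.
        var encoding = Encoding.GetEncoding(1252);
        foreach (byte b in new byte[] { 0x81, 0x8D, 0x8F, 0x90, 0x9D })
        {
            string s = encoding.GetString(new[] { b });
            Console.WriteLine("0x{0:X2} -> U+{1:X4}", b, (int)s[0]);
        }
    }
}
```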

Or if you were to pass this string to some unmanaged API (perhaps inadvertently) that expected a null terminated string, it’s possible this string would include an embedded null character and be truncated.

In other words, the safest bet is to listen to Jon Skeet as I’ve said all along. The next time I see Jon, I’ll have to ask him if there are other reasons not to use Windows-1252 to store binary data other than the ones I mentioned.

personal 0 comments suggest edit

Birthdays are a funny thing, aren’t they? Let’s look at this tweet for example,

It’s @haacked’s birthday. Give him crap about getting old.

No gifts, please. Especially not what Charlie suggests.

Of course I’m getting older. We’re all getting older. Every second of every day and twice on Monday. Every femtosecond even. Perhaps the only time we’re not getting older is the moment within a Planck time interval. But once that interval is up, yep, you’re older.

Yet people apparently live their lives completely oblivious to this fact until their next birthday comes along. As the chronometer slides the next number into place, the realization dawns, “Damn! I’m older!” What? You didn’t know this?!

Feeling Older

The odd thing to me is that I don’t really feel older, mentally. I mean, I consciously know I’m older, but I feel like there’s this smooth continuum from my first memory to now. While the things I spend time thinking about have changed, the way I think about others and about myself feels like it hasn’t changed. I’m the same person then as I am now, and that kind of blows my mind.

For example, I still think fart jokes are funny.

In my mind, old people tell you how they used to walk miles uphill both ways to get to school. But I realize that these days, old people tell you about how they used to have to use their phone to connect online at 1200 baud. And there was no internet!!! OMG! What the hell were we connecting to?

Rather than feeling older, I am observing the evidence that I’m older. For example, I used the word “baud” in this blog post. Another example is how injuries now take much longer to heal. I have two kids, a four year old and a two year old and I’m pretty sure that if I were to slice them clean in half, that’d only put them out of commission for a week. They’d heal up and have no scars! Meanwhile, if I get a paper cut on a finger I can pretty much kiss that finger goodbye. Write it off as a loss and start practicing typing with two bloody stumps for hands.

Getting Experienced

But it’s not just physical. I do notice that while I don’t feel older, I do have the benefit of many more years of experience to draw upon. But more importantly, I’m finally actually paying attention to that. Go figure.

Last week, we had our GitHub summit and Friday was our field trip day to a distillery then a bar. This was the night set aside to party hard. Which is amazing to me because the night before I’m pretty sure we as a company consumed enough alcohol to bring elephants to extinction.

But I drew upon my experience and took it easy because I had a flight early the next morning and I did not want to be sick on an airplane. Contrast this to a few years before at Tech-Ed Hong Kong when I was out with some local friends and at 5:00 AM I had to leave the bar early to catch a flight. For the first time in my life, I contemplated suicide.

Some might call that getting wiser. I call it pain avoidance.

Knowing Less

The other evidence of my getting older is that I know a lot less now than I did when I was younger. Certainly that can’t be true in the absolute sense since I don’t have Alzheimer’s (that I’m aware of anyways). But I remember as a young programmer I knew everything!

I knew the right way to do all things in all situations with absolute conviction. But these days, I’m not so sure. About anything. All I have is the breadth of my experience and pattern matching at my disposal. Each new situation is simply a pattern matching exercise against my database of experience followed by an experiment to see if what I thought I knew produces good results.

The great thing about this approach is when you know everything, you have nothing to learn. But now, I’m constantly learning. Many of my experiments fail because many of my experiences are no longer relevant today. The world changes. Quickly. But each experiment is an opportunity to learn.

Staying Young

So yeah, I’m getting older, but I’ve found a loophole. Remember the kids I mentioned slicing in half? I’m not going to do that because I’m worried I’d end up with four of them then and two are already a handful.

These two do a great job of making me feel young because they will laugh at every fart joke I can come up with.

So thanks for all the birthday wishes on Twitter, Facebook, and elsewhere. Here’s to getting older!

tdd, code 0 comments suggest edit

Suppose you have a test that needs to compare strings. Most test frameworks do a fine job with their default equality assertion. But once in a while, you get a case like this:

public void SomeTest()
{
    Assert.Equal("Hard \tto\ncompare\r\n", "Hard  to\r\ncompare\n");
}

Let’s pretend the first value in the above test is the expected value and the second value is the value you obtained by calling some method.

Clearly, this test fails. So you look at the output and this is what you see:


It’s pretty hard to compare those strings by looking at them. Especially if they are two huge strings.

This is why I typically write an extension method against string used to better output a string comparison. Here’s an example of a test using my helper.

public void Fact()
{
    "Hard  to\rcompare\n".ShouldEqualWithDiff("Hard \tto\ncompare\r\n");
}

And here’s an example of the output.


At the very top, the assert message is the same as before. I deferred to the existing Assert.Equal method in xUnit (typically Assert.AreEqual in other test frameworks) to output the error message.

Underneath the existing message are headings for three columns: the character index, the expected character, and the actual character. For each character I print out the int value and the actual character.

Of course in some cases, I don’t print out the actual value. If I were to do that for new line characters and tab characters, it’d screw up the formatting. So instead, I special case those characters and print out the escape sequence in C# for those characters.

This makes it easy to compare two strings and see every difference when a test fails. Even the hidden ones.
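A minimal sketch of what such a helper might look like follows. The names, formatting, and the plain exception (in place of deferring to a test framework’s assert) are my own; the gist linked below is the real version:

```csharp
using System;

public static class StringDiffExtensions
{
    // Prints an index/expected/actual table, escaping whitespace
    // characters, then throws if the strings differ.
    public static void ShouldEqualWithDiff(this string actual, string expected)
    {
        if (actual == expected) return;

        Console.WriteLine("  Index    Expected    Actual");
        int length = Math.Max(expected.Length, actual.Length);
        for (int i = 0; i < length; i++)
        {
            string e = i < expected.Length ? Display(expected[i]) : "";
            string a = i < actual.Length ? Display(actual[i]) : "";
            string marker = e == a ? " " : "*"; // flag differing positions
            Console.WriteLine("{0} {1,-8} {2,-11} {3}", marker, i, e, a);
        }
        throw new Exception("Strings differ; see diff above.");
    }

    // Shows the character's int value plus either the character itself
    // or its C# escape sequence for characters that would break layout.
    public static string Display(char c)
    {
        switch (c)
        {
            case '\r': return ((int)c) + " \\r";
            case '\n': return ((int)c) + " \\n";
            case '\t': return ((int)c) + " \\t";
            default:   return ((int)c) + " " + c;
        }
    }
}
```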

This is a simple, quick and dirty implementation available in a Gist. For example, it doesn’t do any real diff comparison or try to line up similarities. That’d be a nice improvement to make at some point. If you can improve this, feel free to fork the gist and send me a pull request.

mvc 0 comments suggest edit

In the ASP.NET MVC 3 Uservoice site, one of the most voted up items is a suggestion to include an empty project template. No, a really empty project template.

You see, ASP.NET MVC 3 includes an “empty” project template, but it’s not empty enough for many people. So in this post, I’ll give you a much emptier one. It’s not completely empty. If you really wanted it completely empty, just choose the ASP.NET Empty Web Application template.

The Results

I’ll show you the results first, and then talk about how I made it. After installing my project template, every time you create a new ASP.NET MVC 3 project, you’ll see a new entry named “Really Empty”


Select that and you end up with the following directory structure.


I removed just about everything. I kept the Views directory because the Web.config file that’s required is not obvious and there’s special logic related to the Views directory. I also kept the Controllers directory, since that’s where the tooling is going to put controllers anyways. I also kept the Global.asax and Web.config files which are typically necessary for an ASP.NET MVC project.

I debated removing the AssemblyInfo.cs file, but decided to trim it down and keep it.

Building Custom Project Templates

I wrote about building a custom ASP.NET MVC 3 project template a long time ago. However, I’ve improved on what I did quite a bit. Now, I have a single install.cmd file you can run and it’ll determine whether you’re on x64 or x86 and run the correct registry script. The install.cmd and uninstall.cmd batch files are there for convenience and call into a PowerShell script that does the real work.

UPDATE 1/12/2012: Thanks to Tim Heuer, we have an even better installation experience. He refactored the project to output a VSIX file. All you need to do is double click the extension file to install the project template. I’ve uploaded the extension file to GitHub here.

I tried uploading it to the gallery, but it wouldn’t let me. I’ll follow up on that.


If you’re wondering why the product team hasn’t included this all along, it’s for a lot of reasons. There was (at least when I was there) internal debate about how empty to make it. For example, when you create a new project with my empty template, and hit F5, you get an error. Not a great experience for most people.

Honestly, I’m all for it, but there are many other higher priority items for the team to work on. So I figured I’d do it myself and put it up on GitHub.


Installation is really simple. If you like to build things from source, grab the source from my GitHub repository and run the build.cmd batch file. Then double click the resulting VSIX file. Be sure to read the README for more details.

If you don’t yet know how to use Git to grab a repository, don’t worry, just navigate to the downloads page and download the VSIX file I’ve conveniently uploaded.


Hey, if you think you can help me make this better, please go fork it and send me a pull request. Let me know if I include too little or too much.

I’ve already posted a few things that could use improvement in the README. If you’d like to help make this better, consider one of the following. :)

  • Make script auto-detect whether VS is running or not and do the right thing
  • Test this on an x86 machine
  • Write an installer for this

Let me know if you find this useful.

open source, community 0 comments suggest edit

Mary Poppendieck writes the following in Unjust Deserts (pdf), a paper on compensation systems (emphasis mine),

There is no greater de-motivator than a reward system that is perceived to be unfair. It doesn’t matter if the system is fair or not. If there is a perception of unfairness, then those who think that they have been treated unfairly will rapidly lose their motivation.

Written over seven years ago, the paper is just as insightful and applicable today. For example, let’s apply it to the recent dust-up about the legitimacy and fairness of the Microsoft MVP Program.

I think the MVP program means well. It’s not trying to be a conspiracy or filch you of your just deserts. But if you think about the MVP program as a compensation system, it becomes very clear why people feel disillusioned.

What compensation am I talking about?

  1. An MSDN Subscription
  2. Privileged access to product teams and not yet public information (under NDA)
  3. A yearly summit which provides hotel rooms and access to product team members as well as a nice party.

Not only is it a compensation system, but the means by which compensation is doled out is perceived to be arbitrary and hidden. It’s a recipe for mistrust.

Intrinsic Motivations

Mary goes on to point out,

In the same way, once employees get used to receiving financial rewards for meeting goals, they begin to work for the rewards, not the intrinsic motivation that comes from doing a good job and helping their company be successful.

Someone asked me what I thought about the MVP program recently and I said I think Microsoft’s actually a great company, but I don’t think you should seek out recognition from Microsoft or any other corporation for your community contributions. I think that provides the wrong incentives to build community.

If you run an open source project, don’t do it to receive recognition from Microsoft, or any other corporation for that matter (except maybe your own). Do it to scratch an itch! Do it because it’s fun. Do it to show cool stuff to your peers. Worry about their recognition more than some corporation’s.

If you answer questions about a technology on StackOverflow, do it because you enjoy sharing your knowledge with others (and you want the SO points!), not because it’s on a checklist to receive an MVP award.

Just as Mary points out, when you start to frame these activities as means to receive an extrinsic reward, you become disillusioned. So whether the program exists or not, we should strive on our part to not feel a sense of entitlement to the program and focus on our intrinsic motivations.

Fixing It

I covered what I think we should strive for. But what do I think Microsoft should do? Several things.

So far, I glossed over the fact that recognition from Microsoft isn’t the only reason people want the award. There are material benefits. MVPs are part of a privileged group that gets early access to what Microsoft is doing, which might provide a real competitive advantage. Why wouldn’t you seek that out?

Open Development

Let’s tackle the first issue: privileged “early access”. There’s one easy solution to that. Do you know why NuGet doesn’t have an “early access” program? Drew Miller nails it on Twitter:

Know how you avoid the need for a privileged group of folks under NDA that inevitably is seen as special and superior? Develop in the open.

NuGet sidesteps the whole question of a recognition program by developing in the open. The same is true for the Azure SDK. When active development occurs in a public repository, the whole concept of “early access program” makes no sense.

Not only that, but recognition in an open source project doesn’t come from some corporation. It comes from the maintainers of a project and from the folks in the project’s community that you’ve helped. You can point to the reason people are recognizing you.

Better Free Tools

The other reason folks want an MVP award is access to the professional tools. Most companies will easily pay for these, but if you’re a hobbyist or open source developer, it’s a lot of money to shell out.

In this regard, I think Microsoft should either give its free Express tools more pro features, such as support for Visual Studio extensions and multi-project solutions, or simply make Visual Studio Professional free and focus on growing the ecosystem, which gets a boost when everyone has better tools to build on the platform. Everyone wins.

Focused recognition

I don’t think it’s inherently wrong for a company to recognize people’s contributions. But it has to be done in a way so that it’s seen as icing and not an entitlement or cronyism.

It’s darn near impossible to conceive of a recognition program that would be seen as universally fair and that recognizes something as broad as “community contributions”. A better approach might be to have multiple smaller recognition programs. Focus on removing obstacles that get in the way of people doing the things that are good for all of us. For example, it benefits Microsoft when:

  1. People are helping solve each other’s problems on the forums.
  2. People are giving talks about their products.
  3. People are building software (open and not) on their platforms.
  4. Probably some others I’m forgetting…

For what it’s worth, I think the first one is already solved by StackOverflow. Just move your forums there and be done with it. After all, nobody gets upset when they answer a question on Twitter and don’t get StackOverflow points.


Will Microsoft change the program? I have no idea. I’m not all that concerned about it, really. In the meantime, we can recognize folks who make our lives better. We don’t need to wait for Microsoft to do so. I’ve used a huge swath of open source projects that have made my development smoother. I’ve found many great answers in forums, blog posts, and StackOverflow that unblocked me.

Moving forward, I’ll make an extra effort to thank the people responsible for those things. Maybe there are some projects and folks you should recognize too. Go for it! It’ll feel good.

Disclaimer: I was a Microsoft MVP for about three months before joining Microsoft as an employee. I’m now an employee of GitHub. My opinion here is simply my own opinion and does not necessarily represent the opinion of any employer past, present, or future. Nor does it represent the opinion of my dog, because I don’t have one, nor anyone in my neighborhood.

code, tdd 0 comments suggest edit

In the past, I’ve tried various schemes to structure my unit tests but never fell into a consistent approach. Pretty much the only rule I had (which I broke all the time) was to write a test class for each class I tested. I would then fill that class with a ton of haphazard test methods.

That was until I saw the approach that Drew Miller took. The way he structured the unit tests struck me as odd at first, but it quickly won me over. Drew tells me he can’t take all the credit for this approach. It came from his time at CodePlex and builds upon practices he learned from Brad Wilson and Jim Newkirk. That’s the thing I like about Drew: he won’t take credit for other people’s work. Unlike me, of course.

The structure has a test class per class being tested. That’s not so unusual. But what was unusual to me was that he had a nested class for each method being tested.

I’ll provide a simple code example to illustrate this approach and then highlight some of the benefits. The following has two methods for embellishing names with more interesting titles. What it does isn’t really that important for this discussion.

using System;

public class Titleizer
{
    public string Titleize(string name)
    {
        if (String.IsNullOrEmpty(name))
            return "Your name is now Phil the Foolish";

        return name + " the awesome hearted";
    }

    public string Knightify(string name, bool male)
    {
        if (String.IsNullOrEmpty(name))
            return "Your name is now Sir Jester";

        return (male ? "Sir" : "Dame") + " " + name;
    }
}

Under Drew’s system, I’ll have a corresponding top level class, with two embedded classes, one for each method. In each class, I’ll have a series of tests for that method.

Let’s look at a set of potential tests for this class. I wrote xUnit.NET tests for this, but you could apply the same approach with NUnit, mbUnit, or whatever you use.

using Xunit;

public class TitleizerFacts
{
    public class TheTitleizerMethod
    {
        [Fact]
        public void ReturnsDefaultTitleForNullName()
        {
            // Test code
        }

        [Fact]
        public void AppendsTitleToName()
        {
            // Test code
        }
    }

    public class TheKnightifyMethod
    {
        [Fact]
        public void ReturnsDefaultTitleForNullName()
        {
            // Test code
        }

        [Fact]
        public void AppendsSirToMaleNames()
        {
            // Test code
        }

        [Fact]
        public void AppendsDameToFemaleNames()
        {
            // Test code
        }
    }
}

Pretty simple, right? If you want to see a real-world example, look at these tests of the user service within

So why do this at all? Why not stick with the old way I’ve done in the past?

Well for one thing, it’s a nice way to keep tests organized. All the tests (or facts) for a method are grouped together. For example, if you use the CTRL+M, CTRL+O shortcut to collapse method bodies, you can easily scan your tests and read them like a spec for your code.


You also get the same effect if you run your tests in a test runner such as the xUnit test runner:


When the test class file is open in Visual Studio, the class drop down provides a quick way to see a list of the methods you have tests for.


This makes it easy to then see all the tests for a given method by using the drop down on the right.


It’s a minor change to my existing practices, but one that I’ve grown to like a lot and hope to apply in all my projects in the future.

Update: Several folks asked how to share common setup code across all tests. ZenDeveloper has a simple solution in which the nested child classes simply inherit from the outer parent class. Thus they’ll all share the same setup code.

Tags: unit testing, tdd, xunit

personal 0 comments suggest edit

Happy New Year’s Eve everyone! And by the time you read this, it’ll probably already be the new year. To my friends across the international date line, what is 2012 like? The rest of us will be there soon.

New Year’s Eve has always been one of my favorite holidays. It brings a collective time for reflection on the past year and anticipation and hope for the year to come.

And for me, New Year’s Eve has an extra special meaning because exactly ten years ago on New Year’s Eve, I met this woman at Giant Village. A mutual friend suggested that we should meet since we were both attending this event. This woman was there with her brother, and I was there with a buddy.


I wonder what she’s been up to after all these years?

Just kidding!

I know what she’s up to. We met in 2001 and were smitten by the time 2002 arrived and have been together ever since. Ten years later, we’ve added to our funky bunch. We work hard hoping to keep these little munchkins alive. What a difference a decade makes, no?


So yeah, New Year’s Eve totally rocks in my book.

code, open source 0 comments suggest edit

’Tis the season for “Year in Review” and “Best of” blog posts. It’s a vain practice, to be sure. This is exactly why I’ve done it almost every year! After all, isn’t all blogging pure vanity? Sadly, I did miss a few years when my vanity could not overcome my laziness.

This year I am changing it up a bit to look at some of the highlights, in my opinion, that occurred in 2011 with open source software and the .NET community. I think it’s been a banner year for OSS and .NET/Microsoft, and I think it’s only going to get better in 2012.


We released NuGet 1.0 at the beginning of this year, and it had a big impact on the amount of sleep I got. Insomnia aside, it’s also had a significant impact on the .NET community and been very well received.

One key benefit of NuGet is that it provides a central location for people to discover and easily acquire open source libraries. This alone helps many open source libraries gain visibility. The NuGet gallery now has over 4,000 unique packages and 3.4 million package downloads.

Scott Hanselman noted another impact I hadn’t considered in his DevReach 2011 keynote. To understand his observation, I need to provide a bit of background.

Back in April, Microsoft released the ASP.NET MVC 3 Tools update. This added support for pre-installed NuGet packages in the ASP.NET MVC 3 project templates so that projects created from these templates already include dependent libraries installed as NuGet packages rather than as flat files in the project. This allows developers who create a project from a template to upgrade these libraries after the project has been created via NuGet.

NuGet 1.5 adds this support for pre-installed packages to any project template that wants it. In the preview for ASP.NET MVC 4, we included libraries such as Modernizr, jQuery, jQuery UI, jQuery Validation, and Knockout in this manner. We expect other project templates to take advantage of this in the future as well.

The interesting observation Hanselman had in his keynote is that this is an example of Microsoft giving equal billing to these open source libraries as it does to its own. When you create an ASP.NET MVC 4 project, your project includes Microsoft packages alongside 3rd party OSS packages all installed in the same manner.

Additionally, the way NuGet itself was developed is also important. NuGet is an Apache v2 open source project that accepts contributions from the community. Microsoft gave it to the Outercurve Foundation and continues to supply the project with employee contributors.

Orchard Project

Before there was NuGet, there was Orchard. Orchard is an open source CMS that was started at Microsoft but was also contributed to the Outercurve Foundation.

What’s really impressive about Orchard is the amount of community involvement they’ve fostered. They’ve set up a governance structure consisting of an elected steering committee so that it’s truly a community run project.

They recently surpassed 1 million module downloads from their online gallery. Modules are extensions to Orchard that are installable directly from within the Orchard admin.


Umbraco is an independent open source CMS that has a huge following and a strong community. They’ve been around for a while, long before 2011. But in 2011, Microsoft hosted the redesigned site using Umbraco.

Micro ORMS

For lack of a better term, I think 2011 was the year of the micro-ORM. While many refer to these libraries as micro-ORMs, they’re not technically ORMs; they’re simpler data access libraries. A non-comprehensive list of the ones that made a big splash:

If you’re interested in seeing a more comprehensive list of Micro ORMS with source code examples of usage (Nice!), check out this blog post by James Hughes.

Micro Web Frameworks and OWIN

Like pairing a good beer with the right steak, lightweight micro web frameworks pair nicely with micro-ORMs. It’s interesting that both of these picked up quite a bit this past year.

Some that caught my attention this year are:

  • Named after Sinatra’s daughter, there’s the Nancy micro web framework.
  • FubuMVC is billed as the project that gets out of your way.
  • OpenRasta is a resource oriented web framework for building REST services.

Again, James Hughes provides a comparative list of micro-web frameworks complete with source code examples.

With the proliferation of web frameworks as well as lightweight web servers such as Kayak and Manos de Mono, the need to decouple the one from the other arose. This is where OWIN stepped into the gap.

OWIN stands for Open Web Interface for .NET. It is a project inspired by Rack, a Ruby Webserver interface, meant to decouple web servers from the web application frameworks that run on them.

This project was started as a completely grass roots project in 2011 but has seen amazing pick-up from the community and I believe will have a big impact in 2012.


Miguel de Icaza wrote a monster blog post about the year that he and the Mono (and Xamarin) folks have had in 2011. His post inspired me to write this less monstrous one. It’s a great post, and it’s really inspiring to see how they’ve emerged from the ashes of the great Novell layoff of 2011 to have a great year.

In the following image, you can see me teaching Miguel everything he knows about software development and open source while Scott acts surprised.

What really caught my interest in his post was the note about Microsoft using Mono and Unity3D to build Kinectimals for iOS systems such as the iPad. 2011 seems to be the year of pigs flying for Microsoft.

Xamarin is doing a great job of bringing Mono, and consequently C# and open source to just about every device imaginable!

Open Source Fest at Mix 11

Mix is one of my favorite conferences and I’ve attended every single one. And it has nothing to do with it being in Las Vegas, though that doesn’t hurt one bit.

This year was special due to the efforts of John Papa (whose name makes me wonder if he ever goes all Biggie Smalls on people and sings “I love it when you call me John Papa”). John put together the Open Source Fest at Mix.

This was an event where around 50 projects had stations in a large open room where they could represent their project and talk to attendees. The atmosphere was electric as folks went from table to table learning about useful software directly from the folks who built it.

This is where projects such as Glimpse got noticed and really took off. I’d love to see more of this sort of thing at conferences.

Azure SDKs and GitHub

As I recently wrote on the GitHub blog, Microsoft is actively developing a set of Azure SDKs for multiple platforms (not just .NET) on GitHub. All of these libraries are Apache v2 licensed.

screenshot of the azure sdk

It’s great to see Microsoft not only releasing source code under an open source license, but actively developing it in the open and ostensibly accepting contributions from the public. I look forward to seeing more of this in the future.


Last but not least, there’s GitHub. Full disclaimer: I’m an employee of GitHub so naturally my opinion is totally biased. But a bias doesn’t necessarily mean an opinion is wrong.

What I love about GitHub is that just about everybody is there. GitHub hosts a huge number of open source projects, including a large number of the important ones you’ve heard of. Quantity alone isn’t a sign of quality, but it can create network effects. When a site has such a large community, hosting a project there makes it easier to attract contributors because there’s such a large pool to draw from.

I’ve seen this benefit .NET open source projects first hand. Since moving some of my projects there, I’ve received more pull requests. Small independent projects such as JabbR have really attracted a passionate community at GitHub with large numbers of external contributions. Most of the credit must go to the efforts of the great project leads who’ve worked hard to foster a great community, but I think they’d agree that hosting on GitHub certainly makes it easier and more enjoyable.

What did I miss?

Did I miss anything significant in your opinion? Let me know in the comments. What do you think will happen in 2012? Does the number 2012 look like a science fiction year to you? Because it does to me. I can’t believe it’s just about here already. Have a great holiday!

UPDATE: Egg on my face. This post was meant to list a few highlights, not to be a comprehensive list of everything that happened in open source in the .NET space. Even so, in my holiday-infused malaise, I was negligent in omitting several highlights. I apologize and have updated the post to reflect a few more significant events. Let me know if I missed some obvious ones.

git, code 0 comments suggest edit

My last post covered how to improve your Git experience on Windows using PowerShell, Posh-Git, and PsGet. However, a commenter reminded me that a lot of folks don’t know how to get Git for Windows in the first place.

And once you do get Git set up, how do you avoid getting prompted all the time for your credentials when you push changes back to your repository (or pull from a private repository)?

I’ll answer both of those questions in this post.

Install msysgit

The first step is to install Git for Windows (aka msysgit). The full installer for msysgit 1.7.8 is here. For a detailed walkthrough of the setup steps, check out GitHub’s Windows Setup walkthrough. It’s pretty straightforward. That’ll put Git.exe in your path so that Posh-Git will work.

Bam! Done! On to the second question. Make sure you set up your SSH keys before moving to the second section.

Using SSH with Posh-Git

One annoyance with Git on Windows is when pushing changes to a repository (or pulling from a private repository), you have to constantly enter your password if you cloned the repository using HTTPS.

Likewise, if you clone with SSH, you also need to enter your passphrase each time. Fortunately, a little program called ssh-agent can securely save your pass phrase (and consequently your sanity) for the session and supply it when needed.

Update: Mike Chaliy just fixed PsGet so it always grabs the latest version of Posh-Git. If you installed Posh-Git before today using PsGet, you’ll need to update Posh-Git by running the following command:

Install-Module Posh-Git -Force

Unfortunately, at the time that I write this, the version of Posh-Git in PsGet does not support starting an SSH Agent. The good news is, the latest version of Posh-Git direct from their GitHub repository does support SSH Agent.

Since the previous step installed git.exe on my machine, all I needed to do to get the latest version of Posh-Git was to clone the repository.

git clone

This creates a folder named “posh-git” in the directory where you ran the command. I then copied all the files in that folder into the place where PsGet installed posh-git. On my machine, that was:


When I restarted my PowerShell prompt, it told me it could not start SSH Agent.


It turns out that it was not able to find the “ssh-agent.exe” executable. That file is located in C:\Program Files (x86)\Git\bin, but that folder isn’t automatically added to your PATH by msysgit.

If you don’t want to add this path to your system PATH, you can update your PowerShell profile script so it only applies to your PowerShell session. Here’s the change I made.

$env:path += ";" + (Get-Item "Env:ProgramFiles(x86)").Value + "\Git\bin"

On my machine that script is at:


The next time I opened my PowerShell prompt, I was greeted with a request for my pass phrase.


After typing in my super secret pass phrase, once at the beginning of the session, I was set. I could clone some private repositories and push some changes without having to specify my pass phrase each time. Nice. Secure. Convenient.

The Start-SshAgent Command

The reason that I get the ssh-agent prompt when starting up PowerShell is because when I installed Posh-Git, it updated my profile to load in their example profile:


That profile script is calling the Start-SshAgent command which is included with Posh-Git. If you don’t like their profile example, you can manually start ssh-agent by calling the Start-SshAgent command.

code, git 0 comments suggest edit

I’m usually not one to resort to puns in my blog titles, but I couldn’t resist. Git it? Git it? Sorry.

Ever since we introduced PowerShell into NuGet, I’ve become a big fan. I think it’s great, yet I’ve heard from so many other developers that they have no time to try it out. That it’s “on their list” and they really want to learn it, but they just don’t have the time.

But here’s the dirty little secret about PowerShell. This might get me banned from the PowerShell junkie secret meet-ups (complete with secret handshake) for leaking it, but here it is anyways. You don’t have to learn PowerShell to get started with it and benefit from it!

Seriously. If you use a command line today, and switch to PowerShell instead, pretty much everything you do day to day still works without changing much of your workflow. There might be the occasional hiccup here and there, but not a whole lot. And over time, as you use it more, you can slowly start accreting PowerShell knowledge and start to really enjoy its power. But on your time schedule.

UPDATE: Before you do any of this, make sure you have Git for Windows (msysgit) installed. Read my post about how to get this set up and configured.

There’s a tiny bit of one time setup you do need to remember to do:

Set-ExecutionPolicy RemoteSigned

Note: Some folks simply use Unrestricted instead of RemoteSigned. I tend to play it safe until shit breaks. So with that bit out of the way, let’s talk about the benefits.


If you do any work with Git on Windows, you owe it to yourself to check out Posh-Git. In fact, there’s also Posh-HG for mercurial users and even Posh-Svn for those so inclined.

Once you have Posh-Git loaded up, your PowerShell window lights up with extra information and features when you are in a directory with a git repository.


Notice that my PowerShell prompt includes the current branch name as well as information about the current status of my index. I have 2 files added to my index ready to be committed.

More importantly though, Posh-Git adds tab expansions for Git commands as well as your branches! The following animated GIF shows what happens when I hit the tab key multiple times to cycle through my available branches. That alone is just sublime.


Install Posh-Git using PsGet

You’re ready to dive into Posh-Git now, right? So how do you get it? Well, you could follow all those pesky directions on the GitHub site. But we’re software developers. We don’t follow no stinkin’ list of instructions. It’s time to AWW TOE MATE!

And this is where a cool utility named PsGet comes along. There are several implementations of “PsGet” around, but the one I cover here is so dirt simple to use I cried the first time I used it.

To use posh-git, I only needed to run the following two commands:

(new-object Net.WebClient).DownloadString("") | iex
install-module posh-git

Here’s a screenshot of my PowerShell window running the commands. Once you run them, you’ll need to close and re-open the PowerShell console for the changes to take effect.

Both of these commands are pulled right from the PsGet homepage. That’s it! Took me no effort to do this, but suddenly using Git just got that much smoother for me.

Many thanks to Keith Dahlby for Posh-Git and Mike Chaliy for PsGet. Now go git it!

mvc, tdd, code, razor 0 comments suggest edit

Given how central JavaScript is to many modern web applications, it is important to use unit tests to drive the design and quality of that JavaScript. But I’ve noticed that a lot of developers don’t know where to start.

There are many test frameworks out there, but the one I love is QUnit, the jQuery unit test framework.


Most of my experience with QUnit is writing tests for a client script library such as a jQuery plugin. Here’s an example of one QUnit test file I wrote a while ago (so you know it’s nasty).

You’ll notice that the entire set of tests is in a single static HTML file.
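If you haven’t seen one before, such a file is entirely self-contained: it references QUnit, the script under test, and the test script, and opening it in a browser runs the whole suite. A sketch of the typical shape of a QUnit 1.x page follows; the file names here are illustrative, not the ones from my plugin’s test file:

```html
<!-- A sketch of a self-contained static QUnit test page. -->
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="qunit.css" />
    <script src="jquery.min.js"></script>
    <script src="qunit.js"></script>
    <!-- The library under test, then the tests themselves. -->
    <script src="mylibrary.js"></script>
    <script src="mylibrary.tests.js"></script>
  </head>
  <body>
    <!-- QUnit renders pass/fail results into these elements. -->
    <h1 id="qunit-header">My Library Tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
  </body>
</html>
```

The downside of this approach is that every such file repeats the same boilerplate, which is exactly the problem the layout approach below addresses.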

I saw a recent blog post by Jonathan Creamer that uses ASP.NET MVC 3 layouts for QUnit tests. It’s a neat approach that consolidates all the QUnit boilerplate into a single layout page. This allows you to have multiple test files without duplicating that boilerplate.

But there was one thing that nagged me about it. For each new set of tests, you need to add an action method and a corresponding view. ASP.NET MVC does not allow rendering a view without a controller action.

Controller-Less Views

The idea of controller-less views is one folks have tossed around, but all sorts of design issues come up when you consider it. For example, how do you request such a view directly? If you allow that, what about a view that is intended to be rendered only by a controller action? Now you have two ways to access that view, one of which is probably incorrect. And so on.

However, there is another lesser known framework (at least, lesser known to ASP.NET MVC developers) from the ASP.NET team that pretty much provides this ability!

ASP.NET Web Pages with Razor Syntax

It’s a product called ASP.NET Web Pages that is designed to appeal to developers who prefer an approach to web development that’s more like PHP or classic ASP.

Aside: I’d like to go on record and say I hated that name from the beginning because it causes so much confusion. Isn’t everything I do in ASP.NET a web page?

A Web Page in ASP.NET Web Pages (see, confusing!) uses Razor syntax inline to render out the response to a request. ASP.NET Web Pages also support layouts. This means we can create an approach very similar to Jonathan’s, but we only need to add one file for each new set of tests. Even better, this approach works for both ASP.NET MVC 3 and ASP.NET Web Pages.

The Code

The code to do this is straightforward. I just created a folder named test which will contain all my unit tests. I added a _PageStart.cshtml file to this directory that sets the layout for each page. Note that this is equivalent to the _ViewStart.cshtml page in ASP.NET MVC.

@{
    Layout = "_Layout.cshtml";
}

The next step is to write the layout file, _Layout.cshtml. This contains the QUnit boilerplate along with a place holder (the RenderBody call) for the actual tests.

<!DOCTYPE html>
<html>
    <head>
        <link rel="stylesheet" href="/content/qunit.css" />
        <script src="/Scripts/jquery-1.7.1.min.js"></script>
        <script src="/scripts/qunit.js"></script>
    </head>
    <body>
        @RenderSection("Javascript", false)
        @* Tests are written in the body. *@
        <h1 id="qunit-header">
          @(Page.Title ?? "QUnit tests")
        </h1>
        <h2 id="qunit-banner"></h2>
        <h2 id="qunit-userAgent"></h2>
        <ol id="qunit-tests"></ol>
        @RenderBody()
        <a href="/tests">Back to tests</a>
    </body>
</html>

And now, one or more files that contain the actual test. Here’s an example called footest.cshtml.

@{
  Page.Title = "FooTests";
}
@if (false) {
  // OPTIONAL! QUnit script (here for intellisense)
  <script src="/scripts/qunit.js"> </script>
}

<script src="/scripts/calculator.js"></script>

<script type="text/javascript">
  $(function () {
    // calculator_tests.js
    module("A group of tests gets a module");
    test("First set of tests", function () {
      var calc = new Calculator();
      ok(calc, "My calculator is a O.K.");
      equals(calc.add(2, 2), 4, "shit broken");
    });
  });
</script>

You’ll note that I have this funky if (false) block in the code. That’s to workaround a current limitation in Razor so that JavaScript Intellisense for QUnit works in this file. If you don’t care for Intellisense, you don’t need it. I hope that in the future, Razor will pick up the script in the layout and you won’t need this either way.
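As an aside, the test above loads /scripts/calculator.js, which I don’t show in this post. For reference, a minimal sketch of what that script might contain is below; only the Calculator constructor and the add method that the QUnit assertions exercise are taken from the test code, and everything else is an assumption:

```javascript
// calculator.js - a minimal sketch of the script under test.
// Only the members exercised by the QUnit test above are included.
function Calculator() {
}

// Adds two numbers; this is the behavior the equals() assertion checks.
Calculator.prototype.add = function (x, y) {
  return x + y;
};
```

With a file like this in place, the `equals(calc.add(2, 2), 4, ...)` assertion in the test passes.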

With this in place, to add a new test with the proper QUnit boilerplate is very easy. Just add a .cshtml file, set the title for the tests, and then add the script you’re testing and the test script into the same file.

The last step is to create an index into all the tests. I wrote the following index.cshtml file that creates a list of links for each set of tests. It simply iterates through every test file and generates a link. One nifty little perk of using ASP.NET Web Pages is you can leave off the extension when you request the file.

@using System.IO;
@{
  Layout = null;

  var files = from path in
                Directory.GetFiles(Server.MapPath("./"), "*.cshtml")
              let fileName = Path.GetFileNameWithoutExtension(path)
              where !fileName.StartsWith("_")
                && !fileName.Equals("index", StringComparison.OrdinalIgnoreCase)
              select fileName;
}
<!DOCTYPE html>
<html>
  <body>
    <h1>QUnit tests</h1>
    <ul>
      @foreach (var file in files) {
        <li><a href="@file">@file</a></li>
      }
    </ul>
  </body>
</html>

The output of this page isn’t pretty, but it works. When I navigate to /test I see a list of my test files:


Here’s the contents of my test folder when I’m done with all this.



I personally haven’t used this approach yet, but I think it could be a nice approach if you tend to have more than one QUnit test file in your projects and you tend to customize the boilerplate for those tests.

I tend to just use a static HTML file, but so far, most of my QUnit tests are for a single JavaScript library. But this approach might come in handy when I get around to testing the JavaScript in the NuGet gallery.