code

One of the side projects I’ve been working on lately is helping to shepherd the Semantic Versioning specification (SemVer) along to its 2.0.0 release. I want to thank everyone who sent pull requests and engaged in thoughtful, critical, spirited feedback about the spec. Your involvement has made it better!

I also want to thank Tom for creating SemVer in the first place and trusting me to help move it along.

I’ve mentioned SemVer in the past as it relates to NuGet. The 2.0.0 release of SemVer addresses some of the issues I raised.

What’s Changed?

Not too much has changed. Most of the changes are clarifications.

Build metadata

Perhaps the biggest change is the addition of optional build metadata (what we used to call a build number). This simply allows you to add a bit of metadata to a version in a manner that’s compliant with SemVer.

The metadata does not affect version precedence. It’s analogous to a code comment.

It’s useful for internal package feeds and for being able to tie a specific version to some mechanism that generated it.

For existing package managers that choose to be SemVer 2.0 compliant, the logic change needed is minimal. Instead of reporting an error when encountering a version with build metadata, all they need to do is ignore or strip the build metadata. That’s pretty much it.
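To make that concrete, here’s a minimal sketch of what “strip the build metadata” amounts to. The helper name is my own, not something from the spec; the only assumption is the spec’s “+” delimiter, after which everything is build metadata.

```csharp
using System;

public static class VersionMetadata
{
    // Per SemVer 2.0.0, build metadata begins at the first '+'.
    // It never affects precedence, so a package manager that wants
    // minimal compliance can simply cut it off.
    public static string StripBuildMetadata(string version)
    {
        var plusIndex = version.IndexOf('+');
        return plusIndex < 0 ? version : version.Substring(0, plusIndex);
    }

    public static void Main()
    {
        Console.WriteLine(StripBuildMetadata("1.0.0+20130313144700"));
        Console.WriteLine(StripBuildMetadata("1.0.0-rc.1+exp.sha.5114f85"));
    }
}
```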

Some package managers may choose to do more with it (for internal feeds for example) but that’s up to them.

Pre-release identifiers

Pre-release labels have a little more structure to them now. For example, they can be separated into identifiers using the “.” delimiter, and identifiers that only contain digits are compared numerically instead of lexically. That way, 1.0.0-rc.1 < 1.0.0-rc.11, as you might expect. See the specification for full details.
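As an illustration, pre-release precedence might be compared like this. This is my own sketch of the rules, not the spec’s reference implementation: numeric identifiers compare numerically, alphanumeric ones compare in ASCII order, numeric identifiers rank lower than alphanumeric ones, and a shorter identifier list ranks lower when all preceding identifiers are equal.

```csharp
using System;
using System.Linq;

public static class PreReleaseComparer
{
    // Compares two pre-release labels (the part after the '-')
    // according to the SemVer 2.0.0 precedence rules.
    public static int Compare(string left, string right)
    {
        var a = left.Split('.');
        var b = right.Split('.');
        for (int i = 0; i < Math.Min(a.Length, b.Length); i++)
        {
            bool aNumeric = a[i].All(char.IsDigit);
            bool bNumeric = b[i].All(char.IsDigit);
            if (aNumeric && bNumeric)
            {
                // Identifiers of only digits compare numerically.
                int result = long.Parse(a[i]).CompareTo(long.Parse(b[i]));
                if (result != 0) return result;
            }
            else if (aNumeric != bNumeric)
            {
                // Numeric identifiers always have lower precedence.
                return aNumeric ? -1 : 1;
            }
            else
            {
                // Alphanumeric identifiers compare in ASCII sort order.
                int result = string.CompareOrdinal(a[i], b[i]);
                if (result != 0) return result;
            }
        }
        // A larger set of identifiers has higher precedence.
        return a.Length.CompareTo(b.Length);
    }

    public static void Main()
    {
        Console.WriteLine(Compare("rc.1", "rc.11") < 0);
        Console.WriteLine(Compare("alpha", "alpha.1") < 0);
    }
}
```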


The rest of the changes to the specification are concerned with clarifications and resolving ambiguities. For example, we clarified that leading zeroes are not allowed in the Major, Minor, or Patch version nor in pre-release identifiers that only contain digits. This makes a canonical form for a version possible.

If you find an ambiguity, feel free to report it.

What’s Next?

As SemVer matures, we expect the specification to become a little more formal in nature as a means of removing ambiguities. One such effort underway is to include a BNF grammar for the structure of a version number in the spec. This should hopefully be part of SemVer 2.1.

code

Code is unforgiving. As the reasonable human beings that we are, when we review each other’s code, we both know what the author intends. But computers can’t wait to Well, Actually all over that code like a lonely Hacker News commenter:

Well Actually, Dave. I’m afraid I can’t do that.

HAL 9000, paraphrased from 2001: A Space Odyssey

As an aside, imagine the post-mortem review of that code!

Code review is a tricky business. Code is full of hidden mines that lie dormant while you test, only to explode in a debris field of stack traces at the most inopportune time – when it’s in the hands of your users.

The many times I’ve run into such mines just reinforce how important it is to write code that is intention-revealing and to make sure assumptions are documented via asserts.

Such devious code is often the most innocuous looking code. Let me give one example I ran into the other day. I was fortunate to defuse this mine while testing.

This example makes use of the Enumerable.ToDictionary method that turns a sequence into a dictionary. You supply an expression to produce a key for each element. In this example, loosely based on the actual code, I am using the CloneUrl property of Repository as the key of the dictionary.

IEnumerable<Repository> repositories = GetRepositories();
var repositoriesByCloneUrl = repositories.ToDictionary(r => r.CloneUrl);

It’s so easy to gloss over this line during a code review and not think twice about it. But you probably see where this is going.

While I was testing I was lucky to run into the following exception:

An item with the same key has already been added.

Doh! There’s an implicit assumption in this code – that two repositories cannot have the same CloneUrl. In retrospect, it’s obvious that’s not the case.

Let’s simplify this example.

var items = new[]
{
    new { Id = 1 },
    new { Id = 2 },
    new { Id = 2 },
    new { Id = 3 }
};
items.ToDictionary(item => item.Id); // throws: duplicate key 2

This example attempts to create a dictionary of anonymous types using the Id property as a key, but we have a duplicate, so we get an exception.

What are our options?

Well, it depends on what you need. Perhaps what you really want is a dictionary where the value contains every item with the given key. Or perhaps you only care about the first value for a given key and want to ignore the rest. In either case, the Enumerable.GroupBy method comes in handy.

In the following example, we use this method to group the items by Id. This results in a sequence of IGrouping elements, one for each Id. We can then take advantage of a second parameter of ToDictionary and simply grab the first item in the group.

items.GroupBy(item => item.Id)
  .ToDictionary(group => group.Key, group => group.First());

This feels sloppy to me. There is too much potential for this to cover up a latent bug. Why should the other items be ignored? Perhaps, as in my original example, it’s fully normal to have more than one element for the key and you should handle that properly. Instead of grabbing the first item from the group, we retrieve an array.

items.GroupBy(item => item.Id)
  .ToDictionary(group => group.Key, group => group.ToArray());

In this case, we end up with a dictionary of arrays.

UPDATE: Or, as Matt Ellis points out in the comments, you could use the Enumerable.ToLookup method. I should have known such a thing would exist. It’s exactly what I need for my particular situation here.
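For reference, here’s a small sketch of how ToLookup sidesteps the duplicate-key problem entirely; the sample data mirrors the earlier example.

```csharp
using System;
using System.Linq;

public static class LookupExample
{
    public static void Main()
    {
        var items = new[]
        {
            new { Id = 1 },
            new { Id = 2 },
            new { Id = 2 },
            new { Id = 3 }
        };

        // A lookup maps each key to a sequence of elements, so
        // duplicate keys never throw. A missing key yields an empty
        // sequence rather than a KeyNotFoundException.
        var lookup = items.ToLookup(item => item.Id);

        Console.WriteLine(lookup[2].Count());  // both items with Id = 2
        Console.WriteLine(lookup[42].Count()); // empty, no exception
    }
}
```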

What if having more than one element with the same key is not expected and should throw an exception? Well, you could just use the normal ToDictionary method, since it will throw an exception. But that exception is unhelpful; it doesn’t include the information we probably want. For example, you might want to know which key was already added, as the following demonstrates:

items.GroupBy(item => item.Id)
    .ToDictionary(group => group.Key, group =>
    {
        try
        {
            return group.Single();
        }
        catch (InvalidOperationException)
        {
            throw new InvalidOperationException(
                "Duplicate item with the key '" + group.First().Id + "'");
        }
    });

In this example, if a key has more than one element associated with it, we throw a more helpful exception message.

System.InvalidOperationException: Duplicate item with the
key '2'

In fact, we can encapsulate this into our own better extension method.

public static Dictionary<TKey, TSource> ToDictionaryBetter<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)
{
    return source.GroupBy(keySelector)
        .ToDictionary(group => group.Key, group =>
        {
            try
            {
                return group.Single();
            }
            catch (InvalidOperationException)
            {
                throw new InvalidOperationException(
                    string.Format("Duplicate item with the key '{0}'",
                        keySelector(group.First())));
            }
        });
}

Code mine mitigated!

This is just one example of a potential code mine that might go unnoticed during a code review if you’re not careful.

Now, when I review code and see a call to ToDictionary, I make a mental note to verify the assumption that the key selector must never lead to duplicates.

When I write such code, I’ll use one of the techniques I mentioned above to make my intentions more clear. Or I’ll embed my assumptions into the code with a debug assert that proves that the items cannot have a duplicate key. This makes it clear to the next reviewer that this code will not break for this reason. This code still might not open the hatch, but at least it won’t have a duplicate key exception.
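For example, such an assert might look like this. The extension method and its name are hypothetical; the point is that a debug build will now loudly announce a violated assumption before ToDictionary ever throws.

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public class Repository
{
    public string CloneUrl { get; set; }
}

public static class RepositoryExtensions
{
    // Hypothetical helper: documents the no-duplicates assumption
    // with an assert so the next reviewer sees the intent. The assert
    // only runs in DEBUG builds; release builds pay no cost.
    public static Dictionary<string, Repository> ToDictionaryByCloneUrl(
        this IEnumerable<Repository> repositories)
    {
        var list = repositories.ToList();
        Debug.Assert(
            list.Select(r => r.CloneUrl).Distinct().Count() == list.Count,
            "CloneUrl must be unique across repositories.");
        return list.ToDictionary(r => r.CloneUrl);
    }
}
```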

If I search through my code, I will find many other examples of potential code mines. What are some examples that you can think of? What mines do you look for when reviewing code?

personal, empathy, parenting

This post is a departure from my typical software related topics, but I think you’ll find parallels with management and dealing with software developers.

Parenting is a skill like any other – it can be improved (for some more than others, amirite?!).

Look, I’m not trying to claim I’m the world’s greatest dad. But I was given a coffee mug with that claim by my kids. I don’t mean to brag, but I’m pretty sure they did a quantitative exhaustive analysis of all dads before conferring that award to me because that’s just how I raised them. Right kids?!

But I digress.

When my son was still a very young toddler, my wife and I took advantage of a Microsoft benefit that paid for parenting classes (Many Microsoft employees who are parents have no idea this benefit exists). We attended a series on “Reflective Parenting.” It was an amazing learning experience that taught us this idea that parenting is a skill like any other.

It’s a strange conceit of many parents that because they can reproduce, they are suddenly imbued with unassailable parenting skills.

As Richard Stallman once remarked, perhaps callously,

It doesn’t take special talents to reproduce—even plants can do it. On the other hand, contributing to a program like Emacs takes real skill. That is really something to be proud of.

It helps more people, too.

And you never have to clean up poo from an Emacs blowout!

He’s right about one part. Even plants can reproduce. But reproducing is the easy part. Plants can’t parent.

Parenting is a subject that trends towards being heavy on tradition. “If it was good enough for me, it’s good enough for my kids.” But that’s not how progress is made, my friends.

Despite my megalomaniacal tendencies, I like to think I turned out ok so far. My parents did a pretty good job. Does that mean I can’t strive to do even better? It’s worth a try. So in this post, I’ll explore what SCIENCE brings to bear on the subject. It may seem weird to invoke science in a subject as personal and emotional as parenting. But the scientific method is effective, even on a personal scale.


Note that the focus here is on core principles and less on specifics. I’m not going to tell you to spank or not spank your child (because we know that’ll end in a shit storm debate).

This post will focus on principles to consider when making your own decisions about these things. Because in the end, if you are a parent, it is ultimately up to you what you do…within reason.

That’s one reason I try to embrace the “no judging” philosophy towards other parents. Each parent has a different situation and a different background. I may offer ideas that I think are helpful, but I won’t judge. Unless you tend to drive off with your five-week-old child on top of your car. I might judge just a teensy weensy bit then.

Lessons from Reflective Parenting

The about page for the Center for Reflective Parenting says it was founded “…in response to groundbreaking research in child development and the study of the neurobiology of the developing mind showing that the single best way to positively impact the attachment relationship is to increase a parent’s capacity to reflect on their relationship with their child – to think about the meaning that underlies behavior.”

There’s a lot of science and research underlying the core precepts of this approach. But when you hear it, it doesn’t sound academic at all. In fact, it sounds a lot like common sense.

There are three core lessons I took from the classes.


Empathy

The first is to work on developing empathy and understanding for your child. We learned a lot about what children are capable of developmentally at certain ages. For example, at very young ages, children aren’t very good at understanding cause and effect.

This allows you to develop more appropriate expectations and responses to the things your child may do. At some ages, you just can’t expect them to respond to reason, for example. (By their teenage years, they can respond to reason, they just choose not to. It’s different.)

Self Control

The second, and perhaps more important lesson for me personally, is that good parenting is more about controlling yourself than your child. This is because children reflect the behavior of their parents.

For example, we’ve all been there, in the car, with the kids loudly misbehaving, when a parent gets fed up, blows up, and screams at the kids. I’ve been there.

In that moment the parent is not disciplining. The parent is only momentarily making him or herself feel better. But this teaches the kids that the best way to handle a stressful situation is to lose your shit. Discipline comes from the calm moments when a parent is very considered and in control of his or her actions. Remember, kids don’t do what you tell them to do. They do as you do.

In such situations, the class taught us to attempt to empathize with what the children might be experiencing and base our actions on that. If we can’t help but to lose our temper, it’s OK to separate ourselves from the situation. For example, in extreme situations, you might pull over, step out of the car, and let the kids scream their heads off while you (out of earshot) calm your nerves.


Repair

Now this last point is the most important lesson. Parents, we are going to fuck up. We’re going to do it royally. Accept it. Forgive yourself. And then repair the situation.

I’ve lost my shit plenty of times. I’m pretty sure I did it twice this past week. It doesn’t make me a bad parent, though I feel that way at the moment. What would make me a bad parent is if I doubled down on my anger and never apologized to the kids and never tried to repair whatever damage I may have caused.

Some parents believe in the doctrine of parental infallibility. Never let them see you sweat and never admit fault to your children lest they see an opening and walk all over you.

But when you consider the principle that kids do as you do, I don’t think this doctrine stands up to scrutiny. I hope you want your children to be able to admit when they were wrong and know how to deliver a sincere apology. Teach by living the example.

The Economist’s Guide to Parenting

Many years after the reflective parenting class, I listened to this outstanding Freakonomics podcast episode on parenting.

I know what you’re thinking when you read the title of this podcast. You’re thinking what the **** — economists? What can economists possibly have to say about something as emotional, as nuanced, as humane, as parenting? Well, let me say this: because economists aren’t necessarily emotional (or, for that matter, all that nuanced or humane), maybe they’re exactly the people we need to sort this through. Maybe.

As you might expect, it’s hard to conduct a double blind laboratory study of raising kids. Are you going to separate twins at birth and knowingly give one to shitty parents and another to wonderful parents to examine the effects? Cool idea bro, but…


But there are such things as natural experiments. There were studies done of large groups of twins separated at birth and raised by different adoptive parents.

The striking result of the studies is that what the parents did had very little influence in how kids ended up. Letting kids watch as much TV as they want? Restricting TV? Helicopter parenting? Piano and violin lessons? Sorry Tiger Mom, it made very little difference.

Over and over again, guess what made the difference. It wasn’t what the parents did; it was who they were. Educated parents ended up with educated kids. As far as I could tell, the study didn’t really get into cause and effect much. For example, is it because educated parents tend to do the things that lead to educated kids? Inconclusive. But they did find that many of the practices of “helicopter parents,” such as music lessons, had very little effect on the future success and happiness of the child.

But the studies reveal one thing parents do that has a strong correlation with how their progeny end up: how they treat wait staff. Those who were rude to waiters and waitresses ended up with rude children. Those who were kind and tipped well ended up with kind kids.

See a pattern here?

Kids aren’t affected as much by what you tell them and what you teach them as much as what you do. If you’re curious and love learning, your kids are more likely to be infused with a similar passion for learning.

On one level, this is encouraging. You don’t need to schedule every hour of your children’s free time with Latin and theremin lessons for them to turn out well.

On the other hand, it’s also very challenging. If you’re a naturally awful misanthropic person it’s much harder to change yourself than to simply pay for classes.

I can be pretty lazy. But after the Freakonomics podcast, I started making an effort to do one simple thing every morning. Make the bed before I left the room. Honestly, I didn’t care so much if the bed was made, but I did like how a clean room made me feel less disheveled as I started my day.

And here’s the amazing thing. My son, who’s only five, will now sometimes come into the room to make our bed. I never asked him nor told him to. He saw me doing it and he’s reflecting my behavior. It’s really rather striking.

Go Forth And Parent

In the beginning I mentioned how parenting research applies to software developers. I wasn’t making a comparison of software developers to children (though if the description fits…). It’s more a comment on the idea that parenting is a lot like leadership. Like parents, leaders lead by doing, not by telling others what to do.

The good news is what you do as a parent has little effect on how your kids end up. The best thing you can do is focus on being the type of person you want your kids to be.

However, what you do in the interim can affect how well you cope with being a parent and all the travails that come with it. It also may affect what your relationship with your children will look like down the road. It’d kind of suck to raise wonderful successful kids who want nothing to do with you. So don’t be awful to them.

This is why I still think it’s worthwhile working on improving parenting skills. It’s less about affecting your kids success as adults and more about building a good lasting relationship with them.

Like any skill, there’s always new evidence coming in that might cause you to reevaluate how you parent. For example, here’s a list of ten things parents are often dead wrong about.

Perhaps you factor those in and maybe improve your technique. Maybe not. The key thing is, don’t sweat it too much. Ultimately, what we all want is for our kids to lead fulfilled and happy lives. This is one reason I optimize for my own happiness so they hopefully reflect that.

Happy parenting!


code, open source, github

In some recent talks I make a reference to Conway’s Law named after Melvin Conway (not to be confused with British Mathematician John Horton Conway famous for Conway’s Game of Life nor to be confused with Conway Twitty) which states:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

Many interpret this as a cynical jibe at software management dysfunction. But this was not Melvin’s intent. At least it wasn’t his only intent. On his website, he quotes from Wikipedia, emphasis mine:

Conway’s law was not intended as a joke or a Zen koan, but as a valid sociological observation. It is a consequence of the fact that two software modules A and B cannot interface correctly with each other unless the designer and implementer of A communicates with the designer and implementer of B. Thus the interface structure of a software system necessarily will show a congruence with the social structure of the organization that produced it.

I savor Manu Cornet’s visual interpretation of Conway’s Law. I’m not sure how Manu put this together, but it’s not a stretch to suggest that the software architectures these companies produce might lead to these illustrations.


Having worked at Microsoft, the one that makes me laugh the most is the Microsoft box. Let’s zoom in on that one. Perhaps it’s an exaggerated depiction, but in my experience it’s not without some basis in truth.


The reason I mention Conway’s Law in my talks is to segue to the topic of how GitHub the company is structured. It illustrates why GitHub.com is structured the way it is.

So how is GitHub structured?

Well Zach Holman has written about it in the past where he talks about the distributed and asynchronous nature of GitHub. More recently, Ryan Tomayko gave a great talk (with associated blog post) entitled Your team should work like an open source project.

By far the most important part of the talk — the thing I hope people experiment with in their own organizations — is the idea of borrowing the natural constraints of open source software development when designing internal process and communication.

GitHub in many respects is structured like a set of open source projects. This is why GitHub.com is structured the way it is. It’s by necessity.

Like the typical open source project, we’re not all in the same room. We don’t work the same hours. Heck, many of us are not even in the same time zones. We don’t have top-down hierarchical management. This explains why GitHub.com doesn’t focus on the centralized tools or reports managers often want as a means of controlling workers. It’s a product that is focused more on the needs of the developers than on the needs of executives. It’s a product that allows GitHub itself to continue being productive.

Apply Conway’s Law

So if Conway’s Law is true, how can you make it work to your advantage? Well, by restating it, as Jesse Toth does according to this tweet by Sara Mei:

Conway’s Law restated by @jesse_toth: we should model our teams and our communication structures after the architecture we want.  #scotruby

Conway’s Law in its initial form is passive. It’s an observation of how software structures tend to follow social structures. So it only makes sense to move from observer to active participant and change the organizational structures to match the architecture you want to produce.

Do you see the effects of Conway’s Law in the software you produce?

code

In a recent post, Test Better, I suggested that developers can and ought to do a better job of testing their own code. If you haven’t read it, I recommend you read that post first. I’m totally not biased in saying this at all. GO DO IT ALREADY!

There was some interesting pushback in the comments. Some took it to mean that we should get rid of all the testers. Whoa whoa whoa there! Slow down folks.

I can see how some might come to that conclusion. I did mention that my colleague Drew wants to destroy the role of QA. But it’s not because we want to just watch it burn.

Rather, we’re interested in something better rising from the ashes. It’s not that there’s no need for testers in a software shop. It’s that what we need is a better idea of what a tester is.

Testers Are Not Second Class Citizens

Perhaps you’ve had different experiences than me with testers. Good for you, here’s a Twinkie. For the vast majority of you, you can probably relate to the following.

At almost every position I’ve been at, developers treated testers like second class citizens in the pecking order of employees. Testers were just a notch above unskilled labor.

Not every tester mind you. There were always standouts. But I can’t tell you how many times developers would joke that testers are just wannabe developers who didn’t make the cut. The general attitude was that you could replace these folks with Amazon’s Mechanical Turk and not know the difference.

Mechanical Turk from Wikimedia Commons. Public domain image.

And in some cases, it was true. At Microsoft, for example, it’s easier to be hired as a tester than as a developer. And once your foot is in the door, you can eventually make that transition to developer if you’re halfway decent.

This makes it very challenging to hire and retain good testers.

Elevate the Profession

But it shouldn’t be this way. We need to elevate the profession of tester. Drew and I talk about these things from time to time, and he told me to think of testers as folks who provide a service to developers. Developers should test their own code, but testers can provide guidance on how to better test the code, suggest usage scenarios, set up test labs, etc.

Then it hit me. We already do have testers that are well respected by developers and may serve as a model for what we mean by the concept of better testers.

Security Testers

By most accounts, good security testers are well respected and not seen as folks who are developer wannabes. If not respected, they are feared for what they might do to your system should you piss them off. One security expert I know mentioned developers never click on links he sends without setting up a Virtual Machine to try it in first. Either way, it works.

Perhaps by looking at some of the qualities of security testers and how they integrate into the typical development flow, we can tease out some ideas on what better testers look like.

Like regular testers, many security testers test code that’s ready to ship. Many sites hire white hat penetration testers to attempt to locate and exploit vulnerabilities in a site that’s already been deployed. These folks are experts who keep up to date on the latest in security testing. They are not folks you can just replace with a Mechanical Turk.

Of course, smart developers don’t wait till code is deployed to get a security expert involved. That’s way too late. Security testers can help in the early stages of planning. Provide guidance on what patterns to avoid, what to look out for, some good practices to follow. During the coding stages they can provide code reviews with an eye towards security or simply answer questions you may have about tricky situations.

Testing as a Service

There are other testers that also follow a similar model. If you need to target 12 languages, you’ll definitely want to work with a localization/internationalization tester. If you value usability you may want to work with a usability expert. The list goes on.

It’s just not possible for a developer to be an expert in all these possible areas. I’d expect developers to have a basic understanding of these areas. Perhaps be quite knowledgeable in each, but never as knowledgeable as someone who is focused on these areas all the time.

The common theme among these testers is that they are providing a service to developers. They are sought out for their expertise.

General feature and quality testers should be no different. Good testers spend all their time learning and thinking about better and more efficient ways to test products. They are advocates for the end users and just as concerned about shipping software as developers. They are not gate keepers. They are enablers. They enable developers to ship better code.

This idea of testers as a service is not mine. It’s something Drew told me (he seriously needs to start his blog up again) that struck me.

By necessity, these would be folks who are great developers who have chosen to focus their efforts on the art and science of testing, just as another developer might choose to focus their efforts on the art and science of native clients, or reactive programming.

I love working with someone who knows way more about testing software and building in quality from the start than I do.

This is one of the motivations for me to test my own code better. If I’m going to leverage the skills of a great tester, it’s a matter of pride not to embarrass myself with stupid bugs I should have caught in my own testing. I want to impress these folks with crazy hard bugs I have no idea how to test.

Ok, maybe that last bit didn’t come out the way I intended. The point is when you work with experts, you don’t want them spending all their time with softballs. You want their help with the meaty stuff.

community, open source, personal

Someone recently emailed me to ask if I’m speaking at any upcoming conferences this year. Good question!

I’ve been keeping it pretty light this year since my family and I are doing a bit of travelling ourselves and I like spending time with them.

But I will be hitting up two conferences that I know of.

<anglebrackets> April 8 – 11

Ohmagerd! That’s this week! I better prepare!

I’ll be giving two talks this week. One of them will be a joint talk with the incomparable Scott Hanselman. Usually that means him taking potshots at me for your enjoyment. ARE YOU NOT ENTERTAINED?!


You will be!

Jazz Up Your Open Source with GitHub

Wednesday April 10 3:30 PM – 4:45 PM - Room 5 (Just Me)

You write some code that handles angle brackets like nobody’s business and you’re ready to share it with the world on GitHub. Great! Now what?

The story doesn’t end there. When the first users and contributors show up at your doorstep, you need to be prepared. Find out some tips for engaging an audience with your open source project and really make your project sing.

Return of the HaaHa Show: How to Open Source

Thursday April 11 8:00 AM (HWHAT!?) – 9:00 AM – Keynote Room 2 – Scott and Phil

They are back. ScottHa and PhilHaa reprise their legendary (OK, not really) HaaHa show that has thrilled audiences on three continents. There will be code. There will be jokes, bad ones. There will be Pull Requests. There will be Markdown. Will there be injuries? Papercuts? Let’s find out as we join Phil Haack and Scott Hanselman as they learn how to open source. We will answer questions like: How do I get involved in open source? How do I clone and repro, branch it, do a pull request and commit to an open source project? Seems kind of hard. Let’s see if it is!

MonkeySpace 2013 July 22-25

The call for proposals for this conference is still open. If you know anyone who might bring a diverse and unique perspective to this conference, please encourage them to submit. We’d really love to get a more diverse speaker cast than is typical for a conference on .NET open source. This conference is no longer just a conference on Mono. Mono figures prominently, but the scope has expanded to the broader topic of .NET open source and cross platform .NET.


I’ll be in Tokyo Japan in late April. So if you have a user group there that meets on Tuesday 4/30 and want to hear about GitHub, Git, NuGet, or even ASP.NET MVC, let me know. I’d be happy to swing by, but be warned I do not speak Japanese.

There might also be some local upcoming conferences I’ll speak at.


I recently was a guest on Yet Another Podcast with Jesse Liberty where I talked about Git, GitHub, GitHub for Windows, and subverting the oppressive traditional hierarchical organizational structure that serves to keep us down. FIGHT THE POWER!

Check it out.

Tags: speaking, talks, opensource, podcast

code, tdd, github

Developers take pride in speaking their mind and not shying away from touchy subjects. Yet there is one subject that makes many developers uncomfortable.


I’m not talking about drug testing, unit testing, or any form of automated testing. After all, while there are still some holdouts, at least these types of tests involve writing code. And we know how much developers love to write code (even though that’s not what we’re really paid to do).

No, I’m talking about the kind of testing where you get your hands dirty actually trying the application. Where you attempt to break the beautifully factored code you may have just written. At the end of this post, I’ll provide a tip using GitHub that’s helped me with this.

TDD isn’t enough

I’m a huge fan of Test Driven Development. I know, I know. TDD isn’t about testing as Uncle Bob sayeth from on high in his book, Agile Software Development, Principles, Patterns, and Practices,

The act of writing a unit test is more an act of design than of verification.

And I agree! TDD is primarily about the design of your code. But notice that Bob doesn’t omit the verification part. He simply provides more emphasis to the act of design.

In my mind it’s like wrapping a steak in bacon. The steak is the primary focus of the meal, but I sure as hell am not going to throw away the bacon! I know, half of you are hitting the reply button to suggest you prefer the bacon. Me too but allow me this analogy.

MMMM, gimme dat! Credit: Jason Lam CC-BY-SA-2.0

The problem I’ve found myself running into, despite my own advice to the contrary, is that I start to trust too much in my unit tests. Several times I’ve made changes to my code, crafted beautiful unit tests that provide 100% assurance that the code is correct, only to have customers run into bugs with the code. Apparently my 100% correct code has a margin of error. Perhaps Donald Knuth said it best,

Beware of bugs in the above code; I have only proved it correct, not tried it.

It’s surprisingly easy for this to happen. In one case, we had a UI gesture bound to a method that was very well tested. All tests pass. Ship it!

Except when you actually execute the code, you find that there’s a certain situation where an exception might occur that causes the code to attempt to modify the UI on a thread other than the UI thread #sadtrombone. That’s tricky to catch in a unit test.

Getting Serious about Testing

When I joined the GitHub for Windows (GHfW) team, we were still in the spiking phase, constantly experimenting with the UI and code. We had very little in the way of proper unit tests. Which worked fine for two people working in the same code in the same room in San Francisco. But here I was, the new guy hundreds of miles away in Bellevue, WA without any of the context they had. So I started to institute more rigor in our unit and integration tests as the product transitioned to a focus on engineering.

But we still lacked rigor in regular non-automated testing. Then along comes my compatriot, Drew Miller. If you recall, he’s the one I cribbed my approach to structuring unit tests from.

Drew really gets testing in all its forms. I first started working with him on the ASP.NET MVC team when he joined as a test lead. He switched disciplines from developer to QA because he wanted a venue to test his theories on testing and eventually show the world that we don’t need a separate QA person. Yes, he became a tester so he could destroy the role, in order to save the practice.

In fact, he hates the term QA (which stands for Quality Assurance):

The only assurance you will ever have is that code has bugs. Testing is about confidence. It’s about generating confidence that the user’s experience is good enough. And it’s about feedback. It’s about providing feedback to the developer in lieu of a user in the room. Be a tester, don’t be QA.

On the GitHub for Windows team, we don’t have a tester. We’re all responsible for testing. With Drew on board, we’re also getting much better at it.

Testing Your Own Code and Cognitive Bias

There’s this common belief that developers shouldn’t test their own code. Or maybe they should test it, but you absolutely need independent testers to also test it as well. I used to fully subscribe to this idea. But Drew has convinced me it’s hogwash.

It’s strange to me how developers will claim they can absolutely architect systems, provide insights into business decisions, write code, and do all sorts of things better than the suits and other co-workers, but when it comes to testing? Oh no no no, I can’t do that!

I think it’s a myth we perpetuate because we don’t like it! Of course we can do it, we’re smart and can do most anything we put our minds to. We just don’t want to so we perpetuate this myth.

There is some truth that developers tend to be bad at testing their own code. For example, the goal of a developer is to write software as bug free as possible. The presence of a bug is a negative. And it’s human nature to try to avoid things that make us sad. It’s very easy to unconsciously ignore code paths we’re unsure of while doing our testing.

A tester’s job, on the other hand, is to find bugs. A bug is a good thing to these folks. Thus they’re well suited to testing software.

But this oversimplifies our real goals as developers and testers: to ship quality software. Our goals are not at odds. This is the mental switch we must make.

And We Can Do It!

After all, you’ve probably heard it said a million times, when you look back on code written several months ago, you tend to cringe. You might not even recognize it. Code in the brain has a short half-life. For me, it only takes a day before code starts to slip my mind. In many respects, when I approach code I wrote yesterday, it’s almost as if I’m someone else approaching the code.

And that’s great for testing it.

When I think I’m done with a feature or a block of code, I pull a mental trick. I mentally envision myself as a tester. My goal now is to find bugs in this code. After all, if I find them and fix them first, nobody else has to know. Whenever a customer finds a bug caused by me, I feel horrible. So I have every incentive to try and break this code.

And I’m not afraid to ask for help when I need it. Sometimes it’s as simple as brainstorming ideas on what to test.

One trick my team has started doing that I really love: when a feature is about done, we update the Pull Request (remember, a pull request is a conversation about some code and you don’t have to wait for the code to be ready to merge to create a PR) with a test plan using the new Task Lists feature via GitHub Flavored Markdown.

This puts me in a mindset to think about all the possible ways to break the code. Some of these items might get pulled from our master test plan or get added to it.

Here’s an example of a portion of a recent test plan for a major bug fix I worked on (click on it to see it larger).


The act of writing the test plan really helps me think hard about what could go wrong with the code. Then running through it just requires following the plan and checking off boxes. Sometimes as I’m testing, I’ll think of new cases and I’ll just edit the plan accordingly.

Also, the test plan can serve as an indicator to others that the PR is ready to be merged. When you see everything checked off, then it should be good to go! Or if you want to be more explicit about it, add a “sign-off” checkbox item. Whatever works best for you.
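Such a plan is nothing more than GitHub Flavored Markdown task list syntax. The items below are invented for illustration, but the checkbox syntax is exactly what renders as clickable boxes in a pull request:

```markdown
### Test plan

- [x] Clone a repository over HTTPS
- [x] Clone a repository over SSH
- [ ] Sync a branch with no upstream changes
- [ ] Sync a branch after a forced push
- [ ] Sign-off
```

Editing the comment to change `- [ ]` to `- [x]` (or just clicking the rendered checkbox) checks an item off.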

The Case for Testers

Please don’t use this post to justify firing your test team. The point I’m trying to make is that developers are capable of and should test their own (and each other’s) code. It should be a badge of pride that testers cannot find bugs in your code. But until you reach that point, you’re probably going to need your test team to stick around.

While my team does not have dedicated testers, we consider each of us to be testers. It’s a frame of mind we can slip into when we need to.

But we’re also not building software for the Space Shuttle so maybe we can get away with this.

I’m still of the mind that many teams can benefit from a dedicated tester. But the role this person has is different from the traditional rote mechanical testing you often find testers lumped into. This person would mentor developers in the testing part of building software. Help them get into that mindset. This person might also work to streamline whatever crap gets in the way so that developers can better test their code. For example, building automation that sets up test labs for various configurations at a moment’s notice. Or helping to verify incoming bug reports from customers.


nuget 0 comments suggest edit

How can you trust anything you install from NuGet? It’s a simple question, but the answer is complicated. Trust is not some binary value. There are degrees of trust. I trust my friends to warn me before they contact the authorities and maybe suggest a lawyer, but I trust my wife to help me dispose of the body and uphold the conspiracy of silence (Honey, it was in the fine print of our wedding vows in case you’re wondering).

The following are some ideas I’ve been bouncing around with the NuGet team about trust and security since even before I left NuGet. Hopefully they spark some interesting discussions about how to make NuGet a safer place to install packages.

Establish Identity and Authorship

The question “do I trust this package” is not the best question to ask. The more pertinent question is “do I trust the author of this package?”

NuGet doesn’t change how you go about answering this question yet. Whether you found a zip file on some random website or installed it via NuGet, you still have to answer the following questions (perhaps unconsciously):

  1. Who is the author?
  2. Is the author trustworthy?
  3. Do I trust that this software really was written by the author?
  4. Is the author’s means of distributing software tamper resistant and verifiable?

In some cases, the non-NuGet software is signed with a certificate. That helps answer questions 1, 2, and 3. But chances are, you don’t restrict yourself to only using certificate signed libraries. I looked through my own installed Visual Studio Extensions and several were not certificate signed.

NuGet doesn’t yet support package signing, but even if it did, it wouldn’t solve this problem sufficiently. If you want to know more why I think that, read the addendum about package signing at the end of this post.

What most people do in such situations is try to find alternate means to establish identity and authorship:

  1. I look for other sites that link to this package and mention the author.
  2. I look for sites that I already know to be in control of the author (such as a blog or Twitter account) and look for links to the package.
  3. I look for blog posts and tweets from other people I trust mentioning the package and author.

I think NuGet really needs to focus on making this better.

A Better Approach

There isn’t a single solution that will solve the problem. But I do believe a multipronged approach will make it much easier for people to establish the identity and authorship of a package and make an educated decision on whether or not to install any given package.

Piggy back on other verification systems

This first idea is a no-brainer to me. I’m a lazy bastard. If someone else has done the hard work, I’d like to build on what they’ve done.

This is where social media can come into play and have a useful purpose beyond telling the world what you ate for lunch.

For example, suppose you want to install RouteMagic and you see that the package owner is some user named haacked on NuGet. Who is this joker?

Hey! Maybe you happen to know haacked on GitHub! Is that the same guy as this one? You also know a haacked on Twitter and you trust that guy. Can we tie all these identities together?

Well it’d be easy through OAuth. The NuGet gallery could allow me to verify that I am the same person as haacked on GitHub and Twitter by doing an OAuth exchange with those sites. Only the real haacked on Twitter could authenticate as haacked on Twitter.

The more identities I attach to my NuGet account, the more you can trust that identity. It’s unlikely someone will hack both my GitHub and Twitter accounts.

The NuGet Gallery would need to expose these verifications in the UI anywhere I see a package owner, perhaps with little icons.

With Twitter, you could go even further. Twitter has the concept of verified identities. If we trust their process of verification, we could piggyback on that and show a verified icon next to Twitter verified users, adding more weight to your claimed identity.

This would be so easy and cheap to implement and provide a world of benefit for establishing identity.

Build our own verification system

Eventually, I think NuGet might want to consider having its own verification system and NuGet Verified Accounts™. Doing this right, without simply favoring corporations over the little guy, makes it much costlier than my previous suggestion.

Honestly, if we implemented the first idea well, I’m not sure this would need to happen anytime soon.


Web of Trust

This idea is inspired by the concept of a Web of Trust with PGP, which provides a decentralized approach to establishing the identity of the owner of a public key.

While the previous ideas help establish identity, we still don’t know if we can trust these people. Chances are, if someone has a well established identity they won’t want to smudge their reputation with malware. But what about folks without well established reputations?

We could implement a system of vouching. For example, suppose you trust me and I vouch for ten people. And they in turn vouch for ten people each. That’s a network of 111 potentially trustworthy people. Of course, each degree you move out, the level of trust declines. You probably trust me more than the people I trust. And those people more than the people they trust. And so on.

How do we use this information in NuGet?

It could be as simple as factoring it into sort order. For example, one factor in establishing trust in a package today is looking at the download count of a package. Chances are that a malware library is not going to get ten thousand downloads.

We could also incorporate the level of trust of the package owner into that sort order. For example, show me packages for sending emails in order of trust and download count.
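The ordering itself is trivial. A minimal sketch of the idea in C# (the `Package` type, the trust score, and the package names here are all invented for illustration; NuGet does nothing like this today):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of a search result: a package plus its owner's trust score.
public record Package(string Id, double OwnerTrust, int Downloads);

public static class TrustRanking
{
    // Trust first, then popularity as a tie-breaker.
    public static IEnumerable<Package> Rank(IEnumerable<Package> results) =>
        results.OrderByDescending(p => p.OwnerTrust)
               .ThenByDescending(p => p.Downloads);

    static void Main()
    {
        var results = new[]
        {
            new Package("SketchyMail", OwnerTrust: 0.1, Downloads: 9000),
            new Package("SolidSmtp",   OwnerTrust: 0.9, Downloads: 8000),
            new Package("MailWidget",  OwnerTrust: 0.9, Downloads: 1200),
        };

        // A highly downloaded package from an untrusted owner sinks to the bottom.
        foreach (var p in Rank(results))
            Console.WriteLine(p.Id);
    }
}
```

The interesting design question is how heavily to weight trust versus downloads; a simple lexicographic ordering like this is just the crudest possible starting point.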

Other attack vectors

So far, I’ve focused on establishing trust in the author of a package. But a package manager system has other attack vectors.

For example, the place where packages are stored could be hacked or the service itself could be hacked.

If Azure Blob storage were hacked, an attacker could swap out packages of trusted authors with untrusted materials. This is a real concern. Luckily, the gallery stores the hash of each package and presents it in the feed. The NuGet client verifies the contents against that hash before installing a package on the user’s machine.

However, suppose the database was hacked. There is still a level of protection because any hash tampering would be caught by the clients.

An attacker would have to compromise both the Azure Blob Storage and the database.

Or worse, if the attacker compromises the machine that hosts NuGet, then it’s game over as they could corrupt the hashes and run code to pull packages from another location.

Mitigations of this nightmare scenario include having different credentials for Blobs and the database and constant security reviews of the NuGet code base.

Another thing we should consider is storing package hashes in packages.config so that Package Restore could at least verify packages during a restore in this nightmare scenario. But this wouldn’t solve the issue with installing new packages.
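The verification itself is cheap. Here’s a minimal sketch of the idea in C# (the method name and the Base64-encoded SHA512 format are my own choices for illustration, not NuGet’s actual implementation):

```csharp
using System;
using System.Security.Cryptography;

static class PackageVerifier
{
    // Returns true if the downloaded package bytes match the hash the feed
    // (or a packages.config entry) claims for them.
    public static bool VerifyPackage(byte[] packageBytes, string expectedHashBase64)
    {
        using (var sha = SHA512.Create())
        {
            var actual = Convert.ToBase64String(sha.ComputeHash(packageBytes));
            // Any tampering with the bytes changes the hash.
            return actual == expectedHashBase64;
        }
    }

    static void Main()
    {
        var package = new byte[] { 0x50, 0x4b, 0x03, 0x04 }; // stand-in package bytes
        string published;
        using (var sha = SHA512.Create())
            published = Convert.ToBase64String(sha.ComputeHash(package));

        Console.WriteLine(VerifyPackage(package, published)); // intact package passes
        package[0] ^= 0xff;                                   // attacker tampers
        Console.WriteLine(VerifyPackage(package, published)); // tampered package fails
    }
}
```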

PowerShell Scripts

NuGet makes use of PowerShell scripts to perform useful tasks not covered by a typical package.

A lot of folks get worried about this as an attack vector and want a way to disable these scripts. There are definitely bad things that could happen and I’m not opposed to having an option to disable them, but this only gives a false sense of security. It’s security theater.

Why’s that you say? Well a package with only assemblies can still bite you through the use of Module Initializers.

Modules may contain special methods called module initializers to initialize the module itself.

All modules may have a module initializer. This method shall be static, a member of the module, take no parameters, return no value, be marked with rtspecialname and specialname, and be named .cctor.

There are no limitations on what code is permitted in a module initializer. Module initializers are permitted to run and call both managed and unmanaged code.

The module’s initializer method is executed at, or sometime before, first access to any types, methods, or data defined in the module
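Concretely, a module initializer is just a module-level `.cctor` in IL. The C# compiler of the day won’t emit one for you, but anyone with `ilasm` can. A rough sketch, where the `Payload` call is a made-up stand-in for arbitrary code:

```cil
.method private static specialname rtspecialname
        void .cctor() cil managed
{
    // Runs at, or sometime before, first access to anything in the
    // module. No PowerShell scripts required.
    call void EvilHelpers::Payload()
    ret
}
```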

If you’re installing a package, you’re about to run some code with or without PowerShell scripts. The proper mitigation is to stop running your development environment as an administrator and make sure you trust the package author before you install the package.

At least with NuGet, when you install a package it doesn’t require elevation. If you install an MSI, you’d typically have to elevate privileges.

Addendum: Package Signing is not the answer

Every time I talk about NuGet security, someone gets irate and demands that we implement signing immediately as if it were some magic panacea. I’m definitely not against implementing package signing, but let’s be clear. It is a woefully inadequate solution in and of itself and there’s a lot better things we should do first as I’ve already outlined in this post.

The Cost and Ubiquity Problem

Very few people will sign their packages. Ruby Gems supports package signing and I’ve been told the number that take advantage of it is nearly zero. Visual Studio Extensions also supports package signing. Quick, go look at your list of installed extensions. Were any unsigned?

The problem is this: if you require certificate signing, you’ve just created too much friction to create a package, and the package manager ecosystem will dry up and die. Requiring signing is just not an option.

The reason is that obtaining and properly signing software with a certificate is a costly proposition by its very nature. A certificate implies that some authority has verified your identity. For that verification to have value, it must be somewhat reliable and thorough. It can’t be immediate and easy, or bad agents could easily obtain one.

Package signing is only a good solution if you can guarantee near ubiquity. Otherwise you still need alternative solutions.

The User Interface Problem

Once you allow package signing, you then have the user interface problem. Visual Studio Extensions is an interesting example of this conundrum. You only see that a package is digitally signed after you’ve downloaded and decided to install it. At that point, you tend to be committed already.


Also notice that the message that this package isn’t signed is barely noticeable.

Ok, so it’s not signed. What can I do about it other than probably install it anyway because I really want this software? The fact that a package was signed didn’t change my behavior in any way.

Visual Studio could put more dire looking warnings, but it would alienate the community of extension authors by doing so. It could require signing, but that would put onerous restrictions on creating packages and would cause the community of signed packages to wither away, leaving only packages sponsored by corporations.

The point here is that even with signed packages, there’s not much it would do for NuGet. Perhaps we could support a mode where it gave a more dire warning or even disallowed unsigned packages, but that’d just be annoying and most people would never use that mode because the selection of packages would be too small.

The only benefit in this case of signing is that if a package did screw something up, you could probably chase down the author if they signed it. But that’s only a benefit if you never install unsigned packages. Since most people won’t sign them, this isn’t really a viable way to live.


Just to be clear. I’m actually in favor of supporting package signing eventually. But I do not support requiring package signing to make it into the NuGet gallery. And I think there are much better approaches we can take first to mitigate the risk of using NuGet before we get to that point.

I worry that implementing signing just gives a false sense of security and we need to consider all the various ways that people can establish trust in packages and package authors.

git, github, code 0 comments suggest edit

The other day I needed a simple JSON parser for a thing I worked on. Sure, I’m familiar with JSON.NET, but I wanted something I could just compile into my project. The reason why is not important for this discussion (but it has to do with world domination, butterflies, and minotaurs).

I found the SimpleJson package which is also on GitHub.

SimpleJson takes advantage of a neat little feature of NuGet that allows you to include source code in a package and have that code transformed into the appropriate namespace for the package target. Oftentimes, this is used to install sample code or the like into a project. But SimpleJson uses it to distribute the entire library.

At first glance, this is a pretty sweet way to distribute a small single source file utility library. It gets compiled into my code. No binding redirects to worry about. No worries about different versions of the same library pulled in by dependencies. In my particular case, it was just what I needed.

But I started to think about the implications of such an approach on a wider scale. What if everybody did this?

The Update Problem

If such a library were used by multiple packages, it actually could limit the consumer’s ability to update the code.

For example, suppose I have a project that installs the SimpleJson package and also the SimpleOtherStuff package, where SimpleOtherStuff has a dependency on SimpleJson 1.0.0 and higher. The following diagram outlines the NuGet package dependency graph. It’s very simple.


Now suppose we learn that SimpleJson 1.0.0 has a very bad security issue and we need to upgrade to the just released SimpleJson 1.1.

So we do just that. Everything should be hunky dory as we’re now using SimpleJson 1.1 everywhere. Or are we?


If all the references to SimpleJson were assembly references, we’d be fine. But recall, it’s a source code package. Even though we upgraded it in our application, SimpleOtherStuff 1.0.0 has SimpleJson 1.0.0 compiled into it.

There’s no way to upgrade SimpleOtherStuff’s reference other than to wait for the package author to do it or to manually recompile it ourselves (assuming the source is available).

You Are in Control

A guiding principle in the design of NuGet is we try and keep you, the consumer of the packages, in control of things. Want to uninstall a package even though other packages reference it? We’ll prevent it by default but then offer you a -Force flag so you can tell NuGet, “No really, I know what I’m doing here and am ready to face the consequences.”

We don’t do this perfectly in every case. Pre-release packages come to mind. But it’s a principle we try to follow.

Source code packages are interesting in that they give you more control in one area (you have the source), but take it away in another (upgrades are no longer complete).

Note that I’m not picking on SimpleJson. As I said before, I really needed this. In fact, I contributed back with several Pull Requests. I’m just pointing out a caveat to consider when using such packages.

Making it Better

So yeah, be careful. There are caveats. But couldn’t we make this better? Well I have an idea. Ok, it’s not my idea but an idea that some of my coworkers and I have bounced around for a while.

Imagine if you could attach a Git repository to your NuGet package. When you install the package, you could add a flag to install it as a Git Submodule rather than the normal assembly approach. Maybe it’d look like this.

Install-Package SimpleJson -AsSource

What this would do is initialize a submodule, and grab the source from GitHub. Perhaps it goes further and adds the files as linked files into your target project based on a bit of configuration in the source tree.

There’s a lot of possibilities here to flesh out. The Upgrade-Package command would simply run a Git submodule update command on these submodules and do a normal update for all the other packages.
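Under the hood, the hypothetical `-AsSource` install could be little more than standard git plumbing. A sketch using stand-in local repositories (the package name and paths are illustrative, not a real NuGet feature):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the package author's source repository.
git init -q "$tmp/simplejson"
git -C "$tmp/simplejson" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"

# The consuming application. "Install-Package SimpleJson -AsSource" would
# effectively vendor the package's source as a submodule:
git init -q "$tmp/app"
cd "$tmp/app"
git -c protocol.file.allow=always submodule --quiet add "$tmp/simplejson" vendor/simplejson

# And an "Update-Package SimpleJson -AsSource" would be little more than:
#   git submodule update --remote vendor/simplejson
ls .gitmodules
```

(The `protocol.file.allow` setting is only needed on newer versions of git, which restrict cloning submodules from local paths by default.)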

Since Microsoft recently made it clear that Git is the future of DVCS as far as Microsoft is concerned, maybe now is the time to think about tighter integration with NuGet. What do you think?

At the very least, perhaps NuGet needs a better extensibility model so we could build this support in outside of NuGet. That’s the more prudent approach of course, but I’m not feeling so prudent today.

code 0 comments suggest edit

Today I learned something new and I love that!

I was looking at some code that looked like this:

    try
    {
        await obj.GetSomeAsync();
        Assert.True(false, "SomeException was not thrown");
    }
    catch (SomeException)
    {
    }

That’s odd. We’re using xUnit. Why not use the Assert.Throws method? So I tried with the following naïve code.

Assert.Throws<SomeException>(() => await obj.GetSomeAsync());

Well that didn’t work. I got the following helpful compiler error:

error CS4034: The ‘await’ operator can only be used within an async lambda expression. Consider marking this lambda expression with the ‘async’ modifier.

Oh, I never really thought about applying the async keyword to a lambda expression, but it makes total sense. So I tried this:

Assert.Throws<SomeException>(async () => await obj.GetSomeAsync());

Hey, that worked! I rushed off to tell the internets on Twitter.

But I made a big mistake. That only made the compiler happy. It doesn’t actually work. It turns out that Assert.Throws takes in an Action and thus that expression doesn’t return a Task to be awaited upon. Stephen Toub explains the issue in this helpful blog post, Potential pitfalls to avoid when passing around async lambdas.

Ah, I’m gonna need to write my own method that takes in a Func<Task>. Let’s do this!

I wrote the following:

public async static Task<T> ThrowsAsync<T>(Func<Task> testCode)
    where T : Exception
{
    try
    {
        await testCode();
        Assert.Throws<T>(() => { }); // Use xUnit's default behavior.
    }
    catch (T exception)
    {
        return exception;
    }
    // Never reached. The compiler doesn't know Assert.Throws above always throws.
    return null;
}

Here’s an example of a unit test (using xUnit) that makes use of this method.

[Fact]
public async Task RequiresBasicAuthentication()
{
    await ThrowsAsync<SomeException>(async () => await obj.GetSomeAsync());
}

And that works. I mean it actually works. Let me know if you see any bugs with it.

Note that you have to change the return type of the test method (fact) from void to return Task and mark it with the async keyword as well.

So as I was posting all this to Twitter, I learned that Brendan Forster (aka @ShiftKey) already built a library that has this type of assertion. But it wasn’t on NuGet so he’s dead to me.

But he remedied that five minutes later.

Install-Package AssertEx.

So we’re all good again.

If I were you, I’d probably just go use that. I just thought this was an enlightening look at how await works with lambdas.

personal 0 comments suggest edit

Back in March of last year, Stephen Wolfram wrote a blog post, The Personal Analytics of My Life. It’s a fascinating look at the data he’s accumulated over years about his own personal activities and habits such as daily incoming and outgoing email.

Since I read that, I’ve been fascinated about the idea of how personal data analytics might prove useful to me. It turns out I found an application to my health.

In my series on The Real Pain of Software Development (part1 and part2), I talked about my history with pain related to work and the various measures I took to remedy that pain including intense physical and occupational therapy.

What I neglected to mention was how much difference a bit of weight loss makes. A single pound reduces the force on your joints, back, and other muscles an immense amount over the course of a day, week, or years. I am now more aware of how much the pain I feel ebbs and flows as my weight does.

Sometime last year, GitHub gave all of us a Fitbit. It’s a little device that tracks the number of steps you take during the day. It can also track vertical distance changes and if you’re diligent, how much sleep you get. It posts all the data online so you can take a look at your numbers and compare to friends.

It wasn’t long before my co-workers hooked it up to Hubot (just another example of how chat is important to us and is at the center of how we work). Here’s a screenshot of the /fitbit me command which shows the leaderboards for step counts in the past 7 days. I blocked out names for privacy reasons, though I bet the top four would love to be unredacted.


What I love about this is it adds an element of friendly competition to the mix. There’s absolutely NOTHING riding on this other than pride. Yet, it’s amazing how motivating this is. I now take a long walk to get coffee every day because I want to be up there near the top. And there’s no downsides to that. Maybe it’s the wrong motivation, but it’s definitely the right result. The evidence for this result is in the following.

I also happened to purchase the FitBit Aria Wi-Fi scale because I love me them stats. As you can see from the graph, the /fitbit me motivation is effective.


Unfortunately, I injured my knee snowboarding a couple weeks ago and have been sick the past few days so it’s starting to trend slightly up. But since I’ve had the scale, I’ve lost 6.3 lbs overall. This isn’t only due to the FitBit. We recently started a single subscription to BistroMD to supplement our cooking efforts since we both work and I work from home. I’ve found portion control a lot easier when it’s controlled for you.

FitBit can also track sleep, but this is not as automatic as step counting. You have to remember to wear it at night and put it in sleep mode. It’s a bit high maintenance.


Also, it seems to count nights that you forget to set it as a day with 0 hours of sleep, which is not really what I want. Based on this graph you might incorrectly assume I averaged from 3 to 6 hours of sleep a night over the past year. Clearly that’s not correct or I’d be a raging hallucinatory lunatic by now. I know, some of you think I am such a beast, but if so it’s for other reasons, not due to lack of sleep.

I hope to more diligently track my sleep patterns this year to see if there are any interesting correlations between the amount of sleep I get, my fitness level, my mood, and my GitHub contributions graph. Heh.

One thing I wish FitBit did better is provide better graphs for seeing my step counts over time. For example, here’s what I could find to see my step counts over the past 30 days.


I’d love to see a graph of my steps over the year in fine detail.

Speaking of step counts, see that big spike in January? That’s the GitHub Winter Summit, our all-company meeting in San Francisco. The way it spikes might give you the impression that we’re all health freaks and the summit is a very physical endeavor. Well, maybe.

The following is a breakdown of a couple of those days.

January 17, 2013: 23,285 steps (approx. 10.69 miles)

January 18, 2013: 24,512 steps (approx. 11.35 miles)

Notice the hours of the big spikes. Yeah, we like to dance late at night.

Notice the big midday bump on the second day? That was a scavenger hunt that took us all around the great city of San Francisco.

All in all, I’m pretty happy with the FitBit and like how the data driven lifestyle it encourages has been a net positive for me. Your mileage may vary of course.

Some downsides to the FitBit are that it’s easy to lose, easy to forget about and launder, and somewhat easy to break. Mine’s survived so far, but not without a chip or two. Also, it requires charging almost every day, or at least every other day. And I’m not aware of a way to get the absolute raw data, even with the premium account.

I’d be willing to pay money to get the raw data and create my own graphs. FitBit isn’t the only personal fitness tracker out there, but it’s the only one I’ve tried and I’m a big fan. I wouldn’t mind trying others, but much like the network effects of other social networks, the fact that many of my friends and co-workers are all on FitBit will keep me tied to it for now.

UPDATE: Looks like there is a FitBit API. I’ll have to play around with it. Thanks to @geeksmeetgirl for pointing it out to me.


I love automation. I’m pretty lazy by nature and the more I can offload to my little programmatic or robotic helpers the better. I’ll be sad the day they become self-aware and decide that it’s payback time and enslave us all.

But until that day, I’ll take advantage of every bit of automation that I can.


For example, I’m a big fan of the Code Analysis tool built into Visual Studio. It’s more commonly known as FxCop, though judging by the language I hear from its users I’d guess its street name is “YOU BIG PILE OF NAGGING SHIT STOP WASTING MY TIME AND REPORTING THAT VIOLATION!”

Sure, it has its peccadilloes, but with the right set of rules, it’s possible to strike a balance between a total nag and a helpful assistant.

As developers, it’s important for us to think hard about our code and take care in its crafting. But we’re all fallible. In the end, I’m just not smart enough to remember ALL the possible pitfalls of coding ALL OF THE TIME, such as avoiding the Turkish I problem when comparing strings. If you are, more power to you!

I try to keep the number of rules I exclude to a minimum. It’s saved my ass many times, but it’s also strung me out in a harried attempt to make it happy. Nothing pleases it. Sure, when it gets ornery, it’s easy to suppress a rule. I try hard to avoid that because suppressing one violation sometimes hides another.
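For the record, suppressing a single violation in code looks something like this. The attribute and category below follow the standard Code Analysis conventions; the particular rule shown is just an illustration:

```csharp
using System.Diagnostics.CodeAnalysis;

public class SomeClass
{
    // Suppresses this one rule for this one method. The Justification
    // property is a good place to explain why the violation is a false alarm.
    [SuppressMessage("Microsoft.Reliability",
        "CA2000:DisposeObjectsBeforeLosingScope",
        Justification = "Ownership of the instance is transferred to a field.")]
    public void SomeMethod()
    {
        // ...
    }
}
```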

Here’s an example of a case that confounded me today. The following very straightforward looking code ran afoul of a code analysis rule.

public sealed class Owner : IDisposable
{
    Dependency dependency;

    public Owner()
    {
        // This is going to cause some issues.
        this.dependency = new Dependency { SomeProperty = "Blah" };
    }

    public void Dispose()
    {
        dependency.Dispose();
    }
}

public sealed class Dependency : IDisposable
{
    public string SomeProperty { get; set; }

    public void Dispose()
    {
    }
}
Code Analysis reported the following violation:

CA2000 Dispose objects before losing scope: In method ‘Owner.Owner()’, object ‘<>g__initLocal0’ is not disposed along all exception paths. Call System.IDisposable.Dispose on object ‘<>g__initLocal0’ before all references to it are out of scope.

That’s really odd. As you can see, dependency is disposed when its owner is disposed. So what’s the deal?

Can you see the problem?

A Funny Thing about Object Initializers

The big clue here is the name of the variable that’s not disposed, <>g__initLocal0. As Phil Karlton once said, emphasis mine,

There are only two hard things in Computer Science: cache invalidation and naming things.

Naming may be hard, but I can do better than that. Clearly the compiler came up with that name, not me. I fired up Reflector to see the generated code. The following is the constructor for Owner.

public Owner()
{
    Dependency <>g__initLocal0 = new Dependency();
    <>g__initLocal0.SomeProperty = "Blah";
    this.dependency = <>g__initLocal0;
}

Aha! So we can see that the compiler generated a temporary local variable to hold the initialized object while its properties are set, before assigning it to the member field.

So what’s the problem? Well, if for some reason setting SomeProperty throws an exception, <>g__initLocal0 will never be disposed. That’s what Code Analysis is complaining about. Note that if an exception is thrown while setting that property, my member field is also never set to the instance. So it’s a dangling undisposed instance.

So what’s the Plan Stan?

Well the fix to keep code analysis happy is simple in this case.

public Owner()
{
    this.dependency = new Dependency();
    this.dependency.SomeProperty = "Blah";
}

Don’t use the initializer and set the property the old fashioned way.

This shuts up Code Analysis, but did it really solve the problem? Not in this specific case, because we happen to be inside a constructor. If the Owner constructor throws, nobody is going to dispose of the dependency.

As Greg Beech wrote so long ago,

From this we can ascertain that if the object is not constructed correctly then the reference to the object will not be assigned, which means that no methods can be called on it, so the Dispose method cannot be used to deterministically clean up managed resources. The implication here is that if the constructor creates expensive managed resources which need to be cleaned up at the earliest opportunity then it should do so in an exception handler within the constructor as it will not get another chance.

So a more robust approach would be the following.

public Owner()
{
    this.dependency = new Dependency();
    try { this.dependency.SomeProperty = "Blah"; }
    catch (Exception) { this.dependency.Dispose(); throw; }
}
This way, if setting the properties of Dependency throws an exception, we can dispose of it properly.

Why isn’t the compiler smarter?

I’m not the first to run into this pitfall with object initializers and disposable instances. Ayende wrote about a related issue with using blocks and object initializers back in 2009. In that post, he suggests the compiler should generate safe code for this scenario.

It’s an interesting question. Whenever I think of such questions, I put on my Eric Lippert hat and hear his proverbial voice (I’ve never heard his actual voice but I imagine it to be sonorous and profound) in my head saying:

I’m often asked why the compiler does not implement this feature or that feature, and of course the answer is always the same: because no one implemented it. Features start off as unimplemented and only become implemented when people spend effort implementing them: no effort, no feature. This is an unsatisfying answer of course, because usually the person asking the question has made the assumption that the feature is so obviously good that we need to have had a reason to not implement it. I assure you that no, we actually don’t need a reason to not implement any feature no matter how obviously good. But that said, it might be interesting to consider what sort of pros and cons we’d consider if asked to implement the “silently put inferred constraints on class type parameters” feature.

The current implementation of object initializers seems correct for most cases. The only time it breaks down is in the case of disposable types. So let’s think about some possible solutions.

Why the intermediate variable?

First, let’s look at why the compiler uses an intermediate local variable. My initial knee-jerk reaction (ever notice how often your knee-jerk reaction makes you sound like a jerk?) was that the intermediate variable is unnecessary. But I thought about it some more and came up with a scenario. Suppose we’re setting a property to the value of an object created via an initializer.

this.SomePropertyWithSideEffects = new Dependency { Prop = 42 };

The way to do this without an intermediate local variable is the following.

this.SomePropertyWithSideEffects = new Dependency();
this.SomePropertyWithSideEffects.Prop = 42;

The first code block only calls the setter of SomePropertyWithSideEffects. But the second code block calls both the getter and setter. That’s pretty different behavior.

Now imagine we’re setting multiple properties or using a collection initializer with multiple items instead. We’d be calling that property getter multiple times. Who knows what awful side-effects that might produce. Sure, side effects in property getters are bad, but as I’ll point out later, there’s another reason this approach is fraught with error.

The intermediate local variable is necessary to ensure the object is only assigned after it’s fully constructed.
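To make that concrete, here’s a small self-contained sketch (the class and property names are mine, invented for illustration) with an instrumented getter. It shows that the hand-desugared form reads the property back once per assignment, while the initializer form never touches the getter at all:

```csharp
using System;

public class Dependency
{
    public int A { get; set; }
    public int B { get; set; }
}

public class Owner
{
    Dependency prop;

    public int GetterCalls { get; private set; }

    public Dependency Prop
    {
        get { GetterCalls++; return prop; }
        set { prop = value; }
    }

    // Initializer form: the compiler's hidden local means the getter is never called.
    public void AssignWithInitializer()
    {
        Prop = new Dependency { A = 1, B = 2 };
    }

    // Without the intermediate local, each property assignment re-reads Prop.
    public void AssignWithoutIntermediate()
    {
        Prop = new Dependency();
        Prop.A = 1;
        Prop.B = 2;
    }
}
```

Which is exactly why the compiler keeps the temporary: the observable behavior of the two forms differs whenever the getter does anything at all.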

Dispose it for me?

So given that, let’s try implementing the Owner constructor of my previous example the way a compiler might do it.

public Owner()
{
    var <>g__initLocal0 = new Dependency();
    try { <>g__initLocal0.SomeProperty = "Blah"; }
    catch (Exception) { <>g__initLocal0.Dispose(); throw; }
    this.dependency = <>g__initLocal0;
}

That certainly seems much safer, but there’s still a potential flaw. It’s optimistically calling Dispose on the object when the exception is thrown. What if I didn’t want Dispose called even though the object is disposable? Maybe the Dispose method of this specific object deletes your hard drive and plays Justin Bieber music when invoked. 99.9 times out of 100, you would want Dispose called in this case. But this is still a change in behavior and I can understand why the compiler might not risk it.

Perhaps the compiler could attempt to figure out whether that instance eventually gets disposed and do the right thing. All you have to do is find a flaw in Turing’s proof of the Halting Problem. No problem, right?

Perhaps we could be satisfied with good enough: always dispose it and just say that’s the behavior of object initializers. It’s probably too late for that, as it’d be a breaking change. Honestly, it’d be one I could live with.

Let me dispose it

Perhaps the problem isn’t that we want the compiler to automatically dispose of the intermediate object in the case of an exception. What we really want is for the assignment to happen no matter what, so we can dispose of the object in our own code if an exception is thrown. Perhaps the compiler can generate code that would allow us to do this.

public Owner()
{
    try { this.dependency = new Dependency { SomeProperty = "blah" }; }
    catch (Exception) { if (this.dependency != null) this.dependency.Dispose(); throw; }
}

What might the generated code look like in this case?

public Owner()
{
    var <>g__initLocal0 = new Dependency();
    this.dependency = <>g__initLocal0;
    <>g__initLocal0.SomeProperty = "Blah";
}

That’s not too shabby. We got rid of the try/catch block that we had to introduce previously, and if an exception is thrown in the property setter, we can clean up after it. I’m a genius!

Not so fast Synkowski. There’s a potential problem here. Suppose we’re not inside a constructor, but rather are in a method that’s setting a shared member.

public void DoStuff()
{
    var <>g__initLocal0 = new Dependency();
    this.dependency = <>g__initLocal0;
    <>g__initLocal0.SomeProperty = "Blah";
}

We’ve now introduced a possible race condition if this method is used in an async or multithreaded environment.

Notice that after this.dependency is set to the incomplete local instance, but before the property is set, there’s a gap where another thread could modify this.dependency, leading to indeterminate results. That’s definitely a change you wouldn’t want the compiler making.

In fact, this same issue affects my earlier proposal of not using an intermediate variable.

So about that Code Analysis

Note that I didn’t specifically address Ayende’s case. In his case, the initializer is in a using block. That seems like a more tractable problem for the compiler to solve, but this post is getting long as it is and it’s time to wrap up. Maybe someone else can analyze that case for shits and giggles.

In my case, we’re setting a member that we plan to dispose later. That’s a much harder (if not impossible) nut to crack.

And here we get to the moral of the story. I get a lot more work done when I don’t stop every hour to write a blog post about some interesting bug I found in my code.

No wait, that’s not it.

The point here is that code analysis is a very helpful tool for writing more robust and correct code. But it’s just an assistant. It’s not a safety net. It’s more like an air bag. It’ll keep you from splattering your brains on the dashboard, but you can still totally wreck your car and break that nose if you’re not careful at the wheel.

Here’s an example where automated tools can both lead you into an accident and save your butt at the last second.

If you use Resharper (another tool with its own automated analysis) like I do and you write some code in a constructor that doesn’t use an object initializer, you’re very likely to see this (with the default settings).


See that green squiggly under the new keyword just inviting, no, begging you to hit ALT+ENTER and convert that bad boy into an object initializer? Go ahead, it seems to suggest. What could go wrong? Oh, only that it could cause you to leak a resource, as pointed out earlier.

I often like to hit CTRL E + CTRL C in Resharper to reformat my entire source file to be consistent with my coding standards. Depending on how you set up the reformatting, such an automatic action could easily change this code from working code to subtly broken code.

I still have to pay careful attention to what it’s doing. It’s easy to get lulled into a sense of safety when performing automatic refactorings. But you can’t trust it one hundred percent. You are the one who is responsible, not the tools. You are the one in control.

Fortunately in this case, Code Analysis brought this issue to my attention. And in doing so, pointed out an interesting topic for a blog post. Yay automation!


Tony Hoare, the computer scientist who implemented null references in ALGOL W, calls it his “billion-dollar mistake.”

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.

It may well be that a billion is a vast underestimate. But if you’re going to make a mistake, might as well go big. Respect!

To this day, we pay the price with tons of boilerplate code. For example, it’s generally good practice to add guard clauses for each potentially null parameter to a public method.

public void SomeMethod(object x, object y) {
  // Guard clauses
  if (x == null)
    throw new ArgumentNullException("x");
  if (y == null)
    throw new ArgumentNullException("y");

  // Rest of the method...
}
While it may feel like unnecessary ceremony, Jon Skeet gives some good reasons why guard clauses like this are a good idea in this StackOverflow answer:

Yes, there are good reasons:

  • It identifies exactly what is null, which may not be obvious from a NullReferenceException
  • It makes the code fail on invalid input even if some other condition means that the value isn’t dereferenced
  • It makes the exception occur before the method could have any other side-effects you might reach before the first dereference
  • It means you can be confident that if you pass the parameter into something else, you’re not violating their contract
  • It documents your method’s requirements (using Code Contracts is even better for that of course)

I agree. The guard clauses are needed, but it’s time for some Real Talk™. This is shit work. And I hate shit work.

In this post,

  • I’ll explain the idea of non-nullable parameters and why I didn’t use Code Contracts, in the hope that it heads off the first 10 comments asking “why didn’t you use Code Contracts, dude?”
  • I’ll cover an approach using PostSharp to automatically validate null arguments.
  • I’ll then explain how I hope to create an even better approach.

Stick with me.

Non Null Parameters

With .NET languages such as C#, there’s no way to prevent a caller of a method from passing in a null value to a reference type argument. Instead, we simply end up having to validate the passed in arguments and ensure they’re not null.

In practice (at least with my code), the number of times I want to allow a null value is far exceeded by the number of times a null value is not valid. What I’d really like to do is invert the model. By default, a parameter cannot be null unless I explicitly say it can. In other words, make allowing null opt-in rather than opt-out as it is today.

I recall that there was some experimentation around this by Microsoft with the Spec# language that introduced a syntax to specify that a value cannot be null. For example…

public void Foo(string! arg);

…defines the argument to the method as a non-nullable string. The idea is this code would not compile if you attempt to pass in a null value for arg. It’s certainly not a trivial change as Craig Gidney writes in this post. He covers many of the challenges in adding a non-nullable syntax and then goes further to provide a proposed solution.

C# doesn’t have such a syntax, but it does have Code Contracts. After reading up on it, I really like the idea, but for me it suffers from one fatal flaw. There’s no way to apply a contract globally and then opt-out of it in specific places. I still have to apply the Contract calls to every potentially null argument of every method. In other words, it doesn’t satisfy my requirement to invert the model and make allowing null opt in rather than opt out. It’s still shit work. It’s also error-prone and I’m too lazy a bastard to get it right in every case.
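To illustrate what that per-method opt-in looks like, here’s a sketch using the Code Contracts API. (Note that Contract.Requires only has teeth when the ccrewrite binary rewriter runs as part of the build.)

```csharp
using System;
using System.Diagnostics.Contracts;

public class SomeClass
{
    public void SomeMethod(object x, object y)
    {
        // These calls must be repeated in every method for every
        // potentially null parameter; there's no global opt-in.
        Contract.Requires<ArgumentNullException>(x != null);
        Contract.Requires<ArgumentNullException>(y != null);

        // Rest of the method...
    }
}
```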

IL Rewriting to the Rescue

So I figured I’d go off the deep end and experiment with Intermediate Language (IL) weaving with PostSharp to insert guard clauses automatically. Usually, any time I think about rewriting IL, I take a hammer to my head until the idea goes away. A few good whacks is plenty. However, in this case I thought it’d be a fun experiment to try. Not to mention I have a very hard head.

I chose to use PostSharp because it’s easy to get started with and it provides a simple, but powerful, API. It does have a few major downsides for what I want to accomplish that I’ll cover later.

I wrote an aspect, EnsureNonNullAspect, that you apply to a method, a class, or an assembly. It injects null checks for all public arguments and return values in your code. You can then opt out of the null checking using the AllowNullAttribute.

Here are some examples of usage:

using NullGuard;

[assembly: EnsureNonNullAspect]

public class Sample
{
    public void SomeMethod(string arg) {
        // Throws ArgumentNullException if arg is null.
    }

    public void AnotherMethod([AllowNull]string arg) {
        // arg may be null here.
    }

    public string MethodWithReturn() {
        // Throws InvalidOperationException if the return value is null.
        return SomeProperty;
    }

    // Null checking works for automatic properties too.
    public string SomeProperty { get; set; }

    [AllowNull] // Can be applied to a whole property...
    public string NullProperty { get; set; }

    public string NullSetterProperty {
        get;
        [param: AllowNull] set; // ...or just the setter.
    }
}
For more examples, check out the automated tests in the NullGuard GitHub repository.

By default, the attribute only works for public properties, methods, and constructors. It also validates return values, out parameters, and incoming arguments.

If you need more fine grained control of what gets validated, the EnsureNonNullAspect accepts a ValidationFlags enum. For example, if you only want to validate arguments and not return values, you can specify: [EnsureNonNullAspect(ValidationFlags.AllPublicArguments)].


This approach requires that the NullGuard and PostSharp libraries be redistributed with the application. Also, the generated code for a previously one-line method is a bit verbose.

Another downside is that you’ll need to install the PostSharp Visual Studio extension and register for a license before you can fully use my library. The license for the free community edition is free, but it does add a bit of friction just to try this out.

I’d love to see PostSharp add support for generating IL that’s completely free of dependencies on the PostSharp assemblies. Perhaps by injecting just enough types into the rewritten assembly so it’s standalone.

Try it!

To try this out, install the NullGuard.PostSharp package from NuGet. (It’s a pre-release library, so make sure you include prereleases when you attempt to install it.)

Install-Package NullGuard.PostSharp -IncludePrerelease

Make sure you also install the PostSharp Visual Studio extension.

When you install the NuGet package into a project, it should modify that project to use PostSharp. If not, you’ll need to add an MSBuild task to run PostSharp against your project. Just look at the Tests.csproj file in the NullGuard repository for an example.

If you just want to see things working, clone the NullGuard repository and run the unit tests.

File an issue if you have ideas on how to improve it or anything that’s wonky.

Alternative Approaches and What’s Next?

NullGuard.PostSharp is really an experiment. It’s my first iteration in solving this problem. I think it’s useful in its current state, but there are much better approaches I want to try.

  • Use Fody to write the guard blocks. Fody is an IL weaving tool written by Simon Cropp that provides an MSBuild task to rewrite IL. The benefit of this approach is there is no need to redistribute parts of Fody with the application. The downside is Fody is much more daunting to use compared to PostSharp. It leverages Mono.Cecil and requires a decent understanding of IL. Maybe I can convince Simon to help me out here. In the meantime, I’d better start reading up on IL. I think this will be the next approach I try. UPDATE: Turns out that in response to this blog post, the Fody team wrote NullGuard.Fody that does exactly this!
  • Use T4 to rewrite the source code. Rather than rewrite the IL, another approach would be to rewrite the source code much like T4MVC does with T4 Templates. One benefit of this approach is I could inject code contracts and get all the benefits of having them declared in the source code. The tricky part is doing this in a robust manner that doesn’t mess up the developer’s workflow.
  • Use Roslyn. It seems to me that Roslyn should be great for this. I just need to figure out how exactly I’d incorporate it. Modify source code or update the IL?
  • Beg the Code Contracts team to address this scenario. Like the Temptations, I ain’t too proud to beg.

Yet another alternative is to embrace the shit work, but write an automated test that ensures every argument is properly checked. I started working on a method you could add to any unit test suite that’d verify every method in an assembly, but it’s not done yet. It’s a bit tricky.
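As a rough sketch of what such a test helper might look like (the names here are mine, and this naive version treats any non-ArgumentNullException outcome as a violation, which real code would need to refine):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class NullArgumentChecker
{
    // Calls every public instance method once per reference-type parameter,
    // passing null for that parameter and simple dummy values for the rest.
    // Returns "Method(parameter)" entries that failed to throw ArgumentNullException.
    public static List<string> FindUncheckedParameters(object instance)
    {
        var violations = new List<string>();
        var methods = instance.GetType().GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);
        foreach (var method in methods)
        {
            var parameters = method.GetParameters();
            for (int i = 0; i < parameters.Length; i++)
            {
                if (parameters[i].ParameterType.IsValueType) continue;
                object[] args = parameters
                    .Select((p, j) => j == i ? null : Dummy(p.ParameterType))
                    .ToArray();
                try
                {
                    method.Invoke(instance, args);
                    violations.Add(method.Name + "(" + parameters[i].Name + ")");
                }
                catch (TargetInvocationException e)
                {
                    if (!(e.InnerException is ArgumentNullException))
                        violations.Add(method.Name + "(" + parameters[i].Name + ")");
                }
            }
        }
        return violations;
    }

    static object Dummy(Type type)
    {
        if (type == typeof(string)) return "dummy";
        return type.IsValueType ? Activator.CreateInstance(type) : new object();
    }
}

// An example subject: one guarded method, one unguarded.
public class Sample
{
    public void Guarded(string s)
    {
        if (s == null) throw new ArgumentNullException("s");
    }

    public void Unguarded(string s)
    {
        var length = s == null ? 0 : s.Length;
    }
}
```

The tricky parts, as alluded to above, include constructing sensible dummy values for arbitrary parameter types and distinguishing a missing guard from a method that legitimately throws something else.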

If you have a better solution, do let me know. I’d love for this to be handled by the language or Code Contracts, but right now those just don’t cut it yet.


I wasn’t prepared to write an end-of-year blog post given the impending destruction of the world via a Mayan prophesied cataclysmic fury. But since that didn’t pan out I figured I’d better get typing.


Those of us who are software developers shouldn’t be too surprised that the world didn’t end. After all, how often do projects come in on time, within the estimated date, amirite?! (High five to all of you.)

Highlights of 2012

This year has just been a blast. As my kids turn five and three, my wife and I find them so much more fun to hang out with. Also, this year I reached the one year mark at the best place to work ever. Here’s a breakdown of some of the highlights from the year for me.

  • January Twice a year we have an all-company summit where we get together, give talks, plan, and just have a great time together. This was my first one and I loved every moment of it.
  • February The MVP summit was in town. I wasn’t eligible to be an MVP as a recently departed employee, but I was eligible to host my first GitHub Drinkup for all the MVPs and others in town. We had a big crowd and a great time.
  • March I travelled to the home of the Hobbits, New Zealand, to give a keynote at CodeMania.
  • April My family and I visited Richard Campbell and his family in the Vancouver area. I also recorded a Hanselminutes podcast.
  • May We released GitHub for Windows in May. I also visited GitHub HQ this month for a mini-summit with the other native app teams and recorded more podcasts including Herding Code and Deep Fried Bytes.
  • June I spoke at NDC in Oslo Norway. Had a great conference despite the awkward “Azure Girls” incident.
  • July Gave a last minute talk at ASPConf. The software I used to record it crashed and so there’s no recording of this talk sadly.
  • August Back in San Francisco for the GitHub all-company summit. I cornered Skrillex and forced him to take a photo with me.
  • September Family vacation to Oahu Hawaii. I also end up giving a talk to a local user group and hosting a drink up. And my son started Kindergarten.
  • October I spoke at MonkeySpace and got really fired up about the future of Open Source in the .NET world.
  • November At the end of the month I was a guest on the .NET Rocks Roadshow. We had a rollicking good time. I went on a private tour of SpaceX with the fellas. We took the RV to the venue and I got to sample some of the Kentucky Whiskey they collected on their travels before recording a show on Git, GitHub, NuGet, and the non-hierarchical management model we have at GitHub.
  • December This was a quiet month for me. No travels. No talks. Just hacking on code, spending time with the family, and celebrating one year at GitHub. Oh, I also loved watching this Real Life Fruit Ninja to Dubstep video. Perhaps the highlight of 2012.

Top 3 Blog Posts by the numbers

As I did in 2010, I figured I’d post my top three blog posts according to the Ayende Formula.

  • Introducing GitHub for Windows introduces the Git and GitHub client application my team and I worked on this past year (103 comments, 68,672 web views, 25,048 aggregator views).
  • It’s the Little Things About ASP.NET MVC 4 highlights some of the small improvements in ASP.NET MVC 4 that are easy to overlook, but are nice for those that need them (49 comments, 56,900 web views, 26,044 aggregator views)
  • Structuring Unit Tests covers a nice approach to structuring unit tests for C# developers that I learned from others. This post was written in January which might help explain why it’s in the top three (52 comments, 41,852 web views, 26,073 aggregator views).

My Favorite three posts

These are the three posts that I wrote that were my favorites.

  • You Don’t Need a Thick Skin describes the realization that rather than develop a thick skin, I should focus on developing more empathy for folks that use my software.
  • One year at GitHub is a look back at my year at GitHub and how much I’m enjoying working there.
  • How to Talk to Employees argues that the way to talk to employees is simply the way you’d want to be addressed.

You People

Enough about me, let’s talk about you. As I did in my 2010 post, I thought it’d be fun to post some numbers.

According to Google Analytics:

  • Hello Visitors! 1,880,184 absolute unique visitors (up 6.15% from 2011) made 2,784,021 visits (down half a percent) to my blog. You came from 223 countries/territories. Most of you came from the United States (875,837) followed by India (267,164) with the United Kingdom (221,727) in third place.
  • Browser of choice: Just two years ago, most of my visitors used Firefox. Now it’s Google Chrome with 45.84%. In second place at 26.37% is Firefox, with IE users at 19.08%. Safari is next at 4% with Opera users still holding on to 2%. I really need to stop making those Opera user jokes. You guys are persistent!
  • Operating System: As I expected, most of you (87.16%) are on Windows, but that number seems to decline every year. 5.71% on a Mac and 2.24% on Linux. The mobile devices are a tiny percentage, but I would imagine that’ll pick up a lot next year.
  • What you read: The blog post most visited in 2012 was written in 2011, C# Razor Syntax Quick Reference with 119,962 page views.
  • How’d you get here: Doesn’t take a genius to guess that most folks come to my blog via Google search results (1,691,540), which probably means most of you aren’t reading this. ;) StackOverflow moves to second place (292,670) followed closely by direct visitors (237,862).

My blog is just a single sample, but it’s interesting to me that these numbers seem to reflect trends I’ve seen elsewhere.

Well that’s all I have for 2012. I’m sure there are highlights I forgot to call out that are more memorable or important than the ones I listed. I’m bad at keeping track of things.

One big highlight for me is all the encouraging feedback, interesting comments, insightful thoughts, etc. that I’ve received from many of you in the past year either through comments on my blog or via Twitter. I appreciate it and I hope many of you have found something useful in something I’ve written on my blog or on Twitter. I’ll work hard to provide even more useful writings in the next year.

Happy New Year and I hope that 2013 is even better for you than 2012!


Merry Christmas!

I’m going to be migrating my comment system to Disqus. You might notice missing comments or other such weirdness. Do not be alarmed.

I’ll try to do this at a time I expect the lowest amount of traffic.

Hope you’re having a great holiday season!


I have a confession to make.

I sometimes avoid your feedback on the Twitters. It’s nothing personal. I have a saved search for stuff I work on because I want to know what folks have to say. I want to know what problems they might run into or what ideas they have to improve things. Nonetheless, I sometimes just let the unread notifications sit there while I hesitate and cringe at the thought of the vitriol that might be contained within.

I know. I know. That’s terrible. It’s long been conventional wisdom that if you’re going to write software and ship it to other humans, you better develop a thick skin.

Hey, I used to work at Microsoft. People have…strong…opinions about software that Microsoft ships. It’s the type of place you learn to develop a full body callus of a thick skin. So I’m with you.

But even so, when you invest so much of yourself into something you create, it’s hard not to take criticisms personally. The Oatmeal captures this perfectly in this snippet of his brilliant post: Some thoughts on and musing about making things for the web.


That is me right there. I’m not going to stop shipping software so the best thing to do is work harder and develop a thicker skin. Right?


I strongly believed this for years, but a single blog post changed my mind. This post didn’t say anything I hadn’t heard before. But it was the experience of the author that somehow clicked and caused me to look at things in a new way. In this case, it was Sam Stephenson’s blog post, You are not your code that did it for me.

In his post, he talks about the rise and fall of his creation, Prototype and how he took its failure personally, and the lesson that he learns as a result.

I have learned that in the open-source world, you are not your code. A critique of your project is not tantamount to a personal attack. An alternative take on the problem your software solves is not hostile or divisive. It is simply the result of a regenerative process, driven by an unending desire to improve the status quo.

This sparked an epiphany. Reinforcing a thick skin detaches me from the people using my software. Even worse, it puts me in an adversarial position towards the folks who just want to get something done with the software. This is so wrong. Rather than work on developing a thicker skin, I really should work on developing more empathy.

Show of hands. Have any of you ever been frustrated with a piece of software you’re trying to use? Of course you have! Now put your hand down. You look silly raising your hand for no reason.

How did you feel? I know how I’ve felt. Frustrated. Impotent. Stupid. Angry. Perhaps I said a few words I’m not proud of about how I might inflict bodily harm on the author in anatomically impossible ways should we ever meet in a dark alley.

I certainly didn’t mean those words (except in the case of bundled software written by hardware companies. That shit makes me cray!). I was simply lashing out due to my frustrations.

And it hit me.

The angry tweets calling my work “a piece of crap” are written by folks just like me. Rather than harden my stance in opposition to these folks, I need to be on their side!

I need to remove the adversarial mindset and instead share in their frustration as a fellow human who also understands what it’s like to be angry at software. I no longer need to take this criticism personally. This shift in mindset unblocked me from diving right into all that feedback on Twitter. I started replying to folks with something along the lines of “I’m sorry. That does suck. I know it’s frustrating. I’m going to have a word with the idiot who wrote that (namely me)! Email me with details and I’ll work to get it fixed.”

The end result is I’m able to provide much better support for the software.

By doing this, I’ve also noticed a trend. When you sincerely address people’s frustrations, they tend to respond very warmly. Many of them know what it’s like to be criticized as well. People are quick to forgive if they know you care and will work to make it better.

Sure, there will still be moments where I have a knee jerk reaction and maybe lose my temper for a moment. But I think this framework for how to think about feedback will help me do that much less and preserve my sanity. I am definitely not my code. But I am here to help you with it.

company culture, personal, github 0 comments suggest edit

As of today, I’ve been a GitHub employee for one year and I gotta tell you…


Please forgive me a brief moment to gush, but I really love this company. I  work with a lot of great people. Crazy people for sure, but great. I love them all. Just look at these crazy folks!


I once told a friend that I’ve long had the idea to start a company that would be my ideal work environment.

GitHub is better than that company.

What Makes it Special?

One of my co-workers, Rob Sanheim, recently reached his seven-month anniversary at GitHub and wrote a succinct post that answers this question. And I’m glad for that, as it saves me the trouble of writing a longer, more rambling, unfocused version of his post.

Rob breaks it down to five key points:

  1. Great people above all else
  2. Positive Peer Pressure
  3. GitHub Hiring
  4. Culture of Shipping
  5. Anarchist Structure

Optimize for Happiness

Then again, if I didn’t write a long unfocused rambling post, folks would wonder if they were at the right blog. So I’ll continue with a few more thoughts.

All the points Rob mentioned fall under the overall principle: optimize for happiness. This is not some pie in the sky hippy Kumbaya sing-along around a campfire (though if you’re into that, that’s totally cool). I think of it as a hard-nosed effective business strategy.

You might argue that every company optimizes for happiness in one way or another. But often it’s only the happiness of the owners, founders, shareholders, the executives, or the customers. Sure, we want all these people to be happy too! But not at the cost of employees as it too often happens. Happy employees are more effective and do a better job at making everyone else happy.

For example, as Tom Preston-Werner notes in his Optimizing for Happiness talk:

There are other really great things you can do when you optimize for happiness. You can throw away things like financial projections, hard deadlines, ineffective executives that make investors feel safe, and everything that hinders your employees from building amazing products.

At GitHub we don’t have meetings. We don’t have set work hours or even work days. We don’t keep track of vacation or sick days. We don’t have managers or an org chart. We don’t have a dress code. We don’t have expense account audits or an HR department.

Businesses can be successful when they decide that profit is not their only motivation and treat their employees well (ironically putting them in a good position to make more profits in the long run). Costco is a great example of this.

We’re Not Alone

People ask me if I think having no hierarchy and managers will scale. So far we’re at around 130 employees and we haven’t yet killed each other, so I think it’s promising.

I can understand the skepticism. For most people, a hierarchical management model is the only thing they’ve ever experienced. Fortunately, we’re not the first (and hopefully not the last) to employ this model.

Recently, Valve (makers of Half-Life) published their employee handbook on the web. It’s a delightful read, but the striking thing to me is how similar GitHub’s model is to theirs. In particular this section resonated with me.

Hierarchy is great for maintaining predictability and repeatability. It simplifies planning and makes it easier to control a large group of people from the top down, which is why military organizations rely on it so heavily.

But when you’re an entertainment company that’s spent the last decade going out of its way to recruit the most intelligent, innovative, talented people on Earth, telling them to sit at a desk and do what they’re told obliterates 99 percent of their value.

I think you can replace “an entertainment company” with “a company in a creative industry.” Writing software is a creative process. Your job is not to write the same line of code over and over again. It’s always creating something that’s never existed before.

Ok, so Valve has around 300 employees. But what about a “large” company?

Gore (makers of Gore-Tex) is a company of 8,500 “associates” that works without managers as well. So apparently this model can scale much larger than we are today.

On a Personal Note

I can pinpoint the moment that was the start of this journey to GitHub, though I didn’t know it at the time. It was when I watched the RSA Animate video of Dan Pink’s talk on The surprising truth about what really motivates us. Hint: It’s not more money.

That talk profoundly affected me. I started bringing it up at work every chance I could get. Yes, I was the guy that just wouldn’t shut up about it.

As I reflected on it, I realized it’s so common to spend so much effort and time trying to climb that ladder and earn more pay just because it’s there! I stopped to ask myself. Why? At what cost? Why am I using a ladder when I can take the stairs? Am I stretching this ladder metaphor too far? You know, the big questions.

I later read Zach Holman’s series How GitHub Works and realized that GitHub embodies the key principles that Dan Pink mentioned. That inspired me to try and figure out a way I could add value to GitHub. Before long, I started growing these tentacles and joined GitHub.


After a year at GitHub, I’ve noticed that I’m much less stressed out now, much healthier, and spend a lot more time with the wife and kids than I used to.

A big part of this is due to the family-friendly and balanced work environment that GitHub’s approach results in. I still work a lot. After all, I love to code. But I also spend more of my work time actually, you know, working rather than paying “work tax.” Jason Fried of 37signals has a great TEDx talk entitled Why work doesn’t happen at work.

That’s kind of bad. But what’s even worse is the thing that managers do most of all, which is call meetings. And meetings are just toxic, terrible, poisonous things during the day at work. We all know this to be true, and you would never see a spontaneous meeting called by employees. It doesn’t work that way.

The Year Ahead

I’m really excited to see what the next year has in store for me and GitHub. I’ve had the great pleasure to ship GitHub for Windows with my team and talk about it, along with GitHub, NuGet, and Git, at various conferences. I’ve met so many great people who love what we do, and folks whose work I admire. It’s been a lot of fun this past year!

I’m looking forward to continuing to ship software and speak at a few conferences here and there on whatever people want to hear in the upcoming year.

Before anyone gets their underparts in a tussle, I’m not trying to say every company should be exactly like GitHub. Or that GitHub is perfect and what we do is right for every person or every company. I’m not making that claim. But I do believe many of these ideas can benefit more software companies than not. Every company is different and your mileage may vary.

But wherever you are, I hope you’re working at a place that values you and provides an environment where you can do great work and be happy and balanced. And if not, think about finding such a place, whether it’s at GitHub or elsewhere.

personal 0 comments suggest edit

Once again, those crazy fools Richard Campbell and Carl Franklin are touring around this great country of ours in a big ass RV as part of their .NET Rocks Road Trip. Last time it was for the launch of Visual Studio 2010. This time it coincides with Visual Studio 2012.


At each stop these gentlemen each give a presentation and then they interview a guest. If you’re in Los Angeles this Friday (November 30, 2012), you’re stuck with me as the guest. So stop on by and we’ll try to keep it interesting. If not, we’ll go out for some drinks afterwards and something interesting is bound to happen. These two fellas are full of mischief.

If you do plan to show up, please register here so they can get a rough headcount. I won’t make any promises as I’m notoriously forgetful about these things, but I’ll bring a copy of Professional ASP.NET MVC 4 to give away as well as some GitHub and NuGet stickers.

The last time they had a roadshow I joined them in Mountain View, CA and we had a blast. This time, I’m excited to return to Los Angeles where I went to college and had my first programming job.

math 0 comments suggest edit

The recent elections remind me of interesting paradoxes when you study the mathematics of voting. I first learned of this class of paradoxes as an undergraduate at Occidental College in Los Angeles (well technically Eagle Rock, emphasis always on the Rock!). As a student, I spent a couple of summers as an instructor for OPTIMO, a science and math enrichment program for kids about to enter high school. You know, that age when young men and women’s minds are keenly focused on mathematics and science. What could go wrong?!

For several weeks, these young kids would stay in dorm rooms during the week and attend classes on a variety of topics. Many of these classes were taught by full professors, while others were taught by us student instructors. Every Friday they’d go home for the weekend and leave us instructors with a little time for rest and relaxation that we mostly used to gossip about the kids. I am convinced that programs like this are the inspiration for reality television shows such as The Real World and The Jersey Shore given the amount of drama these teenage kids could pack in a few short weeks.

But as per the usual, I digress.

So how do you keep the attention of a group of hormonally charged teenagers? I still don’t know, but I gave it the best effort. I was always on the lookout for little math nuggets that defied conventional wisdom. One such problem I ran into was the voting paradox.

Voting is a method a group of people use to pick the “best choice” out of a set of candidates. Seems simple, right? When you have two choices, the method of voting is obvious. Majority wins! But when you have more than two choices, things become interesting.

Suppose you have a contest for the best (not biggest) forehead between three candidates. I’ll use my forehead-endowed former co-authors for this example.

You’ll notice I left out last year’s winner, Rob Conery, to keep the math simple.

Also suppose you have three voters who are asked to rank their choices in order of preference. Let’s take a look at the results. In the following table, the candidates are on the top and the voters are on the left.

                 Hanselman   Haack   Guthrie
Mariah Carey         1         3        2
Nicki Minaj          2         1        3
Keith Urban          3         2        1

Deadlock! In this particular case, there is no apparent winner. No matter which candidate you pick, two of the three voters prefer another candidate to that candidate.
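We can see the deadlock by comparing the candidates head to head. Here’s a minimal sketch (the rankings mirror the table above; lower number means more preferred) showing that every candidate loses some pairwise matchup, so the group preference runs in a cycle:

```python
from itertools import combinations

# Each voter's ranking, best (1) to worst (3), mirroring the table above.
rankings = {
    "Mariah Carey": {"Hanselman": 1, "Haack": 3, "Guthrie": 2},
    "Nicki Minaj":  {"Hanselman": 2, "Haack": 1, "Guthrie": 3},
    "Keith Urban":  {"Hanselman": 3, "Haack": 2, "Guthrie": 1},
}

def pairwise_winner(a, b):
    """Return whichever of a and b a majority of voters rank higher."""
    a_votes = sum(1 for r in rankings.values() if r[a] < r[b])
    b_votes = len(rankings) - a_votes
    return a if a_votes > b_votes else b

for a, b in combinations(["Hanselman", "Haack", "Guthrie"], 2):
    print(f"{a} vs {b}: {pairwise_winner(a, b)} wins")
```

Run it and you’ll see Haack beats Hanselman, Hanselman beats Guthrie, and Guthrie beats Haack. Rock, paper, scissors.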

Ok, let’s run this like a typical election where you simply get a vote or a non vote (rather than ranking candidates), but we’ll also add more voters. There will be no hanging chads in this election.

                 Hanselman   Haack   Guthrie
Mariah Carey         X
Nicki Minaj                    X
Keith Urban                             X
Randy Jackson        X
Simon Cowell         X
Paula Abdul                    X
Jennifer Lopez                          X

In this case, Hanselman is the clear winner with three votes, whereas the other two candidates each have two votes. This is how our elections are held today. But note that Hanselman did not win with the majority (over half) of the votes. He won with a plurality. So can we really say he is the choice of the voters when a majority of people prefer someone else to him?

Both of these situations are examples of Condorcet’s Paradox. Condorcet lived in the late 1700s and was a frilly shirt wearing (but who wasn’t back then?) French mathematician philosopher who advocated crazy ideas like public education and equal rights for women and people of all ages.

I see you over there, to my right.

He also studied these interesting voting problems and noted that collective preferences are not transitive but can be cyclic.

Transitive and Nontransitive Relations

For those who failed elementary math, or simply forgot it, it might help to define what we mean by transitive. A transitive relation is a relationship between items in a set with the following property: if a first item is related to a second item in this way, and that second item is related to a third in the same way, then the first item is also related to the third.

The classic example is the relation, “is larger than”. If Hanselman’s forehead is larger than Guthrie’s. And Guthrie’s is larger than mine. Then Hanselman’s must be larger than mine. One way to think of it is that this property transitions from the first element to the last.

But not every relationship is transitive. For example, if you are to my right, and your friend is to your right, your friend isn’t necessarily to my right. She could be to my left if we formed an inward triangle.

Condorcet formalized the idea that group preferences are also non-transitive. If people prefer Hanselman to me. And they prefer me to Guthrie. It does not necessarily mean they will prefer Hanselman to Guthrie. It could be that Guthrie would pull a surprise upset when faced head to head with Hanselman.

Historical Examples

In fact, there are historical examples of this occurring in U.S. presidential elections. This is known as the Spoiler Effect. For example, in the 2000 U.S. election, many contend that Ralph Nader siphoned enough votes from Al Gore to deny him a clear victory. Had Nader not been in the race, Al Gore most likely would have won Florida outright. Of course, Nader is only considered a spoiler if enough voters who voted for him would have voted for Gore had Nader not been in the race to put Gore above Bush in Florida. Multiple polls indicate that this is the case.

In the interest of bipartisanship, Scientific American has another example that negatively affected Republicans in 1976.

Mathematician and political essayist Piergiorgio Odifreddi of the University of Turin in Italy gives an example: In the 1976 U.S. presidential election, Gerald Ford secured the Republican nomination after a close race with Ronald Reagan, and Jimmy Carter beat Ford in the general election, but polls suggested Reagan would have beaten Carter (as indeed he did in 1980).

Reagan had to wait another four years to become President due to that Ford spoiler.

No party is immune from the implications of mathematics.

Condorcet Method

As part of his writings on the voting paradox, Condorcet came up with the Condorcet criterion.

Aside: I have to assume Condorcet had a different name for the criterion when he formulated it and it was named after him by later mathematicians. After all, what kind of vainglorious person applies his own name to theorems?

A Condorcet winner is a candidate who would win every election if paired one on one against every other candidate. Going back to the prior example, if Hanselman would beat me in a one-on-one election. And he would beat Guthrie in a one-on-one election, then Hanselman would be the Condorcet winner.

It’s important to note that not every election has a Condorcet winner. This is the paradox that Condorcet noted. But if there is a Condorcet winner, one would hope that the method of voting would choose that winner. Not every voting method makes this guarantee. For example, the voting method that declares that the candidate with the most votes wins fails to meet this criterion if there are more than two candidates.

A voting method that always elects the Condorcet winner, if such a winner exists in the election, satisfies the Condorcet criterion and is a Condorcet method. Wouldn’t it be nice if our elections at least satisfied this criterion?
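Checking for a Condorcet winner is mechanical: pit each candidate against every other one-on-one. Here’s a sketch (the ballot data is the hypothetical forehead election from earlier; lower rank means more preferred) that returns the Condorcet winner when one exists, or None when the paradox strikes:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every rival head to head, or None."""
    def beats(a, b):
        # Count ballots that rank a above b; a wins the matchup on a majority.
        wins = sum(1 for r in ballots if r[a] < r[b])
        return wins > len(ballots) - wins

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # Condorcet's paradox: no such winner

# The cyclic rankings from the forehead contest produce no winner.
ballots = [
    {"Hanselman": 1, "Haack": 3, "Guthrie": 2},
    {"Hanselman": 2, "Haack": 1, "Guthrie": 3},
    {"Hanselman": 3, "Haack": 2, "Guthrie": 1},
]
print(condorcet_winner(ballots, ["Hanselman", "Haack", "Guthrie"]))  # None
```

Change any one ballot to break the cycle and a winner pops out, which is exactly what the criterion demands of a voting method.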

Arrow’s Impossibility Theorem

It might feel comforting to know methods exist that can choose a Condorcet winner. But that feeling is fleeting when you add Arrow’s Impossibility Theorem to the mix.

In an attempt to devise a voting system that would be consistent, fair (according to a set of fairness rules he came up with), and always choose a clear winner, Arrow instead proved it was impossible to do so when there are more than two candidates.

In short, the theorem states that no rank-order voting system can be designed that satisfies these three “fairness” criteria:

  • If every voter prefers alternative X over alternative Y, then the group prefers X over Y.
  • If every voter’s preference between X and Y remains unchanged, then the group’s preference between X and Y will also remain unchanged (even if voters’ preferences between other pairs like X and Z, Y and Z, or Z and W change).
  • There is no “dictator”: no single voter possesses the power to always determine the group’s preference.

On one hand, this seems to be an endorsement of the two-party political system we have in the United States. Given only two candidates, the “majority rules” criterion is sufficient to choose the preferred candidate that meets the fairness criteria Arrow proposed.

But of course, politics in real life is so much messier than the nice clean divisions of a math theorem. A voting system can only, at times, choose the most preferred of the options given. But it doesn’t necessarily present us with the best candidates to choose from in the first place.

open source, nuget, community, code 0 comments suggest edit

In my last post, I talked about the MonkeySpace conference and how it reflects positive trends in the future of open source in .NET. But getting to a better future is going to take some work on our part. And a key component of that is making NuGet better.

Several discussions at MonkeySpace made it clear to me that there is some pervasive confusion and misconceptions about NuGet. It also made it clear that there are some dramatic changes needed for NuGet to continue to grow into a great open source project. In this post, I’ll cover some of these misconceptions and draw an outline of what I hope to see NuGet grow into.

Myth: NuGet is tied to Visual Studio and Windows

This is only partially true. The most popular NuGet client is clearly the one that ships in Visual Studio. Also, NuGet packages may contain PowerShell scripts. PowerShell is not currently available on any operating system other than Windows.

However, the architecture of NuGet is such that there’s a core assembly, NuGet.Core.dll, that has no specific ties to Visual Studio. The proof of this is in the fact that ASP.NET Web Pages and WebMatrix both have NuGet clients. In these cases, the PowerShell scripts are ignored. Most packages do not contain PowerShell scripts, and for those that do, the changes the scripts make are often optional or easily made manually.

In fact, there’s a NuGet.exe which is a wrapper of NuGet.Core.dll that runs on Mono. Well sometimes it does; and this is where we need your help! So far, Mono support for NuGet.exe has been low priority for the NuGet team. But as I see the growth of Mono, I think this is something we want to improve. My co-worker, Drew Miller (also a former member of the NuGet and ASP.NET MVC team) is keen to make better Mono support a reality. Initially, it could be as simple as adding a Mono CI server to make sure NuGet.exe builds and runs on Mono. Ultimately, we would like to build a MonoDevelop plugin.

Initially, it will probably simply ignore PowerShell scripts. There’s an existing CodePlex work item to provide .NET equivalents to Install.ps1 and the other scripts.

I created a personal fork of the NuGet project under my GitHub account. This’ll be our playground for experimenting with these new features, with the clear goal of getting these changes back into the official NuGet repository.

Myth: NuGet isn’t truly Open Source

This is an easy myth to dispel. Here’s the license file for NuGet. NuGet is licensed under the Apache version 2 license, and meets the Open Source Definition defined by the Open Source Initiative. The NuGet team accepts external contributions as well, so it’s not just open source, but it’s an open and collaborative project.

But maybe it’s not as collaborative as it could be. I’ll address that in a moment.

Myth: NuGet is a Microsoft Project

On paper, NuGet is a fully independent project of the Outercurve Foundation. If you look at the COPYRIGHT.txt file in the NuGet source tree, you’ll see this:

Copyright 2010 Outercurve Foundation

Which makes me realize we need to update that file with the current year, but I digress! That’s right, Microsoft assigned the copyright over to the Outercurve Foundation. Contributors are asked to assign copyright for their contributions to the foundation as well. So clearly this is not a Microsoft project, right?

Well if you look at the entry in the Visual Studio Extension Manager (or the gallery), you’ll see this:


Huh? What gives? Well, it’s time for some REAL TALK™.

There’s nothing shady going on here. In the same way that Google Chrome is a Google product with its own EULA that incorporates the open source Chromium project, and Safari is an Apple product with its own EULA that incorporates the open source WebKit project, the version of NuGet included in Visual Studio 2012 is officially named the NuGet-Based Microsoft Package Manager and is a Microsoft product with its own EULA that incorporates the open source NuGet project. This is a common practice among companies well known for shipping “open source” and all complies with the terms of the license. You are always free to build and install the Outercurve version of NuGet into Visual Studio should you choose.

Of course, unlike the other two examples, NuGet is a bit confusing because both the proprietary version and the open source version contain the word “NuGet.” This is because we liked the name so much and because it had established its identity that we felt not including “NuGet” in the name of the Microsoft product would cause even more confusion. I almost wish we had named the open source version “NuGetium” following the Chromium/Chrome example.

This explains why NuGet is included in the Visual Studio Express editions when it’s well known that third party extensions are not allowed. It’s because NuGet is not included; it’s the NuGet-Based Microsoft Package Manager that’s included.

NuGet is not a Community Project

Ok, this claim is a toss-up. As I pointed out before, NuGet is truly an open source project that accepts external community contributions. But is it really a “community project”?

As the originator of the project, the sole provider of full-time contributors, and a huge benefactor of the Outercurve Foundation, Microsoft clearly wields enormous influence on the NuGet project. Also, more and more parts of Microsoft are realizing the enormous potential of NuGet and getting on board with shipping packages. NuGet is integrated into Visual Studio 2012. These are all great developments! But they also lessen the incentive for Microsoft to give up any control of the project to the community at large.

So while I still maintain it is a community project, in its current state the community’s influence is marginalized. But this isn’t entirely Microsoft’s intention or fault. Some of it has to do with the lack of outside contributors. Especially from those who have built products and even businesses on top of NuGet.

My Role With NuGet

Before I talk about what I hope to see in NuGet’s future, let me give you a brief rundown of my role. From the Outercurve perspective, I’m still the project lead of NuGet, the open source project. Microsoft of course has a developer lead, Jeff Handley, and a Program Manager, Howard Dierking, who run the day to day operations of NuGet and manage Microsoft’s extensive contributions to NuGet.

Of course, since NuGet is no longer a large part of my day job, it’s been challenging to stay involved. I recently met with Howard and Jeff to figure out how my role fits in with theirs and we all agreed that I should stay involved, but focus on the high level aspects of the project. So while they run the day to day operations such as triage, feature planning, code reviews, etc. I’ll still be involved in the direction of NuGet as an independent open source project. I recently sat in on the feature review for the next couple of versions of NuGet and will periodically visit my old stomping grounds for these product reviews.

The Future of NuGet

Over time, I would like to see NuGet grow into a true community driven project. This will require buy-in from Microsoft at many levels as well as participation from the NuGet community.

In this regard, I think the governance model of the Orchard Project is a great example of the direction that NuGet could head in. In September of 2011, Microsoft transferred control of the Orchard project to the community. As Bertrand Le Roy writes:

Back in September, we did something with Orchard that is kind of a big deal: we transferred control over the Orchard project to the community.

Most Open Source projects that were initiated by corporations such as Microsoft are nowadays still governed by that corporation. They may have an open license, they may take patches and contributions, they may have given the copyright to some non-profit foundation, but for all practical purposes, it’s still that corporation that controls the project and makes the big decisions.

That wasn’t what we wanted for Orchard. We wanted to trust the community completely to do what’s best for the project.

Why didn’t NuGet follow this model already? It’s complicated.

With something so integrated into so many areas of Microsoft now, I think this is a pretty bold step for Microsoft to take. It’ll take time to reach this goal, and it’ll take us, the community, demonstrating to Microsoft and others who are invested in NuGet’s future that we’re fit and ready to take on this responsibility.

As part of that, I would love to see more corporate sponsors of NuGet supplying contributors. Especially those that profit from NuGet. For example, while GitHub doesn’t directly profit from NuGet, we feel anything that encourages open source is valuable to us. So Drew and I will spend some of our time on NuGet in the upcoming months. The reason I don’t spend more time on NuGet today is really a personal choice and prioritization, not because I’m not given work time to do it since I pretty much define my own schedule.

If you are a company that benefits from NuGet, consider allotting time for some of your developers to contribute back (or become a sponsor of the Outercurve Foundation). Consider it an investment in having more of a say in the future of NuGet should Microsoft transfer control over to the community. NuGet belongs to us all, but we have to do our part to own it.