github git

Show of hands if this ever happens to you. After a long day of fighting fires at work, you settle into your favorite chair to unwind and write code. Your fingers fly over the keyboard punctuating your code with semi-colons or parentheses or whatever is appropriate.

But after a few commits, it dawns on you that you’re in the wrong branch. Yeah? Me too. This happens to me all the time because I lack impulse control. You can put your hands down now.

GitHub Flow

As you may know, a key component of the GitHub Flow lightweight workflow is to do all new feature work in a branch. Fixing a bug? Create a branch! Adding a new feature? Create a branch! Need to climb a tree? Well, you get the picture.

So what happens when you run into the situation I just described? Are you stuck? Heavens no! The thing about Git is that its very design supports fixing up mistakes after the fact. It’s very forgiving in this regard. For example, a recent blog post on the GitHub blog highlights all the different ways you can undo mistakes in Git.

The Easy Case - Fixing master

This is the simple case. I made commits on master that were intended for a branch off of master. Let’s walk through this scenario step by step with some visual aids.

The following diagram shows the state of my repository before I got all itchy trigger finger on it.

Initial state

As you can see, I have two commits to the master branch. HEAD points to the tip of my current branch. You can also see a remote tracking branch named origin/master (this is a special branch that tracks the master branch on the remote server). So at this point, my local master matches the master on the server.

This is the state of my repository when I am struck by inspiration and I start to code.


I make one commit. Then two.

Second Commit - fixing time

Each time I make a commit, the local master branch is updated to the new commit. Uh oh! As in the scenario in the opening paragraph, I meant to create these two commits on a new branch creatively named new-branch. I better fix this up.

The first step is to create the new branch. We can create it and check it out all in one step.

git checkout -b new-branch

checkout a new branch

At this point, both the new-branch and master point to the same commit. Now I can force the master branch back to its original position.

git branch --force master origin/master

force branch master

Here’s the set of commands that I ran all together.

git checkout -b new-branch
git branch --force master origin/master
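If you want to try this safely, the whole scenario can be reproduced in a throwaway repository. This sketch is purely illustrative: the temp directory, the bare `remote.git` standing in for the server, and the empty commits A through D are all assumptions for the demo.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A bare repo stands in for the server, a normal repo for my machine.
git init -q --bare remote.git
git init -q work && cd work
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named master
git config user.email "you@example.com"
git config user.name "You"
git remote add origin ../remote.git

# Commits A and B exist on the server; origin/master points at B.
git commit -q --allow-empty -m "A"
git commit -q --allow-empty -m "B"
git push -q -u origin master

# Uh oh: C and D land on master but were meant for a branch.
git commit -q --allow-empty -m "C"
git commit -q --allow-empty -m "D"

# The fix: branch off right here, then rewind master to the server.
git checkout -q -b new-branch
git branch --force master origin/master
```

Afterwards, `git log --oneline master..new-branch` shows just C and D, and master once again matches origin/master.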

Fixing up a non-master branch

The wrong branch

This case is a bit more complicated. Here my current branch is one named wrong-branch, but I thought I was working in master. I make two commits in this branch by mistake, which causes this fine mess.

A fine mess

What I want here is to migrate commits E and F to a new branch off of master. Let’s walk through the steps one by one.

Not to worry, as before, I create a new branch.

git checkout -b new-branch

Always a new branch

Again, just like before, I force wrong-branch back to its state on the server.

git branch --force wrong-branch origin/wrong-branch

force branch

But now, I need to move the commits from the branch new-branch onto master.

git rebase --onto master wrong-branch

Note that git rebase --onto works on the current branch (HEAD). So git rebase --onto master wrong-branch is saying migrate the commits between wrong-branch and HEAD onto master.

Final result

The git rebase command is a great way to move (well, actually you replay commits, but that’s a story for another day) commits onto other branches. The handy --onto flag makes it possible to specify a range of commits to move elsewhere. Pivotal Labs has a helpful post that describes this option in more detail.

So in this case, I moved commits E and F because they are the ones since wrong-branch on the current branch, new-branch.

Here’s the set of commands I ran all together.

git checkout -b new-branch
git branch --force wrong-branch origin/wrong-branch
git rebase --onto master wrong-branch
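Here’s the whole second scenario reproduced in a throwaway repository, so you can watch the branches move. Everything here is illustrative: the bare `remote.git` plays the server, and each commit just adds a file so the rebase has something to replay.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q --bare remote.git
git init -q work && cd work
git symbolic-ref HEAD refs/heads/master
git config user.email "you@example.com"
git config user.name "You"
git remote add origin ../remote.git

# master has A and B, both pushed.
echo A > a.txt && git add a.txt && git commit -qm "A"
echo B > b.txt && git add b.txt && git commit -qm "B"
git push -q -u origin master

# wrong-branch has C, also pushed, so origin/wrong-branch points at C.
git checkout -q -b wrong-branch
echo C > c.txt && git add c.txt && git commit -qm "C"
git push -q -u origin wrong-branch

# Oops: E and F land on wrong-branch by mistake.
echo E > e.txt && git add e.txt && git commit -qm "E"
echo F > f.txt && git add f.txt && git commit -qm "F"

# The fix.
git checkout -q -b new-branch
git branch --force wrong-branch origin/wrong-branch
git rebase -q --onto master wrong-branch
```

When the dust settles, wrong-branch matches the server again and new-branch holds copies of E and F replayed on top of master.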

Migrate commit ranges - great for local-only branches

The assumption I made in the past two examples is that I’m working with branches that I’ve pushed to a remote. When you push a branch to a remote with the -u option, Git creates a local “remote-tracking branch” that tracks the state of the branch on the remote server and sets it as the upstream for your local branch.

For example, when I pushed the wrong-branch, I ran the command git push -u origin wrong-branch which not only pushes the branch to the remote (named origin), but creates the branch named origin/wrong-branch which corresponds to the state of wrong-branch on the server.

I can use a remote tracking branch as a convenient “Save Point” that I can reset to if I accidentally make commits on the corresponding local branch. It makes it easy to find the range of commits that are only on my machine and move just those.
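That “Save Point” is easy to query, too. If the branch has an upstream (set by `git push -u`), the shorthand `@{u}` refers to its remote-tracking branch, so `@{u}..HEAD` is exactly the set of local-only commits. Here’s a throwaway-repo sketch (all names and commits illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q --bare remote.git
git init -q work && cd work
git symbolic-ref HEAD refs/heads/master
git config user.email "you@example.com"
git config user.name "You"
git remote add origin ../remote.git

git commit -q --allow-empty -m "pushed"
git push -q -u origin master        # origin/master is now the save point

git commit -q --allow-empty -m "local only"

# Lists only the commits that haven't been pushed yet:
git log --oneline "@{u}..HEAD"
```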

But I could be in the situation where I don’t have a remote branch. Or maybe the branch I started muddying up already had a local commit that I don’t want to move.

That’s fine, I can just specify a commit range. For example, if I only wanted to move the last commit on wrong-branch into a new branch, I might do this.

git checkout -b new-branch
git branch --force wrong-branch HEAD~1
git rebase --onto master wrong-branch

Alias was a fine TV show, but a better Git technique

When you see the set of commands I ran, I hope you’re thinking “Hey, that looks like a rote series of steps and you should automate that!” This is why I like you. You’re very clever and very correct!

Automating a series of git commands sounds like a job for a Git Alias! Aliases are a powerful way of automating or extending Git with your own Git commands.

In a blog post I wrote last year, GitHub Flow Like a Pro with these 13 Git aliases, I wrote about some aliases I use to support my workflow.

Well now I have one more to add to this list. I decided to call this alias migrate. Here’s the definition for the alias. Notice that it uses git rebase --onto, which we used for the second scenario I described. It turns out that this happens to work for the first scenario too.

    migrate = "!f(){ CURRENT=$(git symbolic-ref --short HEAD); git checkout -b $1 && git branch --force $CURRENT ${3-'$CURRENT@{u}'} && git rebase --onto ${2-master} $CURRENT; }; f"

There’s a lot going on here and I could probably write a whole blog post unpacking it, but for now I’ll try and focus on the usage pattern.
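To give a taste of the unpacking, here’s the same logic written out as a plain shell function. This is a readable sketch of what the alias does, not a drop-in replacement for it; `$1`, `$2`, and `$3` are the branch name, target branch, and commit range, and the `@{u}` default is spelled out as an explicit check:

```shell
migrate() {
  CURRENT=$(git symbolic-ref --short HEAD)   # the branch we're rescuing commits from
  TARGET=${2-master}                         # branch to replay the commits onto
  RANGE=$3                                   # where to rewind CURRENT to...
  [ -n "$RANGE" ] || RANGE="$CURRENT@{u}"    # ...defaulting to its upstream

  git checkout -b "$1" &&                    # 1. new branch at the stray commits
  git branch --force "$CURRENT" "$RANGE" &&  # 2. rewind the old branch
  git rebase --onto "$TARGET" "$CURRENT"     # 3. replay the commits onto the target
}
```

Running `migrate new-branch` from a botched branch leaves that branch matching its upstream and puts the stray commits on new-branch, just like the manual steps above.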

This alias has one required parameter, the new branch name, and two optional parameters.

  branch-name    (required)   Name of the new branch.
  target-branch  (optional)   The branch that the new branch is created off of. Defaults to “master”.
  commit-range   (optional)   The commits to migrate. Defaults to the current remote-tracking branch.

This command always migrates the current branch.

If I’m on a branch and want to migrate the local only commits over to master, I can just run git migrate new-branch-name. This works whether I’m on master or some other wrong branch.

I can also migrate the commits to a branch created off of something other than master using this command: git migrate new-branch other-branch

And finally, if I want to just migrate the last commit to a new branch created off of master, I can do this.

git migrate new-branch master HEAD~1

And there you go. A nice alias that automates a set of steps to fix a common mistake. Let me know if you find it useful!

Also, I want to give a special thanks to @mhagger for his help with this post. The original draft pull request had the grace of a two-year-old neurosurgeon with a mallet. The straightforward Git commands I proposed would rewrite the working tree twice. With his proposed changes, this alias never rewrites the working tree. Like math, there’s often a more elegant solution with Git once you understand the available tools.

code bugs software

The beads of sweat gathered on my forehead were oddly juxtaposed against the cool temperature of the air conditioned room. But there they were, caused by the heat of the CTO’s anger. I made a sloppy mistake and now sat in his office wondering if I was about to lose my job. My first full-time job. I recently found some archival footage of this moment.

I wore headphones everywhere back then

So why do I write about this? Unless you’ve been passed out drunk in a gutter for the last week (which is much more believable than living under a rock), you’ve heard about this amazing opus by Paul Ford entitled “What is Code?”

If you haven’t read it yet, cancel all your appointments, grab a beer, find a nice shady spot, and soak it all in. The whole piece is great, but there was one paragraph in particular that I zeroed in on. In the intro, Paul talks about his programming start.

I began to program nearly 20 years ago, learning via oraperl, a special version of the Perl language modified to work with the Oracle database. A month into the work, I damaged the accounts of 30,000 fantasy basketball players. They sent some angry e-mails. After that, I decided to get better.

This was his “getting better moment” and like many such moments, it was the result of a coding mistake early in his career. It caused me to reminisce about the moment I decided to get better.

When I graduated from college, websites were still in black and white and connected to the net by string and cans. They pretty much worked like this.

The Internet circa 1997 - image from Wikipedia - Public Domain

As a fresh graduate, I was confident that I would go on to grad school and continue my studies in Mathematics. But deep in debt, I decided to get a job long enough to pay down this debt a bit before I returned to the warm comfort of academia. After all, companies were keen to hire people to work on this “Web” thing. It wouldn’t hurt to dabble.

Despite my lack of experience, a small custom software shop named Sequoia Softworks hired me. It was located in the quaint beach town of Seal Beach, California. You know it’s a beach town because it’s right there in the name. The company is still around under the name Solien and now is located in Santa Monica, California.

My first few weeks were a nervous affair as my degree in Math was pretty much useless for the work I was about to engage in. Sure, it prepared me to think logically, but I didn’t know a database from a VBScript, and my new job was to build database driven websites with this hot new technology called Active Server Pages (pre .NET, we’d now call this “Classic ASP” if we call it anything).

Fortunately, the president of the company assigned a nice contractor to mentor me. She taught me VBScript, ADODB, and how to access a SQL Server database. Perhaps the most valuable lesson I learned was this:

Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open("Driver={SQL Server};Server=XXX;database=XXX;uid=XXX;pwd=XXX")
Set rs = conn.Execute("SELECT * FROM SomeTable")
Do Until rs.EOF
  ' ... work with the current record ...

  ' Never ever ever ever forget to call rs.MoveNext,
  ' or this loop runs forever.
  rs.MoveNext
Loop


As the comment states, never ever ever ever forget to call rs.MoveNext. Ever.

A benefit of working at a tiny company was that it wasn’t long before I got to work on important and interesting projects. One of these projects was a music website called myLaunch. The website was a companion to the multimedia Launch CD-ROM magazine. Cutting-edge stuff; you can still find them on Amazon. I wish I had kept the Radiohead issue. It sells for $25!

Launch magazine

It wasn’t long before the CD-ROM magazine was discontinued and the website became the main product. Launch later was bought by and incorporated into Yahoo Music.

One of my tasks was to make some changes to the Forgot Password flow. I dove in and cranked out the improvements. This was before Extreme Programming popularized the idea of test driven development, so I didn’t write any automated tests. I was so green, it hadn’t even occurred to me yet that such a thing was possible.

So I manually tested my changes. At least, I’m pretty sure I did. I probably tried it a couple times, saw the record in the database, might have seen the email or not. I don’t recall. You know, rigorous testing.

And that brings me to the beginning of this post. Not long after the change was deployed the CTO (and co-founder) called me into his office. Turns out that a Vice President at our client company had a daughter who used the website to read about her favorite bands and she had forgotten her password. She went to reset her password, but never got the email with the new generated password and was completely locked out. And we had no way of knowing how many people had run into this problem and were currently locked out, never to return.

When I returned to my desk and sprinkled the code with Response.Write statements (the sophisticated debugging technique I had at my disposal), I discovered that sure enough, the code to email the new password never ran due to a logic bug.

I soon learned there’s a pecking order to finding bugs. It’s better to

  1. … have the computer find the bug (compiler, static analysis, unit tests) than to find it at runtime.
  2. … find a bug at runtime yourself (or have a co-worker find it) before a user runs into it.
  3. … have a user find a bug (and report it to you) before the daughter of your client’s Vice President does.

I wasn’t fired right then, but it was made clear to me that I would be if I made another mistake like that. Gulp! And by “Gulp!” I don’t mean the JavaScript build system.

Inspired by a fear of losing my job, this was my Getting Better Moment. Like Paul Ford, I decided right then to get better. Problem was, I wasn’t sure exactly how to go about it. Fortunately, a co-worker had the answer. He lent me his copy of Code Complete, and my eyes were opened.

Reading this book changed the arc of my career for the better. Programming went from a dalliance to pay off some of my student loan bills, to a profession I wanted to make a career out of. I fell in love with the practice and craft of writing code. This was for me.

The good news is I never was fired from my first job. I ended up staying there seven years, growing into a lead and then a manager of all the developers, before deciding to leave when my interests led elsewhere. During that time, I certainly deployed more bugs, but I was much more rigorous and the impact of those bugs was small.

So there you go, that’s my Getting Better Moment. What was yours like?

conf github travel medical

This past week I had the great pleasure to speak in Puerto Rico at their TechSummit conference.

Tech Summit 2015 is the premier new business development and technology event pushing the boundaries for government redesign and transformation.

My colleague Ben Balter referred me to the CIO of Puerto Rico, Giancarlo Gonzales, an energetic advocate for government embrace of technology, to speak about the transformative power of Open Source on businesses and government agencies. I partnered with Rich Lander from Microsoft. Rich is on the CLR team and has been heavily involved in all the work to open source the Core CLR, etc.

Colorful Puerto Rico Buildings

The local technology sector is heavily pro Microsoft. Giancarlo had a vision that we could help convince these business leaders that the world has changed, Microsoft has changed, and it’s now perfectly fine to mix and match technologies as they make sense. It’s ok to use GitHub along with your Microsoft stack. You don’t have to be locked in to a single vendor. We tried our best.

The Forts

Most of my time in Puerto Rico was spent working on the talk and in an emergency room (more on that later). But Rich and I did manage a short trip to the forts at San Cristóbal and El Morro.

San Cristobal

I was absolutely giddy with excitement when I set foot in these forts. As a fan of Sid Meier’s Pirates and later Assassin’s Creed Black Flag, which both take place in the West Indies, I really enjoyed seeing one of the settings in real life.

El Morro

There are tunnels to explore, ramparts to patrol, and views of the ocean to soak in. I highly recommend a visit. The impressiveness of the forts is a reflection of Puerto Rico’s history as a strategically important outpost.

ER shenanigans

A couple weeks back, while back home in Bellevue, I hurt my elbow somehow. I’m not even sure how, but almost certainly one of my many injuries from playing soccer.

It was sore for a while, but no big deal. A couple days before I was set to fly to Puerto Rico, my elbow started to swell with fluid. Looking online, it appeared to be elbow (olecranon) bursitis. This is when the bursa in the elbow gets inflamed due to trauma and fluid starts to gather. I went to an urgent care clinic and received a prescription for an anti-inflammatory and a bandage to wrap my arm for compression. At this point, because there was no external wound, the doctor didn’t think it was likely to be infected. However, we did both notice that my elbow was very hot to the touch.

Unfortunately, it kept getting worse every day from that point on. I just assumed it was taking time for the medicine to really kick in. But it came to a head the night before my talk. I was in pain and I couldn’t sleep. At this point, I felt like my body was trying to tell me something. And if you’re a long time reader of my blog, you’ll know I’ve been in this situation before. I also noticed that my elbow had gone from a soft sack of fluid to become very hard. I was a bit nervous.

So I got out of bed at 2 AM and grabbed a taxi to the emergency room at Ashford Presbyterian hospital. The doctor took a look at it, ordered some X-rays, and gave me an IV of antibiotics.

Antibiotics IV

It turns out that I had experienced an elbow fracture and there was a small bone chip. The doctor prescribed a more powerful anti-inflammatory and some antibiotics. He also gave me a sling to wear.

I ended up getting back to the hotel around 7:30 AM. I immediately headed out to fill the prescriptions and then Rich and I continued to work on our talks up until the point we had to go on stage and deliver the talk.

This was the first all-nighter I’ve pulled in a very long time. I only tell the story for two reasons. I mentioned the ER visit on Twitter and some folks expressed concern. I wanted them to know it’s not as bad as it sounds. But it does suck.

But more importantly, once again it’s a reminder to listen to your body when it’s giving you pain signals. The last time I shared one of my medical stories, I heard back that people appreciated the heads up.

UPDATE So when I got back to the States, I got another X-ray and it turns out it’s not a fracture at all. Tendons can have a bit of calcification that looks like bone chips. I noticed the X-ray at my local hospital is much higher resolution. What I have is called septic bursitis (bursitis with a side of infection). So I’m still on a bunch of antibiotics.

github visualstudio

I heard you liked GitHub, so today my team put GitHub inside of your Visual Studio. This has been a unique collaboration with the Visual Studio team. In this post, I’ll walk you through installation and the features. I’ll then talk a bit about the background for how this came to be.

If you are attending Build 2015, I’ll be giving a demo of this as part of the talk Martin Woodward and I are giving in room 2009

If you’re a fan of video, here’s a video I recorded for Microsoft’s Channel 9 site that walks through the features. I also recorded an interview with the .NET Rocks folks where we have a rollicking good time talking about it.


If you have Visual Studio 2015 installed, visit the Visual Studio Extension gallery to download and install the extension. You can use the following convenient URL to grab it:

If you haven’t installed Visual Studio 2015 yet, you can get the extension as part of the installation process. Just make sure to customize the installation.

Customize install

This brings up a list of optional components. Choose wisely. Choose GitHub!

GitHub option

This’ll install the GitHub Extension for Visual Studio (hereafter shortened to GHfVS to save my fingers) as part of the Visual Studio installation process.


One of the previous pain points with working with GitHub using Git inside of Visual Studio was dealing with Two-Factor authentication. If you have 2fa set up (and you should!), then you probably ran across this great post by Kris van der Mast.

I hope you don’t mind Kris, but we’ve just made your post obsolete.

If you go to the Team Explorer section, you’ll see an invitation to connect to GitHub.

GitHub Invitation Section

Click the “Connect…” button to launch the login dialog. If you’ve used GitHub for Windows, this’ll look a bit familiar.

Login Dialog

After you log in, you’ll see the Two-Factor authentication dialog if you have 2fa enabled.

2fa dialog

Once you log in, you’ll see a new GitHub section in Team Explorer with a button to clone and a button to create.

GitHub Section


Click the clone button to launch the Repository Clone Dialog. This is a quick way to get one of your repositories (or any repository shared with you) into Visual Studio.

Clone dialog

Double click a repository (or select one and click Clone) to clone it to your machine.


Click the “create” button to launch the Repository Creation Dialog. This lets you create a repository both on your machine and on GitHub all at once.

Create dialog

Repository Home Page

When you open a repository in Visual Studio that’s connected to GitHub (its remote “origin” is a URL), the Team Explorer homepage provides GitHub-specific navigation items.

GitHub Repository Home Page

Many of these, such as Pull Requests, Issues, and Graphs, simply navigate you to But over time, who knows what could happen?


If you have a repository open that does not have a remote (it’s local only), click on the Sync navigation item for the repository and you’ll see a new option to publish to GitHub.

Publish control

Open in Visual Studio

The last feature is actually a change to When you log in to the extension for the first time, learns that you have the extension installed. So if you’re also logged in to, you’ll notice a new button under the Clone in Desktop button.

Open in Visual Studio

The Open in Visual Studio button launches Visual Studio 2015 and clones the repository to your machine.


This has been an exciting and fun project to work on with the Visual Studio and TFS team. It required that Microsoft create some new extensibility points for us and helped walk us through getting included in the new optional installation process.

On the GitHub side, Andreia Gaita (shana on GitHub and @sh4na on Twitter) and I wrote most of the code, borrowing heavily from GitHub for Windows (GHfW). Andreia provided the expertise, especially with Visual Studio extensibility. I provided moral support, cheerleading, and helped port code over from GHfW.

This collaboration with Microsoft really highlights the New Microsoft to me. When I pitched this project, our CEO asked why we don’t just ask Microsoft to include it. Based on my history and battle scars, I gave him several rock-solid reasons why that would never ever ever happen. But later, I had an unrelated conversation with my former Microsoft manager, Scott Hunter, who was regaling me with how much commitment the new CEO of Microsoft, Satya Nadella, has to changing the company. Even drastic changes.

So that got me thinking, it doesn’t hurt to ask. So I went to a meeting with Somasegar (aka Soma), the Corporate VP of Developer Division and asked him. I’m pretty sure it went something like, “Hey, I don’t know if you’d be interested in this crazy idea. I mean, just maybe, only if you’re interested, it’s no big deal if you don’t want to. But, what do you think of including GitHub functionality inside of Visual Studio?” Ok, maybe I didn’t downplay it that much, but I wasn’t expecting what happened next.

Without hesitation, he said yes! Let’s do it! And so here we are, working hard to make using GitHub an amazing and integrated part of working with your code from Visual Studio. Stay tuned as we have big plans for the future.

oss nuget

The other day I was discussing the open source dependencies we had in a project with a lawyer. Forgetting my IANAL (I am not a lawyer) status, I made some bold statement regarding our legal obligations, or lack thereof, with respect to the licenses.

I can just see her rolling her eyes and thinking to herself, “ORLY?” She patiently and kindly asked if I could produce a list of all the licenses in the project.

Groan! This means I need to look at every package in the solution and then either open the package and look for the license URL in the metadata, or search for each package and find the license on

If only the original creators of NuGet exposed the package metadata in a structured manner. If only they had the foresight to provide that information in a scriptable fashion.

Then it dawned on me. Hey! I’m one of those people! And that’s exactly what we did! I bet I could programmatically access this information. So I immediately opened up the Package Manager Console in Visual Studio and cranked out a PowerShell script…HA HA HA! Just kidding. I, being the lazy ass I am, turned to Google and hoped someone else figured it out before me.

I didn’t find an exact solution, but I found a really good start. This StackOverflow answer by Matt Ward shows how to download every license for a single package. I then found this post by Ed Courtenay to list every package in a solution. I combined the two together and tweaked them a bit (such as filtering out null project names) and ended up with this one liner you can paste into your Package Manager Console. Note that you’ll want to change the path to something that makes sense on your machine.

I posted this as a gist as well.

@( Get-Project -All | ? { $_.ProjectName } | % { Get-Package -ProjectName $_.ProjectName } ) | Sort -Unique | % { $pkg = $_ ; Try { (New-Object System.Net.WebClient).DownloadFile($pkg.LicenseUrl, 'c:\dev\licenses\' + $pkg.Id + ".txt") } Catch [system.exception] { Write-Host "Could not download license for $pkg" } }

UPDATE: My first attempt had a bug in the catch clause that would prevent it from showing the package when an exception occurred. Thanks to Graham Clark for noticing it, Stephen Yeadon for suggesting a fix, and Gabriel for providing a PR for the fix.

Be sure to double check that the list is correct by comparing it to the list of package folders in your packages directory. This isn’t the complete list for my project because we also reference submodules, but it’s a really great start!

I have high hopes that some PowerShell guru will come along and improve it even more. But it works on my machine!

personal management

A lot of the advice you see about management is bullshit. For example, I recently read some post, probably on some pretentious site like, about how you shouldn’t send emails late at night if you’re a manager because it sends the wrong message to your people. It creates the impression that your people should be working all the time and destroys the idea of work-life balance.

whaaaaat's happening?

Don’t get me wrong, I get where they’re coming from. The 1990s.

For some reason, this piece of management advice made me angry. Let me describe my team. I have one person in San Francisco, two in Canada, one in Sweden, one in Copenhagen, a couple in Ohio, one in Australia, and I live in Washington. So pray tell me, when exactly can I send an email that won’t be received by someone out of “normal” working hours?

I believe the advice is well meaning, but it’s severely out of date with how distributed modern teams work today. I also think it mythologizes managers. It creates this mindset that managers wield some magical power in the actions they take.

True, there’s an implicit power structure at work between managers and those they manage. But healthy organizations understand that managers are servant leaders. They serve the needs of the team. Managers are not a special class of people. They are beautifully flawed like the rest of us. I sometimes have too much to drink and write tirades like this. Sometimes I get caught up in work and am short with my spouse or children. I say things I don’t mean at work because I’m angry or tired. We have to recognize management as a role, not a status.

The point is, rather than rely on these “rules” of business conduct, we’d be better served by building real trust amongst members of a team. My team understands that I might send an email at night not because I expect a response at night. It’s not because I expect people to work night and day. No, it’s because I understand we all work in different time zones. They know that I sometimes work at night because I took two hours out during the middle of the day to play soccer. And I understand they’ll respond to my emails when they’re damn good and ready to.

personal dotnet oss

Unless you live in a cave, you are probably aware of the large leaps forward Microsoft and .NET have made in the open source community.

Although I do wonder about that phrase “unless you live in a cave.” By now, don’t cave dwellers have decent internet access?

As usual, I digress.

Over at GitHub, we’re pretty excited to see Microsoft put so much of .NET on GitHub under permissive licenses. Not only have they put a large amount of code on GitHub, they work hard to manage their open source projects well.

I am excited by all this. It’s been a long time coming. It’s a good thing.

That being said, Microsoft, being the giant company it is, casts a large shadow. It’s good to praise the vigor with which Microsoft adopts open source. At the same time, it’s important not to forget all the projects that have been here all along, nor the new ones that crop up all the time. The lesser-known projects and independent open source developers are an important part of the .NET open source ecosystem.

DotNetFringe (April 12-14 in Portland, Oregon) is a new conference that will help bring all these grassroots independent efforts out of the shadows. This conference is organized by a group of independent folks (myself included) who have a deep-seated passion for .NET open source.

And we collected a great line-up of speakers. Some of the names you’ll recognize as fixtures in the .NET open source community. Many are regular speakers. We also worked hard to create an environment that welcomes fresh new voices you may not have heard before.

We know your time and money are valuable. We’ve tried to keep the price low and the content quality high. So definitely buy a ticket and come say hello to me in Portland! I’ll bring some Octocat stickers to give out!

personal management

I’m often amazed at the Sisyphean lengths people will go to try and prevent failure, yet prepare so little for its inevitability. Ed Catmull, president of Pixar, noted the following in his book Creativity Inc.

Do not fall for the illusion that by preventing errors you won’t have errors to fix.

The truth is the cost of preventing errors is often greater than the cost of fixing errors later.

There’s nothing wrong with attempting to prevent failures that are easily preventable. But such preventative measures have to be weighed against the friction and cost the measure introduces. Lost in this calculation is the consideration that much of the energy and effort that goes into prevention might be better spent in preparing to respond to failure and the repair process.

This is a lesson that’s not just true for software, but all aspects of life. The following are examples of where this principle applies to social policy, parenting, relationships, and code.

Social Policy

The “War on Drugs” is a colossal failure of social policy…

If there is one number that embodies the seemingly intractable challenge imposed by the illegal drug trade on the relationship between the United States and Mexico, it is $177.26. That is the retail price, according to Drug Enforcement Administration data, of one gram of pure cocaine from your typical local pusher. That is 74 percent cheaper than it was 30 years ago.

So after thirty years and $51 trillion (yes, trillion!) spent, not to mention the incredible social costs, the result we get for all that expenditure is that a hit of cocaine is 74 percent cheaper. Wall Street traders are rejoicing.

It doesn’t take 5 Nobel Prize-winning economists to tell you that the drug war is a failure.

The idea that you can tell people to “Just Say No” and that will somehow turn the tide of human nature is laughably ridiculous. This is the failure of the prevention model.

A response that focuses on repair as opposed to all out prevention realizes that you can’t stop people from taking drugs, but you can help with the repair process for those who do get addicted. You can get better results if you treat drugs as a health problem and not a criminal problem. It’s worked very well for Portugal. Ten years after they decriminalized all drugs, drug abuse is down by half.

This development can not only be attributed to decriminalisation but to a confluence of treatment and risk reduction policies.

It’s a sane approach and it works. Locking an addict up in jail doesn’t help them to repair.


A while back I wrote about the practice of Reflective Parenting. In that post, I wrote about the concept of repairing.

Now this last point is the most important lesson. Parents, we are going to fuck up. We’re going to do it royally. Accept it. Forgive yourself. And then repair the situation.

If there’s ever a situation that will disabuse a person of the notion that they’re infallible, it’s becoming a parent. An essential part of being human is that mistakes will be made. Learning how to gracefully repair relationships afterwards helps lessen the long term impact of such mistakes.

Perhaps I’m fortunate that I get a lot of practice fucking up and then repairing with my own kids. Just the other day I was frazzled trying to get the kids ready for some birthday party. I told my son to fill out the birthday card, but avoid a splotch of water on the table while I went to grab a towel. Sure enough, he put the card on the water. I was pissed. I berated him for doing the one thing I had just finished explicitly telling him not to do. Why would he do that?! Why didn’t he listen?!

His normal smile was replaced with a crestfallen face as his eyes teared up. That struck me. When I calmed down, he pointed to a little plate full of water on the table. He thought I had meant that water. “Asshole” doesn’t even begin to describe how much of a schmuck I felt at that moment. It was a total misunderstanding. He didn’t even see the splotch of water next to the more conspicuous plate of water.

I got down to his eye level and told him that I’m sorry. I made a mistake. I understand how my instructions would be confusing. I was sincere, remorseful, and honest. We hugged it out and things were fine afterwards. Learning to repair is essential to good parenting.

Relationships


I’ve been reading Difficult Conversations: How to Discuss What Matters Most. This book is a phenomenal guide to communicating well, both at work and home. Even if you think you are great at communicating with others, there’s probably something in here for you.

It helped me through a recent difficult situation where I hurt someone’s feelings. I had no idea that my words would prompt the response they did, and I was surprised by the reaction. Prior to reading this book, my typical approach would have been to defend my actions and help this person see the obvious reasonableness of my position. I would try to win the argument.

Difficult Conversations proposes a third approach: rather than try to win the argument, it suggests you move towards a learning conversation.

Instead of wanting to persuade and get your way, you want to understand what has happened from the other person’s point of view, explain your point of view, share and understand feelings, and work together to figure out a way to manage the problem going forward. In so doing, you make it more likely that the other person will be open to being persuaded, and that you will learn something that significantly changes the way you understand the problem. Changing our stance means inviting the other person into the conversation with us, to help us figure things out. If we’re going to achieve our purposes, we have lots we need to learn from them and lots they need to learn from us. We need to have a learning conversation.

What I’ve learned is that people in general aren’t irrational. They only appear to be irrational because you are often missing a piece of context about how they view the world and interpret the actions of others.

This becomes crystal clear when you consider how you interpret your own actions. When was the last time you concluded that you acted with malicious intent or irrationally? How is it that you always act rationally with good intent, and others don’t? Given your impeccable track record, how is it that sometimes, others ascribe malice to your actions? Well they must be irrational! Or is it that they are missing a piece of context that you have? Could it be possible, when you’ve been on the other end, that you ascribed malice in a situation where you really were missing some information?

It’s not until you realize most people are alike in this way that you can start to have more productive learning conversations - even with folks you strongly disagree with.

Back to the story, despite all my good intentions and all my efforts to be respectful, I still failed and hurt my friend’s feelings. It’s just not possible to avoid this in every situation, though I strive to greatly reduce the occurrences. Fortunately, I’ve prepared for failure. By focusing on a learning conversation, we were able to repair the relationship. I believe it’s even stronger as a result.


There are so many examples in software that it’s hard to point to just one. So I’ll pick two. First, let’s talk about The Thing About Git. I’ve linked to this post many times because one of its key points really resonates with me.

Git means never having to say, “you should have”

If you took The Tangled Working Copy Problem to the mailing lists of each of the VCS’s and solicited proposals for how best to untangle it, I think it’s safe to say that most of the solutions would be of the form: “You should have XXX before YYY.” … More simply, the phrase: “you should have,” ought to set off alarm bells. These are precisely the types of problems I want my VCS to solve, not throw back in my face with rules for how to structure workflow the next time.

Git recognizes that people make mistakes and rather than tell you that your only recourse is to grab a time machine and do what you should have done in the first place, it gives you tools to repair mistakes.

The theme of preparing for failure applies just as much to software and systems as it does to dealing with people.


There are a lot of backup systems out there. And to a degree, backups are a step in recognizing the value of preparing for disasters. But as any good system administrator knows, backups are not the important part of the process. Backups are nothing without restores. Restores are what we really care about. That’s the “repair” step when a hard drive fails.


Systems and policies that require 100% failure prevention to work are highly suspect. Such a system should trigger your Spidey sense. When building a system or policy, think not only about how the system or policy might fail, but how users of the system and those subject to the policy might fail. And give them tools to recover and repair failures. Perhaps the only guarantee you can provide is that there will be failures. So prepare for them and prepare to recover from them.

csharp mef 2 comments suggest edit

There are times when the Managed Extensibility Framework (aka MEF, the close relative of “Meh”) cannot compose a part. In those cases it’ll shrug (¯\_(ツ)_/¯) and then take a dump on your runtime execution by throwing a CompositionException.

There are many reasons a composition can fail, but there are two I run into most often. The first is that I simply forgot to export a type. The CompositionException in this case is actually helpful.

But the other case is when an imported type throws an exception in its constructor. Here, the exception message is pretty useless. Here’s an example taken from a real application, GitHub for Windows. The actual exception is contrived for the purposes of demonstration. I changed the constructor of CreateNewRepositoryViewModel to throw an exception with the message “haha”.

System.ComponentModel.Composition.CompositionException: The composition produced a single composition error. The root cause is provided below. Review the CompositionException.Errors property for more detailed information.

1) haha

Resulting in: An exception occurred while trying to create an instance of type ‘GitHub.ViewModels.CreateNewRepositoryViewModel’.

Resulting in: Cannot activate part ‘GitHub.ViewModels.CreateNewRepositoryViewModel’. Element: GitHub.ViewModels.CreateNewRepositoryViewModel --> GitHub.ViewModels.CreateNewRepositoryViewModel --> AssemblyCatalog (Assembly="GitHub, Version=, Culture=neutral, PublicKeyToken=null")

It goes on and on. But note that the one piece of information I really really want, the piece of information that would actually help me figure out what’s going on, is not present. What is the stack trace of the exception that caused this cascade of failures in the first place?!

This exception stack trace is pretty useless. Note that you do see the root cause exception message “haha”, but nothing else. You don’t even know the exception type.

But don’t worry, I wouldn’t be writing this blog post if I didn’t have an answer for you. It may not be a good answer, but it’s something that seems to work for me. I wrote a method that unwraps the composition exception and tries to retrieve the actual original exception.

/// <summary>
/// Attempts to retrieve the real cause of a composition failure.
/// </summary>
/// <remarks>
/// Sometimes a MEF composition fails because an exception occurs in the ctor of a type we're trying to
/// create. Unfortunately, the CompositionException doesn't make that easily available, so we don't get
/// that info in haystack. This method tries to find that exception as that's really the only one we care
/// about if it exists. If it can't find it, it returns the original composition exception.
/// </remarks>
/// <param name="exception">The exception to unwrap.</param>
/// <returns>The root cause exception if found; otherwise the original exception.</returns>
public static Exception UnwrapCompositionException(this Exception exception)
{
    var compositionException = exception as CompositionException;
    if (compositionException == null)
    {
        return exception;
    }

    var unwrapped = compositionException;
    while (unwrapped != null)
    {
        var firstError = unwrapped.Errors.FirstOrDefault();
        if (firstError == null)
        {
            break;
        }
        var currentException = firstError.Exception;

        if (currentException == null)
        {
            break;
        }

        var composablePartException = currentException as ComposablePartException;

        if (composablePartException != null
            && composablePartException.InnerException != null)
        {
            var innerCompositionException = composablePartException.InnerException as CompositionException;
            if (innerCompositionException == null)
            {
                return currentException.InnerException ?? exception;
            }
            currentException = innerCompositionException;
        }

        unwrapped = currentException as CompositionException;
    }

    return exception; // Fuck it, couldn't find the real deal. Return the original.
}

What this method does is search through the CompositionException structure looking for the exception that is the root cause of the failure: basically, the first exception that is neither a CompositionException nor a ComposablePartException.

It seems to work fine for me, but I would love to have any MEF experts look at it and let me know if I’m missing anything. For example, I only look at the first error in each CompositionException because I’ve never seen more than one. But that could be an implementation detail.

Even if that strategy is incomplete, the code should be safe because if it can’t find the root cause exception, it’ll just return the original exception, so you’re no worse off than before.
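To see the nesting the method walks, here’s a minimal sketch that reproduces the scenario. The `ExplodingPart` and `UnwrapDemo` names are mine for illustration (standing in for the contrived CreateNewRepositoryViewModel example above), and it digs out the root cause the same way the extension method does.

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.ComponentModel.Composition.Primitives;
using System.Linq;

// A hypothetical part whose constructor throws, like the contrived example above.
[Export]
public class ExplodingPart
{
    public ExplodingPart()
    {
        throw new InvalidOperationException("haha");
    }
}

public static class UnwrapDemo
{
    // Walks the same chain the extension method does:
    // CompositionException -> ComposablePartException -> the real exception.
    public static Exception CaptureRootCause()
    {
        var container = new CompositionContainer(new TypeCatalog(typeof(ExplodingPart)));
        try
        {
            container.GetExportedValue<ExplodingPart>();
            return null;
        }
        catch (CompositionException ex)
        {
            var partException = ex.Errors.First().Exception as ComposablePartException;
            return partException != null ? partException.InnerException : ex;
        }
    }
}
```

In our actual code, the try/catch around composition simply logs `ex.UnwrapCompositionException()` instead of the raw CompositionException.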

Here’s an example of our log file with this code in place, emphasis mine.

System.InvalidOperationException: haha at GitHub.Api.ApiClient.Throw() in c:\dev\Windows\GitHub.Core\Api\ApiClient.cs:line 57 at GitHub.Api.ApiClient..ctor(HostAddress hostAddress, IObservableGitHubClient gitHubClient, Func`2 twoFactorChallengeHandler) in c:\dev\Windows\GitHub.Core\Api\ApiClient.cs:line 52 at GitHub.Api.ApiClientFactory.Create(HostAddress hostAddress) in…

Now I can see the actual stack trace. In cases where an exception in the constructor is the cause, I really don’t care about all the composition errors. This is what I really want to see. I hope you find this useful.

git github 12 comments suggest edit

Learning Git and GitHub can be daunting if you’re new to it. I recently gave a small presentation where I pretty much firehosed a group of people about Git and GitHub for one hour. I felt bad that I could only really scratch the surface.

I thought it might be useful to collect some resources that have helped me understand Git and GitHub better. If you only read one thing, read Think like a git. That’ll provide a good understanding and maybe motivate you to read the others.


  • Pro Git If you have time to read a full book, read this one. It’s free!
  • The Git Parable This story walks through what it would be like to create a Git like system from the ground up. In the process, you learn a lot about how Git is designed.
  • Think like a git This is for someone who’s been using Git, but doesn’t feel they really understand it. If you’re afraid of the rebase, this is for you. It made Git click for me and inspired me to build SeeGit.
  • The thing about Git This is a bit of a philosophical piece with practical Git workflow suggestions.
  • GitHub Flow Like a Pro with these 13 Git Aliases This is about Git, but also GitHub workflows. It’s a useful collection of aliases I put together.

Git on Windows

  • Better Git with PowerShell Introduces Posh-Git. But don’t follow the instructions for installing Posh-Git here. Instead use…
  • Introducing GitHub for Windows GitHub for Windows is not only a nice UI client for Git geared towards GitHub, but it also is a great way to get the git command line and Posh-Git onto your machine.


This is by no means a comprehensive list, and perhaps not the best list, but it’s my list. Happy reading!

csharp xunit tdd wpf 13 comments suggest edit

If you’ve ever written a unit test that instantiates a WPF control, you might have run into one of the following errors:

The calling thread cannot access this object because a different thread owns it.


The calling thread must be STA, because many UI components require this.

Prior to xUnit 2.0, we used a little hack to force a test to run on an STA thread: simply set the Timeout to 0.

XUnit 1.9

[Fact(Timeout=0 /* This runs on STA Thread */)]
public void SomeTest() {...}

But due to the asynchronous-all-the-way-down design of xUnit 2.0, the Timeout property was removed. So what’s a WPF testing person to do?

Well, I decided to fix that problem by writing two custom attributes for tests:

  • STAFactAttribute
  • STATheoryAttribute

STAFactAttribute works just like FactAttribute, except it makes sure the test runs on an STA thread. STATheoryAttribute does the same for TheoryAttribute.

For example,

[STATheory]
[InlineData(42)]
public async Task SomeTest(int someValue) {...}

I contributed this code to the xunit/samples repository on GitHub. There are a lot of great examples in this repository that demonstrate how easy it is to extend xUnit to provide a nice custom experience.

STA Thread

So you might be curious, what is an STA Thread? Stop with the curiosity. Some doors you do not want to open.

But you keep reading because you can’t help yourself. STA stands for Single Threaded Apartment. Apparently this is where threads go when their parents kick them out of the house and they haven’t found a life partner yet. They mostly sit in this apartment, ordering takeout and playing X-Box all day long.

STA Threads come into play when you interop with COM. Most of the time, as a .NET developer, you can ignore this. Unless you write WPF code in which case many of the controls you use depend on COM under the hood.

What is COM? Didn’t I tell you this rabbit hole goes deep? COM stands for Component Object Model. It’s an insanely complicated thing created by Don Box to subjugate the masses. At least that’s what my history book tells me.

Ok, I sort of glossed over the STA part, didn’t I? If you want to know more, check out the Processes, Threads, and Apartments article on MSDN.

Apartments are a way of controlling communication between objects on multiple threads. A COM object lives in an apartment and can directly communicate (call methods on) their roommates. Calls to objects in other apartments require involving the nosy busybodies of the object world, proxies.

Single-threaded apartments consist of exactly one thread, so all COM objects that live in a single-threaded apartment can receive method calls only from the one thread that belongs to that apartment. All method calls to a COM object in a single-threaded apartment are synchronized with the windows message queue for the single-threaded apartment’s thread. A process with a single thread of execution is simply a special case of this model.

In WPF, the UI loop is an example of this. UI components must be created on the main application thread and only invoked on that thread. UI components may look pretty, but they’re all single.
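This is essentially what an STA test attribute has to arrange under the hood. Here’s a minimal sketch (my own helper, not the actual xunit/samples code) that runs work on a dedicated STA thread and surfaces the result, or the exception, as a Task. Note that apartments are a Windows/COM concept, so SetApartmentState throws on other platforms.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class StaThreadRunner
{
    // Runs an action on a dedicated STA thread. Any exception the action
    // throws is captured and exposed on the returned Task.
    public static Task Run(Action action)
    {
        var tcs = new TaskCompletionSource<object>();
        var thread = new Thread(() =>
        {
            try
            {
                action();
                tcs.SetResult(null);
            }
            catch (Exception ex)
            {
                tcs.SetException(ex);
            }
        });
        // Must be set before the thread starts; Windows-only.
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        return tcs.Task;
    }
}
```

A test attribute does the same dance, just wired into the test framework’s execution pipeline instead of a standalone helper.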

For completeness, the alternative to STA is MTA or Multithreaded Apartments. This is where things get really interesting.

Multithreaded apartments consist of one or more threads, so all COM objects that live in an multithreaded apartment can receive method calls directly from any of the threads that belong to the multithreaded apartment. Threads in a multithreaded apartment use a model called free-threading. Calls to COM objects in a multithreaded apartment are synchronized by the objects themselves.

Yes, threads that live in a multithreaded apartment are into this whole “free-threading” lifestyle. Make of it what you will.

csharp async 23 comments suggest edit

Repeat after me, “Avoid async void!” (Now say that ten times fast!) Ha ha ha. You sound funny.

In C#, async void methods are a scourge upon your code. To understand why, I recommend this detailed Stephen Cleary article, Best Practices in Asynchronous Programming. In short, an exception thrown in an async void method isn’t handled the same way as one thrown in an awaited Task; it gets posted to the synchronization context and will crash the process. Not a great experience.

Recently, I found another reason to avoid async void methods. While investigating a bug, I noticed that the unit test that should have ostensibly failed because of the bug passed with flying colors. That’s odd. There was no logical reason for the test to pass given the bug.

Then I noticed that the return type of the method was async void. On a hunch I changed it to async Task and it started to fail. Ohhhhh snap!

If you write unit tests using XUnit.NET and accidentally mark them as async void instead of async Task, the tests are effectively ignored. I furiously looked for other cases where we did this and fixed them.

Pretty much the only valid reason to use async void methods is in the case where you need an asynchronous event handler. But if you use Reactive Extensions, there’s an even better approach that I’ve written about before, Observable.FromEventPattern.

Because there are valid reasons for async void methods, Code Analysis won’t flag them. For example, it doesn’t flag the following method.

public async void Foo()
{
    await Task.Run(() => {});
}

It’s pretty easy to manually search for methods with that signature. You might even catch them in code review. But there are other ways where async void methods crop up that are extremely subtle. For example, take a look at the following code.

new Subject<Unit>().Subscribe(async _ => await Task.Run(() => {}));

Looks legit, right? You are wrong my friend. Take a shot of whiskey (or tomato juice if you’re a teetotaler)! Do it even if you were correct, because, hey! It’s whiskey (or tomato juice)!

If you look at all the overloads of Subscribe you’ll see that we’re calling one that takes in an Action<T> and not a Func<T, Task>. In other words, we’ve unwittingly passed in an async void lambda. Because of the beauty of type inference and extension methods, it’s hard to look at code like this and know whether that’s being called correctly. You’d have to know all the overloads as well as any extension methods in play.
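You can watch the compiler do this with a bit of reflection. In this sketch, `Capture` stands in for an overload that takes an `Action<T>` (like Subscribe), and `IsAsyncVoid` checks whether the delegate we handed it compiled down to an async void method, using the same AsyncStateMachineAttribute trick the tests below rely on. The names are mine for illustration.

```csharp
using System;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public static class AsyncLambdaDemo
{
    // Stands in for an overload that takes Action<T>, like Subscribe.
    // An async lambda passed here compiles to an async *void* method.
    public static Delegate Capture(Action<int> handler)
    {
        return handler;
    }

    // True if the delegate points at a compiler-generated async void method.
    public static bool IsAsyncVoid(Delegate handler)
    {
        return handler.Method.ReturnType == typeof(void)
            && handler.Method.GetCustomAttributes(typeof(AsyncStateMachineAttribute), false).Any();
    }
}
```

Passing `async _ => await Task.Yield()` to `Capture` yields an async void delegate, while a plain lambda does not; nothing at the call site hints at the difference.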

Here I Come To Save The Day

Clearly I should tighten up code reviews to keep an eye out for this problem, right? Hell nah! Let a computer do this crap for you. I wrote some code I’ll share here to look out for this problem.

These tests make use of this method I wrote a while back to grab all loadable types from an assembly.

public static IEnumerable<Type> GetLoadableTypes(this Assembly assembly)
{
    if (assembly == null) throw new ArgumentNullException("assembly");
    try
    {
        return assembly.GetTypes();
    }
    catch (ReflectionTypeLoadException e)
    {
        return e.Types.Where(t => t != null);
    }
}

I also wrote this other extension method to make the final result a bit cleaner.

public static bool HasAttribute<TAttribute>(this MethodInfo method) where TAttribute : Attribute
{
    return method.GetCustomAttributes(typeof(TAttribute), false).Any();
}

And the power of Reflection compels you! Here’s a method that will return every async void method or lambda in an assembly.

public static IEnumerable<MethodInfo> GetAsyncVoidMethods(this Assembly assembly)
{
    return assembly.GetLoadableTypes()
      .SelectMany(type => type.GetMethods(
        BindingFlags.NonPublic
        | BindingFlags.Public
        | BindingFlags.Instance
        | BindingFlags.Static
        | BindingFlags.DeclaredOnly))
      .Where(method => method.HasAttribute<AsyncStateMachineAttribute>())
      .Where(method => method.ReturnType == typeof(void));
}

And using this method, I can write a helper method for all my unit tests.

public static void AssertNoAsyncVoidMethods(Assembly assembly)
{
    var messages = assembly
        .GetAsyncVoidMethods()
        .Select(method =>
            String.Format("'{0}.{1}' is an async void method.",
                method.DeclaringType.Name,
                method.Name))
        .ToList();
    Assert.False(messages.Any(),
        "Async void methods found!" + Environment.NewLine + String.Join(Environment.NewLine, messages));
}

Here’s an example where I use this method.

[Fact]
public void EnsureNoAsyncVoidTests()
{
    AssertNoAsyncVoidMethods(GetType().Assembly);
}

Here’s an example of the output. In this case, it found two async void lambdas.

------ Test started: Assembly: GitHub.Tests.dll ------

Test 'GitHub.Tests.IntegrityTests.EnsureNoAsyncVoidTests' failed: Async void methods found!
'<>c__DisplayClass10.<RetrievesOrgs>b__d' is an async void method.
'<>c__DisplayClass70.<ClearsExisting>b__6f' is an async void method.
	IntegrityTests.cs(104,0): at GitHub.Tests.IntegrityTests.EnsureNoAsyncVoidTests()

0 passed, 1 failed, 0 skipped, took 0.97 seconds ( 1.9.2 build 1705).

These tests will help ensure my team doesn’t make this mistake again. It’s really subtle and easy to miss during code review if you’re not careful. Happy coding!

personal 45 comments suggest edit

For most of my life, I was a man without a drink.

Yeah, this is not a typical topic for this blog. I normally try to stick to writing about software and such but I’ve been in a bit of a rut. I hope it helps to change it up and write about something I’ve been enjoying lately.

Now back to the story. I didn’t drink in college because the environment was quite Lohanesque and that didn’t appeal to me. For many college kids, the focus of drinking is to get wasted. The choice of alcohol is dictated by what a student can afford and what will get them shitfaced most expediently. Not all college kids, mind you. But a lot.

When I first started to frequent bars and clubs, my drink choices revolved around what drinks were sweet and would get me buzzed. I went through that puerile Long Island Iced Tea period. I had my brief infatuation with Adios Motherfuckers. Remember kids, blue drank spells trouble! But sometimes, trouble is exactly what you want.

Look at this ridiculous drink!

But when faced with a classy social situation, I always hesitated when it came to ordering a drink. I hadn’t found my drink. I went through a Mojito period (which I still enjoy on a hot summer day), the gin and tonic period (which I mostly just tolerated), and so on.

I envied the people in movies and television who knew exactly what they wanted. Bond, James Bond, knew he wanted a Martini, shaken, not stirred (which I hear is actually the opposite of how a martini should be prepared, but I digress). Don Draper can be counted on to order an Old Fashioned.

Which coincidentally became the first drink that was “my drink”. To celebrate shipping GitHub for Windows 1.0, GitHub threw a party at the Ferry Building in San Francisco. It was a classy affair with several bar stations.

Inside of the Ferry building

As per the usual, I asked around for drink recommendations. Someone mentioned I might like a Gimlet, so I got in line with that in mind. When I got to the front of the line to order, the bartender swept his hand across the ingredients arrayed in front of him and told me that this was an Old Fashioned station. He only made Old Fashioneds.

Perfect! It was the direct opposite of the Paradox of Choice. Given one choice, I knew what to do. So I ordered an Old Fashioned and we instantly hit it off like a traveler in a foreign country without an international data plan who finds an open wifi network. Where have you been all my life? So began my love affair with the Old Fashioned.

There was only one problem: I couldn’t for the life of me make a decent one at home. Part of the problem is there’s too much work involved and I’m lazy. Like - turning on the tv, jumping on the hotel bed, and watching the hotel channel for an hour because the remote is broken and I can’t be bothered to get back up - lazy. Around that time I was introduced to the Classic Manhattan. There was a particularly good one at the iPic theater (of all places) made with a bourbon they kept in a small cask (they no longer have this specific bourbon, but their Manhattans are still good).

This became my new drink. It’s the Old Fashioned’s simpler, more refined, minimalist cousin. There were evenings I’d walk a mile to the local Tutta Bella to order one because I’d get the craving. Apparently, there are antidotes to my laziness. Until my wife smacked me with an epiphany. Go to the Total Wine and More store, pick up the ingredients, and learn to make one.

Of course! A classic Manhattan is much easier to make than a proper Old Fashioned. There are only three ingredients: Rye Whiskey (for a classic, though I prefer Bourbon), Sweet Vermouth, and Angostura Bitters. Should you choose, you can also garnish it with a maraschino cherry, which I always do.

Here’s how I prepare my Manhattan. I’m interested in some simple easy variations though, if you have them. Remember though, I’m lazy.

Tools of the trade


  • Measuring tool
  • Stirring spoon (stir, don’t shake)
  • Cocktail Shaker/Mixer

Ingredients


  • Maker’s Mark (It’s a very good smooth tasty Bourbon. I’m on the lookout for recommendations though.)
  • Sweet Vermouth (I’ve not tried multiple brands yet so I just went with the one they had.)
  • Angostura Bitters
  • Maraschino Cherry

An important note. If you’ve ever gone to a classy bar, you’ll notice that the cherry they use is darker and more flavorful than the bright red supermarket Maraschino cherry. These cherries are often Bourbon infused and cost a lot more than the bright red variety. I think they’re absolutely worth it.

If you’re single and have ample freezer room, freeze your glass. What I do is fill it up with crushed ice while I’m making the drink to keep the glass cold. I’ll remove the ice before I pour in the drink.

Instructions


  1. Freeze or fill up your glass with crushed ice. It’s nice to have a cold glass.
  2. Fill up your cocktail shaker/mixer with ice. I use crushed ice.
  3. Pour 2 ounces of Rye Whiskey/Bourbon into the shaker.
  4. Pour 1/2 ounce of Sweet Vermouth into the shaker.
  5. Add 3 dashes of the Angostura Bitters to the shaker.
  6. Stir well to get it cold. Most sites (and I) recommend not shaking it and making it frothy. It’s really up to you. If you want to go all Tom Cruise on that, by all means.
  7. Empty the ice from the glass (important!).
  8. Put the shaker cap on and pour the cocktail into the glass.
  9. Add the Maraschino cherry and maybe a bit of the cherry juice if you like.
  10. Savor your perfect classic Manhattan, you classy person, you.

a finished manhattan

Learning to make this drink saved me a lot of money considering a Manhattan with good Bourbon can go for ten to fifteen bucks at a bar. My next goal is to learn a few variations and maybe one alternative cocktail for when I get tired of drinking the same thing all the time.

In addition to a fine cocktail, I’ve come to appreciate a good Scotch. Like Ron Swanson, I like a good bottle of Lagavulin 16. When I finished that, I started experimenting with other bottles and am currently working my way through a Springbank 15, a gift from my wife.

a good scotch

It looks particularly good poured into an Octocat Glencairn glass. I have Richard Campbell, a most knowledgeable Scotch connoisseur, to thank for my Scotch kick.

What’s your go to drink and why?

vanity 2 comments suggest edit

I love to communicate through the written word because it offers me a chance to really consider what I say, and then rewrite it, and then rewrite it again. And in the end, I still don’t communicate as clearly and eloquently as I would like.

Which makes it a wonder that I would subject myself to a podcast where I’m forced to think on the spot for thirty minutes to an hour and then spend the next week lamenting all the amazing comebacks and other witty things I should have said.

That I still do it is really a testament to the quality of the hosts on these podcasts. If you follow me on Twitter (perhaps follow me if you don’t; I’m @haacked), you might already know about these. But I thought I’d list them here. In most of these podcasts I talk a lot about Open Source, Microsoft’s improving relationship to Open Source, and of course, GitHub!

Thinking Open Source With Phil Haack on .NET Rocks.

Carl Franklin and Richard Campbell are old pros at this and make it easy. They’re always a riot to chat with. It’s too bad a lot of the good stuff is not recorded. They should start an Unauthorized .NET Rocks where all the in-between profane conversation is recorded.

In this podcast we get into a bit about Git, GitHub, and Open Source in general. As usual, I end up talking a bit about open source licensing. I’m like a broken record.

Yet Another Podcast #131 with Jesse Liberty

In this podcast Jesse Liberty and I talk a bit about Git and GitHub including our desktop clients. Jesse is getting into Git and it’s a great episode for Git newbies. Jesse is easy to talk to and is effusive with the praise.

GitHub with Phil Haack on the MS Dev Show

Back in July I spoke with the folks at the MS Dev Show podcast.

This podcast is a Microsoft Developer focused podcast with Jason Young and Carl Schweitzer. They start off the podcast with a bit of techie news where they call out some interesting blog posts and discuss them. It’s a pretty neat intro.

In this podcast, we talked a bit about how I host my blog on GitHub, GitHub for Windows, etc.

Phil Haack and a Bad Joke with Shawn Wildermuth

I did this one much earlier in the year (in May), but thought I’d list it here in case you really like the sound of my voice (I don’t understand, but I don’t judge).

This podcast is with Shawn Wildermuth and here’s his synopsis:

I get to finally use my bad joke about his last name on this week’s podcast. We also talk about his start, the move to Seattle to work at Microsoft, and how to keep Californians out of Washington state.

Spoiler alert, while I’m known for the occasional #dadjoke, the bad joke in this case was his and it wasn’t dirty. Sorry.

Go Talk Good

If you’re doing something interesting with software, in addition to blogging about it, I think going on a podcast is a great way to get the word out to others. It’s actually more approachable than speaking in front of a crowd. It’s just you in your own space (hopefully a comfortable one) having a friendly conversation with the host or hosts. It takes a lot less up front preparation and good hosts will help draw out information.

management 34 comments suggest edit

If you run a company, stop increasing pay based on performance reviews. No, I’m not taking advantage of all that newly legal weed in my state (Washington). I know this challenges a belief as old as business itself. It challenges something that seems so totally obvious that you’re still not convinced I’m not smoking something. But hear me out.

money money money! - by Andrew Magill CC BY 2.0

This excellent post in the Harvard Business Review Blog, Stop Basing Pay on Performance Reviews, makes a compelling case for this. It won’t take long, so please go read it. Here’s an excerpt.

If your company is like most, it tries to drive high performance by dangling money in front of employees’ noses. To implement this concept, you sit down with your direct reports every once in a while, assess them on their performance, and give them ratings, which help determine their bonuses or raises.

What a terrible system.

Performance reviews that are tied to compensation create a blame-oriented culture. It’s well known that they reinforce hierarchy, undermine collegiality, work against cooperative problem solving, discourage straight talk, and too easily become politicized. They’re self-defeating and demoralizing for all concerned. Even high performers suffer, because when their pay bumps up against the top of the salary range, their supervisors have to stop giving them raises, regardless of achievement.

The idea that more pay decreases intrinsic motivation is supported by a lot of science. In my one year at GitHub post I highlighted a talk that referred to a set of such studies:

I can pinpoint the moment that was the start of this journey to GitHub, I didn’t know it at the time. It was when I watched the RSA Animate video of Dan Pink’s talk on The surprising truth about what really motivates us.

Hint: It’s not more money.

I recommend this talk to just about everyone I know who works for a living. I’m kind of obsessed with it. It’s mind opening. Dan Pink shows how study after study demonstrate that for work that contains a cognitive element (such as programming), more pay undermines motivation.

More recently, researchers found a neurological basis to support the idea that monetary rewards undermine intrinsic motivation.

This rings true to me personally because of all the open source work I do for which I don’t get paid. I do it for the joy of building something useful, for the recognition of my peers, and because I enjoy the process of writing code.

Likewise, at work, the reason I work hard is I love the products I work on. I care about my co-workers. And I enjoy the recognition for the good work I do. The compensation doesn’t motivate me to work harder. All it does is give me the means and reason to stay at my company.

Not to mention, should the company dangle a bonus to improve my performance, there’s some questions to ask. Why wasn’t I already trying to improve my performance? Where will this new performance come from? Often, the extra performance comes from attempting to work long hours which backfires and is unsustainable.

So what’s the alternative?

Pay according to the market

This is what Lear did, emphasis mine:

In 2010, we replaced annual performance reviews with quarterly sessions in which employees talk to their supervisors about their past and future work, with a focus on gaining new skills and mitigating weaknesses. We rolled out the change to our 115,000 employees across 36 countries, some of which had cultures far different from that of our American base.

The quarterly review sessions have no connection to decisions on pay. None. Employees might have been skeptical at first, so to drive the point home, we dropped annual individual raises. Instead we adjust pay only according to changing local markets.

They pay according to the market.

This makes a lot of sense when you consider the purpose of compensation:

  • It’s an exchange of money for work.
  • It helps a company attract and hire talent.
  • It helps a company retain talent.

It’s not a reward. You wouldn’t go to your neighborhood kid and say, “Hey, I’ll pay you 50% of what the market would normally offer you, but I’ll increase it 4% every year if you do a really good job.” The kid would rightfully give you the middle finger. But companies do this to employees all the time. Don’t believe me?

A recent study showed,

Staying employed at the same company for over two years on average is going to make you earn less over your lifetime by about 50% or more.

Keep in mind that 50% is a conservative number at the lowest end of the spectrum. This is assuming that your career is only going to last 10 years. The longer you work, the greater the difference will become over your lifetime.

Let that sink in.

If your employees act rationally, they’d be stupid to stay at your company for longer than two years watching their pay drop over the years in comparison to the market for their skills. And if they wise up and leave every two years, the turnover is very costly. The total cost of turnover can be as high as 150% of an employee’s salary when you factor in lost opportunity costs and the time and expense in hiring a replacement.

So even if you decide to continue on a pay for performance system, market forces necessitate that you adjust pay to market value. Or continue selling your employees a story about how they should stay out of “loyalty”. This story is never bidirectional.

And what should you do if someone tries to take advantage of the system and consistently underperforms? You fire them. They are not upholding their side of the exchange. Most of the time, people want to do good work. Optimize for that scenario. People will have occasional ruts. Help them through it. That’s what the separate performance reviews are for. It provides a means of providing candid feedback without the extra charged atmosphere that money can bring to the discussion.

The Netflix model

This is one area where I think the Netflix model is very interesting. They try to pay top of market for each employee using the following test:

  1. What could the person get elsewhere?
  2. What would we pay for a replacement?
  3. What would we pay to keep that person (if they had a bigger offer elsewhere)?

After all, when you hire someone, the offer is usually based on the market. So why stop adjusting it after that? This also solves a problem I’ve seen companies run into when the market is hot: they’ll hire a fresh college grad for more than a much more experienced developer makes, because the experienced developer’s performance-based raises haven’t kept up with the market.

Keep in mind, this is good for employees too. If an employee wants to make more money, they will focus on increasing their value to your company and the market as a whole. This aligns the employee’s interest with the company’s interest.

Another cool feature of the Netflix model is they give employees a choice to take as much or as little of that compensation as stock instead of cash. I think that’s a great way to give employees a choice in how they invest in the company’s future.


If you insist on continuing to believe that performance-based bonuses are the right approach, I’d be curious to hear what science and data you have that refutes the evidence presented by the various people I’ve referenced. What do you know that they don’t? It’d make for some interesting research.

UPDATE: Based on some comments, there’s one thing I want to clarify. I don’t think the evidence suggests that all companies should pay absolute top of market. That’s not what I’m suggesting. Many companies can’t afford that and offer other compensating factors to lure developers. For example, a desirable company that makes amazing products might be able to get away with paying closer to the market average because of the prestige and excitement of working there.

The point is not that you have to be at 99%. The point is to use the market value for an individual’s skills as your index for compensation adjustments. When it goes up, you raise accordingly. When it flatlines or goes down, well, I’m not sure what Lear does. I certainly wouldn’t lower salaries. I’d just stop having raises until the market value is above an individual’s current pay. I’d be curious to hear what others think.

rx rxui akavache ghfw 7 comments suggest edit

GitHub for Windows (often abbreviated to GHfW) is a client WPF application written in C#. I think it’s beautiful.

This is a credit to our designers. I’m pretty sure that if I had to design it, it would look like this:


To keep our code maintainable and testable, we employ the Model-View-ViewModel pattern (MVVM). To keep our app responsive, we use Reactive Extensions (Rx) to help us make sense of all the asynchronous code.

ReactiveUI (RxUI) combines the MVVM pattern with Reactive Extensions to provide a powerful framework for building client and mobile applications. The creator of RxUI, Paul Betts, suffers through the work to test it on a huge array of platforms so you don’t have to. Seriously, despite all his other vices, this cross-platform support alone makes this guy deserve sainthood. And I don’t just say that because I work with him.

It can be tough to wrap your head around Reactive Extensions, and by extension ReactiveUI, when you start out. As with any new technology, there are some pitfalls you fall into as you learn. Over time, we’ve learned some hard lessons by failing over and over again as we build GHfW. All those failures are interspersed with an occasional nugget of success where we learn a better approach.

Much of this knowledge is tribal in nature. We tell stories to each other to remind each other of what to do and what not to do. However, that’s fragile and doesn’t help anyone else.

So we’re making a more concerted attempt to record that tribal knowledge so others can benefit from what we learn and we can benefit from what others learned. To that end, we’ve made our ReactiveUI Design Guidelines public.

It’s a bit sparse, but we hope to build on it over time as we learn and improve. If you use ReactiveUI, I hope you find it useful as well.

Also, if you use Akavache, we have an even sparser design guideline. Our next step is to add a WPF specific guideline soon.

vs vsix dev encouragement 15 comments suggest edit

Recently I wrote what many consider to be the most important Visual Studio Extension ever shipped - Encourage for Visual Studio. It was my humble attempt to make a small corner of the world brighter with little encouragements as folks work in Visual Studio. You can get it via the Visual Studio Extension Manager.

But not everyone has a sunny disposition like I do. Some folks want to watch the world burn. What they want is Discouragements.

Well an idiot might write a whole other Visual Studio Extension with a set of discouragements. I may be many things, but I am no idiot. This problem is better solved by allowing users to configure the set of encouragements to be anything they want.

And that’s what I did. I added an Options pane to allow users to configure the set of encouragements. It turned out to be a more confusing ordeal than I expected. But with some help from Jared Parsons, I may now present to you, discouragements!

Encourage options

So if you’re of the masochistic inclination, you can treat yourself to custom discouragements all day long if you so choose.

Discouragement in use

As you can see from the screenshot, it supports native emoji! If you want these for yourself, I posted them in a gist.

Challenges and Travails

So why was this challenging? Well like many things with development platforms, to do the basic thing is really easy, but when you want to deviate, things become hard.

If you follow the Walkthrough: Creating an Options Page you’ll be able to add settings to your Visual Studio extension pretty easily. Using this approach, you can even rely on Visual Studio to generate a properties UI for you.

basic options

But that’s pretty rudimentary.

What I wanted was very simple: a multi-line text box that lets you type or paste in one encouragement per line. So I derived from DialogPage, as you do, and created a WPF user control with a TextBox. I added the user control to an ElementHost, a Windows Forms control that can host a WPF control, because, apparently, the Options dialog still hosts Windows Forms controls.

This approach was easy enough, but the text box didn’t accept any of my input. I ran into the same problem this person wrote about on StackOverflow.

I could cut and paste into the TextBox, but I couldn’t type anything. That’s not very useful.

I wasn’t interested in overriding WndProc mainly because I feel I shouldn’t have to. Instead I gave up on WPF, and ported it over to a regular Windows Forms user control. That allowed me to type in the textbox, but if I hit the Enter key, instead of adding a newline, the OK button stole it. So I couldn’t actually add more than one encouragement.


Thankfully, Jared pointed me to the UIElementDialogPage.

If you want to provide a WPF User Control for your Visual Studio Extension, derive from UIElementDialogPage and not DialogPage like all the samples demonstrate!

It does all the necessary WndProc magic under the hood for you. Note that it was introduced in Visual Studio 2012 so if you take a dependency on it, your extension won’t work in Visual Studio 2010. Live in the present I always say.

Storing Settings

The other thing I learned is that AppSettings is not the place to save your extension’s settings. As Jared explained,

The use of application settings is not version safe in a VSIX. The location of the stored setting file path in part includes the version string and hashes of the executable. When Visual Studio installs an official update these values change and as a consequence change the setting file path. Visual Studio itself doesn’t support the use of application settings hence it makes no attempt to migrate this file to the new location and all information is essentially lost.

The supported method of storing settings is the WritableSettingsStore. It’s very similar to application settings and easy enough to access via SVsServiceProvider:

public static WritableSettingsStore GetWritableSettingsStore(this SVsServiceProvider vsServiceProvider)
{
    var shellSettingsManager = new ShellSettingsManager(vsServiceProvider);
    return shellSettingsManager.GetWritableSettingsStore(SettingsScope.UserSettings);
}

If this is interesting to you, I encourage (tee hee) you to read through the pull request that adds settings to Encourage. You can read through the commits to watch me flailing around, or you can read the final diff to see what changes I had to make.

PS: If you liked this post follow me on Twitter for interesting links and my wild observations about pointless drivel

git github 73 comments suggest edit

BONUS! I’ve added a useful 14th Git Alias: git migrate and now a 15th useful alias to open the repository in the browser

GitHub Flow is a Git work flow with a simple branching model. The following diagram of this flow is from Zach Holman’s talk on How GitHub uses GitHub to build GitHub.


You are now a master of GitHub flow. Drop the mic and go release some software!

Ok, there’s probably a few more details than that diagram to understand. The basic idea is that new work (such as a bug fix or new feature) is done in a “topic” branch off of the master branch. At any time, you should feel free to push the topic branch and create a pull request (PR). A Pull Request is a discussion around some code and not necessarily the completed work.

At some point, the PR is complete and ready for review. After a few rounds of review (as needed), either the PR gets closed or someone merges the branch into master and the cycle continues. If the reviews have been respectful, you may even still continue to like your colleagues.

It’s simple, but powerful.

Over time, my laziness spurred me to write a set of Git aliases that streamline this flow for me. In this post, I share these aliases and some tips on writing your own. These aliases start off simple, but they get more advanced near the end. The advanced ones demonstrate some techniques for building your own very useful aliases.

Intro to Git Aliases

An alias is simply a way to add a shorthand for a common Git command or set of Git commands. Some are quite simple. For example, here’s a common one:

git config --global alias.co checkout

This sets co as an alias for checkout. If you open up your .gitconfig file, you can see this in a section named alias.

co = checkout

With this alias, you can checkout a branch by using git co some-branch instead of git checkout some-branch. Since I often edit aliases by hand, I have one that opens the gitconfig file with my default editor.

ec = config --global -e

These sort of simple aliases only begin to scratch the surface.

GitHub Flow Aliases

Get my working directory up to date.

When I’m ready to start some work, I always do the work in a new branch. But first, I make sure that my working directory is up to date with the origin before I create that branch. Typically, I’ll want to run the following commands:

git pull --rebase --prune
git submodule update --init --recursive

The first command pulls changes from the remote. If I have any local commits, it’ll rebase them to come after the commits I pulled down. The --prune option removes remote-tracking branches that no longer exist on the remote.

This combination is so common, I’ve created an alias up for this.

up = !git pull --rebase --prune $@ && git submodule update --init --recursive

Note that I’m combining two git commands together. I can use the ! prefix to execute everything after it in the shell. This is why I needed to use the full git commands. Using the ! prefix allows me to use any command and not just git commands in the alias.
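As a sketch (assuming a POSIX shell), you can register this alias from the command line instead of editing .gitconfig by hand; the single quotes keep $@ and && from being interpreted by your shell:

```shell
# Register the alias; single quotes preserve $@ and && for Git, not the shell
git config --global alias.up '!git pull --rebase --prune $@ && git submodule update --init --recursive'

# Confirm what the alias expands to
git config --global --get alias.up
```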

Starting new work

At this point, I can start some new work. All new work starts in a branch, so I would typically use git checkout -b new-branch. However, I alias this to cob to build upon co.

cob = checkout -b

Note that this simple alias is expanded in place. So to create a branch named “emoji-completion” I simply type git cob emoji-completion which expands to git checkout -b emoji-completion.

With this new branch, I can start writing the crazy codes. As I go along, I try and commit regularly with my cm alias.

cm = !git add -A && git commit -m

For example, git cm "Making stuff work". This adds all changes including untracked files to the index and then creates a commit with the message “Making Stuff Work”.

Sometimes, I just want to save my work in a commit without having to think of a commit message. I could stash it, but I prefer to write a proper commit which I will change later.

For that, I have two aliases: git save and git wip. The first adds all changes, including untracked files, and creates a commit. The second commits only tracked changes. I generally use the first.

save = !git add -A && git commit -m 'SAVEPOINT'
wip = commit -am "WIP"

When I return to work, I’ll just use git undo which resets the previous commit, but keeps all the changes from that commit in the working directory.

undo = reset HEAD~1 --mixed
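Here’s the save/undo round trip in a throwaway repository, with the aliases written out in full (the file name and messages are invented for the example):

```shell
set -e
repo="$(mktemp -d)" && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -qm 'initial' --allow-empty

echo 'half-finished work' > feature.txt
git add -A && git commit -qm 'SAVEPOINT'   # what `git save` expands to

git reset HEAD~1 --mixed                   # what `git undo` expands to
git status --short                         # feature.txt is back in the working directory
```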

Or, if I merely need to modify the previous commit, I’ll use git amend

amend = commit -a --amend

The -a adds any modifications and deletions of existing files to the commit but ignores brand new files. The --amend launches your default commit editor (Notepad in my case) and lets you change the commit message of the most recent commit.

A proper reset

There will be times when you explore a promising idea in code and it turns out to be crap. You just want to throw your hands up in disgust and burn all the work in your working directory to the ground and start over.

In an attempt to be helpful, people might recommend: git reset HEAD --hard.

Slap those people in the face. It’s a bad idea. Don’t do it!

That’s basically a delete of your current changes without any undo. As soon as you run that command, Murphy’s Law dictates you’ll suddenly remember there was that one gem among the refuse you don’t want to rewrite.

Too bad. If you reset work that you never committed it is gone for good. Hence, the wipe alias.

wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard

This commits everything in my working directory and then does a hard reset to remove that commit. The nice thing is, the commit is still there, but it’s just unreachable. Unreachable commits are a bit inconvenient to restore, but at least they are still there. You can run the git reflog command and find the SHA of the commit if you realize later that you made a mistake with the reset. The commit message will be “WIPE SAVEPOINT” in this case.
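For example, here’s a recovery sketch in a throwaway repository (file names invented). HEAD@{1} is simply where HEAD pointed before the reset, which you can also read off of git reflog:

```shell
set -e
repo="$(mktemp -d)" && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -qm 'initial' --allow-empty

echo 'precious gem' > gem.txt
git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard  # the wipe alias

git reflog -2                 # the WIPE SAVEPOINT commit is still listed
git reset --hard 'HEAD@{1}'   # restore it; HEAD@{1} is HEAD before the reset
cat gem.txt                   # prints: precious gem
```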

Completing the pull request

While working on a branch, I regularly push my changes to GitHub. At some point, I’ll go to GitHub and create a pull request, people will review it, and then it’ll get merged. Once it’s merged, I like to tidy up and delete the branch via the Web UI. At this point, I’m done with this topic branch and I want to clean everything up on my local machine. Here’s where I use one of my more powerful aliases, git bdone.

This alias does the following.

  1. Switches to master (though you can specify a different default branch)
  2. Runs git up to bring master up to speed with the origin
  3. Deletes all branches already merged into master using another alias, git bclean

It’s quite powerful and useful and demonstrates some advanced concepts of git aliases. But first, let me show git bclean. This alias is meant to be run from your master (or default) branch and does the cleanup of merged branches.

bclean = "!f() { git branch --merged ${1-master} | grep -v " ${1-master}$" | xargs git branch -d; }; f"

If you’re not used to shell scripts, this looks a bit odd. What it’s doing is defining a function and then calling it. The general format is !f() { /* git operations */; }; f. We define a function named f that encapsulates some git operations, and then we invoke the function at the very end.

What’s cool about this is we can take advantage of arguments to this alias. In fact, we can have optional parameters. For example, the first argument to this alias can be accessed via $1. But suppose you want a default value for this argument if none is provided. That’s where the curly braces come in. Inside the braces you specify the argument index ($0 returns the whole script) followed by a dash and then the default value.

Thus when you type git bclean the expression ${1-master} evaluates to master because no argument was provided. But if you’re working on a GitHub pages repository, you’ll probably want to call git bclean gh-pages in which case the expression ${1-master} evaluates to gh-pages as that’s the first argument to the alias.
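The default-value expansion is plain shell, so you can try it outside of Git (the function and message here are just for illustration):

```shell
# ${1-master}: use the first argument if given, otherwise fall back to "master"
f() { echo "target branch: ${1-master}"; }
f           # prints: target branch: master
f gh-pages  # prints: target branch: gh-pages
```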

Let’s break down this alias into pieces to understand it.

git branch --merged ${1-master} lists all the branches that have been merged into the specified branch (or master if none is specified). That list is then piped into grep -v " ${1-master}$". Grep prints the lines matching the pattern, and the -v flag inverts the match, so this lists all merged branches except master itself (the leading space and the $ anchor ensure only an exact branch name matches). Finally this gets piped into xargs, which appends each branch name from the standard input to git branch -d and executes it.

In other words, it deletes every branch that’s been merged into master except master. I love how we can compose these commands together.
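You can see the pipeline at work by feeding it simulated git branch --merged output (branch names invented; xargs echo prints the command that would run instead of deleting anything):

```shell
# Simulated `git branch --merged master` output; the current branch is starred
printf '  feature-a\n* master\n  feature-b\n' \
  | grep -v ' master$' \
  | xargs echo git branch -d
# prints: git branch -d feature-a feature-b
```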

With bclean in place, I can compose my git aliases together and write git bdone.

bdone = "!f() { git checkout ${1-master} && git up && git bclean ${1-master}; }; f"

I use this one all the time when I’m deep in the GitHub flow. And now, you too can be a GitHub flow master.

The List

Here’s a list of all the aliases together for your convenience.

  co = checkout
  ec = config --global -e
  up = !git pull --rebase --prune $@ && git submodule update --init --recursive
  cob = checkout -b
  cm = !git add -A && git commit -m
  save = !git add -A && git commit -m 'SAVEPOINT'
  wip = !git add -u && git commit -m "WIP"
  undo = reset HEAD~1 --mixed
  amend = commit -a --amend
  wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard
  bclean = "!f() { git branch --merged ${1-master} | grep -v " ${1-master}$" | xargs git branch -d; }; f"
  bdone = "!f() { git checkout ${1-master} && git up && git bclean ${1-master}; }; f"

Credits and more reading

It would be impossible to source every git alias I use as many of these are pretty common and I’ve adapted them for my own needs. However, here are a few blog posts that provided helpful information about git aliases that served as my inspiration. I also added a couple posts about how GitHub uses pull requests.

PS: If you liked this post follow me on Twitter for interesting links and my wild observations about pointless drivel

PPS: For Windows users, these aliases don’t require using Git Bash. They work in PowerShell and CMD when msysgit is in your path. For example, if you install GitHub for Windows and use the GitHub Shell, these all work fine.

personal github 25 comments suggest edit

GitHub is a great tool for developers to work together on software. Though its primary focus is software, a lot of people find it useful for non-software projects. For example, a co-worker of mine has a repository where he tracks a pet project:

I bought a crappy 1987 Honda XR600 and I am going to turn it into something awesome

A while back, Wired ran an article about a man who renovated his home on GitHub. He even has a 3-D model of his bespoke artisanal bathroom plug. Send a pull request why dontcha?


Another person dedicated his genome to the public domain on GitHub. Sadly, genetic technology is not quite at the point where he can merge a pull request all the way to his own body. But who knows? Someday it might be possible to hit that green Merge button and instantly sprout wings. Of course, the downside of such genetic tinkering is you’d need a new wardrobe. I’m not sure Gap sells shirts with wing holes.

Just the other day, I read a blog post about a company that uses GitHub for everything.

  • Internal wiki
  • Recruitment process
  • Day-to-day operations
  • Marketing efforts
  • And a lot more …

As for me…

Meanwhile, I used GitHub to save my marriage. Ok, that might be a tiny bit of hyperbole for dramatic effect (right honey? Right?!!!).

Let me back up a moment to provide some context.

One of the central points of David Allen’s Getting Things Done system is that all the lists of stuff we hold in our heads use up “psychic RAM”, and that creates stress. This “psychic weight” drags us down and wears on our psyche.

When you’re a family with a house and kids, you have a lot of lists and thus need a lot of mental RAM. Things break down in the house all the time. You have to drag the kids to a myriad of events and appointments. You need to attend to recurring chores. If you’re not on top of these things, they fall through the cracks.

There are two common approaches to deal with this.

The first is to always think about everything that needs to be done and bear the full psychic weight and stress of always being on point.

The second is what I call the squeaky wheel approach. You remain blissfully ignorant of all these demands until such a time comes when something is so bad that it forces your attention. Or, as is often the case, it gets so bad for someone else (after all, you’re blissfully ignorant) that they make it a priority for you. A lot of things tend to get dropped with this approach that shouldn’t be dropped.

The second approach carries with it much less psychic weight, but it isn’t very respectful to the person who employs the first approach, and it leads to a lot of interpersonal tension.

I’ll let you guess which approach I tend to employ.

Part of the reason I employ the squeaky wheel approach is that I have a terrible memory and I’m quite good at not noticing things that need to get done. Worse, things were getting done without me and I didn’t even notice, which reinforced my belief that everything was fine.

So my wife and I had a discussion about this. I can’t will myself into a better memory or a sharper eye for what needs to get done. At the same time, while I suck at this stuff at home, I tend to be much more conscientious at work.

So I proposed an idea. What if we ran our household chores like a software project? By that, I mean a well-run software project, not your typical death march past deadline over budget projects. At work, I run everything through GitHub issues. So let’s try that at home!

The goal is that we no longer maintain all these lists in our heads. Instead, when we notice something that needs to be done, we create an issue and are free to forget it then and there, because we can trust the process. Every week, we review the list together and complete what issues we can. It relieves a lot of mental stress to rely on the system instead of our own fallible memories.

I created a private repository for our household. The following screenshot shows an example of a recent issue. Notice that I take advantage of the wonderful Task Lists feature of GitHub Flavored Markdown. That feature has been a godsend.

I broke down the task of cleaning out the dead bugs from the light fixtures into a list of tasks. I decided to take on this one and assigned it to myself.

A different kind of bug report for GitHub issues

Closing an issue sometimes creates the need for more issues. In this case, I learned a valuable lesson - don’t use a screwdriver to put a glass light cover back on. I was fortunate that the resulting explosion of glass didn’t hurt anyone.

Broken Lights

My wife and I have tried Trello and other systems in the past, but this one has been very successful for us and she’s been very happy with the results.

I also use Markdown documents in the repository to track kids meal ideas, lists of babysitters, weekend fun ideas, etc. It’s become even better now that we have rendered prose diffs. Our household GitHub repository helps me track just about everything related to our household. What interesting ways do you use GitHub for non-software projects?

UPDATE 2014-08-07 I just learned about a service called GitHub Reminders.

Get an email reminder by creating a GitHub issue comment with an emoji and a natural language date. Login and signup with your account to get started.

This sounds like it could be very useful for a household issue tracker.

vs vsix dev encouragement 27 comments suggest edit

I love to code as much as the next developer. I even professed my love in a keynote once. And judging by the fact that you’re reading this blog, I bet you love to code too.

But in the immortal words of that philosopher, Pat Benatar,

Love is a battlefield.

There are times when writing code is drudgery. That love for code becomes obsession and leads to an unhealthy relationship. Or worse, there are times when the thrill is gone and the love is lost. You’re just going through the motions.

In those dark times, bathed in the soft glow of your monitor, engrossed in the rhythmic ticky tacka sound of your keyboard, a few kind words can make a big difference. And who better to give you those kind words than your partner in crime - your editor.

With that, I give you ENCOURAGE. It’s a Visual Studio extension that provides a bit of encouragement every time you save your document. Couldn’t we all use a bit more whimsy in our work?

encouragement light

And it’s theme aware!

encouragement dark


Yes, it’s silly. But try it out and tell me it doesn’t put an extra smile on your face during your day.

This wasn’t my idea. My co-worker Pat Nakajima came up with this idea and built a TextMate extension to do this. He showed it to me and I instantly fell in love. With the idea. And Pat, a little.

Apparently it’s very easy to do this in TextMate. Here’s the full source code:

    #!/usr/bin/env ruby -wU

    puts ['Nice job!', 'Way to go!', 'Wow, nice change!'].sample

It’s a bit deceptive, because most of the work in getting this to work in TextMate is configuration.


As for Visual Studio, it takes quite a bit more work. You can find the source code on GitHub under an MIT license.

The code hooks into the DocumentSaved event on the DTE and then cleverly (or hackishly, depending on how you look at it) uses an IIntellisenseController combined with an ISignatureHelpSource to provide the tooltip.

Here’s the relevant code snippet from the EncourageIntellisenseController class:

public EncourageIntellisenseController(
  ITextView textView,
  DTE dte,
  EncourageIntellisenseControllerProvider provider)
{
  this.textView = textView;
  this.provider = provider;
  this.documentEvents = dte.Events.DocumentEvents;
  documentEvents.DocumentSaved += OnSaved;
}

void OnSaved(Document document)
{
  var point = textView.Caret.Position.BufferPosition;
  var triggerPoint = point.Snapshot
    .CreateTrackingPoint(point.Position, PointTrackingMode.Positive);
  if (!provider.SignatureHelpBroker.IsSignatureHelpActive(textView))
  {
    session = provider.SignatureHelpBroker
      .TriggerSignatureHelp(textView, triggerPoint, true);
  }
}
Many thanks to Pat Nakajima for the idea and Jared Parsons for his help with the Visual Studio extensibility parts. I’m still a n00b when it comes to extending Visual Studio and this silly project has been one fun way to try and get a handle on things.

Get Involved!

As of today, this only supports Visual Studio 2013 because of my ineptitude and laziness. I welcome contributions to make it support more platforms.

Parting Thoughts

On the positive side, when you need a specific service, it’s nice to be able to slap an [Import] attribute on a property and magically have the type available. The extensibility of Visual Studio appears to be nearly limitless.

On the downside, it’s ridiculously difficult to write extensions that do some basic tasks. Yes, a big part of it is the learning curve. But when you compare the TextMate example to what I had to do here, there’s clearly some middle ground to be found between simplicity and power.

Also, the documentation is quite good, but wrong in places. For example, in this Walkthrough it notes:

Make sure that the Content heading contains a MEF Component content type and that the Path is set to QuickInfoTest.dll.

That might have been true with the old VSIX manifest format, but it is not correct for the new one. None of my MEF imports worked until I added a MefComponent asset to the Assets element in my .vsixmanifest file:

    <Asset Type="Microsoft.VisualStudio.MefComponent"
           d:Source="Project"
           d:ProjectName="%CurrentProject%"
           Path="|%CurrentProject%|" />

I’m not really sure why that’s just not there by default.

There are certainly a lot of extensions in the Visual Studio Extension Gallery, so I would still consider the extensibility model to be a success for the most part. But there could be a lot more extensions in there. More people should be able to extend the IDE for their own needs without having to take a graduate course in Visual Studio Extensibility.