
From the topic of this and my last post, you would be excused for thinking I have some weird fascination with ducks. In fact, I’m starting to question it myself.

Is it a duck? - CC BY-ND 2.0

I just find the topic of duck typing (and structural typing) to be interesting and I’m not known to miss an opportunity to beat a dead horse (or duck for that matter).

Often, when we talk about duck typing, code examples only ask if the object quacks. But shouldn’t it also ask if it quacks like a duck?

For example, here are some representative examples you might find if you Google the terms “duck typing”. The first example is Python.

def func(arg):
    if hasattr(arg, 'quack'):
        arg.quack()
    elif hasattr(arg, 'woof'):
        arg.woof()

Here’s an example in Ruby.

def func(arg)
  if arg.respond_to?(:quack)
    arg.quack
  end
end

Does this miss half the point of duck typing?

In other words, don’t check whether it IS-a duck: check whether it QUACKS-like-a duck, WALKS-like-a duck, etc, etc,…

Note that this doesn’t just suggest that we check whether the object quacks and stop there, as these examples do. It suggests we should go further and ask if it quacks like a duck.

Most discussions of duck typing tend to focus on whether the object has methods that match a given name. I haven’t seen examples that also check the argument list. Yet another important aspect of a method is its return type. As far as I know, that’s not something you can test in advance in Ruby or Python.
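For what it’s worth, Python’s introspection does make an argument-list check possible. Here’s a sketch of the idea; the helper name and the classes are mine, not from any of the examples above:

```python
import inspect

def quacks_like_a_duck(obj):
    # Check not just that `quack` exists, but that a zero-argument
    # call would be legal -- one step beyond name-only duck typing.
    quack = getattr(obj, "quack", None)
    if not callable(quack):
        return False
    try:
        inspect.signature(quack).bind()
    except TypeError:
        return False
    return True

class Duck:
    def quack(self):
        return "quack quack"

class Megaphone:
    def quack(self, volume):  # quacks, but demands an argument
        return "QUACK" * volume
```

Here `quacks_like_a_duck(Duck())` is true, while `Megaphone` fails the check because its `quack` can’t be called with no arguments.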

Suppose we have the following client code.

def func(arg)
  return unless arg.respond_to?(:quack)
  sound = arg.quack
  /quack quack/.match(sound)
end

And we have the following two classes.

class Duck
  def quack
    "quacketty quack quack"
  end
end

class Scientist
  def quack
    "Eureka!" # it quacks, but the result won't match /quack quack/
  end
end

The Scientist certainly quacks, but it doesn’t quack like a duck.

func(      # returns a MatchData object
func( # returns nil (match failed)

I think that’s one reason why the example in my previous post actually tries to call the method to ensure that the argument indeed quacks like a duck.

My guess is that in practice, conflicts like this, where a method has the same name but a different type, are rare enough that Ruby and Python developers don’t worry about it too much.

Also, with such dynamic languages, it’s possible to monkey patch an object to conform to the implicit contract if it doesn’t match it exactly. Say you have a RobotDuck from another library you didn’t write and want to pass it in as a duck.
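A rough Python sketch of that idea follows; RobotDuck and its emit_sound method are hypothetical stand-ins for a class you don’t control:

```python
# Hypothetical third-party class we can't edit -- it "quacks"
# functionally, but under the wrong method name.
class RobotDuck:
    def emit_sound(self):
        return "quack quack (synthesized)"

# Monkey patch: graft a `quack` alias onto the class so instances
# now satisfy the implicit contract our duck-handling code expects.
RobotDuck.quack = RobotDuck.emit_sound

robot = RobotDuck()
robot.quack()  # "quack quack (synthesized)"
```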

Thanks to GeekSam for reviewing my Ruby and Python idioms.


Eric Lippert writes one of my all time favorite tech blogs. Sadly, the purple font he was famous for is no more, but the technical depth is still there.

In a recent post, he asks the question, “What is Duck Typing?” His post provides a thoughtful critique and deconstruction of the Wikipedia entry on the subject. Seriously, go read it, but please come back here afterwards!

For those of you too lazy to read it, I’ll try and summarize crudely. He starts off with his definitions of “typing”:

A compile-time type system provides three things: first, rules for defining types, like “array of pointers to integer”. Second, rules for logically deducing what type is associated with every expression, variable, method, property, and so on, in a program. And third, rules for determining what programs are legal or illegal based on the type of each entity. These types are used by the compiler to determine the legality of the program.

A run-time type system provides three similar things: first, again, rules for defining types. Second, every object and storage location in the system is associated with a type. And third, rules for what sorts of objects can be stored in what sorts of storage locations. These types are used by the runtime to determine how the program executes, including perhaps halting it in the event of a violation of the rules.

He continues with a description of structural typing that sounds like what he always thought “duck typing” referred to, but notes that his idea differs from the Wikipedia definition. As far as he can tell, the Wikipedia definition sounds like it’s just describing Late Binding.

But this is not even typing in the first place! We already have a name for this; this is late binding. “Binding” is the association of a particular method, property, variable, and so on, with a particular name in a particular context; if done by the compiler then it is “early binding”, and if it is done at runtime then it is “late binding”. Why would we even need to invent this misleadingly-named idea of “duck typing” in the first place??? If you mean “late binding” then just say “late binding”!

I agree that the Wikipedia definition is a bit unclear, but I think there’s more to it than simple late binding. Also, I think some of the confusion lies in the fact that duck typing isn’t so much a type system as it is a fuzzy approach to treating objects as if they are certain types (or close enough) based on their behavior rather than their declared type. This is a subtle distinction from late binding.

To back this up, I looked at the original Google Group post where Alex Martelli first described this concept.

In other words, don’t check whether it IS-a duck: check whether it QUACKS-like-a duck, WALKS-like-a duck, etc, etc, depending on exactly what subset of duck-like behaviour you need to play your language-games with.

This was a response to a question asking (I’m paraphrasing): how do you handle method overloading with a single parameter in a dynamic language? Specifically, the question was in reference to the Python language.

To illustrate, in a static typed language like C#, you might have the following three methods of a class (forgive me if the example seems contrived. I lack imagination.):

public class PetOwner {
  public void TakeCareOf(Duck duck) {...}
  public void TakeCareOf(Robot robot) {...}
  public void TakeCareOf(Car car) {...}
}

In C#, the method that gets called is resolved at compile time depending on the type of the argument passed to it.

var petOwner = new PetOwner();
petOwner.TakeCareOf(new Duck()); // calls first method.
petOwner.TakeCareOf(new Robot()); // calls second method.
petOwner.TakeCareOf(new Car()); // calls third method.

But in a dynamic language, such as Python, you can’t have three methods with the same name each with a single argument. Without a type declared for the method argument, there is no way to distinguish between the methods. Instead, you’d need a single method and do something else.
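A quick sketch of what actually happens if you try (the class and return values are purely illustrative): the second def doesn’t overload the first, it silently replaces it.

```python
class PetOwner:
    def take_care_of(self, duck):
        return "took care of a duck"

    # Same name, same arity: this definition silently *replaces*
    # the one above -- Python binds names, not signatures.
    def take_care_of(self, robot):
        return "took care of a robot"

owner = PetOwner()
owner.take_care_of(object())  # always "took care of a robot"
```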

One approach is to switch based on the runtime type of the argument passed in, but Alex points out that would be inappropriate in Python. I assume because it conflicts with Python’s dynamic nature. Keep in mind that I’m not a Python programmer, so I’m basing this on my best attempt to interpret Alex’s words:

In other words, don’t check whether it IS-a duck: check whether it QUACKS-like-a duck, WALKS-like-a duck, etc, etc, depending on exactly what subset of duck-like behaviour you need to play your language-games with.

As I said before, I don’t know a lick of Python, so I’ll pseudocode what this might look like.

class PetOwner:
    def take_care_of(self, arg):
        if behaves_like_duck(arg):
            pass  # Pout lips and quack quack
        elif behaves_like_robot(arg):
            pass  # Domo arigato Mr. Roboto
        elif behaves_like_car(arg):
            pass  # Vroom vroom vroom farfegnugen

So rather than check if the arg IS A duck, you check if it behaves like a duck. The question is, how do you do that?

Alex notes this could be tricky.

On the other hand, this can be a considerable amount of work, depending on how you go about it (actually, it need not be that bad if you “just go ahead and try”, of course catching the likely exceptions if the try does not succeed; but still, greater than 0).

One proposed approach is to simply treat it like a duck, and if it fails, start treating it like a fish. If that fails, try treating it like a dog.

I’d guess that code would look something like:

class PetOwner:
    def take_care_of(self, arg):
        try:
            arg.quack()      # just go ahead and treat it like a duck
        except AttributeError:
            try:
                arg.beep()   # that failed, so treat it like a robot
            except AttributeError:
                arg.vroom()  # last resort: treat it like a car

Note that this is not exactly the same as late binding as Eric proposes. Late binding is involved, but that’s not the full picture. It’s late binding combined with the branching based on the set of methods and properties that make up “duck typing.”

What’s interesting is that this was not the only possible solution that Alex proposed. In fact, he concludes it’s not the optimal approach.

Besides, “explicit is better than implicit”, goes one of Python’s mantras. Just let the client-code explicitly TELL you which kind of argument they are passing you (and doing so through a named argument is simple and readable), and your work drops to zero, while removing no useful functionality whatever from the client.

He goes on to state that this implicit duck typing approach to method overloading seems to have dubious benefit.

The “royal-road” alternative route to overloading would, I think, be the use of suitable named-arguments. A rockier road, perhaps preferable in some cases, but more work for dubious benefit, would be the try/except approach to see if an argument supplies the functionalities you require.

The Python approach would be to pass in a discriminator. Even so, the object passed in would have to fulfill the set of requirements for the selected branch of code indicated by the discriminator. With the discriminator, it does feel more like we’re just talking about late binding, but applied to a set of methods and properties, not just each one individually as you might do with late binding.
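Here’s a sketch of what that discriminator approach might look like in Python; the kinds and method names are illustrative, not from Alex’s post:

```python
class PetOwner:
    # The explicit route: the caller names the kind of pet, and the
    # argument still has to honor that branch's implicit contract.
    def take_care_of(self, pet, kind):
        if kind == "duck":
            return pet.quack()
        elif kind == "robot":
            return pet.beep()
        elif kind == "car":
            return pet.vroom()
        raise ValueError("unknown kind: " + kind)

class Duck:
    def quack(self):
        return "quack quack"

PetOwner().take_care_of(Duck(), kind="duck")  # "quack quack"
```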

One observation I’ve heard is that “duck typing” sounds kind of like “duct taping.” Not sure if there’s anything to that, but if you forgive a bit of a stretch, I think it may be an apt analogy.

On the Apollo 13 mission, the crew was faced with a situation where carbon dioxide levels were rising to dangerous levels in the Lunar Module. They had plenty of filters, but their square filters would not fit in the round barrels that housed the filters. In other words, their square filters were the wrong type (whether dynamic or static). Their solution was to use duct tape to cobble something together that would work. It wasn’t the solution intended by the original design, but as long as the final contraption acts like an air filter (duck typing), they would survive. And they did. Like I said, the analogy is a bit of a stretch, but I think it embodies the duck typing approach.

Perhaps a better term is typing by usage. With explicit typing, you explicitly declare an object to be one type or another (whether at compile time or run time). With typing by usage, if it just happens to meet the needs of the consumer, then hey! It’s a duck!

For static typed languages, I really like the idea of structural typing. It provides a nice combination of type safety and flexibility. Mark Rendle, in the comments to Eric’s blog post provides this observation:

Structural Typing may be thought of as a kind of compiler-enforced subset of Duck Typing.

In other words, it’s duck typing for static typed languages.
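Interestingly, Python itself later grew a static structural-typing feature, typing.Protocol (PEP 544, long after this post was written). A minimal sketch: a type checker like mypy verifies the match statically, and runtime_checkable even allows an isinstance test.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:  # no inheritance from Quacker -- the match is structural
    def quack(self) -> str:
        return "quacketty quack quack"

def listen(d: Quacker) -> str:
    return d.quack()

isinstance(Duck(), Quacker)  # True: shape, not ancestry, decides
```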

Also in the comments to Eric’s post, someone linked to my blog post about duck typing. At the time I wrote that, “structural typing” wasn’t in my vocabulary. If it had been, I could have been more precise in my post. For static languages, I find structural typing to be very compelling.

What do you think? Did I nail it? Or did I drop the ball and get something wrong or misrepresent an idea? Let me know in the comments.

UPDATE: Sam Livingston-Gray, also known as @geeksam, notes another key difference between late binding and duck typing that I completely missed:

@haacked method_missing illustrates the disconnect between binding and typing: an obj can choose how and whether to respond to a message

Recall that Eric defines “Late Binding” as:

“Binding” is the association of a particular method, property, variable, and so on, with a particular name in a particular context; if done by the compiler then it is “early binding”, and if it is done at runtime then it is “late binding”.

You could argue that method_missing is another form of late binding where the name is bound to method_missing because there is no other name to bind to. But conceptually, it feels very different to me. With binding, you usually think of the caller determining which method to call by name. And whether it’s bound early or late is no matter; it’s still the caller’s choice. With method_missing, it’s the object in control of whether it’s going to respond to the method call (message).
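A rough Python analogue of Ruby’s method_missing is __getattr__, which runs only after normal attribute lookup fails, so the object itself chooses how and whether to respond. The class here is purely illustrative:

```python
class Impersonator:
    # __getattr__ is called only when normal lookup fails, so the
    # *object* decides at runtime how -- and whether -- to respond
    # to a message it never declared.
    def __getattr__(self, name):
        if name.startswith("quack"):
            return lambda: "quack quack (improvised)"
        raise AttributeError(name)

Impersonator().quack()           # handled dynamically
hasattr(Impersonator(), "woof")  # False -- the object declined
```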


Today, I read a comment about a group of people who feel betrayed by the increase in code that Microsoft is releasing under an open source license.

I and my team are really troubled by MS’s apparent policy of Open-sourcing code from under our feet.

We are in an industry with rather paranoid clients that contractually bar us from using Open Source software. We have immensely enjoyed using Rx but its open-sourcing threw us into turmoil.

I feel like we have been betrayed, and will think twice before adopting some new Microsoft framework again, wary of it being Open Sourced later on without prior warning.

And I understand this feeling. I really do. Feelings of betrayal are a natural consequence of progress.

By betrayal, I mean the following definition from Webster:

to hurt (someone who trusts you, such as a friend or relative) by not giving help or by doing something morally wrong

The Catholic Church at the time must have felt betrayed by Galileo when he lent his support to the heliocentric model of the universe because it deviated from their orthodoxy.

Factory owners who profited on cheap labor from children must have felt betrayed by the passage of child labor laws.

Racists who held onto the idea that other races were inferior to their own must have felt betrayed by the passage of the civil rights act.

These are grand examples. But often we experience much smaller and simpler betrayals as a result of little tiny footsteps of progress. Such as the betrayal some might feel when changing winds in the industry make their antiquated business practices harder to sustain.

It’s important to note that betrayal does not imply progress. It can also result from regression. But doing the right thing always leads to some group feeling betrayed.

And what do we do about those who feel betrayed? It’s easy to deride them as an anachronistic holdover from a past that no longer has a place in the present. That’s the easy way to deal with it.

But for those who’ve only known the world to be one way all their life, feelings of betrayal are understandable when the world suddenly changes. Hopefully we can reach out and help those adapt to the new way. There’s room for everyone. It’s a painful path. But it can happen.

As for the rest, who would rather stew in the feelings of betrayal and hold onto their outdated ideas with an iron fist: these are the folks we should never, under any circumstances, let impede progress.

I’m sad to hear that these folks feel betrayed because their clients won’t allow them to use Rx now that it is open source. Rx is a powerful and useful library. But when you put it in perspective, clients like these are going extinct. History has never been kind to businesses who cannot adapt to change. Best to try and educate the clients about how their policy is regressive and detrimental to their future health. And if that’s not successful, then look for clients who are adaptable and wisely embrace open source as one of many means of ensuring their own survival in the long run.


In my last 2013 recap blog post I wrote about the number of steps I recorded with Fitbit last year and the year prior. In case you missed it, they were:

  • 2012 - 3,115,606 steps (Note, I started recording in March)
  • 2013 - 4,577,481 steps

Someone asked me how I got those numbers because the Fitbit dashboard is confusing. Indeed it is. Here’s how.

First, when you go to the dashboard, you have to mouse over the section to see the “more info” link.

Fitbit dashboard

Then, click on the “Year” tab. But you’ll notice that you still don’t see summary data. You have to click on the little back link.

Fitbit activity without totals

There’s a brief pause, but the summary totals should show up at the bottom.

Fitbit activity with totals

Unfortunately I can’t get the same report for my sleep patterns without paying for the premium account. I might do that if I had tracked my sleep better last year.

Hopefully, if you have a Fitbit, this helps you.


Another year comes to an end and tradition demands that I write a recap for the year. But it doesn’t require that I write a very good one.

I wish I had the time and energy to write one of those recaps that captures the essence of the year in a thoughtful insightful manner. The best I can muster is “a lot of stuff happened.”

Here, look at this picture of my tiny kids playing chess.



This has been a great year for me. My son started first grade, and much to our relief, he loves it. At home, he started to learn to program. I even had my first conversation with him about refactoring and the DRY principle. Parents, it’s never too early to talk to your kids about clean code!

My daughter just gets more and more interesting and fun to be around. She has a big personality and just wins over any room she’s in. Sometimes we take walks together and she’s now able to walk with me over a mile to the local frozen yogurt place. But she usually makes me carry her part of the way back.

And I finished my second year at GitHub. After a year and a half solely focused on GitHub for Windows, I’ve been able to bounce around a few other cool projects which keeps me excited every day. I still love working here.

I spoke at a few conferences, but I’ve certainly ramped that down as travel is tough on the family and I had a tiny bit more work travel this year.


Contribution graphs are not a great way to determine the impact you’ve had in a year. They don’t capture a lot of important work that happens outside of GitHub. Yes, it’s true. Productive work does happen that’s not captured by a Git commit.

Even so, I find them interesting to look at for some historical perspective. The gaps in a contribution graph tell as much a story as the areas that are filled in. For example, you can see when I go on vacation based on my graphs, though I’m not very good at staying away from the computer when I do.

Here’s two of my contribution graphs. The first one is what I see as it shows contributions to both public and private repositories.

Haacked Contribution Graph

The second one shows what the public sees. This is perhaps a decent, though not perfect, representation of the work I’ve done with open source.

Haacked public Contribution Graph

As you can see, after shipping a major release of GitHub for Windows, I shifted my focus to some open source projects, making my public contribution graph much greener in the latter half of the year.

What I wrote, that people seemed to like

My three most popular posts written in 2013 according to Google Analytics are:

  1. Death to the if statement - more robust code with less control structures with 25,987 page views.
  2. Argue well by losing - You only learn something when you lose an argument with 21,264 views.
  3. Test Better - How developers should become better testers with 15,618 views.

By the way, does anyone know how to easily do a report in Google Analytics for content created in a year? I’d find that useful.

What I’ve Shipped

This past year, I’ve had the pleasure to be involved in shipping the following:

  1. GitHub Enterprise support in GitHub for Windows.
  2. RestSharp - a few releases actually.
  3. According to Fitbit, I had 4,577,481 steps this year. That’s approximately 2,099 miles. Compare this to the 3.1 million steps I took the year before. That’s a huge improvement!

You People

Yeah, let’s talk about you. You people are my favorite. Well, most of you.

  • Visitors: 1,462,003 unique visitors made 2,091,606 visits. Those numbers are down 24.9% and 22.27% respectively from the previous year. I’d like to blame the death of blogging, but I suspect the quality of my writing has declined as I’ve focused more on other areas of my life.

  • RSS Subscribers: According to FeedBurner, there are still 84,377 subscribers to my RSS feed, which is surprising given the demise of Google Reader. I guess everybody found replacements. Or the stats are jacked.

Next Year

I’m looking forward to 2014. I’ve started learning F# by reading the Real-World Functional Programming book by Tomas Petricek and Jon Skeet. I’m hoping to incorporate more functional programming into my toolset. And I’m hoping to take even more steps.

Hopefully I can speak at a few conferences again this year. I’d love to speak in some new places. I’m really hoping to get a gig in South Korea this year. It’d be a chance to see how the industry is really growing there and to visit some of my family.


Well this is a bit embarrassing.

I recently migrated my blog to Jekyll and subsequently wrote about my painstaking work to preserve my URLs.

But after the migration, despite all my efforts, I faced an onslaught of reports of broken URLs. So what happened?

Broken glass by Tiago Pádua CC-BY-2.0

Well it’s silly. The program I wrote to migrate my posts to Jekyll had a subtle flaw. In order to verify that my URL would be correct, it made a web request to my old blog (which was still up at the time) using the generated file name.

This was how I verified that the Jekyll URL would be correct. The problem is that Subtext had this stupid feature where the date part of the URL didn’t matter so much. It only cared about the slug at the end of the URL.

Thus requests for two URLs that differed only in the date portion would receive the same content.


Picard Face Palm

This “feature” masked a timezone bug in my exporter that was causing many posts to generate the wrong date. Unfortunately, my export script had no idea these were bad URLs.

Fixing it!

So how’d I fix it? First, I updated my 404 page with information about the problem and where to report the missing file. You can set a 404 page by adding a 404.html file at the root of your Jekyll repository. GitHub pages will serve this file in the case of a 404 error.

I then panicked and started fixing errors by hand until my helpful colleagues Ben Balter and Joel Glovier reminded me to try Google Analytics and Google Webmaster Tools.

If you haven’t set up Google Webmaster Tools for your website, you really should. There are some great tools in there including the ability to export a CSV file containing 404 errors.

So I did that and wrote a new program, Jekyll URL Fixer, to examine the 404s and look for the corresponding Jekyll post files. I then renamed the affected files and updated the YAML front matter with the correct date.

Hopefully this fixes most of my bad URLs. Of course, if anyone linked to the broken URL in the interim, they’re kind of hosed in that regard.

I apologize for the inconvenience if you couldn’t find the content you were looking for, and am happy to refund anyone’s subscription fees (up to a maximum of $0.00 per person).


In my last post, I wrote about preserving URLs when migrating to Jekyll. In this post, I’ll show how to preserve your Disqus comments.

This ended up being a little bit trickier. By default, Disqus stores comments keyed by a URL. So if people create Disqus comments at a given URL, you need to preserve that exact URL in order for those comments to keep showing up.

In my last post, I showed how to preserve such a URL, but it’s not quite exact. With Jekyll, a request for the old URL gets redirected to the same URL with a trailing slash appended. Note that trailing slash. To Disqus, these are two different URLs and thus my comments for that page would not load anymore.

Fortunately, Disqus allows you to set a Disqus Identifier that it uses to look up a page’s comment thread. For example, if you view source on a migrated post of mine, you’ll see something like this:

<script type="text/javascript">
  var disqus_shortname = 'haacked';
  var disqus_identifier = '18902';
  var disqus_url = '';
  // ...omitted
</script>

The disqus_identifier can pretty much be any string. Subtext, my old blog engine, set this to the database generated ID of the blog post. So to keep my post comments, I just needed to preserve that as I migrated over to Jekyll.

So what I did was add my own field to my migrated Jekyll posts. You can see an example by clicking edit on one of the older posts. Here’s the Yaml frontmatter for that post.

layout: post
title: "Code Review Like You Mean It"
date: 2013-10-28 -0800
comments: true
disqus_identifier: 18902
categories: [open source,github,code]

This adds a new disqus_identifier field that can be accessed in the Jekyll templates. Unfortunately, the default templates you’ll find in the wild (such as the Octopress ones) won’t know what to do with this. So I updated the disqus.html Jekyll template include that comes with most templates. You can see the full source in this gist.

But here’s the gist of that gist:

var disqus_identifier = '{% if page.disqus_identifier %}{{ page.disqus_identifier}}{% else %}{{ site.url }}{{ page.url }}{% endif %}';
var disqus_url = '{{ site.url }}{{ page.url }}';

If your current blog engine doesn’t explicitly set a disqus_identifier, the identifier is the exact URL where the comments are hosted. So you could set the disqus_identifier to that for your old posts and leave it empty for your new ones.


In my last post I wrote about migrating my blog to Jekyll and GitHub Pages. Travis Illig, a long time Subtext user asked me the following question:

The only thing I haven’t really figured out is how to nicely handle the redirect from old URLs (/archive/blah/something.aspx) to the new ones without extensions (/archive/blah/something/). I’ve seen some meta redirect stuff combined with JavaScript but… UGH.

UGH Indeed! I decided not to bother with changing my existing URLs to be extensionless. Instead, I focused on preserving my existing permalinks by structuring my posts such that they preserved their existing URLs.

How did I do this? My old URLs have an ASP.NET .aspx extension. Surely, GitHub Pages won’t serve up ASPX files. This is true. But what it will serve up is a folder that just happens to have a name that ends with “.aspx”.

The trick is in how I named the markdown files for my old posts. For example, check out a recent post: 2013-11-20-declare-dont-tell.aspx.markdown

Jekyll takes the part after the date and before the .markdown extension and uses that as the post’s URL slug. In this case, the “slug” is declare-dont-tell.aspx.

The way it handles extensionless URLs is to create a folder with the slug name (in this case a folder named declare-dont-tell.aspx) and creates the blog post as a file named index.html in that folder. Simple.

Thus the URL for that blog post ends with declare-dont-tell.aspx/. But here’s the beautiful part. GitHub Pages doesn’t require that trailing slash. So if you make a request for the URL without the trailing slash, everything still works! GitHub simply redirects you to the version with the trailing slash.

Meanwhile, all my new posts from this point on will have a nice clean extensionless slug without breaking any permalinks for my old posts.


The older I get, the less I want to worry about hosting my own website. Perhaps this is the real reason for the rise of cloud hosting. All of us old fogeys became too lazy to manage our own infrastructure.

For example, a while back my blog went down and as I frantically tried to fix it, I received this helpful piece of advice from Zach Holman.

@haacked the ops team gets paged when is down. You still have a lot to learn, buddy.

Indeed. Always be learning.

What Zach refers to is the fact that his blog is hosted as a GitHub Pages repository. So when his blog goes down (ostensibly because GitHub Pages is down), the amazing superheroes of the GitHub operations team jumps into action to save the day. These folks are amazing. Why not benefit from their expertise?

So I did.

One of the beautiful things about GitHub Pages is that it supports Jekyll, a simple blog aware static site generator.

If you can see this blog post, then the transition of my blog over to Jekyll is complete and (mostly) successful. The source for this blog now lives in a GitHub repository. Let me know if you find any issues. Or better yet, click that edit button and send me a pull request!

Screen grab from the 1931 movie Dr. Jekyll and Mr. Hyde public domain

There are two main approaches you can take with Jekyll. In one approach, you can use something like Octopress to generate your site locally and then deploy the locally generated output to a gh-pages branch. Octopress has a nice set of themes (my new design is based off of the Greyshade theme) and plugins you can take advantage of with this approach. The downside of that approach is you can’t publish a blog post solely through the website.

Another approach is to use raw Jekyll with GitHub pages and let GitHub Pages generate your site when your content changes. The downside of this approach is that for security reasons, you have a very limited set of Jekyll plugins at your disposal. Even so, there’s quite a lot you can do. My blog is using this approach.

This allows me to create and edit blog posts directly from the web interface. For example, every blog post has an “edit” link. If you click on that, it’ll fork my blog and take you to an edit page for that blog post. So if you’re a kind soul, you could fix a typo and send me a pull request and I can update my blog simply by clicking the Merge button.

Local Jekyll

Even with this latter approach, I found it useful to have Jekyll running locally on my Windows machine in order to test things out. I just followed the helpful instructions on this GitHub Help page. If you are on Windows, you will inevitably run into some weird UTF Encoding issue. The solution is fortunately very easy.

Migrating from Subtext

Previously, I hosted my blog using Subtext, a database driven ASP.NET application. In migrating to Jekyll, I decided to go all out and convert all of my existing blog posts into Markdown. I wrote a hackish ugly console application, Subtext Jekyll Exporter, to grab all the blog post records from my existing blog database.

The app then shells out to Pandoc to convert the HTML for each post into Markdown. This isn’t super fast, but it’s a one time only operation.

If you have a blog stored in a database, you can probably modify the Subtext Jekyll Exporter to create the markdown post files for your Jekyll blog. I apologize for the ugliness of the code, but I have no plans to maintain it as it’s done its job for me.

The Future of Subtext

It’s with heavy heart that I admit publicly what everyone has known for a while. Subtext is done. None of the main contributors, myself included, have made a commit in a long while.

I don’t say dead because the source code is available on GitHub under a permissive open source license. So anyone can take the code and continue to work on it if necessary. But the truth is, there are much better blog engines out there.

I started Subtext with high hopes eight years ago. Despite a valiant effort to tame the code, what I learned in that time was that I should have started from scratch.

I was heavily influenced by this blog post from Joel Spolsky, Things You Should Never Do.

Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make:

They decided to rewrite the code from scratch.

Perhaps it is a strategic mistake for a software company, but I’m not so sure the same rules apply to an open source project done in your spare time.

So much time and effort was sacrificed at the altar of backwards compatibility as we moved mountains to make the migration from previous versions to next continue to work while trying to refactor as much as possible. All that time dealing with the past was time not spent on innovative new features. I was proud of the engineering we did to make migrations work as well as they did, but I’m sad I never got to implement some of the big ideas I had.

Despite the crap ton of hours I put into it, so much so that it strained my relationship at times, I don’t regret the experience at all. Working on Subtext opened so many doors for me and sparked many lifelong friendships.

So long Subtext. I’ll miss that little submarine.

code, rx comments edit

Judging by the reaction to my Death to the If statement where I talked about the benefits of declarative code and reducing control statements, not everyone is on board with this concept. That’s fine, I don’t lose sleep over people being wrong.

Photo by Grégoire Lannoy CC BY 2.0

My suspicion is that the reason people don’t have the “aha! moment” is because examples of “declarative” code are too simple. This is understandable because we’re trying to get a concept across, not write the War and Peace of code. A large example becomes unwieldy to describe.

A while back, I tried to tackle this with an example using Reactive Extensions. Imagine the code you would write to handle both the resize and relocation of a window, where you want to save the position to disk, but only after a certain interval has passed since the last of either event.

So you resize the window, and before the interval has passed, you move it. Only after you stop moving and resizing it for the full interval does it save to disk.

Set aside your typical developer bravado and think about what that code looks like in a procedural or object oriented language. You functional reactive programmers can continue to smirk smugly.

The code is going to be a bit gnarly. You will have to write bookkeeping code such as saving the time of the last event so you can check that the duration has passed. This is because you’re telling the computer how to throttle.
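To make that concrete, here’s a minimal language-agnostic sketch (shown in Ruby, with hypothetical names) of the bookkeeping you end up writing by hand: you track the timestamp of the last resize or move event yourself and keep asking whether the quiet interval has elapsed.

```ruby
# Hand-rolled throttling: the caller must track event times and
# repeatedly ask whether the quiet interval has elapsed.
class ManualThrottle
  def initialize(interval)
    @interval = interval   # seconds of quiet required before saving
    @last_event = nil      # bookkeeping state we maintain ourselves
  end

  # Called for every resize or move event, with its timestamp in seconds.
  def on_event(time)
    @last_event = time
  end

  # True only once an event has occurred and the quiet interval has
  # passed since the most recent one.
  def should_save?(now)
    !@last_event.nil? && (now - @last_event) >= @interval
  end
end
```

With events at t = 0 (resize) and t = 3 (move) and a 5-second interval, `should_save?(6)` is false but `should_save?(8)` is true. All of this state-tracking is exactly what a declarative `Throttle` hides.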

With declarative code, you more or less declare what you want. “Hey! Give me a throttle please!” (Just because you’re declaring doesn’t mean you can’t be polite. I like to add a Please suffix to all my methods.) And declarations are much easier to compose together.

This example is one I wrote about in my post Make Async Your Buddy with Reactive Extensions. But I made a mistake in the post. Here’s the code I showed as the end result:

Observable.Merge(
    Observable.FromEventPattern
      <SizeChangedEventHandler, SizeChangedEventArgs>
        (h => SizeChanged += h, h => SizeChanged -= h)
        .Select(e => Unit.Default),
    Observable.FromEventPattern<EventHandler, EventArgs>
        (h => LocationChanged += h, h => LocationChanged -= h)
        .Select(e => Unit.Default)
).Throttle(TimeSpan.FromSeconds(5), RxApp.DeferredScheduler)
.Subscribe(_ => this.SavePlacement());

I’ll give you a second to recover from USBS (Ugly Syntax Blindness Syndrome).

The code isn’t incorrect, but there’s a lot of noise in here due to the boilerplate expressions used to convert an event into an observable sequence of events. I think this detracted from my point.

So today, I realized I should add a couple of really simple extension methods that describe what’s going on and hide the boilerplate.

// Returns an observable sequence of a framework element's
// SizeChanged events.
public static IObservable<SizeChangedEventArgs>
    ObserveResize(this FrameworkElement frameworkElement)
{
  return Observable.FromEventPattern
    <SizeChangedEventHandler, SizeChangedEventArgs>(
        h => frameworkElement.SizeChanged += h,
        h => frameworkElement.SizeChanged -= h)
      .Select(ep => ep.EventArgs);
}

// Returns an observable sequence of a window's
// LocationChanged events.
public static IObservable<EventArgs>
    ObserveLocationChanged(this Window window)
{
  return Observable.FromEventPattern<EventHandler, EventArgs>(
      h => window.LocationChanged += h,
      h => window.LocationChanged -= h)
    .Select(ep => ep.EventArgs);
}

This then allows me to rewrite the original code like so:

Observable.Merge(
    this.ObserveResize().Select(_ => Unit.Default),
    this.ObserveLocationChanged().Select(_ => Unit.Default))
  .Throttle(TimeSpan.FromSeconds(5), RxApp.MainThreadScheduler)
  .Subscribe(_ => SavePlacement());

That code is much easier to read and understand, and it avoids the plague of USBS (unless you’re a Ruby developer, in which case you have a high sensitivity to USBS).

The important part is we don’t have to maintain tricky bookkeeping code. There’s no code here that keeps track of the last time we saw one or the other event. Here, we just declare what we want and Reactive Extensions handles the rest.

This is what I mean by declare, don’t tell. We don’t tell the code how to do its job. We just declare what we need done.

UPDATE: ReactiveUI (RxUI) 5.0 has an assembly, Reactive.Events, that maps every event to an observable for you! For example:

this.Events().SizeChanged
  .Subscribe(_ => Console.WriteLine("foo"));

That makes things much easier!

code comments edit

Not long ago I wrote a blog post about how platform restrictions harm .NET. This led to a lot of discussion online and on Twitter. At some point David Kean suggested a more productive approach would be to create a UserVoice issue. So I did and it quickly gathered a lot of votes.

I’m visiting Toronto right now so I’ve been off of the Internet all day and missed all the hubbub when it happened. I found out about it when I logged into Gmail and I saw I had an email that the user voice issue I created was closed. My initial angry knee-jerk reaction was “What?! How could they close this without addressing it?!” as I furiously clicked on the subject to read the email and follow the link to this post.


Serious kudos to the .NET team for this. It looks like most of the interesting PCL packages are now licensed without platform restrictions. As an example of how this small change sends out ripples of goodness, we can now take a dependency on the portable HttpClient and make our own library more cross-platform and portable without a huge amount of work.

I’m also excited about the partnership between Microsoft and Xamarin this represents. I do believe C# is a great language for cross-platform development and it’s good to see Microsoft jumping back on board with this. This is a marked change from the situation I wrote about in 2012.

code comments edit

Over the past few years I’ve become more and more interested in functional programming concepts and the power, expressiveness, and elegance they hold.

But you don’t have to abandon your language of choice and wander the desert eating moths and preaching the gospel of F#, Haskell, or Clojure to enjoy these benefits today!

In his blog post, Unconditional Programming, Michael Feathers ponders how less control structures lead to better code,

Control structures have been around nearly as long as programming but it’s hard for me to see them as more than an annoyance.  Over and over again, I find that better code has fewer if-statements, fewer switches, and fewer loops.  Often this happens because developers are using languages with better abstractions.  They aren’t consciously trying to avoid control structures but they do.

We don’t need to try and kill every if statement, but perhaps the more we do, the better our code becomes.

Photo from Wikimedia: cover of If by the artist Mindless Self Indulgence

He then provides an example in Ruby of a padded “take” method.

…I needed to write a ‘take’ function to take elements from the beginning of an array.  Ruby already has a take function on Enumerable, but I needed special behavior.  If the number of elements I needed was larger than the number of elements in the array, I needed to pad the remaining space in the resulting array with zeros.

I recommend reading his post. It’s quite interesting. At the risk of spoiling the punch line, here’s the before code which makes use of a conditional…

  def padded_take ary, n
    if n <= ary.length
      ary.take(n)
    else
      ary + [0] * (n - ary.length)
    end
  end

… and here is the after code without the conditional. In this case, he pads the source array with just enough elements as needed and then does the take.

  def pad ary, n
    pad_length = [0, n - ary.length].max
    ary + [0] * pad_length
  end

  def padded_take ary, n
    pad(ary, n).take(n)
  end

I thought it would be interesting to translate the after code to C#. One thing to note about the Ruby code is that it always allocates a new array whether it’s needed or not.

Now, I haven’t done any benchmarks on it so I have no idea if that’s bad or not compared to how often the code is called etc. But it occurred to me that we could use lazy evaluation in C# and completely circumvent the need to allocate a new array while still being expressive and elegant.

I decided to write it as an extension method (I guess that’s similar to a Mixin for you Ruby folks?).

public static IEnumerable<T> PaddedTake<T>(
  this IEnumerable<T> source, int count)
{
  return source
    .Concat(Enumerable.Repeat(default(T), count))
    .Take(count);
}

This code takes advantage of some LINQ methods. The important thing to note is that Concat and Repeat are lazily evaluated. That’s why I didn’t need to do any math to figure out the difference in length between the source array and the take count.

I just passed the total count we want to take to Repeat. Since Repeat is lazy, we could pass in int.MaxValue if we wanted to get all crazy up in here. I just passed in count as it will always be enough and I like to play it safe.

Now my Ruby friends at work might scoff at all those angle brackets and parentheses in the code, but you have to admit that it’s an elegant solution to the original problem.
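Incidentally, those Ruby folks can get the same laziness at home. On Ruby 2.6+, `Enumerable#chain` concatenates enumerables without allocating a combined array, and `take` stops as soon as it has enough elements, so chaining on an infinite pad source costs nothing. A rough sketch (the name `lazy_padded_take` is mine, to avoid clashing with the earlier definition):

```ruby
# Lazy padded take in Ruby: chain the source onto an endless stream of
# zeros and take only what we need. No padded array is ever allocated,
# and the infinite cycle is never touched when n <= ary.length.
def lazy_padded_take(ary, n)
  ary.chain([0].cycle).take(n)
end
```

For example, `lazy_padded_take([1, 2, 3], 5)` returns `[1, 2, 3, 0, 0]`, and `lazy_padded_take([1, 2, 3], 2)` returns `[1, 2]`.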

Here is a test to demonstrate usage and show it works.

var items = new[] {1, 2, 3};

var result = items.PaddedTake(5).ToArray();

Assert.Equal(5, result.Length);
Assert.Equal(1, result[0]);
Assert.Equal(2, result[1]);
Assert.Equal(3, result[2]);
Assert.Equal(0, result[3]);
Assert.Equal(0, result[4]);

I also ran some quick perf tests comparing PaddedTake to the built-in Take. PaddedTake is a tiny bit slower, but the difference is like the extra light cast by a firefly at noon on a sunny day. The performance of this method is affected far more by the number of elements in the array and the number of elements you are taking. But in my tests, the performance of PaddedTake stays pretty close to Take as we grow the array and the take.

I think it’d be interesting to have a build task that reported back the number of `if` statements and other control structures per line of code and see if you can bring that down over time. In any case, I hope this helps you improve your own code!

open source, code comments edit

Octokit.net targets multiple platforms. This involves a large risk to my sanity. You can see the general approach here in the Octokit directory of our project:


Mono gets a project file! MonoAndroid gets a project file! MonoTouch gets a project file! Everybody gets a project file!

Each of these projects references the same set of .cs files. When I add a file to Octokit.csproj, I have to remember to add that file to the other four project files. As you can imagine, this is easy to forget.

It’s a real pain. So I opened up a feature request on FAKE, the tool we use for our build (more on that later) and asked them for a task that would fail the build if another project file in the same directory was missing a file from the “source” project file. I figured this would be something easy for F# to handle.
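The check itself doesn’t require much: list the `<Compile>` items in each project file and diff them. Here’s a rough sketch of the idea in Ruby (the real FAKE task is written in F#, and real csproj files also carry an MSBuild XML namespace that this sketch ignores):

```ruby
require 'rexml/document'

# Collect the Include paths of all <Compile> items in a project file.
def compile_items(csproj_xml)
  doc = REXML::Document.new(csproj_xml)
  items = []
  doc.elements.each('//Compile') { |el| items << el.attributes['Include'] }
  items
end

# Files listed in the "source" project but missing from another project
# file in the same directory -- a non-empty result should fail the build.
def missing_files(source_xml, other_xml)
  compile_items(source_xml) - compile_items(other_xml)
end
```

For example, if the source project compiles `A.cs` and `B.cs` but the MonoAndroid project only lists `A.cs`, `missing_files` returns `["B.cs"]`.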

The initial response from the maintainer of FAKE, Steffen Forkman, was this:

What you need is a better project system ;-)


This problem (along with so many project file merge conflicts) would almost completely go away with file patterns in project files. I’ve been asking for this for a long time (I asked the Visual Studio team for it the day I joined Microsoft, or maybe it was the first month, I don’t recall). There’s a UserVoice item requesting this, so go vote it up! (Also, go vote up the platform restriction issue that’s affecting us as well.)

In any case, sorry to say unlimited chocolate fountains don’t exist and I don’t have the project system I want. So let’s deal with it.

A few days later, I got a pull request implementing this. When I ran the build, I saw the following snippet in the build output.

Running build failed.
System.Exception: Missing files in  D:\Octokit\Octokit-MonoAndroid.csproj:

That’s telling me that somebody forgot to add the class OrganizationMembersClient.cs to the Octokit-MonoAndroid.csproj. Wow! Isn’t open source grand?

A big thanks to Steffen and the other members of the FAKE community who pitched in to build a small but very useful feature. In a follow-up post, I’ll write a little bit about why we moved to FAKE for our build.


Update

I opened an issue to take this to the next step. Rather than just verify the project files, I want some way to automatically modify or generate them.

Update 2

FAKE just got even better with the new FixProjects task! For now, we’ve added this as an explicit command.

.\build FixProjects

Over time, we may just integrate this into the build directly.

open source, code, github comments edit

Most developers are aware of the potential pitfalls of premature optimization and premature generalization. At least I hope they are. But what about premature standardization, a close cousin to premature generalization?

It’s human nature. When patterns emerge, they tempt people to drop everything and put together a standard to codify the pattern. After all, everyone benefits from a standard, right? Isn’t it a great way to ensure interoperability?

Yes, standards can be helpful. But to shoehorn a pattern into a standard prematurely can stifle innovation. New advances are often evolutionary. Multiple ideas compete and the best ones (hopefully) gain acceptance over time while the other ideas die out from lack of interest.

Once standardization is in place, people spend so much energy on abiding by the standard rather than experiment with alternative ideas. Those who come up with alternative ideas become mocked for not following “the standard.” This is detrimental.

In his Rules of Standardization, Yaron Goland suggests that before we adopt a standard,

The technology must be very old and very well understood

He proposes twenty years as a good rule of thumb. He also suggests that,

Standardizing the technology must provide greater advantage to the software writing community than keeping the technology incompatible

This is a good rule of thumb to contemplate before one proposes a standard.

Social Standards

So far, I’ve focused on software interoperability standards. Software has a tendency to be a real stickler when it comes to data exchange. If even one bit is out of place, software loses its shit.

For example, if my code sends your code a date formatted as ISO 8601, but your code expects a date in Unix Time, stuffs gonna be broke™.
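A quick sketch of that mismatch in Ruby (the `parse_unix_time` consumer is hypothetical, just to illustrate the failure):

```ruby
require 'time'

# My code sends a date as an ISO 8601 string...
iso = Time.utc(2013, 9, 25).iso8601   # "2013-09-25T00:00:00Z"

# ...but your code expects Unix time: an integer count of seconds.
def parse_unix_time(value)
  Time.at(Integer(value)).utc
end

parse_unix_time(1380067200)  # fine: 2013-09-25 00:00:00 UTC
# parse_unix_time(iso)       # ArgumentError: stuffs gonna be broke(tm)
```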

But social standards are different. By a “social standard” I mean a convention of behavior among people. And the thing about people is we’re pretty damn flexible, Hacker News crowd notwithstanding.

Rather than being enforced by software or specifications, social standards tend to be enforced through the use of encouragement, coercion, and shaming.

Good social standards are not declared so much as they emerge based on what people do already. If people converge on a standard, then it becomes the standard. And it’s only the standard so long as people adopt it.

This reminds me of a quote by W.L. Gore & Associates’ CEO, Terri Kelly on leadership at a non-hierarchical company,

If you call a meeting, and no one shows up, you’re probably not a leader, because no one is willing to follow you.

Standard GitHub issue labels?

I wrote a recent tweet to announce a label that the Octokit team uses to denote low hanging fruit for new contributors,

For those looking to get started with .NET OSS, we tag low hanging fruit as “easy-fix”.

It was not my intention to create a new social standard.

Someone asked me why we didn’t use the “jump in” label proposed by Nik Molnar,

The idea for a standardized issue label for open source projects came from the two pieces of feedback I consistently hear from would-be contributors:

  1. “I’m not sure where to start with contributing to project X.”
  2. “I’ll try to pick off a bug on the backlog as soon as I’ve acquainted myself enough with the codebase to provide value.”

In the comments to that blog post, Glenn Block notes that the ScriptCS project is using the “YOU TAKE IT” label to accomplish the same thing.

About two and a half years earlier, I blogged about the “UpForGrabs” label the NuGet team was using for the same reason.

As you can see, multiple people over time have had the same idea. So the question was raised to me, would I agree that “standardizing” a label to invite contributors might be a good thing?

To rephrase one of Goland’s rules of standardization,

A social standard must provide greater advantage to the software community than just doing your own thing.

This is a prime example of a social standard and in this case, I don’t think it provides a greater advantage than each project doing its own thing. At least not yet. If one arises naturally because everyone thinks it’s a great idea, then I’m sold! But I don’t think this is something that can just be declared to be a standard. It requires more experimentation.

I think the real problem is that these labels are just not descriptive enough. One issue I have with Up For Grabs, You Take It, and Jump In is that they seem too focused on giving commands to the potential contributor: “HEY! YOU TAKE IT! WE DON’T WANT IT!” They’re focused on the relationship of the core team to the issue. I think the labels should describe the issue and not how the core team wants new contributors to interact with it.

What makes an issue appeal to a new contributor is different from contributor to contributor. So rather than a generic “UpForGrabs” label, I think a set of labels that are descriptive of the issue make sense. People can then self-select the issues that appeal to them.

For many new contributors, an issue labeled as “easy-fix” is going to appeal to their need to dip their toe into OSS. For others, issues labeled as “docs-and-samples” will fit their abilities better.

So far, I’ve been delighted that several brand new OSS contributors sent us pull requests. It far surpassed my expectations. Of course, I don’t have a control project with the different labels, so I can’t rightly attribute it to the labels. Science doesn’t work that way. Even if we did, I doubt it’s the labels that made much of any difference here.

Again, this is not an attempt to propose a new standard. This is just an approach we’re experimenting with. If you like this idea, please steal it. If you have a better idea, I’d love to hear it!

github comments edit

Today on the GitHub blog, we announced the first release of Octokit.net.

Octokit is a family of client libraries for the GitHub API. Back in May, we released Octokit libraries for Ruby and Objective-C.

Today we’re releasing the third member of the Octokit family: Octokit.net, the GitHub API toolkit for .NET developers.


GitHub provides a powerful set of tools for developers who build amazing software together. But these tools extend way beyond the website and Git clients.

The GitHub API provides a rich, web-based way to leverage GitHub within your own applications. The Octokit family of libraries makes it easy to call into the API. I can’t wait to see what you build with it.

The project is open source on GitHub, so feel free to contribute with pull requests, issues, etc. You’ll notice that we call it a 0.1.0 release. As of today, it doesn’t implement every endpoint that the GitHub API supports.

We wanted to make sure that it was in use by a real application so we focused on the endpoints that GitHub for Windows needs. If there’s an endpoint that is not implemented, please do log an issue. Or even better, send a pull request!

Our approach in implementing this library was to avoid being overly speculative. We tried to implement features as we needed them based on developing a real production application.

But now that it’s in the wild, we’re curious to see what other types of applications will need from the library.

Platform and Licensing Details

Octokit.net is licensed under the MIT license.

As of today, Octokit.net requires .NET 4.5. We also have a WinRT library for .NET 4.5 Core. This is because we build on top of HttpClient, which is not available in .NET 4.0.

There is a Portable HttpClient package that does work for .NET 4.0, but we won’t distribute it because it has platform limitations that are incompatible with our license.

I had hoped that its platform limitations would have been removed by now, but that sadly is not the case. If you’re wondering why that matters, read my post here.

However, if you check the repository out, you’ll notice that there’s a branch named haacked/portable-httpclient. If you only plan to deploy on Windows, you can build that branch yourself and make use of it.

Go Forth And Build!

I’ve had great fun working with my team at GitHub over the past few weeks. I hope you have fun building amazing software that extends GitHub in ways we never imagined. Enjoy!

open source, github, code, code review comments edit

If I had to pick just one feature that embodies GitHub (besides emoji support, of course), I’d easily choose the Pull Request (aka PR). According to GitHub’s help docs (emphasis mine),

Pull requests let you tell others about changes you’ve pushed to a GitHub repository. Once a pull request is sent, interested parties can review the set of changes, discuss potential modifications, and even push follow-up commits if necessary.

Some folks are confused by the name “pull request.” Just think of it as a request for the maintainer of the project to “pull” your changes into their repository.

Here’s a screenshot of a pull request for GitHub for Windows where Paul Betts patiently explains why my code might result in the total economic collapse of the world economy.

sample code review

A co-worker code review is a good way to avoid the Danger Zone (slightly NSFW).

Code review is at the heart of the GitHub collaboration model. And for good reason! There’s a rich set of research about the efficacy of code reviews.

In one of my favorite software books, Facts and Fallacies of Software Engineering by Robert Glass, Fact 37 points out,

Rigorous inspections can remove up to 90 percent of errors from a software product before the first test case is run.

And the best part is that reviews are cost-effective!

Furthermore, the same studies show that the cost of inspections is less than the cost of the testing that would be necessary to find the same errors.

One of my other favorite software books, Code Complete by Steve McConnell, points out that,

the average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections are 55 and 60 percent.

Note that McConnell is referring to evidence for the average effectiveness while Glass refers to evidence for the peak effectiveness.

The best part though, is that Code Review isn’t just useful for finding defects. It’s a great way to spread information about coding standards and conventions to others as well as a great teaching tool. I learn a lot when my peers review my code and I use it as an opportunity to teach others who submit PRs to my projects.

Effective Code Review

You’ll notice that Glass and McConnell use the term “code inspection” and not “code review.” A lot of the time, when we think of code review, we think of simply looking the code up and down a bit, making a few terse comments about obvious glaring errors, and then calling it a day.

I know I’ve been guilty of this “drive-by” code review approach. It’s especially easy to do with pull requests.

But what these gentlemen refer to is a much more thorough and rigorous approach to reviewing code. I’ve found that when I do it well, a proper code review is just as intense and mentally taxing as writing code, if not more so. I usually like to take a nap afterwards.

Here are a few tips I’ve learned over the years for doing code reviews well.

Review a reasonable amount of code at a time

This is one of the hardest tips for me to follow. When I start a review of a pull request, I am so tempted to finish it in one sitting because I’m impatient and want to get back to my own work. Also, I know that others are waiting on the review and I don’t want to hold them up.

But I try and remind myself that code review is my work! Also, a poorly done review is not much better than no review at all. When you realize that code reviews are important, you understand that it’s worth the extra time to do it well.

So I usually stop when I reach that point of review exhaustion and catch myself skipping over code. I just take a break, move onto something else, and return to it later. What better time to catch up on Archer episodes?!

Focus on the code and not the author

This has more to do with the social aspect of code review than defect finding. I try to do my best to focus my comments on the code and not the ability or the mental state of the author. For example, instead of asking “What the hell were you thinking when you wrote this?!” I’ll say, “I’m unclear about what this section of code does. Would you explain it?”.

See? Instead of attacking the author, I’m focusing on the code and my understanding of it.

Of course, it’s possible to follow this advice and still be insulting, “This code makes me want to gouge my eyes out in between my fits of vomiting.” While this sentence focuses on the code and how it makes me feel, it’s still implicitly insulting to the author. Try to avoid that.

Keep a code review checklist

A code review checklist is a really great tool for conducting an effective code review. The checklist should be a gentle reminder of common issues in code you want to review. It shouldn’t represent the only things you review, but a minimal set. You should always be engaging your brain during a review looking for things that might not be on your checklist.

I’ll be honest, as I started writing this post, I only had a mental checklist I ran through. In an effort to avoid being a hypocrite and leveling up my code review, I created a checklist gist.

My checklist includes things like:

  1. Ensure there are unit tests and review those first looking for test gaps. Unit tests are a fantastic way to grasp how code is meant to be used by others and to learn what the expected behavior is.
  2. Review arguments to methods. Make sure arguments to methods make sense and are validated. Consider what happens with boundary conditions.
  3. Look for null reference exceptions. Null references are a bitch and it’s worth looking out for them specifically.
  4. Make sure naming, formatting, etc. follow our conventions and are consistent. I like a codebase that’s fairly consistent so you know what to expect.
  5. Disposable things are disposed. Look for usages of resources that should be disposed but are not.
  6. Security. There is a whole threat and mitigation review process that falls under this bucket. I won’t go into that in this post. But do ask yourself how the code can be exploited.

I also have separate checklists for different platform specific items. For example, if I’m reviewing a WPF application, I’m looking out for cases where we might update the UI on a non-UI thread. Things like that.

Step Through The Code

You’ll note that I don’t mention making sure the code compiles and that the tests pass. I already know this through the magic of the commit status API which is displayed on our pull requests.


However, for more involved or more risky code changes, I do think it’s worthwhile to actually try the code and step through it in the debugger. Here, GitHub has your back with a relatively new feature that makes it easy to get the code for a specific pull request down to your machine.

If you have GitHub for Windows or GitHub for Mac installed and you scroll down to the bottom of any pull request, you’ll see a curious new button.


Click on that button and we’ll clone the pull request code to your local machine so you can quickly and easily try it out.

Note that in Git parlance, this is not the original pull request branch, but a reference (usually named something like pr/42, where 42 is the pull request number), so you should treat it as a read-only branch. But you can always create a branch from that reference and push it to GitHub if you need to.

I often like to do this and run Resharper analysis on the code to highlight things like places where I might want to convert code to use a LINQ expression and things like that.

Sign Off On It

After a few rounds of review, when the code looks good, make sure you let the author know! Praise where praise is due is an important part of code reviews.

At GitHub, when a team is satisfied with a pull request, we tend to comment on it and include the ship it squirrel emoji (:shipit:). That indicates the review is complete, everything looks good, and you are free to ship the changes and merge them to master.

Every team is different, but on the GitHub for Windows team we tend to let the author merge the code into master after someone else signs off on the pull request.

This works well when dealing with pull requests from people who also have commit access. On my open source projects, I tend to post a thumbs up reaction gif to show my immense appreciation for their contribution. I then merge it for them.

Here’s one of my favorites for a very good contribution.

Bruce Lee gives a thumbs up

Be Good To Each Other

Many of my favorite discussions happen around code. There’s something about having working code that focuses a discussion in a way hypothetical discussions do not.

Of course, even this can break down on occasion. But for the most part, if you go into a code review with the idea of both being taught as well as teaching, good things result.

community, personal comments edit

I love a good argument. No really! Even ones online.

The problem is, so few of them are any good. They tend to go nowhere and offer nothing of value. They just consist of one side attempting to browbeat the other into rhetorical submission.

What?! You are not persuaded by my unassailable argument? THEN LET ME MAKE THE SAME POINT WITH ALL CAPS!


You want to argue? Argue with this card! Image from Wikipedia, CC BY-SA 3.0.

So what makes an argument good? (besides when you agree with me which is always a good move)

A while back, I read an interesting article about Professor Daniel H. Cohen, a philosopher who specializes in argumentation theory, that tackles this question.

As an aside, I wonder how his kids feel arguing with someone who’s basically a professor of arguing? Must be hard winning that argument about extending that curfew.

The article starts off with a scenario that captures 99.9% of arguments (online or offline) well:

You are having a heated debate with friends about, say, equality of the sexes. You’ve taken a standpoint and you’re sticking with it. Before you know it, you’ve got so worked up that, regardless of whether you believe your argument is the most valid, you simply just want to win, employing tactics and subterfuge to seek victory.

I like to think of myself as a very logical reasonable person. But when I read this scenario, I realized how often I’ve fallen prey to that even in what should be dispassionate technical arguments!

I’m pretty sure I’m not the only one. I’m just willing to admit it.

Cohen points out that the “war metaphor” is at fault for this tendency. Often, it’s the so-called “losers” of an argument who really win:

He explains, “Suppose you and I have an argument. You believe a proposition, P, and I don’t. I’ve objected, I’ve questioned, I’ve raised all sorts of counter-considerations, and in every case you’ve responded to my satisfaction. At the end of the day, I say, ‘You know what? I guess you’re right.’ So I have a new belief. And it’s not just any belief, but it’s a well-articulated, examined and battle-tested belief.” Cohen continues, “So who won that argument? Well, the war metaphor seems to force us into saying you won, even though I’m the only one who made any cognitive gain.”

The point of a good argument isn’t for one person to simply win over the other. It’s ideally for both to come away with cognitive gains.

Even if the goal of an argument is to reach a decision, the goal isn’t to win; it’s to define the parameters for a good decision and then make the best possible decision with those in mind.

I’ve come to believe that when two reasonably smart people disagree on a subject, at the core, it is often because of one of the following:

  1. One or both of the participants is missing key information.
  2. One or both of the participants made a logic error that leads to a wrong conclusion.
  3. The participants agree on the facts, but have different values and priorities that lead them to disagree on what conclusion should follow from those facts.

In my mind, a good debate tries to expose missing facts and illogical conclusions so that the two in the debate can get to the real crux of the matter: how their biases, experiences, and values shape their beliefs.

I’m assuming here that both participants are invested in the debate. When one isn’t, it becomes overwhelmingly tempting to resort to any means necessary in order to wipe that smug smirk off your opponent’s face.


Of course, both sides will believe they’re the one drawing conclusions from years of objective, rational analysis, but they’re both wrong. In the end, we all succumb to our various biases and values. A good debate can expose those and let participants discuss whether they’re the right biases and values to have in the first place. That’s where an argument really gets somewhere.

Another philosopher, Daniel Dennett, lays out these rhetorical habits when critiquing or arguing in his book, Intuition Pumps And Other Tools for Thinking:

How to compose a successful critical commentary:

  1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
  2. List any points of agreement (especially if they are not matters of general or widespread agreement).
  3. Mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

These habits nicely complement the improved metaphor for arguing espoused by Cohen.

So the next time you get into an argument, think about your goals. Are you just trying to win or are you trying to reach mutual understanding? Then try to apply Dennett’s rhetorical habits as you argue. I’ll try to do the same so if we end up in an argument, there’s a better chance it’ll result in a good one.

This will serve you well not only in your work, but in your personal relationships as well.

code, open source comments edit

Just shipped a new release of RestSharp to NuGet. For those who don’t know, RestSharp is a simple REST and HTTP API Client for .NET.

This release is primarily a bug fix release with a whole lotta bug fixes. It should be fully compatible with the previous version. If it’s not, I’m sorry.

Some highlights:

  • Added Task<T> Overloads for async requests
  • Serialization bug fixes
  • ClientCertificate bug fix for Mono
  • And many more bug fixes…

Full release notes are up on GitHub. If you’re interested in the nitty gritty, you can see every commit that made it into this release using the GitHub compare view.

I want to send a big thanks to everyone who contributed to this release. You should feel proud of your contribution!

Who are you and what did you do to Sheehan?!

Don’t worry! John Sheehan is safe and sound in an undisclosed location. Ha! I kid. I’m beating him senseless every day.

Seriously though, if you use RestSharp, you should buy John Sheehan a beer. Though get in line as Paul Betts owes him at least five beers.

John started RestSharp four years ago and has shepherded it well for a very long time. But a while back he decided to focus more on other technologies. Even so, he held on for a long time tending to his baby even amidst a lot of frustrations, until he finally stopped contributing and left it to the community to handle.

And the community did. Various other folks started taking stewardship of the project and it continued along. This is the beauty of open source.

We at GitHub use RestSharp for the GitHub for Windows application. A little while back, I noticed people stopped reviewing and accepting my pull requests. Turns out the project was temporarily abandoned. So Sheehan gave me commit access and I took the helm getting our bug fixes in as well as reviewing and accepting the many dormant pull requests. That’s why I’m here.

Why RestSharp when there’s HttpClient?

Very good question! System.Net.HttpClient is only available for .NET 4.5. There’s the Portable Class Library (PCL) version, but that is encumbered by silly platform restrictions. I’ve written before about how this harms .NET. I am hopeful they will eventually change it.

RestSharp is unencumbered by platform restrictions - another beautiful thing about open source.

So until Microsoft fixes the licensing on HttpClient, RestSharp is one of the only options for a portable, multi-platform, unencumbered, fully open source HTTP client you can use in all of your applications today. Want to build the next great iOS app using Xamarin tools? Feel free to use RestSharp. Find a bug in using it on Mono? Send a pull request.

The Future of RestSharp

I’m not going to lie. I’m just providing a temporary foster home for RestSharp. When the HttpClient licensing is fixed, I may switch to that and stop shepherding RestSharp. I fully expect others will come along and take it to the next level. Of course it really depends on the feature set it supplies and whether they open source it.

As they say, open source is about scratching an itch. Right now, I’m scratching the “we need fixes in RestSharp” itch. When I no longer have that itch, I’ll hand it off to the next person who has the itch.

But while I’m here, I’m going to fix things up and make them better.

code, open source, github comments edit

The first GitHub Data Challenge launched in 2012 and asked the following compelling question: what would you do with all this data about our coding habits?

The GitHub public timeline is now easy to query and analyze. With hundreds of thousands of events in the timeline every day, there are countless stories to tell.

Excited to play around with all this data? We’d love to see what you come up with.

It was so successful, we did it again this past April. One of those projects really caught my eye: a site that analyzes Popular Coding Conventions on GitHub. It ended up winning second place.

It analyzes code on GitHub and provides interesting graphs showing which coding conventions are more popular among GitHub users. This lets you fight your ever-present software religious wars with some data.

For example, here’s how the Tabs vs Spaces debate lands among Java developers on GitHub.


With that, I’m sure nobody ever will argue tabs over spaces again right? RIGHT?!

What about C#?!

UPDATE: JeongHyoon Byun added C# support! Woohoo!

Sadly, there is no support for C# yet. I logged an issue in the repository about that a while back and was asked to provide examples of C# conventions.

I finally got around to it today. I simply converted the Java examples to C# and added one or two that I’ve debated with my co-workers.

However, to get this done faster, perhaps one of you would be willing to add a simple CSharp convention parser to this project. Here’s a list of the current parsers that can be used as the basis for a new one.

Please please please somebody step up and write that parser. That way I can show my co-worker Paul Betts the error of his naming ways.

humor, personal, company culture comments edit

I avoid mailing lists the same way I avoid fun activities like meetings and pouring lemon juice into bloody scrapes. Even so, I still somehow end up subscribed to one or two. Even worse, once in a while, despite my better judgment, I send an email to such a list and am quickly punished for my transgression with an onslaught of out-of-office auto-replies. You know the type:

Hey there friend! Thanks for your email! No seriously, I’m thanking you even though I haven’t read it. I’m sure it’s important because I’m important.

Unfortunately (for you), I’m off to some island paradise drinking one too many Mai Tais and probably making an ass of myself.

If you need to reach me, you can’t, LOL! You can contact this list of people you don’t know in my absence. Good luck with that!

punishment: Wait till you see the punishment for sending an email during the holidays! Photo by Tomas Alexsiejunas, license: CC BY 2.0.

If you have such a rule set up, let me humbly offer you a friendly constructive tip:


The universe got along just fine for around 14 billion years before you were born. And chances are, it’ll continue to survive for another 100 googol years after your death, until the entropy death of the last proton, or until another universe inflates to take its place. Whichever comes first.

So in the grand scheme of things, nobody cares that you’re out of the office, on vacation, or, worse, too busy to respond to email and so you automatically send me one, as if I have all the time in the world to deal with more email.

Ok, that might have come across as an eensy weensy bit ranty. I’ll try to tone it down and offer something more constructive. After all, I’ve probably been guilty of this in my past and I apologize and have paid my penance (see photo above).

Maybe there’s a tiny bit of a purpose

The first time I experienced widespread use of out-of-office replies was during my time at Microsoft. And to be fair, it does serve a small purpose. While 99.999999% of the world doesn’t care if you’re out of the office (that’s science, folks), sometimes someone has a legitimate need to know who they should contact instead of you. For example, at Microsoft, I had an internal reply set up that directed emails to my manager. The lucky guy.

Fortunately for those using Outlook with Exchange, you can choose a different reply for internal emails than external emails. So definitely go and do that.

The two email rule of out-of-office replies

But what about the rest of us who really don’t care? I offer the following simple idea:

If you must have an out-of-office auto-reply, create a rule to only send it when you receive two direct emails without a response. The idea here is that if I send you one email directly, I can probably wait for you to get back to respond. If I can’t, I’ll send you another “hey, bump!” email and then receive the auto notice. After all, if I send you two emails, sending me one is fair game.

Also, make sure you never ever ever send an auto-reply when you are not in the TO list. That rule alone will cut out billions of out-of-office spam messages to email lists. Ideally, the auto-reply should only occur if you’re the only one in the TO list. Chances are someone else in the TO list will know you’re gone and can reply to the others if necessary. Again, the two email rule could come into play here.
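To make the proposal concrete, here’s a minimal sketch of both rules as a filter function. This is purely hypothetical: no mail client exposes a hook with this exact shape, and all the names (`should_auto_reply`, `unanswered_counts`, the addresses) are made up for illustration.

```python
# Hypothetical sketch of the "two email rule" for out-of-office auto-replies.
# Mail clients don't expose a uniform hook like this; names are invented.

def should_auto_reply(sender, to_list, me, unanswered_counts):
    """Return True only when an auto-reply is warranted.

    unanswered_counts maps a sender's address to the number of direct
    emails they've sent since our last (human) reply to them.
    """
    # Never auto-reply unless I'm the only recipient in the TO list:
    # anyone else on the thread likely knows I'm away and can respond.
    if to_list != [me]:
        return False
    # Two email rule: stay quiet on the first direct email; only the
    # second unanswered "hey, bump!" email triggers the auto notice.
    return unanswered_counts.get(sender, 0) >= 2

counts = {"alice@example.com": 2, "bob@example.com": 1}
me = "me@example.com"
print(should_auto_reply("alice@example.com", [me], me, counts))        # second email: reply
print(should_auto_reply("bob@example.com", [me], me, counts))          # first email: stay quiet
print(should_auto_reply("alice@example.com", [me, "list@example.com"], me, counts))  # list mail: stay quiet
```

The mailing-list check does most of the work: a message addressed to anyone besides you never triggers the notice, no matter how persistent the sender is.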

In the meantime, I think I’m going to switch tactics. Spam me, I spam you. So I may respond to those auto-replies with something like:

Dear So-and-so,

Hey dude, thanks for letting me know that you’re not in your office. I bet you’re on a really important business trip and/or vacation! I bet you have such important things to do there!

Me? Not so much. I wish I was on an important business trip and/or vacation.

It turns out, I have nothing better to do than respond to your automatically generated email to me! Thank you so much for that. The fact that it had your name at the bottom of the email and my email address in the TO: list was a nice personal touch. It really felt like you took the time to lovingly craft that email just for me.

So I thought it would be rude not to respond in kind with a thoughtful response of my own.

Sincerely and without regret,


p.s. I left you a little surprise in your office, but since you’re not there, I hope it doesn’t die before you get back. If it smells bad when you get back, you’ll know.

Hopefully email clients take this up and just implement it automatically because I don’t expect people to take the time to do this right.

What do you think? Are auto-replies more important than I give credit or do we live in a world with a lot of narcissists who must be stopped? Tell me how I’m right or wrong in the comments. Thanks!