git crypto humor oss bash shell unix comments edit

A recent wry tweet by @bcrypt really tickled my funny bone:

gitcoin: the author of the commit sha1 with the longest prefix of 0’s in your repository is now the project maintainer

The genius of the tweet is how it compares Bitcoin’s approach to achieving distributed consensus with the problem of achieving consensus on choosing a project maintainer.

With Bitcoin, there’s a proof-of-work algorithm that relies on generating SHAs until you find one with a certain number of leading zeros. Git commit SHAs could perhaps serve a similar purpose.
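Bitcoin’s real target arithmetic is more involved, but the shape of the search is easy to sketch. Here’s a toy Python version that uses SHA-256 and counts hex zeros rather than bits (the payload string is made up):

```python
import hashlib

def mine(data: str, difficulty: int) -> tuple[int, str]:
    """Try nonces until the SHA-256 hex digest has `difficulty`
    leading zeros. Each extra zero multiplies the expected work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block payload", difficulty=2)
print(nonce, digest)  # digest starts with "00"
```

The only way to find a qualifying hash is brute force, which is what makes it a proof of work.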

Is this a good way to pick a project maintainer? Probably not. But then again, it’s not that far off from how I make most important life decisions. If your project wants to take a walk on the wild side, I’ve got just the command for you.

A simple solution

Run the following command in a Git repository and it’ll return the name of the author, the commit date, and the SHA of the commit that has the lowest SHA sorted lexicographically.

git log --pretty=format:"%H %ad %an" | sort | head -n1

Or, if you prefer a Git alias:

  coin = !git log --pretty=format:'%H %ad %an' | sort | head -n1

The SHA that results will have the most leading zeros in the repository. There may be other commits with the same number of leading zeros, but for the sake of this thought exercise, I’ll just pick the one that’s sorted first.

For those not familiar with the git log command, there are a gaggle of options. I’ll break down this specific invocation.

The --pretty=format option takes a custom format string that specifies the contents of the output. %H is the commit SHA. %ad is the commit date and %an is the author name.

We pipe that to the sort command. Since no two SHAs can be the same, we don’t have to worry about sorting on just the first column. We can just sort using the entire line as the sort key. Then we use head -n1 to pluck the first item.

It’s possible that there won’t be any commits with leading 0s, but I ignore that for now. I figure the commit with the lowest SHA sorted alphabetically fits with the spirit of the idea.
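For what it’s worth, the pick the pipeline makes is just a lexicographic minimum. A rough Python equivalent, with made-up log lines in the same `%H %ad %an` shape:

```python
# Lines shaped like `git log --pretty=format:"%H %ad %an"` output.
log_lines = [
    "9f2c41ab Thu Nov 15 2012 Alice",
    "000121e8 Thu Nov 15 2012 Agis",
    "08a2249e Fri Nov 26 2004 David",
]

# Because SHAs are unique, comparing whole lines is equivalent to
# comparing the SHA column alone.
winner = min(log_lines)
print(winner)  # the line whose SHA sorts first
```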

Since GitHub runs on Ruby on Rails, I thought it’d be fun to try it out on the Rails repository. I cloned the repository to my machine and ran git coin on it. Here’s the output (SHA truncated for presentation purposes):

000121e8 Thu Nov 15 23:10:45 2012 +0200 Agis Anastasopoulos

Congratulations Agis! You are the new maintainer of Rails!

Not so fast!

I know what some of you are thinking, “You are ridiculous. This is a waste of time.” To those I say, hold my beer because I’m not done yet.

Others who are familiar with Bitcoin’s consensus protocol are thinking, “This is not how the protocol works. It’s not about choosing the lowest sorted SHA, it’s about reaching a target number of leading zeros.” To those I say, you’re taking this too seriously!

Even so, in anticipation of all the “Well, Actually” responses I’m sure to receive, I’ll address this fair point.

With Bitcoin, the first miner to generate a SHA with the target number of leading zeros is the one to add their block to the global blockchain.

I’m hand waving a bit here for the sake of brevity. The important point is that it’s not the block with the lowest SHA. It has nothing to do with the sorting of SHAs.

Over time, the protocol compensates for the global increase in computing power by increasing the number of leading zeros in the proof of work target. That way a block is added roughly every ten minutes no matter how fast computers get and no matter how many computers are mining.

If we translate this to the gitcoin idea, we probably want to look at the first commit to reach each new target number of leading zeros.

For example, say that the current maintainer was chosen because of a commit with a SHA that has two leading zeros. The next maintainer is chosen by the commit that has three (or more) leading zeros. The next maintainer after that is chosen by the first commit with one more leading zero than the commit that chose the previous maintainer. And so on.

In other words, every time a new maintainer is chosen by this protocol, the target number of leading zeros increases by one. The implication is that, over time, each maintainer will spend longer and longer maintaining the project before being replaced. I’m not sure whether that’s a desirable trait.
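Here’s a rough Python sketch of that succession rule, with made-up (date, leading-zero-count) pairs; the starting target of one zero is my assumption:

```python
def maintainers(commits, start_target=1):
    """Walk commits in chronological order and pick each commit that
    meets the current target. After a pick, the next target is one
    more leading zero than the winning commit had."""
    target = start_target
    chosen = []
    for date, zeros in commits:
        if zeros >= target:
            chosen.append((date, zeros))
            target = zeros + 1
    return chosen

history = [("2004", 0), ("2005", 2), ("2006", 2), ("2007", 3)]
print(maintainers(history))  # [('2005', 2), ('2007', 3)]
```

The 2006 commit loses out because the 2005 pick already raised the target to three zeros.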

The shell script to find the maintainer with these rules is considerably more complex than my previous script. This is why I originally wanted to stop with that script and call it a day. Also, I’m lazy.

Not to mention, my background is primarily with Windows so my Unix-fu is fairly weak. However, my time at GitHub working with Git has helped me exercise those muscles quite a bit more than I did before. So I thought it’d be fun to give it a shot. Here’s the script I came up with:

TZ=UTC git log --pretty=format:'%H%x09%ad%x09%an' --date=iso-local | grep ^0.* | sed -E 's/(0+)(.*)/\1\t\1\2/' | sort -k1,1 -k3,3r | tail -n1 | cut -f 2,3,4

And the award goes to…

When I run this against the Rails repository, it outputs (again, SHA truncated for presentation purposes):

00050dfe	2006-04-09 21:27:32 +0000	David Heinemeier Hansson

Sorry Agis, this David Heinemeier Hansson person is now the Rails maintainer! I hope David accepts this responsibility seriously.

Uh, still not there.

If you read an earlier version of this post, you’ll note I declared DHH the maintainer of Rails. But Jean-Jacques Lafay noted in the comments to this post that I need to look at the leading zeros of the SHA when written in binary form. Whoops!

This makes a lot of sense when you think about it. Under my original implementation, every time we chose a new maintainer, we increased the difficulty of choosing the next one sixteenfold. When we look at leading zeros in binary form, the difficulty only doubles.

Fortunately, the correction to my script is pretty simple: I need to grab all the zeros (if any) plus the first non-zero character when creating the sort key. Any characters after that can’t change the number of leading binary zeros.

For example, 001a and 001b have the same number of leading zeros when expressed in binary: the 1 (0001 in binary) contributes three more zeros no matter what follows it. But 001a and 002a do not, because 2 is 0010 in binary, one leading zero fewer.
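To see why, here’s a quick Python helper that counts leading zero bits of a hex string:

```python
def leading_binary_zeros(sha_hex: str) -> int:
    """Number of leading zero bits when the hex string is read as binary."""
    total_bits = len(sha_hex) * 4  # each hex digit is four bits
    value = int(sha_hex, 16)
    if value == 0:
        return total_bits
    return total_bits - value.bit_length()

print(leading_binary_zeros("001a"))  # 11: eight bits from "00", three from "1"
print(leading_binary_zeros("001b"))  # 11: the trailing digit doesn't matter
print(leading_binary_zeros("002a"))  # 10: "2" is 0010, one zero fewer
```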

So here’s the updated script:

TZ=UTC git log --pretty=format:'%H%x09%ad%x09%an' --date=iso-local | sed -E 's/^(0*[1-9a-f])(.*)/\1\t\1\2/' | sort -k1,1 -k3,3 | head -n1 | cut -f 2,3,4

And once again, Agis is the new maintainer for Rails!

The excruciatingly detailed breakdown

Let’s break this down piece by piece for those of you who, like me, don’t eat and breathe shell scripting.

The first thing we do is set the local timezone to UTC (TZ=UTC) so we can sort by date and compare apples to apples.

git log --pretty=format:'%H%x09%ad%x09%an' --date=iso-local

Just like before, we’re running a git log command. It looks ugly, but all I’m doing here is using the tab character (%x09) in place of spaces. That’ll come in handy later. I also specify that the date format should be iso-local. This provides a date that sorts lexicographically. We’ll need that later too.

sed -E 's/^(0*[1-9a-f])(.*)/\1\t\1\2/'

Sed is a powerful command used to perform text transformations on an input stream. In this case, we’re using the s command, which performs a regex replacement. The -E flag indicates that sed should use extended regular expressions. What I’m doing here is extracting the SHA’s leading zeros, along with the first non-zero character, into a new column at the front of the output.

So if the git log command we ran earlier returned something like this (SHAs and name truncated for presentation purposes):

005371e1	2004-12-01 13:59:16 +0000	David
0daa29ec	2004-12-01 13:18:51 +0000	David
08a2249e	2004-11-26 02:16:05 +0000	David

Piping this output to this sed expression results in (name truncated for brevity):

005	005371e1	2004-12-01 13:59:16 +0000	David
0d	0daa29ec	2004-12-01 13:18:51 +0000	David
08	08a2249e	2004-11-26 02:16:05 +0000	David
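If sed isn’t your thing, the same key extraction can be sketched with Python’s re module (using the sample lines above):

```python
import re

lines = [
    "005371e1\t2004-12-01 13:59:16 +0000\tDavid",
    "0daa29ec\t2004-12-01 13:18:51 +0000\tDavid",
    "08a2249e\t2004-11-26 02:16:05 +0000\tDavid",
]

# Copy the leading zeros plus the first non-zero hex digit into a
# new tab-separated column at the front, just like the sed expression.
keyed = [re.sub(r"^(0*[1-9a-f])(.*)", r"\1\t\1\2", line) for line in lines]
for line in keyed:
    print(line)
```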

That format is pretty handy because we can sort this by the first column. This sorts commits from those with the most leading zeros to the least.

This will also group all SHAs with the same number of leading zeros together. Then we can sort by the date column to find the first commit in any such group.

sort -k1,1 -k3,3

Does exactly that. One thing that tripped me up when I first worked on this is that I thought I should be able to use sort -k1 -k3. The -k option specifies a sort key. By default, when you specify a starting column, it takes that column and all columns after it as the sort key. Thus -k1 is pretty much equivalent to not specifying a sort key at all, as it sorts by the whole line.

Fortunately, you can specify an end column for the sort key using the comma. So -k1,1 sorts just by the first column. Whereas -k1,3 would take the first three columns as a sort key.
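In Python terms, -k1,1 -k3,3 amounts to a tuple sort key over the first and third tab-separated columns (sample rows made up):

```python
rows = [
    "0d\t0daa29ec\t2004-12-01 13:18:51 +0000\tDavid",
    "005\t005371e1\t2004-12-01 13:59:16 +0000\tDavid",
    "005\t005abc00\t2003-01-01 00:00:00 +0000\tAlice",
]

# Sort by the zeros column first, then by date within each group.
ordered = sorted(rows, key=lambda r: (r.split("\t")[0], r.split("\t")[2]))
print(ordered[0])  # the oldest commit among those with the smallest key
```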

head -n1

Now that we have the proper sort in play, we just need to take the first entry. This is the oldest commit with the most leading zeros.

cut -f 2,3,4

And finally, we don’t need the leading zeros column in the final output so I run the cut command and only keep columns 2, 3, and 4. This is where inserting the tabs before comes in handy. By default, cut uses the tab character as a delimiter.

leadership management comments edit

In Endless Immensity of the Sea I wrote about a leadership style that encourages intrinsic motivation. Many people I talk to don’t work in such an environment. Even those who work in places that promote the ideals of autonomy and intrinsic motivation often find that over time, things change for the worse. Why does this happen?

I believe it’s the result of management entropy. Over time, if an organization doesn’t actively work to fight it, their leaders start to lose touch with what really motivates people.

Theory X and Theory Y are two theories of human motivation and management devised by Douglas McGregor that serve to explain how managers view human motivation.

Theory X is an authoritarian style where the emphasis is on “productivity, on the concept of a fair day’s work, on the evils of feather-bedding and restriction of output, on rewards for performance … [it] reflects an underlying belief that management must counteract an inherent human tendency to avoid work”


Theory Y is a participative style of management which “assumes that people will exercise self-direction and self-control in the achievement of organisational objectives to the degree that they are committed to those objectives”. It is management’s main task in such a system to maximise that commitment.

There’s also a Theory Z style of management that came later.

One of the most important pieces of this theory is that management must have a high degree of confidence in its workers in order for this type of participative management to work. This theory assumes that workers will be participating in the decisions of the company to a great degree.

It’s pretty clear that in the tech industry, most companies aspire to have a management style that encourages intrinsic motivation and personal autonomy. As Dan Pink notes, there’s a lot of evidence that it’s more motivating and effective for the type of creative work we do than Theory X.

However, I have a theory that despite all this evidence and aspirations to be Theory Y or Z, many managers in the tech industry are really closet Theory X practitioners.

In many cases, it may not even be a conscious choice. Or, perhaps they didn’t start that way, but over time they drift. One scenario that could cause such a drift is when a company encounters a series of setbacks.

A good leader looks hard at the culture and system put in place and how they contribute to the setbacks. A good leader makes it a priority to improve those things. A bad leader blames individuals. This blame feeds into the Theory X narrative and causes leaders to lose trust in their people.

In a following post, I hope to cover some typical myths and incorrect beliefs that managers have that also contribute to managers drifting to the dark side of Theory X.

leadership management comments edit

There’s this quote about leadership that resonates with me.

If you want to build a ship, don’t drum up people together to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.

Most attribute it to the French author Antoine de Saint-Exupéry, but it’s doubtful that he wrote these exact words. For one, he’s French, so the words he wrote probably had a lot of àccênts and “le”, “la”, and “et” words in them.

This English quote appears to be one of the rare cases where a paraphrase has more impact than the original. None of that diminishes the power of the quote.

Obligatory image of the sea

The quote encourages leaders to cultivate intrinsic motivation as a means of leading people rather than an approach built on authority and command. Surprisingly, Cartman, with his incessant requests to respect his authority, is not an exemplar of good leadership.

If you question the value of intrinsic motivation, take a moment to watch this Dan Pink video. I’ve referenced it in the past, and I’ll keep referencing it until every single one of you (or perhaps more than one of you) watches it!

It’s easy to read this quote as praise of leadership and, by contrast, a rejection of management. As if management must, by necessity, be built on command and control. But I reject that line of thinking. Management and leadership address different needs and can be complementary.

To me, the quote contrasts leadership with a particular style of management built on hierarchy and control. This is a style that is antithetical to both building ships and shipping software.

The Valve handbook covers this well.

Hierarchy is great for maintaining predictability and repeatability. It simplifies planning and makes it easier to control a large group of people from the top down, which is why military organizations rely on it so heavily.

But when you’re an entertainment company that’s spent the last decade going out of its way to recruit the most intelligent, innovative, talented people on Earth, telling them to sit at a desk and do what they’re told obliterates 99 percent of their value. We want innovators, and that means maintaining an environment where they’ll flourish.

I anticipate some commenters will point out that, in practice, Valve might not live up to this ideal. I don’t know anything about the inner workings of Valve. I do know that with any human endeavor, there will be failures and successes. And they won’t be distributed evenly, even within a single company. Perhaps they do not live up to these ideals, but that doesn’t change the value of the ideals themselves.

The Valve handbook addresses entertainment companies, but the ideas apply to any company where the nature of the work is creative and intellectual in nature. Or put another way, it applies to any environment where you want your workers to be creative and intellectual.

Even the handbook makes the mistake of mischaracterizing the nature of the work our military does. It assumes that the military gets the best results when folks just do what they’re told.

Leaders such as David Marquet, a former nuclear submarine commander, challenge this idea. He notes that when he stopped giving orders, his crew performed better.

This is not a polemic against managers or management. Rather, this is an encouragement for a style of management that fosters intrinsic motivation.

It’s not easy. There are a lot of factors that hinder attempts at this style of leadership. All too often companies conflate hierarchy with structure and management with leadership. It’s important to separate and understand these concepts and how to apply them. Especially when you’re a small company reaching the point where you feel the need for more structure and management.

In a follow-up post, I’ll write more about some of these points. I plan to cover what I mean when I say that leadership and management are complementary. I’ll also cover what it means to conflate all these distinct concepts.

In the meantime, as you build your next ship, I encourage you to focus on the longing that leads you to build it. What is the endless immensity of the sea in your work?

personal blogging comments edit

I started my first blog sometime in the year 2000. You can still see pieces of it in the Internet Archive’s Wayback Machine.

My first blog

You have to love this part…

IE 4 Only yo!

Ah, the bad old days of the internet.

Back then I could probably count the number of folks who read my blog with the fingers of one hand. Perhaps not even counting the thumb. It was just an outlet for me to share inside jokes with other friends who had their own blogs.

I started this before I knew what a weblog or blog was. I wrote this with a bespoke artisanal classic ASP (Active Server Pages without the “.NET” part. We lived like savages back then.) site I built. It was terrible. No database. Just me writing HTML for every post. I let that blog die due to neglect and didn’t start blogging again until around 2004.

The new blog ran on Subtext, an open source ASP.NET blog engine I ported from an older .TEXT platform. It was a real labor of love. Four years ago, I switched to hosting my blog on GitHub Pages with Jekyll.

The point of this stroll down memory lane is to say that I’ve always felt it was important to host my blog on something under my control with my own domain name. My blog has always been primarily an outlet for me.

When I first started, my blog was more of an online diary. I’d write about my day, movie reviews, etc. When I restarted my blog, I tended to write more technical pieces in the hopes of helping others out.

My friends who weren’t programmers would ask what language my blog was written in. It was all gibberish to them. However, it was important to me that my blog represented the full me. One day I might write about playing soccer against Vinnie Jones or with Agent Coulson. On another day I might write about parenting. And yet another day I might write about auditing ASP.NET MVC actions.

The point is, I wrote what I wanted to write and didn’t worry too much about what others wanted to read.

But there are consequences. After a million posts about the intricacies of Git aliases, my friends who aren’t techies inevitably got bored. And I have to say, I missed their involvement with my writing. I enjoyed the interactions and feedback that came of it, and I was sad that they were excluded from the blogging community I had become a part of.

Enter Medium

When Medium first came on the scene, I ignored it. I’ve ignored it for a long while.

But not too long ago, my wife started a Medium blog. I may be biased, but I think she’s a beautiful writer who writes beautifully. And that got me more interested in the platform.

That led me to learn that if you import a blog post into Medium, it sets the original post as the canonical source via a link tag. Here’s an example of the link tag for a post I imported into Medium from my blog. This ensures that search engines aren’t confused by multiple copies of the content and treat your original blog as the ultimate authority.

<link rel="canonical"

This alleviates my concerns about being in control of my blog. The canonical source is still my blog, which lives in a Git repository hosted on GitHub and cloned to my machine. If Medium and GitHub were to go down, I’d be sad and unemployed, but I’d have the free time available to move my blog to another host and keep it up at my own domain.

Importing into Medium is quick and easy. Visit Medium’s import page and paste in the URL of the post you want to import. That’s it!

It plucks the contents of my post without all the extra navigation and header/footer material, like magic.

So now, I’m experimenting with Medium as my blog for my non-programmer friends. When I write something that isn’t deeply technical, I’ll cross-post it to Medium. But my Git posts I’ll keep here only.

I’ll revisit this idea down the road to see if it works for me. I’m curious to hear your thoughts in the comments.

personal comments edit

Twin Falls lies about a forty-minute drive east of Bellevue, Washington. From the trailhead, the path leads to views of three separate waterfalls. Yes, three. “Twin Falls” has a nicer ring to it than “Triplet Falls.”

Image of the river

Focus too much on the hike to the falls and you might miss the side trails down to the Snoqualmie River. The river is cold (or “refreshing” as they say around here) and full of boulders big and small. If you’re careful, you can hop from boulder to boulder to reach an island that splits the river. Or you can sit back and watch others attempt it and fall in. That’s always good for a chuckle.

On a recent trip, I took my kids and their friends to this island. There’s a trio of elephant-sized boulders in the middle of the river off the tip of the island. Reaching them requires a bit of foolhardiness and a compulsion to wade into the water and fight a strong current. I am just such a foolhardy compulsive, so I ventured into the water to climb a boulder with a flat, but angled, top. The vantage point gave me a fine view of the river valley and the kids skipping stones down below.

As I sat there, I contemplated a random thought. Scenes like this are often used to set up a blog post (or “Medium piece” if you’re fancy). An author takes a story from their life, or a historical anecdote, and uses it to start a post. The story seems unrelated at first. What’s the point?

This is before the author employs some rhetorical wizardry and by the end of the piece, an important life lesson is revealed! Like a fine rug, the story ties it all together. It’s a pattern so common I consider it the calling card of the Medium post.

The thought struck me then, would this very moment serve such a purpose? Would a major life lesson reveal itself to me right now? Something I could leverage as social media fodder for the consumption of others. I pondered. And pondered. Nothing.

No life hack or societal lesson or philosophical truth revealed itself to me. Nothing I could sell for increased follower counts or “likes” or ad revenue invaded my thoughts.

No, I would have to come to grips with the fact that I had nothing at all to learn from this moment. Nothing to share. There, with the sun shining overhead, the river flowing around me, a breeze on my face, I would have to enjoy it for what it is. A moment. My moment. And just be present.

At least until I reached my computer.

aspnetmvc security comments edit

Phil Haack is writing a blog post about ASP.NET MVC? What is this, 2011?

No, do not adjust your calendars. I am indeed writing about ASP.NET MVC in 2017.

It’s been a long time since I’ve had to write C# to put food on the table. My day job these days consists of asking people to put cover sheets on TPS reports. And only one of my teams even uses C# anymore, the rest moving to JavaScript and Electron. On top of that, I’m currently on an eight week leave (more on that another day).

But I’m not completely disconnected from ASP.NET MVC and C#. Every year I spend a little time on a side project I built for a friend. He uses the site to manage and run a yearly soccer tournament.

Every year, it’s the same rigmarole. It starts with updating all of the NuGet packages. Then fixing all the breaking changes from the update. Only then do I actually add any new features. At the moment, the project is on ASP.NET MVC 5.2.3.

I’m not ready to share the full code for that project, but I plan to share some interesting pieces of it. The first piece is a little something I wrote to help make sure I secure controller actions.

The Problem

You care about your users. If not, at least pretend to do so. With that in mind, you want to protect them from potential Cross Site Request Forgery attacks. ASP.NET MVC includes helpers for this purpose, but it’s up to you to apply them.

By way of review, there are two steps to this. The first step is to update the view and add the anti-forgery hidden input to your HTML form via the Html.AntiForgeryToken() method. The second step is to validate that token in the action that receives the form post. Do this by decorating that action method with the [ValidateAntiForgeryToken] attribute.

You also care about your data. If you have actions that modify that data, you may want to ensure that the user is authorized to make that change via the [Authorize] attribute.

This is a lot to track, especially if you’re in a hurry to build out a site. On this project, I noticed I had forgotten to apply some of these attributes where they belonged. After I fixed the few places I happened to notice, I wondered: what places did I miss?

It would be tedious to check every action by hand. So I automated it. I wrote a simple controller action that reflects over every controller action. It then displays all the actions that might need one of these attributes.

Here’s a screenshot of it in action.

Screenshot of Site Checker in action

There are a few important things to note.

Which actions are checked?

The checker looks for all actions that might modify an HTTP resource. In other words, any action that responds to the following HTTP verbs: POST, PUT, PATCH, DELETE. In code, these correspond to action methods decorated with the [HttpPost], [HttpPut], [HttpPatch], and [HttpDelete] attributes respectively. The presence of these attributes is a good indicator that the action method might modify data. Action methods that respond to GET requests should never modify data.
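The reflection trick translates to other stacks too. Here’s a minimal Python analogue, where the decorator names (http_post, validate_anti_forgery) are made-up stand-ins for the ASP.NET MVC attributes:

```python
import inspect

# Hypothetical marker decorators standing in for [HttpPost] and
# [ValidateAntiForgeryToken].
def http_post(fn):
    fn.http_post = True
    return fn

def validate_anti_forgery(fn):
    fn.validates_token = True
    return fn

class AccountController:
    @http_post
    @validate_anti_forgery
    def change_password(self):
        ...

    @http_post
    def delete_account(self):  # flagged: modifies data, token not validated
        ...

def unprotected_actions(controller):
    """Reflect over the controller and list POST actions that lack
    the anti-forgery marker."""
    return [
        name
        for name, fn in inspect.getmembers(controller, inspect.isfunction)
        if getattr(fn, "http_post", False) and not getattr(fn, "validates_token", False)
    ]

print(unprotected_actions(AccountController))
```

The idea is identical: the framework already knows which methods handle which verbs, so a checker only has to enumerate them and compare attributes.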

Do all these need to be secured?


Not necessarily. For example, it wouldn’t make sense to decorate your LogOn action with [Authorize], as that violates causality. You don’t want to require users to be already authenticated before they log in to your site. That’s just silly sauce.

There’s no way for the checker to understand the semantics of your action method code to determine whether an action should be authorized or not. So it just lists everything it finds. It’s up to you to figure out if there’s any action (no pun intended) required on your part.

How do I deploy it?

All you have to do is copy and paste this SystemController.cs file into your ASP.NET MVC project. It just makes it easier to compile this into the same assembly where your controller actions exist.

Next, make sure there’s a route that’ll hit the Index action of the SystemController. If the default route from the ASP.NET MVC project templates is present, you can visit it at /system/index.

If you accidentally deploy SystemController, rest assured that it only responds to local requests (requests from the hosting server itself), not to public ones. You really don’t want to expose this information to the public. That would be an open invitation to be hacked. You may like being Haacked, but it’s no fun to be hacked.

And that’s it.

How’s it work?

I kept all the code in a single file, so it’s a bit ugly, but should be easy to follow.

The key part of the code is how I obtain all the controllers.

var assembly = Assembly.GetExecutingAssembly();

var controllers = assembly.GetTypes()
    .Where(type => typeof(Controller).IsAssignableFrom(type)) //filter controllers
    .Select(type => new ReflectedControllerDescriptor(type));

The first part looks for all types in the currently executing assembly. But notice that I wrap each type with a ReflectedControllerDescriptor. That type contains the useful GetCanonicalActions() method to retrieve all the actions.

It would have been possible for me to get all the action methods without using GetCanonicalActions by calling type.GetMethods(...) and filtering the methods myself. But GetCanonicalActions is a much better approach since it encapsulates the same logic ASP.NET MVC uses to locate actions.

As such, it handles cases such as when an action method is named differently from the underlying class method via the [ActionName("SomeOtherMethod")] attribute.

What’s Next?

There are so many improvements we could make (notice how I’m using “we” in a bald attempt to pull you into this?). For example, the code only looks at the HTTP* attributes. But to be completely correct, it should also check the [AcceptVerbs] attribute. I didn’t bother because I never use that attribute, but maybe you have some legacy code that does.

Also, there might be other things you want to check. For example, what about mass assignment attacks? I didn’t bother because I tend to use input models for my action methods. But if you use the [Bind] attribute, you might want this checker to look for issues there.

“Well, that’s great,” you might say. I don’t plan to spend a lot of time on this, but I’d be happy to accept your contributions! The source is on GitHub.

Let me know if this is useful to you or if you use something better.

github comments edit

One of my goals at GitHub is to make GitHub more approachable to developers. If you use GitHub, I want you to have tools that complement the way you work and help you to be more effective. In some cases that’s integrating directly in your Editor or IDE of choice. In other cases, it’s offering tools that work side-by-side with your existing tools.

Today, we took one step towards that goal with two major releases: Git and GitHub integration in Atom, and the new Desktop Beta rebuilt on Electron.

For Desktop, our plan is to eventually replace the existing platform-specific clients with the new Electron based client. For now, you can run them side-by-side.

If you’re interested in more details about our efforts, we wrote a pair of posts in the GitHub Engineering blog.

And before I forget, all of this is open source.

I hope you get involved!

If you are attending GitHub Satellite next week (May 22-23) in London, I’ll be giving a talk that demonstrates some of the great work my teams are doing and how that fits into this grand vision. See you there!

github dotnet aspdotnet microsoft comments edit

Yesterday was the 15th anniversary of .NET’s debut to the world. And Visual Studio was first released twenty years ago! In a recent episode of On .NET, I went to the Channel 9 studios to talk a bit about the history of .NET, my work at GitHub, and challenges to .NET’s future success among other random diversions.

I hope you enjoy the interview!

On a personal note, I’ve found it hard to blog lately because every topic seems so trivial in light of what’s happening in our country. It’s easy to feel helpless and despair. If I could humbly recommend one thing that can help give you some semblance of power back, it’s to call your representatives. It’s more impactful than partaking in internet surveys, and way more useful than debating people on Facebook. I’ve been using a service that walks me through what to do. It only takes around five minutes of your time (I have a weekly appointment on my calendar).

If you want to understand why this is effective, read up on the motivations of your local representatives.

For my part, I will continue to RESIST. But I do think that continuing to live my life and write about topics that interest me can be part of that. So I hope to get back to writing about software and software leadership more. Stay tuned!

git comments edit

Happy New Year! I hope you make the most of this year. To help you out, I have a tiny little Git alias that might save you a few seconds here and there.

When I’m working with Git on the command line, I often want to navigate to the repository on GitHub. So I open my browser and type in the URL like a Neanderthal. Yes, a little known fact about Neanderthals is that they were such hipsters they were using browsers before computers were even invented. Look it up.

But I digress. Typing in all those characters is a lot of work and I’m lazy and I like to automate all the things. So I wrote the following Git alias.

  open = "!f() { REPO_URL=$(git config remote.origin.url); explorer ${REPO_URL%%.git}; }; f"
  browse = !git open

So when I’m in a repository directory on the command line, I can just type git open and it’ll launch my default browser to the URL specified by the remote origin. In my case, this is typically a GitHub repository, but this’ll work for other hosts.

The second line in that snippet is an alias for the alias. I wrote that because I just know I’m going to forget one day and type git browse instead of git open. So future me, you’re welcome.

This alias makes a couple of assumptions.

  1. You’re running Windows
  2. You use https for your remote origin.

In the first case, if you’re running a Mac, you probably want to use open instead of explorer. For Linux, I have no idea, but I assume the same will work.

In the second case, if you’re not using https, I can’t help you. You might try this approach instead.

Update 2017-05-09 I updated the alias to truncate .git at the end.
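That truncation is the `${REPO_URL%%.git}` bit of the alias: shell parameter expansion that strips a trailing `.git` from the remote URL before handing it to the browser. Here's a rough JavaScript equivalent of what that expansion computes (the example URL is just an illustration):

```javascript
// Mirrors ${REPO_URL%%.git}: drop a trailing ".git" so the browser
// opens the plain repository URL instead of the clone URL.
function stripGitSuffix(url) {
  return url.endsWith(".git") ? url.slice(0, -".git".length) : url;
}

console.log(stripGitSuffix("https://github.com/haacked/encourage.git"));
// -> https://github.com/haacked/encourage
```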

social coding management comments edit

On Tuesday, November 8, 2016 I’ll be giving a talk entitled “Social Coding for Effective Teams and Products” at QCon SF as part of the “Soft Skills” track. If you happen to be in San Francisco at that time, come check it out.

In anticipation of this talk, I recorded a podcast for InfoQ where I pointed out the irony of using the term “soft skills” to describe the track as these are often the most challenging skills we deal with day to day. They are indeed the hard skills of being a software developer.

In the podcast, we also cover what it was like in the early days of ASP.NET MVC as we went from closed source to open source and how far Microsoft has come since then in the open source space.

Afterwards, we talked a bit about Atom and Electron and the community around those products. And to finish the podcast, we gabbed about my transition into management at GitHub, which is something I wrote about recently.

So if you don’t mind hearing my nasally voice, take a listen and let me know what you thought here.

github csharp dotnet scientist comments edit

In the beginning of the year I announced a .NET Port of GitHub’s Scientist library. Since then I and several contributors from the community (kudos to them all!) have been hard at work getting this library to 1.0 status. Ok, maybe not that hard considering how long it’s taken. This has been a side project labor of love for me and the others.

Today I released an official 1.0 version of Scientist.NET with a snazzy new logo from the GitHub creative team. It’s feature complete and used in production by some of the contributors.

Scientist logo with two test tubes slightly unbalanced

You can install it via NuGet.

Install-Package scientist

I transferred the repository to the github organization to make it all official and not just some side-project of mine. So if you want to get involved by logging issues, contributing code, whatever, it’s now located at

You’ll note that the actual package version is 1.0.1 and not 1.0.0. Why did I increment the patch version for the very first release? A while back I made a mistake and uploaded an early pre-release as 1.0.0 by accident. And NuGet doesn’t let you overwrite an existing version. Whose fault is that? Well, partly mine. When we first built NuGet, we didn’t want people to be able to replace a known good package to help ensure repeatable builds. So while this decision bit me in the butt, I still stand by that decision.


management github hiring comments edit

I’m coming on five years at GitHub (in December) and I thought I’d write a bit about what I’ve been up to lately and the fact that several of my teams are hiring. Five years passes by so quickly, right? I still get emails for feature requests on ASP.NET MVC. I always reply that the team would be happy to implement all of the suggestions and to just check the repository in a week’s time. I’m sure the team loves me for that.

If you don’t give a rat’s ass about what I’m up to, but are interested in our open positions, feel free to skip to the job postings at the bottom. By the way, even if you do give a rat’s ass, please keep it to yourself. What I’ve been up to does not include collecting rodent derrières.

I still don’t know why that’s a phrase we use, but I’m sure Mark Twain is involved…that rapscallion. But as usual, I digress.

What inspires me

When I think about the work we do at GitHub, the Story of Anna comes to mind. Building software is a great creator of opportunities for those from all walks of life. I get a kick out of writing software for people like Anna, or my friend Noah, or NASA and many others who are using it to build great things.

In a recent Octotale video, Desert Horse-Grant, the Director of Strategic Planning and Operations at Fred Hutch Cancer Research Center noted that “cancer will be solved on a computer.” At GitHub, we’re not solving cancer, but I like to think we build the tools for those who will. And that’s what gets me inspired every day.

My new position

Several months ago, I took a new position as an Engineering Director at GitHub. A lot of people don’t realize that GitHub now has managers, much less directors. When I started, we had around fifty employees and a flat corporate structure. Two years ago, we introduced management.

Several months ago, we introduced directors, a position that’s also new to me. What this means is that I now manage managers. I guess this is what happens to people who like to blog about blogging, they end up managing managers. I enjoy the meta.

At GitHub, engineering managers are very hands on technically. They are technical leaders who help coach teams to greater success. Kind of like Pete Rose who was a player-manager when he broke the all-time hit record set by the irascible Ty Cobb.

Directors, on the other hand, tend to focus more on people and management issues. We’re much less hands on technically, though I try to keep my hands dirty with code here and there. Instead, we try to focus on what will equip the managers and their teams to be more successful. How can I help my managers be better? What systems can I put in place so the people they manage work well together and grow in their careers and as teams? Sometimes I make mistakes, but I try hard to learn from them and then incorporate that learning into the systems and culture at work so they’re less likely to happen again.

When I do spend time on technical work, it is focused on strategic and big picture issues. Every engineer should be thinking this way, but I have the “benefit” of not having a primary responsibility to write production code which means I can dedicate more time to this sort of work. And note that we’re constantly iterating on how we work so this is how I see things today, but it’s always open to improvement tomorrow.

The four teams that I work with are Atom, Electron, Desktop, and Editor Tools (the team responsible for the GitHub extension for Visual Studio). I am incredibly lucky to get to work with such a talented group of people. I’ve been really stretched in a technical sense as these teams use a wide variety of technology.

Open positions on my teams

So that leads me to the topic at hand. Several of these teams are hiring. Here are the job postings.

If building tools for this and the next generation of developers inspires you, take a look. We’re looking for software engineers who thrive as part of a team in a supportive environment. The New York Times recently published an article about what Google learned in its quest to build a perfect team. The lessons they learned about what people think makes a great team and what actually works are very interesting. We want to be a place that embodies that sort of team.

As our Jobs page mentions, we’re focused on building a diverse and inclusive workplace. We have a nice benefits package that includes a generous parental leave policy. We have flexible work schedules and a generous vacation policy.

I believe the reason we provide all this is because we’re focused on building a sustainable environment for people to do great work. We don’t want to bring a person in just to wring out as much code as possible from them because people bring so much more than just the code they can write to the table.

If that all sounds appealing to you, click on the big blue “Apply for a Job” button in those job postings.

management personal comments edit

Last week my family and I went on a cruise to Alaska with four other families and we didn’t die. Not that we should expect to die on a cruise, but being confined with a bunch of kids on a giant hunk of steel has a way of making one consider one’s mortality.

Cruise ship parking lot

Not only did we not die, but I learned a thing or two. For example, it’s common knowledge that the constant wave like motion of a ship can make one queasy. I learned that I could counteract that effect. Drink just the right amount of alcohol and its effect cancels out the queasiness in a process called phase cancellation. Look it up, it’s SCIENCE.


We went on a Holland America cruise to Alaska in part because a family friend is a Senior VP at the cruise line and they convinced us it’d be a good idea. The cruise tends to cater to an older crowd than something like Disney Cruises. Even so, it worked pretty well for us. It meant that the pool was never too crowded.

I used to live in Anchorage, Alaska. This ensured I was ready with the puns for our first port, Juneau.

Me: Where’s our first stop?
Friend: Juneau Alaska.
Me: Yes, I know Alaska. But what city?
Friend: Juneau.
Me: If I knew, I wouldn’t ask.

This was when I wisely ducked away.

But since you like puns, here’s a couple of other Alaska related puns as told by my coworker, Kerry Miller:

Hey pal, Alaska the questions here
I really do appreciate the way Alaska survived the 2008 financial crisis. Their secret? Fairbanks.

Lesson here, puns are awesome.


Back to the cruise. Our friend arranged a couple of behind-the-scenes tours. One was below deck, where we got to see the galley where all the food is made and the storage facilities. I was particularly excited to tour the room where they stock all the liquor.

The logistics of stocking a ship of two thousand passengers and one thousand crew is mind boggling. They take a very data driven approach tracking every meal ordered so they can predict what supplies they need given the specific trip, time of year, and audience.

One thing we noticed while touring the storage was that they stocked expensive premium sticky rice for the crew that was different from the rice they usually served to customers. We noticed this because we’re Asian and good rice is important.

It turns out that the crew is predominantly Filipino and Indonesian and our friend noted that if they tried to cut costs with cheaper rice, they’d face a revolt. They know this because they’ve seen how much of a hit to morale cheaper rice was on other cruise lines. He fought hard to keep the quality rice because it’s important to keep the crew’s morale high. Not just with rice, but also by enlisting and empowering the crew itself to notice when conditions could be better and to do something about it.

Lesson here, foster a culture where people are empowered to find and fix problems rather than always looking to you to fix it and things actually will improve.

And we noticed the impact of high morale. We were really impressed with the quality of service. The crew always seemed genuinely happy and friendly. Perhaps it’s years of practice in the service industry, but I’ve been to nice hotels where everyone is nice, but you get the sense they don’t really care about you. I really got the sense the crew cared.

So the lesson here is to stock the good rice. Happy people do better work in every way.

Alaska Raptor Center

Another port we stopped in was beautiful Sitka. We took a tour of the Alaska Raptor Center where they rehabilitate injured raptors such as eagles and owls and release them into the wild when they’re strong enough fliers to be on their own.

Our second tour was of the bridge where the Dutch captain showed us the navigation systems and the controller for the ship. The view from the bridge was quite spectacular. We asked the captain whether he’d been on any trips where anyone fell overboard. No, but there was one trip where a very drunk passenger dropped anchor while they were out to sea. At the next port, the passenger tried to sneak off but they had the authorities waiting and they had camera footage of the incident.

Lesson here, phase cancellation only works when the wavelengths are equivalent amplitude. In other words, don’t overdo the drinking.

The ship had a place for kids called “Club Hal” where you could drop kids off for a few hours at a time and go enjoy some Pina Coladas (to help with motion sickness of course). They had a lot of structured activities and a few X-boxes set up. Naturally, since this was convenient for us, my kids hated it. Over time, they warmed up to it a little as the kids at Club Hal held a revolt and demanded more kids choice activities and got their way.

Lesson here, it’s important to balance a bit of structure with letting kids choose what they want to do.

Now I’m back home and back to work and after a few days, the ground has stopped moving, so all in all, a successful trip. We didn’t have internet access for most of the time and I think that was a huge factor in me feeling refreshed by the end. I definitely recommend when you take vacation, fully disconnect from work and even the internet. It’ll do you a lot of good and the tire fire on Twitter will still be there when you get back.

Lesson here, take a vacation now and then, eh?

nuget comments edit

The tagline for the Atom text editor is “A hackable text editor for the 21st Century”. As a Haack, this is a goal I can get behind.

It accomplishes this hackability by building on Electron, a platform for building cross-platform desktop applications with web technology (HTML, CSS, and JavaScript). The ability to leverage these skills in order to extend your text editor is really powerful.

I thought I’d put this to the test by building a simple extension for Atom. I decided to port the Encourage extension for Visual Studio I wrote a while back. For a lot of developers, this image rings true every day.

How to program

Who needs that negativity?! The Encourage extension for Atom displays a small bit of encouragement (“Way to go!”, “You rock!”, “People like you!”) every time you save your document. Maybe it’s true that nobody loves you, but your editor will, if you let it.

Encourage screenshot

Writing the extension

The Atom Flight Manual has a great guide to creating and publishing an Atom package. The guide walks through using an Atom package that generates a simple package you can use as a starting point for your own package.

One tricky aspect, though, is that the documentation still assumes the generated package is written in CoffeeScript. But all new Atom development (including the actual generated package) uses the latest version of JavaScript, ES6 (or ES2015, depending on who you ask).

I won’t go into every detail about the package. You can see the code on GitHub yourself. I’ll just highlight a few gotchas I encountered.

By default, the “Generate Package” command creates a package that is activated via a command. Until you invoke the command, the package isn’t activated. This confused me for a while because I wanted my package to be active when Atom starts up since it passively listens for the onDidSave event.

The trick here is to simply remove the activationCommands section from the package.json file.

"activationCommands": {
  "atom-workspace": "my-package:toggle"
}

Then, the activation happens when the package is loaded. Many thanks to @binarymuse for that tip!

When you make changes to your extension, you can reload Atom by invoking CTRL + ALT + R. That’ll save you from closing and reopening Atom all the time.

You can invoke the Developer Tools with the CTRL + ALT + I shortcut (similar to CTRL + SHIFT + I for Google Chrome). That’ll allow you to step through the package code with the debugger.

Be sure to check out the Atom API documentation for details about the extensibility points provided by Atom. One of the challenges with Atom is that there are so many different ways to extend it that it’s hard to know what the best approach is. Over time, I hope we start to gather these best practices.

For example, my package abuses the Panel class slightly by hacking the DOM element created to render the Panel. Panels tend to be a bar that’s docked to the top, bottom, or side of the editor pane. The current API doesn’t support resizing or fading out the Panel. I ended up using a mix of CSS and JavaScript to bend the Panel to my will and create the effect you get when you use this extension.

Maybe there’s a better way, but I love that I had the ability to get this to work. I’ll iterate on the package over time and make it better.

Building and Testing the extension

By default, the extension comes with a few specs. You can run the specs by invoking CTRL + ALT + P. I set up continuous integration (CI) for the package with AppVeyor by following these helpful instructions. I had continuous integration up and running in a matter of minutes.

Publishing the extension
Publishing an Atom package is super easy. Push your code to a public GitHub repository and then from the repository directory call apm publish patch|minor|major depending on the type of change. The flight manual I mentioned has details on this command.

What’s Next?

I don’t plan on investing a huge amount of time in this extension. It was more an exercise for me to learn about the Atom packaging system. If you’re interested in helping out, I’ve already started logging issues such as being able to set the list of encouragements. I’d welcome the help!

For example, I want to add the ability for those who use the package to set up their own encouragements. Or perhaps, discouragements. I actually find it really funny when my editor shits on my code. In fact, it causes me to think harder about my code because I want to prove it wrong. I should probably stop with all this editor anthropomorphism, huh? Tell me what you think in the comments.

UPDATE: A couple days after publishing this package, Nathan Armstrong (aka armstnp on GitHub) sent me a pull request that implemented the ability to configure the list of encouragements via the Package Settings (Thanks!). This has been published in Encourage v0.2.0. To set this, go to the Settings view, select the Packages tab, and find “encourage” under the Installed Packages section. Then you can click the Settings button for the package and update the comma-separated list of encouragements.

UPDATE 2: There’s a port for VS Code users now!

nuget comments edit

Yesterday, the NuGet team announced that NuGet reached one billion package downloads!

With apologies to everybody for dredging up this tired old meme.

It’s exciting to see NuGet still going strong. As part of the original team that created NuGet, we always had high hopes for its future but were also cognizant of all the things that could go wrong. So seeing hope turn into reality is a great feeling. At the same time, there is still so much more to do. One billion is just a number, albeit a significant and praiseworthy one.

I love that the post calls out the original name for NuGet, aka NuPack. I loved the original name, and there were a lot of upset feelings about the change at the time, but the experience taught me an important lesson about naming. Not only is it hard, there will always be a lot of people who will immediately hate whatever name you choose. It takes time for people to adjust to any name. Unless the name is truly terrible like Qwikster. What was that about?

At the time, every name we chose felt wrong, but over time, the name and the identity of the product start to mesh together and now, I can’t imagine any other name other than NuGet. Except for HaackGet. I would have totally been all over that.

Just recently I cleaned out some long neglected DropBox folders and found an old PowerPoint presentation about a design change to the license acceptance flow for packages. Yes, license acceptance sounds like boring stuff but it’s the only remnant I have from the design process back in the day and it’s more interesting than you think if you’re a licensing nerd like me.

We had a goal to make installing packages as frictionless as possible. To that end, we didn’t make license acceptance explicit, but instead we noted that by installing the package, you accept its license and we told you where to find the license. It was a more implicit license acceptance flow that we felt was unintrusive and would serve most package authors fine.

However, this didn’t work for everybody. We had to deal with the reality that some package authors (especially large corporations such as Microsoft) required explicit acceptance of the license before they could install it. So we made this an opt-in feature for package authors which represented itself like so in the GUI.

NuPack License Acceptance Flow Mockup

As you can see, I used Balsamiq to mock up the UI. I used Balsamiq a lot back then to play around with UI mockups. This mockup is from the time when the project was still called NuPack. It’s a fun (to me at least) bit of history.

These days I’m not as involved with NuGet as I used to be, but I have no shortage of opinions on what I hope to see in its future. I may not be contributing to NuGet directly anymore, but I’m still a NuGet package user and author. All of my useful repositories have corresponding packages on

code review github code comments edit

As an open source maintainer, it’s important to recognize and show appreciation for contributions, especially external contributions.

We’ve known for a while that after a person’s basic needs are met, money is a poor motivator and does not lead to better work. This seems especially true for open source projects. Often, people are motivated by other intrinsic factors such as the recognition and admiration of their peers, the satisfaction of building something that lasts, or because they need the feature. In the workplace, good managers understand that acknowledging good work is as important if not more so than providing monetary rewards.

This is why it’s so important to thank contributors for their contributions to your projects, big and small.

Seems obvious, but I was reminded of this when I read this blog post by Hugh Bellamy about his experiences contributing to the .NET CoreFX repository. In the post, he describes both his positive and negative experiences. Here’s one of his negative experiences.

In the hustle and bustle of working at Microsoft, many of my PRs (of all sizes) are merged with only a “LGTM” once the CI passes. This can lead to a feeling of lack of recognition of the work you spent time on.

Immo Landwerth, a program manager on the .NET team, gracefully responds on Twitter in a series of Tweets

.@bellamy_hugh Thanks for the valid criticism and the points raised. We’ve started to work so closely with many contributors that team…

@bellamy_hugh …members treat virtually all PRs as if coming from Microsofties. This results reduction to essence, LGTM, and micro speak.

.@bellamy_hugh Quite fair to say that we should improve in this regard!

What I found interesting though was the part where they treat PRs as if they came from fellow employees. That’s very admirable! But it did make me wonder, “WHA?! You don’t thank each other!” ;)

To be clear, I have a lot of admiration for Immo and the CoreFX team. They’ve been responsive to my own issues in the past and I think overall they’re doing a great job of managing open source on GitHub. In fact, a tremendous job! (Side note, Hey Immo! Would love to see a new Open Source Update)

This is one of those easy things to forget. In fact, I forgot to call it out in my own blog post about conducting effective code reviews. Recognition makes contributors feel appreciated. And often, all it takes is something small. It doesn’t require a ceremony.

GitHub Selfie to the rescue

However, if you want to add a little bit of ceremony, I recommend the third party GitHub Selfie Extension which is available in the Chrome Web Store as well as for Firefox.

One important thing to note is that this extension does a bit of HTML screen scraping to inject itself into the web page, so when GitHub changes its layout, it can sometimes be broken until the author updates it. The extension is not officially associated with GitHub.

I’ve tweeted about it before, but realized I never blogged about it. The extension adds a selfie button to Pull Requests that lets you take a static selfie or an animated GIF. My general rule of thumb is to try to post an animated selfie for first-time contributions and major contributions. In other cases, such as when I’m reviewing code on my phone, I’ll just post an emoji or animated GIF along with a simple thank you.

Here’s an example from the haacked/ repository.

Phil checks the timing

My co-worker improved on it.

Phil's Head Explodes

Here’s an example where I post a regular animated gif because the contributor is a regular contributor.

Dancing machines

However, there’s a dark side to GitHub Selfie I must warn you about. You can start to spend too much time filming selfies when you should be reviewing more code. Mine started to get a bit elaborate and I nearly hurt myself in one.

Phil crash

Octocat involved review

Code review in the car. I was not driving.

Fist pump

These became such a thing a co-worker created a new Hubot command at work .haack me that brings up a random one of these gifs in Slack.

Anyways, I’m losing the point here. GitHub Selfie is just one approach, albeit a fun one that adds a nice personal touch to Pull Request reviews and managing an OSS project. There are many other ways. The common theme though, is that a small word of appreciation goes a long way!

regex comments edit

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. - Jamie Zawinski

Other people, when confronted with writing a blog post about regular expressions, think “I know, I’ll quote that Jamie Zawinski quote!”

It’s the go-to quote about regular expressions, but it’s probably no surprise that it’s often taken out of context. Back in 2006, Jeffrey Friedl tracked down the original context of this statement in a fine piece of “pointless” detective work. The original point, as you might guess, is a warning against trying to shoehorn regular expressions into solving problems they’re not appropriate for.

As XKCD noted, regular expressions used in the right context can save the day!

XKCD - CC BY-NC 2.5 by Randall Munroe

If Jeffrey Friedl’s name sounds familiar to you, it’s probably because he’s the author of the definitive book on regular expressions, Mastering Regular Expressions. After reading this book, I felt like the hero in the XKCD comic, ready to save the day with regular expressions.

The Setup

This particular post is about a situation where Jamie’s regular expressions prophecy came true. In using regular expressions, I discovered a subtle unexpected behavior that could have led to a security vulnerability.

To set the stage, I was working on a regular expression to test whether potential GitHub usernames are valid. A GitHub username may only consist of alphanumeric characters. (The actual task I was doing was a bit more complicated than what I’m presenting here, but for the purposes of the point I’m making here, this simplification will do.)

For example, here’s my first take at it ^[a-z0-9]+$. Let’s test this expression against the username shiftkey (a fine co-worker of mine). Note, these examples assume you import the System.Text.RegularExpressions namespace like so: using System.Text.RegularExpressions; in C#. You can run these examples online using CSharpPad, just be sure to output the statement to the console. Or you can use to test out the .NET regular expression engine.

Regex.IsMatch("shiftkey", "^[a-z0-9]+$"); // true

Great! As expected, shiftkey is a valid username.

You might be wondering why GitHub restricts usernames to the Latin alphabet a-z. I wasn’t around for the initial decision, but my guess is it protects against confusing lookalikes. For example, someone could use a character that looks like an i and make me think they are shiftkey when in fact they are shıftkey. Depending on the font, or whether someone is in a hurry, the two could be easily confused.
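To make the lookalike concrete: the two names differ by a single codepoint. A quick JavaScript check shows it (any environment with Unicode strings would report the same):

```javascript
// "i" is U+0069; dotless "ı" is U+0131. They can render nearly
// identically, but they are different characters to a computer.
console.log("i".codePointAt(0).toString(16)); // "69"
console.log("ı".codePointAt(0).toString(16)); // "131"
console.log("shiftkey" === "shıftkey");       // false
```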

So let’s test this out.

Regex.IsMatch("shıftkey", "^[a-z0-9]+$"); // false

Ah good! Our regular expression correctly identifies that as an invalid username. We’re golden.

But no, we have another problem! Usernames on GitHub are case insensitive!

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$"); // false, but this should be valid

Ok, that’s easy enough to fix. We can simply supply an option to make the regular expression case insensitive.

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true

Ahhh, now harmony is restored and everything is back in order. Or is it?

The Subtle Unexpected Behavior Strikes

Suppose our resident shiftkey imposter returns again.

Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true, DOH!

Foiled! Well that was entirely unexpected! What is going on here? It’s the Turkish İ problem all over again, but in a unique form. I wrote about this problem in 2012 in the post The Turkish İ Problem and Why You Should Care. That post focused on issues with Turkish İ and string comparisons.

The tl;dr summary is that the uppercase for i in English is I (note the lack of a dot) but in Turkish it’s dotted, İ. So while we have two i’s (upper and lower), they have four.
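JavaScript’s locale-independent case mappings make the four-i situation easy to see (these mappings come straight from the Unicode tables):

```javascript
// English i/I round-trip as you'd expect:
console.log("i".toUpperCase()); // "I"
// Turkish dotless ı uppercases to a plain I...
console.log("ı".toUpperCase()); // "I"
// ...while dotted İ lowercases to i plus a combining dot above,
// which is why its lowercase form is two code units long:
console.log("İ".toLowerCase().length); // 2
```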

This feels like a bug to me, but I’m not entirely sure. It’s definitely a surprising and unexpected behavior that could lead to subtle security vulnerabilities. I tried this with a few other languages to see what would happen. Maybe this is totally normal behavior.

Here’s the regular expression literal I’m using for each of these test cases: /^[a-z0-9]+$/i. The key thing to note is that the /i at the end is a regular expression option that specifies a case-insensitive match. First up, JavaScript:

/^[a-z0-9]+$/i.test('ShİftKey'); // false

The same with Ruby. Note that the double negation is to force this method to return true or false rather than nil or a MatchData instance.

!!/^[a-z0-9]+$/i.match("ShİftKey")  # false

And just for kicks, let’s try Zawinski’s favorite language, Perl.

if ("ShİftKey" =~ /^[a-z0-9]+$/i) {
  print "true";
} else {
  print "false"; # <--- Ends up here
}
As I expected, these did not match ShİftKey but did match ShIftKey, contrary to the C# behavior. I also tried these tests with my machine set to the Turkish culture, just in case something else weird was going on.

It seems like .NET is the only one that behaves in this unexpected manner. Though to be fair, I didn’t conduct an exhaustive experiment of popular languages.

The Fix

Fortunately, in the .NET case, there are two simple ways to fix this.

Regex.IsMatch("ShİftKey", "^[a-zA-Z0-9]+$"); // false
Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant); // false

In the first case, we just explicitly specify capital A through Z and remove the IgnoreCase option. In the second case, we use the CultureInvariant regular expression option.

Per the documentation,

By default, when the regular expression engine performs case-insensitive comparisons, it uses the casing conventions of the current culture to determine equivalent uppercase and lowercase characters.

The documentation even notes the Turkish I problem.

However, this behavior is undesirable for some types of comparisons, particularly when comparing user input to the names of system resources, such as passwords, files, or URLs. The following example illustrates such a scenario. The code is intended to block access to any resource whose URL is prefaced with FILE://. The regular expression attempts a case-insensitive match with the string by using the regular expression ^FILE://. However, when the current system culture is tr-TR (Turkish-Turkey), “I” is not the uppercase equivalent of “i”. As a result, the call to the Regex.IsMatch method returns false, and access to the file is allowed.

It may be that the other regular expression engines are culturally invariant by default when ignoring case. That seems like the correct default to me.
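Ruby is a case in point: its default case mapping is culture-invariant, and the Turkic rules are strictly opt-in (via an argument to `downcase`/`upcase`, available since Ruby 2.4). A quick sketch:

```ruby
# Default Unicode case mapping: U+0130 (İ) lowercases to "i" plus a
# combining dot above -- two codepoints, so it never equals a plain "i".
"İ".downcase           # => "i̇" ("i" + U+0307)

# The Turkish conventions have to be requested explicitly:
"I".downcase(:turkic)  # => "ı" (dotless lowercase i)
"i".upcase(:turkic)    # => "İ" (dotted capital I)
```

Because the engine never silently switches to Turkish casing based on the machine's locale, case-insensitive matching stays predictable no matter where the code runs.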

While writing this post, I used several helpful online utilities to help me test the regular expressions in multiple languages.

Useful online tools

  • provides a REPL for multiple languages such as Ruby, JavaScript, C#, Python, Go, and LOLCODE among many others.
  • is a Perl REPL since that last site did not include Perl.
  • is a regular expression tester that uses the .NET regex engine.
  • allows testing regular expressions using PHP, JavaScript, and Python engines.
  • allows testing using the Ruby regular expression engine.

hr vacation comments edit

Vacation, All I ever wanted
Vacation, Had to get away
Vacation, Meant to be spent alone
Lyrics by The Go-Go’s

Beatnik Beach CC BY-SA 2.0 Photo by Ocad123

When I joined GitHub four years ago, I adored its unlimited paid time off benefit. It’s not that I planned to take a six month trek across Nepal (or the more plausible scenario of playing Xbox in my pajamas for six months), but I liked the message it sent.

It told me this company valued its employees, wanted them to not burn out, and trusted them to behave like stakeholders in the company and be responsible about their vacation.

And for me, it’s worked out well. This, in tandem with our flexible work hours, helps me arrange my work schedule so that I can be a better spouse and parent. I walk my son to the bus stop in the mornings. I chaperone every field trip I can. I take the day off when my kids have no school. It’s great!

I also believe it’s a tool to help recruit great people.

For example, in their famous Culture Deck, Netflix notes that…

Responsible People Thrive on Freedom and are Worthy of Freedom

They go on…

Our model is to increase employee freedom as we grow, rather than limit it, to continue to attract and nourish innovative people, so we have a better chance of sustained success

In one slide they note that “Process-focus Drives More Talent Out”

The most talented people have the least tolerance for processes that serve to curtail their freedom to do their best work. They’d rather be judged by the impact of their work than when and how much they worked.

This is why Netflix also has a policy that there is no vacation policy. They do not track vacation in the same way they do not track hours worked per day or week.


As you might expect, there are some subtle pitfalls to such a policy or lack thereof. I believe such policies that rely on the good judgment of individuals are well intentioned, but often ignore the very real psychological and sociological factors that come into play with such policies.

Only the pathologically naïve employee would believe they can go on a world tour for twelve months and expect no repercussions when they return to work.

In the absence of an explicit policy, there’s an implicit policy. But it’s an implicit policy that in practice becomes a big game of Calvinball where nobody understands the rules.

But unlike Calvinball where you make the rules as you go, the rules of vacationing are driven by subtle social cues from managers and co-workers. And the rules might even be different from team to team even in a small company because of different unspoken expectations.

At GitHub, this confusion comes into sharp relief when you look at our generous parental policy. GitHub provides four months of paid time off for either parent when a new child enters the family through birth or adoption. I love how family friendly this policy is, but it raises the question, why is it necessary when we already have unlimited paid time off?

Well, one benefit of this policy, even if it seems redundant, is that it sets the right expectations of what is deemed reasonable and acceptable.

Travis CI (the company) realized this issue in 2014.

When everyone keeps track of their own vacation days, two things can happen. They either forget about them completely, or they’re uncertain about how much is really okay to use as vacation days.

They also noted that people at companies with unlimited time off tend to take less time off or work here and there during their vacations.

A short-sighted management team might look at this as a plus, but it’s a recipe for burnout among their most committed employees. Humans need to take a break from time to time to recharge.

Travis CI took the unusual step of instituting a minimum vacation policy. It sets a shared understanding of what is considered an acceptable amount of time to take.

When I talked about this with my friend Drew Miller, he made an astute observation. He noted that while such a policy is a good start, it doesn’t address the root cause. A company with no vacation policy where people don’t take vacation should take a deep look at its culture and ask itself, “What about our culture causes people to feel they can’t take time off?”

For example, and the Travis CI post notes this, leaders at a company have to model good behavior. If the founders, executives, and managers take very little vacation, they unconsciously communicate to others that going on vacation is not valued or important at this company.

His words struck me. While I like the idea of a minimum vacation to help people feel more comfortable taking vacation, I feel such a move has to be in tandem with a concerted effort to practice what we preach.

Ever since then, as a manager, I’ve tried to model good responsible vacation behavior. Before I take off, I communicate to those who need to know and perform the necessary hand-offs. And more importantly, while on vacation, I disconnect and stay away from work. I do this even though I sometimes want to check work email because I enjoy reading about work. I abstain because I want the people on my team to feel free to do the same when they are on vacation.

Apparently I’ve done a good job of vacationing because somebody at GitHub noticed. We have an internal site called Team that’s sort of like an internal Twitter and then some. One fun feature is that anybody at GitHub can change your team status. At one point, I returned from vacation and noticed my status was…

I can live with that!

It’s since been changed to “Supreme Vice President.” It’s a long story.

github csharp dotnet scientist comments edit

Over on the GitHub Engineering blog my co-worker Jesse Toth published a fascinating post about the Ruby library named Scientist we use at GitHub to help us run experiments comparing new code against the existing production code.

Photo by tortmaster on flickr - CC BY 2.0

It’s an enjoyable read with a really great analogy comparing this approach to building a new bridge. The analogy feels very relevant to those of us here in the Seattle area as we’re in the midst of a major bridge construction project across Lake Washington as they lay a new bridge alongside the existing 520 bridge.

Naturally, a lot of people asked if we were working on a C# version. In truth, I had been toying with it for a while. I had hoped to have something ready to ship on the day that Scientist 1.0 shipped, but life has a way of catching up to you and tossing your plans in the gutter. The release of Scientist 1.0 lit that proverbial fire under my ass to get something out that people can play with and help improve.

Consider this a working sketch of the API. It’s very rough, but it works! I don’t have a CI server set up yet etc. etc. I’ll get around to it.

The plan is to start with this repository and once we have a rock solid battle tested implementation, we can move it to the GitHub Organization on If you’d like to participate, jump right in. There’s plenty to do!

I tried to stay true to the Ruby implementation with one small difference. Instead of registering a custom experimentation type, you can register a custom measurement publisher. We don’t have the ability to override the new operator like those Rubyists and I liked keeping publishing separate. But I’m not stuck to this idea.

Here’s a sample usage:

public bool MayPush(IUser user)
{
    return Scientist.Science<bool>("may-push", experiment =>
    {
        experiment.Use(() => IsCollaborator(user));
        experiment.Try(() => HasAccess(user));
    });
}
As you’d expect, you can install it via NuGet: Install-Package Scientist -Pre
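The core idea behind Scientist is simple enough to sketch in a few lines. The following toy Ruby version (emphatically not the real gem’s API) runs both code paths, records any mismatch, and always returns the control’s result:

```ruby
# Toy sketch of the Scientist pattern -- not the real gem's API.
class ToyExperiment
  attr_reader :mismatches

  def initialize(name)
    @name = name
    @mismatches = []
  end

  # The existing, trusted production code path.
  def use(&block)
    @control = block
  end

  # The new code path being evaluated.
  def try(&block)
    @candidate = block
  end

  def run
    control_result = @control.call
    candidate_result =
      begin
        @candidate.call
      rescue => e
        e # a crashing candidate must never take down production
      end
    @mismatches << [control_result, candidate_result] if control_result != candidate_result
    control_result # the caller always gets the control's answer
  end
end

experiment = ToyExperiment.new("may-push")
experiment.use { true }   # stand-in for the existing production check
experiment.try { false }  # stand-in for the new code under test
experiment.run            # => true (control wins), with one mismatch recorded
```

The key property is the last line of `run`: no matter what the candidate does, the experiment’s observable behavior is identical to running the control alone.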


semver comments edit

A long-standing request of (just shy of five years!) is to be able to link to specific headings and clauses of the Semver specification. For example, want to win that argument about PATCH version increments? Link to that section directly.

Today I pushed a change to that implements this. Go try it out by hovering over any section heading or list item in the main specification section! Sorry for the long delay. I hope to get the next feature request more promptly, like in four years.

In this post, I discuss some of the interesting non-obvious challenges in the implementation, some limitations of the implementation, and my hope for the future.


The Semver specification is hosted in a different GitHub repository than the website.

The specification itself is a markdown file named When I publish a new release, I take that one file, rename it to, and replace this file with it. Actually, I do a lot more, but that’s the simplified view of it.

The site is a statically generated Jekyll site hosted by the GitHub Pages system. I love it because it’s so simple and easy to update.

So one of my requirements was to require zero changes to when publishing a new version to the web. I wanted to make all transformations outside of the document to make it web friendly.

However, this meant that I couldn’t easily control adding HTML id attributes to relevant elements. If you want to add links to specific elements of an HTML page, giving elements an ID gives you a nice anchor target.

Fortunately, there’s a Markdown renderer supported by GitHub that generates IDs for headings. Up until now, was using rdiscount. I switched it to use Kramdown. Kramdown generates heading IDs by default.
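Kramdown’s ID generation boils down to downcasing the heading text, stripping punctuation, and hyphenating the spaces. Here’s a simplified approximation (the real gem handles more edge cases, such as leading digits and duplicate headings):

```ruby
# Simplified approximation of Kramdown's auto-generated heading IDs.
# The real implementation handles leading digits, duplicates, etc.
def heading_id(text)
  text.downcase.strip.gsub(/[^\w\- ]/, "").tr(" ", "-")
end

heading_id("Semantic Versioning 2.0.0")  # => "semantic-versioning-200"
heading_id("Does It Work? Yes!")         # => "does-it-work-yes"
```

The nice property is that the IDs are stable and guessable from the heading text alone, so deep links don’t break when the surrounding document changes.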

But there’s a problem. It doesn’t generate IDs for list items. Considering the meat of the spec lives in an ordered list, you’d expect people to want to link to a specific list item.

I explored using AnchorJs which is a really wonderful library for adding deep anchor links to any HTML page. You give the library a CSS selector and it’ll both generate IDs for the elements and add a nice hover link to link to that anchor.

Unfortunately, I couldn’t figure out a nice way to control the generated IDs. I wanted a nice set of sequential IDs for the list items so you could easily guess the next item.

I thought about changing the list items to headings, but I didn’t want to change the original markdown file just for the sake of its rendering as a website. I think the ordered list is the right approach.

My solution was to write some custom JavaScript that adds IDs to the relevant list items and then adds a hover link to every element in the document that has an ID.

This solves things in the way I want, but it has one downside. If a user has JavaScript disabled, deep links to the list items won’t work. I can live with that for now.

My hope is that someone will add support for generated list item IDs in Kramdown. I would do it, but all I really wanted to do was add deep links to this document. Also, my Ruby skills are old Ford mustang sitting on the lawn on concrete blocks rusty.

If you have concerns or suggestions about the current implementation, please log an issue here.


In 2016, I hope to release Semver 3.0. But I don’t want to do it alone. I’m going to spend some time thinking about the best way to structure the project moving forward so those with the most skin in the game are more involved. For example, I’d really like to have a representative from NPM, NuGet, Ruby Gems, etc. work closely with me on it.

I unfortunately have very little time to devote to it. On one level, that’s a feature. I believe stability is a feature for a specification like this and constant change creates a rough moving target. On the other hand, the world changes and I don’t want Semver to become completely irrelevant to those who depend and care about it most.

Anyways, this change is a small thing, but I hope it works well for you.