github csharp dotnet comments edit

Over on the GitHub Engineering blog my co-worker Jesse Toth published a fascinating post about the Ruby library named Scientist we use at GitHub to help us run experiments comparing new code against the existing production code.

Photo by tortmaster on flickr - CC BY 2.0

It’s an enjoyable read with a really great analogy comparing this approach to building a new bridge. The analogy feels very relevant to those of us here in the Seattle area as we’re in the midst of a major bridge construction project across Lake Washington as they lay a new bridge alongside the existing 520 bridge.

Naturally, a lot of people asked if we were working on a C# version. In truth, I had been toying with it for a while. I had hoped to have something ready to ship on the day that Scientist 1.0 shipped, but life has a way of catching up to you and tossing your plans in the gutter. The release of Scientist 1.0 lit that proverbial fire under my ass to get something out that people can play with and help improve.

Consider this a working sketch of the API. It’s very rough, but it works! I don’t have a CI server set up yet etc. etc. I’ll get around to it.

The plan is to start with this repository and once we have a rock solid battle tested implementation, we can move it to the GitHub Organization on GitHub.com. If you’d like to participate, jump right in. There’s plenty to do!

I tried to stay true to the Ruby implementation with one small difference. Instead of registering a custom experimentation type, you can register a custom measurement publisher. We don't have the ability to override the new operator like those Rubyists, and I liked keeping publishing separate. But I'm not wedded to this idea.
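To give a flavor of what I mean, here's a rough sketch of what registering a custom publisher might look like. The names below (IMeasurementPublisher, Result<T>, and the MeasurementPublisher property) are placeholders of my own invention, not a settled API, so check the repository for the current shape of things.

// Sketch only: these type and member names are illustrative placeholders,
// not necessarily the API the library ships.
public class ConsoleMeasurementPublisher : IMeasurementPublisher
{
    public void Publish<T>(Result<T> result)
    {
        // Record whether the control (Use) and the candidate (Try) agreed.
        Console.WriteLine("{0} matched: {1}", result.ExperimentName, result.Matched);
    }
}

// Then, once at startup (in Main, for example), separate from any experiment:
Scientist.MeasurementPublisher = new ConsoleMeasurementPublisher();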

Here’s a sample usage:

public bool MayPush(IUser user)
{
  return Scientist.Science<bool>("may-push", experiment =>
  {
      experiment.Use(() => IsCollaborator(user));
      experiment.Try(() => HasAccess(user));
  });
}

As expected, you can install it via NuGet: Install-Package Scientist -Pre

Enjoy!

semver comments edit

A long-time request for http://semver.org/ (just shy of five years old!) has been the ability to link to specific headings and clauses of the Semver specification. For example, want to win that argument about PATCH version increments? Link to that section directly.

Today I pushed a change to semver.org that implements this. Go try it out by hovering over any section heading or list item in the main specification section! Sorry for the long delay. I hope to get to the next feature request more promptly, like in four years.

In this post, I discuss some of the interesting non-obvious challenges in the implementation, some limitations of the implementation, and my hope for the future.

Implementation

The Semver specification is hosted in a different GitHub repository than the website.

The specification itself is a markdown file named semver.md. When I publish a new release, I take that one file, rename it to index.md, and replace the site's existing index.md with it. Actually, I do a lot more, but that's the simplified view of it.

The semver.org site is a statically generated Jekyll site hosted by the GitHub Pages system. I love it because it’s so simple and easy to update.

So one of my requirements was zero changes to semver.md when publishing a new version to the web. I wanted all the transformations that make it web friendly to happen outside of the document.

However, this meant that I couldn't easily add HTML id attributes to the relevant elements. If you want to add links to specific elements of an HTML page, giving elements an ID provides a nice anchor target.

Fortunately, there's a Markdown renderer supported by GitHub that generates IDs for headings. Up until now, semver.org used rdiscount. I switched it to Kramdown, which generates heading IDs by default.
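For example, Kramdown takes a heading like this one from the spec:

## Semantic Versioning Specification (SemVer)

and renders it with an id derived from the heading text, roughly:

<h2 id="semantic-versioning-specification-semver">Semantic Versioning Specification (SemVer)</h2>

so a URL ending in #semantic-versioning-specification-semver deep links straight to that section.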

But there's a problem: it doesn't generate IDs for list items. Considering the meat of the spec is a numbered list of clauses, you can bet people want to link to specific list items.

I explored using AnchorJS, a really wonderful library for adding deep anchor links to any HTML page. You give the library a CSS selector, and it both generates IDs for the matching elements and adds a nice hover link for each anchor.

Unfortunately, I couldn’t figure out a nice way to control the generated IDs. I wanted a nice set of sequential IDs for the list items so you could easily guess the next item.

I thought about changing the list items to headings, but I didn’t want to change the original markdown file just for the sake of its rendering as a website. I think the ordered list is the right approach.

My solution was a semver.org-specific JavaScript implementation that adds IDs to the relevant list items and then adds a hover link to every element in the document that has an ID.
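To give a sense of the approach, here's a condensed sketch of that script. The selector and class names are illustrative placeholders; the real script on semver.org may differ in the details.

// Give each clause in the spec a predictable, sequential id.
// The '#spec-item-list' selector is a placeholder for the real one.
var items = document.querySelectorAll('#spec-item-list > li');
for (var i = 0; i < items.length; i++) {
  items[i].id = 'spec-item-' + (i + 1);
}

// Then add a hover link to every element that has an id.
var targets = document.querySelectorAll('[id]');
for (var j = 0; j < targets.length; j++) {
  var link = document.createElement('a');
  link.href = '#' + targets[j].id;
  link.textContent = '§';
  link.className = 'anchor-link'; // CSS reveals this only on hover
  targets[j].appendChild(link);
}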

This solves things in the way I want, but it has one downside. If a user has JavaScript disabled, deep links to the list items won’t work. I can live with that for now.

My hope is that someone will add support for generated list item IDs to Kramdown. I would do it myself, but all I really wanted was to add deep links to this document. Also, my Ruby skills are old-Ford-Mustang-sitting-on-concrete-blocks-on-the-lawn rusty.

If you have concerns or suggestions about the current implementation, please log an issue here.

Future

In 2016, I hope to release Semver 3.0. But I don’t want to do it alone. I’m going to spend some time thinking about the best way to structure the project moving forward so those with the most skin in the game are more involved. For example, I’d really like to have a representative from NPM, NuGet, Ruby Gems, etc. work closely with me on it.

I unfortunately have very little time to devote to it. On one level, that's a feature. I believe stability is a feature for a specification like this, and constant change creates a rough moving target. On the other hand, the world changes and I don't want Semver to become completely irrelevant to those who depend on it and care about it most.

Anyway, this change is a small thing, but I hope it works well for you.

personal comments edit

I have a big problem as a dad.

For the most part, my kids are wonderful. Like most kids, they have their infuriating moments. But that’s not the problem of which I speak. It may help for me to describe one such scenario.

My family and I are in Portland for a brief trip. As I drove around an unfamiliar location, my daughter asked the typical question kids ask as a form of advanced psychological torture, “Are we there yet?”

This wasn’t so bad. My wife calmly explained to my daughter not to ask that again because she’d already done so several hundred times. She then asked my daughter what the consequence should be if she asks again and they started negotiating about the consequence of that action.

Not long after, my daughter repeatedly poked my ear with a stick made of straws as I drove. We have a strict and, in my humble opinion, eminently reasonable rule not to distract the driver. My wife calmly told my daughter that the consequence of that action next time would be to break the stick.

Now if you’ve ever seen or met my daughter, she’s a delightful joyful child who’s usually all bright eyes and big smiles. She can also be a bit mercurial. Like most kids, that temper burns a little hotter when she’s hungry.

At being admonished, she started whimpering and crying. But then, out of nowhere, she screamed with the full fury of her six-year-old being, “Try it and I'll kill you!”

public domain image

My eyes went wide and I looked at my wife in shock, mouthing “Holy fuck!” As I searched for the words to convey to my daughter the gravity of how wrong it was to say that, I burst out laughing. Out loud. Uncontrollably.

The intense incongruity of this little bundle of cuteness screaming such an over the top aggressive threat was too comical and I could not keep a straight face.

Of course, this is the wrong reaction for so many reasons. For one, it could encourage such behavior. Or worse, it might appear that I’m mocking her feelings.

And this is my problem. This isn't the first time her outsized rage has spurred uncontrollable laughter on my part. Of course I struggle to take a serious tone and talk through her feelings with her, but I can't help but take an outside view of the situation and see how funny it is.

Fortunately, we applied the principle of failure and repair and talked it through after my laughter subsided. She recognized her words were hurtful and apologized. We then had lunch and she was back to her sweet, joyful self. Until next time.

personal comments edit

I planned to skip the tried and true year in review post because who reads such drivel anyways, amirite? They feel like one big exercise in vanity.

But then it dawned on me. Perhaps I'd lost touch with what blogging was all about. What it's always been about. Hasn't it always been one big unabashed and unashamed exercise in vanity?

Though writers better than me tend to couch it as something that sounds more virtuous with words like “write for yourself” and such while they smirk awash in the glow of their laptop screen with a giant glamour shot of themselves on the wall.

So, um, yeah. I write for myself. But when people other than your close friends and family start to read (well, my family long since stopped reading), it's easy to lose sight of that fact. Also, I'm on a train to Portland and I don't really have anything much better to do.

So on that note, here it is, my look back at 2015.

new year celebration

Highlights

.NET Fringe

As one of the organizers of the conference, I had the great pleasure to deliver the closing talk about a topic near and dear to me, Social Coding. The conference had a great infectious energy and many people told me it inspired them to make their first commit to open source.

Nearly dying in Puerto Rico

Ok, I didn't really almost die, but I was really, really uncomfortable. I did have a wonderful time speaking about GitHub to the Latin American audience and exploring the old forts like I was inside Assassin's Creed Black Flag. In 2016 I'll be speaking in Peru!

The release of GitHub Extension for Visual Studio

This was a fun collaboration with Microsoft and represents just the beginning. We later made it open source! In 2016, we plan to invest a lot more time and energy into the extension to make your experience working with GitHub projects from Visual Studio better.

DevIntersections EU

I had the pleasure of speaking at DevIntersections EU in Amsterdam. It was my first time in Amsterdam and I immediately loved it. The week of the conference happened to coincide with ADE (Amsterdam Dance Event), so there were a lot of great music events to check out. One of my favorite sights, though, was the library I visited with the infamous Mr. Hanselman and a local friend of his.

QCon SF

QCon is an international collection of conferences, and the organizers asked me to chair the first .NET track in a long while, only the second one ever. They mentioned that the first one didn't go so well.

The title of the track was The Amazing Potential of .NET Open Source. I wanted to highlight all the wonderful things happening in the .NET world in the wake of Microsoft making a big splash on GitHub. But I also focused on talks I thought would have crossover appeal to those who are not .NET developers.

I am eternally grateful to the speakers who accepted my invitation and delivered a knockout track. Ours ended up being the conference's top-rated track based on attendee voting. So I think this .NET thing still has legs. It probably helped that I didn't speak at the conference.

Vacations!

This year the family and I didn't take any major vacations. We did go on an outing to Lake Chelan and got caught in a wildfire. My wife and I also went to Vancouver for a couple days to see a concert, which was a lot of fun. Next year we're doing a proper vacation to Maui with another family, which should be great.

Looking to 2016

I’m really looking forward to 2016. My team has a new focus to make GitHub great for .NET developers and I can’t wait to dive into that.

It's tough on my kids when I travel, so I try not to speak at too many conferences, but there are so many good ones around these days. So far, I plan to speak at DevDays in Peru. I'm also hoping to return to NDC in Oslo, which will be exciting.

Other than that, I haven’t really confirmed anywhere else. Perhaps we’ll throw another .NET Fringe. I’m hoping to reach some new audiences in some new places so we’ll see what the year has in store.

Happy New Year!

csharp style code comments edit

Like many developers, I have many strong opinions about things that really do not matter. Even worse, I have the vanity to believe other developers want to read about it.

For example, a recent Octokit.net pull request changed all instances of String to string. As a reminder, String actually represents the type System.String and string is the nice C# alias for System.String. To the compiler, these are the exact same thing. So ultimately, it doesn’t really matter.

However, as I just said, I care. I care about things that don’t matter.

ReSharper, a tool we use to maintain our code conventions, by default suggests changing all String to string. But this doesn't fit the convention we follow.

To understand the convention I have in my head, it helps to think about why these aliases exist in the first place.

I’m not really sure, but types like string and int feel special to me. They feel like primitives. In C#, keywords are lowercase. So in my mind, when we’re using these types in this manner, they should be lowercase. Thus my convention is to lowercase string and int any time we’re using them in a manner where they feel like a keyword.

For example,

int x;
string foo;
string SomeMethod();
void AnotherMethod(string x);
public string Foo { get; }

But when we’re using it to call static methods or calling constructors on these types, I want it to look like a normal type. These constructs don’t look like you’re using a keyword.

Int32.Parse("42");
var foo = String.Empty;
String.Format("blah{0}", "blah");
var baz = new String('a', 5);

Maybe I’m being too nitpicky about this. For those of you who also care about unimportant things, I’m curious to hear what your thoughts on this matter. Is it possible to configure R# or other tools to enforce this convention?

UPDATE

One of the reasons I follow this convention is readability. When things don't fit my expectations, a tiny bit of extra processing is needed. So as I'm scanning code in Visual Studio and I see keyword.MethodCall, it takes me an extra second.

Having said that, a commenter noted that the corefx coding guidelines always use lowercase. One of my principles with coding conventions is to adopt anything that looks like a widespread standard, even if I disagree with it. At the end of the day these conventions are mostly arbitrary, and rather than spend a lot of time arguing, I'd rather just point to something and say “We're doing it that way because why not?” That's why I tend to follow the Framework Design Guidelines, for example.

Even so, I still need to let this one sink in.

git github console shell comments edit

GitHub Desktop, the application formerly known as GitHub for Windows, is a streamlined GUI that makes it easy to contribute to repositories on GitHub.

GitHub Desktop Screenshot

At the same time, I often hear from people that they don’t need a GUI because they’re perfectly happy to use the command line. If you feel this way, that’s great! I love the command line too! Let’s be friends!

Even so, in a set of blog posts I plan to write, I hope to convince you that GitHub Desktop is one GUI application that augments and complements the Git command line in such a powerful way that you’ll want to integrate it into your GitHub workflow.

But for now, I’ll work to convince you that GitHub Desktop is the easiest, fastest, and best way to get the Git command line set up on a Windows machine and keep that installation up to date.

Installation

Installation is easy. Visit desktop.github.com and click the big blue button that says “Download GitHub Desktop.” We use ClickOnce to streamline the installation process.

Once you install, you’ll notice a GitHub icon and a Git Shell Icon on your desktop.

GitHub Desktop Icons

By default, the Git Shell shortcut launches PowerShell with Git set up. You can also launch the Git Shell from the Desktop GUI application by pressing the ~ key (or CTRL + ~ at any time, such as when a text field has focus), or via the gear menu in the top right by selecting the “Open in Git Shell” menu item.

Support for the ~ key to launch the shell was inspired by the 3D shooter Quake which uses that key to bring up the console within the game. Many thanks to Scott Hanselman for that idea.

Sane Defaults

We often mention that we install Git with “sane defaults.” What do we mean by that? Well, let me tell you an old joke first.

Q: How do you generate a random string?

A: Put a Windows user in front of vi, and tell them to exit.

I can’t take credit for this joke. The first instance I’ve found is on Twitter by Aaron Jorbin.

By default, Git sets vi as the commit editor. Statistically speaking, that's never the right answer for Windows users. GitHub Desktop instead sets your system's default text editor (typically Notepad) as the commit editor via the open source GitPad tool.

Also, GitHub Desktop sets itself up as the Git credential provider. That way, you can log into the app and we'll handle credential management when you push to or pull from GitHub using the Git command line. This is especially useful if you have two-factor authentication enabled (WHICH YOU SHOULD!) as we handle the 2FA dance on your behalf. There's no need to mess around with personal access (OAuth) tokens.

Posh-Git

The other “sane default” is that we include Posh-Git (maintained by Keith Dahlby) with our installation.

When you launch the shell, you'll notice that the PowerShell window now has a Git-enhanced prompt that displays your current branch name. Posh-Git provides tab completion within PowerShell for Git commands and Git data. For example, you can type git {TAB}, where {TAB} is the TAB key, and it will complete with the first Git command (abort in this case). Keep pressing TAB to cycle through all the Git commands.

Likewise, if you type git checkout b{TAB}, you can cycle through all the branches in your repository that start with b.

Here is an animated gif that shows what it looks like to tab through all my repository’s branches.

Posh-Git Tab Expansion

PowerShell Configuration

One thing to note: if you launch PowerShell by some means other than our Git Shell shortcut or the Desktop application, our version of Git won't be available. That's because we install a portable version of Git that doesn't change any of your system settings.

You can add our version of Git to your PowerShell profile so it’s always there. Simply edit the PowerShell profile document: %UserProfile%\Documents\WindowsPowershell\Microsoft.Powershell_profile.ps1

Add this snippet to the file.

Write-Host "Setting up GitHub Environment"
. (Resolve-Path "$env:LOCALAPPDATA\GitHub\shell.ps1")

Shell.ps1 contains the environment settings that the Desktop application uses when it launches the Git Shell.

You can also have your default profile load up Posh-Git if you haven’t already. Make sure this snippet comes after the previous one.

Write-Host "Setting up Posh-Git"
. (Resolve-Path "$env:github_posh_git\profile.example.ps1")

$env:github_posh_git is one of the environment variables that Shell.ps1 sets up.

The other thing to note is that if you've already customized PowerShell with a profile, the Git Shell doesn't load your existing PowerShell profile. We wanted to avoid conflicts with existing Posh-Git installs or whatever else you might have, so loading your existing profile requires opting in. To do this, create a PowerShell script file named GitHub.PowerShell_profile.ps1. Desktop will execute this file, if it exists.

Desktop looks for this file in the same directory as your $profile script (typically %UserProfile%\Documents\WindowsPowershell\). I simply load my regular profile from that script.

Here’s the full contents of my GitHub.PowerShell_profile.ps1.

Write-Host "Setting environment for GitHub Powershell Profile"

$dir = (Split-Path -Path $MyInvocation.MyCommand.Definition -Parent)
Push-Location $dir

. (Resolve-Path "$dir\Microsoft.Powershell_profile.ps1")

Pop-Location

Turbocharge the Shell with ConsoleZ

Hey, you're an individual and you need to express that individuality. Nothing says “I'm boring” like the default blue PowerShell console (if you do always use the default, ignore what I just said; you might be the real nonconformist). I express my individuality with a fork of Console2 called ConsoleZ, just like every one of my friends! If you have Chocolatey installed, you can run this command to install ConsoleZ:

chocolatey install ConsoleZ

Then in GitHub Desktop, go to the Tools menu (top right gear icon), select Options..., and find the section labeled Default Shell. Select Custom and enter the path to ConsoleZ. On my machine, that’s C:\Chocolatey\lib\ConsoleZ.1.15.0.15253\tools\Console.exe.

Default Shell

You’re not quite done yet. By default, ConsoleZ loads the Command shell. You’ll want to change it so it loads PowerShell.

  • Right-click in the main console and click Edit Settings.
  • Under Tabs, paste the path %SystemRoot%\syswow64\WindowsPowerShell\v1.0\powershell.exe into the “Shell” text box.

ConsoleZ Tab Settings

You'll notice I got a little fancy and used the Git Shell icon for the tab. I put the icon here (or use our Octocat App Icon) so you can download it and do the same if you'd like. I saved the icon to the %LocalAppData%\GitHub\icons directory and then pasted its path into the Icon text box.

Customize ConsoleZ

That's just enough to make ConsoleZ work with Desktop. But since you'll spend a lot of time in the shell, let's really turbocharge it. Here is a set of customizations adapted from Scott Hanselman's Console2 blog post. I've made a few small tweaks in places to ensure that the customizations don't interfere with the Git Shell functionality.

  • Under the Console tab, do not enter a value for the “Startup Dir” setting as Hanselman suggests. When you launch the Git Shell from the Desktop application, it sets the working directory to the currently selected repository's working directory. If you set ConsoleZ's startup directory, that feature of Desktop is overridden. If you don't care about this feature, by all means set a startup directory.
  • Under the Console tab, do play around with the Columns and Rows. I have mine set to 30 rows and 80 columns.
  • Under Appearance > More, hide the menu, status bar, and toolbar.
  • Under Appearance > Font, set the font to Consolas 15. Unlike Hanselman's advice, do not change the font color. Posh-Git makes important use of color, and the output of many Git commands uses color to signify additions and deletions, for example. If you want to change the default font color, do it via your PowerShell profile or by hacking Posh-Git.
  • Under Appearance > Transparency, Hanselman recommends setting Window Transparency to a nice conservative 40 for both Active and Inactive. I like setting Inactive to 80. Your mileage may vary.
  • Under Hotkeys, set “Group Tab” to something else like CTRL+ALT+T. This frees you up to change the “New Tab 1” hotkey to CTRL+T as God intended. You'll have to click on the hotkey, then click in the textbox, type the hotkey you want, AND press the Assign button for it to stick.
  • Under Hotkeys, change “Copy Selection” to CTRL+C and “Paste” to CTRL+V.

Here’s what it looks like on my machine. You can see my background image poking through.

Customized ConsoleZ

Finally, one setting that makes ConsoleZ feel really badass to me is to go to Appearance > Full Screen and check “Start in full screen”. I've been using this for a bit and it's growing on me. Just remember: hit ALT+F4 or type exit to close it.

Updates

We put in a lot of work to ensure that GitHub Desktop silently and automatically updates itself. When we ship an update, you’ll see a subtle notification in the application (a little up arrow in the top right) that an update is available. Just restart the application to install the update.

Desktop updates also include updates to Posh-Git, the Git command line, and Git LFS. It’s convenient to have all of that updated for you automatically so you don’t have to track down each update individually.

We keep everything updated so you don’t have to.

Conclusion

With these customizations in place, you’ll have a really great Git command line setup that we’ll work hard to keep up to date so you don’t have to. In fact, we have a few updates coming up very soon! Follow this up with some of my Git aliases and you’ll really be smoking.

Be careful though, a setup like this communicates that you’re a Git Pro and everyone will start to bug you with their Git problems. Just remember, git reflog is your friend.

In some follow-up posts, I’ll demonstrate how this setup works really nicely in tandem with GitHub Desktop.

work comments edit

The TED Radio Hour podcast has an amazing episode entitled “The Meaning of Work”. It consists of four segments that cover various aspects of finding meaning and motivation at work. You should definitely listen to it, but I’ll provide a brief summary here of some points I found interesting.

The first segment features Margaret Heffernan who gave this TED talk about what makes high functioning teams.

At the beginning of the talk, she recounts a study by the biologist William Muir, emphasis mine.

Chickens live in groups, so first of all, he selected just an average flock, and he let it alone for six generations. But then he created a second group of the individually most productive chickens – you could call them superchickens – and he put them together in a superflock, and each generation, he selected only the most productive for breeding.

After six generations had passed, what did he find? Well, the first group, the average group, was doing just fine. They were all plump and fully feathered and egg production had increased dramatically. What about the second group? Well, all but three were dead. They’d pecked the rest to death. The individually productive chickens had only achieved their success by suppressing the productivity of the rest.

Sound familiar? We’re often taught that great results come from solitary geniuses who bunker down to work hard and emerge some time later with some great work of genius to bestow upon the world, alongside a luxurious beard perhaps.

Jim Carrey in some movie

But it's a myth. Great results come from deep collaboration among teams of people who trust each other. Heffernan goes on to cite an MIT study that looked at what led to high-functioning teams.

Nor were the most successful groups the ones that had the highest aggregate I.Q. Instead, they had three characteristics, the really successful teams. First of all, they showed high degrees of social sensitivity to each other. This is measured by something called the Reading the Mind in the Eyes Test. It’s broadly considered a test for empathy, and the groups that scored highly on this did better. Secondly, the successful groups gave roughly equal time to each other, so that no one voice dominated, but neither were there any passengers. And thirdly, the more successful groups had more women in them.

People who are attuned to each other, who can talk to each other with trust, empathy, and candor, create an environment where ideas can really flow. It’s wonderful to work in such an environment.

Another thing that struck me in the Radio Hour interview is an exchange she describes often having with businesses.

What’s the driving goal here? And they answer, $60 billion in revenue. And I’ll say, “you have got to be joking! What on earth makes you think that everybody is really going to give it their all to hit a revenue target. You know you have to talk to something much deeper inside people than that. You have to talk to people about something that makes a difference to them everyday if you want them to bring their best and do their best and feel that you’ve given them the opportunity to do the best work they’ve ever done.”

This resonates with me. Revenue and profit targets don’t put a spark in my step in the morning.

Rather, it's the story of Anna, and how my own kids reflect that story, that gets me motivated. I'm excited to work on a platform that the future makers of the world will use to build their next great ideas.

I don’t mind getting paid well, but it doesn’t produce a deep connection to my work. As Heffernan points out,

For decades, we’ve tried to motivate people with money, even though we’ve got a vast amount of research that shows that money erodes social connectedness.

That lesson is reiterated in Drive: The Surprising Truth About What Motivates Us, which I've linked to many times in the past.

What gets me up in the morning with a spring in my step is working on a platform that houses the code that helps send people into space or coordinates humanitarian efforts here on earth.

What motivates you?

Oh, by the way, many teams at GitHub including mine are hiring. Come do meaningful work with us!

hiring industry comments edit

The shaman squatted next to the entrails on the ground and stared intently at the pattern formed by the splatter. There was something there, but confirmation was needed. Turning away from the decomposing remains, the shaman consulted the dregs of a cup of tea, searching the shifting patterns of the swirling tea leaves for corroboration. There it was. A decision could be made. “Yes, this person will be successful here. We should hire this person.”

Spring Pouchong tea - CC BY SA 3.0 by Richard Corner

Such is the state of hiring developers today.

Our approach to hiring is misguided

The approach to hiring developers, and managing their performance afterwards, at many if not most tech companies is based on age-old ritual and outdated ideas about what predicts how an employee will perform. Most of it ends up being very poor at predicting success, and rife with bias.

For example, remember when questions like “How would you move Mt. Fuji?” were all the rage at companies like Microsoft and Google? The hope was that in answering such questions, the interviewee would demonstrate clever problem solving skills and intelligence. Surely this would help hire the best and brightest?

Nope.

According to Google’s Senior VP of People Operations Laszlo Bock, Google long ago realized these questions were complete wastes of time.

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.

We’re not the first to face this

Our industry is at a stage where we are rife with superstition about what factors predict putting together a great team. This sounds eerily familiar.

The central premise of Moneyball is that the collected wisdom of baseball insiders (including players, managers, coaches, scouts, and the front office) over the past century is subjective and often flawed. Statistics such as stolen bases, runs batted in, and batting average, typically used to gauge players, are relics of a 19th-century view of the game and the statistics available at that time.

Moneyball, a book by Michael Lewis, documents how the Oakland Athletics baseball team decided to ignore tradition and use an evidence-based statistical approach to figuring out what makes a strong baseball team. This practice of applying empirical statistical analysis to baseball became known as sabermetrics.

Prior to this approach, conventional wisdom looked at stolen bases, runs batted in, and batting average as indicators of success. Home run hitters were held in especially high esteem, even those with low batting averages. It was not unlike our industry’s fascination with hiring Rock Stars. But the sabermetrics approach found otherwise.

Rigorous statistical analysis had demonstrated that on-base percentage and slugging percentage are better indicators of offensive success.

Did it work?

By re-evaluating the strategies that produce wins on the field, the 2002 Athletics, with approximately US$44 million in salary, were competitive with larger market teams such as the New York Yankees, who spent over US$125 million in payroll that same season…This approach brought the A’s to the playoffs in 2002 and 2003.

Moneyball of Hiring

It makes me wonder, where is the Moneyball of software hiring and performance management?

Companies like Google, as evidenced by the previously mentioned study, are applying a lot of data to the analysis of hiring and performance management. I bet that analysis is a competitive advantage in their ability to hire the best and form great teams. It gives them the ability to hire people overlooked by other companies still stuck in the superstition that making candidates code on whiteboards or reverse linked lists will find the best people.

Even so, this area is ripe for more science and for study on a grander scale. I would love to see multiple large companies collect and share this data for the greater good of the industry and society at large. Studies like this are often a force in reducing unconscious bias and increasing diversity.

Having this data in the open might remove this one competitive advantage in hiring, but companies can still compete by offering interesting work, great compensation, and benefits.

The good news is, there are a lot of people and companies thinking about this. The article What's Wrong with Job Interviews, and How to Fix Them is a great example.

We’ll never get it right

Even with all this data, we’ll never perfect hiring. Studying human behavior is a tricky affair. If we could predict it well, the stock market would be completely predictable.

Companies should embrace the fact that they will often be wrong. They will make mistakes in hiring. As much time as a company spends attempting to make its hiring process rock solid, it should spend a similar amount of time building humane systems for correcting hiring mistakes. This is a theme I've touched upon before: the inevitability of failure.

github jekyll pages comments edit

A while back I migrated my blog to Jekyll and GitHub Pages. I worked hard to preserve my existing URLs.

But the process wasn't perfect. My old blog engine was a bit forgiving about URLs: as long as the URL “slug” was correct, the URL could have any date in it. So quite a few non-canonical URLs ended up out in the wild.

So what I did was create a 404 page with a link to log an issue against my blog. GitHub Pages serves up this page for any file-not-found error. Here's an example of the rendered 404 page.

And the 404 issues started to roll in. Great! So what do I do with those issues now? How do I fix them?

GitHub Pages fortunately supports the Jekyll Redirect From plugin. For a guide on how to set it up on your GitHub Pages site, check out this GitHub Pages help documentation.
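For reference, the setup itself is tiny. At the time of writing, it amounts to listing the gem in your site's _config.yml (later versions of Jekyll use a plugins key instead of gems):

gems:
  - jekyll-redirect-from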

Here's an example of my first attempt at front matter for a post on my blog that contains a redirect.

---
layout: post
title: "Localizing ASP.NET MVC Validation"
date: 2009-12-07 -0800
comments: true
disqus_identifier: 18664
redirect_from: "/archive/2009/12/12/localizing-aspnetmvc-validation.aspx"
categories: [aspnetmvc localization validation]
---

As you can see, my old blog was an ASP.NET application, so all the file extensions end with .aspx. Unfortunately, this caused a problem. GitHub currently serves unknown extensions like this with the application/octet-stream content type. So when someone visits the old URL using Google Chrome, instead of being redirected, they end up downloading the HTML for the redirect page. It happens to work in Internet Explorer, which I suspect does a bit of content sniffing.

It turns out there's an easy solution, as suggested by @charliesome. If you add the .html extension to a Jekyll URL, GitHub Pages will handle the omission of the extension just fine.

Thus, I fixed the redirect like so:

redirect_from: "/archive/2009/12/12/localizing-aspnetmvc-validation.aspx.html"

By doing so, a request for http://haacked.com/archive/2009/12/12/localizing-aspnetmvc-validation.aspx is properly redirected. This is especially useful to know for those of you migrating from old blog engines that appended a file extension other than .html to all post URLs.

Also, if you need to redirect multiple URLs, you can use a Jekyll array like so:

redirect_from:
  - "/archive/2012/04/15/The-Real-Pain-Of-Software-Development-2.aspx.aspx.html/"
  - "/archive/2012/04/15/The-Real-Pain-Of-Software-Development-2.aspx.html/"

Note that this isn’t just useful for blogs. If you have a documentation site and re-organize the content, use the redirect_from plug-in to preserve the old URLs. Hope to see your content on GitHub Pages soon!

github git oss comments edit

Writing extensions to Visual Studio is really hard. As in turn-your-hair-gray hard. In fact, I now have gray hairs and I didn't when I started the GitHub Extension for Visual Studio project (true story).

Some of the challenge is that Visual Studio has been around a long time and has accumulated multiple extensibility systems (such as DTE and MEF) and extensibility points (such as editor extensions and project systems) to learn. But that's not the real challenge. The real challenge is that Visual Studio is so flexible and offers so many extensibility points that it can be hard to figure out where to start. It's almost too much of a good thing.

Fortunately, Andreia and I had the help of the Visual Studio team at Microsoft as we worked together to add GitHub features to the release candidate for Visual Studio 2015 last April. Not to mention the fact that Andreia has more Visual Studio extensibility experience in her pinky than ten of me would have.

Well today Microsoft announced the RTM of Visual Studio 2015. Congratulations!

Glad I dry cleaned the tux for this occasion

We figured, what better time than now to open up the code we used to build the extension? In discussions with the Visual Studio team, we all agreed that the more real-world examples of Visual Studio extensions are out there, the better for everyone. This is in part why we released the GitHub Extension for Visual Studio under the MIT license today.

The Future

I'll be honest with you: the extension doesn't do a whole lot today. What it does do is important: it handles the authentication aspects (including two-factor authentication) of working with Git and GitHub, and it makes it easy to clone repositories into Visual Studio as well as publish new projects.

But we plan to do so much more. Our next major feature will add support for creating Pull Requests right from Visual Studio similar to the way GitHub for Windows supports it.

Get involved!

If this sort of thing appeals to you, we’d love to have you participate. Every contribution matters whether it’s logging an issue or submitting a pull request.

On a personal level, this has been a fun project for me. The idea of extending my primary development environment has always appealed to me, but I never felt I had the time to learn how.

It was great to work with people who are world-class experts in this environment. I learned a huge amount. And even though I'm a manager now and don't get to code as much as I used to, I made an exception (out of necessity) and spent a lot of time writing code for this project, which made me very happy.

Preparing for this project is what led me to write the Encourage Extension (shameless plug).

science holography comments edit

This past week I learned a new party trick: I can turn a holographic researcher beet red simply by referencing Tupac's Hologram. It's not because they identify strongly with East Coast hip-hop. Rather, it has to do with calling it a hologram in the first place.

Tupac's Peppers Ghost

A hologram is a very specific effect that takes advantage of the interference and diffraction of light. As Wikipedia describes it,

Normally, a hologram is a photographic recording of a light field, rather than of an image formed by a lens, and it is used to display a fully three-dimensional image of the holographed subject, which is seen without the aid of special glasses or other intermediate optics. … When suitably lit, the interference pattern diffracts the light into a reproduction of the original light field and the objects that were in it appear to still be there, exhibiting visual depth cues such as parallax and perspective that change realistically with any change in the relative position of the observer.

The Tupac “hologram” was a variant of Pepper's Ghost, a technique described as early as the 16th century by Giambattista della Porta:

Let there be a chamber wherein no other light comes, unless by the door or window where the spectator looks in. Let the whole window or part of it be of glass, as we used to do to keep out the cold. But let one part be polished, that there may be a Looking-glass on bothe sides, whence the spectator must look in. For the rest do nothing. Let pictures be set over against this window, marble statues and suchlike. For what is without will seem to be within, and what is behind the spectator’s back, he will think to be in the middle of the house, as far from the glass inward, as they stand from it outwardly, and clearly and certainly, that he will think he sees nothing but truth. But lest the skill should be known, let the part be made so where the ornament is, that the spectator may not see it, as above his head, that a pavement may come between above his head. And if an ingenious man do this, it is impossible that he should suppose that he is deceived.

I learned this recently at a Microsoft Research Faculty Summit that brought together leading experts in machine vision, natural language processing, AI, etc. I was there to be on a panel about what academia could learn from the open source community. I’ll write more about this later when the video becomes available.

At the conference reception dinner, I found myself in a conversation with some pioneers of holography and machine vision. During most of the conversation I wisely kept my mouth shut and did my best bobblehead impression.

However, at one point I did open my mouth just long enough to betray my complete ignorance on the topic, asking the researchers about the Tupac Hologram. Dark clouds gathered overhead casting a pall on the once bright conversation. To their credit, with beads of sweat rolling off their foreheads under the strain of self-control, the researchers politely and calmly educated me about holography. One key aspect of a hologram is that it does not require special equipment to see it. It’s a thing of beauty. It appears to be right there.

By that definition, it occurred to me that HoloLens therefore isn’t a true hologram either. This came up in the conversation (but not by me as I decided to continue keeping my mouth shut) and the HoloLens researcher noted that while true, it’s much closer to the experience of a hologram than using the Pepper’s Ghost technique. So while it makes him cringe as someone who understands the distinction, it doesn’t make him as angry as it used to.

So next time the topic of HoloLens comes up and someone uses the term hologram, you can pull an epic Well, Actually and come out of the conversation victorious!

UPDATE: Daniel Rose left a comment pointing to this post, which goes into more detail about holography and makes a convincing case that the distinction between what HoloLens does and holography is overly pedantic and not useful. It looks like the well-actually-er has been well actually'd!

github git comments edit

Show of hands if this ever happens to you. After a long day of fighting fires at work, you settle into your favorite chair to unwind and write code. Your fingers fly over the keyboard punctuating your code with semi-colons or parentheses or whatever is appropriate.

But after a few commits, it dawns on you that you’re in the wrong branch. Yeah? Me too. This happens to me all the time because I lack impulse control. You can put your hands down now.

GitHub Flow

As you may know, a key component of the GitHub Flow lightweight workflow is to do all new feature work in a branch. Fixing a bug? Create a branch! Adding a new feature? Create a branch! Need to climb a tree? Well, you get the picture.

So what happens when you run into the situation I just described? Are you stuck? Heavens no! The thing about Git is that its very design supports fixing up mistakes after the fact. It's very forgiving in this regard. For example, a recent post on the GitHub blog highlights all the different ways you can undo mistakes in Git.

The Easy Case - Fixing master

This is the simple case. I made commits on master that were intended for a branch off of master. Let’s walk through this scenario step by step with some visual aids.

The following diagram shows the state of my repository before I got all itchy trigger finger on it.

Initial state

As you can see, I have two commits to the master branch. HEAD points to the tip of my current branch. You can also see a remote tracking branch named origin/master (this is a special branch that tracks the master branch on the remote server). So at this point, my local master matches the master on the server.

This is the state of my repository when I am struck by inspiration and I start to code.

First

I make one commit. Then two.

Second Commit - fixing time

Each time I make a commit, the local master branch is updated to point to the new commit. Uh oh! As in the scenario in the opening paragraph, I meant to create these two commits on a new branch, creatively named new-branch. I'd better fix this up.

The first step is to create the new branch. We can create it and check it out all in one step.

git checkout -b new-branch

checkout a new branch

At this point, both new-branch and master point to the same commit. Now I can force the master branch back to its original position.

git branch --force master origin/master

force branch master

Here’s the set of commands that I ran all together.

git checkout -b new-branch
git branch --force master origin/master

Fixing up a non-master branch

The wrong branch

This case is a bit more complicated. Here I have a branch named wrong-branch that is my current branch, but I thought I was working in master. I made two commits in this branch by mistake, which caused this fine mess.

A fine mess

What I want here is to migrate commits E and F to a new branch off of master.

Let's walk through the steps one by one. Not to worry: as before, I start by creating a new branch.

git checkout -b new-branch

Always a new branch

Again, just like before, I force wrong-branch to its state on the server.

git branch --force wrong-branch origin/wrong-branch

force branch

But now, I need to move the new-branch onto master.

git rebase --onto master wrong-branch

Final result

The git rebase command is a great way to move commits onto other branches (well, actually you replay commits, but that's a story for another day). The handy --onto flag makes it possible to specify a range of commits to move elsewhere. Pivotal Labs has a helpful post that describes this option in more detail.

So in this case, I moved commits E and F because they are the ones since wrong-branch on the current branch, new-branch.

Here's the set of commands I ran all together.

git checkout -b new-branch
git branch --force wrong-branch origin/wrong-branch
git rebase --onto master wrong-branch

Migrate commit ranges - great for local-only branches

The assumption I made in the past two examples is that I'm working with branches I've pushed to a remote. When you push a branch to a remote using the -u option, Git creates a local “remote tracking branch” that tracks the state of the branch on the remote server.

For example, when I pushed the wrong-branch, I ran the command git push -u origin wrong-branch which not only pushes the branch to the remote (named origin), but creates the branch named origin/wrong-branch which corresponds to the state of wrong-branch on the server.

I can use a remote tracking branch as a convenient “Save Point” that I can reset to if I accidentally make commits on the corresponding local branch. It makes it easy to find the range of commits that are only on my machine and move just those.

But I could be in the situation where I don’t have a remote branch. Or maybe the branch I started muddying up already had a local commit that I don’t want to move.

That’s fine, I can just specify a commit range. For example, if I only wanted to move the last commit on wrong-branch into a new branch, I might do this.

git checkout -b new-branch
git branch --force wrong-branch HEAD~1
git rebase --onto master wrong-branch

Alias was a fine TV show, but a better Git technique

When you see the set of commands I ran, I hope you’re thinking “Hey, that looks like a rote series of steps and you should automate that!” This is why I like you. You’re very clever and very correct!

Automating a series of git commands sounds like a job for a Git Alias! Aliases are a powerful way of automating or extending Git with your own Git commands.

In a blog post I wrote last year, GitHub Flow Like a Pro with these 13 Git aliases, I wrote about some aliases I use to support my workflow.

Well now I have one more to add to this list. I decided to call this alias, migrate. Here’s the definition for the alias. Notice that it uses git rebase --onto which we used for the second scenario I described. It turns out that this happens to work for the first scenario too.

    migrate = "!f(){ CURRENT=$(git symbolic-ref --short HEAD); git checkout -b $1 && git branch --force $CURRENT ${3-'$CURRENT@{u}'} && git rebase --onto ${2-master} $CURRENT; }; f"

There's a lot going on here and I could probably write a whole blog post unpacking it. For now, a quick annotated view follows, and then I'll focus on the usage pattern.
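Here's the same function spread out with comments so you can see how it maps to the steps we ran earlier (expanded for readability, not meant to be pasted into your config as-is):

f() {
  # Remember the branch we're currently on.
  CURRENT=$(git symbolic-ref --short HEAD)
  # Step 1: create and check out the new branch at the current commit.
  git checkout -b $1 &&
  # Step 2: force the original branch back to $3 if given, otherwise
  # back to its remote tracking branch (e.g. origin/master).
  git branch --force $CURRENT ${3-"$CURRENT@{u}"} &&
  # Step 3: replay the migrated commits onto $2, defaulting to master.
  git rebase --onto ${2-master} $CURRENT
}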

This alias has one required parameter, the new branch name, and two optional parameters.

Parameter       Type       Description
branch-name     required   Name of the new branch.
target-branch   optional   The branch the new branch is created off of. Defaults to master.
commit-range    optional   The commits to migrate. Defaults to the current remote tracking branch.

This command always migrates the current branch.

If I’m on a branch and want to migrate the local only commits over to master, I can just run git migrate new-branch-name. This works whether I’m on master or some other wrong branch.

I can also migrate the commits to a branch created off of something other than master using this command: git migrate new-branch other-branch

And finally, if I want to just migrate the last commit to a new branch created off of master, I can do this.

git migrate new-branch master HEAD~1

And there you go. A nice alias that automates a set of steps to fix a common mistake. Let me know if you find it useful!

Also, I want to give a special thanks to @mhagger for his help with this post. The original draft pull request had the grace of a two-year-old neurosurgeon with a mallet: the straightforward Git commands I proposed would rewrite the working tree twice. With his proposed changes, this alias never rewrites the working tree. As with math, there's often a more elegant solution in Git once you understand the available tools.

code bugs software comments edit

The beads of sweat gathered on my forehead were oddly juxtaposed against the cool temperature of the air conditioned room. But there they were, caused by the heat of the CTO’s anger. I made a sloppy mistake and now sat in his office wondering if I was about to lose my job. My first full-time job. I recently found some archival footage of this moment.

I wore headphones everywhere back then

So why do I write about this? Unless you've been passed out drunk in a gutter for the last week (which is much more believable than living under a rock), you've heard about this amazing opus by Paul Ford entitled “What is Code?”

If you haven’t read it yet, cancel all your appointments, grab a beer, find a nice shady spot, and soak it all in. The whole piece is great, but there was one paragraph in particular that I zeroed in on. In the intro, Paul talks about his programming start.

I began to program nearly 20 years ago, learning via oraperl, a special version of the Perl language modified to work with the Oracle database. A month into the work, I damaged the accounts of 30,000 fantasy basketball players. They sent some angry e-mails. After that, I decided to get better.

This was his “getting better moment” and like many such moments, it was the result of a coding mistake early in his career. It caused me to reminisce about the moment I decided to get better.

When I graduated from college, websites were still in black and white and connected to the net by string and cans. They pretty much worked like this.

The Internet circa 1997 - image from Wikipedia - Public Domain

As a fresh graduate, I was confident that I would go on to grad school and continue my studies in Mathematics. But deep in debt, I decided to get a job long enough to pay down that debt a bit before I returned to the warm comfort of academia. After all, companies were keen to hire people to work on this “Web” thing. It wouldn't hurt to dabble.

Despite my lack of experience, a small custom software shop named Sequoia Softworks hired me. It was located in the quaint beach town of Seal Beach, California. You know it’s a beach town because it’s right there in the name. The company is still around under the name Solien and now is located in Santa Monica, California.

My first few weeks were a nervous affair, as my degree in Math was pretty much useless for the work I was about to engage in. Sure, it prepared me to think logically, but I didn't know a database from a VBScript, and my new job was to build database-driven websites with this hot new technology called Active Server Pages (pre-.NET; we'd now call it “Classic ASP” if we call it anything).

Fortunately, the president of the company assigned a nice contractor to mentor me. She taught me VBScript, ADODB, and how to access a SQL Server database. Perhaps the most valuable lesson I learned was this:

Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open("Driver={SQL Server};Server=XXX;database=XXX;uid=XXX;pwd=XXX")
Set rs = conn.Execute("SELECT * FROM SomeTable")
Do Until rs.EOF
  ' ...

  rs.MoveNext ' NEVER EVER EVER EVER FORGET THIS CALL!
Loop
rs.Close
conn.Close

As the comment states, never ever ever ever forget to call rs.MoveNext. Ever.

A benefit of working at a tiny company: it wasn't long before I got to work on important and interesting projects. One of these was a music website called myLaunch, a companion to the multimedia Launch CD-ROM magazine. Cutting edge stuff; you can still find them on Amazon. I wish I had kept the Radiohead issue; it sells for $25!

Launch magazine

It wasn’t long before the CD-ROM magazine was discontinued and the website became the main product. Launch later was bought by and incorporated into Yahoo Music.

One of my tasks was to make some changes to the Forgot Password flow. I dove in and cranked out the improvements. This was before Extreme Programming popularized the idea of test-driven development, so I didn't write any automated tests. I was so green it hadn't even occurred to me that such a thing was possible.

So I manually tested my changes. At least, I’m pretty sure I did. I probably tried it a couple times, saw the record in the database, might have seen the email or not. I don’t recall. You know, rigorous testing.

And that brings me to the beginning of this post. Not long after the change was deployed, the CTO (and co-founder) called me into his office. It turns out a Vice President at our client company had a daughter who used the website to read about her favorite bands, and she had forgotten her password. She went to reset it but never got the email with the newly generated password and was completely locked out. And we had no way of knowing how many other people had run into this problem and were locked out, never to return.

When I returned to my desk and sprinkled the code with Response.Write statements (the most sophisticated debugging technique I had at my disposal), I discovered that, sure enough, the code to email the new password never ran due to a logic bug.

I soon learned there’s a pecking order to finding bugs. It’s better to

  1. … have the computer find the bug (compiler, static analysis, unit tests) than to find it at runtime.
  2. … find a bug at runtime yourself (or have a co-worker find it) before a user runs into it.
  3. … have a user find a bug (and report it to you) before the daughter of your client’s Vice President does.

I wasn't fired right then, but it was made clear to me that this wouldn't hold true if I made another mistake like that. Gulp! And by "Gulp!" I don't mean the JavaScript build system.

Inspired by the fear of losing my job, this became my Getting Better Moment. Like Paul Ford, I decided right then to get better. Problem was, I wasn't sure exactly how to go about it. Fortunately, a co-worker had the answer. He lent me his copy of Code Complete, and my eyes were opened.

Reading this book changed the arc of my career for the better. Programming went from a dalliance to pay off some of my student loan bills to a profession I wanted to make a career of. I fell in love with the practice and craft of writing code. This was for me.

The good news is that I never was fired from my first job. I ended up staying there seven years, growing into a lead and then a manager of all the developers, before deciding to leave when my interests led elsewhere. During that time, I certainly deployed more bugs, but I was much more rigorous, and the impact of those bugs was small.

So there you go, that’s my Getting Better Moment. What was yours like?

conf github travel medical comments edit

This past week I had the great pleasure of speaking at the TechSummit conference in Puerto Rico.

Tech Summit 2015 is the premier new business development and technology event pushing the boundaries for government redesign and transformation.

My colleague Ben Balter referred me to Giancarlo Gonzales, the CIO of Puerto Rico and an energetic advocate for government adoption of technology, to speak about the transformative power of open source on businesses and government agencies. I partnered with Rich Lander from Microsoft. Rich is on the CLR team and has been heavily involved in all the work to open source the Core CLR, etc.

Colorful Puerto Rico Buildings

The local technology sector is heavily pro-Microsoft. Giancarlo had a vision that we could help convince these business leaders that the world has changed, Microsoft has changed, and it's now perfectly fine to mix and match technologies as they make sense. It's OK to use GitHub along with your Microsoft stack. You don't have to be locked in to a single vendor. We tried our best.

The Forts

Most of my time in Puerto Rico was spent working on the talk and in an emergency room (more on that later). But Rich and I did manage a short trip to the forts at San Cristóbal and El Morro.

San Cristobal

I was absolutely giddy with excitement when I set foot in these forts. As a fan of Sid Meier’s Pirates and later Assassin’s Creed Black Flag, which both take place in the West Indies, I really enjoyed seeing one of the settings in real life.

El Morro

There are tunnels to explore, ramparts to patrol, and views of the ocean to soak in. I highly recommend a visit. The impressiveness of the forts is a reflection of Puerto Rico's history as a strategically important outpost.

ER shenanigans

A couple weeks back, while home in Bellevue, I hurt my elbow somehow. I'm not even sure how, but it was almost certainly one of my many injuries from playing soccer.

It was sore for a while, but no big deal. A couple days before I was set to fly to Puerto Rico, my elbow started to swell with fluid. Looking online, it appeared to be elbow (olecranon) bursitis. This is when the bursa in the elbow gets inflamed due to trauma and fluid starts to gather. I went to an urgent care clinic and received a prescription for an anti-inflammatory and a bandage to wrap my arm for compression. At this point, because there was no external wound, the doctor didn't think it was likely to be infected. However, we did both notice that my elbow was very hot to the touch.

Unfortunately, it kept getting worse every day from that point on. I just assumed it was taking time for the medicine to really kick in. But it came to a head the night before my talk. I was in pain and I couldn’t sleep. At this point, I felt like my body was trying to tell me something. And if you’re a long time reader of my blog, you’ll know I’ve been in this situation before. I also noticed that my elbow had gone from a soft sack of fluid to become very hard. I was a bit nervous.

So I got out of bed at 2 AM and grabbed a taxi to the emergency room at Ashford Presbyterian hospital. The doctor took a look at it and ordered some X-rays, and they gave me an IV of antibiotics.

Antibiotics IV

It turns out that I had an elbow fracture with a small bone chip. The doctor prescribed a more powerful anti-inflammatory and some antibiotics. He also gave me a sling to wear.

I ended up getting back to the hotel around 7:30 AM. I immediately headed out to fill the prescriptions and then Rich and I continued to work on our talks up until the point we had to go on stage and deliver the talk.

This was the first all-nighter I've pulled in a very long time. I only tell the story for two reasons. First, I mentioned the ER visit on Twitter and some folks expressed concern; I wanted them to know it's not as bad as it sounds. But it does suck.

But more importantly, once again it’s a reminder to listen to your body when it’s giving you pain signals. The last time I shared one of my medical stories, I heard back that people appreciated the heads up.

UPDATE: When I got back to the States, I got another X-ray, and it turns out it's not a fracture at all. Tendons can have a bit of calcification that looks like bone chips, and I noticed the X-ray at my local hospital is much higher resolution. What I have is called septic bursitis (bursitis with a side of infection). So I'm still on a bunch of antibiotics.

github visualstudio comments edit

I heard you liked GitHub, so today my team put GitHub inside of your Visual Studio. This has been a unique collaboration with the Visual Studio team. In this post, I’ll walk you through installation and the features. I’ll then talk a bit about the background for how this came to be.

If you are attending Build 2015, I'll be giving a demo of this as part of the talk Martin Woodward and I are giving in room 2009.

If you're a fan of video, here's one I recorded for Microsoft's Channel 9 site that walks through the features. I also recorded an interview with the .NET Rocks folks where we have a rollicking good time talking about it.

Installation

If you have Visual Studio 2015 installed, visit the Visual Studio Extension gallery to download and install the extension. You can use the following convenient URL to grab it: https://aka.ms/ghfvs.

If you haven't installed Visual Studio 2015 yet, you can get the extension as part of the installation process. Just make sure to customize the installation.

Customize install

This brings up a list of optional components. Choose wisely. Choose GitHub!

GitHub option

This'll install the GitHub Extension for Visual Studio (hereafter shortened to GHfVS to save my fingers) as part of the Visual Studio installation process.

Login

One of the previous pain points when working with GitHub using Git inside of Visual Studio was dealing with two-factor authentication. If you have 2FA set up (and you should!), then you probably ran across this great post by Kris van der Mast.

I hope you don’t mind Kris, but we’ve just made your post obsolete.

If you go to the Team Explorer section, you’ll see an invitation to connect to GitHub.

GitHub Invitation Section

Click the “Connect…” button to launch the login dialog. If you’ve used GitHub for Windows, this’ll look a bit familiar.

Login Dialog

After you log in, you'll see the two-factor authentication dialog if you have 2FA enabled.

2fa dialog

Once you log in, you'll see a new GitHub section in Team Explorer with a button to clone and a button to create.

GitHub Section

Clone

Click the clone button to launch the Repository Clone Dialog. This is a quick way to get one of your repositories (or any repository shared with you) into Visual Studio.

Clone dialog

Double-click a repository (or select one and click Clone) to clone it to your machine.

Create

Click the "create" button to launch the Repository Creation Dialog. This lets you create a repository on your machine and on GitHub at the same time.

Create dialog

Repository Home Page

When you open a repository in Visual Studio that's connected to GitHub (its remote "origin" is a github.com URL), the Team Explorer homepage provides GitHub-specific navigation items.

GitHub Repository Home Page

Many of these, such as Pull Requests, Issues, and Graphs, simply navigate you to GitHub.com. But over time, who knows what could happen?
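By the way, if you're curious whether a given repository qualifies as "connected to GitHub," you can check where its origin remote points. This is plain Git, nothing specific to the extension, and you can run it from any shell, including the Package Manager Console:

# Prints the URL of the "origin" remote; a github.com URL is what
# makes the GitHub navigation items appear.
git config remote.origin.url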

Publish

If you have a repository open that does not have a remote (it's local only), click the Sync navigation item for the repository and you'll see a new option to publish to GitHub.

Publish control

Open in Visual Studio

The last feature is actually a change to GitHub.com. When you log in to the extension for the first time, GitHub.com learns that you have the extension installed. So if you’re also logged into GitHub.com, you’ll notice a new button under the Clone in Desktop button.

Open in Visual Studio

The Open in Visual Studio button launches Visual Studio 2015 and clones the repository to your machine.

Epilogue

This has been an exciting and fun project to work on with the Visual Studio and TFS teams. Microsoft created some new extensibility points for us and walked us through getting included in the new optional installation process.

On the GitHub side, Andreia Gaita (shana on GitHub and @sh4na on Twitter) and I wrote most of the code, borrowing heavily from GitHub for Windows (GHfW). Andreia provided the expertise, especially with Visual Studio extensibility. I provided moral support, cheerleading, and helped port code over from GHfW.

This collaboration with Microsoft really highlights the New Microsoft to me. When I pitched this project, our CEO asked me why we didn't just ask Microsoft to include it. Based on my history and battle scars, I gave him several rock-solid reasons why that would never ever ever happen. But later, I had an unrelated conversation with my former Microsoft manager, Scott Hunter, who was regaling me with how committed Microsoft's new CEO, Satya Nadella, is to changing the company. Even drastic changes.

So that got me thinking: it doesn't hurt to ask. I went to a meeting with Somasegar (aka Soma), the Corporate VP of Developer Division, and asked him. I'm pretty sure it went something like, "Hey, I don't know if you'd be interested in this crazy idea. I mean, just maybe, only if you're interested, it's no big deal if you don't want to. But, what do you think of including GitHub functionality inside of Visual Studio?" Ok, maybe I didn't downplay it that much, but I wasn't expecting what happened next.

Without hesitation, he said yes! Let’s do it! And so here we are, working hard to make using GitHub an amazing and integrated part of working with your code from Visual Studio. Stay tuned as we have big plans for the future.

oss nuget comments edit

The other day I was discussing the open source dependencies we had in a project with a lawyer. Forgetting my IANAL (I am not a lawyer) status, I made some bold statement regarding our legal obligations, or lack thereof, with respect to the licenses.

I can just see her rolling her eyes and thinking to herself, “ORLY?” She patiently and kindly asked if I could produce a list of all the licenses in the project.

Groan! This means I need to look at every package in the solution and then either open each package and look for the license URL in the metadata, or search for each package and find the license on NuGet.org.

If only the original creators of NuGet had exposed the package metadata in a structured manner. If only they'd had the foresight to provide that information in a scriptable fashion.

Then it dawned on me. Hey! I'm one of those people! And that's exactly what we did! I bet I could programmatically access this information. So I immediately opened up the Package Manager Console in Visual Studio and cranked out a PowerShell script…HA HA HA! Just kidding. Being the lazy ass I am, I turned to Google and hoped someone else had figured it out before me.

I didn't find an exact solution, but I found a really good start. This StackOverflow answer by Matt Ward shows how to download the license for a single package. I then found this post by Ed Courtenay that lists every package in a solution. I combined the two, tweaked them a bit (such as filtering out null project names), and ended up with this one-liner you can paste into your Package Manager Console. Note that you'll want to change the path to something that makes sense on your machine.

I posted this as a gist as well.

@( Get-Project -All | ? { $_.ProjectName } | % { Get-Package -ProjectName $_.ProjectName } ) | Sort -Unique | % { $pkg = $_ ; Try { (New-Object System.Net.WebClient).DownloadFile($pkg.LicenseUrl, 'c:\dev\licenses\' + $pkg.Id + ".txt") } Catch [system.exception] { Write-Host "Could not download license for $pkg" } }

UPDATE: My first attempt had a bug in the catch clause that would prevent it from showing the package when an exception occurred. Thanks to Graham Clark for noticing it, Stephen Yeadon for suggesting a fix, and Gabriel for providing a PR for the fix.
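If the one-liner is a bit much to parse, here's the same pipeline spread over multiple lines. Consider it a readability sketch of the script above (with the failure message tweaked to print the package Id); the gist remains the canonical version, and you'll still want to point the output folder somewhere that exists on your machine.

# Gather the unique set of packages referenced across every project in the solution.
$packages = Get-Project -All |
    Where-Object { $_.ProjectName } |
    ForEach-Object { Get-Package -ProjectName $_.ProjectName } |
    Sort-Object -Unique

foreach ($pkg in $packages) {
    Try {
        # Save each license as <PackageId>.txt; change the folder to taste.
        $target = Join-Path 'c:\dev\licenses' ($pkg.Id + '.txt')
        (New-Object System.Net.WebClient).DownloadFile($pkg.LicenseUrl, $target)
    }
    Catch [System.Exception] {
        Write-Host "Could not download license for $($pkg.Id)"
    }
}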

Be sure to double-check that the list is correct by comparing it to the list of package folders in your packages directory. This isn't the complete list for my project because we also reference submodules, but it's a really great start!
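For that comparison, something like this quick check (a sketch that assumes the default packages folder sitting next to your solution file) gives you a list to eyeball against the downloaded licenses:

# List the package folders restored for the solution.
Get-ChildItem .\packages -Directory | Select-Object -ExpandProperty Name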

I have high hopes that some PowerShell guru will come along and improve it even more. But it works on my machine!

personal management comments edit

A lot of the advice you see about management is bullshit. For example, I recently read some post, probably on some pretentious site like medium.com, about how you shouldn't send emails late at night if you're a manager because it sends the wrong message to your people. It creates the impression that your people should be working all the time and destroys the idea of work-life balance.

whaaaaat's happening?

Don’t get me wrong, I get where they’re coming from. The 1990s.

For some reason, this piece of management advice made me angry. Let me describe my team. I have one person in San Francisco, two in Canada, one in Sweden, one in Copenhagen, a couple in Ohio, one in Australia, and I live in Washington. So pray tell me, when exactly can I send an email that won’t be received by someone out of “normal” working hours?

I believe the advice is well-meaning, but it's severely out of date with how modern distributed teams work. I also think it mythologizes managers. It creates this mindset that managers wield some magical power in the actions they take.

True, there’s an implicit power structure at work between managers and those they manage. But healthy organizations understand that managers are servant leaders. They serve the needs of the team. Managers are not a special class of people. They are beautifully flawed like the rest of us. I sometimes have too much to drink and write tirades like this. Sometimes I get caught up in work and am short with my spouse or children. I say things I don’t mean at work because I’m angry or tired. We have to recognize management as a role, not a status.

The point is, rather than rely on these “rules” of business conduct, we’d be better served by building real trust amongst members of a team. My team understands that I might send an email at night not because I expect a response at night. It’s not because I expect people to work night and day. No, it’s because I understand we all work in different time zones. They know that I sometimes work at night because I took two hours out during the middle of the day to play soccer. And I understand they’ll respond to my emails when they’re damn good and ready to.

personal dotnet oss comments edit

Unless you live in a cave, you are probably aware of the large leaps forward Microsoft and .NET have made in the open source community.

Although I do wonder about that phrase “unless you live in a cave.” By now, don’t cave dwellers have decent internet access?

As usual, I digress.

Over at GitHub, we're pretty excited to see Microsoft put so much of .NET on GitHub under permissive licenses. Not only have they put a large amount of code on GitHub, they also work hard to manage their open source projects well.

I am excited by all this. It’s been a long time coming. It’s a good thing.

That being said, Microsoft, being the giant company it is, casts a large shadow. It’s good to praise the vigor with which Microsoft adopts open source. At the same time, it’s important not to forget all the projects that have been here all along, nor the new ones that crop up all the time. The lesser-known projects and independent open source developers are an important part of the .NET open source ecosystem.

DotNetFringe (April 12-14 in Portland, Oregon) is a new conference that will help bring all these grassroots independent efforts out of the shadows. This conference is organized by a group of independent folks (myself included) who have a deep-seated passion for .NET open source.

And we collected a great line-up of speakers. Some of the names you’ll recognize as fixtures in the .NET open source community. Many are regular speakers. We also worked hard to create an environment that welcomes fresh new voices you may not have heard before.

We know your time and money are valuable. We've tried to keep the price low and the content quality high. So definitely buy a ticket and come say hello to me in Portland! I'll bring some Octocat stickers to give out!

personal management comments edit

I'm often amazed at the Sisyphean lengths people will go to in trying to prevent failure, yet how little they prepare for its inevitability.

There's nothing wrong with attempting to prevent failures that are easily preventable. But such preventative measures have to be weighed against the friction and cost they introduce. Lost in this calculation is the consideration that much of the energy and effort that goes into prevention might be better spent preparing to respond to failure and repair the damage.

This is a lesson that’s not just true for software, but all aspects of life. The following are examples of where this principle applies to social policy, parenting, relationships, and code.

Social Policy

The “War on Drugs” is a colossal failure of social policy…

If there is one number that embodies the seemingly intractable challenge imposed by the illegal drug trade on the relationship between the United States and Mexico, it is $177.26. That is the retail price, according to Drug Enforcement Administration data, of one gram of pure cocaine from your typical local pusher. That is 74 percent cheaper than it was 30 years ago.

So after thirty years and $51 billion spent every year (yes, billion, every year!), not to mention the incredible social costs, the result we get for all that expenditure is that hit of cocaine at a 74 percent discount. Wall Street traders are rejoicing.

It doesn't take 5 Nobel Prize-winning economists to tell you that the drug war is a failure.

The idea that you can tell people to “Just Say No” and that will somehow turn the tide of human nature is laughably ridiculous. This is the failure of the prevention model.

A response that focuses on repair, as opposed to all-out prevention, recognizes that you can't stop people from taking drugs, but you can help with the repair process for those who do get addicted. You get better results if you treat drugs as a health problem and not a criminal problem. It's worked very well for Portugal. Ten years after the country decriminalized all drugs, drug abuse is down by half.

This development can not only be attributed to decriminalisation but to a confluence of treatment and risk reduction policies.

It’s a sane approach and it works. Locking an addict up in jail doesn’t help them to repair.

Parenting

A while back I wrote about the practice of Reflective Parenting. In that post, I wrote about the concept of repairing.

Now this last point is the most important lesson. Parents, we are going to fuck up. We’re going to do it royally. Accept it. Forgive yourself. And then repair the situation.

If there’s ever a situation that will disabuse a person of the notion that they’re infallible, it’s becoming a parent. An essential part of being human is that mistakes will be made. Learning how to gracefully repair relationships afterwards helps lessen the long term impact of such mistakes.

Perhaps I'm fortunate that I get a lot of practice fucking up and then repairing with my own kids. Just the other day I was frazzled trying to get the kids ready for a birthday party. I told my son to fill out the birthday card but to avoid a splotch of water on the table while I went to grab a towel. Sure enough, he put the card on the water. I was pissed. I berated him for doing the one thing I had just finished explicitly telling him not to do. Why would he do that?! Why didn't he listen?!

His normal smile was replaced with a crestfallen face as his eyes teared up. That struck me. When I calmed down, he pointed to a little plate full of water on the table. He thought I had meant that water. "Asshole" doesn't even begin to describe what a schmuck I felt like at that moment. It was a total misunderstanding. He didn't even see the splotch of water next to the more conspicuous plate of water.

I got down to his eye level and told him I was sorry. I had made a mistake, and I understood how my instructions were confusing. I was sincere, remorseful, and honest. We hugged it out and things were fine afterwards. Learning to repair is essential to good parenting.

Relationships

I’ve been reading Difficult Conversations: How to Discuss What Matters Most. This book is a phenomenal guide to communicating well, both at work and home. Even if you think you are great at communicating with others, there’s probably something in here for you.

It helped me through a recent difficult situation where I hurt someone's feelings. I had no idea that my words would prompt the response they did, and I was surprised by the reaction. Prior to reading this book, my typical approach would have been to defend my actions and help this person see the obvious reason in my position. I would try to win the argument.

Difficult Conversations proposes a third approach: rather than try to win the argument, move towards a learning conversation.

Instead of wanting to persuade and get your way, you want to understand what has happened from the other person’s point of view, explain your point of view, share and understand feelings, and work together to figure out a way to manage the problem going forward. In so doing, you make it more likely that the other person will be open to being persuaded, and that you will learn something that significantly changes the way you understand the problem. Changing our stance means inviting the other person into the conversation with us, to help us figure things out. If we’re going to achieve our purposes, we have lots we need to learn from them and lots they need to learn from us. We need to have a learning conversation.

What I’ve learned is that people in general aren’t irrational. They only appear to be irrational because you are often missing a piece of context about how they view the world and interpret the actions of others.

This becomes crystal clear when you consider how you interpret your own actions. When was the last time you concluded that you acted with malicious intent or irrationally? How is it that you always act rationally with good intent, and others don’t? Given your impeccable track record, how is it that sometimes, others ascribe malice to your actions? Well they must be irrational! Or is it that they are missing a piece of context that you have? Could it be possible, when you’ve been on the other end, that you ascribed malice in a situation where you really were missing some information?

It’s not until you realize most people are alike in this way that you can start to have more productive learning conversations - even with folks you strongly disagree with.

Back to the story, despite all my good intentions and all my efforts to be respectful, I still failed and hurt my friend’s feelings. It’s just not possible to avoid this in every situation, though I strive to greatly reduce the occurrences. Fortunately, I’ve prepared for failure. By focusing on a learning conversation, we were able to repair the relationship. I believe it’s even stronger as a result.

Git

There are so many examples in software that it's hard to point to just one, so I'll pick two. First, let's talk about The Thing About Git. I've linked to this post many times because one of its key points really resonates with me.

Git means never having to say, “you should have”

If you took The Tangled Working Copy Problem to the mailing lists of each of the VCS’s and solicited proposals for how best to untangle it, I think it’s safe to say that most of the solutions would be of the form: “You should have XXX before YYY.” … More simply, the phrase: “you should have,” ought to set off alarm bells. These are precisely the types of problems I want my VCS to solve, not throw back in my face with rules for how to structure workflow the next time.

Git recognizes that people make mistakes and rather than tell you that your only recourse is to grab a time machine and do what you should have done in the first place, it gives you tools to repair mistakes.

The theme of preparing for failure applies just as much to software and systems as it does to dealing with people.

Restores

There are a lot of backup systems out there. And to a degree, backups are a step toward recognizing the value of preparing for disasters. But as any good system administrator knows, backups are not the important part of the process. Backups are nothing without restores. Restores are what we really care about. That's the "repair" step when a hard drive fails.

Moral

Systems and policies that require 100% failure prevention to work are highly suspect. Such a system should trigger your Spidey sense. When building a system or policy, think not only about how the system or policy might fail, but also about how the people using the system or subject to the policy might fail. Then give them tools to recover from and repair those failures. Perhaps the only guarantee you can provide is that there will be failure. So prepare for it, and prepare to recover from it.