management

If you run a company, stop increasing pay based on performance reviews. No, I'm not taking advantage of all that newly legal weed in my state (Washington). I know this challenges a belief as old as business itself. It challenges something that seems so totally obvious that you're still not convinced I'm not smoking something. But hear me out.

money money money! - by Andrew Magill CC BY 2.0 https://flic.kr/p/68vjKV

This excellent post in the Harvard Business Review Blog, Stop Basing Pay on Performance Reviews, makes a compelling case for this. It won't take long, so please go read it. Here's an excerpt.

If your company is like most, it tries to drive high performance by dangling money in front of employees’ noses. To implement this concept, you sit down with your direct reports every once in a while, assess them on their performance, and give them ratings, which help determine their bonuses or raises.

What a terrible system.

Performance reviews that are tied to compensation create a blame-oriented culture. It’s well known that they reinforce hierarchy, undermine collegiality, work against cooperative problem solving, discourage straight talk, and too easily become politicized. They’re self-defeating and demoralizing for all concerned. Even high performers suffer, because when their pay bumps up against the top of the salary range, their supervisors have to stop giving them raises, regardless of achievement.

The idea that tying more pay to performance undermines intrinsic motivation is supported by a lot of science. In my one year at GitHub post, I highlighted a talk that referred to a set of such studies:

I can pinpoint the moment that was the start of this journey to GitHub, though I didn't know it at the time. It was when I watched the RSA Animate video of Dan Pink's talk on The surprising truth about what really motivates us.

Hint: It’s not more money.

I recommend this talk to just about everyone I know who works for a living. I'm kind of obsessed with it. It's mind-opening. Dan Pink shows how study after study demonstrates that for work that contains a cognitive element (such as programming), more pay undermines motivation.

More recently, researchers found a neurological basis to support the idea that monetary rewards undermine intrinsic motivation.

This rings true to me personally because of all the open source work I do for which I don't get paid. I do it for the joy of building something useful, for the recognition of my peers, and because I enjoy the process of writing code.

Likewise, at work, the reason I work hard is I love the products I work on. I care about my co-workers. And I enjoy the recognition for the good work I do. The compensation doesn't motivate me to work harder. All it does is give me the means and reason to stay at my company.

Not to mention, should the company dangle a bonus to improve my performance, there are some questions to ask. Why wasn't I already trying to improve my performance? Where will this new performance come from? Often, the extra performance comes from attempting to work long hours, which backfires and is unsustainable.

So what's the alternative?

Pay according to the market

This is what Lear did, emphasis mine:

In 2010, we replaced annual performance reviews with quarterly sessions in which employees talk to their supervisors about their past and future work, with a focus on gaining new skills and mitigating weaknesses. We rolled out the change to our 115,000 employees across 36 countries, some of which had cultures far different from that of our American base.

The quarterly review sessions have no connection to decisions on pay. None. Employees might have been skeptical at first, so to drive the point home, we dropped annual individual raises. Instead we adjust pay only according to changing local markets.

They pay according to the market.

This makes a lot of sense when you consider the purpose of compensation:

  • It's an exchange of money for work.
  • It helps a company attract and hire talent.
  • It helps a company retain talent.

It's not a reward. You wouldn't go to your neighborhood kid and say, "Hey, I'll pay you 50% of what the market would normally offer you, but I'll increase it 4% every year if you do a really good job." The kid would rightfully give you the middle finger. But companies do this to employees all the time. Don't believe me?

A recent study showed,

Staying employed at the same company for over two years on average is going to make you earn less over your lifetime by about 50% or more.

Keep in mind that 50% is a conservative number at the lowest end of the spectrum. This is assuming that your career is only going to last 10 years. The longer you work, the greater the difference will become over your lifetime.

Let that sink in.

If your employees act rationally, they'd be stupid to stay at your company for longer than two years, watching their pay fall further behind the market for their skills. And if they wise up and leave every two years, the turnover is very costly. The total cost of turnover can be as high as 150% of an employee's salary when you factor in lost opportunity costs and the time and expense of hiring a replacement.

So even if you decide to continue on a pay for performance system, market forces necessitate that you adjust pay to market value. Or continue selling your employees a story about how they should stay out of "loyalty". This story is never bidirectional.

And what should you do if someone tries to take advantage of the system and consistently underperforms? You fire them. They are not upholding their side of the exchange. Most of the time, people want to do good work. Optimize for that scenario. People will have occasional ruts. Help them through it. That's what the separate performance reviews are for. They provide a way to give candid feedback without the charged atmosphere that money can bring to the discussion.

The Netflix model

This is one area where I think the Netflix model is very interesting. They try to pay top of market for each employee using the following test:

  1. What could this person get elsewhere?
  2. What would we pay for a replacement?
  3. What would we pay to keep that person (if they had a bigger offer elsewhere)?

After all, when you hire someone, the offer is usually based on the market. So why stop adjusting it after that? This also solves a problem I've seen companies run into when the market is hot: they'll hire a fresh college grad for more than a much more experienced developer makes, because the developer's performance bonuses haven't kept up with the market.

Keep in mind, this is good for employees too. If an employee wants to make more money, they will focus on increasing their value to your company and the market as a whole. This aligns the employee's interest with the company's interest.

Another cool feature of the Netflix model is they give employees a choice to take as much or as little of that compensation as stock instead of cash. I think that's a great way to give employees a choice in how they invest in the company's future.

Conclusion

If you insist on continuing to believe that bonuses for performance are the right approach, I'd be curious to hear what science and data you have that refute the evidence presented by the various people I've referenced. What do you know that they don't? It'd make for some interesting research.

UPDATE: Based on some comments, there's one thing I want to clarify. I don't think the evidence suggests that all companies should pay absolute top of market. That's not what I'm suggesting. Many companies can't afford that and offer other compensating factors to lure developers. For example, a desirable company that makes amazing products might be able to get away with paying closer to the market average because of the prestige and excitement of working there.

The point is not that you have to be at 99%. The point is to use the market value of an individual's skills as your index for compensation adjustments. When it goes up, you raise accordingly. When it flatlines or goes down, well, I'm not sure what Lear does. I certainly wouldn't lower salaries. I'd just stop giving raises until the market value is above an individual's current pay. I'd be curious to hear what others think.

rx rxui akavache ghfw

GitHub for Windows (often abbreviated to GHfW) is a client WPF application written in C#. I think it's beautiful.

This is a credit to our designers. I'm pretty sure if I had to design it, it would look like this:

wgetgui-screenshot

To keep our code maintainable and testable, we employ the Model-View-ViewModel pattern (MVVM). To keep our app responsive, we use Reactive Extensions (Rx) to help us make sense of all the asynchronous code.

ReactiveUI (RxUI) combines the MVVM pattern with Reactive Extensions to provide a powerful framework for building client and mobile applications. The creator of RxUI, Paul Betts, suffers through the work to test it on a huge array of platforms so you don't have to. Seriously, despite all his other vices, this cross-platform support alone makes this guy deserve sainthood. And I don't just say that because I work with him.
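To make that concrete, here's a minimal sketch of the shape of an RxUI view model. This isn't GHfW code; the class and property names are made up for illustration, and the exact APIs vary a bit by ReactiveUI version:

using System.Reactive.Linq;
using ReactiveUI;

// A hypothetical view model: CanSearch recalculates whenever SearchText changes.
public class SearchViewModel : ReactiveObject
{
    string searchText;
    readonly ObservableAsPropertyHelper<bool> canSearch;

    public SearchViewModel()
    {
        // WhenAnyValue turns property change notifications into an observable,
        // and ToProperty pipes the result into a read-only output property.
        canSearch = this.WhenAnyValue(x => x.SearchText)
            .Select(text => !string.IsNullOrWhiteSpace(text))
            .ToProperty(this, x => x.CanSearch);
    }

    public string SearchText
    {
        get { return searchText; }
        set { this.RaiseAndSetIfChanged(ref searchText, value); }
    }

    public bool CanSearch
    {
        get { return canSearch.Value; }
    }
}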

It can be tough to wrap your head around Reactive Extensions, and by extension ReactiveUI, when you start out. As with any new technology, there are some pitfalls you fall into as you learn. Over time, we've learned some hard lessons by failing over and over again as we've built GHfW. All those failures are interspersed with an occasional nugget of success where we learn a better approach.

Much of this knowledge is tribal in nature. We tell stories to each other to remind each other of what to do and what not to do. However, that's fragile and doesn't help anyone else.

So we're making a more concerted attempt to record that tribal knowledge so others can benefit from what we learn and we can benefit from what others learned. To that end, we've made our ReactiveUI Design Guidelines public.

It's a bit sparse, but we hope to build on it over time as we learn and improve. If you use ReactiveUI, I hope you find it useful as well.

Also, if you use Akavache, we have an even sparser design guideline. Our next step is to add a WPF-specific guideline soon.

vs vsix dev encouragement

Recently I wrote what many consider to be the most important Visual Studio Extension ever shipped - Encourage for Visual Studio. It was my humble attempt to make a small corner of the world brighter with little encouragements as folks work in Visual Studio. You can get it via the Visual Studio Extension Manager.

But not everyone has a sunny disposition like I do. Some folks want to watch the world burn. What they want is Discouragements.

Well, an idiot might write a whole other Visual Studio extension with a set of discouragements. I may be many things, but I am no idiot. This problem is better solved by allowing users to configure the set of encouragements to be anything they want.

And that's what I did. I added an Options pane to allow users to configure the set of encouragements. It turned out to be a more confusing ordeal than I expected. But with some help from Jared Parsons, I may now present to you, discouragements!

Encourage options

So if you're of the masochistic inclination, you can treat yourself to custom discouragements all day long if you so choose.

Discouragement in use

As you can see from the screenshot, it supports native emoji! If you want these for yourself, I posted them in a gist.

Challenges and Travails

So why was this challenging? Well, like many things with development platforms, doing the basic thing is really easy, but when you want to deviate, things get hard.

If you follow the Walkthrough: Creating an Options Page you'll be able to add settings to your Visual Studio extension pretty easily. Using this approach, you can even rely on Visual Studio to generate a properties UI for you.

basic options
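For instance, a bare-bones options page in that style is just a class that derives from DialogPage and exposes public properties. This is a hypothetical sketch, not the walkthrough's exact code:

using System.ComponentModel;
using Microsoft.VisualStudio.Shell;

// Hypothetical sketch: public properties on a DialogPage show up in the
// auto-generated property grid, organized by these attributes.
public class BasicOptionsPage : DialogPage
{
    [Category("Encourage")]
    [DisplayName("Encouragement")]
    [Description("An encouragement to display when you save.")]
    public string Encouragement { get; set; }
}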

But that's pretty rudimentary.

What I wanted was very simple: a multi-line text box that let you type in or paste an encouragement per line. So I derived from DialogPage, as you do, and created a WPF user control with a TextBox. I added the user control to an ElementHost, a Windows Forms control that can host a WPF control, because, apparently, the Options dialog still hosts Windows Forms controls.

This approach was easy enough, but the text box didn't accept any of my input. I ran into the same problem this person writes about on StackOverflow.

I could cut and paste into the TextBox, but I couldn't type anything. That's not very useful.

I wasn't interested in overriding WndProc mainly because I feel I shouldn't have to. Instead I gave up on WPF, and ported it over to a regular Windows Forms user control. That allowed me to type in the textbox, but if I hit the Enter key, instead of adding a newline, the OK button stole it. So I couldn't actually add more than one encouragement.

UIElementDialogPage

Thankfully, Jared pointed me to the UIElementDialogPage.

If you want to provide a WPF User Control for your Visual Studio Extension, derive from UIElementDialogPage and not DialogPage like all the samples demonstrate!

It does all the necessary WndProc magic under the hood for you. Note that it was introduced in Visual Studio 2012, so if you take a dependency on it, your extension won't work in Visual Studio 2010. Live in the present, I always say.
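Here's a minimal sketch of what that looks like (hypothetical code, not Encourage's actual options page):

using System.Windows;
using System.Windows.Controls;
using Microsoft.VisualStudio.Shell;

// Hypothetical sketch: UIElementDialogPage hosts whatever WPF element
// you return from Child, with the keyboard handling taken care of.
public class EncourageOptionsPage : UIElementDialogPage
{
    TextBox textBox;

    protected override UIElement Child
    {
        get
        {
            // AcceptsReturn lets the Enter key insert a newline in the TextBox.
            return textBox ?? (textBox = new TextBox { AcceptsReturn = true });
        }
    }
}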

Storing Settings

The other thing I learned is that AppSettings is not the place to save your extension's settings. As Jared explained,

The use of application settings is not version safe in a VSIX. The location of the stored setting file path in part includes the version string and hashes of the executable. When Visual Studio installs an official update these values change and as a consequence change the setting file path. Visual Studio itself doesn't support the use of application settings hence it makes no attempt to migrate this file to the new location and all information is essentially lost.

The supported method of storing settings is the WritableSettingsStore. It's very similar to application settings and easy enough to access via an extension method on SVsServiceProvider:

// Gets the user-scoped settings store where extension settings belong.
public static WritableSettingsStore GetWritableSettingsStore(this SVsServiceProvider vsServiceProvider)
{
    var shellSettingsManager = new ShellSettingsManager(vsServiceProvider);
    return shellSettingsManager.GetWritableSettingsStore(SettingsScope.UserSettings);
}
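With that extension method in hand, reading and writing a value looks something like the following sketch. The collection and property names here are made up, and serviceProvider is an SVsServiceProvider you've obtained from the shell:

var settingsStore = serviceProvider.GetWritableSettingsStore();

// A collection must exist before you can write properties into it.
if (!settingsStore.CollectionExists("Encourage"))
{
    settingsStore.CreateCollection("Encourage");
}

settingsStore.SetString("Encourage", "Encouragements", "Nice job!\nWay to go!");
string encouragements = settingsStore.GetString("Encourage", "Encouragements");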

If this is interesting to you, I encourage (tee hee) you to read through the pull request that adds settings to Encourage. You can read through the commits to watch me flailing around, or you can read the final diff to see what changes I had to make.

PS: If you liked this post follow me on Twitter for interesting links and my wild observations about pointless drivel

git github

GitHub Flow is a Git workflow with a simple branching model. The following diagram of this flow is from Zach Holman's talk on How GitHub uses GitHub to build GitHub.

github-branching

You are now a master of GitHub flow. Drop the mic and go release some software!

Ok, there are probably a few more details to understand than that diagram conveys. The basic idea is that new work (such as a bug fix or new feature) is done in a "topic" branch off of the master branch. At any time, you should feel free to push the topic branch and create a pull request (PR). A pull request is a discussion around some code and not necessarily the completed work.

At some point, the PR is complete and ready for review. After a few rounds of review (as needed), either the PR gets closed or someone merges the branch into master and the cycle continues. If the reviews have been respectful, you may even still continue to like your colleagues.

It's simple, but powerful.

Over time, my laziness spurred me to write a set of Git aliases that streamline this flow for me. In this post, I share these aliases and some tips on writing your own. These aliases start off simple, but they get more advanced near the end. The advanced ones demonstrate some techniques for building your own very useful aliases.

Intro to Git Aliases

An alias is simply a way to add a shorthand for a common Git command or set of Git commands. Some are quite simple. For example, here's a common one:

git config --global alias.co checkout

This sets co as an alias for checkout. If you open up your .gitconfig file, you can see this in a section named alias.

[alias]
    co = checkout

With this alias, you can checkout a branch by using git co some-branch instead of git checkout some-branch. Since I often edit aliases by hand, I have one that opens the gitconfig file with my default editor.

    ec = config --global -e

These sorts of simple aliases only begin to scratch the surface.

GitHub Flow Aliases

Get my working directory up to date.

When I'm ready to start some work, I always do the work in a new branch. But first, I make sure that my working directory is up to date with the origin before I create that branch. Typically, I'll want to run the following commands:

git pull --rebase --prune
git submodule update --init --recursive

The first command pulls changes from the remote. If I have any local commits, it'll rebase them to come after the commits I pulled down. The --prune option removes remote-tracking branches that no longer exist on the remote.

This combination is so common, I've created an alias up for this.

    up = !git pull --rebase --prune $@ && git submodule update --init --recursive

Note that I'm combining two git commands. I can use the ! prefix to execute everything after it in the shell, which is why I needed to spell out the full git commands. Using the ! prefix also allows me to use any command in the alias, not just git commands.

Starting new work

At this point, I can start some new work. All new work starts in a branch so I would typically use git checkout -b new-branch. However I alias this to cob to build upon co.

    cob = checkout -b

Note that this simple alias is expanded in place. So to create a branch named "emoji-completion" I simply type git cob emoji-completion which expands to git checkout -b emoji-completion.

With this new branch, I can start writing the crazy codes. As I go along, I try and commit regularly with my cm alias.

    cm = !git add -A && git commit -m

For example, git cm "Making stuff work". This adds all changes, including untracked files, to the index and then creates a commit with the message "Making stuff work".

Sometimes, I just want to save my work in a commit without having to think of a commit message. I could stash it, but I prefer to write a proper commit which I will change later.

That's what git save and git wip are for. The first one adds all changes, including untracked files, and creates a commit. The second one only commits tracked changes. I generally use the first one.

    save = !git add -A && git commit -m 'SAVEPOINT'
    wip = !git add -u && git commit -m "WIP" 

When I return to work, I'll just use git undo which resets the previous commit, but keeps all the changes from that commit in the working directory.

    undo = reset HEAD~1 --mixed

Or, if I merely need to modify the previous commit, I'll use git amend

    amend = commit -a --amend

The -a adds any modifications and deletions of existing files to the commit but ignores brand new files. The --amend launches your default commit editor (Notepad in my case) and lets you change the commit message of the most recent commit.

A proper reset

There will be times when you explore a promising idea in code and it turns out to be crap. You just want to throw your hands up in disgust and burn all the work in your working directory to the ground and start over.

In an attempt to be helpful, people might recommend: git reset HEAD --hard.

Slap those people in the face. It's a bad idea. Don't do it!

That's basically a delete of your current changes without any undo. As soon as you run that command, Murphy's law dictates you'll suddenly remember there was that one gem among the refuse you don't want to rewrite.

Too bad. If you reset work that you never committed, it is gone for good. Hence, the wipe alias.

    wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard

This commits everything in my working directory and then does a hard reset to remove that commit. The nice thing is, the commit is still there, but it's just unreachable. Unreachable commits are a bit inconvenient to restore, but at least they are still there. You can run the git reflog command and find the SHA of the commit if you realize later that you made a mistake with the reset. The commit message will be "WIPE SAVEPOINT" in this case.
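For example, a recovery session might look something like this, where the SHA placeholder is whatever git reflog shows for the "WIPE SAVEPOINT" entry:

git reflog             # find the SHA of the "WIPE SAVEPOINT" commit
git cherry-pick <sha>  # re-apply that commit to the current branch
git undo               # then unwrap it back into the working directory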

Completing the pull request

While working on a branch, I regularly push my changes to GitHub. At some point, I'll go to github.com and create a pull request, people will review it, and then it'll get merged. Once it's merged, I like to tidy up and delete the branch via the Web UI. At this point, I'm done with this topic branch and I want to clean everything up on my local machine. Here's where I use one of my more powerful aliases, git bdone.

This alias does the following.

  1. Switches to master (though you can specify a different default branch)
  2. Runs git up to bring master up to speed with the origin
  3. Deletes all branches already merged into master using another alias, git bclean

It's quite powerful and useful and demonstrates some advanced concepts of git aliases. But first, let me show git bclean. This alias is meant to be run from your master (or default) branch and does the cleanup of merged branches.

bclean = "!f() { git branch --merged ${1-master} | grep -v " ${1-master}$" | xargs -r git branch -d; }; f"

If you're not used to shell scripts, this looks a bit odd. What it's doing is defining a function and then calling that function. The general format is !f() { /* git operations */; }; f. We define a function named f that encapsulates some git operations, and then we invoke the function at the very end.

What's cool about this is we can take advantage of arguments to this alias. In fact, we can have optional parameters. For example, the first argument to this alias can be accessed via $1. But suppose you want a default value for this argument if none is provided. That's where the curly braces come in. Inside the braces you specify the argument index ($0 returns the whole script) followed by a dash and then the default value.

Thus when you type git bclean, the expression ${1-master} evaluates to master because no argument was provided. But if you're working on a GitHub Pages repository, you'll probably want to call git bclean gh-pages, in which case the expression ${1-master} evaluates to gh-pages as that's the first argument to the alias.
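This default-value expansion is plain shell parameter substitution, so you can see it in action outside of git:

f() { echo "merged into ${1-master}"; }
f            # prints: merged into master
f gh-pages   # prints: merged into gh-pages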

Let's break down this alias into pieces to understand it.

git branch --merged ${1-master} lists all the branches that have been merged into the specified branch (or master if none is specified). This list is then piped into the grep -v " ${1-master}$" command. Grep prints out lines matching the pattern, and the -v flag inverts the match, so this lists all merged branches except ${1-master} itself. Finally this gets piped into xargs, which executes git branch -d for each branch name it receives on standard input.

In other words, it deletes every branch that's been merged into master except master. I love how we can compose these commands together.

With bclean in place, I can compose my git aliases together and write git bdone.

    bdone = "!f() { git checkout ${1-master} && git up && git bclean ${1-master}; }; f"

I use this one all the time when I'm deep in the GitHub flow. And now, you too can be a GitHub flow master.

The List

Here's a list of all the aliases together for your convenience.

[alias]
    co = checkout
    ec = config --global -e
    up = !git pull --rebase --prune $@ && git submodule update --init --recursive
    cob = checkout -b
    cm = !git add -A && git commit -m
    save = !git add -A && git commit -m 'SAVEPOINT'
    wip = !git add -u && git commit -m "WIP" 
    undo = reset HEAD~1 --mixed
    amend = commit -a --amend
    wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard
    bclean = "!f() { git branch --merged ${1-master} | grep -v " ${1-master}$" | xargs -r git branch -d; }; f"
    bdone = "!f() { git checkout ${1-master} && git up && git bclean ${1-master}; }; f"

Credits and more reading

It would be impossible to source every git alias I use as many of these are pretty common and I've adapted them for my own needs. However, here are a few blog posts that provided helpful information about git aliases that served as my inspiration. I also added a couple posts about how GitHub uses pull requests.

PS: If you liked this post follow me on Twitter for interesting links and my wild observations about pointless drivel

PPS: For Windows users, these aliases don't require using Git Bash. They work in PowerShell and CMD when msysgit is in your path. For example, if you install GitHub for Windows and use the GitHub Shell, these all work fine.

personal github

GitHub is a great tool for developers to work together on software. Though its primary focus is software, a lot of people find it useful for non-software projects. For example, a co-worker of mine has a repository where he tracks a pet project:

I bought a crappy 1987 Honda XR600 and I am going to turn it into something awesome

A while back, Wired ran an article about a man who renovated his home on GitHub. He even has a 3-D model of his bespoke artisanal bathroom plug. Send a pull request why dontcha?

bathroom-plug

Another person dedicated his genome to the public domain on GitHub. Sadly, genetic technology is not quite at the point where he can merge a pull request all the way to his own body. But who knows? Someday it might be possible to hit that green Merge button and instantly sprout wings. Of course, the downside of such genetic tinkering is you'd need a new wardrobe. I'm not sure Gap sells shirts with wing holes.

Just the other day, I read a blog post about a company that uses GitHub for everything.

  • Internal wiki
  • Recruitment process
  • Day-to-day operations
  • Marketing efforts
  • And a lot more ...

As for me...

Meanwhile, I used GitHub to save my marriage. Ok, that might be a tiny bit of hyperbole for dramatic effect (right honey? Right?!!!).

Let me back up a moment to provide some context.

One of the central points of David Allen’s Getting Things Done system is that all the lists of stuff we hold in our head use up “psychic RAM” and that creates stress. This “psychic weight” drags us down and wears on our psyche.

When you’re a family with a house and kids, you have a lot of lists and thus need a lot of mental RAM. Things break down in the house all the time. You have to drag the kids to a myriad of events and appointments. You need to attend to recurring chores. If you’re not on top of these things, they fall through the cracks.

There’s two common approaches to deal with this.

The first is to always think about everything that needs to be done and bear the full psychic weight and stress of always being on point.

The second is what I call the squeaky wheel approach. You remain blissfully ignorant of all these demands until such a time comes when something is so bad that it forces your attention. Or, as is often the case, it gets so bad for someone else (after all, you're blissfully ignorant) that they make it a priority for you. A lot of things tend to get dropped with this approach that shouldn’t be dropped.

The second approach carries with it much less psychic weight, but it isn't very respectful to a person who employs the first approach, and it leads to a lot of interpersonal tension.

I’ll let you guess which approach I tend to employ.

Part of the reason I employ the squeaky wheel approach is that I have a terrible memory and I'm quite good at not noticing things that need to get done. Worse, despite my ignorance, things were getting done and I just didn’t even notice. This reinforced the belief that everything was fine.

So my wife and I had a discussion about this. I can't will myself to a better memory or a better ability to notice things that need to get done. At the same time, while I suck at this stuff at home, I tend to be much more conscientious at work.

So I proposed an idea. What if we ran our household chores like a software project? By that, I mean a well-run software project, not your typical past-deadline, over-budget death march. At work, I run everything through GitHub issues. So let's try that at home!

The goal is that we should no longer maintain all these lists in our head. Instead, when we notice something that needs to be done, we create an issue and we're free to forget it right then and there because we can trust the process. Every week, we review the list together and try to complete what issues we can. It relieves a lot of mental stress to rely on the system instead of our own fallible memories.

I created a private repository for our household. The following screenshot shows an example of a recent issue. Notice that I take advantage of the wonderful Task Lists feature of GitHub Flavored Markdown. That feature has been a godsend.

I broke down the task of cleaning out the dead bugs from the light fixtures into a list of tasks. I decided to take on this one and assigned it to myself.

A different kind of bug report for GitHub issues

The work involved to close an issue sometimes leads to the need to create more issues. In this case, I learned a valuable lesson - don't use a screwdriver to put a glass light cover back on. I was fortunate that the resulting explosion of glass didn't hurt anyone.

Broken Lights

My wife and I have tried Trello and other systems in the past, but this one has been very successful for us and she's been very happy with the results.

I also use Markdown documents in the repository to track kids' meal ideas, lists of babysitters, weekend fun ideas, etc. It's become even better now that we have rendered prose diffs. Our household GitHub repository helps me track just about everything related to our household. What interesting ways do you use GitHub for non-software projects?

UPDATE 2014-08-07 I just learned about a service called GitHub Reminders.

Get an email reminder by creating a GitHub issue comment with an emoji and a natural language date. Login and signup with your GitHub.com account to get started.

This sounds like it could be very useful for a household issue tracker.

vs vsix dev encouragement

I love to code as much as the next developer. I even professed my love in a keynote once.
And judging by the fact that you're reading this blog, I bet you love to code too.

But in the immortal words of that philosopher, Pat Benatar,

Love is a battlefield.

There are times when writing code is drudgery. That love for code becomes obsession and leads to an unhealthy relationship. Or worse, there are times when the thrill is gone and the love is lost. You're just going through the motions.

In those dark times, bathed in the soft glow of your monitor, engrossed in the rhythmic ticky tacka sound of your keyboard, a few kind words can make a big difference. And who better to give you those kind words than your partner in crime - your editor.

With that, I give you ENCOURAGE. It's a Visual Studio extension that provides a bit of encouragement every time you save your document. Couldn't we all use a bit more whimsy in our work?

encouragement light

And it's theme aware!

encouragement dark

hWhat?!

Yes, it's silly. But try it out and tell me it doesn't put an extra smile on your face during your day.

This wasn't my idea. My co-worker Pat Nakajima came up with this idea and built a TextMate extension to do this. He showed it to me and I instantly fell in love. With the idea. And Pat, a little.

Apparently it's very easy to do this in TextMate. Here's the full source code:

#!/usr/bin/env ruby -wU

puts ['Nice job!', 'Way to go!', 'Wow, nice change!'].sample

It's a bit deceiving because most of the work in getting this to work in TextMate is configuration.

encourage-nakajima

As for Visual Studio, it takes quite a bit more work. You can find the source code on GitHub under an MIT license.

The code hooks into the DocumentSaved event on the DTE and then cleverly (or hackishly depending on how you look at it) uses an IIntellisenseController combined with an ISignatureHelpSource to provide the tool tip.

Here's the relevant code snippet from the EncourageIntellisenseController class:

public EncourageIntellisenseController(
  ITextView textView,
  DTE dte,
  EncourageIntellisenseControllerProvider provider)
{
  this.textView = textView;
  this.provider = provider;

  // Subscribe to save notifications from the DTE.
  this.documentEvents = dte.Events.DocumentEvents;
  documentEvents.DocumentSaved += OnSaved;
}

void OnSaved(Document document)
{
  // Show the encouragement as a signature help tooltip at the caret.
  var point = textView.Caret.Position.BufferPosition;
  var triggerPoint = point.Snapshot
    .CreateTrackingPoint(point.Position, PointTrackingMode.Positive);
  if (!provider.SignatureHelpBroker.IsSignatureHelpActive(textView))
  {
    session = provider.SignatureHelpBroker
      .TriggerSignatureHelp(textView, triggerPoint, true);
  }
}

Many thanks to Pat Nakajima for the idea and Jared Parsons for his help with the Visual Studio extensibility parts. I'm still a n00b when it comes to extending Visual Studio and this silly project has been one fun way to try and get a handle on things.

Get Involved!

As of today, this only supports Visual Studio 2013 because of my ineptitude and laziness. I welcome contributions to make it support more platforms.

Parting Thoughts

On the positive side, when you need a specific service, it's nice to be able to slap an [Import] attribute and magically have the type available. The extensibility of Visual Studio appears to be nearly limitless.

On the downside, it's ridiculously difficult to write extensions to do some basic tasks. Yes, a big part of it is the learning curve. But when you compare the TextMate example to what I had to do here, there's clearly some middle ground between simplicity and power.

Also, the documentation is often quite good, but wrong in places. For example, this Walkthrough notes:

Make sure that the Content heading contains a MEF Component content type and that the Path is set to QuickInfoTest.dll.

That might have been true with the old VSIX manifest format, but is not correct for the new one. So none of my MEF imports worked until I added this to the Assets element in my .vsixmanifest file.

<Asset
  Type="Microsoft.VisualStudio.MefComponent"
  d:Source="Project"
  d:ProjectName="%CurrentProject%"
  Path="|%CurrentProject%|" />

I'm not really sure why that's just not there by default.

There are certainly a lot of extensions in the Visual Studio Extension Gallery, so I would still consider the extensibility model a success for the most part. But there could be a lot more extensions in there. More people should be able to extend the IDE for their own needs without having to take a graduate course in Visual Studio Extensibility.

github emoji octokit

I love emojis. Recently, I had the fun task of adding emoji auto completion to the latest GitHub for Windows release, among other contributions.

In this post, I want to walk through how to use Octokit.NET to download all the emojis that GitHub supports.

The process is pretty simple: we make a request to the Emojis API to get the list of emojis and then download each image.

The first example uses the vanilla Octokit package. The second example uses the Octokit.Reactive package. Both examples pretty much accomplish the same thing, but the Rx version downloads emojis four at a time in parallel instead of one by one.

All the code for this example is available in the haacked/EmojiDownloader repository on GitHub.

The Code

To get started, create a console project and install the Octokit.NET package:

Install-Package Octokit

The first step is to create an instance of the GitHubClient. We don't have to provide any credentials to call the Emojis API.

var githubClient = new GitHubClient(
    new ProductHeaderValue("Haack-Emoji-Downloader"));

The string in the ProductHeaderValue is used to form a User Agent for the request. The GitHub API requires a valid user agent.

Now we can request the list of emojis.

var emojis = await githubClient.Miscellaneous.GetEmojis();

This returns an IReadOnlyList&lt;Emoji&gt;.

Now we can iterate through each one and use an HttpClient to download each image with the following method.

public static async Task DownloadImage(Uri url, string filePath)
{
    Console.WriteLine("Downloading " + filePath);

    using (var httpClient = new HttpClient())
    {
        using (var request = new HttpRequestMessage(HttpMethod.Get, url))
        {
            var response = await httpClient.SendAsync(request);

            using (var responseStream = await response.Content.ReadAsStreamAsync())
            using (var writeStream = new FileStream(filePath, System.IO.FileMode.Create))
            {
                await responseStream.CopyToAsync(writeStream);
            }     
        }
    }
}

Here's the console application's Main method, which puts all this together. Note that I wrap everything in a Task.Run so I can use the async and await keywords.

static void Main(string[] args)
{
    string outputDirectory = args.Any()
        ? String.Join("", args)
        : Path.GetDirectoryName(Assembly.GetExecutingAssembly().CodeBase);

    Debug.Assert(outputDirectory != null, "The output directory should not be null.");

    Task.Run(async () =>
    {
        var githubClient = new GitHubClient(new ProductHeaderValue("Haack-Emoji-Downloader"));
        var emojis = await githubClient.Miscellaneous.GetEmojis();
        foreach (var emoji in emojis)
        {
            string emojiFileName = Path.Combine(outputDirectory, emoji.Name + ".png");
            await DownloadImage(emoji.Url, emojiFileName);
        }

    }).Wait();
}

The first part of the method sets up the output directory. By default, it will write the emoji images to wherever the program EXE is located. But you can also specify a path as the sole argument to the program.

Let's get Reactive!

If you prefer to use the Reactive version of Octokit.NET, the following example will get you started.

Install-Package Octokit.Reactive

Instead of the GitHubClient we'll create an ObservableGitHubClient.

var githubClient = new ObservableGitHubClient(
    new ProductHeaderValue("Haack-Reactive-Emoji-Downloader"));            

Now we can call the equivalent method, but we have the benefit of using the Buffer method.

githubClient.Miscellaneous.GetEmojis()
    .Buffer(4) // Downloads 4 at a time.
    .Do(group => Task.WaitAll(group
        .Select(emoji => new
        {
            emoji.Url,
            FilePath = Path.Combine(outputDirectory, emoji.Name + ".png")
        })
        .Select(download => DownloadImage(download.Url, download.FilePath)).ToArray()))
    .Wait();

The Buffer method groups the sequence of emojis into groups of four so we can kick off four downloads at a time and wait for each group to finish before requesting the next four.

The reason we don't just request them all at the same time is we don't want to flood the network card or local network.

UPDATE: My buddy Paul Betts suggests an even better, more Rx-y approach in the comments.

githubClient.Miscellaneous.GetEmojis()
    .Select(emoji => Observable.FromAsync(async () =>
    {
        var path = Path.Combine(outputDirectory, emoji.Name + ".png");
        await DownloadImage(emoji.Url, path);
        return path;
    }))
    .Merge(4)
    .ToArray()
    .Wait();

We'll use the Merge method instead of Buffer to throttle requests to four at a time.

And with that, you'll have 887 (as of right now) emoji PNG files downloaded to disk.

Other Octokit.NET blog posts

ghfw github

Today we released GitHub for Windows 2.0 after a long development cycle. You can read some details about the release on the GitHub blog.

The team worked very hard on this release while simultaneously continuing to release improvements to the GitHub for Windows 1.X series. I've been using it for a while and really like how much better it integrates into my workflow.

I won't reiterate all the changes we've made. Instead I'll focus on one feature I worked on that is clearly the most important of all the changes: emoji autocomplete.

emoji

When you create a commit message, you can now invoke the emoji auto complete drop down using the : character just as you would on the github.com website. This brings a new level of expressiveness to your commit messages.

In fact, we've started to establish our own conventions of prefixing certain commits with an emoji as you can see in the screenshot. For example:

  • :lipstick: indicates a commit that's primarily just refactoring
  • :fire: indicates removing code.
  • :money: indicates a developer is very proud of her work.

commit messages

To implement this, I started with the AutocompleteTextBox control from the WPF Toolkit and then proceeded to strip 90% of the code away and then rewrote most of it to have observable properties (Yay Rx!) instead of events. There's very little left of the original code, but it was a nice head start on getting the behaviors correct.
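To give a flavor of what "observable properties instead of events" means, here's a hedged sketch (not the actual GHfW code) that wraps a WPF TextBox's TextChanged event in an observable so it can be composed with Rx operators:

using System;
using System.Reactive.Linq;
using System.Windows.Controls;

public static class TextBoxObservables
{
    // Turns the TextChanged event into a stream of text values,
    // throttled so downstream consumers aren't flooded per keystroke.
    public static IObservable<string> ObserveText(this TextBox textBox)
    {
        return Observable
            .FromEventPattern<TextChangedEventArgs>(textBox, "TextChanged")
            .Select(_ => textBox.Text)
            .Throttle(TimeSpan.FromMilliseconds(100));
    }
}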

While I'm personally excited about this feature, I do have to admit it might not be as important as the improved navigation that makes it easier to switch between projects while working.

But one of the big improvements that won't be quite visible to end-users is the improvements we made to the codebase. The refactorings will increase our velocity as we add improvements to the application. Enjoy!

aspnet

This post is sort of a continuation of my post on Microsoft's New Running Shoes.

The Importance of Backwards Compatibility

If anyone tells you that backwards compatibility isn't important, they're wrong. And in fact, if they use any software long enough, they'll tell you themselves. Another upgrade of some framework they depend on will break their application and they'll get really mad about it. I know because I've been on both sides of this river. I've shipped a framework that broke people who told me we should break compatibility and experienced the heat of their anger. Usually, when someone tells you breaking compatibility is fine, they mean as long as it doesn't affect them.

Microsoft is famous for its tenacious dedication to backwards compatibility. In his post, How Microsoft Lost the API War, Spolsky highlights this comment from Raymond Chen, famous for his stories of the crazy lengths Microsoft went to in order to maintain backwards compatibility.

Look at the scenario from the customer's standpoint. You bought programs X, Y and Z. You then upgraded to Windows XP. Your computer now crashes randomly, and program Z doesn't work at all. You're going to tell your friends, "Don't upgrade to Windows XP. It crashes randomly, and it's not compatible with program Z." Are you going to debug your system to determine that program X is causing the crashes, and that program Z doesn't work because it is using undocumented window messages? Of course not. You're going to return the Windows XP box for a refund. (You bought programs X, Y, and Z some months ago. The 30-day return policy no longer applies to them. The only thing you can return is Windows XP.)

I've been there. I've written applications for friends that I hope I never have to change again just to move them to a new server. I've also had times where I depended on some 3rd party code where I didn't have the source and the author was long gone. If that code breaks because of an operating system upgrade, I'm in a world of hurt. It's situations like these where Microsoft's adherence to backwards compatibility is a sanity saver.

Backwards Compatibility is a Tax

But there's another side to the backwards compatibility story. All of those benefits I mentioned have a cost. Backwards compatibility is a tax that creates significant drag on a team's agility and its ability to innovate. A long time ago I wrote a post that suggested this blind adherence to backwards compatibility was holding Microsoft back.

Wait, so now I've argued both for and against backwards compatibility. Does it sound like I want to have it both ways? Well of course I do! But good design is a series of trade-offs and good execution is knowing when to make one trade-off vs the other. Nobody said it would be easy and straightforward to compete in the software industry and give users what they need.

For example, many discussions about this topic miss another key consideration. Technical compatibility isn't the sole factor in backwards compatibility. For example, if a company isn't able to innovate and have its product stay relevant, it might need to remove investment in the product, or worse, go out of business. This also breaks backwards compatibility.

In this case, compatibility isn't broken by new versions of the product. It's broken by how the world continues to change around the now stagnant, unsupported version of the product. We've seen this happen: not long after Microsoft pulled support for Windows XP, a zero-day exploit was discovered. Microsoft relented and patched it, but makes no promise to patch the next zero day.

At this point, users of the product have to make a decision: switch, or stay and risk the exploits.

ASP.NET vNext

I believe this is the situation ASP.NET has found itself in over recent years. When I was part of the team several years ago, I worked on a new product called ASP.NET MVC. Though it was "new", it was really just a layer on top of the existing System.Web stack, which was 13 years old at the time.

This code had accumulated a LOT of cruft, and any change to it was a slow and tedious process that required a huge test effort on multiple operating systems. There were compatibility fixes so old we were quite convinced they paradoxically predated the advent of computers. There were even fixes whose code I'm not sure anyone understood, but we were afraid to change them nonetheless.

Around this time, my manager, Scott Hunter (henceforth known as "The Hu" to complement Scott Guthrie, who is known as "The Gu"), and I often daydreamed (as one does) about a complete rewrite of the stack. As a joke, I coined the name "ASP2.NET" as the moniker for this new stack. At the time, we were Don Quixotes dreaming the impossible dream. The disruption to existing customers would be too great. Backwards compatibility is monarch! It could never happen!

Don Quixote charging the windmills by Dave Winer CC BY-SA 2.0

But the world changed around Microsoft. Node.js and many other modern web frameworks, unencumbered by years of compatibility drag, exploded on the scene. These frameworks felt fresh and lightweight. Meanwhile, as I mentioned in my last post, Azure's business model created new incentives.

Azure provides an environment that is not limited to hosting .NET web applications. Azure makes money whether you host ASP.NET, NodeJs, or whatever. This is analogous to how the release of Office for iPad is a sign that Office will no longer help prop up Windows. Windows must live or die on its own merits.

In this new environment, ASP.NET was starting to show its age. Continue on its current course and it would risk complete irrelevance with the next generation of developers. It reached a crossroads where it had two possible strategies:

  1. Continue to invest in the existing stack and try to slow the bleeding as much as possible.
  2. Disrupt everything and build something new.

The first strategy is appealing. It makes existing customers feel comfortable and happy. Heck, it would be profitable for a very long time. But it's ultimately not sustainable. It hamstrings Microsoft's ability to make inroads with new developers not already on its stack. Also, eventually, companies switch from old technologies to newer platforms. It might be because they can't hire developers to work on the old platforms. Or it might be that the software that meets their new business needs is on newer platforms. Either way, what happens when they are ready to make this switch? What platform would they choose?

ASP.NET wants to be that next platform. It needs to be that next modern platform. It might rattle existing customers a bit, but that's a calculated risk. After all, if you're an existing customer, you know you have 10 years of support on the current stack. It goes by fast, but it's still a long time. You might be angry at having to make that switch at that point, but Microsoft's betting that might happen anyways over time and they're hoping to provide the next platform that you switch to. As Steve Jobs famously said,

If you don't cannibalize yourself, someone else will.

Microsoft wants to be the zombie, not the zombie food.

I left Microsoft a little under two years ago, but Scott Hunter stayed on and continued to execute on the impossible dream with his team and folks like David Fowler, Louis DeJardin, and others. That's why I'm pretty excited about ASP.NET vNext. It's not just another flavor-of-the-day framework from Microsoft. It represents a new modular approach that makes it easier to swap out the parts you don't like and keep the parts you do. And it's a sign that Microsoft is more and more taking a page from Steve Jobs' playbook to solve the Innovator's Dilemma:

My passion has been to build an enduring company where people were motivated to make great products. The products, not the profits, were the motivation. Sculley flipped these priorities to where the goal was to make money. It’s a subtle difference, but it ends up meaning everything.

microsoft aspnet oss

When Ballmer famously said, "Linux is a cancer that attaches itself in an intellectual property sense to everything it touches," it was fair to characterize Microsoft's approach to open source as hostile. But over time, forces within Microsoft pushed to change this attitude. Many groups inside of Microsoft continue to see the customer and business value in fostering, rather than fighting, OSS.

Under the leadership of Scott Guthrie, the ASP.NET and Azure teams were trailblazers in this area. They were not the only trailblazers, but they were influential ones. Disclaimer: I'm a former employee of these teams, so I am totally objective and devoid of any bias whatsoever.

While change carved its glacial path, its pace angered some who wanted to see more movement. The prevailing metaphor that folks like Scott Hanselman and I would use was this idea of "baby steps".

Here's a snippet from a post Scott wrote five years ago when we first released ASP.NET MVC 1.0 under a permissive open source license:

These are all baby steps, but more and more folks at The Company are starting to "get it." We won't rest until we've changed the way we do business.

Here's my use of the phrase in my notes about the release a year prior.

As I mentioned before, routing is not actually a feature of MVC which is why it is not included. It will be part of the .NET Framework and thus its source will eventually be available much like the rest of the .NET Framework source. It’d be nice to include it in CodePlex, but as I like to say, baby steps.

The point we tried to impress on people is that changing the momentum of a massive object (and a 90,000-person company is quite massive) takes a lot of small forces that over time sum up to a big force.

However, Microsoft's remarkable announcements this week around the next generation of ASP.NET have made it clear that they've dispensed with the baby steps and have put on their running shoes.

David Fowler summarizes some of the interesting changes in the next generation of ASP.NET, which I've summarized even further:

  • ASP.NET vNext builds on NuGet packages as the unit of reference instead of assemblies.
  • A Roslyn-based, hackable runtime compilation model.
  • Dependency Injection from the ground up.
  • No Strong-Naming! (See this discussion for the headache strong-naming has been)

But most exciting to me is that all of this is open source, accepts contributions, and is hosted on GitHub. This isn't a project that just targets .NET developers. This is a project that wants all web developers to take it seriously.

In other areas of Microsoft they released Microsoft Office for the iPad and made Windows free for small devices. It's definitely a new Microsoft.

How did this come about?

Well, breathless headlines would have you believe that Satya Nadella singlehandedly built a new Microsoft in his first three months. It makes for a good story, but it's clearly wrong. It's lazy thinking.

Look at this contribution graph for Project K's Runtime.

Project KRuntime commit

The initial commit was on November 7, 2013. Satya became CEO on February 4, 2014. Now I'm no math major (oh wait! I was a math major!), but I'm pretty sure February 2014 comes after November 2013. It's apparent this had been underway for a long time before Satya became CEO.

To be clear, I don't want to take anything away from Satya's importance to the new Microsoft. While this effort didn't start under him, he does create the right climate within Microsoft for this effort to thrive. His leadership and vision sets these new efforts up for success and that's a big deal. The appointment of Satya makes Microsoft a force to be reckoned with again.

But efforts like this started in a more grassroots fashion, albeit with the support of big hitters like the Gu. In large part, the focus on the Azure business paved the way for this to happen.

Azure provides an environment that is not limited to hosting .NET web applications. Azure makes money whether you host ASP.NET, NodeJs, or whatever. This is analogous to how the release of Office for iPad is a sign that Office will no longer help prop up Windows. Windows must live or die on its own merits.

In the same way, ASP.NET must compete on its own merits and to do so requires drastic changes.

In a follow-up post I'd love to delve a little more into the history of the ASP.NET changes and my thoughts on what this means for existing customers and backwards compatibility. If you find this sort of analysis interesting, let me know in the comments. Otherwise I'll go back to blogging about fart jokes and obscure code samples or something.

blog github pages

Software collaboration goes beyond just working on the code. In addition to writing a lot of code, software involves writing a lot of words. Prose shows up in documentation, tutorials, blog posts, and so on.

GitHub Pages is a great platform for sharing and working together on those words, and it keeps getting better. This is why I host my blog using Jekyll on GitHub Pages. So far, I really love it.

One recent improvement that Reginald “Raganwald” Braithwaite wrote about is the rendered prose diffs for markdown. This provides a nice prose-centric way of looking at changes to a written document.

Another recent improvement that Ben "I am now a lawyer" Balter wrote about is the ability to surface GitHub metadata in your site without having to make API calls. Since GitHub hosts your GitHub Pages site, this metadata is directly available.

This opens up some cool opportunities.

Give credit to your blog collaborators

Every blog post in my blog has an edit link under the title. Even this one! That link takes you to an editor that lets you edit the blog post and submit your changes as a pull request. It's so easy anyone can do it!

I really appreciate when folks send me corrections. So much so that I just added a page to give credit to those who have collaborated with me on my blog. This uses the new metadata that's available to every GitHub Pages site.

GitHub Pages Demo Repository

If you want to understand how this works, I set up a minimal GitHub Pages Demo repository here. It includes a bare minimum Jekyll site with a page that shows how to display GitHub metadata. The goal of this page is to help you get started with your own content.

You can visit the actual rendered page here. It's pretty butt ugly right now. Why not help make it pretty and get your name on the page?!

The Repository metadata on GitHub Pages page provides more information about what GitHub metadata is available to your blog.

Whether you are building a documentation site for your repository, a personal blog that shows off your GitHub repositories, or whatever, this metadata is extremely useful.

personal comments edit

The screaming was unexpected.

The field trip guide passed out over-sized plastic models of pollinators to the first graders. A beetle, a bee, a humming bird. But as the critters, passed from hand to hand, approached young Tiberius (name changed to protect the innocent), he began a full-scale, nuclear-grade meltdown.

As a chaperone, my first thought was this kid has lungs! My next thought was this kid is really leaning into this joke of pretending to be afraid. But it dawned on me that the kid was no comedian. His screams and eyes wide with terror were the real thing.

the shire

This happened last week as I chaperoned my son's first-grade field trip to the Bellevue Botanical Gardens. I'm still not sure why the school would just trust me with that responsibility. Perhaps the fact my children are still alive so far qualifies me. Not to mention that there were four other parents and two adult guides - a total of seven adults for a group of fourteen kids. Still outnumbered, but it gave us a fighting chance.

The theme of the trip was "flower power." Kids are already well-versed in passive resistance and non-violent protest (and violent protest, for that matter), so this trip focused instead on the constituent parts of the flower and how all of them work together with pollinators to propagate the species.

At the moment, these plastic pollinators were not spreading pollen but terrorizing a poor kid.

Perhaps it was fortunate that I was a chaperone as Tiberius was my son's friend and had been over to my house a few times. A friendly face could be comforting, but he had to make do with mine. I waved him over to stand next to me and the guide gave him a pinwheel representing wind. Wind is a great pollinator. It pollinates cars and patio furniture with that sticky yellow crap.

Tiberius calmed down and all seemed well. I felt for the kid, but I was also disconcerted by some thoughts I had at the time that I'm not proud of.

To explain, I'll need the help of Louis CK. In his HBO special, Oh My God, there's this great bit where Louis describes the sort of cognitive dissonance I experienced.

Everybody has a competition in their brain of good thoughts and bad thoughts. Hopefully the good thoughts win. For me I always have both. I have the thing I believe, the good thing. That's the thing I believe. And then there's this thing. And I don't believe it, but it is there.

He describes a category of his brain called "Of course! But maybe" and provides several examples. They're pretty harsh, so you've been warned. Here's his first one,

Of course, of course, children who have nut allergies need to be protected. Of course! We have to segregate their food from nuts. Have their medication available at all times. And anybody who manufactures or serves food need to be aware of deadly nut allergies. Of course!

He goes on.

But maybe. Maybe, if touching a nut kills you, you're supposed to die.

Ouch!

Of course not! Of course not! Of course not. Jesus! I have a nephew who has that. I'd be devastated if something happened to him. But maybe...

Of course I felt compassion for Tiberius. But there was this other thought. But maybe this kid needs to get a grip and toughen up. It was a momentary knee-jerk reaction. Of course he's only six years old, but maybe... Later on in the day, something happened that gave me a different insight into this idea.

After exploring more of the garden, we ended up at the visitor center, where another guide, a former teacher, cross-examined the children about what they'd learned on the trip as they sat cross-legged on the floor.

As they reviewed pollinators, she held up a large photo of a bee and Tiberius again lost his shit (thankfully not in the literal sense). But maybe popped into my head as I stood there slack-jawed with the other adults. I braced for the worst, expecting the other kids to snigger at him or tease him outright.

Before any of us could act, the boy in front of Tiberius sat up straighter to block his view of the photo. The girl next to that boy scooted over as well, creating a little first-grader phalanx to shield Tiberius's view while he buried his head in fear in the boy's back.

With his hand placed softly on Tiberius's back, the boy behind Tiberius quietly gave him directions, "Don't look now" and "Ok, you can look now," as the teacher cycled through photos.

At that age, I remember often being on the receiving end of such teasing. But these kids surprised me. They banded together to care for one of their own with compassion while I was having my shameful but maybe thought.

I rarely cry, but I nearly lost it right then and there. I'm ashamed of my own initial gut reaction. Our culture is pretty fucked up when you think about it. We tell boys it's not ok to be afraid, that it's not ok to cry, as if this were some universal truth about manhood.

But a cursory reading of historical texts demonstrates this is a cultural construct, and an unhealthy one at that. The Old Testament is rife with men weeping. Old Japanese and European epics are full of the "manliest" heroes crying buckets.

In his book Crying: The Natural and Cultural History of Tears, Tom Lutz notes,

In the twelfth-century Tales of the Heike, men cry copiously. The warrior Koremori declares, "I am forever undecided," and weeps. The monk Sonei weeps in abjection as he pleads to be told the way to escape the endless circle of death and rebirth, and weeps tears of joy when he is told.

In the translation of Beowulf by Leslie Hall,

Then the noble-born king kissed the distinguished, Dear-lovèd liegeman, the Dane-prince saluted him, And claspèd his neck; tears from him fell,

These passages describe very "manly" warriors crying as something natural. My story concerns a six-year-old child whose crying we treat as abnormal.

These kids showed me what true empathy and compassion look like. The tragedy is that over the next few years, socialization, peer pressure, media influences, and the like will erode that pure compassion and empathy. They'll be inundated with messages that such behavior is weak and pathetic until they're as snarky and mean-spirited as I often am. I hope to fight that influence on my kids every step of the way. If they can retain an ounce of the caring these six-year-olds naturally have, it will do a lot of good in this world.

Some will say the boy needs to toughen up for his own sake. It's a dog-eat-dog world out there after all. Perhaps that's true, but we are not dogs and we don't have to participate in the dog eating. We are humans and we can do better.

octokit github aspnetmvc oauth comments edit

Some endpoints in the GitHub API require authorization to access private details. For example, if you want to get all of a user's repositories, you'll need to authenticate to see private repositories.

If you're building a third-party application that integrates with the GitHub API, it's poor form to ask for a user's GitHub credentials. Most users would be wary of providing that information.

Fortunately, GitHub supports the OAuth web application flow. This allows your app to authenticate with GitHub without ever having access to a user's GitHub credentials.

In this post, I'll show the basics of implementing this workflow using Octokit.NET.

OAuth Web Flow

The basic OAuth web flow works as follows.

  1. On an unauthenticated request to your site, your site redirects the user to the GitHub OAuth login URL (hosted on github.com) with some information in the query string, such as your application's identity and the list of scopes (permissions) your application requests.
  2. The GitHub OAuth login page then prompts the user to either accept or reject this authorization request. If the user is not already logged into GitHub.com, they'll log in first.
  3. If the user clicks "Authorize application", this page redirects back to your site with a special session code.
  4. Your site then makes a server-to-server request, exchanging that session code and your application's client secret for an OAuth access token. You can then use that token with Octokit.net to make other API requests. Example URLs for these steps follow this list.
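Concretely, the URLs involved look roughly like this (all values illustrative; Octokit.net builds the real ones for you, as the code below shows):

https://github.com/login/oauth/authorize?client_id=YOUR_CLIENT_ID&scope=user,notifications&state=RANDOM_CSRF_STRING
https://your-site.example/home/authorize?code=TEMPORARY_SESSION_CODE&state=RANDOM_CSRF_STRING
POST https://github.com/login/oauth/access_token

The first is the redirect in step 1, the second is GitHub redirecting back to your callback in step 3, and the POST is the server-to-server exchange in step 4.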

Register your application

Before any of this will work, you'll need to register your application on GitHub.com to get your application's client secret. Never share this secret with anyone else!

While logged in, go to your account settings and click the Applications tab. Then click "Register new application". Or you can just navigate to https://github.com/settings/applications/new.

Here's where you can fill in some details about your application.

application registration

After you click "Register application", you'll see your client id and client secret.

OAuth application registration details

Implement the web flow

I've put together a simple, raw ASP.NET MVC demonstration to illustrate how the workflow works. In a real ASP.NET MVC application, I would probably implement OWIN middleware (which has been done before; I link to it later). In an older ASP.NET MVC application, I might implement a custom AuthorizeAttribute.

If you want to follow along from scratch, create a new ASP.NET MVC project in Visual Studio and then install the Octokit.net package:

Install-Package Octokit

Here's the code for the HomeController. I tried to make it easy to follow along.

public class HomeController : Controller
{
    // TODO: Replace the following values with the values from your application registration. Register an
    // application at https://github.com/settings/applications/new to get these values.
    const string clientId = "106002c37f27482617fb";
    private const string clientSecret = "66d5263cadd3bfe056dd46147154ba1eb2fe60b8";
    readonly GitHubClient client =
        new GitHubClient(new ProductHeaderValue("Haack-GitHub-Oauth-Demo"), new Uri("https://github.com/"));

    // This URL uses the GitHub API to get a list of the current user's
    // repositories which include public and private repositories.
    public async Task<ActionResult> Index()
    {
        var accessToken = Session["OAuthToken"] as string;
        if (accessToken != null)
        {
            // This allows the client to make requests to the GitHub API on the user's behalf
            // without ever having the user's OAuth credentials.
            client.Credentials = new Credentials(accessToken);
        }

        try
        {
            // The following request retrieves all of the user's repositories and
            // requires that the user be logged in to work.
            var repositories = await client.Repository.GetAllForCurrent();
            var model = new IndexViewModel(repositories);

            return View(model);
        }
        catch (AuthorizationException)
        {
            // Either the accessToken is null or it's invalid. This redirects
            // to the GitHub OAuth login page. That page will redirect back to the
            // Authorize action.
            return Redirect(GetOauthLoginUrl());
        }
    }

    // This is the Callback URL that the GitHub OAuth Login page will redirect back to.
    public async Task<ActionResult> Authorize(string code, string state)
    {
        if (!String.IsNullOrEmpty(code))
        {
            var expectedState = Session["CSRF:State"] as string;
            if (state != expectedState) throw new InvalidOperationException("SECURITY FAIL!");
            Session["CSRF:State"] = null;

            var token = await client.Oauth.CreateAccessToken(
                new OauthTokenRequest(clientId, clientSecret, code)
                {
                    RedirectUri = new Uri("http://localhost:58292/home/authorize")
                });
            Session["OAuthToken"] = token.AccessToken;
        }

        return RedirectToAction("Index");
    }

    private string GetOauthLoginUrl()
    {
        string csrf = Membership.GeneratePassword(24, 1);
        Session["CSRF:State"] = csrf;

        // 1. Redirect users to request GitHub access
        var request = new OauthLoginRequest(clientId)
        {
            Scopes = {"user", "notifications"},
            State = csrf
        };
        var oauthLoginUrl = client.Oauth.GetGitHubLoginUrl(request);
        return oauthLoginUrl.ToString();
    }

    public async Task<ActionResult> Emojis()
    {
        var emojis = await client.Miscellaneous.GetEmojis();

        return View(emojis);
    }
}

If you visit the /Home/Emojis endpoint, you'll see it works fine without authentication since GitHub doesn't require authentication in order to see the emojis.

But visiting /Home/Index requires authentication. That redirects to GitHub.com. GitHub.com in turn redirects back to /Home/authorize which stores the OAuth access token in the session. In a real application I might store this in an encrypted cookie or the database.
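For illustration, here's a rough sketch of the encrypted cookie option using ASP.NET's MachineKey API. The cookie name and "purpose" string are my own inventions, and it needs System.Text, System.Web, and System.Web.Security:

// In the Authorize action: protect the token before storing it in a cookie.
byte[] protectedBytes = MachineKey.Protect(
    Encoding.UTF8.GetBytes(token.AccessToken), "GitHubOAuthToken");
Response.Cookies.Add(new HttpCookie("OAuthToken", Convert.ToBase64String(protectedBytes))
{
    HttpOnly = true,
    Secure = true
});

// Back in the Index action: unprotect the cookie before handing the token to Octokit.net.
HttpCookie cookie = Request.Cookies["OAuthToken"];
if (cookie != null)
{
    byte[] tokenBytes = MachineKey.Unprotect(
        Convert.FromBase64String(cookie.Value), "GitHubOAuthToken");
    client.Credentials = new Credentials(Encoding.UTF8.GetString(tokenBytes));
}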

To get this sample working, make sure to replace the clientId and clientSecret constants with the values you got from registering your own application.

When you visit the home page and authorize the application, you'll see a list of your repositories lovingly rendered by my beautiful web design.

the beautiful result

Next Steps

If you're using ASP.NET MVC 5 or any OWIN-based application, there's an OWIN OAuth provider for GitHub you can use instead to provide authentication. I haven't played with it, so I can't speak to how good it is, or to how you obtain the OAuth access token when you use it in order to pass it to Octokit.net.

git ghfw vs comments edit

In a recent version of GitHub for Windows, we made a quiet change that had a subtle effect you might have noticed. We changed the default merge strategy for *.csproj and similar files. If you make changes to a .csproj file in a branch and then merge it to another branch, you'll probably run into more merge conflicts now than before.

Why?

Well, it used to be that we would do a union merge for *.csproj files. The git merge-file documentation describes this option like so:

Instead of leaving conflicts in the file, resolve conflicts favouring our (or their or both) side of the lines.

For those who don't speak Commonwealth English, "favouring" is a common British misspelling of the one true spelling, "favoring". :trollface:

So when a conflict occurs, it tries to resolve it by more or less accepting all changes. It's basically a cop-out.

If you really want this behavior for your repository, you can set this strategy in a .gitattributes file like so.

*.csproj  merge=union

But let me show why you probably don't want to do that and why we ended up changing this.

Union Merges Gone Wild

Suppose we start with the following simplified foo.csproj file in our master branch along with that .gitattributes file:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

After creating that file, let's make sure we commit it.

git init .
git add -A
git commit -m "Initial commit of gittattributes and foo.csproj"

We then create a branch (git checkout -b branch) creatively named "branch" and insert the following snippet into foo.csproj in between the AAA.cs and DDD.cs elements.

    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>

For those who lack imagination, here's the result that we'll commit to this branch.

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Don't forget to commit this if you're following along.

git commit -a "Add BBB.cs element"

Ok, so let's switch back to our master branch.

git checkout master

And then insert the following snippet into the same location.

    <Page Include="CCC.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>

The result now in master is this:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="CCC.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Ok, commit that.

git commit -a "Add CCC.cs element"

Still with me?

Ok, now let's merge our branch into our master branch.

git merge branch

Here's the end result with the union merge.

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="CCC.cs">
    <Page Include="BBB.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Eww, that did not turn out well. Notice that "BBB.cs" is nested inside of "CCC.cs" and we don't have enough closing </Page> tags. That's pretty awful.

Without that .gitattributes file in place and using the standard merge strategy, the last merge command would instead result in a merge conflict, which forces you to fix it. In our minds, that's better than a quiet failure that leaves your project in this weird state. Here's what the conflict looks like:

<?xml version="1.0" encoding="utf-8"?>
<Project>
  <ItemGroup>
    <Page Include="AAA.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
<<<<<<< HEAD
    <Page Include="CCC.cs">
=======
    <Page Include="BBB.cs">
>>>>>>> branch
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
    <Page Include="DDD.cs">
      <SubType>Designer</SubType>
      <Generator>MSBuild:Compile</Generator>
    </Page>
  </ItemGroup>
</Project>

Obviously, in some idyllic parallel universe, git would merge the full CCC element after the BBB element without fudging it up and without bothering us with these pesky merge conflicts. We don't live in that universe, but maybe ours could become more like that one. I hear it's cool over there.

What's this gotta do with Visual Studio?

I recently asked folks on Twitter to vote up this User Voice issue asking the Visual Studio team to support file patterns in project files. Wildcards in .csproj files are already supported by MSBuild, but Visual Studio doesn't deal with them very well.

One of the big reasons to do this is to ease the pain of merge conflicts. If I could wildcard a directory, I wouldn't need to add an entry to *.csproj every time I add a file. An example of the kind of entry I mean follows.
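A sketch of such a wildcard entry, using MSBuild's existing pattern syntax (path illustrative); MSBuild happily compiles whatever the pattern matches, it's the Visual Studio tooling that trips over it:

<ItemGroup>
  <Compile Include="Models\**\*.cs" />
</ItemGroup>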

Another way would be to write a proper XML merge driver for Git, but that's quite a challenge, as my co-worker Markus Olsson can attest. If it were easy, or even moderately hard, it would have been done already. Though I wonder: if we limited it to common .csproj issues, could we write one that isn't perfect, but good enough to handle the common merge conflicts? Perhaps.

Even if we did this, the merge driver only solves the problem for one version control system, though arguably the only one that really matters. :trollface:

It's been suggested that if Visual Studio sorted its elements first, that would help mitigate the problem. Sorting would reduce the incidental conflicts caused by Visual Studio's apparently non-deterministic ordering of elements, but it doesn't make merge conflicts go away. In the example I presented, every element remained sorted throughout. Any time two different branches add files that would end up adjacent, you run the risk of this conflict. And that happens quite frequently.

Wildcard support would make this problem almost completely go away. Note, I said almost. There would still be the occasional conflict in the file, but they'd be very rare.

nuget comments edit

According to Maarten Balliauw, Building .NET projects is a world of pain. He should know: he's a co-founder of MyGet.org, which provides private NuGet feeds along with build services for those packages.

He's also a co-author of the Pro NuGet book, though I might argue he's most famous for his contribution to Let Me Bing That For You.

His post gives voice to a frustration I've long had. For example, if you want to build a project library that targets Windows 8 RT, you have to install Visual Studio on your build machine. That's just silly fries! (By the way, if you have a solution that doesn't require Visual Studio, I'd love to hear it!)

UPDATE: Nick Berardi writes about an approach that doesn't require Visual Studio. Of course, there are several caveats with that approach. First, any upgrade requires you to re-do the copy. Second, I'm not sure what the licensing implications are. You might still technically need a Visual Studio license for that server to do this. In any case, I opened a User Voice issue asking Microsoft to just clean this mess up and make it easier for us to do this.

Maarten doesn't just rant about this situation, he proposes a solution (emphasis mine):

I do not think we can solve this quickly and change history. But I do think from now on we have to start building SDK’s differently. Most projects only require an MSBuild .targets file and some assemblies, either containing MSBuild tasks or reference assemblies, to do their compilation work. What if… we shipped the minimum files required to succesfully build a project as NuGet packages?

This philosophy aligns well with my personal philosophy on self-contained builds and was a key design goal with NuGet. One of the guiding principles I wrote about when we first announced NuGet:

Works with your source code. This is an important principle which serves to meet two goals: The changes that NuGet makes can be committed to source control and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual studio. It leaves those tasks to other package managers such as the Visual Studio Extension manager and the Web Platform Installer.

There's a caveat: NuGet does store packages in a machine-specific location outside of the solution, but that's an optimization. The point is, a developer should ideally be able to check out your code from GitHub (or another source hosting service) and build the solution. Bam! Done! If it takes many more steps than that, contributing becomes a pain.

Fortunately, there are some great features in NuGet that can help package authors reach this goal!

Import MSBuild targets and props files into project

NuGet 2.5 introduces the ability to import MSBuild targets and props files into a project. As more projects take advantage of this feature, we'll hopefully see the demise of MSIs being required just to work on a project. As Maarten points out, MSIs (or Visual Studio extensions) are still valuable for adding extra tooling. But they shouldn't be required in order to build a project.
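To give a feel for the mechanics: when a package ships a build\SomePackage.targets file (package name hypothetical), NuGet wires it into the consuming project with an Import element along these lines.

<Import Project="..\packages\SomePackage.1.0.0\build\SomePackage.targets"
        Condition="Exists('..\packages\SomePackage.1.0.0\build\SomePackage.targets')" />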

Development-only dependencies

In tandem with importing MSBuild targets, NuGet 2.7 adds the ability to specify development-only dependencies.

This feature was contributed by Adam Ralph and it allows package authors to declare dependencies that are only used at development time and shouldn't become dependencies of the package itself. By adding a developmentDependency="true" attribute to a package in packages.config, nuget.exe pack will no longer include that package as a dependency.

These are packages that don't get deployed with your application, such as MSBuild targets, code contract assemblies, or source-code-only packages.

You can see an example of this in use with Octokit.net in its packages.config.

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="DocPlagiarizer" version="0.1.1" targetFramework="net45" developmentDependency="true" />
  <package id="SimpleJson" version="0.34.0" targetFramework="net45" developmentDependency="true" />
</packages>

My recommendation to package authors is to consider a separate *.Runtime package that contains just the assemblies that need to be deployed, with the main package depending on it and bringing in all the build-time dependencies such as MSBuild targets and whatnot. It keeps a nice separation and works well for other non-Visual Studio NuGet consumers such as WebMatrix, ASP.NET Web Pages, Xamarin, etc.

Related dependencies feature

At the end of his post, Maarten notes that there is good progress towards build sanity.

P.S.: A lot of the new packages like ASP.NET MVC and WebApi, the OData packages and such are being shipped as NuGet packages which is awesome. The ones that I am missing are those that require additional build targets that are typically shipped in SDK's. Examples are the Windows Azure SDK, database tools and targets, ... I would like those to come aboard the NuGet train and ship their Visual Studio tooling separately from teh artifacts required to run a build.

This reminds me of a feature proposal I wrote a draft specification for a long time ago called Related Dependencies. You can tell it's old because it refers to the old name for NuGet.

These are basically "optional" dependencies that can bring in tooling from other package managers such as the Visual Studio Extensions gallery. In the spec I mentioned "prompting," but the goal would be a non-obtrusive way for packages to highlight other tooling related to the package dependency and make it easy for developers to install all of them.

In my mind, this would be similar to how you are notified of updates in the Visual Studio Extension Manager (now called "Extensions and Updates" dialog). Perhaps there's another tab that lets you see extensions related to the packages installed in your solution and an easy way to install them all.

But these would have to be optional. You should be able to build the solution without them. Installing them just makes the development experience a bit better.

personal, github comments edit

If you happen to be in Oahu next week (lucky you!), come see my talk on GitHub Secrets at the University of Hawaii (lucky me!) on Wednesday, April 9, 2014 at 5:30 PM. Did I mention good food will be served?!

What am I speaking about? Well I asked a few dear friends of mine what questions they would want answered in a talk by me and this is what they came up with.

What's the software industry like? Great question, DJ Pauly D! (Photo by Eva Rinaldi, license CC BY-SA 2.0)
What's the secret to success as a developer? I have a few ideas, Mr. Bill Gates! (Photo by World Economic Forum, license CC BY-SA 2.0)
What's the secret to GitHub's success? Well, it's a combination of factors, Ms. Marissa Mayer. (Photo by Michael Tippet, license CC BY-SA 2.0)
Tell me GitHub.com secrets for great success. You got it, Mr. Mark Zuckerberg! (Photo by Jason McElWeenie, license CC BY 2.0)

It's really an opportunity to talk to developers and students about topics that are near and dear to my heart.

Share your knowledge when you travel

This is the second time I'm giving a talk while on vacation in Hawaii. The first time was a couple of years ago. When I went back home to Alaska, I also gave a talk there.

I've found that places that are outside of the usual tech-hubs tend to be very welcoming to outside speakers. It can be hard to maintain a software user group when you don't have a large pool of speakers to draw from as you would in Seattle or San Francisco.

So if you find yourself on vacation in a place like Alaska or Hawaii and you have something interesting to share, consider getting in touch with a local user group. It may be that your fresh perspective is exactly what they'd like to see.

But here's a pro tip. Giving a talk while on vacation does introduce an element of stress that you're probably going on vacation to avoid in the first place. I advise trying to schedule the talk near the beginning of your vacation. The amazing feeling of relief after giving a talk will help you relax for the rest of the trip.

Sadly, I did not think of this until after I scheduled this talk. But I think I've prepared enough in advance that I'll be able to relax. After all, I'll be in Hawaii. How can I complain?

personal, blogging comments edit

I'm going through a bit of a funk with work and writing. They seem somewhat intertwined. Writing this blog has been such an important outlet for me that it's rough when I can't seem to muster the energy to just keep writing.

So what do you do when you have blogger's block? You blog about blogging of course! This isn't the first time I've done it.

Looking at this list, which isn't even comprehensive, I now realize I have a bit of a blogging problem. I mean, just look at that last entry in the list. That has to be when blogging about blogging jumped the shark. Then again, after Fonzie jumped the shark, Happy Days continued on as a top television show for six more seasons. Jumping the shark doesn't necessarily lead to decline.

But I digress.

These days, I have a new tool for fighting blogger's block. On Twitter today, I asked people to do me a favor. Now that my blog is hosted on GitHub.com using Jekyll, there's an associated repository with issues!

So I asked folks to log an issue with a topic you'd like me to write about. In other words, for some crazy reason, you think I might have something interesting to write about this topic.

A big challenge for me is each blog post (well, every one except this one) takes me a lot of time and effort to write. So I end up looking at that effort and decide to watch Game of Thrones or Archer instead.

Sometimes though, an idea will just grab me by the neck and not let go until I absolutely have to write about it. I can't promise I'll address every idea posted in my blog's issues. I might not even do any. But my hope is one of the posted issues will grab me hard and toss me in front of the keyboard until all the words spill out.

One topic I plan to write about is how I've been using GitHub repositories and issues to manage many aspects of my life apart from software lately. I also want to make sure I get back to my roots and blog more about code. But I am curious to hear what you're interested in. Thanks!

personal, empathy comments edit

If I had to pick only one trait I hope to instill in my children, it's empathy. It's on my mind because of this beautiful post by Reg Braythwayt.

Empathy is not seeing the world with your eyes from where someone else is standing, it’s seeing the world with their eyes, from their perspective, coloured with their hopes and fears, their life experience.

Empathy is putting yourself in someone else’s shoes and then overcoming your own thoughts of what you would do in their shoes and imagining what it feels like to be them in their shoes.

You'll note the unnecessary "u" in "coloured". Reg is Canadian, but don't let that stop you from reading the whole post. It's brief but wonderful.

In fact, it's so good, part of the point of my blog post is to draw attention to his post. Especially the iconic image in his post that is a powerful illustration of real empathy.

But this reminds me of a scene from an early-2000s television sitcom known for exploring the dark recesses of human psychology, Malcolm in the Middle. In an episode entitled Reese Cooks, Reese, an older brother to Malcolm, exhibits mild sociopathic tendencies. In an effort to show him more attention, ~~Heisenberg~~Hal, his dad, signs him up for a cooking class.

Malcolm in the Middle episode: Reese Cooks - Heisenberg in his previous marriage

Reese discovers he has a natural talent for cooking and really takes to the class. The parents are amazed at his transformation until a cooking contest where he ends up sabotaging the other contestants' dishes because "It was fun!"

His mom, Lois, and dad then attempt to teach him about empathy.

Lois:
How would you feel if you were that poor woman whose quiche you salted?
Reese:
…Fat?
Hal:
Reese, do you know what empathy is?
Reese:
No.
Hal:
Well, empathy is putting yourself in other people's shoes so you can feel what they do. If you hurt someone, empathy makes you hurt as well.
Reese:
Then why would you want empathy?

Why would you want it, indeed? It sounds kind of, well, painful. Why would anyone want to be empathetic? How do you explain the benefit to someone who's not inclined to be empathetic? How do you explain it to someone who seems to look out only for themselves?

It's in your own best interest to be empathetic.

I don't mean this in some vague karmic fashion, but in a concrete sense.

It makes for better relationships with others.

It's hard to carry on meaningful relationships with others when you constantly misunderstand the intentions and motivations of those around you. This applies to your friends, family, and work relationships. You can imagine that being around someone who constantly misinterprets your intentions would lead to unnecessary conflict.

Empathy helps people better understand the mindset of those around them. This helps people address the real issues rather than talking past each other or working at cross purposes.

It helps you make better choices for your own well-being.

Everyone views the world through a lens of their own experience. In effect, our own biases feed us misinformation, which affects our ability to make decisions. Empathy helps one see the truth of a situation and act accordingly.

Too often, people spend much of their time engaging in behavior that is ultimately not in their long-term self-interest, all for an apparent short-term gain. Sometimes it's obvious: it might feel real good to smoke that cigarette, but in the long term you know you'd be better off quitting.

Sometimes it's more subtle. For example, when a marginalized person speaks out against some abuse they've faced, it seems inevitable that there's a strong backlash from people who, although not involved in that particular incident in any way, feel a sense of being attacked.

I ascribe this to a lack of empathy. People jump to conclusions that ascribe the worst motives and demonize others who don't share the same worldview.

Empathy makes you realize that everybody has their struggles in life and is just trying to get by. People spend their time concerned with their own well-being, not with negatively affecting yours. As a friend once told me, we're all just squirrels trying to get a nut in this world.

Spending a lot of time demonizing others who don't conform to your worldview leads to a pretty unhealthy existence. This isn't to say that you must agree with everyone, but that you recognize the lives of others are not so black and white, much as yours isn't.

It makes you a more effective person.

All too often I see leaders who flip the bozo bit on an employee, or who color the employee's experiences through their own lens. This makes the leader extremely ineffective at motivating people to do their best work. It creates an environment where those who don't see things the same way as the leader are demoralized, even though they may be doing great work otherwise.

Likewise, I often see employees flip the bozo bit on a leader because of a lack of empathy for the challenges and pressures of being a leader. This makes the employee ineffective. It's hard to influence decisions when you lack basic empathy for the viewpoint you're arguing against.

Conclusion

Someone truly concerned about their own long-term well-being would see the benefits of empathy.

This isn't the first time I've written about empathy and won't be the last. You might find my other posts that talk about empathy in various contexts helpful.

rx, software comments edit

What would you do if you could stop time for everyone but yourself?

When I was a kid, I watched a TV movie called The Girl, The Gold Watch, and Everything that explored this question. The main character, Kirby, inherits a very special gold watch from his uncle. The watch can stop time for everyone but its bearer, who is free to move around and troll people. Here's a clip from the movie where Kirby and his friend have a bit of fun with it.

The Girl, The Gold Watch, and Everything

This motif has been repeated in more recent movies as well. I often daydream about the shenanigans I could get into with such a device. If you had such a device, I'm sure you would do what I would do: use the device to write deterministic tests of asynchronous code of course!

Writing tests of asynchronous code can be very tricky. You often have to resort to calling Thread.Sleep or Task.Delay within an asynchronous callback so you can control the timing and assert what you need to assert.

For the most part, these are ugly hacks. What you really want is fine-grained control over execution timing. You need a device like Kirby's gold watch.

Here's the good news. When you use Reactive Extensions (Rx), you have such a device at your disposal! Try not to get into too much trouble with it.

In the past, I've written about how Rx can reduce the cognitive load of asynchronous code through a declarative model. Rather than attempting to orchestrate all the interactions that must happen asynchronously at the right time, you simply describe the operations that need to happen and Reactive Extensions orchestrates everything for you.

This nearly eliminates race conditions and deadlocks while also reducing the cognitive load and potential for mistakes when writing asynchronous code.

Those are all amazing benefits of this approach, yet those aren’t even my favorite thing about Reactive Extensions. My favorite thing is how the abstraction allows me to bend time itself to my will when writing unit tests. FEEL THE POWER!

Everything in Rx is scheduled using schedulers. Schedulers are classes that implement the IScheduler interface. This simple, but powerful, interface contains a Now property as well as three Schedule methods for scheduling actions to be run.
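For reference, here's the shape of that interface as defined in System.Reactive.Concurrency.

public interface IScheduler
{
    // The scheduler's notion of "now". This, not the system clock,
    // is what makes virtual time possible.
    DateTimeOffset Now { get; }

    // Schedules an action to run immediately.
    IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action);

    // Schedules an action to run after the given relative due time.
    IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action);

    // Schedules an action to run at an absolute time.
    IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action);
}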

Control Time with the TestScheduler

Rx provides the TestScheduler class (available in the Rx-Testing NuGet package) to give you absolute control over scheduling. This makes it possible to write deterministic repeatable unit tests.

Unfortunately, it's a bit of a pain to use as-is which is why Paul Betts took it upon himself to write some useful TestScheduler extension methods available in the reactiveui-testing NuGet package. This library provides the OnNextAt method. We'll use this to create an observable that provides values at specified times.

The following test demonstrates how we can use the TestScheduler.

[Fact]
public void SchedulerDemo()
{
    var sched = new TestScheduler();
    var subject = sched.CreateColdObservable(
        sched.OnNextAt(100, "m"), // Provides "m" at 100 ms
        sched.OnNextAt(200, "o"), // Provides "o" at 200 ms
        sched.OnNextAt(300, "r"), // Provides "r" at 300 ms
        sched.OnNextAt(400, "k")  // Provides "k" at 400 ms
    );

    string seenValue = null;
    subject.Subscribe(value => seenValue = value);

    sched.AdvanceByMs(100);
    Assert.Equal("m", seenValue);

    sched.AdvanceByMs(100);
    Assert.Equal("o", seenValue);

    sched.AdvanceByMs(100);
    Assert.Equal("r", seenValue);

    sched.AdvanceByMs(100);
    Assert.Equal("k", seenValue);
}

We start off by creating an instance of a TestScheduler. We then create an observable (subject) that provides four values at specific times. We subscribe to the observable and set the seenValue variable to whatever values the observable supplies us.

After we subscribe to the observable, we start to advance the scheduler's clock using the AdvanceByMs method. At this point, we are in control of time as far as the scheduler is concerned. Feel the power! The test scheduler is your gold watch.

Note that these are timings on a virtual clock. When you run this test, the code executes pretty much instantaneously. When you see AdvanceByMs(100), the scheduler's clock advances by that amount, but your computer's real clock does not have to wait 100 ms. You could call AdvanceByMs(99999999) and that statement would execute instantaneously.

Real World Example

Ok, that's neat. But let's see something that's a bit more real world. Suppose you want to kick off a search (as in an autocomplete scenario) when someone types values into a text box. You probably don't want to kick off a search for every keystroke. Instead, you want to throttle it a bit. We'll write a method that takes advantage of Rx's Throttle method to do exactly that. From the MSDN documentation, the Throttle method:

Ignores the values from an observable sequence which are followed by another value before due time with the specified source, dueTime and scheduler.

Throttle is the type of method you might use with a text field that does incremental search while you're typing. If you type a set of characters quickly one after the other, you don't want a separate HTTP request made for each character. You'd rather wait till there's a slight pause before searching because the old results are going to be discarded anyways. Here's a super simple Throttle example that throttles values coming from some subject. No matter how quickly the subject produces values, the Subscribe callback will only see a value after the subject has been quiet for 10 milliseconds.

  subject.Throttle(TimeSpan.FromMilliseconds(10))
      .Subscribe(value => seenValue = value);

With that in place, here's the ThrottleTextBox method for our text box scenario.

public static IObservable<string> ThrottleTextBox(TextBox textBox, IScheduler scheduler)
{
    return Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>(
        h => textBox.TextChanged += h,
        h => textBox.TextChanged -= h)
        .Throttle(TimeSpan.FromMilliseconds(400), scheduler)
        .Select(e => ((TextBox)e.Source).Text);
}

What we do here is use the Observable.FromEventPattern method to create an observable from the TextChanged event. If you're not used to it, the FromEventPattern method is kind of gnarly.

Once again, Paul Betts has your back with the very useful ReactiveUI-Events package on NuGet. This package adds an Events extension method to most Windows controls that provides observable event properties. Here's the code rewritten using that. It's much easier to understand.

public static IObservable<string> ThrottleTextBox(TextBox textBox, IScheduler scheduler)
{
    return textBox
        .Events()
        .TextChanged // IObservable<TextChangedEventArgs>
        .Throttle(TimeSpan.FromMilliseconds(400), scheduler)
        .Select(e => ((TextBox)e.Source).Text);
}

What we're doing here is creating a method that signals us when the text of the TextBox changes, but only after there's been no change for 400 milliseconds. It will then give us the full text of the text box.

Here's a unit test to make sure we wrote this correctly.

[Fact]
public void TextBoxThrottlesCorrectly()
{
    var textBox = new TextBox();

    new TestScheduler().With(sched =>
    {
        string observed = null;
        ThrottleTextBox(textBox, sched)
            .Subscribe(value => observed = value);

        textBox.Text = "m";
        Assert.Null(observed);

        sched.AdvanceByMs(100);
        textBox.Text = "mo";
        Assert.Null(observed);

        textBox.Text = "mor";
        sched.AdvanceByMs(399); // Just about to kick off the throttle
        Assert.Null(observed);

        textBox.Text = "mork"; // But we changed it just in time.
        Assert.Null(observed);

        sched.AdvanceByMs(400); // Wait the throttle amount
        Assert.Equal("mork", observed);
    });
}

In this test, we're using the With extension method provided by the reactiveui-testing package. This method takes a lambda expression that provides us with a scheduler to pass into our ThrottleTextBox method.

Within that lambda, I am once again in complete control of time. As you can see, I start advancing the clock here and there and changing the TextBox's Text values. As you'd expect, as long as I don't advance the clock more than 400 ms in between text changes, the ThrottleTextBox observable won't give us any values.

But at the end, I go ahead and advance the clock by 400 ms after a text change and we finally get a value from the observable.

Conclusion

The throttling of a TextBox (for autocomplete and search scenarios) is probably an overused and abused example for Rx, but there's a good reason for that. It's easy to grok and explain. But don't let that stop you from seeing the full power and potential of this technique.

It should be clear how this ability to control time makes it possible to write tests that can verify even the most complex asynchronous interactions in a deterministic manner (cue "mind blown").

Unfortunately, and this next point is important, the TestScheduler doesn't extend into real life, so your shenanigans are limited to your asynchronous Reactive code. Thus, if you call Thread.Sleep(1000) in your test, that thread will really be blocked for a second. But as far as the test scheduler is concerned, no time has passed.

The good news is, with the TestScheduler, you generally don't need to call Thread.Sleep in your tests. There are many methods in Reactive Extensions for converting asynchronous calling patterns into Observables.
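For example, Observable.FromAsync and the ToObservable extension method (in the System.Reactive.Threading.Tasks namespace) wrap task-based calls in observables. Here's a minimal sketch; the URL is purely illustrative.

using System;
using System.Net.Http;
using System.Reactive.Linq;
using System.Reactive.Threading.Tasks;
using System.Threading.Tasks;

// Defers the async call so it runs per subscription and composes
// like any other observable.
IObservable<string> results = Observable.FromAsync(
    () => new HttpClient().GetStringAsync("https://example.com/search?q=mork"));

// Or convert an already-running Task<T> directly into an observable.
Task<string> task = new HttpClient().GetStringAsync("https://example.com");
IObservable<string> single = task.ToObservable();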

So while the TestScheduler might not be as much fun as Kirby's gold watch, it should make writing and testing asynchronous code a whole lot more fun than it was in the past.

personal blogging comments edit

Today Jeff Atwood commemorates 10 years of CodingHorror.com. Congratulations Jeff!

But as I read that, a thought occurred to me: haven't I been blogging as long as Jeff, albeit much less successfully? I mean, just look at the intro to his post and compare it to the intro to this one. His is way better. WAY BETTER!

And sure enough, I started this very blog you are reading (thank you for your patronage!) one day after Jeff published his first post on Coding Horror. I began on February 3, 2004 with this exciting post about, well, the blog itself.

Jeff and I didn't know each other back then but we became friends through blogging and at one point I tried to hire him to a company I co-founded but he wisely said no and went on to great things. This is why I love blogging. The community and serendipitous interactions that result have been a big part of my growth in the last decade.

Yes, there's nothing more boring than blogging about blogging (other than blogging about blogging about blogging). As I'm doing right now.

But this blog doesn't contain the full extent of my blogging history. I had a blog long before this one. I was reminded about it when Zach Holman wrote a great post entitled Only 90s Web Developers Remember This. I'm not yet 90 years old, but I do remember the practices he highlights. In fact, my previous blog perpetrated some of them.

I used to keep a blog at haack.org which no longer exists except in the great internet graveyard in the sky, The Internet Archive Wayback Machine.

Here's the homepage of my very first blog.

Please use IE 4.0 and above folks!

Digging this up and seeing that IE 4.0 disclaimer made me laugh out loud in light of what Zach writes about IE.

It's come to my attention that people today don't like Internet Explorer. I can only believe they hate Internet Explorer because it has devolved from its purest form, Internet Explorer 4.0.

Internet Explorer 4.0 was perfection incarnate in a browser. It had Active Desktop. It had Channels. It had motherfucking Channels, the coolest technology that never reached market adoption ever not even a little bit. IE4, in general, was so good that you were going to have it installed on your PC whether you liked it or not.

See, I knew perfection even back then.

And here's a shot of my very first blog post, way back on November 10, 2000. Over 13 years ago! The header image wasn't broken back then.


I'm still proud of that deep review of Charlie's Angels.

If all goes well, I hope to be blogging for the next 10 years. And hopefully there will be something in here that's interesting or useful to some of you in that time. Thanks for reading!