Tags: mvc

In the ASP.NET MVC 3 Uservoice site, one of the most voted up items is a suggestion to include an empty project template. No, a really empty project template.

You see, ASP.NET MVC 3 includes an “empty” project template, but it’s not empty enough for many people. So in this post, I’ll give you a much emptier one. It’s not completely empty. If you really wanted it completely empty, just choose the ASP.NET Empty Web Application template.

The Results

I’ll show you the results first, and then talk about how I made it. After installing my project template, every time you create a new ASP.NET MVC 3 project, you’ll see a new entry named “Really Empty.”


Select that and you end up with the following directory structure.


I removed just about everything. I kept the Views directory because the Web.config file that’s required is not obvious and there’s special logic related to the Views directory. I also kept the Controllers directory, since that’s where the tooling is going to put controllers anyways. I also kept the Global.asax and Web.config files which are typically necessary for an ASP.NET MVC project.
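For the curious, here’s an abridged sketch of what that Views Web.config does (the real file also registers the Razor configuration sections); the key bit is blocking direct requests to view files:

```xml
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <!-- Views are rendered through controllers, not served as Web Pages -->
    <add key="webpages:Enabled" value="false" />
  </appSettings>
  <system.web>
    <httpHandlers>
      <!-- Return 404 for any direct request to a file under Views -->
      <add path="*" verb="*" type="System.Web.HttpNotFoundHandler" />
    </httpHandlers>
  </system.web>
</configuration>
```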

I debated removing the AssemblyInfo.cs file, but decided to trim it down and keep it.

Building Custom Project Templates

I wrote about building a custom ASP.NET MVC 3 project template a long time ago. However, I’ve improved on what I did quite a bit. Now, I have a single install.cmd file you can run and it’ll determine whether you’re on x64 or x86 and run the correct registry script. The install.cmd and uninstall.cmd batch files are there for convenience and call into a PowerShell script that does the real work.
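For illustration, the dispatch in install.cmd boils down to something like this sketch (the .reg file names here are hypothetical; the real work happens in the PowerShell script):

```bat
@echo off
:: PROCESSOR_ARCHITECTURE reports AMD64 on 64-bit Windows and x86 on 32-bit,
:: so we can pick the matching registry script.
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (
  regedit /s install-x64.reg
) else (
  regedit /s install-x86.reg
)
```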

UPDATE 1/12/2012: Thanks to Tim Heuer, we have an even better installation experience. He refactored the project to output a VSIX file. All you need to do is double click the extension file to install the project template. I’ve uploaded the extension file to GitHub here.

I tried uploading it to the gallery, but it wouldn’t let me. I’ll follow up on that.


If you’re wondering why the product team hasn’t included this all along, it’s for a lot of reasons. There was (at least when I was there) internal debate about how empty to make it. For example, when you create a new project with my empty template, and hit F5, you get an error. Not a great experience for most people.

Honestly, I’m all for it, but there are many other higher priority items for the team to work on. So I figured I’d do it myself and put it up on GitHub.


Installation is really simple. If you like to build things from source, grab the source from my GitHub repository and run the build.cmd batch file. Then double click the resulting VSIX file. Be sure to read the README for more details.

If you don’t yet know how to use Git to grab a repository, don’t worry, just navigate to the downloads page and download the VSIX file I’ve conveniently uploaded.


Hey, if you think you can help me make this better, please go fork it and send me a pull request. Let me know if I include too little or too much.

I’ve already posted a few things that could use improvement in the README. If you’d like to help make this better, consider one of the following. :)

  • Make the script auto-detect whether VS is running and do the right thing
  • Test this on an x86 machine
  • Write an installer for this

Let me know if you find this useful.

Tags: open source, community

Mary Poppendieck writes the following in Unjust Deserts (pdf), a paper on compensation systems (emphasis mine),

There is no greater de-motivator than a reward system that is perceived to be unfair. It doesn’t matter if the system is fair or not. If there is a perception of unfairness, then those who think that they have been treated unfairly will rapidly lose their motivation.

Written over seven years ago, the paper is just as insightful and applicable today. For example, let’s apply it to the recent dust-up about the legitimacy and fairness of the Microsoft MVP Program.

I think the MVP program means well. It’s not trying to be a conspiracy or filch you of your just deserts. But if you think about the MVP program as a compensation system, it becomes very clear why people feel disillusioned.

What compensation am I talking about?

  1. An MSDN Subscription
  2. Privileged access to product teams and not yet public information (under NDA)
  3. A yearly summit which provides hotel rooms and access to product team members as well as a nice party.

Not only is it a compensation system, but the means by which compensation is doled out is perceived to be arbitrary and hidden. It’s a recipe for mistrust.

Intrinsic Motivations

Mary goes on to point out,

In the same way, once employees get used to receiving financial rewards for meeting goals, they begin to work for the rewards, not the intrinsic motivation that comes from doing a good job and helping their company be successful.

Someone asked me what I thought about the MVP program recently and I said I think Microsoft’s actually a great company, but I don’t think you should seek out recognition from Microsoft or any other corporation for your community contributions. I think that provides the wrong incentives to build community.

If you run an open source project, don’t do it to receive recognition from Microsoft. Or any other corporation for that matter (except maybe your own). Do it to scratch an itch! Do it because it’s fun. Do it to show cool stuff to your peers. Worry about their recognition more than some corporation’s.

If you answer questions about a technology on StackOverflow, do it because you enjoy sharing your knowledge with others (and you want the SO points!), not because it’s on a checklist to receive an MVP award.

Just as Mary points out, when you start to frame these activities as means to receive an extrinsic reward, you become disillusioned. So whether the program exists or not, we should strive on our part to not feel a sense of entitlement to the program and focus on our intrinsic motivations.

Fixing It

I covered what I think we should strive for. But what do I think Microsoft should do? Several things.

So far, I glossed over the fact that recognition from Microsoft isn’t the only reason people want the award. There are material benefits. MVPs are part of a privileged group that gets early access to what Microsoft is doing, which might provide a real competitive advantage. Why wouldn’t you seek that out?

Open Development

Let’s tackle the first thing first: privileged “early access”. Well, there’s one easy solution to that. Do you know why NuGet doesn’t have an “early access” program? Drew Miller nails it on Twitter:

Know how you avoid the need for a privileged group of folks under NDA that inevitably is seen as special and superior? Develop in the open.

NuGet sidesteps the whole question of a recognition program by developing in the open. The same is true for the Azure SDK. When active development occurs in a public repository, the whole concept of “early access program” makes no sense.

Not only that, but recognition in an open source project doesn’t come from some corporation. It comes from the maintainers of a project and from the folks in the project’s community that you’ve helped. You can point to the reason people are recognizing you.

Better Free Tools

The other reason folks want an MVP award is access to the professional tools. Most companies will easily shell out the money for this, but if you’re a hobbyist or open source developer, it’s a lot of money.

In this regard, I think Microsoft should either give its free Express tools more pro features, such as support for Visual Studio extensions and multi-project solutions, or simply make Visual Studio Professional free, and focus on developing the ecosystem, which gets a boost when everyone has better tools to build on its platform. Everyone wins.

Focused recognition

I don’t think it’s inherently wrong for a company to recognize people’s contributions. But it has to be done in a way so that it’s seen as icing and not an entitlement or cronyism.

It’s darn near impossible to conceive of a recognition program that would be seen as universally fair and that recognizes something as broad as “community contributions”. A better approach might be to have multiple smaller recognition programs. Focus on removing obstacles that get in the way of people doing the things that are good for all of us. For example, it benefits Microsoft when:

  1. People are helping solve each other’s problems on the forums.
  2. People are giving talks about their products.
  3. People are building software (open and not) on their platforms.
  4. Probably some others I’m forgetting…

For what it’s worth, I think the first one is already solved by StackOverflow. Just move your forums there and be done with it. After all, nobody gets upset when they answer a question on Twitter and don’t get StackOverflow points.


Will Microsoft change the program? I have no idea. I’m not all that concerned about it, really. In the meantime, we can recognize folks who make our lives better. We don’t need to wait for Microsoft to do so. I’ve used a huge swath of open source projects that have made my development smoother. I’ve found many great answers in forums, blog posts, and StackOverflow that unblocked me.

Moving forward, I’ll make an extra effort to thank the people responsible for those things. Maybe there are some projects and folks you should recognize. Go for it! It’ll feel good.

Disclaimer: I was a Microsoft MVP for about three months before joining Microsoft as an employee. I’m now an employee of GitHub. My opinion here is simply my own and does not necessarily represent the opinion of any employer past, present, or future. Nor does it represent the opinion of my dog, because I don’t have one, nor anyone in my neighborhood.

Tags: code, tdd

In the past, I’ve tried various schemes to structure my unit tests but never fell into a consistent approach. Pretty much the only rule I had (which I broke all the time) was to write a test class for each class I tested. I would then fill that class with a ton of haphazard test methods.

That was until I saw the approach that Drew Miller took. The way he structured the unit tests struck me as odd at first, but it quickly won me over. Drew tells me he can’t take all the credit for this approach. It came from his time at CodePlex, and builds upon practices he learned from Brad Wilson and Jim Newkirk. That’s the thing I like about Drew: he won’t take credit for other people’s work. Unlike me, of course.

The structure has a test class per class being tested. That’s not so unusual. But what was unusual to me was that he had a nested class for each method being tested.

I’ll provide a simple code example to illustrate this approach and then highlight some of the benefits. The following has two methods for embellishing names with more interesting titles. What it does isn’t really that important for this discussion.

using System;

public class Titleizer
{
    public string Titleize(string name)
    {
        if (String.IsNullOrEmpty(name))
            return "Your name is now Phil the Foolish";
        return name + " the awesome hearted";
    }

    public string Knightify(string name, bool male)
    {
        if (String.IsNullOrEmpty(name))
            return "Your name is now Sir Jester";
        return (male ? "Sir" : "Dame") + " " + name;
    }
}

Under Drew’s system, I’ll have a corresponding top level class, with two embedded classes, one for each method. In each class, I’ll have a series of tests for that method.

Let’s look at a set of potential tests for this class. I wrote xUnit.NET tests for this, but you could apply the same approach with NUnit, mbUnit, or whatever you use.

using Xunit;

public class TitleizerFacts
{
    public class TheTitleizerMethod
    {
        [Fact]
        public void ReturnsDefaultTitleForNullName()
        {
            // Test code
        }

        [Fact]
        public void AppendsTitleToName()
        {
            // Test code
        }
    }

    public class TheKnightifyMethod
    {
        [Fact]
        public void ReturnsDefaultTitleForNullName()
        {
            // Test code
        }

        [Fact]
        public void AppendsSirToMaleNames()
        {
            // Test code
        }

        [Fact]
        public void AppendsDameToFemaleNames()
        {
            // Test code
        }
    }
}

Pretty simple, right? If you want to see a real-world example, take a look at these tests of the user service.

So why do this at all? Why not stick with the old way I’ve done in the past?

Well for one thing, it’s a nice way to keep tests organized. All the tests (or facts) for a method are grouped together. For example, if you use the CTRL+M, CTRL+O shortcut to collapse method bodies, you can easily scan your tests and read them like a spec for your code.


You also get the same effect if you run your tests in a test runner such as the xUnit test runner:


When the test class file is open in Visual Studio, the class drop down provides a quick way to see a list of the methods you have tests for.


This makes it easy to then see all the tests for a given method by using the drop down on the right.


It’s a minor change to my existing practices, but one that I’ve grown to like a lot and hope to apply in all my projects in the future.

Update: Several folks asked about how to have common setup code for all tests. ZenDeveloper has a simple solution in which the nested child classes simply inherit the outer parent class. Thus they’ll all share the same setup code.

Tags: unit testing, tdd, xunit

Tags: personal

Happy New Year’s Eve everyone! And by the time you read this, it’ll probably already be the new year. To my friends across the international date line, what is 2012 like? The rest of us will be there soon.

New Year’s Eve has always been one of my favorite holidays. It brings a collective time for reflection on the past year and anticipation and hope for the year to come.

And for me, New Year’s Eve has an extra special meaning because exactly ten years ago on New Year’s Eve, I met this woman at Giant Village. A mutual friend suggested that we should meet since we were both attending this event. This woman was there with her brother, and I was there with a buddy.


I wonder what she’s been up to after all these years?

Just kidding!

I know what she’s up to. We met in 2001 and were smitten by the time 2002 arrived and have been together ever since. Ten years later, we’ve added to our funky bunch. We work hard hoping to keep these little munchkins alive. What a difference a decade makes, no?


So yeah, New Year’s Eve totally rocks in my book.

Tags: code, open source

’Tis the season for “Year in Review” and “Best of” blog posts. It’s a vain practice, to be sure. This is exactly why I’ve done it almost every year! After all, isn’t all blogging pure vanity? Sadly, I did miss a few years when my vanity could not overcome my laziness.

This year I am changing it up a bit to look at some of the highlights, in my opinion, that occurred in 2011 with open source software and the .NET community. I think it’s been a banner year for OSS and .NET/Microsoft, and I think it’s only going to get better in 2012.


NuGet

We released NuGet 1.0 in the beginning of this year and it had a big impact on the amount of sleep I got last year. Insomnia aside, it’s also had a significant impact on the .NET community and been very well received.

One key benefit of NuGet is that it provides a central location for people to discover and easily acquire open source libraries. This alone helps many open source libraries gain visibility. The NuGet gallery now has over 4,000 unique packages and 3.4 million package downloads.

Scott Hanselman noted another impact I hadn’t considered in his DevReach 2011 keynote. To understand his observation, I need to provide a bit of background.

Back in April, Microsoft released the ASP.NET MVC 3 Tools update. This added support for pre-installed NuGet packages in the ASP.NET MVC 3 project templates so that projects created from these templates already include dependent libraries installed as NuGet packages rather than as flat files in the project. This allows developers who create a project from a template to upgrade these libraries after the project has been created via NuGet.
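For example, a project created from the updated MVC 3 templates includes jQuery as a NuGet package, so moving to a newer jQuery release is (in principle) a single Package Manager Console command:

```
PM> Update-Package jQuery
```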

NuGet 1.5 adds this support for pre-installed packages to any project template that wants it. In the preview for ASP.NET MVC 4, we included libraries such as Modernizr, jQuery, jQuery UI, jQuery Validation, and Knockout in this manner. We expect other project templates in the future to take advantage of this as well.

The interesting observation Hanselman had in his keynote is that this is an example of Microsoft giving equal billing to these open source libraries as it does to its own. When you create an ASP.NET MVC 4 project, your project includes Microsoft packages alongside 3rd party OSS packages all installed in the same manner.

Additionally, the way NuGet itself was developed is also important. NuGet is an Apache v2 open source project that accepts contributions from the community. Microsoft gave it to the Outercurve Foundation and continues to supply the project with employee contributors.

Orchard Project

Before there was NuGet, there was Orchard. Orchard is an open source CMS that was started at Microsoft and also contributed to the Outercurve Foundation.

What’s really impressive about Orchard is the amount of community involvement they’ve fostered. They’ve set up a governance structure consisting of an elected steering committee so that it’s truly a community run project.

They recently surpassed 1 million module downloads from their online gallery. Modules are extensions to Orchard that are installable directly from within the Orchard admin.


Umbraco

Umbraco is an independent open source CMS that has a huge following and a strong community. They’ve been around for a while, long before 2011. But in 2011, Microsoft hosted the redesigned site using Umbraco.

Micro-ORMs

For lack of a better term, I think 2011 was the year of the micro-ORM. While many refer to these libraries as micro-ORMs, they’re not technically ORMs; they’re simpler data access libraries. A non-comprehensive list of the ones that made a big splash:

If you’re interested in seeing a more comprehensive list of micro-ORMs with source code examples of usage (nice!), check out this blog post by James Hughes.

Micro Web Frameworks and OWIN

Like pairing a good beer with the right steak, lightweight micro web frameworks pair well with micro-ORMs. It’s interesting that both of these areas picked up quite a bit this past year.

Some that caught my attention this year are:

  • Named after Sinatra’s daughter, there’s the Nancy micro web framework.
  • FubuMVC is billed as the project that gets out of your way.
  • OpenRasta is a resource oriented web framework for building REST services.

Again, James Hughes provides a comparative list of micro-web frameworks complete with source code examples.

With the proliferation of web frameworks as well as lightweight web servers such as Kayak and Manos de Mono, the need to decouple the one from the other arose. This is where OWIN stepped into the gap.

OWIN stands for Open Web Interface for .NET. It is a project inspired by Rack, a Ruby Webserver interface, meant to decouple web servers from the web application frameworks that run on them.

This project was started as a completely grass roots project in 2011 but has seen amazing pick-up from the community and I believe will have a big impact in 2012.


Miguel de Icaza wrote a monster blog post about the year that he and the Mono (and Xamarin) folks have had in 2011. His post inspired me to write this less monstrous one. It’s a great post, and it’s really inspiring to see how they’ve emerged from the ashes of the great Novell layoff of 2011 to have a great year.

In the following image, you can see me teaching Miguel everything he knows about software development and open source while Scott acts surprised?

What really caught my interest in his post was the note about Microsoft using Mono and Unity3D to build Kinectimals for iOS systems such as the iPad. 2011 seems to be the year of pigs flying for Microsoft.

Xamarin is doing a great job of bringing Mono, and consequently C# and open source to just about every device imaginable!

Open Source Fest at Mix 11

Mix is one of my favorite conferences and I’ve attended every single one. And it has nothing to do with it being in Las Vegas, though that doesn’t hurt one bit.

This year was special due to the efforts of John Papa (whose name makes me wonder if he ever goes all Biggie Smalls on people and sings “I love it when you call me John Papa”). This year, John put together the Open Source Fest at Mix.

This was an event where around 50 projects had stations in a large open room where they could represent their project and talk to attendees. The atmosphere was electric as folks went from table to table learning about useful software directly from the folks who built it.

This is where projects such as Glimpse got noticed and really took off. I’d love to see more of this sort of thing at conferences.

Azure SDKs and GitHub

As I recently wrote on the GitHub blog, Microsoft is actively developing a set of Azure SDKs for multiple platforms (not just .NET) on GitHub. All of these libraries are Apache v2 licensed and actively being developed in the open.

screenshot of the azure sdk

It’s great to see Microsoft not only releasing source code under an open source license, but actively developing it in the open and ostensibly accepting contributions from the public. I look forward to seeing more of this in the future.


GitHub

Last but not least, there’s GitHub. Full disclaimer: I’m an employee of GitHub so naturally my opinion is totally biased. But a bias doesn’t necessarily mean an opinion is wrong.

What I love about GitHub is that just about everybody is there. GitHub hosts a huge number of open source projects, including a large number of the important ones you’ve heard of. Quantity alone isn’t a sign of quality, but it can create network effects. When a site has such a large community, hosting a project there makes it easier to attract contributors because there’s such a large pool to draw from.

I’ve seen this benefit .NET open source projects first hand. Since moving some of my projects there, I’ve received more pull requests. Small independent projects such as JabbR have really attracted a passionate community at GitHub with large numbers of external contributions. Most of the credit must go to the efforts of the great project leads who’ve worked hard to foster a great community, but I think they’d agree that hosting on GitHub certainly makes it easier and more enjoyable.

What did I miss?

Did I miss anything significant in your opinion? Let me know in the comments. What do you think will happen in 2012? Does the number 2012 look like a science fiction year to you? Because it does to me. I can’t believe it’s just about here already. Happy holidays!

UPDATE: Egg on my face. This post was meant to list a few highlights and not be a comprehensive list of all that happened in open source in the .NET space. Even so, in my holiday infused malaise, I was negligent in omitting several highlights. I apologize and updated the post to reflect a few more significant events. Let me know if I missed some obvious ones.

Tags: git, code

My last post covered how to improve your Git experience on Windows using PowerShell, Posh-Git, and PsGet. However, a commenter reminded me that a lot of folks don’t know how to get Git for Windows in the first place.

And once you do get Git set up, how do you avoid getting prompted all the time for your credentials when you push changes back to your repository (or pull from a private repository)?

I’ll answer both of those questions in this post.

Install msysgit

The first step is to install Git for Windows (aka msysgit). The full installer for msysgit 1.7.8 is here. For a detailed walkthrough of the setup steps, check out GitHub’s Windows Setup walkthrough. It’s pretty straightforward. That’ll put git.exe in your path so that Posh-Git will work.
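If you want a quick sanity check that the installer put Git on your PATH, ask it for its version from any console:

```shell
# Prints the installed Git version if git.exe is on the PATH.
# On the msysgit build above you'd see something like "git version 1.7.8".
git --version
```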

Bam! Done! On to the second question. Make sure you set up your SSH keys before moving to the second section.

Using SSH with Posh-Git

One annoyance with Git on Windows is when pushing changes to a repository (or pulling from a private repository), you have to constantly enter your password if you cloned the repository using HTTPS.

Likewise, if you clone with SSH, you also need to enter your passphrase each time. Fortunately, a little program called ssh-agent can securely save your pass phrase (and consequently your sanity) for the session and supply it when needed.
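Posh-Git’s helper wraps the same plain OpenSSH dance you could do by hand in any shell. A sketch (the key path is just the conventional default, not something Posh-Git requires):

```shell
# Start the agent; this exports SSH_AUTH_SOCK and SSH_AGENT_PID into
# the current session so later git/ssh calls can find the agent.
eval "$(ssh-agent -s)"

# Cache the key's passphrase once per session; pushes then reuse the agent.
# Commented out here because it prompts interactively for the passphrase.
# ssh-add ~/.ssh/id_rsa
```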

Update: Mike Chaliy just fixed PsGet so it always grabs the latest version of Posh-Git. If you installed Posh-Git before today using PsGet, you’ll need to update Posh-Git by running the following command:

Install-Module Posh-Git -Force

Unfortunately, at the time that I write this, the version of Posh-Git in PsGet does not support starting an SSH Agent. The good news is, the latest version of Posh-Git direct from their GitHub repository does support SSH Agent.

Since the previous step installed git.exe on my machine, all I needed to do to get the latest version of Posh-Git was to clone the repository.

git clone

This creates a folder named “posh-git” in the directory where you ran the command. I then copied all the files in that folder into the place where PsGet installed posh-git. On my machine, that was:


When I restarted my PowerShell prompt, it told me it could not start SSH Agent.


It turns out that it was not able to find the “ssh-agent.exe” executable. That file is located in C:\Program Files (x86)\Git\bin, but that folder isn’t automatically added to your PATH by msysgit.

If you don’t want to add this path to your system PATH, you can update your PowerShell profile script so it only applies to your PowerShell session. Here’s the change I made.

$env:path += ";" + (Get-Item "Env:ProgramFiles(x86)").Value + "\Git\bin"

On my machine that script is at:


The next time I opened my PowerShell prompt, I was greeted with a request for my pass phrase.


After typing in my super secret pass phrase, once at the beginning of the session, I was set. I could clone some private repositories and push some changes without having to specify my pass phrase each time. Nice. Secure. Convenient.

The Start-SshAgent Command

The reason that I get the ssh-agent prompt when starting up PowerShell is because when I installed Posh-Git, it updated my profile to load in their example profile:


That profile script is calling the Start-SshAgent command which is included with Posh-Git. If you don’t like their profile example, you can manually start ssh-agent by calling the Start-SshAgent command.

Tags: code, git

I’m usually not one to resort to puns in my blog titles, but I couldn’t resist. Git it? Git it? Sorry.

Ever since we introduced PowerShell into NuGet, I’ve become a big fan. I think it’s great, yet I’ve heard from so many other developers that they have no time to try it out. That it’s “on their list” and they really want to learn it, but they just don’t have the time.

But here’s the dirty little secret about PowerShell. This might get me banned from the PowerShell junkie secret meet-ups (complete with secret handshake) for leaking it, but here it is anyways. You don’t have to learn PowerShell to get started with it and benefit from it!

Seriously. If you use a command line today, and switch to PowerShell instead, pretty much everything you do day to day still works without changing much of your workflow. There might be the occasional hiccup here and there, but not a whole lot. And over time, as you use it more, you can slowly start accreting PowerShell knowledge and start to really enjoy its power. But on your time schedule.

UPDATE: Before you do any of this, make sure you have Git for Windows (msysgit) installed. Read my post about how to get this set up and configured.

There’s a tiny bit of one time setup you do need to remember to do:

Set-ExecutionPolicy RemoteSigned

Note: Some folks simply use Unrestricted instead of RemoteSigned. I tend to play it safe until shit breaks.

So with that bit out of the way, let’s talk about the benefits.


If you do any work with Git on Windows, you owe it to yourself to check out Posh-Git. In fact, there’s also Posh-HG for mercurial users and even Posh-Svn for those so inclined.

Once you have Posh-Git loaded up, your PowerShell window lights up with extra information and features when you are in a directory with a git repository.


Notice that my PowerShell prompt includes the current branch name as well as information about the current status of my index. I have 2 files added to my index ready to be committed.

More importantly though, Posh-Git adds tab expansions for Git commands as well as your branches! The following animated GIF shows what happens when I hit the tab key multiple times to cycle through my available branches. That alone is just sublime.


Install Posh-Git using PsGet

You’re ready to dive into Posh-Git now, right? So how do you get it? Well, you could follow all those pesky directions on the GitHub site. But we’re software developers. We don’t follow no stinkin’ list of instructions. It’s time to AWW TOE MATE!

And this is where a cool utility named PsGet comes along. There are several implementations of “PsGet” around, but the one I cover here is so dirt simple to use I cried the first time I used it.

To use Posh-Git, I only needed to run the following two commands:

(new-object Net.WebClient).DownloadString("") | iex
install-module posh-git

Here’s a screenshot of my PowerShell window running the commands. Once you run them, you’ll need to close and re-open the PowerShell console for the changes to take effect.

Both of these commands are pulled right from the PsGet homepage. That’s it! Took me no effort to do this, but suddenly using Git just got that much smoother for me.

Many thanks to Keith Dahlby for Posh-Git and Mike Chaliy for PsGet. Now go git it!

Tags: mvc, tdd, code, razor

Given how central JavaScript is to many modern web applications, it is important to use unit tests to drive the design and quality of that JavaScript. But I’ve noticed that there are a lot of developers who don’t know where to start.

There are many test frameworks out there, but the one I love is QUnit, the jQuery unit test framework.


Most of my experience with QUnit is writing tests for a client script library such as a jQuery plugin. Here’s an example of one QUnit test file I wrote a while ago (so you know it’s nasty).

You’ll notice that the entire set of tests is in a single static HTML file.

I saw a recent blog post by Jonathan Creamer that uses ASP.NET MVC 3 layouts for QUnit tests. It’s a neat approach that consolidates all the QUnit boilerplate into a single layout page. This allows you to have multiple test files without duplicating that boilerplate.

But there was one thing that nagged me about it. For each new set of tests, you need to add an action method and a corresponding view. ASP.NET MVC does not allow rendering a view without a controller action.

Controller-Less Views

The idea of controller-less views is one that folks have tossed around, but all sorts of design issues come up when you consider it. For example, how do you request such a view directly? If you allow that, what happens when the view is intended to be rendered only by a controller action? Now you have two ways to access that view, one of which is probably incorrect. And so on.

However, there is another lesser known framework (at least, lesser known to ASP.NET MVC developers) from the ASP.NET team that pretty much provides this ability!

ASP.NET Web Pages with Razor Syntax

It’s a product called ASP.NET Web Pages that is designed to appeal to developers who prefer an approach to web development that’s more like PHP or classic ASP.

Aside: I’d like to go on record and say I hated that name from the beginning because it causes so much confusion. Isn’t everything I do in ASP.NET a web page?

A Web Page in ASP.NET Web Pages (see, confusing!) uses Razor syntax inline to render out the response to a request. ASP.NET Web Pages also supports layouts. This means we can create an approach very similar to Jonathan’s, but we only need to add one file for each new set of tests. Even better, this approach works for both ASP.NET MVC 3 and ASP.NET Web Pages.

The Code

The code to do this is straightforward. I just created a folder named test which will contain all my unit tests. I added an _PageStart.cshtml file to this directory that sets the layout for each page. Note that this is equivalent to the _ViewStart.cshtml page in ASP.NET MVC.

    @{
        Layout = "_Layout.cshtml";
    }

The next step is to write the layout file, _Layout.cshtml. This contains the QUnit boilerplate along with a placeholder (the RenderBody call) for the actual tests.

    <!DOCTYPE html>
    <html>
    <head>
        <title>@Page.Title</title>
        <link rel="stylesheet" href="/content/qunit.css" />
        <script src="/Scripts/jquery-1.7.1.min.js"></script>
        <script src="/scripts/qunit.js"></script>
        @RenderSection("Javascript", false)
    </head>
    <body>
        @* Tests are written in the body. *@
        @RenderBody()
        <h1 id="qunit-header">
            @(Page.Title ?? "QUnit tests")
        </h1>
        <h2 id="qunit-banner"></h2>
        <h2 id="qunit-userAgent"></h2>
        <ol id="qunit-tests"></ol>
        <a href="/tests">Back to tests</a>
    </body>
    </html>

And now, one or more files that contain the actual test. Here’s an example called footest.cshtml.

    @{
        Page.Title = "FooTests";
    }
    @if (false) {
        // OPTIONAL! QUnit script (here for intellisense)
        <script src="/scripts/qunit.js"></script>
    }

    <script src="/scripts/calculator.js"></script>

    <script type="text/javascript">
        $(function () {
            // calculator_tests.js
            module("A group of tests gets a module");
            test("First set of tests", function () {
                var calc = new Calculator();
                ok(calc, "My calculator is O.K.");
                equals(calc.add(2, 2), 4, "shit broken");
            });
        });
    </script>

You’ll note that I have this funky if (false) block in the code. That’s to work around a current limitation in Razor so that JavaScript Intellisense for QUnit works in this file. If you don’t care for Intellisense, you don’t need it. I hope that in the future, Razor will pick up the script in the layout and you won’t need this either way.
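The calculator.js file under test isn’t shown in the post. Just as a sketch, here’s a minimal implementation that would satisfy the test’s assertions — the Calculator name and add method are taken from the test above, everything else is assumed:

```javascript
// Hypothetical sketch of calculator.js -- not from the original post.
// The test constructs `new Calculator()` and expects `add(2, 2)` to be 4.
function Calculator() { }

Calculator.prototype.add = function (a, b) {
    return a + b;
};
```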

With this in place, adding a new test with the proper QUnit boilerplate is very easy. Just add a .cshtml file, set the title for the tests, and then add the script you’re testing and the test script into the same file.

The last step is to create an index of all the tests. I wrote the following index.cshtml file that creates a list of links for each set of tests. It simply iterates through every test file and generates a link. One nifty little perk of using ASP.NET Web Pages is that you can leave off the extension when you request the file.

    @using System.IO
    @{
        Layout = null;

        var files = from path in
                    Directory.GetFiles(Server.MapPath("./"), "*.cshtml")
                    let fileName = Path.GetFileNameWithoutExtension(path)
                    where !fileName.StartsWith("_")
                    && !fileName.Equals("index", StringComparison.OrdinalIgnoreCase)
                    select fileName;
    }
    <!DOCTYPE html>
    <html>
    <body>
        <h1>QUnit tests</h1>
        <ul>
        @foreach (var file in files) {
            <li><a href="@file">@file</a></li>
        }
        </ul>
    </body>
    </html>
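For those who don’t read LINQ fluently, the filtering the page performs can be sketched in plain JavaScript (the function name is invented purely for illustration):

```javascript
// Mirrors the index page's query: take the .cshtml files in the folder,
// drop the extension, and skip helper files (leading "_") and index itself.
function testPageNames(fileNames) {
    return fileNames
        .filter(function (f) { return /\.cshtml$/i.test(f); })
        .map(function (f) { return f.replace(/\.cshtml$/i, ""); })
        .filter(function (name) {
            return name.charAt(0) !== "_" &&
                   name.toLowerCase() !== "index";
        });
}
```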

The output of this page isn’t pretty, but it works. When I navigate to /test I see a list of my test files:


Here’s the contents of my test folder when I’m done with all this.



I personally haven’t used this approach yet, but I think it could be a nice approach if you tend to have more than one QUnit test file in your projects and you tend to customize the boilerplate for those tests.

I tend to just use a static HTML file, but so far, most of my QUnit tests are for a single JavaScript library. But this approach might come in handy when I get around to testing the JavaScript in the NuGet gallery.

personal comments edit

Hubot stache me.

Well the poll results are in and you guys were very close! I was taken aback by the intensity of the interest in where I would end up. Seriously, I’m honored. But then I thought about it for a moment and figured, there must be a betting pool on this. These folks don’t care that much.

Today is my first day as a GitHub employee! In other words, I am now a GitHubber, a Hubbernaut, a GitHubberati. Ok, I made that last one up.

If you haven’t heard of GitHub, it’s a site that makes it frictionless to collaborate on code. Some would call it a source code hosting site, or a forge, but it goes way beyond that. Their motto is “Social Coding”, and they mean it. They’ve turned shipping software into a fun social activity. It’s great!

Beyond a great product, they’ve built a great company culture. From everything I’ve seen and read, GitHub has figured out how to make a great work environment. They optimize for happiness and I believe that’s resulted in a great product and a lot of success. I’ll talk about that some more another time. For now, let’s talk about…

What will I be doing at GitHub?

According to my offer letter, my title is “Windows Badass”, but the way I see it, I will do whatever I can to help GitHub be even more awesome. It’s going to take some creative thinking because it’s already pretty damn cool, but I’ll figure something out.  My first idea for adding more cowbell was rejected, but I’m just finding my footing. I’ll get the hang of it.

More specifically, I plan to help GitHub appeal to more developers who code for Windows and the .NET platform. For example, take a look at the TIOBE language index.


Now take a look at this chart from the GitHub languages page (no longer around).

github languages

See something missing? Yes, oh mah gawd! LOLCODE is not there!!!

Ok, besides that. See something else missing? Despite the fact that TIOBE ranks it as the fourth most popular language, C# doesn’t make it into the top ten at GitHub. I’d like to change that!

I’ve always been a big proponent of open source on .NET. Pretty much everything I worked on at Microsoft was or became open source (I did work on a Web Form control that wasn’t open sourced, but we don’t talk about that much).

I will continue to work to grow a healthy open source ecosystem on .NET and Windows. I hope to see more .NET developers contributing to open source and doing it on GitHub.

This might include making the website more friendly to Windows developers, working on a Windows client for GitHub, and continuing to work on NuGet, among other things. One of the appealing aspects of GitHub to me was how much they got NuGet. Perhaps more so than many at Microsoft.

Why Bother?

You might wonder, why bother?

Well, there’s the simple business answer. The more open source developers there are, the more potential customers GitHub has. But we have larger aspirations than that as well.

When trying to build a case for releasing more software as open source at Microsoft, I once asked Miguel de Icaza, what’s in it for Microsoft? Why do it?

His response was something along the lines of bla bla bla bla. But there was one thing that he said that struck me.

A rising tide lifts all boats.

When I first read that, I thought he wrote “tilde” and I was really confused about what a rising tilde had to do with anything.

But it makes sense to me now. As I wrote in a recent post talking about software communities,

The interchange of ideas between these disparate technology communities can only result in good things for everyone.

There are millions of .NET developers, but a disproportionately small number of them are involved in open source projects. If we increase that just a tiny bit, that increases the pool of ideas floating around in the larger software community. Ideas backed by code that anybody can look at, incorporate, tweak.

The nice thing here is I think a healthy .NET OSS ecosystem is a good thing for everyone. Good for GitHub. Good for Microsoft. Good for the software industry.

Am I moving?

GitHub is located in an amazing space in San Francisco. When I visited, Hubot pumped in Daft Punk via overhead speakers as people coded away. That alone nearly sealed the deal for me. The fine scotch we sipped as we talked about software didn’t hurt either.

But alas, as much as San Francisco is a great city, my family and I love it here in Washington, so I will work as a remote employee. Fortunately, GitHub is well suited for remote employees. And this gives me a great excuse to visit SF often!

My little octocats agree, this is a good thing.


If you’ve been a fan of my blog or Twitter account, I hope you stick around. I’ll still be blogging about ASP.NET MVC, NuGet, etc. But you can expect my blog will also expand to cover new topics.

It’ll be an adventure.

nuget, code comments edit

So my last day at Microsoft ended up being a very long one as the NuGet team worked late into the evening to deploy an updated version of the NuGet Gallery. I’m very happy to be a part of this as my last act as a Microsoft employee. This is a complete rewrite of the gallery.

Why a rewrite? We’ve learned a lot since we first launched, and our needs have evolved to the point where a rewrite made sense. The new implementation is a vanilla ASP.NET MVC 3 application and highly optimized to be a gallery with just the features we need.

For example, we made extensive use of MVC Mini Profiler to ensure pages make the fewest database queries necessary. Also, the site is now hosted in Azure!

What’s in this new implementation?

There are a lot of great improvements. I won’t provide a comprehensive list, but I will provide a taste. Matthew and others will write about the improvements in more detail:

  • Search on every page! This seems obvious, but we didn’t have this in the old gallery. That deficiency is now just a bad memory. Also, the search is way faster!
  • Package owners are displayed more prominently. In the old gallery, the owners of the package weren’t displayed. Anywhere. Which was a terrible experience because the owners are the people who matter. A package owner is associated with an account. The “author” of a package is simply metadata and could be anyone.
  • Owner profiles. Click on a package owner to see the package owner’s profile. Today, the only thing you see is a gravatar for the owner and the list of packages that person owns. In the future, we might include more profile information.
  • Adding a package owner requires acceptance. In the past, you could add anyone else as an owner of your package and they’d immediately become an owner of a package. Now that we show the list of owners next to a package, that’s not such a good thing. In the new gallery, when you try and add an owner, the gallery sends them an email inviting them to become an owner. This way MyCrappyPackage can’t add you as an owner as a way of boosting their reputation at the expense of yours.
  • Package stats are displayed much more prominently.
  • Package unlisting. Packages can now be unlisted. This effectively hides the package, but the package is still used to resolve dependencies.
  • Cleaner markup and design. The HTML markup is way cleaner and streamlined. For example, we reduced the CSS files from 20 to 1.
  • Cleaner URLs. For example, the package feed URL is now much cleaner. In the future, we’ll probably use content negotiation so we won’t even need versioned URLs for the package feed. The NuGet 1.5 client will continue to work.
  • And it’s WAY FASTER! I almost forgot to mention just how much faster the gallery is now than before.

What about NuGet 1.6?

There are some features of the Gallery you won’t see until we release NuGet 1.6. We want to make sure the site works well before we deploy NuGet 1.6. Once we do that, you’ll also see support for SemVer (Semantic Versioning) and Prerelease packages in the Gallery.

personal comments edit

Well, as I wrote before, today is my last day at Microsoft. Last night we had our office Holiday party in the observation deck and lounge of the Space Needle. The party was just fantastic and we were lucky to have a nice clear evening with spectacular views. What a great way to go!

I had a brief exit interview where I handed over my badge with an air of finality. However, I am still an employee until midnight tonight. So it’s not so final just yet. Which is a good thing as the NuGet team is working to deploy the new gallery tonight if all goes well. Once that’s been up for a few days and we’re comfortable with it being stable, we’ll release NuGet 1.6.


In the meanwhile, my office has been stripped of all the good equipment, including the crossbows that I bequeath to my co-workers who remain, much as they were bequeathed unto me. Here, you can see a shot of my co-workers taking shots at me. Yes, that’s David Fowler of SignalR and JabbR fame, and Scott Hanselman, of fivehead fame, who needs no introduction.

I will miss working with all of my friends at Microsoft dearly, but seriously, I live 2 miles away, so don’t be a stranger all of a sudden. And to all of you who have supported me at Microsoft via comments on my blog, tweets on Twitter, and other encouraging means. Thank you!

But just because I’m leaving, that doesn’t mean you have to leave too. I’ll still be blogging here and tweeting on Twitter so do stick around as I begin my new journey at REDACTED GitHub!

Tags: microsoft

personal, nuget comments edit

It’s not every day you write this sort of blog post. And you hope it’s not something you do so often that you ever get good at it. I’m certainly sucking up a storm here.

Just last month I hit my four year mark at Microsoft. I reflected on the sheer joy I experienced working with such smart people on cool projects. I’ve been very lucky and fortunate to be able to speak about these projects at many conferences, meeting so many interesting attendees. It’s been a real blast.

Today, I write a different sort of post. It was a tough decision to make, but I’ve decided to leave Microsoft to try something different. This is my last week as a Microsoft employee. On Monday, December 5, 2011 I’ll come into the office, hand over my card key, the launch codes, and the Amex card, and then experience a Microsoft exit interview. It will be interesting.

But before I continue, there are two things I want to make crystal clear:

  1. I will still be involved with the .NET community and development.
  2. I will still work on NuGet.
  3. I’m known for off-by-one errors and lame jokes.

What’s Next?

I’ll let you know on December 7, when I start a new gig. My new company often announces new employees and I didn’t want to spoil the surprise! I’m very excited about it as it’s a position that will keep me involved in .NET and working on NuGet, but will also let me stretch into multiple other technologies beyond .NET.

I’m not leaving .NET

The way I see it, the .NET community isn’t a place you just leave. A community is a set of relationships among people who hold some common goals or ideals. The people I think are interesting today, will still be interesting on December 7. Well, most of you at least.

Rather, I like to think that I will focus more on being a member of a larger software community, as I wrote about recently. It’s one thing to write about it, but I hope to better live it in the future.

So while I’m not leaving .NET, I am also arriving at Macs, Ruby, and Node.js and whatever other technologies I need to get the job done. I look forward to getting my hands dirty building things with these other technologies in addition to .NET.

What About NuGet?

As I mentioned earlier, I’ll continue to work on the NuGet open source project as a core contributor. From the Outercurve Foundation’s side of things, I’ll also remain listed as a project lead, though most of the day-to-day responsibilities will transfer to a Program Manager on the Microsoft side of things. We have yet to figure out in detail how we’ll share responsibilities.

This is possible because there’s an interesting distinction between the NuGet open source project and the NuGet based product that Microsoft ships. I should write about this another time. For the time being, just know I’ll continue to be heavily involved in NuGet once I ramp up in my new job.

What about ASP.NET MVC?

ASP.NET MVC has been a joy to work on. It’s pioneered so much change at Microsoft. Leaving it will be hard, especially with all the cool stuff coming down the pike I wish I could tell you about. Suffice to say, ASP.NET MVC is a mature product in good hands with a strong team in place. I’m not worried about it at all.

In fact, there’s a lot of good stuff coming from the overall team that’s been the result of a long succession of baby steps. I can’t talk about it yet, but I can say that knowing this made my decision especially difficult to make.

Anything Else?

I will still be speaking at CodeMania in New Zealand in March 2012. I made sure to contact the organizers in case they wanted to change their minds given my news but they’re happy to have me speak.

I’m still happy to speak about NuGet, ASP.NET MVC, or anything else for that matter if you have a conference you think I’d be a good fit for.

I will miss working at Microsoft and being involved with the community in that capacity. But I am also excited about this new opportunity to work with the community in a different capacity.

Next week, I’ll tell you about what could possibly draw me away from Microsoft. I hope you’ll stick around., mvc, code, razor comments edit

Donut caching, the ability to cache an entire page except for a small region of the page (or set of regions), has been conspicuously absent from ASP.NET MVC since version 2.

MMMM Donuts! Photo by Pzado at

This is something that’s on our Roadmap for ASP.NET MVC 4, but we have yet to flesh out the design. In the meanwhile, there’s a new NuGet package written by Paul Hiles that brings donut caching to ASP.NET MVC 3. I haven’t tried it myself yet, so be forewarned, but judging by the blog post, Paul has done some extensive research into how output caching works.

One issue with his approach is that to create “donut holes”, you need to call an action from within your view. That works for ASP.NET MVC, but not for ASP.NET Web Pages. What if you simply want to carve out a region in your view that isn’t cached?

Well, to implement such a thing requires changes to Razor itself to support substitution caching. I’ve been tasked with the design of this, but I’ve been so busy that I’ve fallen behind. So I’m going to sketch some thoughts here and get your input, and then turn in your work as if I had done it. Ha!

Ideally, Razor should have first class support for carving out donut holes. Perhaps something like:

<h1>This entire view is cached</h1>
@nocache {
  <div>But this part is not. @DateTime.Now</div>
}

As this seems to be the most common scenario for donut holes, I like the simplicity of this approach. However, there may be times when you do want the hole cached, but at a different interval than the rest of the page.

<h1>The entire view is cached for a day</h1>
@cache(TimeSpan.FromSeconds(10)) {
  <div>But this part is cached for 10 seconds. @DateTime.Now</div>
}

If we have the second cache directive, we probably don’t really need the nocache directive as it’s redundant. But since I think it’s the most common scenario, I’d want to keep it anyways.

The final question is whether these should be actual Razor directives or simply methods. I haven’t dug into Razor enough to know the answer, but my gut feeling is that it would require changes to Razor itself and can’t be added as method calls, since method calls run too late.
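To make the underlying technique concrete, here is a toy sketch of post-cache substitution in JavaScript — all names are invented, and this is only a model of the idea, not anyone’s actual implementation: the page shell is rendered and cached once with a placeholder token, and only the donut hole is re-rendered per request.

```javascript
// Toy model of donut caching via post-cache substitution.
var HOLE = "__DONUT_HOLE__";
var cachedShell = null;

function renderPage(renderHole) {
    if (cachedShell === null) {
        // The expensive full-page render happens once and is cached,
        // with the hole left as a placeholder token.
        cachedShell = "<h1>Cached shell</h1><div>" + HOLE + "</div>";
    }
    // On every request, only the hole is rendered and substituted in.
    return cachedShell.replace(HOLE, renderHole());
}
```

A real implementation would also need per-page cache keys and expiration, which is exactly the design work the directives above would have to nail down.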

What do you think of this approach?

code comments edit

While attending Oredev 2011, I had an interesting conversation with Corey Haines about his perception of the Ruby community as compared to the .NET community.

One thing he suggested is that the .NET community seems a bit insular and self-isolating. He noted that when he attended .NET user groups, he only saw folks he knew to be .NET developers. But when he attends Ruby, Scala, NodeJS, Erlang, etc. user groups, he sees many of the same people at these meetups.

While I’m not completely against identifying yourself as a .NET developer to indicate your primary focus, I do see what Corey is getting at. Rather than only seeing ourselves as .NET developers, it’s just as important to also see ourselves as software developers.

We should recognize that we’re part of this larger cosmopolitan software community. We have a lot to learn from this greater community. Just as importantly, our community also has much to offer to the larger community!

As a good friend once told me, a rising tide lifts all boats. The interchange of ideas between these disparate technology communities can only result in good things for everyone.

I’ve been grateful that folks such as Nick Quaranto have this view. Although he’s one of those hippie Ruby folks and runs a gem hosting site (which some might see as a competitor to NuGet), he’s been extremely helpful and generous with advice for the NuGet team. To me, that’s what community is about. Not isolating oneself from ideas simply because they come from someone who’s eschewed curly braces.

The good news is that I think the .NET community is actually further along in this than it gets credit for. Podcasts such as Herding Code have a very polyglot bent. Even .NET Rocks, seen as the bastion of .NET, has recently expanded its archive with topics such as node.js and Modernizr.

So if you identify yourself as a .NET developer, well you’re in good company. There’s a lot of interesting .NET developers around. At the same time, I encourage you to reach across the aisle and learn a thing or two about a different technology. Maybe even hoist a beer with one of those hippie rubyists or smug clojure developers!

After all, someday we’re all going to end up as JavaScript developers anyways.

code, personal comments edit

Once in a while folks ask me for details about the hardware and software that hosts my blog. Rather than write about it, I’ll let a photo provide all the details that you need.

There you have it.


Well actually™, my blog runs on a bit more hardware than that these days. Especially after the Great Hard-Drive Failure of 2009. As longtime readers of my blog might remember, nearly two years ago, this blog went down in flames due to a faulty hard-drive on the hosting server.

My hosting provider, CrystalTech (now rebranded to be the Web Services home of The Small Business Authority), took regular backups of the server, but I hosted my blog in a virtual machine. As it turns out, the backups did not include the VM because it was always “in use”. In order to back up a virtual machine, the backup process needs to take special action to ensure that works.

Today, I still host with CrystalTech in a large part due to their response to the great hard-drive meltdown. First and foremost, they didn’t jump to blame me. They focused on fixing the problem at hand. In the past, I’ve hosted with other providers who excelled at making you feel that anything wrong was your fault. Ever been in a relationship like that?

Once things were settled, they worked with me to figure out what systematic changes they should make to ensure this sort of thing doesn’t happen again. Hard drives will fail. You can’t prevent that. But you can ensure that the data customers care about are backed up and verified.

Not only that, they hooked me up with a pretty nice new dedicated server.

Even though they now are prepared to ensure VMs are backed up, I now host on bare metal, in part because my other tenant moved off of the server so I don’t really need to share it anymore. All miiiiiine!


  • Case: 2U dedicated server
  • Processors: 2x Intel Xeon CPU 3.20 GHz (1 core, 2 logical processors) x64
  • Memory: 4.00 GB RAM
  • OS Hard Drive: C: 233 GB RAID 1 (2 physical drives)
  • Data Hard Drive: D: 467 GB RAID 5 (3 physical drives)


  • OS: Windows Server 2008 Datacenter SP2
  • Database: SQL Server 2008
  • Web Server: IIS 7 running ASP.NET 4
  • Blog: Subtext
  • Backup: In addition to the machine backups, I have a scheduled task that 7z-archives my web directories and also takes a SQL backup into a backups folder. Windows Live Mesh syncs those backup files to my home machine.

This server hosts the following sites:

For some of these sites, I plan to migrate them to other cloud based solutions. For example, rather than have my own NuGet feed, I’ll just use a feed.

Even so, I plan to stay on this hardware for as long as The Small Business Authority lets me. It’s a great way for me to keep my system administration skills from completely atrophying and I like having a server at my disposal.

So thanks again to The Small Business Authority (though I admit, I liked CrystalTech as a name better) for hosting this blog! And thank you for reading!

personal comments edit

As I mentioned in my last post, I have an overnight stopover in Reykjavik, Iceland. After checking into my hotel at an ungodly early hour (which ended up being really late for me Seattle time), my first order of business was to head over to the Blue Lagoon.


No, not that Blue Lagoon! This one!


Look at that steam coming off the water! The Blue Lagoon is a geothermal spa with a 5000 square meter lagoon. The water comes from a nearby geothermal plant and is renewed every two days. According to Wikipedia,

Superheated water is vented from the ground near a lava flow and used to run turbines that generate electricity. After going through the turbines, the steam and hot water passes through a heat exchanger to provide heat for a municipal hot water heating system. Then the water is fed into the lagoon for recreational and medicinal users to bathe in.

Yes, the thought of being cooked in superheated water did cross my mind since my manager reminded me of a scene from some crappy movie where that happened. Fortunately, that did not happen.

This method of heating the lagoon is just one example of how Iceland gets a lot of its power from the heat within the Earth. From another Wikipedia article, emphasis mine,

Five major geothermal power plants exist in Iceland, which produce approximately 26.2% (2010) of the nation’s energy. In addition, geothermal heating meets the heating and hot water requirements of approximately 87% of all buildings in Iceland. Apart from geothermal energy, 75.4% of the nation’s electricity was generated by hydro power, and 0.1% from fossil fuels.

It’s pretty impressive. They plan to go 100% fossil-fuel-free in the near future. Of course, the one downside is that the water here smells slightly of sulfur. I actually don’t mind it.

The spa provides buckets of white silica gel you can put on your face to exfoliate your skin. I found that the sleet being whipped around at 35 miles per hour did a fine job of exfoliating my skin. It nearly exfoliated it off of my face.

Though I have to admit, that was part of the fun. It was novel to be swimming outdoors in November with sleet and wind pouring down, but nice and warm within the heated waters.

I even had time to stop at a waterside bar for a drink.


A good part of the drive to the Lagoon is through a vast lava field that is reminiscent of the photos sent back by the Mars Rover. It’s very easy to catch a bus from your hotel or from the airport to get there and they provide lockers as well as towel and even swimwear rental. They really make it easy to take a quick jaunt over there if you’re just on a stopover in Iceland.

Now I’m warm and dry in my hotel room planning my next step. I would like to do some sightseeing before I meet folks at the bar, but I also like remaining warm and dry. Conundrums!

I think I’ll venture out now and report back later. If you ever find yourself with a long stopover in Iceland, do visit the Blue Lagoon.

personal comments edit

If you’re in the Reykjavik area on November 7th, come join me for a beer-up. A Beer-Up is basically a meet-up, but with lots of beer!

  • When: November 7th, 2011 at 8:00 PM
  • Where: The English Pub (yes, I went all the way to Iceland for an English pub)
  • Why: To talk about ASP.NET, ASP.NET MVC, NuGet, software development, or whatever geeky topics you want. And if we do our jobs right, by the end of the night we’ll discuss life, philosophy, and which direction is my hotel?

Blue Lagoon in Iceland

I’ll be stopping overnight in Reykjavik on my way to Oredev 2011. I’m pretty excited as I’ve always been fascinated by the natural beauty of such a geologically active place. I definitely plan to see the Blue Lagoon geothermal spa (pictured above) during my stay.

If you’re in the area and love to talk about coding, technology, whatever, do join us!

nuget, open source comments edit

We made a recent change to make it easy to update the NuGet documentation. In this post, I’ll cover what the change was, why we made it, and how it makes it easier to contribute to our documentation.

Our docs run as a simple ASP.NET Web Pages application that renders documentation written in the Markdown format. The Markdown text is not stored in a database, but lives in files that are part of the application source code. That allows us to use source control to version our docs.

We used to host the source for the docs site in Mercurial (hg) on CodePlex. Under the old system, it took the following to contribute docs.

  1. Install Mercurial (TortoiseHG for example) if you didn’t already have it.
  2. Fork our repository and clone it to your local machine.
  3. Open it up in Visual Studio.
  4. Make and commit your changes.
  5. Push your changes.
  6. Send a pull request.

It’s no surprise that we don’t get a lot of pull requests for our documentation. Oh, and I didn’t even mention all the steps once we received such a pull request.

As anyone who’s ever run an open source project knows, it’s hard enough to get folks to contribute to documentation in the first place. Why add more roadblocks?

To improve this situation, we moved our documentation repository to GitHub for three reasons:

  1. In-browser editing of files with Markdown preview.
  2. Pull requests can be merged at the click of a button.
  3. Support for deploying to AppHarbor (which CodePlex also has)

With this in place, it’s now easy to be a “drive-by” contributor to our docs. Let’s walk through an example to see what I mean. In this example, I’m posing as a guy named “FakeHaacked” with no commit access to the NuGetDocs repository.

Here’s a sample page from our docs (click for larger). The words at the end of the first paragraph should be links! Doh! I should fix that.


First, I’ll visit the NuGet Docs repository (TODO: Add a link to each page with the path to the Github repository page).


Cool, I’m logged in as FakeHaacked. Now I just need to navigate to the page that needs the correction. All of the documentation pages are in the site folder.

Pro tip: type the letter “t” while on this page to use incremental search to find the page you want to edit.

Here’s the page I want to edit.


Since this file is a Markdown file, I can see a view of the file that’s a nice approximation of what it will look like when it’s deployed. It’s not exactly the same since we have different CSS styles on the production site.

See that blue button just above the content and to the right? That allows me to “fork” this page and edit the file. Forking it, for those not familiar with distributed version control, means it will create a clone of the main repository. I’m free to work and do whatever I want in that clone without worry that I will affect the real thing.

Let’s click that button and see what happens.


Cool, I get a nice editor with preview for editing the page right here in the browser. I’ll go ahead and make those last two references into Markdown formatted links.

When I’m done, I can scroll down, type in a commit message describing the change, and click the blue Propose File Change button.


Once you’re happy with the set of changes you’ve made, click the button to send a pull request. This lets the folks who maintain the documentation know you have changes ready for them to pull in.


And that’s it. You’ve done your part. Thank you for your contribution to our docs! Now let’s look at what happens on the other side. I’ll put on my project maintainer hat and visit the site. Notice I’m logged in as Haacked now and I see there’s an outstanding pull request.

Cool, I can take a look at it, quickly see a diff, and comment on it. Notice that Github was able to determine that this file is safe to automatically merge into the master branch.


All I have to do is click the big blue button, enter a message, and I’m done!


It’s that easy for me to merge in your changes.


You might ask why we don’t use the Github Pages feature (or even Git-backed wikis). We started the docs site before we were on Github and didn’t know about the pages feature.

If I were to start over, I’d probably just use that. Maybe we’ll migrate in the future. One benefit of our current implementation is we get that nice Table of Contents widget generated for us dynamically (which we can probably do with Github Pages and Jekyll) and we can use Razor for our layout template.

The downside of our current approach is that we can’t create new doc pages this way, but I’ll submit a feature request to the Github team and see what happens.

So if you are reading the NuGet docs, and see something that makes you think, “That ain’t right!”, please go fix it! It’s easy and contributing to open source documentation makes you a good person. It’s how I got started in open source.

Oh, and if you happen to be experienced with Git, you can always use the traditional method of cloning the repository to your local machine and making changes. That gives you the benefit of running the site to look at your change.

, nuget comments edit

Recently, a group of covert ninjas within my organization started to investigate what it would take to change our internal build and continuous integration systems (CI) to take advantage of NuGet for many of our products, and I need your input!

Hmm, off by one error slays me again. (Image from Ask A Ninja.)

Ok, they’re not really covert ninjas; that just sounds much cooler than a team of slightly pudgy software developers. Ok, they’ve told me to speak for myself: they’re in great shape!

In response to popular demand, we changed our minds and decided to support Semantic Versioning (SemVer) as the means to specify pre-release packages for the next version of NuGet (1.6).

In part, this is the cause of the delay for this release as it required extensive changes to NuGet and the gallery. I will write a blog post later that covers this in more detail, but for now, you can read our spec on it which is mostly up to date. I hope.

I’m really excited to change our own build to use NuGet because it will force us to eat our own dogfood and feel the pain that many of you feel with NuGet in such scenarios. Until we feel that pain, we won’t have a great solution to it.

A really brief intro to SemVer

You can read the SemVer spec here, but in case you’re lazy, I’ll provide a brief summary.

SemVer is a convention for versioning your public APIs that gives meaning to the version number. Each version has three parts, Major.Minor.Patch.

In brief, these correspond to:

  • Major: Breaking changes.
  • Minor: New features, but backwards compatible.
  • Patch: Backwards compatible bug fixes only.

Additionally, pre-release versions of your API can be denoted by appending a dash and an arbitrary string after the Patch number. For example:

  • 1.0.1-alpha
  • 1.0.1-beta
  • 1.0.1-Fizzlebutt

When you’re ready to release, you just remove the pre-release part and that version is considered “higher” than all the pre-release versions. The pre-release versions are given precedence in alphabetical order (well, technically, lexicographic ASCII sort order).

Therefore, the following is an example from lowest to highest versions of a package.

  • 1.0.1-alpha
  • 1.0.1-alpha2
  • 1.0.1-beta
  • 1.0.1-rc
  • 1.0.1-zeeistalmostdone
  • 1.0.1
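
To make that precedence rule concrete, here’s a minimal Python sketch (my own illustration, not NuGet’s actual implementation) that sorts the list above: a final release outranks any pre-release of the same Major.Minor.Patch, and pre-release labels fall back to plain ASCII string comparison.

```python
def semver_key(version):
    """Sort key for a 'Major.Minor.Patch[-prerelease]' version string."""
    core, dash, prerelease = version.partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    # A final release (no dash) ranks above every pre-release of the
    # same Major.Minor.Patch, so it gets the higher release_rank.
    release_rank = 0 if dash else 1
    return (numbers, release_rank, prerelease)

versions = ["1.0.1", "1.0.1-beta", "1.0.1-alpha2",
            "1.0.1-zeeistalmostdone", "1.0.1-rc", "1.0.1-alpha"]
print(sorted(versions, key=semver_key))
```

Sorting with this key reproduces the lowest-to-highest ordering in the list above.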

How NuGet uses SemVer

As I mentioned before, I’ll write up a longer blog post about how SemVer figures into your package. For now, I just want to make it clear that if you’re using 4-part version numbers today, your packages will still work and behave as before.

It’s only when you specify a 3-part version with a pre-release string that NuGet gets strict about SemVer. For example, NuGet allows 1.0.1-beta but does not allow a version like 1.0.1.2-beta, which combines a fourth version part with a pre-release string.
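
The rule can be illustrated with a hypothetical validator (the regular expressions below are my own sketch, not NuGet’s actual code): a plain four-part version stays valid for backwards compatibility, but a pre-release string may only follow a three-part version.

```python
import re

# Legacy style: exactly four numeric parts, no pre-release label allowed.
LEGACY = re.compile(r"^\d+\.\d+\.\d+\.\d+$")
# SemVer style: exactly three numeric parts, optional pre-release label.
SEMVER = re.compile(r"^\d+\.\d+\.\d+(-[0-9A-Za-z.]+)?$")

def is_valid_package_version(version):
    """Accept legacy 4-part versions or 3-part SemVer versions."""
    return bool(LEGACY.match(version) or SEMVER.match(version))

print(is_valid_package_version("1.0.1-beta"))    # True
print(is_valid_package_version("1.0.1.2"))       # True (legacy 4-part)
print(is_valid_package_version("1.0.1.2-beta"))  # False
```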

How to deal with nightly builds?

So the question I have is, how do we deal with nightly (or continuous) builds?

For example, suppose I start work on what will become 1.0.1-beta. Internally, I may post nightly builds of 1.0.1-beta for others in my team to use. Then at some point, I’ll stamp a release as the official 1.0.1-beta for public consumption.

The problem is, each of those builds needs to have the package version incremented. This ensures that folks can revert to a last-known-good nightly build if a problem comes up. SemVer doesn’t seem to address how to handle internal nightly (or continuous) builds; it’s really focused on public releases.

Note, we’re thinking about this for our internal setup, not for the public gallery. I’ll address that question later.

We had a few ideas in mind.

Stick with the previous version number and change labels just before release

The idea here is that while we’re working on 1.0.1-beta, we version the packages using the alpha label and increment a build number.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.1 (internal build)
  • 1.0.1-alpha.2 (internal build)

A variant of this approach is to append the date (and counter) in number format.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.20101025001 (internal build)
  • 1.0.1-alpha.20101026001 (internal build on the next day)
  • 1.0.1-alpha.20101026002 (another internal build on the same day)

With this approach, when we’re ready to cut the release, we simply change the package to be 1.0.1-beta and release it.

The downside of this approach is that it’s not completely clear that these are internal nightly builds of what will become 1.0.1-beta. They could be builds of 1.0.1-alpha2.
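
A hypothetical helper for producing such date-plus-counter labels (the function name and defaults are my own invention, just to make the format concrete) might look like:

```python
from datetime import date

def nightly_label(base="1.0.1-alpha", day=date(2010, 10, 26), counter=1):
    """Append a date-plus-counter build label, e.g. 1.0.1-alpha.20101026001."""
    # %Y%m%d renders the date as 20101026; the counter gets zero-padded
    # to three digits so multiple builds on one day sort correctly.
    return "{0}.{1:%Y%m%d}{2:03d}".format(base, day, counter)

print(nightly_label())           # 1.0.1-alpha.20101026001
print(nightly_label(counter=2))  # 1.0.1-alpha.20101026002
```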

Yet another variant of this approach is to name our public releases with an even Minor or Patch number and our internal releases with an odd one. So when we’re ready to work on 1.0.2-beta, we’d version the package as 1.0.1-beta. When we’re ready to release, we change it to 1.0.2-beta.

Have a separate feed with its own versioning scheme

Another thought was to simply have a completely separate feed with its own versioning scheme. So you can choose to grab packages from the stable feed, or the nightly feed.

In the nightly feed, the package version might just be the date.

  • 2010.10.25001
  • 2010.10.25002
  • 2010.10.25003

The downside of this approach is that it’s not clear at all what release version these will apply to. Also, when you’re ready to promote one to the stable feed, you have to move it in there and completely change the version number.

Support an optional Build number for Semantic Versions

For NuGet 1.6, you can still use a four-part version number. But NuGet is strict if the version is clearly a SemVer version. For example, if you specify a pre-release string, such as 1.0.1-alpha, NuGet will not allow a fourth version part.

But we had the idea that we could extend SemVer to support this concept of a build number. This might be off by default in the public gallery, but could be something you could turn on individually. What it would allow you to do is continue to push new builds of 1.0.1-alpha with an incrementing build number. For example:

  • 1.0.1-beta.0001 (nightly)
  • 1.0.1-beta.0002 (nightly)
  • 1.0.1-beta.0003 (nightly)
  • 1.0.1-beta (public)

Note that unlike a standard 4-part version, 1.0.1-beta is a higher version than 1.0.1-beta.0003.

While I’m hesitant to suggest a custom extension to SemVer, it makes a lot of sense to me. After all, this would be a convention applied to internal builds and not for public releases.
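
Here’s a small Python sketch (my own illustration of the proposed, non-standard convention) showing how that ordering could work: a build-numbered label ranks below the same label without one.

```python
def build_key(version):
    """Sort key under the proposed build-number convention: within the
    same pre-release label, nightlies rank below the labeled release,
    so 1.0.1-beta.0003 < 1.0.1-beta < 1.0.1."""
    core, dash, label = version.partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    if not dash:
        return (numbers, 1, "", 1, 0)         # final release: highest
    name, dot, build = label.rpartition(".")
    if not dot:
        return (numbers, 0, label, 1, 0)      # public pre-release label
    return (numbers, 0, name, 0, int(build))  # internal nightly build

nightlies = ["1.0.1-beta", "1.0.1-beta.0002", "1.0.1", "1.0.1-beta.0001"]
print(sorted(nightlies, key=build_key))
```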

Question for you

So, do you like any of these approaches? Do you have a better approach?

While I love comments on my blog, I would like to direct discussion to the NuGet Discussions page. I look forward to hearing your advice on how to handle this situation. Whatever we decide, we want to bake first-class support into NuGet to make this sort of thing easier.

code, comments edit

If you’re not familiar with WCF Web API, it’s a framework with nice HTTP abstractions used to expose simple HTTP services over the web. It targets applications that provide HTTP services to various clients such as mobile devices, browsers, and desktop applications.

In some ways, it’s similar to ASP.NET MVC as it was developed with testability and extensibility in mind. There are some concepts that are similar to ASP.NET MVC, but with a twist. For example, where ASP.NET MVC has filters, WCF Web API has operation handlers.


One question that comes up often with Web API is how do you authenticate requests? Well, if you run Web API on ASP.NET (Web API also supports a self-host model), one approach you could take is to write an operation handler and attach it to a set of operations (an operation is analogous to an ASP.NET MVC action).

However, some folks like the ASP.NET MVC approach of slapping on an AuthorizeAttribute. In this blog post, I’ll show you how to write an attribute, RequireAuthorizationAttribute, for WCF Web API that does something similar.

One difference is that in the WCF Web API case, the attribute simply provides metadata, but not the behavior, for authorization. If you wanted to use the existing ASP.NET MVC AuthorizeAttribute in the same way, you could do that as well, but I leave that as an exercise for the reader.

I’ll start with the easiest part, the attribute.

[AttributeUsage(AttributeTargets.Method)]
public class RequireAuthorizationAttribute : Attribute
{
    public string Roles { get; set; }
}

For now, it only applies to methods (operations). Later, we can update it to apply to classes as well if we so choose. I’m still learning the framework so I didn’t want to go bite off too much all at once.

The next step is to write an operation handler. When properly configured, the operation handler runs on every request for the operation that it applies to.

public class AuthOperationHandler 
      : HttpOperationHandler<HttpRequestMessage, HttpRequestMessage>
{
  RequireAuthorizationAttribute _authorizeAttribute;

  public AuthOperationHandler(RequireAuthorizationAttribute authorizeAttribute)
    : base("response")
  {
    _authorizeAttribute = authorizeAttribute;
  }

  protected override HttpRequestMessage OnHandle(HttpRequestMessage input)
  {
    IPrincipal user = Thread.CurrentPrincipal;
    if (!user.Identity.IsAuthenticated)
      throw new HttpResponseException(HttpStatusCode.Unauthorized);

    if (_authorizeAttribute.Roles == null)
      return input;

    var roles = _authorizeAttribute.Roles.Split(new[] { " " },
      StringSplitOptions.RemoveEmptyEntries);
    if (roles.Any(role => user.IsInRole(role)))
      return input;

    throw new HttpResponseException(HttpStatusCode.Unauthorized);
  }
}
The code originally accessed HttpContext.Current, which restricted this operation handler to only work within ASP.NET applications. Hey, I write what I know! Many folks replied to me that I should use Thread.CurrentPrincipal instead, as the code above now does. My brain must have been off when I wrote this to not think of it. :)

Then all we do is ensure that the user is authenticated and, if any roles are specified, in one of those roles. Very simple, straightforward code at this point.

The final step is to associate this operation handler with some operations. In general, when you build a Web API application, the application author writes a configuration class that derives from WebApiConfiguration and either sets it as the default configuration, or passes it to a service route.

Within that configuration class, the author can specify an action that gets called on every request and gives the configuration class a chance to map a set of operation handlers to an operation.

For example, in a sample Web API app, I added the following configuration class.

public class CommentsConfiguration : WebApiConfiguration
{
    public CommentsConfiguration()
    {
        EnableTestClient = true;

        RequestHandlers = (c, e, od) =>
        {
            // TODO: Configure request operation handlers
        };

        this.AppendAuthorizationRequestHandlers();
    }
}
RequestHandlers is a property of type Action<Collection<HttpOperationHandler>, ServiceEndpoint, HttpOperationDescription>.

In general, it would be up to the application author to wire up the authentication operation handler I wrote to the appropriate actions. But I wanted to provide a method that helps with that. That’s the AppendAuthorizationRequestHandlers method in there, which is an extension method I wrote.

public static class AuthorizationConfigurationExtensions
{
  public static void AppendAuthorizationRequestHandlers(
    this WebApiConfiguration config)
  {
    var requestHandlers = config.RequestHandlers;
    config.RequestHandlers = (c, e, od) =>
    {
      if (requestHandlers != null)
        requestHandlers(c, e, od); // Call the original request handler action.
      var authorizeAttribute = od.Attributes
        .OfType<RequireAuthorizationAttribute>()
        .FirstOrDefault();
      if (authorizeAttribute != null)
        c.Add(new AuthOperationHandler(authorizeAttribute));
    };
  }
}
Since I didn’t want to stomp on the existing request handlers, I set the RequestHandlers property to a new action that calls the existing action (if any) and then does my custom registration logic.

I’ll admit, I couldn’t help thinking that if RequestHandlers were an event rather than an action, that sort of logic could be handled for me. Have events fallen out of favor? They do work well to decouple code in this sort of scenario, but I digress.

The interesting part here is that the action’s third parameter, od, is an HttpOperationDescription. This is a description of the operation that includes access to such things as the attributes applied to the method! I simply look to see if the operation has the RequireAuthorizationAttribute applied and if so, I add the AuthOperationHandler I wrote earlier to the operation’s collection of operation handlers.

With this in place, I can now write a service that looks like this:

[ServiceContract]
public class CommentsApi
{
    [WebGet(UriTemplate = "")]
    public IQueryable<Comment> Get()
    {
        return new[] { new Comment {
            Title = "This is neato",
            Body = "Ok, not as neat as I originally thought." }
        }.AsQueryable();
    }

    [WebGet(UriTemplate = "auth"), RequireAuthorization]
    public IQueryable<Comment> GetAuth()
    {
        return new[] { new Comment {
            Title = "This is secured neato",
            Body = "Ok, a bit neater than I originally thought." }
        }.AsQueryable();
    }
}

And route to the Web API service like so:

public class Global : HttpApplication
{
  protected void Application_Start(object sender, EventArgs e)
  {
    RouteTable.Routes.MapServiceRoute<CommentsApi>("comments",
      new CommentsConfiguration());
  }
}

With this in place, a request for /comments allows anonymous, but a request for /comments/auth requires authentication.

If you’re interested in checking this code out, I pushed it to my CodeHaacks Github repository as a sample. I won’t make this into a NuGet package until it’s been thoroughly vetted by the WCF Web API team because it’s very likely I have no idea what I’m doing. I’d rather one of those folks make a NuGet package for this.

And if you’re wondering why I’m writing about Web API, we’re all part of the same larger team now, so I figured it’s good to take a peek at what my friends are up to.