git, code

My last post covered how to improve your Git experience on Windows using PowerShell, Posh-Git, and PsGet. However, a commenter reminded me that a lot of folks don’t know how to get Git for Windows in the first place.

And once you do get Git set up, how do you avoid getting prompted all the time for your credentials when you push changes back to your repository (or pull from a private repository)?

I’ll answer both of those questions in this post.

Install msysgit

The first step is to install Git for Windows (aka msysgit). The full installer for msysgit 1.7.8 is here. For a detailed walkthrough of the setup steps, check out GitHub’s Windows Setup walkthrough. It’s pretty straightforward. That’ll put git.exe in your path so that Posh-Git will work.

Bam! Done! On to the second question. Make sure you set up your SSH keys before moving to the second section.

Using SSH with Posh-Git

One annoyance with Git on Windows is when pushing changes to a repository (or pulling from a private repository), you have to constantly enter your password if you cloned the repository using HTTPS.

Likewise, if you clone with SSH, you also need to enter your passphrase each time. Fortunately, a little program called ssh-agent can securely save your pass phrase (and consequently your sanity) for the session and supply it when needed.

Update: Mike Chaliy just fixed PsGet so it always grabs the latest version of Posh-Git. If you installed Posh-Git before today using PsGet, you’ll need to update Posh-Git by running the following command:

Install-Module Posh-Git -Force

Unfortunately, at the time that I write this, the version of Posh-Git in PsGet does not support starting an SSH Agent. The good news is, the latest version of Posh-Git direct from their GitHub repository does support SSH Agent.

Since the previous step installed git.exe on my machine, all I needed to do to get the latest version of Posh-Git is to clone the repository.

git clone https://github.com/dahlbyk/posh-git.git

This creates a folder named “posh-git” in the directory where you ran the command. I then copied all the files in that folder into the place where PsGet installed posh-git. On my machine, that was:

C:\Users\Haacked\Documents\WindowsPowerShell\Modules\posh-git
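
A copy command along these lines does the trick (a sketch only; adjust the destination to wherever PsGet installed posh-git on your machine):

Copy-Item .\posh-git\* "$HOME\Documents\WindowsPowerShell\Modules\posh-git" -Recurse -Force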

When I restarted my PowerShell prompt, it told me it could not start SSH Agent.

powershell-ssh-agent-not-found

It turns out that it was not able to find the “ssh-agent.exe” executable. That file is located in C:\Program Files (x86)\Git\bin, but that folder isn’t automatically added to your PATH by msysgit.

If you don’t want to add this path to your system PATH, you can update your PowerShell profile script so it only applies to your PowerShell session. Here’s the change I made.

$env:path += ";" + (Get-Item "Env:ProgramFiles(x86)").Value + "\Git\bin"

On my machine that script is at:

C:\Users\Haacked\Documents\WindowsPowerShell\Microsoft.Powershell_profile.ps1

The next time I opened my PowerShell prompt, I was greeted with a request for my pass phrase.

powershell-ssh-agent

After typing in my super secret pass phrase, once at the beginning of the session, I was set. I could clone some private repositories and push some changes without having to specify my pass phrase each time. Nice. Secure. Convenient.

The Start-SshAgent Command

The reason that I get the ssh-agent prompt when starting up PowerShell is because when I installed Posh-Git, it updated my profile to load in their example profile:

C:\Users\Haacked\Documents\WindowsPowerShell\Modules\posh-git\profile.example.ps1

That profile script is calling the Start-SshAgent command which is included with Posh-Git. If you don’t like their profile example, you can manually start ssh-agent by calling the Start-SshAgent command.
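
If you’d rather keep your own profile minimal, here’s a sketch of the relevant pieces pulled together (paths assume a default msysgit install and a posh-git module installed via PsGet; adjust as needed):

# Make ssh-agent.exe and ssh-add.exe visible to Posh-Git.
$env:path += ";" + (Get-Item "Env:ProgramFiles(x86)").Value + "\Git\bin"

# Load Posh-Git for the prompt and tab expansion, then start the agent
# so you only enter your passphrase once per session.
Import-Module posh-git
Start-SshAgent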

code, git

I’m usually not one to resort to puns in my blog titles, but I couldn’t resist. Git it? Git it? Sorry.

Ever since we introduced PowerShell into NuGet, I’ve become a big fan. I think it’s great, yet I’ve heard from so many other developers that they have no time to try it out. That it’s “on their list” and they really want to learn it, but they just don’t have the time.

But here’s the dirty little secret about PowerShell. This might get me banned from the PowerShell junkie secret meet-ups (complete with secret handshake) for leaking it, but here it is anyways. You don’t have to learn PowerShell to get started with it and benefit from it!

Seriously. If you use a command line today, and switch to PowerShell instead, pretty much everything you do day to day still works without changing much of your workflow. There might be the occasional hiccup here and there, but not a whole lot. And over time, as you use it more, you can slowly start accreting PowerShell knowledge and start to really enjoy its power. But on your time schedule.

UPDATE: Before you do any of this, make sure you have Git for Windows (msysgit) installed. Read my post about how to get this set up and configured.

There’s a tiny bit of one-time setup you need to remember to do:

Set-ExecutionPolicy RemoteSigned

Note: Some folks simply use Unrestricted for that instead of RemoteSigned. I tend to play it safe until shit breaks. So with that bit out of the way, let’s talk about the benefits.

Posh-Git

If you do any work with Git on Windows, you owe it to yourself to check out Posh-Git. In fact, there’s also Posh-HG for mercurial users and even Posh-Svn for those so inclined.

Once you have Posh-Git loaded up, your PowerShell window lights up with extra information and features when you are in a directory with a git repository.

posh-git-info

Notice that my PowerShell prompt includes the current branch name as well as information about the current status of my index. I have 2 files added to my index ready to be committed.

More importantly though, Posh-Git adds tab expansions for Git commands as well as your branches! The following animated GIF shows what happens when I hit the tab key multiple times to cycle through my available branches. That alone is just sublime.

ps-tab-expansion

Install Posh-Git using PsGet

You’re ready to dive into Posh-Git now, right? So how do you get it? Well, you could follow all those pesky directions on the GitHub site. But we’re software developers. We don’t follow no stinkin’ list of instructions. It’s time to AWW TOE MATE!

And this is where a cool utility named PsGet comes along. There are several implementations of “PsGet” around, but the one I cover here is so dirt simple to use I cried the first time I used it.

To use posh-git, I only needed to run the following two commands:

(new-object Net.WebClient).DownloadString("http://psget.net/GetPsGet.ps1") | iex
install-module posh-git

Here’s a screenshot of my PowerShell window running the commands. Once you run them, you’ll need to close and re-open the PowerShell console for the changes to take effect.

ps-installing-posh-git

Both of these commands are pulled right from the PsGet homepage. That’s it! Took me no effort to do this, but suddenly using Git just got that much smoother for me.

Many thanks to Keith Dahlby for Posh-Git and Mike Chaliy for PsGet. Now go git it!

asp.net, asp.net mvc, tdd, code, razor

Given how central JavaScript is to many modern web applications, it is important to use unit tests to drive the design and quality of that JavaScript. But I’ve noticed that a lot of developers don’t know where to start.

There are many test frameworks out there, but the one I love is QUnit, the jQuery unit test framework.

qunit-tests-running

Most of my experience with QUnit is writing tests for a client script library such as a jQuery plugin. Here’s an example of one QUnit test file I wrote a while ago (so you know it’s nasty).

You’ll notice that the entire set of tests is in a single static HTML file.
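
If you haven’t seen that style before, here’s a rough sketch of the shape such a single-file test page takes (the file names and the plugin under test are hypothetical):

<!DOCTYPE html>
<html>
<head>
    <title>My plugin tests</title>
    <link rel="stylesheet" href="qunit.css" />
    <script src="jquery-1.7.1.min.js"></script>
    <script src="qunit.js"></script>
    <script src="myplugin.js"></script>
    <script>
        $(function () {
            module("myPlugin");
            test("attaches itself to jQuery", function () {
                ok($.fn.myPlugin, "the plugin is defined");
            });
        });
    </script>
</head>
<body>
    <h1 id="qunit-header">My plugin tests</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
</body>
</html>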

I saw a recent blog post by Jonathan Creamer that uses ASP.NET MVC 3 layouts for QUnit tests. It’s a neat approach that consolidates all the QUnit boilerplate into a single layout page. This allows you to have multiple test files without duplicating that boilerplate.

But there was one thing that nagged me about it. For each new set of tests, you need to add an action method and a corresponding view. ASP.NET MVC does not allow rendering a view without a controller action.

Controller-Less Views

The idea of controller-less views is one that folks have tossed around, but all sorts of design issues come up when you consider it. For example, how do you request such a view directly? If you allow that, what if the view is intended to be rendered by a controller action? Now you have two ways to access that view, one of which is probably incorrect. And so on.

However, there is another lesser known framework (at least, lesser known to ASP.NET MVC developers) from the ASP.NET team that pretty much provides this ability!

ASP.NET Web Pages with Razor Syntax

It’s a product called ASP.NET Web Pages that is designed to appeal to developers who prefer an approach to web development that’s more like PHP or classic ASP.

Aside: I’d like to go on record and say I hated that name from the beginning because it causes so much confusion. Isn’t everything I do in ASP.NET a web page?

A Web Page in ASP.NET Web Pages (see, confusing!) uses Razor syntax inline to render out the response to a request. ASP.NET Web Pages also support layouts. This means we can create an approach very similar to Jonathan’s, but we only need to add one file for each new set of tests. Even better, this approach works for both ASP.NET MVC 3 and ASP.NET Web Pages.

The Code

The code to do this is straightforward. I just created a folder named test which will contain all my unit tests. I added an _PageStart.cshtml file to this directory that sets the layout for each page. Note that this is equivalent to the _ViewStart.cshtml page in ASP.NET MVC.

@{
    Layout = "_Layout.cshtml";
}

The next step is to write the layout file, _Layout.cshtml. This contains the QUnit boilerplate along with a place holder (the RenderBody call) for the actual tests.

<!DOCTYPE html>

<html>
    <head>
        <title>@Page.Title</title>
        <link rel="stylesheet" href="/content/qunit.css " />
        <script src="/Scripts/jquery-1.7.1.min.js"></script>
        <script src="/scripts/qunit.js"></script>

        @RenderSection("Javascript", false)
        @* Tests are written in the body. *@
        @RenderBody()
    </head>
    <body>
        <h1 id="qunit-header">
          @(Page.Title ?? "QUnit tests")
        </h1>
        <h2 id="qunit-banner">
        </h2>
        <h2 id="qunit-userAgent"></h2>
        <ol id="qunit-tests">
        </ol>
        <p>
            <a href="/tests">Back to tests</a>
        </p>
    </body>
</html>

And now, one or more files that contain the actual test. Here’s an example called footest.cshtml.


@{
  Page.Title = "FooTests";
}
@if (false) {
  // OPTIONAL! QUnit script (here for intellisense)
  <script src="/scripts/qunit.js"> </script>
}

<script src="/scripts/calculator.js"></script>


<script>
  $(function () {
    // calculator_tests.js
    module("A group of tests get's a module");
    test("First set of tests", function () {
      var calc = new Calculator();
      ok(calc, "My caluculator is a O.K.");
      equals(calc.add(2, 2), 4, "shit broken");
    });
  });
</script>

You’ll note that I have this funky if (false) block in the code. That’s to work around a current limitation in Razor so that JavaScript Intellisense for QUnit works in this file. If you don’t care about Intellisense, you don’t need it. I hope that in the future, Razor will pick up the script in the layout and you won’t need this either way.

With this in place, adding a new test file with the proper QUnit boilerplate is very easy. Just add a .cshtml file, set the title for the tests, and then add the script you’re testing and the test script into the same file.

The last step is to create an index into all the tests. I wrote the following index.cshtml file that creates a list of links for each set of tests. It simply iterates through every test file and generates a link. One nifty little perk of using ASP.NET Web Pages is you can leave off the extension when you request the file.

@using System.IO;
@{
  Layout = null;

  var files = from path in
  Directory.GetFiles(Server.MapPath("./"), "*.cshtml")
  let fileName = Path.GetFileNameWithoutExtension(path)
  where !fileName.StartsWith("_")
  && !fileName.Equals("index", StringComparison.OrdinalIgnoreCase)
  select fileName;
}

<!DOCTYPE html>
<html>
<head>
    <title></title>
</head>
<body>
    <div>
        <h1>QUnit tests</h1>
        <ul>
        @foreach (var file in files) {
            <li><a href="@file">@file</a></li>
        }
        </ul>
    </div>
</body>
</html>

The output of this page isn’t pretty, but it works. When I navigate to /test I see a list of my test files:

qunit-tests

Here’s the contents of my test folder when I’m done with all this.

solution

Summary

I personally haven’t used this approach yet, but I think it could be a nice approach if you tend to have more than one QUnit test file in your projects and you tend to customize the boilerplate for those tests.

I tend to just use a static HTML file, but so far, most of my QUnit tests are for a single JavaScript library. But this approach might come in handy when I get around to testing the JavaScript in the NuGet gallery.

personal

Hubot stache me.

Well the poll results are in and you guys were very close! I was taken aback at the intensity of the interest in where I would end up. Seriously, I’m honored. But then I thought about it for a moment and figured, there must be a betting pool on this. These folks don’t care that much.

Today is my first day as a GitHub employee! In other words, I am now a GitHubber, a Hubbernaut, a GitHubberati. Ok, I made that last one up.

If you haven’t heard of GitHub, it’s a site that makes it frictionless to collaborate on code. Some would call it a source code hosting site, or a forge, but it goes way beyond that. Their motto is “Social Coding”, and they mean it. They’ve turned shipping software into a fun social activity. It’s great!

Beyond a great product, they’ve built a great company culture. From everything I’ve seen and read, GitHub has figured out how to make a great work environment. They optimize for happiness and I believe that’s resulted in a great product and a lot of success. I’ll talk about that some more another time. For now, let’s talk about…

What will I be doing at GitHub?

According to my offer letter, my title is “Windows Badass”, but the way I see it, I will do whatever I can to help GitHub be even more awesome. It’s going to take some creative thinking because it’s already pretty damn cool, but I’ll figure something out.  My first idea for adding more cowbell was rejected, but I’m just finding my footing. I’ll get the hang of it.

More specifically, I plan to help GitHub appeal to more developers who code for Windows and the .NET platform. For example, take a look at the TIOBE language index.

tiobe-index

Now take a look at this chart from the GitHub languages page (no longer around).

github languages

See something missing? Yes, oh mah gawd! LOLCODE is not there!!!

Ok, besides that. See something else missing? Despite the fact that TIOBE ranks it as the fourth most popular language, C# doesn’t make it into the top ten at GitHub. I’d like to change that!

I’ve always been a big proponent of open source on .NET. Pretty much everything I worked on at Microsoft was or became open source (I did work on a Web Form control that wasn’t open sourced, but we don’t talk about that much).

I will continue to work to grow a healthy open source ecosystem on .NET and Windows. I hope to see more .NET developers contributing to open source and doing it on GitHub.

This might include making the website more friendly to Windows developers, working on a Windows client for GitHub, and continuing to work on NuGet, among other things. One of the appealing aspects of GitHub to me was how much they got NuGet. Perhaps more so than many at Microsoft.

Why Bother?

You might wonder, why bother?

Well, there’s the simple business answer. The more open source developers there are, the more potential customers GitHub has. But we have larger aspirations than that as well.

When trying to build a case for releasing more software as open source at Microsoft, I once asked Miguel de Icaza, what’s in it for Microsoft? Why do it?

His response was something along the lines of bla bla bla bla. But there was one thing that he said that struck me.

A rising tide lifts all boats.

When I first read that, I thought he wrote “tilde” and I was really confused what a rising tilde had to do with anything.

But it makes sense to me now. As I wrote in a recent post talking about software communities,

The interchange of ideas between these disparate technology communities can only result in good things for everyone.

There are millions of .NET developers, but a disproportionately small number of them are involved in open source projects. If we increase that just a tiny bit, that increases the pool of ideas floating around in the larger software community. Ideas backed by code that anybody can look at, incorporate, tweak.

The nice thing here is I think a healthy .NET OSS ecosystem is a good thing for everyone. Good for GitHub. Good for Microsoft. Good for the software industry.

Am I moving?

GitHub is located in an amazing space in San Francisco. When I visited, Hubot pumped in Daft Punk via overhead speakers as people coded away. That alone nearly sealed the deal for me. The fine scotch we sipped as we talked about software didn’t hurt either.

But alas, as much as San Francisco is a great city, my family and I love it here in Washington, so I will work as a remote employee. Fortunately, GitHub is well suited for remote employees. And this gives me a great excuse to visit SF often!

My little octocats agree, this is a good thing.

octocats

If you’ve been a fan of my blog or Twitter account, I hope you stick around. I’ll still be blogging about ASP.NET MVC, NuGet, etc. But you can expect my blog will also expand to cover new topics.

It’ll be an adventure.

nuget, code

So my last day at Microsoft ended up being a very long one as the NuGet team worked late into the evening to deploy an updated version of NuGet.org. I’m very happy to be a part of this as my last act as a Microsoft employee. This is a complete rewrite of the gallery.

Why a rewrite? We’ve learned a lot since we first launched, and our needs have evolved to the point where a rewrite made sense. The new implementation is a vanilla ASP.NET MVC 3 application and highly optimized to be a gallery with just the features we need.

For example, we made extensive use of MVC Mini Profiler to ensure pages make no more database queries than necessary. Also, the site is now hosted in Azure!

What’s in this new implementation?

There are a lot of great improvements. I won’t provide a comprehensive list, but I will provide a taste. Matthew and others will write about the improvements in more detail:

  • Search on every page! This seems obvious, but we didn’t have this in the old gallery. That deficiency is now just a bad memory. Also, the search is way faster!
  • Package owners are displayed more prominently. In the old gallery, the owners of the package weren’t displayed. Anywhere. Which was a terrible experience because the owners are the people who matter. A package owner is associated with an account. The “author” of a package is simply metadata and could be anyone.
  • Owner profiles. Click on a package owner to see the package owner’s profile. Today, the only thing you see is a gravatar for the owner and the list of packages that person owns. In the future, we might include more profile information.
  • Adding a package owner requires acceptance. In the past, you could add anyone else as an owner of your package and they’d immediately become an owner of a package. Now that we show the list of owners next to a package, that’s not such a good thing. In the new gallery, when you try and add an owner, the gallery sends them an email inviting them to become an owner. This way MyCrappyPackage can’t add you as an owner as a way of boosting their reputation at the expense of yours.
  • Package stats are displayed more prominently. We wanted to make the package stats very prominent.
  • Package unlisting. Packages can now be unlisted. This effectively hides the package, but the package is still used to resolve dependencies.
  • Cleaner markup and design. The HTML markup is way cleaner and streamlined. For example, we reduced the CSS files from 20 to 1.
  • Cleaner URLs. For example, the new package feed URL is now http://nuget.org/api/v1/. In the future, we’ll probably use content negotiation so we won’t even need versioned URLs for the package feed. The NuGet 1.5 client will continue to work.
  • And it’s WAY FASTER! I almost forgot to mention just how much faster the gallery is now than before.

What about NuGet 1.6?

There are some features of the Gallery you won’t see until we release NuGet 1.6. We want to make sure the site works well before we deploy NuGet 1.6. Once we do that, you’ll also see support for SemVer (Semantic Versioning) and Prerelease packages in the Gallery.

personal

Well, as I wrote before, today is my last day at Microsoft. Last night we had our office Holiday party in the observation deck and lounge of the Space Needle. The party was just fantastic and we were lucky to have a nice clear evening with spectacular views. What a great way to go!

I had a brief exit interview where I handed over my badge with an air of finality. However, I am still an employee until midnight tonight. So it’s not so final just yet. Which is a good thing as the NuGet team is working to deploy the new NuGet.org gallery tonight if all goes well. Once that’s been up for a few days and we’re comfortable with it being stable, we’ll release NuGet 1.6.

last-day

In the meanwhile, my office has been raided of all the good equipment, including my crossbows, which I bequeathed to my co-workers who remain, much as they had been bequeathed unto me. Here, you can see a shot of my co-workers taking shots at me. Yes, that’s David Fowler of SignalR and JabbR fame, and Scott Hanselman, of fivehead fame, who needs no introduction.

I will miss working with all of my friends at Microsoft dearly, but seriously, I live 2 miles away, so don’t be a stranger all of a sudden. And to all of you who have supported me at Microsoft via comments on my blog, tweets on Twitter, and other encouraging means. Thank you!

But just because I’m leaving, that doesn’t mean you have to leave too. I’ll still be blogging here and tweeting on Twitter so do stick around as I begin my new journey at REDACTED GitHub!

Tags: microsoft

personal, nuget

It’s not every day you write this sort of blog post. And you hope it’s not something you do so often that you ever get good at it. I’m certainly sucking up a storm here.

Just last month I hit my four year mark at Microsoft. I reflected on the sheer joy I experienced working with such smart people on cool projects. I’ve been very lucky and fortunate to be able to speak about these projects at many conferences, meeting so many interesting attendees. It’s been a real blast.

Today, I write a different sort of post. It was a tough decision to make, but I’ve decided to leave Microsoft to try something different. This is my last week as a Microsoft employee. On Monday, December 5, 2011 I’ll come into the office, hand over my card key, the launch codes, and the Amex card, and then experience a Microsoft exit interview. It will be interesting.

But before I continue, there’s two things I want to make crystal clear:

  1. I will still be involved with the .NET community and development.
  2. I will still work on NuGet.
  3. I’m known for off-by-one errors and lame jokes.

What’s Next?

I’ll let you know on December 7, when I start a new gig. My new company often announces new employees and I didn’t want to spoil the surprise! I’m very excited about it as it’s a position that will keep me involved in .NET and working on NuGet, but will also let me stretch into multiple other technologies beyond .NET.

I’m not leaving .NET

The way I see it, the .NET community isn’t a place you just leave. A community is a set of relationships among people who hold some common goals or ideals. The people I think are interesting today, will still be interesting on December 7. Well, most of you at least.

Rather, I like to think that I will focus more on being a member of a larger software community, as I wrote about recently. It’s one thing to write about it, but I hope to better live it in the future.

So while I’m not leaving .NET, I am also arriving at Macs, Ruby, and Node.js and whatever other technologies I need to get the job done. I look forward to getting my hands dirty building things with these other technologies in addition to .NET.

What About NuGet?

As I mentioned earlier, I’ll continue to work on the NuGet open source project as a core contributor. From the Outercurve Foundation’s side of things, I’ll also remain on this page as a project lead, though most of the day to day responsibilities will transfer to a Program Manager on the Microsoft side of things. We have yet to figure out in detail how we’ll share responsibilities.

This is possible because there’s an interesting distinction between the NuGet open source project and the NuGet based product that Microsoft ships. I should write about this another time. For the time being, just know I’ll continue to be heavily involved in NuGet once I ramp up in my new job.

What about ASP.NET MVC?

ASP.NET MVC has been a joy to work on. It’s pioneered so much change at Microsoft. Leaving it will be hard, especially with all the cool stuff coming down the pike I wish I could tell you about. Suffice to say, ASP.NET MVC is a mature product in good hands with a strong team in place. I’m not worried about it at all.

In fact, there’s a lot of good stuff coming from the overall team that’s been the result of a long succession of baby steps. I can’t talk about it yet, but I can say that knowing this made my decision especially difficult to make.

Anything Else?

I will still be speaking at CodeMania in New Zealand in March 2012. I made sure to contact the organizers in case they wanted to change their minds given my news but they’re happy to have me speak.

I’m still happy to speak about NuGet, ASP.NET MVC, or anything else for that matter if you have a conference you think I’d be a good fit for.

I will miss working at Microsoft and being involved with the community in that capacity. But I am also excited about this new opportunity to work with the community in a different capacity.

Next week, I’ll tell you about what could possibly draw me away from Microsoft. I hope you’ll stick around.

asp.net, asp.net mvc, code, razor

Donut caching, the ability to cache an entire page except for a small region of the page (or set of regions), has been conspicuously absent from ASP.NET MVC since version 2.

MMMM Donuts! Photo by Pzado at sxc.hu

This is something that’s on our Roadmap for ASP.NET MVC 4, but we have yet to flesh out the design. In the meanwhile, there’s a new NuGet package written by Paul Hiles that brings donut caching to ASP.NET MVC 3. I haven’t tried it myself yet, so be forewarned, but judging by the blog post, Paul has done some extensive research into how output caching works.

One issue with his approach is that to create “donut holes”, you need to call an action from within your view. That works for ASP.NET MVC, but not for ASP.NET Web Pages. What if you simply want to carve out a region in your view that isn’t cached?

Well, implementing such a thing requires that we make changes to Razor itself to support substitution caching. I’ve been tasked with the design of this, but I’ve been so busy that I’ve fallen behind. So I’m going to sketch some thoughts here, get your input, and then turn in your work as if I had done it. Ha!

Ideally, Razor should have first class support for carving out donut holes. Perhaps something like:


<h1>This entire view is cached</h1>
@nocache {
  <div>But this part is not. @DateTime.Now</div>
}

As this seems to be the most common scenario for donut holes, I like the simplicity of this approach. However, there may be times when you do want the hole cached, but at a different interval than the rest of the page.


<h1>The entire view is cached for a day</h1>
@cache(TimeSpan.FromSeconds(10)) {
  <div>But this part is cached for 10 seconds. @DateTime.Now</div>
}

If we have the second cache directive, we probably don’t really need the nocache directive since it’s redundant. But since I think it’s the most common scenario, I’d want to keep it anyways.

The final question is whether these should be actual Razor directives or simply methods. I haven’t dug into Razor enough to know the answer, but my gut feeling is that it would require changes to Razor itself and couldn’t be added on as method calls, since method calls run too late.

What do you think of this approach?

code

While attending Oredev 2011, I had an interesting conversation with Corey Haines about his perception of the Ruby community as compared to the .NET community.

One thing he suggested is that the .NET community seems a bit insular and self-isolating. He noted that when he attended .NET user groups, he only saw folks he knew to be .NET developers. But when he attends Ruby, Scala, NodeJS, Erlang, etc. user groups, he sees many of the same people at these meetups.

While I’m not completely against identifying oneself as a .NET developer to indicate your primary focus, I do see what Corey is getting at. Rather than only seeing ourselves as .NET developers, it’s just as important to also see ourselves as software developers.

We should recognize that we’re part of this larger cosmopolitan software community. We have a lot to learn from this greater community. Just as importantly, our community also has much to offer to the larger community!

As a good friend once told me, a rising tide lifts all boats. The interchange of ideas between these disparate technology communities can only result in good things for everyone.

I’ve been grateful that folks such as Nick Quaranto have this view. Although he’s one of those hippie Ruby folks and runs http://rubygems.org/ (which some might see as a competitor to NuGet), he’s been extremely helpful and generous with advice for the NuGet team. To me, that’s what community is about. Not isolating oneself from ideas simply because they come from someone who’s eschewed curly braces.

The good news is that I think the .NET community is actually further along in this than it gets credit for. Podcasts such as Herding Code have a very polyglot bent. Even .NET Rocks, seen as the bastion of .NET, has recently expanded its archive with topics such as node.js and Modernizr.

So if you identify yourself as a .NET developer, well you’re in good company. There’s a lot of interesting .NET developers around. At the same time, I encourage you to reach across the aisle and learn a thing or two about a different technology. Maybe even hoist a beer with one of those hippie rubyists or smug clojure developers!

After all, someday we’re all going to end up as JavaScript developers anyways.

code, personal

Once in a while folks ask me for details about the hardware and software that hosts my blog. Rather than write about it, I’ll let a photo provide all the details that you need.

There you have it.

trs-80

Well actually™, my blog runs on a bit more hardware than that these days. Especially after the Great Hard-Drive Failure of 2009. As longtime readers of my blog might remember, nearly two years ago, this blog went down in flames due to a faulty hard-drive on the hosting server.

My hosting provider, CrystalTech (now rebranded to be the Web Services home of The Small Business Authority), took regular backups of the server, but I hosted my blog in a virtual machine. As it turns out, the backups did not include the VM because it was always “in use”. In order to back up a virtual machine while it’s running, the backup software needs to take special steps to ensure that works.

Today, I still host with CrystalTech in large part due to their response to the great hard-drive meltdown. First and foremost, they didn’t jump to blame me. They focused on fixing the problem at hand. In the past, I’ve hosted with other providers who excelled at making you feel that anything wrong was your fault. Ever been in a relationship like that?

Once things were settled, they worked with me to figure out what systematic changes they should make to ensure this sort of thing doesn’t happen again. Hard drives will fail. You can’t prevent that. But you can ensure that the data customers care about are backed up and verified.

Not only that, they hooked me up with a pretty nice new dedicated server. :)

Even though they now are prepared to ensure VMs are backed up, I now host on bare metal, in part because my other tenant moved off of the server so I don’t really need to share it anymore. All miiiiiine!

Hardware

  • Case: 2U dedicated server
  • Processors: 2x Intel Xeon CPU 3.20 GHz (1 core, 2 logical processors) x64
  • Memory: 4.00 GB RAM
  • OS Hard Drive: C: 233 GB RAID 1 (2 physical drives)
  • Data Hard Drive: D: 467 GB RAID 5 (3 physical drives)

Software

  • OS: Windows Server 2008 Datacenter SP2
  • Database: SQL Server 2008
  • Web Server: IIS 7 running ASP.NET 4
  • Blog: Subtext
  • Backup: In addition to the machine backups, I have a scheduled task that 7z-archives my web directories and also takes a SQL backup into a backups folder (see the sketch below). Windows Live Mesh syncs those backup files to my home machine.
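
A backup task along those lines can be a very small PowerShell script. Here’s a rough sketch (the paths, database name, and 7-Zip location are stand-ins, not my actual setup):

$stamp = Get-Date -Format "yyyyMMdd"
$backupDir = "D:\backups"

# Archive the web directories with 7-Zip.
& "C:\Program Files\7-Zip\7z.exe" a "$backupDir\sites-$stamp.7z" "D:\websites\*"

# Take a SQL Server backup of the blog database using Windows authentication.
sqlcmd -S localhost -E -Q "BACKUP DATABASE [Subtext] TO DISK = N'$backupDir\subtext-$stamp.bak'"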

This server hosts the following sites:

For some of these sites, I plan to migrate them to other cloud based solutions. For example, rather than have my own NuGet feed, I’ll just use a http://myget.org/ feed.

Even so, I plan to keep http://haacked.com/ on this hardware for as long as The Small Business Authority lets me. It’s a great way for me to keep my system administration skills from completely atrophying and I like having a server at my disposal.

So thanks again to The Small Business Authority (though I admit, I liked CrystalTech as a name better :P) for hosting this blog! And thank you for reading!

personal

As I mentioned in my last post, I have an overnight stopover in Reykjavik Iceland. After checking into my hotel at an ungodly early hour (which ended up being really late for me Seattle time), my first order of business was to head over to the Blue Lagoon.

blue-lagoon-movie

No, not that Blue Lagoon! This one!

blue-lagoon 

Look at that steam coming off the water! The Blue Lagoon is a geothermal spa with a 5000 square meter lagoon. The water comes from a nearby geothermal plant and is renewed every two days. According to Wikipedia,

Superheated water is vented from the ground near a lava flow and used to run turbines that generate electricity. After going through the turbines, the steam and hot water passes through a heat exchanger to provide heat for a municipal hot water heating system. Then the water is fed into the lagoon for recreational and medicinal users to bathe in.

Yes, the thought of being cooked in superheated water did cross my mind since my manager reminded me of a scene from some crappy movie where that happened. Fortunately, that did not happen.

This method of heating the lagoon is just one example of how Iceland gets a lot of its power from the heat within the Earth. From another Wikipedia article, emphasis mine,

Five major geothermal power plants exist in Iceland, which produce approximately 26.2% (2010) of the nation’s energy. In addition, geothermal heating meets the heating and hot water requirements of approximately 87% of all buildings in Iceland. Apart from geothermal energy, 75.4% of the nation’s electricity was generated by hydro power, and 0.1% from fossil fuels.

It’s pretty impressive. They plan to go 100% fossil-fuel-free in the near future. Of course, the one downside is that the water here smells slightly of sulfur. I actually don’t mind it.

The spa provides buckets of white silica gel you can put on your face to exfoliate your skin. I found that the sleet being whipped around at 35 miles per hour did a fine job of exfoliating my skin. It nearly exfoliated it off of my face.

Though I have to admit, that was part of the fun. It was novel to be swimming outdoors in November with sleet and wind pouring down, but nice and warm within the heated waters.

I even had time to stop at a waterside bar for a drink.

bar-at-the-blue-lagoon

A good part of the drive to the Lagoon is through a vast lava field that is reminiscent of the photos sent back by the Mars Rover. It’s very easy to catch a bus from your hotel or from the airport to get there and they provide lockers as well as towel and even swimwear rental. They really make it easy to take a quick jaunt over there if you’re just on a stopover in Iceland.

Now I’m warm and dry in my hotel room planning my next step. I would like to do some sight seeing before I meet folks at the bar, but I also like remaining warm and dry. Conundrums!

I think I’ll venture out now and report back later. If you ever find yourself with a long stopover in Iceland, do visit the Blue Lagoon.

personal

If you’re in the Reykjavik area on November 7th, come join me for a beer-up. A Beer-Up is basically a meet-up, but with lots of beer!

  • When: November 7th, 2011 at 8:00 PM
  • Where: The English Pub (yes, I went all the way to Iceland for an English pub)
  • Why: To talk about ASP.NET, ASP.NET MVC, NuGet, Software Development, or whatever geeky topics you want. And if we do our jobs right, by the end of the night we’ll discuss life, philosophy, and which direction is my hotel?

blue-lagoon

Blue Lagoon in Iceland - Photo from sxc.hu

I’ll be stopping overnight in Reykjavik on my way to Oredev 2011. I’m pretty excited as I’ve always been fascinated by the natural beauty of such a geologically active place. I definitely plan to see the Blue Lagoon geothermal spa (pictured above) during my stay.

If you’re in the area and love to talk about coding, technology, whatever, do join us!

nuget, open source

We made a recent change to make it easy to update the NuGet documentation. In this post, I’ll cover what the change was, why we made it, and how it makes it easier to contribute to our documentation.

Our docs run as a simple ASP.NET Web Pages application that renders documentation written in the Markdown format. The Markdown text is not stored in a database, but lives as files that are part of the application source code. That allows us to use source control to version our docs.

We used to host the source for the docs site in Mercurial (hg) on CodePlex.com. Under the old system, it took the following to contribute docs.

  1. Install Mercurial (TortoiseHG for example) if you didn’t already have it.
  2. Fork our repository and clone it to your local machine.
  3. Open it up in Visual Studio.
  4. Make and commit your changes.
  5. Push your changes.
  6. Send a pull request.

It’s no surprise that we don’t get a lot of pull requests for our documentation. Oh, and I didn’t even mention all the steps once we received such a pull request.

As anyone who’s ever run an open source product knows, it’s hard enough to get folks to contribute to documentation in the first place. Why add more roadblocks?

To improve this situation, we moved our documentation repository to Github for three reasons:

  1. In-browser editing of files with Markdown preview.
  2. Pull requests can be merged at the click of a button.
  3. Support for deploying to AppHarbor (which CodePlex also has)

With this in place, it’s now easy to be a “drive-by” contributor to our docs. Let’s walk through an example to see what I mean. In this example, I’m posing as a guy named “FakeHaacked” with no commit access to the NuGetDocs repository.

Here’s a sample page from our docs (click for larger). The words at the end of the first paragraph should be links! Doh! I should fix that.

nuget-docs-page

First, I’ll visit the NuGet Docs repository (TODO: Add a link to each page with the path to the Github repository page).

nuget-docs-github

Cool, I’m logged in as FakeHaacked. Now I just need to navigate to the page that needs the correction. All of the documentation pages are in the site folder.

Pro tip: type the letter “t” while on this page to use incremental search to find the page you want to edit.

Here’s the page I want to edit.

page-with-fork-this-button

Since this file is a Markdown file, I can see a view of the file that’s a nice approximation of what it will look like when it’s deployed. It’s not exactly the same since we have different CSS styles on the production site.

See that blue button just above the content and to the right? That allows me to “fork” this page and edit the file. Forking it, for those not familiar with distributed version control, means it will create a clone of the main repository. I’m free to work and do whatever I want in that clone without worry that I will affect the real thing.

Let’s click that button and see what happens.

github-markdown-editor

Cool, I get a nice editor with preview for editing the page right here in the browser. I’ll go ahead and make those last two references into Markdown formatted links.

When I’m done, I can scroll down, type in a commit message describing the change, and click the blue Propose File Change button.

enter-commit-message

Once you’re happy with the set of changes you’ve made, click the button to send a pull request. This lets the folks who maintain the documentation know you have changes that are ready for them to pull in.

send-a-pull-request

And that’s it. You’ve done your part. Thank you for your contribution to our docs! Now let’s look at what happens on the other side. I’ll put on my project maintainer hat and visit the site. Notice I’m logged in as Haacked now and I see there’s an outstanding pull request.

pull-requests-nuget-docs

Cool, I can take a look at it, quickly see a diff, and comment on it. Notice that Github was able to determine that this file is safe to automatically merge into the master branch.

merge-pull-request

All I have to do is click the big blue button, enter a message, and I’m done!

confirm-merge

It’s that easy for me to merge in your changes.

Summary

You might ask why we don’t use the Github Pages feature (or even Git-backed wikis). We started the docs site before we were on Github and didn’t know about the pages feature.

If I were to start over, I’d probably just use that. Maybe we’ll migrate in the future. One benefit of our current implementation is we get that nice Table of Contents widget generated for us dynamically (which we can probably do with Github Pages and Jekyll) and we can use Razor for our layout template.

The downside of our current approach is that we can’t create new doc pages this way, but I’ll submit a feature request to the Github team and see what happens.

So if you are reading the NuGet docs, and see something that makes you think, “That ain’t right!”, please go fix it! It’s easy and contributing to open source documentation makes you a good person. It’s how I got started in open source.

Oh, and if you happen to be experienced with Git, you can always use the traditional method of cloning the repository to your local machine and making changes. That gives you the benefit of running the site to look at your change.

asp.net, nuget

Recently, a group of covert ninjas within my organization started to investigate what it would take to change our internal build and continuous integration systems (CI) to take advantage of NuGet for many of our products, and I need your input!

Hmm, off by one error slays me again. - Image from Ask A Ninja. Click on the image to visit.

Ok, they’re not really covert ninjas, that just sounds much cooler than a team of slightly pudgy software developers. Ok, they’ve told me to speak for myself, they’re in great shape!

In response to popular demand, we changed our minds and decided to support Semantic Versioning (SemVer) as the means to specify pre-release packages for the next version of NuGet (1.6).

In part, this is the cause of the delay for this release as it required extensive changes to NuGet and the NuGet.org gallery. I will write a blog post later that covers this in more detail, but for now, you can read our spec on it which is mostly up to date. I hope.

I’m really excited to change our own build to use NuGet because it will force us to eat our own dogfood and feel the pain that many of you feel with NuGet in such scenarios. Until we feel that pain, we won’t have a great solution to the pain.

A really brief intro to SemVer

You can read the SemVer spec here, but in case you’re lazy, I’ll provide a brief summary.

SemVer is a convention for versioning your public APIs that gives meaning to the version number. Each version has three parts, Major.Minor.Patch.

In brief, these correspond to:

  • Major: Breaking changes.
  • Minor: New features, but backwards compatible.
  • Patch: Backwards compatible bug fixes only.

Additionally, pre-release versions of your API can be denoted by appending a dash and an arbitrary string after the Patch number. For example:

  • 1.0.1-alpha
  • 1.0.1-beta
  • 1.0.1-Fizzlebutt

When you’re ready to release, you just remove the pre-release part and that version is considered “higher” than all the pre-release versions. The pre-release versions are given precedence in alphabetical order (well technically lexicographic ASCII sort order).

Therefore, the following is an example from lowest to highest versions of a package.

  • 1.0.1-alpha
  • 1.0.1-alpha2
  • 1.0.1-beta
  • 1.0.1-rc
  • 1.0.1-zeeistalmostdone
  • 1.0.1

How NuGet uses SemVer

As I mentioned before, I’ll write up a longer blog post about how SemVer figures into your package. For now, I just want to make it clear that if you’re using 4-part version numbers today, your packages will still work and behave as before.

It’s only when you specify a 3-part version with a version string that NuGet gets strict about SemVer. For example, NuGet allows 1.0.1-beta but does not allow 1.0.1.234-beta.234.

How to deal with nightly builds?

So the question I have is, how do we deal with nightly (or continuous) builds?

For example, suppose I start work on what will become 1.0.1-beta. Internally, I may post nightly builds of 1.0.1-beta for others in my team to use. Then at some point, I’ll stamp a release as the official 1.0.1-beta for public consumption.

The problem is, each of those builds needs to have the package version incremented. This ensures that folks can revert to a last-known-good nightly build if a problem comes up. SemVer doesn’t seem to address how to handle internal nightly (or continuous) builds. It’s really focused on public releases.

Note, we’re thinking about this for our internal setup, not for the public gallery. I’ll address that question later.

We had a few ideas in mind.

Stick with the previous version number and change labels just before release

The idea here is that when we’re working on 1.0.1-beta, we version the packages using the alpha label and increment it with a build number.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.1 (internal build)
  • 1.0.1-alpha.2 (internal build)

A variant of this approach is to append the date (and counter) in number format.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.20101025001 (internal build)
  • 1.0.1-alpha.20101026001 (internal build on the next day)
  • 1.0.1-alpha.20101026002 (another internal build on the same day)

With this approach, when we’re ready to cut the release, we simply change the package to be 1.0.1-beta and release it.

The downside of this approach is that it’s not completely clear that these are internal nightly builds of what will be 1.0.1-beta. They could be builds of 1.0.1-alpha2.

Yet another variant of this approach is to name our public releases with an even Minor or Patch number and our internal releases with an odd one. So when we’re ready to work on 1.0.2-beta, we’d version the package as 1.0.1-beta. When we’re ready to release, we change it to 1.0.2-beta.

Have a separate feed with its own versioning scheme.

Another thought was to simply have a completely separate feed with its own versioning scheme. So you can choose to grab packages from the stable feed, or the nightly feed.

In the nightly feed, the package version might just be the date.

  • 2010.10.25001
  • 2010.10.25002
  • 2010.10.25003

The downside of this approach is that it’s not clear at all what release version these will apply to. Also, when you’re ready to promote one to the stable feed, you have to move it in there and completely change the version number.

Support an optional Build number for Semantic Versions

For NuGet 1.6, you can still use a four-part version number. But NuGet is strict if the version is clearly a SemVer version. For example, if you specify a pre-release string, such as 1.0.1-alpha, NuGet will not allow a fourth version part.

But we had the idea that we could extend SemVer to support this concept of a build number. This might be off by default in the public NuGet.org gallery, but could be something you could turn on individually. What it would allow you to do is continue to push new builds of a pre-release version, such as 1.0.1-beta, with an incrementing build number. For example:

  • 1.0.1-beta.0001 (nightly)
  • 1.0.1-beta.0002 (nightly)
  • 1.0.1-beta.0003 (nightly)
  • 1.0.1-beta (public)

Note that unlike a standard 4-part version, 1.0.1-beta is a higher version than 1.0.1-beta.0003.

While I’m hesitant to suggest a custom extension to SemVer, it makes a lot of sense to me. After all, this would be a convention applied to internal builds and not for public releases.

Question for you

So, do you like any of these approaches? Do you have a better approach?

While I love comments on my blog, I would like to direct discussion to the NuGet Discussions page. I look forward to hearing your advice on how to handle this situation. Whatever we decide, we want to bake in first-class support into NuGet to make this sort of thing easier.

code, asp.net

If you’re not familiar with WCF Web API, it’s a framework with nice HTTP abstractions used to expose simple HTTP services over the web. Its focus is targeted at applications that provide HTTP services for various clients such as mobile devices, browsers, and desktop applications.

In some ways, it’s similar to ASP.NET MVC as it was developed with testability and extensibility in mind. There are some concepts that are similar to ASP.NET MVC, but with a twist. For example, where ASP.NET MVC has filters, WCF has operation handlers.

security

One question that comes up often with Web API is how do you authenticate requests? Well, if you run Web API on ASP.NET (Web API also supports a self-host model), one approach you could take is to write an operation handler and attach it to a set of operations (an operation is analogous to an ASP.NET MVC action).

However, some folks like the ASP.NET MVC approach of slapping on an AuthorizeAttribute. In this blog post, I’ll show you how to write an attribute, RequireAuthorizationAttribute, for WCF Web API that does something similar.

One difference is that in the WCF Web API case, the attribute simply provides metadata, but not the behavior, for authorization. If you wanted to use the existing ASP.NET MVC AuthorizeAttribute in the same way, you could do that as well, but I leave that as an exercise for the reader.

I’ll start with the easiest part, the attribute.

[AttributeUsage(AttributeTargets.Method)]
public class RequireAuthorizationAttribute : Attribute
{
    public string Roles { get; set; }
}

For now, it only applies to methods (operations). Later, we can update it to apply to classes as well if we so choose. I’m still learning the framework so I didn’t want to go bite off too much all at once.

The next step is to write an operation handler. When properly configured, the operation handler runs on every request for the operation that it applies to.

public class AuthOperationHandler 
      : HttpOperationHandler<HttpRequestMessage, HttpRequestMessage>
{
  RequireAuthorizationAttribute _authorizeAttribute;

  public AuthOperationHandler(RequireAuthorizationAttribute authorizeAttribute)
    : base("response")
  {
    _authorizeAttribute = authorizeAttribute;
  }

  protected override HttpRequestMessage OnHandle(HttpRequestMessage input)
  {
    IPrincipal user = Thread.CurrentPrincipal;
    if (!user.Identity.IsAuthenticated)
    {
      throw new HttpResponseException(HttpStatusCode.Unauthorized);
    }

    if (_authorizeAttribute.Roles == null)
    {
      return input;
    }

    var roles = _authorizeAttribute.Roles.Split(new[] { " " }, 
      StringSplitOptions.RemoveEmptyEntries);
    if (roles.Any(role => user.IsInRole(role)))
    {
      return input;
    }

    throw new HttpResponseException(HttpStatusCode.Unauthorized);
  }
}

Notice that the code originally accessed HttpContext.Current, which restricted this operation handler to only working within ASP.NET applications. Hey, I write what I know! Many folks replied to me that I should use Thread.CurrentPrincipal instead, which is what the code above now does. My brain must have been off when I wrote this to not think of it. :)

Then all we do is ensure that the user is authenticated and in one of the specified roles if any role is specified. Very simple straightforward code at this point.

The final step is to associate this operation handler with some operations. In general, when you build a Web API application, the application author writes a configuration class that derives from WebApiConfiguration and either sets it as the default configuration, or passes it to a service route.

Within that configuration class, the author can specify an action that gets called on every request and gives the configuration class a chance to map a set of operation handlers to an operation.

For example, in a sample Web API app, I added the following configuration class.

public class CommentsConfiguration : WebApiConfiguration
{
    public CommentsConfiguration()
    {
        EnableTestClient = true;

        RequestHandlers = (c, e, od) =>
        {
            // TODO: Configure request operation handlers
        };

        this.AppendAuthorizationRequestHandlers();
    }
}

The RequestHandlers property is of type Action<Collection<HttpOperationHandler>, ServiceEndpoint, HttpOperationDescription>.

In general, it would be up to the application author to wire up the authentication operation handler I wrote to the appropriate actions. But I wanted to provide a method that helps with that. That’s the AppendAuthorizationRequestHandlers method in there, which is an extension method I wrote.

public static void AppendAuthorizationRequestHandlers(
  this WebApiConfiguration config)
{
  var requestHandlers = config.RequestHandlers;
  config.RequestHandlers = (c, e, od) =>
  {
    if (requestHandlers != null)
    {
      requestHandlers(c, e, od); // Original request handler
    }
    var authorizeAttribute = od.Attributes.OfType<RequireAuthorizationAttribute>()
      .FirstOrDefault();
    if (authorizeAttribute != null)
    {
      c.Add(new AuthOperationHandler(authorizeAttribute));
    }
  };
}

Since I didn’t want to stomp on the existing request handlers, I set the RequestHandlers property to a new action that calls the existing action (if any) and then does my custom registration logic.

I’ll admit, I couldn’t help thinking that if RequestHandlers were an event, rather than an action, that sort of logic could be handled for me. ;) Have events fallen out of favor? They do work well to decouple code in this sort of scenario, but I digress.

The interesting part here is that the action’s third parameter, od, is an HttpOperationDescription. This is a description of the operation that includes access to such things as the attributes applied to the method! I simply look to see if the operation has the RequireAuthorizationAttribute applied and if so, I add the AuthOperationHandler I wrote earlier to the operation’s collection of operation handlers.

With this in place, I can now write a service that looks like this:

[ServiceContract]
public class CommentsApi
{
    [WebGet]
    public IQueryable<Comment> Get()
    {
        return new[] { new Comment 
        { 
            Title = "This is neato", 
            Body = "Ok, not as neat as I originally thought." } 
        }.AsQueryable();
    }

    [WebGet(UriTemplate = "auth"), RequireAuthorization]
    public IQueryable<Comment> GetAuth()
    {
        return new[] { new Comment 
        { 
            Title = "This is secured neato", 
            Body = "Ok, a bit neater than I originally thought." } 
        }.AsQueryable();
    }
}

And route to the Web API service like so:

public class Global : HttpApplication
{
  protected void Application_Start(object sender, EventArgs e)
  {
    RouteTable.Routes.MapServiceRoute<CommentsApi>("comments",
      new CommentsConfiguration());
  }
}

With this in place, a request for /comments allows anonymous, but a request for /comments/auth requires authentication.
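
To see the difference from a client’s point of view, here’s a minimal console sketch that hits both operations anonymously. The port number is made up and this isn’t code from the sample project; the first call returns the comment payload while the second should come back with a 401.

using System;
using System.Net;

class SmokeTest
{
    static void Main()
    {
        var client = new WebClient();

        // The anonymous operation returns the comment payload.
        Console.WriteLine(client.DownloadString("http://localhost:12345/comments"));

        try
        {
            // The secured operation rejects the unauthenticated request.
            client.DownloadString("http://localhost:12345/comments/auth");
        }
        catch (WebException exception)
        {
            var response = (HttpWebResponse)exception.Response;
            Console.WriteLine(response.StatusCode); // Unauthorized
        }
    }
}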

If you’re interested in checking this code out, I pushed it to my CodeHaacks Github repository as a sample. I won’t make this into a NuGet package until it’s been thoroughly vetted by the WCF Web API team because it’s very likely I have no idea what I’m doing. I’d rather one of those folks make a NuGet package for this. Smile

And if you’re wondering why I’m writing about Web API, we’re all part of the same larger team now, so I figured it’s good to take a peek at what my friends are up to.

asp.net, code comments edit

I like to live life on the wild side. No, I don’t base jump off of buildings or invest in speculative tranches made up of junk stock derivatives. What I do is attempt to run recurring background tasks within an ASP.NET application.

Writing code is totally just like this - Photo by DVIDSHUB, CC BY 2.0

But before I do anything wild with ASP.NET, I always talk to my colleague, Levi (sadly, no blog). As a developer on the internals of ASP.NET, he knows a huge amount about it, especially the potential pitfalls. He’s also quite the security guru. As you read this sentence, he just guessed your passwords. All of them.

When he got wind of my plan, he let me know it was evil, unsupported by ASP.NET and just might kill a cat. Good thing I’m a dog person. I persisted in my foolhardiness and suggested maybe it’s not evil, just risky. If so, how can I do it as safely as possible? What are the risks?

There are three main risks, one of which I’ll focus on in this blog post.

  1. An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler set up via the Application_Error method. I’ll try to explain why in a follow-up blog post, but this is easy to deal with.
  2. If you run your site in a Web Farm, you could end up with multiple instances of your app that all attempt to run the same task at the same time. A little more challenging to deal with than the first item, but still not too hard. One typical approach is to use a resource common to all the servers, such as the database, as a synchronization mechanism to coordinate tasks (see the sketch just after this list).
  3. The AppDomain your site runs in can go down for a number of reasons and take down your background task with it. This could corrupt data if it happens in the middle of your code execution.
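
To make that database idea a bit more concrete, here’s a minimal sketch of one server in a farm “claiming” a job by winning an UPDATE against a shared table. The table, columns, and job name are hypothetical, and this isn’t how any particular library implements it; it just illustrates the approach.

using System.Data.SqlClient;

public static class JobCoordinator
{
    // Attempts to claim the named job for the next five minutes. Assumes a
    // JobLocks table with JobName and LockedUntil columns already exists.
    public static bool TryAcquireLock(string connectionString, string jobName)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            var command = connection.CreateCommand();
            command.CommandText =
                @"UPDATE JobLocks
                  SET LockedUntil = DATEADD(MINUTE, 5, GETUTCDATE())
                  WHERE JobName = @jobName AND LockedUntil < GETUTCDATE()";
            command.Parameters.AddWithValue("@jobName", jobName);

            // Only one server's UPDATE affects the row; everyone else skips this run.
            return command.ExecuteNonQuery() == 1;
        }
    }
}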

It’s this last risk that is the focus of this blog post.

Bye Bye App Domain

There are several things that can cause ASP.NET to tear down your AppDomain.

  • When you modify web.config, ASP.NET will recycle the AppDomain, though the w3wp.exe process (the IIS web server process) stays alive.
  • IIS will itself recycle the entire w3wp.exe process every 29 hours. It’ll just outright put a cap in the w3wp.exe process and bring down all of the app domains with it.
  • In a shared hosting environment, many web servers are configured to tear down the application pools after some period of inactivity. For example, if there are no requests to the application within a 20 minute period, it may take down the app domain.

If any of these happen in the middle of your code execution, your application/data could be left in a pretty bad state as it’s shut down without warning.

So why isn’t this a problem for your typical per-request ASP.NET code? When ASP.NET tears down the AppDomain, it will attempt to flush the existing requests and give them time to complete before it takes down the AppDomain. ASP.NET and IIS are considerate to code that they know is running, such as code that runs as part of a request.

Problem is, ASP.NET doesn’t know about work done on a background thread spawned using a timer or similar mechanism. It only knows about work associated with a request.

So tell ASP.NET, “Hey, I’m working here!”

The good news is there’s an easy way to tell ASP.NET about the work you’re doing! In the System.Web.Hosting namespace, there’s an important class, HostingEnvironment. According to the MSDN docs, this class…

Provides application-management functions and application services to a managed application within its application domain

This class has an important static method, RegisterObject. The MSDN description here isn’t super helpful.

Places an object in the list of registered objects for the application.

For us, what this means is that the RegisterObject method tells ASP.NET that, “Hey! Pay attention to this code here!” Important! This method requires full trust!

This method takes in a single object that implements the IRegisteredObject interface. That interface has a single method:

public interface IRegisteredObject
{
    void Stop(bool immediate);
}

When ASP.NET tears down the AppDomain, it will first attempt to call Stop method on all registered objects.

In most cases, it’ll call this method twice, once with immediate set to false. This gives your code a bit of time to finish what it is doing. ASP.NET gives all instances of IRegisteredObject a total of 30 seconds to complete their work, not 30 seconds each. After that time span, if there are any registered objects left, it will call them again with immediate set to true. This lets you know it means business and you really need to finish up pronto! I modeled my parenting technique after this method when trying to get my kids ready for school.

When ASP.NET calls into this method, your code needs to prevent this method from returning until your work is done. Levi showed me one easy way to do this by simply using a lock. Once the work is done, the code needs to unregister the object.

For example, here’s a simple generic implementation of IRegisteredObject. In this implementation, I simply ignore the immediate flag and try to prevent the method from returning until the work is done. The intent here is that I won’t pass in any work that’ll take too long. Hopefully.

public class JobHost : IRegisteredObject
{
    private readonly object _lock = new object();
    private bool _shuttingDown;

    public JobHost()
    {
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        lock (_lock)
        {
            _shuttingDown = true;
        }
        HostingEnvironment.UnregisterObject(this); 
    }

    public void DoWork(Action work)
    {
        lock (_lock)
        {
            if (_shuttingDown)
            {
                return;
            }
            work();
        }
    }
}

I wanted to get the simplest thing possible working. Note that when ASP.NET is about to shut down the AppDomain, it will attempt to call the Stop method. That method will try to acquire a lock on the _lock instance. The DoWork method also acquires that same lock. That way, when the DoWork method is doing the work you give it (passed in as a lambda), the Stop method has to wait until the work is done before it can acquire the lock. Nifty.

Later on, I plan to make this more sophisticated by using a Task to represent the work rather than an Action. That would let me take advantage of task cancellation instead of the brute force approach with locks.
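
To make that idea concrete, here’s a rough, untested sketch of what a Task-based variant might look like. The class name, the shape of DoWork, and the 20-second wait are all my own invention rather than a final design; the point is that Stop asks the work to cancel cooperatively instead of blocking on a lock.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public class TaskJobHost : IRegisteredObject
{
    private readonly CancellationTokenSource _cancellationSource
        = new CancellationTokenSource();
    private Task _runningTask;

    public TaskJobHost()
    {
        HostingEnvironment.RegisterObject(this);
    }

    public void DoWork(Func<CancellationToken, Task> work)
    {
        if (_cancellationSource.IsCancellationRequested)
        {
            return;
        }
        // Hand the token to the work so it can bail out cooperatively.
        _runningTask = work(_cancellationSource.Token);
    }

    public void Stop(bool immediate)
    {
        // Ask any running work to stop, then give it a moment to wrap up.
        _cancellationSource.Cancel();
        if (_runningTask != null)
        {
            try
            {
                _runningTask.Wait(TimeSpan.FromSeconds(immediate ? 0 : 20));
            }
            catch (AggregateException)
            {
                // A canceled or faulted task surfaces here; nothing more to do.
            }
        }
        HostingEnvironment.UnregisterObject(this);
    }
}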

With this class in place, you can create a timer on Application_Start (I generally use WebActivator to register code that runs on app start) and when it elapses, you call into the DoWork method here. Remember, the timer must be referenced or it could be garbage collected.

Here’s a small example of this:

using System;
using System.Threading;
using WebBackgrounder;

[assembly: WebActivator.PreApplicationStartMethod(
  typeof(SampleAspNetTimer), "Start")]

public static class SampleAspNetTimer
{
    private static readonly Timer _timer = new Timer(OnTimerElapsed);
    private static readonly JobHost _jobHost = new JobHost();

    public static void Start()
    {
        _timer.Change(TimeSpan.Zero, TimeSpan.FromMilliseconds(1000));
    }

    private static void OnTimerElapsed(object sender)
    {
        _jobHost.DoWork(() => { /* What is it that you do around here */ });
    }
}

Recommendation

This technique can make your background tasks within ASP.NET much more robust. There’s still a chance of problems occurring though. Sometimes, the AppDomain goes down in a more abrupt manner. For example, you might have a blue screen, someone might trip on the plug, or a hard-drive might fail. These catastrophic failures can take down your app in such a way that leaves data in a bad state. But hopefully, these situations occur much less frequently than an AppDomain shutdown.

Many of you might be scratching your head thinking it seems weird to use a web server to perform recurring background tasks. That’s not really what a web server is for. You’re absolutely right. My recommendation is to do one of the following instead:

  • Write a simple console app and schedule it using the Windows Task Scheduler.
  • Write a Windows Service to manage your recurring tasks.
  • Use an Azure worker or something similar.

Given that those are my recommendations, why am I still working on a system for scheduling recurring tasks within ASP.NET that handles web farms and AppDomain shutdowns, which I call WebBackgrounder (NuGet package coming later)?

I mean, besides the fact that I’m thick-headed? Well, for two reasons.

The first is to make development easier. When you get the latest source code, I just want everything to work. I don’t want you to have to set up a scheduled task, an Azure worker, or a Windows Service on your development box. A development environment can tolerate the issues I described.

The second reason is for simplicity. If you’re ok with the limitations I mentioned, this approach has one less moving part to worry about when setting up a website. There’s no need to configure an external recurring task. It just works.

But mostly, it’s because I like to live life on the edge.

Technorati Tags: asp.net,appdomain,web farms,background,threads,tasks

personal, code comments edit

Today, October 15 2011, marks four years of being a Microsoft employee for me. As such, it’s time for a little introspection, but in many ways, Tim Heuer already introspected for me. Much of what he writes echoes my own experience, thus leaving me with less to write about. Smile

It’s the Microsoft way, or the highway. Which is conveniently located near Microsoft Way. - Photo by Todd Bishop, CC BY 2.0

Looking back in my archives, I realized I haven’t written a whole lot about what it’s like to work here. I do have a few posts such as…

Regarding the second post, the funny thing is you never stop drinking from the fire hose here. At least I haven’t yet. And that’s a good thing, but it can be wearying at times. I’ve been taking a lot of mini-vacations lately to keep my sanity.

So far, the past four years have been a real blast.

I’ve had a passion for open source for a very long time. When I started, ASP.NET MVC 1.0 was just getting going as a proprietary project. Very few teams at that time released their source code, much less under a proper OSS license. Although it involved sitting in a lot of meetings with lawyers, reviewing lots of dry legal documents, I loved the opportunity to help drive the process to get ASP.NET MVC licensed under the Ms-PL, a liberal OSI certified open source license. Announcing that release was a happy day for me.

Since that time, we’ve shipped four RTM releases of ASP.NET MVC (recall that we released ASP.NET MVC 3 twice), each incorporating more and more third-party open source software, another milestone. As they say, shipping is a feature! And the ASP.NET team, and the MVC team in particular, are all really great people to work with. Hat tip to them all!

In this time, I’ve also had the great pleasure to work on NuGet from its inception. NuGet goes one step further in that it’s not only an open source project under the Apache v2 license, but it accepts contributions. Microsoft contributed NuGet to the Outercurve Foundation (an independent open source foundation, not unlike the Apache and Eclipse foundations, that recently received its non-profit status!) early in its life, allowing it to flourish as an open source project.

Microsoft has a team of employees who are allowed to spend work-time contributing to the NuGet project. All development, issue tracking, and discussion occurs in a public location, the NuGet CodePlex site, with Mercurial (hg) as the source code repository.

The NuGet team is a dedicated group of folks and it’s really a joy to work with them. The community involvement and feedback has been tremendous. I used to tell folks that I wanted to work on open source as my day job. That wish came true. It’s a real joy.

The past four years have also had their fair share of challenges behind the scenes. When you work with the intensely smart folks that I have the pleasure to work with, it’s hard not to feel the effects of the Impostor Syndrome, as Hanselman writes. And with everything I work on, I have a keen eye for all the shortcomings and faults in what we produce. I find it hard to tout our releases as much as I could because I always wish we could have done better. Perhaps it’s because I’m a perfectionist, or more likely due to my half-Asian upbringing, but I’m forced to find fault in everything.

Despite all that, I do take pride in the great work that the teams I work with have done. Their efforts have been tremendous and they deserve all the credit. I only hope my contributions were indeed contributions and not just overhead.

And I want to thank many of you, who’ve offered us encouragement and constructive criticism via Twitter, your blogs, my blog, our forums, StackOverflow, and so on. All of it is very much appreciated! Without your help, I don’t think I would have done as well in the past year. Don’t be surprised when I come knocking on your door asking for more help. Did I mention already that NuGet accepts contributions?

asp.net mvc, asp.net, code comments edit

A long while ago I wrote about the potential dangers of Cross-site Request Forgery attacks, also known as CSRF or XSRF. These exploits are a form of confused deputy attack.

Screen grab from The Police Academy movie.

In that post, I covered how ASP.NET MVC includes a set of anti-forgery helpers to help mitigate such exploits. The helpers include an HTML helper meant to be called in the form, which renders a hidden input, and an attribute applied to the controller action you want to protect. These helpers work great in the typical scenario of an HTML form posting to an action method.

But what if your HTML page posts JSON data to an action instead of posting a form? How do these helpers help in that case?

You can try to apply the ValidateAntiForgeryTokenAttribute attribute to an action method, but it will fail every time if you try to post JSON encoded data to the action method. On one hand, the most secure action possible is one that rejects every request. On the other hand, that’s a lousy user experience.

The problem lies in the fact that, under the hood, deep within the call stack, the attribute peeks into the Request.Form collection to grab the anti-forgery token. But when you post JSON encoded data, there is no form collection to speak of. We hope to fix this at some point with a more flexible set of anti-forgery helpers. But for the moment, we’re stuck with this.

This problem became evident to me after I wrote a proof-of-concept library to call ASP.NET MVC action methods from JavaScript in an easy manner. The JavaScript helpers I wrote post JSON to action methods in order to call the actions. So I set out to fix this in my CodeHaacks project.

There are two parts to tackling this problem. The first part is on the client side, where we need to generate and send the token to the server. To generate the token, I just use the existing @Html.AntiForgeryToken helper in the view. A little bit of jQuery code grabs the value of that token.

var token = $('input[name="__RequestVerificationToken"]').val();

That’s easy. Now that I have the value, I just need a way to post it to the server. I choose to add it to the request headers. In vanilla jQuery (mmmm, vanilla), that looks similar to:

var headers = {};
// other headers omitted
headers['__RequestVerificationToken'] = token;

$.ajax({
  cache: false,
  dataType: 'json',
  type: 'POST',
  headers: headers,
  data: window.JSON.stringify(obj),
  contentType: 'application/json; charset=utf-8',
  url: '/some-url'
});

Ok, so far so good. This will generate the token in the browser and send it to the server, but we have a problem here. As I mentioned earlier, the existing attribute which validates the token on the server won’t look in the header. It only looks in the form collection. Uh oh! It’s Haacking time! I’ll write a custom attribute called ValidateJsonAntiForgeryTokenAttribute.

This attribute will call into the underlying anti-forgery code, but we need to get around that form collection issue I mentioned earlier.

Peeking into Reflector, I looked at the implementation of the regular attribute and followed its call stack. It took me deep into the bowels of the System.Web.WebPages.dll assembly, which contains a method with the following signature that does the actual work to validate the token:

public void Validate(HttpContextBase context, string salt);

Score! The method takes in an instance of type HttpContextBase, which is an abstract base class. That means we can intercept that call and provide our own instance of HttpContextBase to validate the anti-forgery token. Yes, I provide a forgery of the request to enable the anti-forgery helper to work. Ironic, eh?

Here’s the custom implementation of the HttpContextBase class. I wrote it as a private inner class to the attribute.

private class JsonAntiForgeryHttpContextWrapper : HttpContextWrapper {
  readonly HttpRequestBase _request;
  public JsonAntiForgeryHttpContextWrapper(HttpContext httpContext)
    : base(httpContext) {
    _request = new JsonAntiForgeryHttpRequestWrapper(httpContext.Request);
  }

  public override HttpRequestBase Request {
    get {
      return _request;
    }
  }
}

private class JsonAntiForgeryHttpRequestWrapper : HttpRequestWrapper {
  readonly NameValueCollection _form;

  public JsonAntiForgeryHttpRequestWrapper(HttpRequest request)
    : base(request) {
    _form = new NameValueCollection(request.Form);
    if (request.Headers["__RequestVerificationToken"] != null) {
      _form["__RequestVerificationToken"] 
        = request.Headers["__RequestVerificationToken"];
    }
  }

  public override NameValueCollection Form {
    get {
      return _form;
    }
  }
}

In general, you can get into all sorts of trouble when you hack around with the http context. But in this case, I’ve implemented a wrapper for a tightly constrained scenario that defers to the default implementation for most things. The only thing I override is the request form. As you can see, I copy the form into a new NameValueCollection instance and, if there is a request verification token in the header, I copy that value into the form too. I then use this modified collection as the Form collection.

Simple, but effective.

The custom attribute follows the basic implementation pattern of the regular attribute, but uses these new wrappers.

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, 
    AllowMultiple = false, Inherited = true)]
public class ValidateJsonAntiForgeryTokenAttribute : 
    FilterAttribute, IAuthorizationFilter {
  public void OnAuthorization(AuthorizationContext filterContext) {
    if (filterContext == null) {
      throw new ArgumentNullException("filterContext");
    }

    var httpContext = new JsonAntiForgeryHttpContextWrapper(HttpContext.Current);
    AntiForgery.Validate(httpContext, Salt ?? string.Empty);
  }

  public string Salt {
    get;
    set;
  }
  
  // The private context classes go here
}

With that in place, I can now decorate action methods with this new attribute and it will work in both scenarios, whether I post a form or post JSON data. I updated the client script library for calling action methods to accept a second parameter, includeAntiForgeryToken, which causes it to add the anti-forgery token to the headers.
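
For completeness, here’s roughly what the server side looks like once the attribute exists. The controller and action names below are made up for illustration; the important part is that the same attribute now covers both form posts and JSON posts.

public class CommentsController : Controller
{
    [HttpPost]
    [ValidateJsonAntiForgeryToken]
    public ActionResult AddComment(Comment comment)
    {
        // If we get this far, the anti-forgery token was found either in the
        // form collection (a normal form post) or in the request header (a JSON post).
        return Json(new { success = true, title = comment.Title });
    }
}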

As always, the source code is up on Github with a sample application that demonstrates usage of this technique and the assembly is in NuGet with the package id “MvcHaack.Ajax”.

asp.net, asp.net mvc, code comments edit

Go that way instead - Photo by JacobEnos, CC (some rights reserved)

Update: It looks like ASP.NET 4.5 adds the ability to suppress forms authentication redirect now with the HttpResponse.SuppressFormsAuthenticationRedirect property.

In an ASP.NET web application, it’s very common to write some jQuery code that makes an HTTP request to some URL (a lightweight service) in order to retrieve some data. That URL might be handled by an ASP.NET MVC controller action, a Web API operation, or even an ASP.NET Web Page or Web Form. If it can return curly brackets, it can respond to a JavaScript request for JSON.

One pain point when hosting lightweight HTTP services on ASP.NET is making a request to a URL that requires authentication. Let’s look at a snippet of jQuery to illustrate what I mean. The following code makes a request to /admin/secret/data. Let’s assume that URL points to an ASP.NET MVC action with the AuthorizeAttribute applied, which requires that the request must be authenticated.

$.ajax({
    url: '/admin/secret/data',
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    statusCode: {
        200: function (data) {
            alert('200: Authenticated');
            // Bind the JSON data to the UI
        },
        401: function (data) {
            alert('401: Unauthenticated');
            // Handle the 401 error here.
        }
    }
});

If the user is not logged in when this code executes, you would expect that the 401 status code function would get called. But if forms authentication (often called FormsAuth for short) is configured, that isn’t what actually happens. Instead, you get a 200 with the contents of the login page (or a 404 if you don’t have a login page). What gives?

If you crack open Fiddler, it’s easy to see the problem. Instead of the request returning an HTTP 401 Unauthorized status code, it instead returns a 302 pointing to a login page. This causes jQuery (well, actually the XmlHttpRequest object) to automatically follow the redirect and issue another request to the login page. The login page handles this new request and returns its contents with a 200 status code. This is not the desired result, as the code expects JSON data to be returned in response to a 200 code, not HTML for the login page.

This “helpful” behavior when requesting a URL that requires authentication is a consequence of having the FormsAuthenticationModule enabled, which is the default in most ASP.NET applications. Under the hood, the FormsAuthenticationModule hooks into the request pipeline and changes any request that returns a 401 status code into a redirect to the login page.

Possible Solutions

I’m going to cover a few possible solutions I’ve seen around the web and then present the one that I prefer. It’s not that these other solutions are wrong, but they are only correct in some cases.

Remove Forms Authentication

If you don’t need FormsAuth, one simple solution is to remove the forms authentication module as this post suggests. This is a great solution if your sole purpose is to use ASP.NET to host a Web API service and you don’t need forms authentication. But it’s not a great solution if your app is both a web application and a web service.

Register an HttpModule to convert Redirects to 401

This blog post suggests registering an HTTP Module that converts any 302 response to a 401. There are two problems with this approach. The first is that it breaks the case where the redirect is legitimate and not the result of FormsAuth. The second is that it requires manual configuration of an HttpModule.

Install-Package MembershipService.Mvc

My colleague, Steve Sanderson, has an even better approach with his MembershipService.Mvc and MembershipService.WebForms NuGet packages. These packages expose ASP.NET Membership as a service that you can call from multiple devices.

For example, if you want your Windows Phone application to use an ASP.NET website’s membership system to authenticate users of the application, you’d use his package. He provides the MembershipClient.WP7 and MembershipClient.JavaScript packages for writing clients that call into these services.

These packages deserve a blog post in their own right, but I’m going to just focus on the DoNotRedirectLoginModule he wrote. His module takes a similar approach to the previous one I mentioned, but he checks for a special value in HttpContext.Items, a dictionary for storing data related to the current request, before reverting a redirect back to a 401.

To prevent a FormsAuth redirect, an action method (or ASP.NET page or Web API operation) would simply call the helpful method DoNotRedirectToLoginModule.ApplyForRequest. This sets the special token in HttpContext.Items and the module will rewrite a 302 that’s redirecting to the login page back to a 401.

My Solution

Steve’s solution is a very good one. But I’m particularly lazy and didn’t want to have to call that method on every action when I’m writing an Ajax-heavy application. So what I did was write a module that hooks into two events in the request lifecycle.

The first event, PostReleaseRequestState, occurs after authentication, but before the FormsAuthenticationModule converts the status to a 302. In the event handler for this event, I check to see if the request is an Ajax request by checking that the X-Requested-With request header is “XMLHttpRequest”.

If so, I store away a token in the HttpContext.Items like Steve does. Then in the EndRequest event handler, I check for that token, just like Steve does. Inspired by Steve’s approach, I added a method to allow explicitly opting into this behavior, SuppressAuthenticationRedirect.

Here’s the code for this module. Warning: Consider this “proof-of-concept” code. I haven’t tested this thoroughly in a wide range of environments.

public class SuppressFormsAuthenticationRedirectModule : IHttpModule {
  private static readonly object SuppressAuthenticationKey = new Object();

  public static void SuppressAuthenticationRedirect(HttpContext context) {
    context.Items[SuppressAuthenticationKey] = true;
  }

  public static void SuppressAuthenticationRedirect(HttpContextBase context) {
    context.Items[SuppressAuthenticationKey] = true;
  }

  public void Init(HttpApplication context) {
    context.PostReleaseRequestState += OnPostReleaseRequestState;
    context.EndRequest += OnEndRequest;
  }

  private void OnPostReleaseRequestState(object source, EventArgs args) {
    var context = (HttpApplication)source;
    var response = context.Response;
    var request = context.Request;

    if (response.StatusCode == 401 && request.Headers["X-Requested-With"] == 
      "XMLHttpRequest") {
      SuppressAuthenticationRedirect(context.Context);
    }
  }

  private void OnEndRequest(object source, EventArgs args) {
    var context = (HttpApplication)source;
    var response = context.Response;

    if (context.Context.Items.Contains(SuppressAuthenticationKey)) {
      response.TrySkipIisCustomErrors = true;
      response.ClearContent();
      response.StatusCode = 401;
      response.RedirectLocation = null;
    }
  }

  public void Dispose() {
  }

  public static void Register() {
    DynamicModuleUtility.RegisterModule(
      typeof(SuppressFormsAuthenticationRedirectModule));
  }
}

There’s a package for that

Warning: The following is proof-of-concept code I’ve written. I haven’t tested it thoroughly in a production environment and I don’t provide any warranties or promises that it works and won’t kill your favorite pet. You’ve been warned.

Naturally, I’ve written a NuGet package for this. Simply install the package and all Ajax requests that set that header (if you’re using jQuery, you’re all set) will not be redirected in the case of a 401.

Install-Package AspNetHaack

Note that the package adds a source code file in App_Start that wires up the http module that suppresses redirect. If you want to turn off this behavior temporarily, you can comment out that file and you’ll be back to the old behavior.
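
I haven’t reproduced the exact file the package drops in here, but the wiring looks something like the following sketch. The class name and the namespace in the using directive are my guesses rather than what the package actually generates.

using AspNetHaack; // assumed namespace; adjust to wherever the module actually lives

[assembly: WebActivator.PreApplicationStartMethod(
  typeof(SuppressRedirectAppStart), "Start")]

public static class SuppressRedirectAppStart
{
    public static void Start()
    {
        // Registers the http module before the app handles its first request,
        // so there is nothing to add to web.config.
        SuppressFormsAuthenticationRedirectModule.Register();
    }
}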

The source code for this is in Github as part of my broader CodeHaacks project.

Why don’t you just fix the FormsAuthenticationModule?

We realize this is a deficiency with the forms authentication module and we’re looking into hopefully fixing this for the next version of the Framework.

Update: As I stated at the beginning, a new property added in ASP.NET 4.5 supports doing this.

Tags: asp.net, aspnetmvc, formsauth, membership

asp.net mvc, asp.net, nuget comments edit

NOTE: This blog post covers features in a pre-release product, ASP.NET MVC 4 Developer Preview. You’ll see we call out those two words a lot to cover our butt. The specifics about the feature will change and this post will become outdated. You’ve been warned.

All good recipes call for a significant amount of garlic.

Introduction

Last week I spoke at the //BUILD conference on building mobile web applications with ASP.NET MVC 4. In the talk, I demonstrated a recipe I wrote that automates the process to create mobile versions of desktop views.

view-mobilizer-recipe

Recipes are a great way to show off a lack of UI design skills like mine!

In this blog post, I’ll walk through the basic steps to write a recipe. But first, what exactly is a recipe?

Obviously I’m not talking about the steps it takes to make a meatloaf surprise. In the roadmap, I described a recipe as:

An ASP.NET MVC 4 recipe is a dialog box delivered via NuGet with associated user interface (UI) and code used to automate a specific task.

If you’re familiar with NuGet, you know that a NuGet package can add new Powershell commands to the Package Manager Console. You can think of a recipe as a GUI equivalent to the commands that a package can add.

It fits so well with NuGet that we plan to add recipes as a feature of NuGet (probably with a different name if we can think of a better one) so that it’s not limited to ASP.NET MVC. We did the same thing with pre-installed NuGet packages for project templates which started off as a feature of ASP.NET MVC too. This will allow developers to write recipes for other project types such as Web Forms, Windows Phone, and later on, Windows 8 applications.

Getting Started

Recipes are assemblies that are dynamically loaded into Visual Studio by the Managed Extensibility Framework, otherwise known as MEF. MEF provides a plugin model for applications and is one of the primary ways to extend Visual Studio.

The first step is to create a class library project which compiles our recipe assembly. The set of steps we’ll follow to write a recipe are:

  1. Create a class library
  2. Reference the assembly that contains the recipe framework types. At this time, the assembly is Microsoft.VisualStudio.Web.Mvc.Extensibility.1.0.dll, but this may change in the future.
  3. Write a class that implements the IRecipe interface or one of the interfaces that derive from IRecipe such as IFolderRecipe or IFileRecipe. These interfaces are in the Microsoft.VisualStudio.Web.Mvc.Extensibility.Recipes namespace. The Developer Preview only supports the IFolderRecipe interface today. These are recipes that are launched from the context of a folder. In a later preview, we’ll implement IFileRecipe which can be launched in the context of a file.
  4. Implement the logic to show your recipe’s dialog. This could be a Windows Forms dialog or a Windows Presentation Foundation (WPF) dialog.
  5. Add the MEF ExportAttribute to the class to export the IRecipe interface.
  6. Package up the whole thing in a NuGet package and make sure the assembly ends up in the recipes folder of the package, rather than the usual lib folder.

The preceding list of steps itself looks a lot like a recipe, doesn’t it? It might be natural to expect that I wrote a recipe to automate those steps. Sorry, no. But what I did do to make it easier to build a recipe was write a NuGet package.

Why didn’t I write a recipe to write a recipe (inception!)? Recipes add a command intended to be run more than once during the life of a project. But that’s not the case here, as setting up the project as a recipe is a one-time operation. In this particular case, a NuGet package is sufficient because it doesn’t make sense to convert a class library project into a recipe over and over again.

That’s the logic I use to determine whether I should write a recipe as opposed to a regular NuGet package. If it’s something you’ll do multiple times in a project, it may be a candidate for a recipe.

A package to create a recipe

To help folks get started building recipes, I wrote a NuGet package, AspNetMvc4.RecipeSdk. And as I did in my //BUILD session, I’m publishing this live right now! Install this into an empty Class Library project to set up everything you need to write your first recipe.

The following screenshot shows an example of a class library project after installing the recipe SDK package.

recipe-class-lib

Notice that it adds a reference to the Microsoft.VisualStudio.Web.Mvc.Extensibility.1.0.dll assembly and adds a MyRecipe.cs file and a MyRecipe.nuspec file. It also added a reference to System.Windows.Forms.

Feel free to rename the files it added appropriately. Be sure to edit the MyRecipe.nuspec file with metadata appropriate to your project.

The interesting stuff happens within MyRecipe.cs. The following shows the default implementation added by the package.

using System;
using System.ComponentModel.Composition;
using System.Drawing;
using Microsoft.VisualStudio.Web.Mvc.Extensibility;
using Microsoft.VisualStudio.Web.Mvc.Extensibility.Recipes;

namespace CoolRecipe {
    [Export(typeof(IRecipe))]
    public class MyRecipe : IFolderRecipe {
        public bool Execute(ProjectFolder folder) {
            throw new System.NotImplementedException();
        }

        public bool IsValidTarget(ProjectFolder folder) {
            throw new System.NotImplementedException();
        }

        public string Description {
            get { throw new NotImplementedException(); }
        }

        public Icon Icon {
            get { throw new NotImplementedException(); }
        }

        public string Name {
            get { throw new NotImplementedException(); }
        }
    }
}

Most of these properties are self-explanatory. They provide metadata for a recipe that shows up when a user launches the Add recipe dialog.

The two most interesting methods are IsValidTarget and Execute. The first method determines whether the folder that the recipe is launched from is valid for that recipe. This allows you to filter recipes. For example, suppose your recipe only makes sense when launched from a view folder. You can implement that method like so:

public bool IsValidTarget(ProjectFolder folder) {
    return folder.IsMvcViewsFolderOrDescendent();
}

The IsMvcViewsFolderOrDescendent method is an extension method on the ProjectFolder type in the Microsoft.VisualStudio.Web.Mvc.Extensibility namespace.

The general approach we took was to keep the ProjectFolder interface generic and then add extension methods to layer on behavior specific to ASP.NET MVC. This provides a nice simple façade to the Visual Studio Design Time Environment (or DTE). If you’ve ever tried to write code against the DTE, you’ll appreciate this.

In this particular case, I recommend that you make the method always return true for now so your recipe shows up for any folder.

The other important method to implement is Execute. This is where the meat of your recipe lives. The basic pattern here is to create a Windows Form (or WPF Form) to display to the user. That form might contain all the interactions that a user needs, or it might gather data from the user and then perform an action. Here’s the code I used in my MVC 4 Mobilizer recipe.

public bool Execute(ProjectFolder folder) {
    var model = new ViewMobilizerModel(folder);
    var form = new ViewMobilizerForm(model);

    var result = form.ShowDialog();
    if (result == DialogResult.OK) {
        // DO STUFF with info gathered from the form
    }
    return true;
}

I create a form, show it as a dialog, and when it returns, I do stuff with the information gathered from the form. It’s a pretty simple pattern.

Packaging it up

Packaging this up as a NuGet package is very simple. I used NuGet.exe to run the following command:

nuget pack MyRecipe.nuspec

If it were any easier, it’d be illegal! I can now run the nuget push command to upload the recipe package to NuGet.org and make it available to the world. In fact, I did just that live during my presentation at BUILD.
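
For reference, the push is a one-liner as well. The package file name below is hypothetical (it depends on the id and version in your nuspec), and it assumes you’ve already stored your NuGet.org API key with nuget setApiKey.

nuget push MyRecipe.1.0.nupkg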

Using a recipe

To install the recipe, right click on the solution node in Solution Explorer and select Manage NuGet Packages.

Find the package and click the Install button. This installs the NuGet package with the recipe into the solution.

To run the recipe, right click on a folder and select Add > Run Recipe.

run-recipe-menu

Right now you’re probably thinking that’s an odd menu option to launch a recipe. And now you’re thinking, wow, did I just read your mind? Yes, we agree that this is an odd menu option. Did I mention this was a Developer Preview? We plan to change how recipes are launched. In fact, we plan to change a lot between now and the next preview. At the moment, we think moving recipes to a top level menu makes more sense.

The Run Recipe menu option displays the recipe dialog with a list of installed recipes that are applicable in the current context.

run-recipe-dialog

As you can see, I only have one recipe in my solution. Note that a recipe can control its own icon.

Select the recipe and click the OK button to launch it. This then calls the recipe’s Execute method which displays the UI you implemented.

Get the source!

Oh, and before I forget, the source code for the Mobilizer recipe is available on Github as part of my CodeHaacks project!