code

While attending Oredev 2011, I had an interesting conversation with Corey Haines about his perception of the Ruby community as compared to the .NET community.

One thing he suggested is that the .NET community seems a bit insular and self-isolating. He noted that when he attended .NET user groups, he only saw folks he knew to be .NET developers. But when he attends Ruby, Scala, NodeJS, Erlang, etc. user groups, he sees many of the same people at these meetups.

While I’m not completely against identifying oneself as a .NET developer to indicate your primary focus, I do see what Corey is getting at. Rather than only seeing ourselves as .NET developers, it’s just as important to also see ourselves as software developers.

We should recognize that we’re part of this larger cosmopolitan software community. We have a lot to learn from this greater community. Just as importantly, our community also has much to offer to the larger community!

As a good friend once told me, a rising tide lifts all boats. The interchange of ideas between these disparate technology communities can only result in good things for everyone.

I’ve been grateful that folks such as Nick Quaranto have this view. Although he’s one of those hippie Ruby folks and runs RubyGems.org (which some might see as a competitor to NuGet), he’s been extremely helpful and generous with advice for the NuGet team. To me, that’s what community is about. Not isolating oneself from ideas simply because they come from someone who’s eschewed curly braces.

The good news is that I think the .NET community is actually further along in this than it gets credit for. Podcasts such as Herding Code have a very polyglot bent. Even .NET Rocks, seen as the bastion of .NET, has recently expanded its archive with topics such as node.js and Modernizr.

So if you identify yourself as a .NET developer, well, you’re in good company. There are a lot of interesting .NET developers around. At the same time, I encourage you to reach across the aisle and learn a thing or two about a different technology. Maybe even hoist a beer with one of those hippie Rubyists or smug Clojure developers!

After all, someday we’re all going to end up as JavaScript developers anyways.

code, personal

Once in a while folks ask me for details about the hardware and software that hosts my blog. Rather than write about it, a photo can provide all the details that you need.

There you have it.


Well actually™, my blog runs on a bit more hardware than that these days. Especially after the Great Hard-Drive Failure of 2009. As longtime readers of my blog might remember, nearly two years ago this blog went down in flames due to a faulty hard-drive on the hosting server.

My hosting provider, CrystalTech (now rebranded to be the Web Services home of The Small Business Authority), took regular backups of the server, but I hosted my blog in a virtual machine. As it turns out, the backups did not include the VM because it was always “in use”. To back up a virtual machine, the backup process needs to take special action to ensure that works.

Today, I still host with CrystalTech in large part due to their response to the great hard-drive meltdown. First and foremost, they didn’t jump to blame me. They focused on fixing the problem at hand. In the past, I’ve hosted with other providers who excelled at making you feel that anything wrong was your fault. Ever been in a relationship like that?

Once things were settled, they worked with me to figure out what systematic changes they should make to ensure this sort of thing doesn’t happen again. Hard drives will fail. You can’t prevent that. But you can ensure that the data customers care about are backed up and verified.

Not only that, they hooked me up with a pretty nice new dedicated server. :)

Even though they now are prepared to ensure VMs are backed up, I now host on bare metal, in part because my other tenant moved off of the server so I don’t really need to share it anymore. All miiiiiine!


  • Case: 2U dedicated server
  • Processors: 2x Intel Xeon CPU 3.20 GHz (1 core, 2 logical processors) x64
  • Memory: 4.00 GB RAM
  • OS Hard Drive: C: 233 GB RAID 1 (2 physical drives)
  • Data Hard Drive: D: 467 GB RAID 5 (3 physical drives)


  • OS: Windows Server 2008 Datacenter SP2
  • Database: SQL Server 2008
  • Web Server: IIS 7 running ASP.NET 4
  • Blog: Subtext
  • Backup: In addition to the machine backups, I have a scheduled task that 7z-archives my web directories and also takes a SQL backup into a backups folder. Windows Live Mesh syncs those backup files to my home machine. (A sketch of what such a task might look like follows this list.)
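Here’s a rough sketch of what that scheduled task could look like as a small console app. The paths, database name, and 7-Zip location here are all made up for illustration; the real task is nothing fancier than a couple of script steps.

using System;
using System.Diagnostics;

class BackupTask
{
    static void Main()
    {
        // Hypothetical paths and database name, for illustration only.
        var stamp = DateTime.UtcNow.ToString("yyyyMMdd");

        // Archive the web directories with 7-Zip.
        Process.Start(@"C:\Program Files\7-Zip\7z.exe",
            string.Format(@"a D:\Backups\sites-{0}.7z D:\Sites\*", stamp))
            .WaitForExit();

        // Take a SQL backup into the same backups folder,
        // which Live Mesh then syncs to my home machine.
        Process.Start("sqlcmd",
            string.Format("-Q \"BACKUP DATABASE Subtext TO DISK = " +
                "'D:\\Backups\\subtext-{0}.bak'\"", stamp))
            .WaitForExit();
    }
}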

This server hosts the following sites:

For some of these sites, I plan to migrate them to other cloud based solutions. For example, rather than have my own NuGet feed, I’ll just use a hosted feed.

Even so, I plan to keep on this hardware for as long as The Small Business Authority lets me. It’s a great way for me to keep my system administration skills from completely atrophying and I like having a server at my disposal.

So thanks again to The Small Business Authority (though I admit, I liked CrystalTech as a name better :P) for hosting this blog! And thank you for reading!

personal

As I mentioned in my last post, I have an overnight stopover in Reykjavik, Iceland. After checking into my hotel at an ungodly early hour (which ended up being really late for me Seattle time), my first order of business was to head over to the Blue Lagoon.


No, not that Blue Lagoon! This one!


Look at that steam coming off the water! The Blue Lagoon is a geothermal spa with a 5000 square meter lagoon. The water comes from a nearby geothermal plant and is renewed every two days. According to Wikipedia,

Superheated water is vented from the ground near a lava flow and used to run turbines that generate electricity. After going through the turbines, the steam and hot water passes through a heat exchanger to provide heat for a municipal hot water heating system. Then the water is fed into the lagoon for recreational and medicinal users to bathe in.

Yes, the thought of being cooked in superheated water did cross my mind since my manager reminded me of a scene from some crappy movie where that happened. Fortunately, that did not happen.

This method of heating the lagoon is just one example of how Iceland gets a lot of its power from the heat within the Earth. From another Wikipedia article, emphasis mine,

Five major geothermal power plants exist in Iceland, which produce approximately 26.2% (2010) of the nation’s energy. In addition, geothermal heating meets the heating and hot water requirements of approximately 87% of all buildings in Iceland. Apart from geothermal energy, 75.4% of the nation’s electricity was generated by hydro power, and 0.1% from fossil fuels.

It’s pretty impressive. They plan to go 100% fossil-fuel-free in the near future. Of course, the one downside is that the water here smells slightly of sulfur. I actually don’t mind it.

The spa provides buckets of white silica gel you can put on your face to exfoliate your skin. I found that the sleet being whipped around at 35 miles per hour did a fine job of exfoliating my skin. It nearly exfoliated it off of my face.

Though I have to admit, that was part of the fun. It was novel to be swimming outdoors in November with sleet and wind pouring down, but nice and warm within the heated waters.

I even had time to stop at a waterside bar for a drink.


A good part of the drive to the Lagoon is through a vast lava field that is reminiscent of the photos sent back by the Mars Rover. It’s very easy to catch a bus from your hotel or from the airport to get there and they provide lockers as well as towel and even swimwear rental. They really make it easy to take a quick jaunt over there if you’re just on a stopover in Iceland.

Now I’m warm and dry in my hotel room planning my next step. I would like to do some sightseeing before I meet folks at the bar, but I also like remaining warm and dry. Conundrums!

I think I’ll venture out now and report back later. If you ever find yourself with a long stopover in Iceland, do visit the Blue Lagoon.

personal

If you’re in the Reykjavik area on November 7th, come join me for a beer-up. A Beer-Up is basically a meet-up, but with lots of beer!

  • When: November 7th, 2011 at 8:00 PM
  • Where: The English Pub (yes, I went all the way to Iceland for an English pub)
  • Why: To talk about ASP.NET, ASP.NET MVC, NuGet, software development, or whatever geeky topics you want. And if we do our jobs right, by the end of the night we’ll discuss life, philosophy, and which direction is my hotel?

Blue Lagoon in Iceland

I’ll be stopping overnight in Reykjavik on my way to Oredev 2011. I’m pretty excited as I’ve always been fascinated by the natural beauty of such a geologically active place. I definitely plan to see the Blue Lagoon geothermal spa (pictured above) during my stay.

If you’re in the area and love to talk about coding, technology, whatever, do join us!

nuget, open source

We made a recent change to make it easy to update the NuGet documentation. In this post, I’ll cover what the change was, why we made it, and how it makes it easier to contribute to our documentation.

Our docs run as a simple ASP.NET Web Pages application that renders documentation written in the Markdown format. The Markdown text is not stored in a database; it lives in files that are part of the application source code. That allows us to use source control to version our docs.
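To make that concrete, here’s a minimal sketch of how such a page might work. This is not the actual NuGetDocs code; the MarkdownSharp library, the file layout, and the query parameter are all assumptions for illustration.

@using System.IO
@using MarkdownSharp
@{
    // Hypothetical: load a Markdown file that ships with the app's source
    // and transform it to HTML. A real implementation would validate "page".
    var page = Request["page"] ?? "index";
    var path = Server.MapPath("~/site/" + page + ".markdown");
    var markdown = new Markdown();
}
@Html.Raw(markdown.Transform(File.ReadAllText(path)))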

We used to host the source for the docs site in Mercurial (hg) on CodePlex. Under the old system, it took the following to contribute docs.

  1. Install Mercurial (TortoiseHG for example) if you didn’t already have it.
  2. Fork our repository and clone it to your local machine.
  3. Open it up in Visual Studio.
  4. Make and commit your changes.
  5. Push your changes.
  6. Send a pull request.

It’s no surprise that we don’t get a lot of pull requests for our documentation. Oh, and I didn’t even mention all the steps once we received such a pull request.

As anyone who’s ever run an open source product knows, it’s hard enough to get folks to contribute to documentation in the first place. Why add more roadblocks?

To improve this situation, we moved our documentation repository to Github for three reasons:

  1. In-browser editing of files with Markdown preview.
  2. Pull requests can be merged at the click of a button.
  3. Support for deploying to AppHarbor (which CodePlex also has)

With this in place, it’s now easy to be a “drive-by” contributor to our docs. Let’s walk through an example to see what I mean. In this example, I’m posing as a guy named “FakeHaacked” with no commit access to the NuGetDocs repository.

Here’s a sample page from our docs (click for larger). The words at the end of the first paragraph should be links! Doh! I should fix that.


First, I’ll visit the NuGet Docs repository (TODO: Add a link to each page with the path to the Github repository page).


Cool, I’m logged in as FakeHaacked. Now I just need to navigate to the page that needs the correction. All of the documentation pages are in the site folder.

Pro tip: type the letter “t” while on this page to use incremental search to find the page you want to edit.

Here’s the page I want to edit.


Since this file is a Markdown file, I can see a view of the file that’s a nice approximation of what it will look like when it’s deployed. It’s not exactly the same since we have different CSS styles on the production site.

See that blue button just above the content and to the right? That allows me to “fork” this page and edit the file. Forking it, for those not familiar with distributed version control, means it will create a clone of the main repository. I’m free to work and do whatever I want in that clone without worry that I will affect the real thing.

Let’s click that button and see what happens.


Cool, I get a nice editor with preview for editing the page right here in the browser. I’ll go ahead and make those last two references into Markdown formatted links.

When I’m done, I can scroll down, type in a commit message describing the change, and click the blue Propose File Change button.


Once you’re happy with the set of changes you’ve made, click the button to send a pull request. This lets the folks who maintain the documentation know you have changes that are ready for them to pull in.


And that’s it. You’ve done your part. Thank you for your contribution to our docs! Now let’s look at what happens on the other side. I’ll put on my project maintainer hat and visit the site. Notice I’m logged in as Haacked now and I see there’s an outstanding pull request.

Cool, I can take a look at it, quickly see a diff, and comment on it. Notice that Github was able to determine that this file is safe to automatically merge into the master branch.


All I have to do is click the big blue button, enter a message, and I’m done!


It’s that easy for me to merge in your changes.


You might ask why we don’t use the Github Pages feature (or even Git-backed wikis). We started the docs site before we were on Github and didn’t know about the pages feature.

If I were to start over, I’d probably just use that. Maybe we’ll migrate in the future. One benefit of our current implementation is we get that nice Table of Contents widget generated for us dynamically (which we can probably do with Github Pages and Jekyll) and we can use Razor for our layout template.

The downside of our current approach is that we can’t create new doc pages this way, but I’ll submit a feature request to the Github team and see what happens.

So if you are reading the NuGet docs, and see something that makes you think, “That ain’t right!”, please go fix it! It’s easy and contributing to open source documentation makes you a good person. It’s how I got started in open source.

Oh, and if you happen to be experienced with Git, you can always use the traditional method of cloning the repository to your local machine and making changes. That gives you the benefit of running the site to look at your change.

nuget

Recently, a group of covert ninjas within my organization started to investigate what it would take to change our internal build and continuous integration systems (CI) to take advantage of NuGet for many of our products, and I need your input!

Hmm, off by one error slays me again. - Image from Ask A Ninja.

Ok, they’re not really covert ninjas, that just sounds much cooler than a team of slightly pudgy software developers. Ok, they’ve told me to speak for myself, they’re in great shape!

In response to popular demand, we changed our minds and decided to support Semantic Versioning (SemVer) as the means to specify pre-release packages for the next version of NuGet (1.6).

In part, this is the cause of the delay for this release as it required extensive changes to NuGet and the gallery. I will write a blog post later that covers this in more detail, but for now, you can read our spec on it which is mostly up to date. I hope.

I’m really excited to change our own build to use NuGet because it will force us to eat our own dogfood and feel the pain that many of you feel with NuGet in such scenarios. Until we feel that pain, we won’t have a great solution to the pain.

A really brief intro to SemVer

You can read the SemVer spec here, but in case you’re lazy, I’ll provide a brief summary.

SemVer is a convention for versioning your public APIs that gives meaning to the version number. Each version has three parts, Major.Minor.Patch.

In brief, these correspond to:

  • Major: Breaking changes.
  • Minor: New features, but backwards compatible.
  • Patch: Backwards compatible bug fixes only.

Additionally, pre-release versions of your API can be denoted by appending a dash and an arbitrary string after the Patch number. For example:

  • 1.0.1-alpha
  • 1.0.1-beta
  • 1.0.1-Fizzlebutt

When you’re ready to release, you just remove the pre-release part and that version is considered “higher” than all the pre-release versions. The pre-release versions are given precedence in alphabetical order (well technically lexicographic ASCII sort order).
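Here’s a quick C# illustration of that ordering, using plain ordinal string comparison as a stand-in for NuGet’s actual pre-release comparer:

using System;
using System.Linq;

class PreReleaseOrderDemo
{
    static void Main()
    {
        // Ordinal (ASCII) comparison of pre-release labels.
        var labels = new[] { "rc", "alpha", "zeeistalmostdone", "beta", "alpha2" };
        var ordered = labels.OrderBy(label => label, StringComparer.Ordinal);

        // Prints: alpha < alpha2 < beta < rc < zeeistalmostdone
        Console.WriteLine(string.Join(" < ", ordered));
    }
}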

Therefore, the following is an example from lowest to highest versions of a package.

  • 1.0.1-alpha
  • 1.0.1-alpha2
  • 1.0.1-beta
  • 1.0.1-rc
  • 1.0.1-zeeistalmostdone
  • 1.0.1

How NuGet uses SemVer

As I mentioned before, I’ll write up a longer blog post about how SemVer figures into your package. For now, I just want to make it clear that if you’re using 4-part version numbers today, your packages will still work and behave as before.

It’s only when you specify a three-part version with a pre-release string that NuGet gets strict about SemVer. For example, NuGet allows 1.0.1-beta but does not allow a pre-release string on a four-part version such as 1.0.1.0-beta.

How to deal with nightly builds?

So the question I have is, how do we deal with nightly (or continuous) builds?

For example, suppose I start work on what will become 1.0.1-beta. Internally, I may post nightly builds of 1.0.1-beta for others in my team to use. Then at some point, I’ll stamp a release as the official 1.0.1-beta for public consumption.

The problem is, each of those builds needs to have the package version incremented. This ensures that folks can revert to a last-known-good nightly build if a problem comes up. SemVer doesn’t seem to address how to handle internal nightly (or continuous) builds. It’s really focused on public releases.

Note, we’re thinking about this for our internal setup, not for the public gallery. I’ll address that question later.

We had a few ideas in mind.

Stick with the previous version number and change labels just before release

The idea here is that while we’re working on 1.0.1-beta, we version the packages using the alpha label and increment it with a build number.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.1 (internal build)
  • 1.0.1-alpha.2 (internal build)

A variant of this approach is to append the date (and counter) in number format.

  • 1.0.1-alpha (public release)
  • 1.0.1-alpha.20101025001 (internal build)
  • 1.0.1-alpha.20101026001 (internal build on the next day)
  • 1.0.1-alpha.20101026002 (another internal build on the same day)

With this approach, when we’re ready to cut the release, we simply change the package to be 1.0.1-beta and release it.

The downside of this approach is that it’s not completely clear that these are internal nightly builds of what will be 1.0.1-beta. They could be builds of 1.0.1-alpha2.

Yet another variant of this approach is to name our public releases with an even Minor or Patch number and our internal releases with an odd one. So when we’re ready to work on 1.0.2-beta, we’d version the package as 1.0.1-beta. When we’re ready to release, we change it to 1.0.2-beta.

Have a separate feed with its own versioning scheme.

Another thought was to simply have a completely separate feed with its own versioning scheme. So you can choose to grab packages from the stable feed, or the nightly feed.

In the nightly feed, the package version might just be the date.

  • 2010.10.25001
  • 2010.10.25002
  • 2010.10.25003

The downside of this approach is that it’s not clear at all what release version these will apply to. Also, when you’re ready to promote one to the stable feed, you have to move it in there and completely change the version number.

Support an optional Build number for Semantic Versions

For NuGet 1.6, you can still use a four-part version number. But NuGet is strict if the version is clearly a SemVer version. For example, if you specify a pre-release string, such as 1.0.1-alpha, NuGet will not allow a fourth version part.

But we had the idea that we could extend SemVer to support this concept of a build number. This might be off by default in the public gallery, but could be something you could turn on individually. What it would allow you to do is continue to push new builds of 1.0.1-alpha with an incrementing build number. For example:

  • 1.0.1-beta.0001 (nightly)
  • 1.0.1-beta.0002 (nightly)
  • 1.0.1-beta.0003 (nightly)
  • 1.0.1-beta (public)

Note that unlike a standard 4-part version, 1.0.1-beta is a higher version than 1.0.1-beta.0003.

While I’m hesitant to suggest a custom extension to SemVer, it makes a lot of sense to me. After all, this would be a convention applied to internal builds and not for public releases.

Question for you

So, do you like any of these approaches? Do you have a better approach?

While I love comments on my blog, I would like to direct discussion to the NuGet Discussions page. I look forward to hearing your advice on how to handle this situation. Whatever we decide, we want to bake in first-class support into NuGet to make this sort of thing easier.

code

If you’re not familiar with WCF Web API, it’s a framework with nice HTTP abstractions used to expose simple HTTP services over the web. It’s targeted at applications that provide HTTP services for various clients such as mobile devices, browsers, and desktop applications.

In some ways, it’s similar to ASP.NET MVC as it was developed with testability and extensibility in mind. There are some concepts that are similar to ASP.NET MVC, but with a twist. For example, where ASP.NET MVC has filters, WCF has operation handlers.


One question that comes up often with Web API is how do you authenticate requests? Well, if you run Web API on ASP.NET (Web API also supports a self-host model), one approach you could take is to write an operation handler and attach it to a set of operations (an operation is analogous to an ASP.NET MVC action).

However, some folks like the ASP.NET MVC approach of slapping on an AuthorizeAttribute. In this blog post, I’ll show you how to write an attribute, RequireAuthorizationAttribute, for WCF Web API that does something similar.

One difference is that in the WCF Web API case, the attribute simply provides metadata, but not the behavior, for authorization. If you wanted to use the existing ASP.NET MVC AuthorizeAttribute in the same way, you could do that as well, but I leave that as an exercise for the reader.

I’ll start with the easiest part, the attribute.

[AttributeUsage(AttributeTargets.Method)]
public class RequireAuthorizationAttribute : Attribute {
    public string Roles { get; set; }
}

For now, it only applies to methods (operations). Later, we can update it to apply to classes as well if we so choose. I’m still learning the framework so I didn’t want to go bite off too much all at once.

The next step is to write an operation handler. When properly configured, the operation handler runs on every request for the operation that it applies to.

public class AuthOperationHandler
      : HttpOperationHandler<HttpRequestMessage, HttpRequestMessage> {
  RequireAuthorizationAttribute _authorizeAttribute;

  public AuthOperationHandler(RequireAuthorizationAttribute authorizeAttribute)
    : base("response") {
    _authorizeAttribute = authorizeAttribute;
  }

  protected override HttpRequestMessage OnHandle(HttpRequestMessage input) {
    IPrincipal user = Thread.CurrentPrincipal;
    if (!user.Identity.IsAuthenticated) {
      throw new HttpResponseException(HttpStatusCode.Unauthorized);
    }

    if (_authorizeAttribute.Roles == null) {
      return input;
    }

    var roles = _authorizeAttribute.Roles.Split(new[] { " " },
      StringSplitOptions.RemoveEmptyEntries);
    if (roles.Any(role => user.IsInRole(role))) {
      return input;
    }

    throw new HttpResponseException(HttpStatusCode.Unauthorized);
  }
}
Notice that the code accesses Thread.CurrentPrincipal to get the current user. My first pass at this used HttpContext.Current, which restricts the operation handler to only work within ASP.NET applications. Hey, I write what I know! Many folks replied to me that I should use Thread.CurrentPrincipal instead. My brain must have been off when I wrote this to not think of it. :)

Then all we do is ensure that the user is authenticated and in one of the specified roles if any role is specified. Very simple straightforward code at this point.

The final step is to associate this operation handler with some operations. In general, when you build a Web API application, the application author writes a configuration class that derives from WebApiConfiguration and either sets it as the default configuration, or passes it to a service route.

Within that configuration class, the author can specify an action that gets called on every request and gives the configuration class a chance to map a set of operation handlers to an operation.

For example, in a sample Web API app, I added the following configuration class.

public class CommentsConfiguration : WebApiConfiguration {
    public CommentsConfiguration() {
        EnableTestClient = true;

        RequestHandlers = (c, e, od) => {
            // TODO: Configure request operation handlers
        };

        this.AppendAuthorizationRequestHandlers();
    }
}
RequestHandlers is a property of type Action<Collection<HttpOperationHandler>, ServiceEndpoint, HttpOperationDescription>.

In general, it would be up to the application author to wire up the authentication operation handler I wrote to the appropriate actions. But I wanted to provide a method that helps with that. That’s the AppendAuthorizationRequestHandlers method in there, which is an extension method I wrote.

public static void AppendAuthorizationRequestHandlers(
  this WebApiConfiguration config) {
  var requestHandlers = config.RequestHandlers;
  config.RequestHandlers = (c, e, od) => {
    if (requestHandlers != null) {
      requestHandlers(c, e, od); // Call the original request handlers, if any
    }
    var authorizeAttribute = od.Attributes
      .OfType<RequireAuthorizationAttribute>()
      .FirstOrDefault();
    if (authorizeAttribute != null) {
      c.Add(new AuthOperationHandler(authorizeAttribute));
    }
  };
}
Since I didn’t want to stomp on the existing request handlers, I set the RequestHandlers property to a new action that calls the existing action (if any) and then does my custom registration logic.

I’ll admit, I couldn’t help thinking that if RequestHandlers were an event rather than an action, that sort of logic could be handled for me. ;) Have events fallen out of favor? They do work well to decouple code in this sort of scenario, but I digress.

The interesting part here is that the action’s third parameter, od, is an HttpOperationDescription. This is a description of the operation that includes access to such things as the attributes applied to the method! I simply look to see if the operation has the RequireAuthorizationAttribute applied and if so, I add the AuthOperationHandler I wrote earlier to the operation’s collection of operation handlers.

With this in place, I can now write a service that looks like this:

[ServiceContract]
public class CommentsApi {
    [WebGet(UriTemplate = "")]
    public IQueryable<Comment> Get() {
        return new[] { new Comment {
            Title = "This is neato",
            Body = "Ok, not as neat as I originally thought."
        } }.AsQueryable();
    }

    [WebGet(UriTemplate = "auth"), RequireAuthorization]
    public IQueryable<Comment> GetAuth() {
        return new[] { new Comment {
            Title = "This is secured neato",
            Body = "Ok, a bit neater than I originally thought."
        } }.AsQueryable();
    }
}

And route to the Web API service like so:

public class Global : HttpApplication {
  protected void Application_Start(object sender, EventArgs e) {
    RouteTable.Routes.MapServiceRoute<CommentsApi>("comments",
      new CommentsConfiguration());
  }
}

With this in place, a request for /comments allows anonymous, but a request for /comments/auth requires authentication.

If you’re interested in checking this code out, I pushed it to my CodeHaacks Github repository as a sample. I won’t make this into a NuGet package until it’s been thoroughly vetted by the WCF Web API team because it’s very likely I have no idea what I’m doing. I’d rather one of those folks make a NuGet package for this. :)

And if you’re wondering why I’m writing about Web API, we’re all part of the same larger team now, so I figured it’s good to take a peek at what my friends are up to.

code

I like to live life on the wild side. No, I don’t base jump off of buildings or invest in speculative tranches made up of junk stock derivatives. What I do is attempt to run recurring background tasks within an ASP.NET application.

Writing code is totally just like this - Photo by DVIDSHUB, CC BY 2.0

But before I do anything wild with ASP.NET, I always talk to my colleague, Levi (sadly, no blog). As a developer on the internals of ASP.NET, he knows a huge amount about it, especially the potential pitfalls. He’s also quite the security guru. As you read this sentence, he just guessed your passwords. All of them.

When he got wind of my plan, he let me know it was evil, unsupported by ASP.NET and just might kill a cat. Good thing I’m a dog person. I persisted in my foolhardiness and suggested maybe it’s not evil, just risky. If so, how can I do it as safely as possible? What are the risks?

There are three main risks, one of which I’ll focus on in this blog post.

  1. An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler set up via the Application_Error method. I’ll try and explain why in a follow-up blog post, but this is easy to deal with.
  2. If you run your site in a Web Farm, you could end up with multiple instances of your app that all attempt to run the same task at the same time. A little more challenging to deal with than the first item, but still not too hard. One typical approach is to use a resource common to all the servers, such as the database, as a synchronization mechanism to coordinate tasks (see the sketch after this list).
  3. The AppDomain your site runs in can go down for a number of reasons and take down your background task with it. This could corrupt data if it happens in the middle of your code execution.
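To make the database-as-synchronization-mechanism idea in item 2 concrete, here’s a minimal sketch. The JobLocks table and its schema are my invention for illustration; this is not what NuGet or WebBackgrounder actually does.

using System;
using System.Data.SqlClient;

public class DatabaseJobLock
{
    readonly string _connectionString;

    public DatabaseJobLock(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Tries to claim the named job until the lease expires. Only one server's
    // UPDATE matches the WHERE clause; the others see zero rows affected and
    // skip the work. Assumes a hypothetical JobLocks(Name, LockedUntil) table.
    public bool TryAcquire(string jobName, TimeSpan leaseDuration)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = connection.CreateCommand())
        {
            command.CommandText =
                @"UPDATE JobLocks
                  SET LockedUntil = DATEADD(second, @seconds, GETUTCDATE())
                  WHERE Name = @name AND LockedUntil < GETUTCDATE()";
            command.Parameters.AddWithValue("@name", jobName);
            command.Parameters.AddWithValue("@seconds",
                (int)leaseDuration.TotalSeconds);
            connection.Open();
            return command.ExecuteNonQuery() == 1;
        }
    }
}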

It’s the third risk, the abrupt AppDomain shutdown, that is the focus of this blog post.

Bye Bye App Domain

There are several things that can cause ASP.NET to tear down your AppDomain.

  • When you modify web.config, ASP.NET will recycle the AppDomain, though the w3wp.exe process (the IIS web server process) stays alive.
  • IIS will itself recycle the entire w3wp.exe process every 29 hours. It’ll just outright put a cap in the w3wp.exe process and bring down all of the app domains with it.
  • In a shared hosting environment, many web servers are configured to tear down the application pools after some period of inactivity. For example, if there are no requests to the application within a 20 minute period, it may take down the app domain.

If any of these happen in the middle of your code execution, your application/data could be left in a pretty bad state as it’s shut down without warning.

So why isn’t this a problem for your typical per-request ASP.NET code? When ASP.NET tears down the AppDomain, it will attempt to flush the existing requests and give them time to complete before it takes down the AppDomain. ASP.NET and IIS are considerate to code that they know is running, such as code that runs as part of a request.

Problem is, ASP.NET doesn’t know about work done on a background thread spawned using a timer or similar mechanism. It only knows about work associated with a request.

So tell ASP.NET, “Hey, I’m working here!”

The good news is there’s an easy way to tell ASP.NET about the work you’re doing! In the System.Web.Hosting namespace, there’s an important class, HostingEnvironment. According to the MSDN docs, this class…

Provides application-management functions and application services to a managed application within its application domain

This class has an important static method, RegisterObject. The MSDN description here isn’t super helpful.

Places an object in the list of registered objects for the application.

For us, what this means is that the RegisterObject method tells ASP.NET that, “Hey! Pay attention to this code here!” Important! This method requires full trust!

This method takes in a single object that implements the IRegisteredObject interface. That interface has a single method:

public interface IRegisteredObject {
    void Stop(bool immediate);
}

When ASP.NET tears down the AppDomain, it will first attempt to call Stop method on all registered objects.

In most cases, it’ll call this method twice, once with immediate set to false. This gives your code a bit of time to finish what it is doing. ASP.NET gives all instances of IRegisteredObject a total of 30 seconds to complete their work, not 30 seconds each. After that time span, if there are any registered objects left, it will call them again with immediate set to true. This lets you know it means business and you really need to finish up pronto! I modeled my parenting technique after this method when trying to get my kids ready for school.

When ASP.NET calls into this method, your code needs to prevent this method from returning until your work is done. Levi showed me one easy way to do this by simply using a lock. Once the work is done, the code needs to unregister the object.

For example, here’s a simple generic implementation of IRegisteredObject. In this implementation, I simply ignored the immediate flag and try to prevent the method from returning until the work is done. The intent here is I won’t pass in any work that’ll take too long. Hopefully.

public class JobHost : IRegisteredObject {
    private readonly object _lock = new object();
    private bool _shuttingDown;

    public JobHost() {
        // Tell ASP.NET to pay attention to this object during shutdown.
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate) {
        // Blocks until any in-progress work releases the lock.
        lock (_lock) {
            _shuttingDown = true;
        }
        HostingEnvironment.UnregisterObject(this);
    }

    public void DoWork(Action work) {
        lock (_lock) {
            if (_shuttingDown) {
                return;
            }
            work();
        }
    }
}
I wanted to get the simplest thing possible working. Note, that when ASP.NET is about to shut down the AppDomain, it will attempt to call the Stop method. That method will try to acquire a lock on the _lock instance. The DoWork method also acquires that same lock. That way, when the DoWork method is doing the work you give it (passed in as a lambda) the Stop method has to wait until the work is done before it can acquire the lock. Nifty.

Later on, I plan to make this more sophisticated by using a Task to represent the work rather than an Action. That would let me take advantage of task cancellation instead of the brute force approach with locks.
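For the curious, here’s roughly the shape I have in mind, sketched with .NET 4’s Task and CancellationTokenSource. This is speculative and not what ships in WebBackgrounder.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public class TaskJobHost : IRegisteredObject
{
    readonly CancellationTokenSource _cancellation = new CancellationTokenSource();

    public TaskJobHost()
    {
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        // Rather than blocking on a lock, ask in-flight work
        // to wind down cooperatively.
        _cancellation.Cancel();
        HostingEnvironment.UnregisterObject(this);
    }

    public Task DoWork(Action<CancellationToken> work)
    {
        // Work items observe the token and bail out when the
        // AppDomain is on its way down.
        return Task.Factory.StartNew(() => work(_cancellation.Token),
            _cancellation.Token);
    }
}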

With the JobHost class in place, you can create a timer on Application_Start (I generally use WebActivator to register code that runs on app start) and when it elapses, you call into the DoWork method here. Remember, the timer must be referenced or it could be garbage collected.

Here’s a small example of this:

using System;
using System.Threading;
using WebBackgrounder;

[assembly: WebActivator.PreApplicationStartMethod(
  typeof(SampleAspNetTimer), "Start")]

public static class SampleAspNetTimer {
    private static readonly Timer _timer = new Timer(OnTimerElapsed);
    private static readonly JobHost _jobHost = new JobHost();

    public static void Start() {
        _timer.Change(TimeSpan.Zero, TimeSpan.FromMilliseconds(1000));
    }

    private static void OnTimerElapsed(object sender) {
        _jobHost.DoWork(() => { /* What is it that you do around here */ });
    }
}

This technique can make your background tasks within ASP.NET much more robust. There’s still a chance of problems occurring though. Sometimes, the AppDomain goes down in a more abrupt manner. For example, you might have a blue screen, someone might trip on the plug, or a hard-drive might fail. These catastrophic failures can take down your app in such a way that leaves data in a bad state. But hopefully, these situations occur much less frequently than an AppDomain shutdown.

Many of you might be scratching your head thinking it seems weird to use a web server to perform recurring background tasks. That’s not really what a web server is for. You’re absolutely right. My recommendation is to do one of the following instead:

  • Write a simple console app and schedule it using the Windows Task Scheduler.
  • Write a Windows Service to manage your recurring tasks.
  • Use an Azure worker or something similar.

Given that those are my recommendations, why am I still working on a system for scheduling recurring tasks within ASP.NET that handles web farms and AppDomain shutdowns, which I call WebBackgrounder (NuGet package coming later)?

I mean, besides the fact that I’m thick-headed? Well, for two reasons.

The first is to make development easier. When you get latest from our source code, I just want everything to work. I don’t want you to have to set up a scheduled task, or an Azure worker, or a Windows Service on your development box. A development environment can tolerate the issues I described.

The second reason is for simplicity. If you’re ok with the limitations I mentioned, this approach has one less moving part to worry about when setting up a website. There’s no need to configure an external recurring task. It just works.

But mostly, it’s because I like to live life on the edge.

Technorati Tags: appdomain, web farms, background, threads, tasks

personal, code

Today, October 15, 2011, marks four years of being a Microsoft employee for me. As such, it’s time for a little introspection, but in many ways, Tim Heuer already introspected for me. Much of what he writes echoes my own experience, thus leaving me with less to write about. :)

It’s the Microsoft way, or the highway. Which is conveniently located near Microsoft Way. - Photo by Todd Bishop, CC BY 2.0

Looking back in my archives, I realized I haven’t written a whole lot about what it’s like to work here. I do have a few posts such as…

Regarding the second post, the funny thing is you never stop drinking from the fire hose here. At least I haven’t yet. And while that’s a good thing, it can be wearying at times. I’ve been taking a lot of mini-vacations lately to keep my sanity.

So far, the past four years have been a real blast.

I’ve had a passion for open source for a very long time. When I started, ASP.NET MVC 1.0 was just getting going as a proprietary project. Very few teams at that time released their source code, much less under a proper OSS license. Although it involved sitting in a lot of meetings with lawyers, reviewing lots of dry legal documents, I loved the opportunity to help drive the process to get ASP.NET MVC licensed under the Ms-PL, a liberal OSI certified open source license. Announcing that release was a happy day for me.

Since that time, we’ve shipped four RTM releases of ASP.NET MVC (recall that we released ASP.NET MVC 3 twice), each incorporating more and more third-party open source software, another milestone. As they say, shipping is a feature! And the ASP.NET team, and the MVC team in particular, are all really great people to work with. Hat tip to them all!

In this time, I’ve also had the great pleasure to work on NuGet from its inception. NuGet goes one step further in that it’s not only an open source project under the Apache v2 license, but it accepts contributions. Microsoft contributed NuGet to the Outercurve Foundation (an independent open source foundation not unlike the Apache and Eclipse foundation that recently received its non-profit status!) early in its life allowing it to flourish as an open source project.

Microsoft has a team of employees who are allowed to spend work-time contributing to the NuGet project. All development, issue tracking, and discussion occurs in a public location, the NuGet CodePlex site, with Mercurial (hg) as the source code repository.

The NuGet team is a dedicated group of folks and it’s really a joy to work with them. The community involvement and feedback has been tremendous. I used to tell folks that I wanted to work on open source as my day job. That wish came true. It’s a real joy.

The past four years have also had their fair share of challenges behind the scenes. When you work with the intensely smart folks that I have the pleasure to work with, it’s hard not to feel the effects of Impostor Syndrome, as Hanselman writes. And with everything I work on, I have a keen eye for all the shortcomings and faults in what we produce. I find it hard to tout our releases as much as I could because I always wish we could have done better. Perhaps it’s because I’m a perfectionist, but more likely it’s due to my half-Asian upbringing: I’m forced to find fault in everything.

Despite all that, I do take pride in the great work that the teams I work with have done. Their efforts have been tremendous and they deserve all the credit. I only hope my contributions were indeed contributions and not just overhead.

And I want to thank many of you, who’ve offered us encouragement and constructive criticism via Twitter, your blogs, my blog, our forums, StackOverflow, and so on. All of it is very much appreciated! Without your help, I don’t think I would have done as well in the past year. Don’t be surprised when I come knocking on your door asking for more help. Did I mention already that NuGet accepts contributions?

mvc, code

A long while ago I wrote about the potential dangers of Cross-site Request Forgery attacks, also known as CSRF or XSRF. These exploits are a form of confused deputy attack.

Screen grab from the Police Academy movie. In that post, I covered how ASP.NET MVC includes a set of anti-forgery helpers to help mitigate such exploits. The helpers include an HTML helper meant to be called in the form that renders a hidden input, and an attribute applied to the controller action you want to protect. These helpers work great in the typical scenario of an HTML form posting to an action method.

But what if your HTML page posts JSON data to an action instead of posting a form? How do these helpers help in that case?

You can try to apply the ValidateAntiForgeryTokenAttribute attribute to an action method, but it will fail every time if you try to post JSON encoded data to the action method. On one hand, the most secure action possible is one that rejects every request. On the other hand, that’s a lousy user experience.

The problem lies in the fact that under the hood, deep within the call stack, the attribute peeks into the Request.Form collection to grab the anti-forgery token. But when you post JSON encoded data, there is no form collection to speak of. We hope to fix this at some point with a more flexible set of anti-forgery helpers. But for the moment, we’re stuck with this.

This problem became evident to me after I wrote a proof-of-concept library to call ASP.NET MVC action methods from JavaScript in an easy manner. The JavaScript helpers I wrote post JSON to action methods in order to call the actions. So I set out to fix this in my CodeHaacks project.

There are two parts we need to tackle this problem. The first part is on the client-side where we need to generate and send the token to the server. To generate the token, I just use the existing @Html.AntiForgeryToken helper in the view. A little bit of jQuery code grabs the value of that token.

var token = $('input[name="__RequestVerificationToken"]').val();

That’s easy. Now that I have the value, I just need a way to post it to the server. I choose to add it to the request headers. In vanilla jQuery (mmmm, vanilla), that looks similar to:

var headers = {};
// other headers omitted
headers['__RequestVerificationToken'] = token;

$.ajax({
  cache: false,
  dataType: 'json',
  type: 'POST',
  headers: headers,
  data: window.JSON.stringify(obj),
  contentType: 'application/json; charset=utf-8',
  url: '/some-url'
});

Ok, so far so good. This will generate the token in the browser and send it to the server, but we have a problem here. As I mentioned earlier, the existing attribute which validates the token on the server won’t look in the header. It only looks in the form collection. Uh oh! It’s Haacking time! I’ll write a custom attribute called ValidateJsonAntiForgeryTokenAttribute.

This attribute will call into the underlying anti-forgery code, but we need to get around that form collection issue I mentioned earlier.

Peeking into Reflector, I looked at the implementation of the regular attribute and followed its call stack. It took me deep into the bowels of the System.Web.WebPages.dll assembly, which contains a method with the following signature that does the actual work to validate the token:

public void Validate(HttpContextBase context, string salt);

Score! The method takes in an instance of type HttpContextBase, which is an abstract base class. That means we can intercept that call and provide our own instance of HttpContextBase to validate the anti-forgery token. Yes, I provide a forgery of the request to enable the anti-forgery helper to work. Ironic, eh?

Here’s the custom implementation of the HttpContextBase class. I wrote it as a private inner class to the attribute.

private class JsonAntiForgeryHttpContextWrapper : HttpContextWrapper {
  readonly HttpRequestBase _request;

  public JsonAntiForgeryHttpContextWrapper(HttpContext httpContext)
    : base(httpContext) {
    _request = new JsonAntiForgeryHttpRequestWrapper(httpContext.Request);
  }

  public override HttpRequestBase Request {
    get {
      return _request;
    }
  }
}

private class JsonAntiForgeryHttpRequestWrapper : HttpRequestWrapper {
  readonly NameValueCollection _form;

  public JsonAntiForgeryHttpRequestWrapper(HttpRequest request)
    : base(request) {
    _form = new NameValueCollection(request.Form);
    if (request.Headers["__RequestVerificationToken"] != null) {
      _form["__RequestVerificationToken"]
        = request.Headers["__RequestVerificationToken"];
    }
  }

  public override NameValueCollection Form {
    get {
      return _form;
    }
  }
}
In general, you can get into all sorts of trouble when you hack around with the http context. But in this case, I’ve implemented a wrapper for a tightly constrained scenario that defers to default implementation for most things. The only thing I override is the request form. As you can see, I copy the form into a new NameValueCollection instance and if there is a request verification token in the header, I copy that value in the form too. I then use this modified collection as the Form collection.

Simple, but effective.

The custom attribute follows the basic implementation pattern of the regular attribute, but uses these new wrappers.

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class,
    AllowMultiple = false, Inherited = true)]
public class ValidateJsonAntiForgeryTokenAttribute :
    FilterAttribute, IAuthorizationFilter {
  public void OnAuthorization(AuthorizationContext filterContext) {
    if (filterContext == null) {
      throw new ArgumentNullException("filterContext");
    }

    var httpContext = new JsonAntiForgeryHttpContextWrapper(HttpContext.Current);
    AntiForgery.Validate(httpContext, Salt ?? string.Empty);
  }

  public string Salt { get; set; }

  // The private context classes go here
}

With that in place, I can now decorate action methods with this new attribute and it will work in both scenarios, whether I post a form or post JSON data. I updated the client script library for calling action methods to accept a second parameter, includeAntiForgeryToken, which causes it to add the anti-forgery token to the headers.

As always, the source code is up on Github with a sample application that demonstrates usage of this technique and the assembly is in NuGet with the package id “MvcHaack.Ajax”.

mvc, code

Go that way instead - Photo by JacobEnos, CC some rights reserved

Update: It looks like ASP.NET 4.5 adds the ability to suppress forms authentication redirect now with the HttpResponse.SuppressFormsAuthenticationRedirect property.

In an ASP.NET web application, it’s very common to write some jQuery code that makes an HTTP request to some URL (a lightweight service) in order to retrieve some data. That URL might be handled by an ASP.NET MVC controller action, a Web API operation, or even an ASP.NET Web Page or Web Form. If it can return curly brackets, it can respond to a JavaScript request for JSON.

One pain point when hosting lightweight HTTP services on ASP.NET is making a request to a URL that requires authentication. Let’s look at a snippet of jQuery to illustrate what I mean. The following code makes a request to /admin/secret/data. Let’s assume that URL points to an ASP.NET MVC action with the AuthorizeAttribute applied, which requires that the request must be authenticated.

$.ajax({
    url: '/admin/secret/data',
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    statusCode: {
        200: function (data) {
            alert('200: Authenticated');
            // Bind the JSON data to the UI
        },
        401: function (data) {
            alert('401: Unauthenticated');
            // Handle the 401 error here.
        }
    }
});
If the user is not logged in when this code executes, you would expect that the 401 status code function would get called. But if forms authentication (often called FormsAuth for short) is configured, that isn’t what actually happens. Instead, you get a 200 with the contents of the login page (or a 404 if you don’t have a login page). What gives?

If you crack open Fiddler, it’s easy to see the problem. Instead of the request returning an HTTP 401 Unauthorized status code, it instead returns a 302 pointing to a login page. This causes jQuery (well actually, the XmlHttpRequest object) to automatically follow the redirect and issue another request to the login page. The login page handles this new request and return its contents with a 200 status code. This is not the desired result as the code expects JSON data to be returned in response to a 200 code, not HTML for the login page.

This “helpful” behavior when requesting a URL that requires authentication is a consequence of having the FormsAuthenticationModule enabled, which is the default in most ASP.NET applications. Under the hood, the FormsAuthenticationModule hooks into the request pipeline and changes any response with a 401 status code into a redirect to the login page.

Possible Solutions

I’m going to cover a few possible solutions I’ve seen around the web and then present the one that I prefer. It’s not that these other solutions are wrong, but they are only correct in some cases.

Remove Forms Authentication

If you don’t need FormsAuth, one simple solution is to remove the forms authentication module as this post suggests. This is a great solution if your sole purpose is to use ASP.NET to host a Web API service and you don’t need forms authentication. But it’s not a great solution if your app is both a web application and a web service.

Register an HttpModule to convert Redirects to 401

This blog post suggests registering an HTTP Module that converts any 302 response to a 401. There are two problems with this approach. The first is that it breaks the case where the redirect is legitimate and not the result of FormsAuth. The second is that it requires manual configuration of an HttpModule.

Install-Package MembershipService.Mvc

My colleague, Steve Sanderson, has an even better approach with his MembershipService.Mvc and MembershipService.WebForms NuGet packages. These packages expose ASP.NET Membership as a service that you can call from multiple devices.

For example, if you want your Windows Phone application to use an ASP.NET website’s membership system to authenticate users of the application, you’d use his package. He provides the MembershipClient.WP7 and MembershipClient.JavaScript packages for writing clients that call into these services.

These packages deserve a blog post in their own right, but I’m going to just focus on the DoNotRedirectLoginModule he wrote. His module takes a similar approach to the previous one I mentioned, but he checks for a special value in HttpContext.Items, a dictionary for storing data related to the current request, before reverting a redirect back to a 401.

To prevent a FormsAuth redirect, an action method (or ASP.NET page or Web API operation) would simply call the helpful method DoNotRedirectToLoginModule.ApplyForRequest. This sets the special token in HttpContext.Items and the module will rewrite a 302 that’s redirecting to the login page back to a 401.

My Solution

Steve’s solution is a very good one. But I’m particularly lazy and didn’t want to have to call that method on every action when I’m writing an Ajax heavy application. So what I did was write a module that hooks in two events of the request.

The first event, PostReleaseRequestState, occurs after authentication, but before the FormsAuthenticationModule converts the status to a 302. In the event handler for this event, I check to see if the request is an Ajax request by checking that the X-Requested-With request header is “XMLHttpRequest”.

If so, I store away a token in the HttpContext.Items like Steve does. Then in the EndRequest event handler, I check for that token, just like Steve does. Inspired by Steve’s approach, I added a method to allow explicitly opting into this behavior, SuppressAuthenticationRedirect.

Here’s the code for this module. Warning: Consider this “proof-of-concept” code. I haven’t tested this thoroughly in a wide range of environments.

public class SuppressFormsAuthenticationRedirectModule : IHttpModule {
  private static readonly object SuppressAuthenticationKey = new Object();

  public static void SuppressAuthenticationRedirect(HttpContext context) {
    context.Items[SuppressAuthenticationKey] = true;
  }

  public static void SuppressAuthenticationRedirect(HttpContextBase context) {
    context.Items[SuppressAuthenticationKey] = true;
  }

  public void Init(HttpApplication context) {
    context.PostReleaseRequestState += OnPostReleaseRequestState;
    context.EndRequest += OnEndRequest;
  }

  private void OnPostReleaseRequestState(object source, EventArgs args) {
    var context = (HttpApplication)source;
    var response = context.Response;
    var request = context.Request;

    if (response.StatusCode == 401 && request.Headers["X-Requested-With"] ==
      "XMLHttpRequest") {
      SuppressAuthenticationRedirect(context.Context);
    }
  }

  private void OnEndRequest(object source, EventArgs args) {
    var context = (HttpApplication)source;
    var response = context.Response;

    if (context.Context.Items.Contains(SuppressAuthenticationKey)) {
      response.TrySkipIisCustomErrors = true;
      response.StatusCode = 401;
      response.RedirectLocation = null;
    }
  }

  public void Dispose() {
  }

  public static void Register() {
    // Registered via Microsoft.Web.Infrastructure, which the package wires up.
    DynamicModuleUtility.RegisterModule(
      typeof(SuppressFormsAuthenticationRedirectModule));
  }
}
There’s a package for that

Warning: The following is proof-of-concept code I’ve written. I haven’t tested it thoroughly in a production environment and I don’t provide any warranties or promises that it works and won’t kill your favorite pet. You’ve been warned.

Naturally, I’ve written a NuGet package for this. Simply install the package and all Ajax requests that set that header (if you’re using jQuery, you’re all set) will not be redirected in the case of a 401.

Install-Package AspNetHaack

Note that the package adds a source code file in App_Start that wires up the http module that suppresses redirect. If you want to turn off this behavior temporarily, you can comment out that file and you’ll be back to the old behavior.

The source code for this is in Github as part of my broader CodeHaacks project.

Why don’t you just fix the FormsAuthenticationModule?

We realize this is a deficiency with the forms authentication module and we’re looking into hopefully fixing this for the next version of the Framework.

Update: As I stated at the beginning, a new property added in ASP.NET 4.5 supports doing this.

Tags: aspnetmvc, formsauth, membership

mvc, nuget

NOTE: This blog post covers features in a pre-release product, the ASP.NET MVC 4 Developer Preview. You’ll see we call out those two words a lot to cover our butt. The specifics about the feature will change and this post will become out-dated. You’ve been warned.

All good recipes call for a significant amount of garlic.


Last week I spoke at the //BUILD conference on building mobile web applications with ASP.NET MVC 4. In the talk, I demonstrated a recipe I wrote that automates the process to create mobile versions of desktop views.


Recipes are a great way to show off your lack of UI design skills like me!

In this blog post, I’ll walk through the basic steps to write a recipe. But first, what exactly is a recipe?

Obviously I’m not talking about the steps it takes to make a meatloaf surprise. In the roadmap, I described a recipe as:

An ASP.NET MVC 4 recipe is a dialog box delivered via NuGet with associated user interface (UI) and code used to automate a specific task.

If you’re familiar with NuGet, you know that a NuGet package can add new Powershell commands to the Package Manager Console. You can think of a recipe as a GUI equivalent to the commands that a package can add.

It fits so well with NuGet that we plan to add recipes as a feature of NuGet (probably with a different name if we can think of a better one) so that it’s not limited to ASP.NET MVC. We did the same thing with pre-installed NuGet packages for project templates which started off as a feature of ASP.NET MVC too. This will allow developers to write recipes for other project types such as Web Forms, Windows Phone, and later on, Windows 8 applications.

Getting Started

Recipes are assemblies that are dynamically loaded into Visual Studio by the Managed Extensibility Framework, otherwise known as MEF. MEF provides a plugin model for applications and is one of the primary ways to extend Visual Studio.

The first step is to create a class library project which compiles our recipe assembly. The set of steps we’ll follow to write a recipe are:

  1. Create a class library
  2. Reference the assembly that contains the recipe framework types. At this time, the assembly is Microsoft.VisualStudio.Web.Mvc.Extensibility.1.0.dll, but this may change in the future.
  3. Write a class that implements the IRecipe interface or one of the interfaces that derive from IRecipe such as IFolderRecipe or IFileRecipe. These interfaces are in the Microsoft.VisualStudio.Web.Mvc.Extensibility.Recipes namespace. The Developer Preview only supports the IFolderRecipe interface today. These are recipes that are launched from the context of a folder. In a later preview, we’ll implement IFileRecipe which can be launched in the context of a file.
  4. Implement the logic to show your recipe’s dialog. This could be a Windows Forms dialog or a Windows Presentation Foundation (WPF) dialog.
  5. Add the MEF ExportAttribute to the class to export the IRecipe interface.
  6. Package up the whole thing in a NuGet package and make sure the assembly ends up in the recipes folder of the package, rather than the usual lib folder.

The preceding list of steps itself looks a lot like a recipe, doesn’t it? It might be natural to expect that I wrote a recipe to automate those steps. Sorry, no. But what I did do to make it easier to build a recipe was write a NuGet package.

Why didn’t I write a recipe to write a recipe (inception!)? Recipes add a command intended to be run more than once during the life of a project. But that’s not the case here, as setting up the project as a recipe is a one-time operation. In this particular case, a NuGet package is sufficient because it doesn’t make sense to convert a class library project into a recipe over and over again.

That’s the logic I use to determine whether I should write a recipe as opposed to a regular NuGet package. If it’s something you’ll do multiple times in a project, it may be a candidate for a recipe.

A package to create a recipe

To help folks get started building recipes, I wrote a NuGet package, AspNetMvc4.RecipeSdk. And as I did in my //BUILD session, I’m publishing this live right now! Install this into an empty Class Library project to set up everything you need to write your first recipe.

The following screenshot shows an example of a class library project after installing the recipe SDK package.


Notice that it adds a reference to the Microsoft.VisualStudio.Web.Mvc.Extensibility.1.0.dll assembly and adds a MyRecipe.cs file and a MyRecipe.nuspec file. It also adds a reference to System.Windows.Forms.

Feel free to rename the files it added appropriately. Be sure to edit the MyRecipe.nuspec file with metadata appropriate to your project.

The interesting stuff happens within MyRecipe.cs. The following shows the default implementation added by the package.

using System;
using System.ComponentModel.Composition;
using System.Drawing;
using Microsoft.VisualStudio.Web.Mvc.Extensibility;
using Microsoft.VisualStudio.Web.Mvc.Extensibility.Recipes;

namespace CoolRecipe {
    [Export(typeof(IRecipe))]
    public class MyRecipe : IFolderRecipe {
        public bool Execute(ProjectFolder folder) {
            throw new NotImplementedException();
        }

        public bool IsValidTarget(ProjectFolder folder) {
            throw new NotImplementedException();
        }

        public string Description {
            get { throw new NotImplementedException(); }
        }

        public Icon Icon {
            get { throw new NotImplementedException(); }
        }

        public string Name {
            get { throw new NotImplementedException(); }
        }
    }
}

Most of these properties are self-explanatory. They provide metadata for a recipe that shows up when a user launches the Add recipe dialog.
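For example, a filled-in version might look something like this (the values here are purely illustrative):

public string Name {
    get { return "View Mobilizer"; }
}

public string Description {
    get { return "Creates mobile versions of the selected views."; }
}

public Icon Icon {
    // Loads an .ico file from disk; a real recipe would likely
    // pull this from an embedded resource instead.
    get { return new Icon("Recipe.ico"); }
}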

The two most interesting methods are IsValidTarget and Execute. The first method determines whether the folder that the recipe is launched from is valid for that recipe. This allows you to filter recipes. For example, suppose your recipe only makes sense when launched from a view folder. You can implement that method like so:

public bool IsValidTarget(ProjectFolder folder) {
    return folder.IsMvcViewsFolderOrDescendent();
}

The IsMvcViewsFolderOrDescendent method is an extension method on the ProjectFolder type in the Microsoft.VisualStudio.Web.Mvc.Extensibility namespace.

The general approach we took was to keep the ProjectFolder interface generic and then add extension methods to layer on behavior specific to ASP.NET MVC. This provides a nice simple façade to the Visual Studio Design Time Environment (or DTE). If you’ve ever tried to write code against the DTE, you’ll appreciate this.

In this particular case, I recommend that you make the method always return true for now so your recipe shows up for any folder.

The other important method to implement is Execute. This is where the meat of your recipe lives. The basic pattern here is to create a Windows Form (or WPF Form) to display to the user. That form might contain all the interactions that a user needs, or it might gather data from the user and then perform an action. Here’s the code I used in my MVC 4 Mobilizer recipe.

public bool Execute(ProjectFolder folder) {
    var model = new ViewMobilizerModel(folder);
    var form = new ViewMobilizerForm(model);

    var result = form.ShowDialog();
    if (result == DialogResult.OK) {
        // DO STUFF with info gathered from the form
    }
    return true;
}

I create a form, show it as a dialog, and when it returns, I do stuff with the information gathered from the form. It’s a pretty simple pattern.
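The form itself is ordinary Windows Forms code. If you’ve never built one by hand, a bare-bones dialog that gathers a single value might look like this; it’s illustrative only, not the actual Mobilizer form, and assumes the usual using directive for System.Windows.Forms:

public class PickNameForm : Form {
    private readonly TextBox nameBox =
        new TextBox { Dock = DockStyle.Top };

    public PickNameForm() {
        Text = "Pick a name";
        var ok = new Button {
            Text = "OK",
            DialogResult = DialogResult.OK,
            Dock = DockStyle.Bottom
        };
        Controls.Add(nameBox);
        Controls.Add(ok);
        AcceptButton = ok;
    }

    // Value the recipe reads back after ShowDialog() returns OK.
    public string ChosenName {
        get { return nameBox.Text; }
    }
}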

Packaging it up

Packaging this up as a NuGet package is very simple. I used NuGet.exe to run the following command:

nuget pack MyRecipe.nuspec

If it were any easier, it’d be illegal! I can now run the nuget push command to upload the recipe package and make it available to the world. In fact, I did just that live during my presentation at BUILD.
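Assuming the pack step produced MyRecipe.1.0.nupkg (the file name here is illustrative) and you’ve already configured your API key with nuget setApiKey, the push is just:

nuget push MyRecipe.1.0.nupkg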

Using a recipe

To install the recipe, right click on the solution node in Solution Explorer and select Manage NuGet Packages.

Find the package and click the Install button. This installs the NuGet package with the recipe into the solution.

To run the recipe, right click on a folder and select Add > Run Recipe.

Right now you’re probably thinking that’s an odd menu option to launch a recipe. And now you’re thinking, wow, did I just read your mind? Yes, we agree that this is an odd menu option. Did I mention this was a Developer Preview? We plan to change how recipes are launched. In fact, we plan to change a lot between now and the next preview. At the moment, we think moving recipes to a top level menu makes more sense.

The Run Recipe menu option displays the recipe dialog with a list of installed recipes that are applicable in the current context.


As you can see, I only have one recipe in my solution. Note that a recipe can control its own icon.

Select the recipe and click the OK button to launch it. This then calls the recipe’s Execute method which displays the UI you implemented.

Get the source!

Oh, and before I forget, the source code for the Mobilizer recipe is available on GitHub as part of my CodeHaacks project!

mvc comments suggest edit

Today, during his //BUILD keynote, Scott Guthrie announced the availability of ASP.NET MVC 4 Developer preview. Note those words, developer preview. This is not even a Beta release. But there sure is a lot of cool stuff inside.

One great thing about this release is that the runtime libraries (our assemblies) as well as our JavaScript libraries are available as NuGet packages. So if you write packages that depend on the ASP.NET MVC 4 runtime, you can have them depend on our packages.

Also included in this release is NuGet 1.5, which was released just recently. If you already have NuGet 1.5 installed, you may notice there’s a new update available. This new version includes support for the Visual Studio 11 Developer Preview; there are no other changes in it.

I’m also giving a couple of talks at BUILD that cover some of the features within ASP.NET MVC 4; you’ll be able to watch them online. To find out more about the release, visit our ASP.NET MVC 4 information page.

Install it

You can install it via the Web Platform installer:

Or if you prefer to download the installers directly, visit the download details page.

We also published an ASP.NET MVC 3 installer for Visual Studio 11 Developer Preview if you’d like to try that out.

Closing Thoughts

I’m excited about this release and will be interested to hear your feedback. I also want to recognize the heroic efforts of the ASP.NET MVC team (as well as the NuGet and ASP.NET Web Pages teams) to get this release ready in time with all the features it contains. I’m privileged to work with such great folks. :)

Tags: aspnetmvc, preview, nuget, mvc comments suggest edit

If you’re at the BUILD conference in Anaheim, I’ll be speaking in two sessions on Thursday.

Progressively enable the mobile web with ASP.NET MVC 4, HTML5, and jQuery Mobile (Thursday, 9:00 AM)

There are over a billion mobile devices with rich Web capabilities, yet many Websites look terrible on such devices, or worse, fail to work at all. As mobile devices become the primary way that most people access the Web, having a site that fails to deliver a rich experience on the Web using HTML5, JavaScript and jQuery Mobile is missing out on a huge opportunity. In this session, learn how ASP.NET MVC 4 leverages these next generation technologies enabling developers to build a single solution that targets multiple platforms and form factors such as mobile, tablet, and desktop devices.

UPDATE: Unfortunately, we had to cancel the second talk due to a family illness that required Damian to go home early. Damian and I plan to record the session later and post it on Channel 9.

The second talk was to be a joint talk with Damian Edwards:

Building IIS and ASP.NET apps with the power of async (Thursday, 2:30 PM)

It’s well established from both theory and practice that Web sites and Web services achieve scale through asynchrony. If a dedicated thread is required per client connection to a server, scalability becomes limited by the number of threads the server system can support, which is typically far fewer than business requirements demand. Unfortunately, it’s also been difficult historically for developers to write asynchronous apps, due to the myriad of callbacks that have been necessary to program asynchrony successfully. Thus, businesses scale by investing in many more machines rather than by making better use of the ones they already have or the few they’re paying for use of in the cloud. All that changes with the next release of Visual Studio and .NET. New features in managed languages make writing asynchronous code as simple as writing synchronous code, thereby enabling both developer productivity and good return on investments. In this code-heavy session, learn how you can be the hero of your organization, building efficient and scalable server apps that best utilize your company’s resources.

If you’re here, I hope you can make it. I’ll be giving out NuGet stickers at my sessions. :)

personal comments suggest edit

I had a dry run today for an upcoming presentation, and it did not go quite as well as I would have liked, though I completely expected that since I was unprepared. The good news is it was a dry run and not the real thing, so I have plenty of time to adjust.

Even so, there’s one thing I did wrong that I should have known better than to do. In my crazy, sleep-deprived, work-frazzled state, I broke the cardinal rule of a dry run – treat it like the real thing!

(Image: Kitty Hawk by gilderm from sxc.hu)

For example, I have a little pre-flight checklist for every talk I give. I just place it in a file named preflight.txt in the root of each presentation folder and adapt it for each presentation. Even so, there is a core set of items that never really changes from one talk to the next. These are items that almost always apply, and many are tips learned from other great presenters such as Scott Hanselman and Brad Wilson.

I thought this might be useful for others who find themselves in a position where they are giving presentations. Perhaps some of you will add to my checklist. I divided it into several categories. The last section isn’t actually a checklist, but a reminder of important keyboard shortcuts.


  • Set Notepad default font size to 16
  • Hide Desktop Icons
  • Close unnecessary toolbar/tray icons
  • Close all unnecessary windows including Solution Explorer (use CTRL ALT L to show it)

Visual Studio

  • Launch Visual Studio
  • Set font size to 16
  • Drag necessary files onto the desktop
  • Start and minimize magnifier
  • Prepare snippets (if you use them)
  • Disable all unnecessary VS extensions
  • Or better yet, set up a DEMO instance of Visual Studio (thanks Mike Minutillo!)


  • Reset all demos. That might mean clearing cookies, clearing browser forms saved data, etc.
  • Open the PowerPoint deck to the right spot
  • Consolidate your demo scripts into a single document per talk
  • Print out notes (and don’t accidentally throw them away like I did)
  • Relax

Important Keyboard shortcuts

  • Windows + to zoom in one level (and start magnifier)
  • Windows - to zoom out one level
  • Windows ESC to zoom fully out
  • CTRL . smart tag expansion
  • CTRL , Navigate to
  • SHIFT ALT K to display Solution Explorer (I remapped this on Brad’s advice because CTRL ALT L conflicts with Windows magnifier)
  • CTRL + and CTRL - change font size in NotePad2

I think this checklist nicely complements my presentation tips I learned from my (many) mistakes blog post. Did I forget an obvious preflight option? Do tell!

nuget, code comments suggest edit

UPDATE: We found an issue with 1.5 when running behind some proxies that caused an “Arithmetic operation resulted in an overflow” exception message and another issue with signed PS1 scripts. We’ve now posted an update (NuGet 1.5.20902.9023) that fixes the issues.

I’m happy to announce the release of NuGet 1.5 just in time to make sure our roadmap isn’t a liar. I won’t bore you by repeating the details of the release, but instead direct you to the NuGet 1.5 release notes.

If you are running a private NuGet.Server repository, you’ll need to update that repository to the latest version of NuGet.Server before connecting to it with NuGet 1.5.
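If your repository is the NuGet.Server package hosted in a web project, that update is a single command in the Package Manager Console:

Update-Package NuGet.Server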

I’ve updated our roadmap to reflect what we’re thinking about next. The next release is going to focus more on continuous integration scenarios that we’ve heard from customers, pre-release “beta” packages and multiple UI improvements such as allowing folks to disable package sources.

Also, on occasion, the NuGet feature team hangs out in a home-grown, IRC-style, web-based chat application written by David Fowler in his spare time as a test app for SignalR. We’re contemplating making it a more regular thing if possible.

Tags: nuget, oss, open source, package manager

mvc, code comments suggest edit

In a recent blog post, I wrote a controller inspector to demonstrate Controller and Action Descriptors. In this blog post, I apply that knowledge to build something more useful.

One pain point when you write Ajax heavy applications using ASP.NET MVC is managing the URLs that Routing generates on the server. These URLs aren’t accessible from code in a static JavaScript file.

There are techniques to mitigate this:

  1. Generate the URLs in the view and pass them into the JavaScript API. This approach has the drawback that it isn’t unobtrusive and requires some script in the view.
  2. If you prefer the unobtrusive approach, embed the URLs in the HTML in a logical and semantic manner and the script can read them from the DOM.

Both approaches get the job done, but they start to break down when you have a list. For example, suppose I have a page that lists comic books retrieved from the server, each with a link to edit the comic book. Do I then need to generate a URL for each comic on the server and pass it into the script?

That’s not necessarily a bad idea. After all, isn’t that how Hypertext is supposed to work? When you render a set of resources, shouldn’t the response include a navigation URL for each resource?

On the other hand, you might prefer your services to return just comic books and infer the URLs by convention.

How does MvcHaack.Ajax help?

Thinking about this problem led me to build up a quick proof-of-concept prototype based on something David Fowler showed me a long time ago.

The library provides a base controller class, I tentatively named JsonController (I could extend it to support other formats, but I wanted to keep this prototype focused on one common scenario). This class sets up a custom action invoker which does a lot of the work.

With this library in place, a <script> reference pointing to the controller itself generates a jQuery based JavaScript API with methods for calling controller actions.

This API enables passing JSON objects from the client to the server, taking advantage of ASP.NET MVC’s argument model binding.

Perhaps an illustration is in order.

Let’s see some codez!

The first step is to write a controller. I’ll start simple and step it up a notch later.

The controller has a single action method that returns an enumeration of anonymous objects. Since I’m inheriting from JsonController, I don’t need to specify an action result return type. I could have returned real objects too, but for the sake of simplicity, I wanted to start here.

public class ComicsController : JsonController {
  public IEnumerable List() {
    return new[] {
      new {Id = 1, Title = "Groo"},
      new {Id = 2, Title = "Batman"},
      new {Id = 3, Title = "Spiderman"}
    };
  }
}

The next step is to make sure I have a route to the controller, and not to the controller’s action. The special invoker I wrote handles action method selection. This prototype lets you use a regular route, but the JsonRoute ensures correctness.

public static void RegisterRoutes(RouteCollection routes) {
  // ... other routes
  routes.Add(new JsonRoute("json/{controller}"));
  // ... other routes ...
}

As a reminder, this second step with the JsonRoute is not required!

With this in place, I can add a script reference to the controller from an HTML page and call methods on it from JavaScript. Let’s do that and display each comic book.

First, I’ll write the HTML markup.

<script src="/json/comics?json"></script>
<script src="/scripts/comicsdemo.js"></script>
<ul id="comics"></ul>

The first script tag references an interesting URL, /json/comics?json. That URL points to the controller (not an action of the controller), but passes in a query string value. This value indicates that the controller descriptor should short-circuit the request and generate a JavaScript API with methods to call each action of the controller, using the same technique I wrote about before.

Here’s an example of the generated script. It’s very short. In fact, most of it is pretty static. The generated parts are the array of actions passed to the $.each block and the URL.

if (typeof $mvc === 'undefined') {
    $mvc = {};
}
$mvc.Comics = [];
$.each(["List","Create","Edit","Delete"], function() {
    var action = this;
    $mvc.Comics[action] = function(obj) {
        return $.ajax({
            cache: false,
            dataType: 'json',
            type: 'POST',
            headers: {'x-mvc-action': action},
            data: JSON.stringify(obj),
            contentType: 'application/json; charset=utf-8',
            url: '/json/comics?invoke&action=' + action
        });
    };
});

For those of you big into REST, you’re probably groaning right now with the RPC-ness of this API. It wouldn’t be hard to extend this prototype to take a more RESTful approach. For now, I stuck with this because it more closely matches the conceptual model for ASP.NET MVC out of the box.

Reference the script, and I can now call action methods on the controller from JavaScript. For example, in the following code listing, I call the List action of the ComicsController and append the results to an unordered list. Since I didn’t need to mix client and server code to write this script, I can place it in a static script file, comicsdemo.js.

$(function() {
    $mvc.Comics.List().success(function(data) {
        $.each(data, function() {
            $('#comics').append('<li>' + this.Title + '</li>');
        });
    });
});


One more thing

It’s easy to call a parameter-less action method, but what about an action that takes in a type? Not a problem. To demonstrate, I’ll create a type on the server first.

public class ComicBook {
    public int Id { get; set; }
    public string Title { get; set; }
    public int Issue { get; set; }
}

Great! Now let’s add an action method that accepts a ComicBook as an action method parameter. For demonstration purposes, the method just returns the comic along with a message. There’s no need to wrap the return value in a JsonResult; the invoker serializes it to JSON for us.

public object Save(ComicBook book) {
    return new { message = "Saved!", comic = book };
}

I can now call that action method from JavaScript and pass in a comic book. I just need to pass in an anonymous JavaScript object with the same shape as a ComicBook. For example:

$mvc.Comics.Save({ Title: 'Adventurers', Issue: 123 })
  .success(function(data) {
      alert(data.message + ' Comic: ' + data.comic.Title);
  });

The code results in an alert popping up, which proves I posted a comic book to the server from JavaScript.

Message from webpage

Get the codez!

Ok, enough talk! If you want to try this out, I have a live demo here. One of the demos shows how this can nicely fit in with KnockoutJS.

If you want to try the code yourself, it’s available in NuGet under the ID MvcHaack.Ajax.
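In other words, from the Package Manager Console:

Install-Package MvcHaack.Ajax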

The source code is up on GitHub. Take a look and let me know what you think. Should we put something like this in ASP.NET MVC 4?

mvc, code comments suggest edit

EDITOR’S NOTE: Microsoft has an amazing intern program. For a summer, these bright college students work with a feature crew getting real work done, all the while attending cool events nearly every week that, frankly, make the rest of us jealous! Just look at some of the perks listed in this news article!

This summer, the ASP.NET MVC team is hosting an intern, Stephen Halter, who, while very smart, doesn’t have a blog of his own (booo! hiss!). Being the nice guy that I am (and also being amenable to bribes), I’m letting him guest author a post on my blog (first guest post ever!) to talk about the cool project he’s been doing.

Editor Update: A lot of folks are wondering why we built our own grid. The plan is to build this on top of the jQuery UI Grid when it’s complete. It’s not done yet, so we built a scaled-down implementation to allow us to test out the interactions with ASP.NET MVC controllers and make sure everything is in place. It’s mostly T4 files that extend our existing Add Controller Scaffolding feature. You can tweak the T4 to build any kind of grid you want.

Hello everyone – my name is Stephen Halter, and I am a summer intern on the ASP.NET MVC team who is excited to show you the new Ajax grid scaffolder that I have been working on for ASP.NET MVC 4. Phil has graciously allowed me to use his blog to get the word out about my project.

We want to get some feedback before releasing the scaffolder as part of the MVC Framework, and we figured what better way to get feedback than to release a preview of the scaffolder as a NuGet package?

To install the package, search for “AjaxGridScaffolder” in the NuGet Package Manager dialog or just enter the following command

PM> Install-Package AjaxGridScaffolder

in the Package Manager Console to try it out.

The NuGet package, once installed, registers itself to the Add Controller dialog as a ScaffolderProvider named “Ajax Grid Controller.” When run, the scaffolder generates a controller and corresponding views for an Ajax grid representation of the selected model and data context class.

The Ajax grid controller scaffolder has many similarities to the current “Controller with read/write actions and views, using Entity Framework”, but it provides pagination, reordering and inline editing of the content using Ajax.

Add > Controller menu

Template Selection - Ajax Grid

Currently, the Ajax Grid Scaffolder only generates C# code, but both Razor and ASPX views are supported. We’ll add VB.NET support later. If you selected “None” from the Views drop down in the Add Controller dialog, only the controller will be generated.

The generated controller has 8 actions: Index, Create (POST/GET), Edit (POST/GET), Delete (POST), GridData, and RowData. Most of these actions probably sound familiar, but there are a few that might not be obvious. The GET versions of Create and Edit provide editing widgets to the client using HtmlHelper.EditorFor, or HtmlHelper.DropDownList if the property is a foreign key. The GridData action renders a partial view of a range of rows, while RowData renders a partial view of a specific row given its primary key. These *Data actions aren’t very verby (editor’s note: “verby?” Seriously?) and improved naming suggestions are welcome.
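To make the shape of those two data actions concrete, here’s a rough sketch. This is my own approximation, assuming a hypothetical Entity Framework context field named db, a ComicBook model, and a fixed page size; the generated code is more complete than this:

private const int PageSize = 10;

public ActionResult GridData(int page = 0) {
    // Render one page worth of rows as an HTML fragment.
    var rows = db.ComicBooks
                 .OrderBy(c => c.Id)
                 .Skip(page * PageSize)
                 .Take(PageSize)
                 .ToList();
    return PartialView("GridData", rows);
}

public ActionResult RowData(int id) {
    // Render a single row, e.g. after an inline edit completes.
    var row = db.ComicBooks.Find(id);
    return PartialView("GridData", new[] { row });
}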

Solution explorer view of the project

The scaffolder also provides three views: Index, GridData, and Edit. The Index view includes the JavaScript and a small amount of CSS required to make the Ajax grid usable. Because of all the possible ways a master or layout page can be set up, it isn’t possible to guarantee that the JavaScript and CSS end up in the <head> element of the document. However, we are looking at possible fixes for the MVC 4 release.


The GridData and Edit views are partial views that are rendered when retrieving non-editable and editable rows respectively via XMLHttpRequest instances (XHRs) using jQuery. Sending HTML snippets in response to XHRs (as opposed to JSON for example) makes using DisplayFor, EditorFor and ValidationFor more natural and progressive enhancement simpler.

With or without JavaScript, the Index view provides pagination and reordering by column. With JavaScript enabled, rows can be created/edited/deleted all inside the grid without ever leaving the index view.

Tell us what your thoughts are on the Ajax Grid Scaffolder. Is it important to you that we support editing without JavaScript? Do you want the back button to take you to your previous view of the grid? Do you have any suggestions on how to clean up the generated code? Did you encounter any bugs/issues? If so, we want to know.

Tags: aspnetmvc, ajax, jquery, grid

code comments suggest edit

This is an age old problem and one that’s probably been solved countless times before, but I’m going to write about it anyways.

Say you’re writing code like this:

<p>You have the following themes:</p>
<ul>
@foreach(var theme in Model) {
  <li>@theme</li>
}
</ul>

The natural inclination for the lazy developer is to leave well enough alone and stop there. It’s good enough, right? Right?

Sure, it’s fine when the value of Model.Count is zero or larger than one. But when it’s exactly one, the phrase is incorrect English; it should be the singular “You have the following theme”.

I must fight my natural inclination here! On the NuGet team, we have a rallying cry intended to inspire us to strive for better, “Apple Polish!”. We tend to blurt it out at random in meetings. I’m thinking of purchasing each member of the team a WWSJD bracelet (What Would Steve Jobs Do?).

To handle this case, I wrote a simple extension method:

public static string CardinalityLabel(this int count, string singular,
    string plural) {
    return count == 1 ? singular : plural;
}

Notice that I didn’t try automatic pluralization. That’s just a pain.

With this method, I can change the markup to say:

<p>You have the following 
  @Model.Count.CardinalityLabel("theme", "themes"):</p>

I’m still not quite sure about the name of that method, though. What should it be?

Do you have such a method? Or are you fine with the awkward phrasing once in a while?

humor, personal comments suggest edit

Stumbling around the net, I ran into the “funny verticals” meme. These are typically a vertical strip of screenshots pulled from a movie or television with funny captions tacked on.

If you do an image search for “funny verticals”, you’ll see a whole slew of them. Be forewarned, there are some very crude and offensive ones.

For the lazy, here are a few representative ones I found to be funny. The first example is from the movie Inception:


This is another one from Inception. There seems to be a lot using this specific sequence.


And here’s one from the television show, LOST.


On a lark, I thought it’d be funny to try and do a geek version of this meme. I chose two personalities in our community I figured would be very well known, Scott Hanselman and Scott Guthrie. This has absolutely nothing to do with Hanselman posting this photo on his blog because I’m not the vengeful type. Nosiree.

I’ll post my versions here as well as the raw image because I know you’ll do much better than me. I needed to keep it tame because I do enjoy my employer’s salary continuation plan.


Here’s another one using the same template.


I tried changing this last one up a bit.


It’s pretty challenging to do this with folks who are not television/movie celebrities because you have far fewer images to work from, so I did the best I could with what I could find.

If you think you can do a lot better, download the raw images and prove it! When you do, post a link in the comments so we can see it. :)