nuget, code, open source comments edit

When installing a package into a project, NuGet creates a packages.config file within the project (if it doesn’t already exist) which is an exact record of the packages that are installed in the project. At the same time, a folder for the package is created within the solution level packages folder containing the package and its contents.
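For reference, a packages.config file is just a small XML manifest. Here's roughly what one looks like (the package IDs and versions below are illustrative, not from any particular project):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Each entry records a package installed into this project. -->
  <package id="Ninject" version="2.2.1.0" />
  <package id="ELMAH" version="1.2.0" />
</packages>
```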

Currently, it’s expected that the packages folder is committed to source control. The reason for this is that certain files from the package, such as assemblies, are referenced from their location in the packages folder. The benefit of this approach is that a package that is installed into multiple projects does not create multiple copies of the package nor the assembly. Instead, all of the projects reference the same assembly in one location.

If you commit the entire solution and packages folder into source control, and another user gets latest from source control, they are in the same state you are in. If you omitted the packages folder, the project would fail to build because the referenced assembly would be missing.


This approach doesn’t work for everyone. We’ve heard from many folks that they don’t want their packages folder to be checked into their source control.

Fortunately, you can enable this workflow today by following David Ebbo’s approach described in his blog post, Using NuGet without committing Packages.

But in NuGet 1.4 we’re planning to make it integrated into NuGet. We will be adding a new feature to restore any missing packages and the packages folder based on the packages.config file in each project when you attempt to build the project. This ensures that your application will compile even if the packages folder is missing at the time, which might be the case if you don’t commit it to source control.
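Until then, the manual restore that David Ebbo describes boils down to running NuGet.exe against each project's packages.config as a pre-build step, something along these lines (the relative path is illustrative and depends on your solution layout):

```shell
# Restore missing packages into the solution-level packages folder.
# Run from the project directory; adjust the output path as needed.
NuGet.exe install packages.config -o ..\packages
```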


We have certain requirements we plan to meet with this feature. Primarily, it has to work in a continuous integration (CI) server scenario. So it must work within Visual Studio when you build, but also outside of Visual Studio when you use msbuild.exe to compile the solution.

For more details, please refer to:

If you have feedback on the design of this feature, please provide it in the discussion thread. Also, do keep in mind that this next release is our first iteration to address this scenario. We think we’ll hit the primary use cases, but we may not get everything. But don’t worry, we’ll continue to release often and address scenarios that we didn’t anticipate.

Thanks for your support!

nuget, code, open source comments edit

In continuing our efforts to release early, release often, I’m happy to announce the release of NuGet 1.3!

Upgrade!

If you go into Visual Studio and select Tools > Extension Manager, click on the Update tab and you’ll see that this new version of NuGet is available as an update. Click the Upgrade button and you’re all set. It only takes a minute and it really is that easy to upgrade.


As always, there’s a new version of NuGet.exe corresponding with this release as well as a new version of the Package Explorer. If you have a fairly recent version of NuGet.exe, you can upgrade it by simply running the following command:

NuGet.exe u


Expect a new version of Package Explorer to be released soon as well. It is a click once application so all you need to do is open it and it will prompt you to upgrade it when an upgrade is available.

There are a lot of cool improvements and bug fixes in this release as you can see in the release announcement. One of my favorite features is the ability to quickly create a package from a project file (csproj, vbproj) and push the package with debugging symbols to the server. David Ebbo wrote a great blog post about this feature and Scott Hanselman and I demonstrated this feature 20 minutes into our NuGet in Depth talk at Mix 11.

mvc, code comments edit

Say you want to apply an action filter to every action except one. How would you go about it? For example, suppose you want to apply an authorization filter to every action except the action that lets the user login. Seems like a pretty good idea, right?

Currently, it takes a bit of work to do this. If you add a filter to the GlobalFilters.Filters collection, it applies to every action, which in the previous scenario would mean you already need to be authorized to login. Now that is security you can trust!


You can also manually add the filter attribute to every controller and/or action method except one. This solution is a potential bug magnet since you would need to remember to apply this attribute every time you add a new controller. Update: there’s yet another approach you can try, which is to write a custom authorize attribute as described in this blog post on Securing your ASP.NET MVC 3 Application.

Fortunately, ASP.NET MVC 3 introduced a new feature called filter providers which allow you to write a class that will be used as a source of action filters. For more details about what filter providers are, I highly recommend reading Brad Wilson’s blog post on filters.

In this case, what I need to write is a conditional action filter. I actually started writing one during my ASP.NET MVC 3 presentation at this past Mix 11 but never actually finished the demo. One of the many mistakes that inspired my blog post on presentation tips.

In this blog post, I’ll finish what I started and walk through an implementation of a conditional filter provider which will let us accomplish applying filters to action methods based on any criteria we can think of.

Here’s the approach I took. First, I wrote a custom filter provider.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class ConditionalFilterProvider : IFilterProvider {
  private readonly 
    IEnumerable<Func<ControllerContext, ActionDescriptor, object>> _conditions;

  public ConditionalFilterProvider(
    IEnumerable<Func<ControllerContext, ActionDescriptor, object>> conditions) {
    _conditions = conditions;
  }

  public IEnumerable<Filter> GetFilters(
      ControllerContext controllerContext, 
      ActionDescriptor actionDescriptor) {
    return from condition in _conditions
           select condition(controllerContext, actionDescriptor) into filter
           where filter != null
           select new Filter(filter, FilterScope.Global, null);
  }
}

The code here is fairly straightforward despite all the angle brackets. We implement the IFilterProvider interface and return only the filters whose conditions are met. Each condition is a Func that gets passed two pieces of information, the current ControllerContext and an ActionDescriptor, and returns either a filter instance or null. Through the ActionDescriptor, we can get access to the ControllerDescriptor.

The ActionDescriptor and ControllerDescriptor are abstractions of actions and controllers that don’t assume the controller is a type and the action is a method, which is why the provider works against these abstractions rather than reflecting over types directly.

So now, to use this provider, I simply need to instantiate it and add it to the global filter provider collection (or register it via my Dependency Injection container like Brad described in his blog post).

Here’s an example of creating a conditional filter provider with two conditions. The first adds an instance of MyFilter to every controller except HomeController. The second adds SomeFilter to any action that starts with “About”. These scenarios are a bit contrived, but I bet you can think of a lot more interesting and powerful uses for this.

IEnumerable<Func<ControllerContext, ActionDescriptor, object>> conditions = 
    new Func<ControllerContext, ActionDescriptor, object>[] { 
    // Apply MyFilter to every controller except HomeController.
    (c, a) => c.Controller.GetType() != typeof(HomeController) ? 
      new MyFilter() : null,
    // Apply SomeFilter to any action whose name starts with "About".
    (c, a) => a.ActionName.StartsWith("About") ? new SomeFilter() : null
};

var provider = new ConditionalFilterProvider(conditions);

Once we create the filter provider, we add it to the filter provider collection. Again, you can also do this via dependency injection instead of adding it to this static collection.
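Concretely, that registration is a one-liner against the static FilterProviders.Providers collection, typically done at application startup. Here’s a sketch (Global.asax.cs in an ASP.NET MVC 3 project; provider is the instance created above):

```csharp
// In Global.asax.cs -- register the conditional filter provider
// alongside the default providers when the application starts.
protected void Application_Start() {
    FilterProviders.Providers.Add(provider);

    // ...followed by the usual area, filter, and route registration.
}
```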

I’ve posted the conditional filter provider as a package in the personal NuGet repository I use for my own little samples. Feel free to add that URL as a package source and run Install-Package ConditionalFilterProvider in order to get the source.

Tags: aspnetmvc, filter, filter providers

code, open source comments edit

Eric S. Raymond in the famous essay, The Cathedral and the Bazaar, states,

Release early. Release often. And listen to your customers.

This advice came from Eric’s experience of managing an open source project as well as his observations of how the Linux kernel was developed.

But why? Why release often? Do I really have to listen to my customers? They whine all the time! To question this advice is sacrilege to those who have this philosophy so deeply ingrained. It’s obvious!

Or is it?

When I was asked this in earnest, it took me a moment to answer. It’s one of those aphorisms you know is true, but perhaps you’ve never had to explain it before. It’s hard to answer not because there isn’t a good answer, but because it’s difficult to know where to begin.

It’s healthy to challenge conventional wisdom from time to time to help avoid the cargo cult mentality and remind oneself all the reasons good advice is, well, good advice.

One great approach is to take a step back and imagine explaining this to someone who isn’t ingrained in software development, such as a business or marketing person. Why is releasing early and often a good thing? It helps to clarify that releasing early doesn’t mean waking up at 3:00 AM to release, though that may happen from time to time.

In this blog post, I’ll look into this question as well as a couple of other related questions that came to mind as I thought about this topic:

  • If releasing often is a good thing, why not release even more often?
  • What factors affect how often is often enough?
  • What are common objections to releasing often?
  • Why does answering a question always leave you with more questions?

I’ll try answering these questions as best as I can based on my own experiences and research.

Why is it a good thing?

What are the benefits of releasing early and often? As I thought through this question and looked at the various responses that I received from asking the Twitterista for their opinions, three key themes kept recurring. These themes became the summary of my TL;DR version of the answer:

  1. It results in a better product.
  2. It results in happier customers.
  3. It fosters happier developers.

So how does it accomplish these three things? Let’s take a look.

It provides a rapid feedback loop

Steve Smith had this great observation (emphasis mine):

The shorter the feedback loop, the faster value is added. Try driving while only looking at the road every 10 secs, vs. constantly.

Driving like that is a sure formula for receiving a Darwin Award.

Every release is an opportunity to stop theorizing about what customers want and actually put your hypotheses to the test by getting your product in their hands. The results are often surprising. After all, who expected Twitter to be so big?

Matt Mullenweg, the creator of WordPress, put it best in his blog post, 1.0 is the loneliest number:

Usage is like oxygen for ideas. You can never fully anticipate how an audience is going to react to something you’ve created until it’s out there. That means every moment you’re working on something without it being in the public it’s actually dying, deprived of the oxygen of the real world.

This has played out time and time again in my experience. It happened recently with NuGet when we shipped a bug that caused sluggishness in certain TFS scenarios, something very difficult to discover without putting the product in real customers’ hands to use in real scenarios. Thankfully we didn’t have to wait a year to release a proper fix.

As Miguel De Icaza points out,

Early years of the project, you get to course correct faster, keep up with demand/needs

It’s not just customer demands that require course corrections. At times, changing market conditions and other external factors may require you to quickly adjust your planned feature sets and come out with a release in response to these changes. Short iterations allow more agility to respond to such events, keeping your product relevant.

It gets features and bug fixes into customers’ hands faster

This point is closely related to the last point, but worth calling out. In the Chromium blog, the open source project that makes up the core of the Google Chrome browser, they point out the following in their blog post, also titled Release Early, Release Often (emphasis mine):

The first goal is fairly straightforward, given our pace of development. We have new features coming out all the time and do not want users to have to wait months before they can use them. While pace is important to us, we are all committed to maintaining high quality releases — if a feature is not ready, it will not ship in a stable release.

Well why not make users wait a few months? As Nate Kohari points out,

Nothing is real until it’s providing value (or not) to your users. Having completed work that isn’t released is wasteful.

The longer a feature sits implemented but unused in real scenarios, the more the context for the feature is lost. By the time it’s in customers’ hands, the original reason for the feature may be lost in the smoky mists of memory. And as feedback on the feature comes in, it takes time for the team to re-acquaint itself with the code and the reasons the code was written the way it was. All of that ramp-up time is wasteful.

Likewise, the faster the cycle, the shorter the time the team has to live with a known bug in the product. Sometimes products ship with bugs that aren’t serious enough for an emergency patch, but annoying enough that customers are unhappy about living with them until the next release. This makes developers unhappy as well, since they are the ones who hear about it from the customers. A short release cycle means nobody has to live with these sorts of bugs for long.

It reduces pressure on the development team to “make” a release

This point is also taken from the Chromium blog post. You can probably tell that post really resonated with me.

As a project gets closer and closer to the end of the release cycle, the team starts to make hard decisions regarding which bugs or features will get implemented or get punted. The pressure builds as the team realizes, if they don’t get the fix in this release, customers will have to wait a long time to get it in the next. As the Chromium blog post states:

Under the old model, when we faced a deadline with an incomplete feature, we had three options, all undesirable: (1) Engineers had to rush or work overtime to complete the feature by the deadline, (2) We delayed the release to complete that feature (which affected other un-related features), or (3) The feature was disabled and had to wait approximately 3 months for the next release. With the new schedule, if a given feature is not complete, it will simply ride on the next release train when it’s ready. Since those trains come quickly and regularly (every six weeks), there is less stress.

The importance of this point can’t be overstated. Releasing often is not only good for the customers, it’s good for the development team.

It makes the developers excited!

This point was one of the original observations that Eric Raymond made in his essay,

So, if rapid releases and leveraging the Internet medium to the hilt were not accidents but integral parts of Linus’s engineering-genius insight into the minimum-effort path, what was he maximizing? What was he cranking out of the machinery?

Put that way, the question answers itself. Linus was keeping his hacker/users constantly stimulated and rewarded – stimulated by the prospect of having an ego-satisfying piece of the action, rewarded by the sight of constant (even daily) improvement in their work.

Contrary to popular depictions, developers love people! And we especially love happy people, which makes us very excited to see features and bug fixes get into customers’ hands because it makes them happy.

It’s demoralizing to implement a great feature or key bug fix and then watch it sit and stagnate with nobody using it.

This is especially important when you’re trying to harness the energy of a community of open source contributors within your project. It’s important to keep their attention and interest in the project high, or you will lose them. And nothing makes contributors more excited than seeing their hard work be released into the public for the world to use and recognize.

Yes, appeal to your contributors egos! Let them share in the glory now, and not months from now! Let them receive the recognition they deserve sooner rather than later!

It makes the schedule more predictable and easier to scope

Quick! Tell me: how many piano tuners are there in Chicago? At first glance, this is a very difficult task. But if you break it down into smaller pieces, you can come up with a pretty good estimate for each small piece, which leads to a decent overall estimate.

This type of problem is known as a Fermi problem, named after the physicist Enrico Fermi, who was renowned for his estimation abilities. The story goes that he estimated the yield of an atomic bomb test by measuring how far the blast carried small pieces of paper he ripped up and dropped from his hand.

Breaking down a long product schedule into short iterations is similar to attacking a Fermi Problem. It’s much easier to scope and estimate a short iteration than it is a large one.
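The piano tuner estimate itself can be sketched in a few lines. Every number below is an assumption I made up for illustration; the point is the decomposition, not the inputs:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# All inputs are rough assumptions, not real data.
population = 3_000_000            # people in Chicago (rough)
people_per_household = 2          # average household size
piano_ownership = 1 / 20          # fraction of households with a piano
tunings_per_piano_per_year = 1    # each piano tuned about once a year
tunings_per_tuner_per_year = 5 * 50  # 1 tuning/day, 5 days/wk, 50 wks/yr

pianos = population / people_per_household * piano_ownership
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # prints 300 -- on the order of a few hundred
```

Each individual guess is easy to sanity-check, and errors in opposite directions tend to cancel, which is exactly why short, well-scoped iterations are easier to estimate than one long release.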

Again, going back to the Chromium blog post,

The second goal is about implementing good project management practice. Predictable fixed duration development periods allow us to determine how much work we can do in a fixed amount of time, and makes schedule communication simple. We basically wanted to operate more like trains leaving Grand Central Station (regularly scheduled and always on time), and less like taxis leaving the Bronx (ad hoc and unpredictable).

Keeps your users excited and happy

Ultimately, all the previous points I made lead to happy customers. When a customer reports a bug, and a fix comes out soon afterwards, the customer is happy. When a customer sees new features continually released that make their lives better, they are happy. When your product does what they want because of the tight feedback cycle, the customers are ultimately much happier with the product.

And this doesn’t just benefit your current customers. Potential new customers will be attracted to the buzz that frequent releases generate. As Atley Hunter points out,

Offering software consistently and frequently helps to foster both market buzz and continued interest from your install base.

Continual releases are the sign of an active and vibrant product and product community. This is great for marketing your product to new audiences.

So Why Not Release All The Time?

If releasing often is such a good thing, why not release all the time? Isn’t releasing more often better than less often?

Releasing every second of the day might not be possible, since it does take time to implement features, but it’s not unheard of to release features as soon as they are done. This is the idea behind a technique called continuous deployment, which is particularly well suited to web sites.

When I worked at Koders (now part of BlackDuck software), we pushed a release every two weeks. We wanted to move towards a weekly release, but it took a couple of days to build our source code index. Our plan was to make the index build incrementally so we could deploy features more often and hopefully reach the ultimate goal of releasing as soon as a feature was completely done.

I think this is a great idea, but not always attainable without significant changes in how a product is developed. For example, with NuGet, we have a continuous integration server, that produces a build with every check-in. In a manner of speaking, we do have continuous deployment because anyone can go and try out these builds.

But I wouldn’t apply the “continuous deployment” label to what we do because these CI builds are not release quality. Getting to that point would require changing our development process so that every check-in to our main branch represented a completely end-to-end tested feature that we’re ready to ship.

At this point, I’m not even sure that continuous deployment is right for every product, though I’m still learning and open to new and better ideas. To understand why I feel this way, let me transition to my next question.

What factors affect how often is often enough?

I think there are several key factors that determine how often a product should be released.

Adoption Cost to the Customers

Some products have a higher adoption cost when a new release is produced. For example, websites have a very low adoption cost. When a site produces a release daily, or even more than once a day, the cost to me is very little as long as the changes aren’t too drastic.

I simply visit the site and if I notice the new feature, I start taking advantage of it.

A product like Google Chrome has a slightly higher adoption cost, but not much. I’m unlikely to have critical infrastructure completely dependent on their browser. The browser updates itself and I simply take advantage of the new features.

But a product like a web framework has a much higher adoption cost. There’s a steeper learning curve for a new release of a programming framework than there is for a browser update. Also, authors want time to write their training materials, courses, and books before the next version makes them obsolete.

And folks running sites on these frameworks want time to upgrade to the next version without that version becoming obsolete immediately. Major releases of frameworks allow the entire ecosystem to congeal around these major release points. Imagine if ASP.NET MVC had 24 official releases in two years. How much harder would it be to hire developers to support your ASP.NET MVC v18 application when they want to be on the latest and greatest, because we all know v24 is the cat’s pajamas?

I believe this is why you see Ruby on Rails releasing every two years and ASP.NET MVC releasing yearly. Note that both of these products still release previews early and often, but the final “blessed” builds come out relatively infrequently.

Maturity of the product

The other factor that might affect release cadence is the maturity of the product. When a product is playing catch-up, it has to release more often or risk falling further and further behind.

As a product matures, all the low-hanging fruit gets taken, and sometimes a longer release cycle is necessary as the team tackles deeper features that require heavier investments of time. Keep in mind, this is me theorizing here. I don’t have hard numbers to base this on, just my own observations.

Customer Costs

Sometimes, the customers’ tolerance for change affects release cadence. For example, I don’t think customers would tolerate a new iPhone hardware device every month because they’d constantly feel left behind. A year is enough time for many (but not all) consumers to feel ready to upgrade to the next version.

Deployment Costs

One last consideration might be the costs to deploy the product. For hardware, this can be a big factor when the design of the product changes drastically from one version to the next.

Suddenly new supply chains may need to be set in place. Factories may need to be retrofitted to support the new or changing components. The products have to physically be shipped to the stores.

All these things can affect how often a new product can be shipped.

Common objections to Releasing Often

I think there are three main objections to this model that I’ve heard or can think of. The first is that it forces end users to update their software more often. I’ve addressed this point already by looking at customer adoption cost as a gating factor in how often the product should be released. Many products can be updated quietly without requiring users to take any action. If the new features are designed well, customers will naturally discover and learn them without too much fuss. In this model, avoid having these regular releases move everyone’s cheese by rearranging the UI and that sort of thing.

Another concern raised is that this leads to more frequent lower quality releases rather than less frequent releases with higher polish and quality. After all, releases always contain overhead and by having more releases, you’re multiplying this overhead over multiple releases.

This is definitely a concern, but one that’s easily mitigated. As my co-worker Drew Miller points out, long release cycles mask waste, and that waste is far greater than the cost of more frequent release overhead:

  • The more often you release, the better you are at releasing; release overhead decreases over time. With long release cycles, the pain of release inefficiency is easy to ignore and justify and the urgency and incentive to trim that waste is very low.
  • The sense of urgency for frequent releases drives velocity, more than off-setting the cost of release overhead.
  • The rapid feedback with frequent releases reduces the waste we always have for course correction within a long release cycle. A great example of this is the introduction of ActionResult to ASP.NET MVC 1.0 between Preview 2 and 3. That was costly, but would’ve been more costly if we had made that change much later.
  • The slow start of a long release cycle alone is usually more wasteful than the total cost of release overhead over the same period.
  • Long release cycles may have milestone overhead that can be as great (or greater) than release overhead.

Release as Often as Possible and Prudent

There’s probably a lot more that can be written on this topic, but I’m quickly approaching the TL;DR event horizon (if I haven’t passed it already). I’m excited to continue to learn more about effective release strategies so I look forward to your thoughtful comments on this topic.

In the end, my goal was to make it clear why releasing early and often is a good thing. I don’t believe there’s an empirical answer to the question of how often you should release. Rather, my answer right now is: as often as possible and prudent.

If you release often, but find that your releases tend to be of low quality, then perhaps it’s time to dial it back a bit. If your releases are of very high quality, perhaps it’s worth looking at the waste that goes into each release and trying to eliminate it so you can release even more often, if doing so would appeal to your customers.

For more reading, I recommend:

code, open source, nuget comments edit

The Magic 8-ball toy is a toy usually good for maybe one or two laughs before it quickly gets boring. Even so, some have been known to make all their important life/strategic decisions using it, or an equivalent mechanism.

The way the toy works is you ask it a question, shake it, and the answer to your question appears in a little viewport. What you’re seeing is one side of an icosahedron (20-sided polyhedron, or for you D&D folks, a d20). On each face of the d20 is a potential answer to your yes or no question.


I thought it would be fun to write a NuGet package that emulates this toy as one of my demos for the NuGet talk at Mix 11. Yes, I am odd when it comes to defining what I think is fun. When you install the package, it adds a new command to the Package Manager Console.

The command I wrote didn’t have twenty possible answers, because I was lazy, but it followed the same general format. This command also includes support for tab expansions, which feel a lot like Intellisense.

The following screenshot shows an example of this new command, Get-Answer, in use. Note that when you hit tab after typing the command, you can see a tab expansion suggesting a set of questions. It’s important to note that unlike Intellisense, you are free to ignore the tab expansion here and type in any question you want.


In this blog post, I will walk through how I wrote and packaged that command. I must warn you, I’m no PowerShell expert. I wrote this as a learning experience with the help of other PS experts.

The first thing to do is write an init.ps1 file. As described in the NuGet documentation for creating a package on CodePlex:

Init.ps1 runs the first time a package is installed in a solution. If the same package is installed into additional projects in the solution, the script is not run during those installations. The script also runs every time the solution is opened. For example, if you install a package, close Visual Studio, and then start Visual Studio and open the solution, the Init.ps1 script runs again.

This script is useful for packages that need to add commands to the console because they’ll run each time the solution is opened. Here’s what my init.ps1 file looks like:

param($installPath, $toolsPath, $package)

Import-Module (Join-Path $toolsPath MagicEightBall.psm1)

The first line declares the set of parameters to the script. These are the parameters that NuGet will pass into the init.ps1 script (note that install.ps1, a different script that can be included in NuGet packages, receives a fourth $project parameter).

  • $installPath is the path to your package install
  • $toolsPath is the path to the tools directory under the package
  • $package is a reference to your package

The second line of the script imports a PowerShell module. In this case, we specify a script named MagicEightBall.psm1 by its full path. We could write the entire script here in init.ps1, but I’ve been told it’s good form to write scripts as modules and then import them via init.ps1, and I have no reason not to believe my source. I suppose init.ps1 could also import multiple modules rather than one.

Let’s look at the code for MagicEightBall.psm1. It’s pretty brief!

$answers =  "As I see it, yes", 
            "Reply hazy, try again", 
            "Outlook not so good"

function Get-Answer($question) {
    $answers | Get-Random
}

Register-TabExpansion 'Get-Answer' @{
    'question' = { 
        "Is this my lucky day?",
        "Will it rain tonight?",
        "Do I watch too much TV?"
    }
}

Export-ModuleMember Get-Answer

The first line of code simply declares an array of answers. The real Magic Eight Ball has 20 in all, so feel free to add them all there.

I then define a function named Get-Answer. The implementation demonstrates one of the cool things I like about PowerShell. I can simply pipe the array into the Get-Random cmdlet and it returns a random answer from the array.

Skipping to the end, the last line of code calls Export-ModuleMember on this function, which makes it available in the Package Manager Console.

So what about that middle bit of code that calls Register-TabExpansion? Glad you asked. That function provides the Intellisense-like behavior for our function by registering a tab expansion.

It takes two parameters: the first is the name of the function, in this case Get-Answer. The second is a dictionary where the keys are the names of the function’s parameters and the values are arrays of expansion options for each parameter. Since our function only has one parameter, named question, we add 'question' as the key to the dictionary and supply an array of potential questions as the value.

With these two files in place, I simply opened up Package Explorer and selected File > New from the menu to start a new package, then dragged both of the script files into the Package contents window. NuGet recognized the files as PowerShell scripts and offered to put them in the Tools folder.

I then selected Edit > Edit Package Metadata from the menu to enter the NuSpec metadata for the package and clicked OK at the bottom.


With all that done, I selected the File > Save As… menu to save the package on disk so I could test it out. Once I was done testing, I selected File > Publish to publish the package to the real NuGet feed.

It’s really that simple to write a package that adds a command to the Package Manager console complete with tab expansions.

In a future blog post, I’ll write about how I wrote MoodSwings, a package that can automate Visual Studio from within the Package Manager Console. If you have the NuGet Package Manager Console open, you can try out this package by running the command:

Install-Package MagicEightBall

code, tech, personal comments edit

One aspect of my job that I love is being able to go in front of other developers, my peers, and give presentations on the technologies that my team and I build. I’m very fortunate to be able to do so, especially given the intense stage fright I used to have.


But over time, through giving multiple presentations, the stage fright has subsided to mere abject horror levels. Even so, I’m still nowhere near the numbers of much more polished and experienced speakers such as my cohort, Scott Hanselman.

Always looking for the silver lining, I’ve found that my lack of raw talent in this area has one great benefit, I make a lot of mistakes. A crap ton of them. But as Byron Pulsifer says, every mistake is an “opportunity to learn”, which means I’m still cramming for that final exam.

At this past Mix 11, I made several mistakes, er, learning opportunities, in my first talk that I was able to capitalize on by the time my second talk came around.

I thought it might be helpful for my future self (and perhaps other budding presenters) if I jotted down some of the common mistakes I’ve made and how I attempt to mitigate them.

Have a Backup For Everything!

An alternative title for this point could be worry more! I tend to be a complete optimist when it comes to preparing for a talk. I assume things will just work themselves out, an attitude that drives Mr. Hanselman crazy when we give a talk together. This attitude is also a setup for disaster when it comes to giving a talk.

During my talk, there were several occasions where I fat-fingered the code I was attempting to write on stage in front of a live audience. For most of my demos, I had snippets prepared in advance. But there were a couple of cases where I thought the code was simple enough that I could hack it out live.

Bad mistake!

You never know when nervousness combined with navigating a method that takes a Func-y lambda expression of a generic type can get you so lost in angle brackets you think you’re writing XML. I had to delete the method I was writing and start from scratch because I didn’t create a snippet for it, which was my backup for other code samples. This did not create a smooth experience for people attending the talk.

Another example of having a backup in place is to always have a finished version of your demo you can switch to and explain in case things get out of control with your fat fingers.

For every demo you give, think about how it could go wrong and what your backup plan will be when it does go wrong.

Minimize Dependencies Not Under Your Control

In my ASP.NET MVC 3 talk at Mix, I tried to publish a web application to the web that I had built during the session. This was meant to be the finale for the talk and would allow the attendees to visit the site and give it a spin.

It’s a risky move for sure, made all the more risky in that I was publishing over a wireless network that could be a bit creaky at times.

Prior to the talk, I successfully published multiple times in preparation. But I hadn’t set up a backup site (see previous mistake). Sure enough, when the time came to do it live with a room full of people watching, the publishing failed. The network inside the room was different than the one outside the room.

If I had a backup in place, I could have apologized for the failure and sent the attendees to visit the backup site in order for them to play with the finished demo. Instead, I sat there, mouth agape, promising attendees that it worked just before the talk. I swear!

Your audience will forgive the occasional demo failure that’s not in your control as long as the failure doesn’t distract from the overall flow of the presentation too much and as long as you can continue and still drive home the point you were trying to make.

Mock Your Dependencies

This tip is closely related to and follows up on the last tip. While at Mix, I learned how big keynotes, such as the one at Mix, are produced. These folks are Paranoid with a capital “P”! I listened intently to them about the level of fail safes they put in place for a conference keynote.

For example, they often re-create local instances of all aspects of the Internet and networking they might need on their machine through the use of local web servers, HOST files, local fake instances of web services, etc.

Not only that, but there is typically a backup person shadowing what the presenter is doing on another machine, following along the demo script carefully. If something goes wrong with the presenter’s demo, they are able to flip a KVM switch so that the main presenter is now displaying and controlling the backup machine, while the shadow presenter takes over the presenter’s original machine and can hopefully fix it and continue shadowing. Update: Scott Hanselman posted a video of behind-the-scenes footage from Mix11 where he and Jonathan Carter discuss keynote preparations and how the mirroring works.

It’s generally a single get-out-of-jail-free card for a keynote presenter.

I’m not suggesting you go that far for a standard break-out session. But faking some of your tricky dependencies (and having backups) is a very smart option.

Sometimes, a little smoke and mirrors is a good backup

In our NuGet talk the following day, Scott and I prepared a demo in which I would create a website to serve up NuGet packages, and he would visit the site to install a package.

We realized that publishing the site on stage was too risky and was tangential to the point of our talk, so we did something very simple. I created the site online in advance at a known location. This site would be an exact duplicate of the one I would create on stage.

During the presentation, I built the site on my local machine and casually mentioned that I had made the site available to him at that URL. We switched to his computer, he added that URL to his list of package sources, and installed the package.

The point here is that while we didn’t technically lie, we also didn’t tell the full story because it wasn’t relevant to our demo. A few people asked me afterwards how we did that, and this is how.

I would advise against using smoke and mirrors for your primary demo though! Your audience is very smart, and they probably wouldn’t like it if the key technology you’re demoing turned out to be fake.

Prepare and Practice, Practice, Practice

This goes without saying, but is sometimes easier said than done. I highly recommend at least one end-to-end walkthrough of your talk and practice each demo multiple times.

Personally, I don’t try to memorize or plan out exactly what I will say in between demos (except for the first few minutes of the talk). But I do think it’s important to memorize and nail the demos and have a rough idea of the key points that I plan to say in between demos.

The following screenshot depicts a page of chicken scratch from Scott Hanselman’s notebook where we planned out the general outline of our talk.


I took these notes, typed them up into an orderly outline, and printed out a simple script that we referred to during the talk to make sure we were on the right pace. Scott also makes a point to mark certain milestones in the outline. For example, we knew that around the 45 minute mark, we had better be at the AddMvcToWebForms demo or we were falling behind.

Writing the script is my way of preparing as I end up doing the demos multiple times each when writing the script. But that’s definitely not enough.

For my first talk, I never had the opportunity to do a full dry-run. I can make a lot of excuses about being busy leading up to the conference, but in truth, there is no excuse for not practicing the talk end to end at least once.

When you do a dry run, you’ll find so many issues you’ll want to streamline or fix for the actual talk. Trust me, it’s a lot better to find them during a practice run than during a live talk.

Don’t change anything before the talk

Around the 24:40 mark in our joint NuGet in Depth session, you can see me searching for a menu option in the Solution Explorer. I’m looking for the “Open CMD Prompt Here” menu, but I can’t find it.

It turns out this is a feature of the Power Commands for Visual Studio 2010 VSIX extension, an extension I had just uninstalled at the suggestion of my speaking partner, Mr. Hanselman. Just prior to our talk, he suggested I disable some Visual Studio extensions to “clean things up.”

I had practiced my demos with that extension enabled so it threw me off a bit during the talk (Well played Mr. Hanselman!). The point of this story is you should practice your demo in the same state as you plan to give the demo and don’t change a single thing with your machine before giving the actual talk.

I know it’s tempting to install that last Windows Update just before a talk because it keeps annoying you with its prompting, and what could go wrong, right? But resist that temptation. Wait till after your talk to make changes to your machine.


This post isn’t meant to be an exhaustive list of presentation tips. These are merely tips I learned recently based on mistakes I’ve made that I hope and plan to never repeat.

For more great tips, check out Scott Hanselman’s Tips for a Successful MSFT Presentation and Venkatarangan’s Tips for doing effective Presentations.

Tags: mix11,mix,presentations,tips

code, open source, mvc, nuget comments edit

Another Spring approaches and once again, another Mix is over. This year at Mix, my team announced the release of the ASP.NET MVC 3 Tools Update, which I blogged about recently.

Working on this release as well as NuGet has kept me intensely busy since we released ASP.NET MVC 3 RTM only this past January. Hopefully now, my team and I can take a moment to breathe as we start making big plans for ASP.NET MVC 4. It’s interesting to me to think that the version number for ASP.NET MVC is quickly catching up to the version of ASP.NET proper. Smile

Once again, Mix has continued to be one of my favorite conferences due to the eclectic mix of folks who attend.


The previous photo was taken from Joey De Villa’s Blog post.


It’s not just a conference where you’ll run into Scott Guthrie and Hanselman, but you’ll also run into Douglas Crockford, Miguel De Icaza or even Elvis!

I was involved with two talks at Mix which are now available on Channel9 and embedded here.

ASP.NET MVC 3: The Time Is Now

In this talk, I cover the new features of ASP.NET MVC 3 and the ASP.NET MVC 3 Tools Update while building an application that allows me to ask the audience survey questions. The application is hosted at

Errata: I ran into a few problems during this talk, which I will cover in a follow-up blog post about speaking tips I learned due to mistakes I’ve made.

If you attended the talk (or watched it), I learned at the end that the failure to publish was due to a proxy issue in the room’s network that I didn’t have in my hotel room or the main conference area.

I plan to follow up on various topics I covered in the talk with blog posts. For example, I wrote a helper method during the talk that allows you to pass in a Razor snippet as a template for a loop. That’s now covered in this blog post, A Better Razor Foreach Loop.

NuGet in Depth: Empowering Open Source on the .NET Platform

In this talk, Scott and I perform what we call our “HaaHa” show, which is a name derived from a combination of our last names, Phil Haack and Scott Hanselman, but pronounced like our aliases PhilHa and ScottHa.

We spent the entire talk attempting to one-up each other with demos of NuGet. Each demo built on the last and showed more and more what you can do with NuGet.

Errata: During the demo, there was one point where I expected a License Agreement to pop up, but it didn’t. I gave a misleading explanation for why that happened. We should have seen the pop-up because we do not install SqlServerCompact by default.

Turns out I ran into an edge case, a potential bug in NuGet. Usually, when I create a project, I make sure to create a folder for the solution so that the solution is isolated in its own folder. For some reason, I didn’t have that checked and the solution was being created in my temp directory. Thus the packages folder was being shared with every project I’d created in that folder, which made NuGet think that SqlServerCompact was already installed.

If you’ve never accepted that agreement, it will pop up.

The second mistake I made was in describing install.ps1, which indeed runs every time you install the package into a project, not once per solution. To get the correct definition, read our documentation page on Creating a Package.

Another minor mistake I made was in describing the Magic 8-Ball: I said it had a dodecahedron inside. I meant to say icosahedron, which is a twenty-sided polyhedron.

During the talk, we randomly start talking about a ringtone. That was due to someone’s phone going off in the audience. You can’t hear it in the recording. Smile

Oh, I just pushed MoodSwings to the main feed so you can try it out.


This was the first time I stayed till the following day of a conference rather than hopping on a cab to the airport immediately after my last talk.

I highly recommend that approach. It was nice to have time to relax after my last talk. A few of us went to ride the rollercoaster at NY NY, walk around the strip, and take in a show, JabbaWockeez.


Tags: aspnetmvc, nuget-gallery, mix11, mix, nuget

razor, code, mvc comments edit

Yesterday, during my ASP.NET MVC 3 talk at Mix 11, I wrote a useful helper method demonstrating an advanced feature of Razor, Razor Templated Delegates.

There are many situations where I want to quickly iterate through a bunch of items in a view, and I prefer using the foreach statement. But sometimes, I need to also know the current index. So I wrote an extension method to IEnumerable<T> that accepts Razor syntax as an argument and calls that template for each item in the enumeration.

public static class HaackHelpers {
  public static HelperResult Each<TItem>(
      this IEnumerable<TItem> items, 
      Func<IndexedItem<TItem>, HelperResult> template) {
    return new HelperResult(writer => {
      int index = 0;

      foreach (var item in items) {
        var result = template(new IndexedItem<TItem>(index++, item));
        result.WriteTo(writer);
      }
    });
  }
}

This method calls the template for each item in the enumeration, but instead of passing in the item itself, we wrap it in a new class, IndexedItem<T>.

public class IndexedItem<TModel> {
  public IndexedItem(int index, TModel item) {
    Index = index;
    Item = item;
  }

  public int Index { get; private set; }
  public TModel Item { get; private set; }
}

And here’s an example of its usage within a view. Notice that we pass in Razor markup as an argument to the method which gets called for each item. We have access to the direct item and the current index.

@model IEnumerable<Question>

@Model.Each(@<li>Item @item.Index of @(Model.Count() - 1): @item.Item.Title</li>)

If you want to try it out, I put the code in a package in my personal NuGet feed for my code samples. Just connect NuGet to and Install-Package RazorForEach. The package installs this code as source files in App_Code.

UPDATE: I updated the code and package to be more efficient (4/16/2011).

mvc, code comments edit

I’m at Mix11 all week and this past Monday, I attended the Open Source Fest where multiple tables were set up for open source project owners to show off their projects.

One of my favorite projects is also a NuGet package named Glimpse Web Debugger. It adds a Firebug-like experience for grabbing server-side diagnostics from an ASP.NET MVC application while looking at it in your browser. It provides a browser plug-in like experience without the plug-in.

One of the features of their plug-in is a route debugger inspired by my route debugger. Over time, as Glimpse catches on, I’ll probably be able to simply retire mine.

But in the meanwhile, inspired by their route debugger, I’ve updated my route debugger so that it acts like tracing and puts the debug information at the bottom of the page (click to enlarge).


Note that this new feature requires that you’re running against .NET 4 and that you have the Microsoft.Web.Infrastructure assembly available (which you would in an ASP.NET MVC 3 application).

The RouteDebugger NuGet package includes the older version of RouteDebug.dll for those still running against .NET 3.5.

This takes advantage of a new feature included in the Microsoft.Web.Infrastructure assembly that allows you to register an HttpModule dynamically. That allows me to easily append this route debug information to the end of every request.
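For the curious, dynamic module registration with Microsoft.Web.Infrastructure looks roughly like the following sketch. The type names RouteDebuggerStarter and RouteDebuggerModule are hypothetical stand-ins for illustration, not necessarily the names used in the actual package:

```csharp
using System.Web;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

// Hypothetical type names; the package's actual names may differ.
[assembly: PreApplicationStartMethod(typeof(RouteDebuggerStarter), "Start")]

public static class RouteDebuggerStarter {
    public static void Start() {
        // Registers an IHttpModule at application start, no web.config entry needed.
        DynamicModuleUtility.RegisterModule(typeof(RouteDebuggerModule));
    }
}

public class RouteDebuggerModule : IHttpModule {
    public void Init(HttpApplication context) {
        // Hook a pipeline event here to append route debug info to each response.
    }

    public void Dispose() { }
}
```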

By the way, RouteDebugger is now part of the RouteMagic project if you want to see the source code.

To try it out, Install-Package RouteDebugger.

comments edit

Today at Mix, Scott Guthrie announced an update to ASP.NET MVC 3 that we’re calling the ASP.NET MVC 3 Tools Update. You can install it via Web PI or download the installer by going to the download details page. Check out the release notes as well for more details.

Notice the emphasis on calling it a Tools Update? The reason for that is simple. This only updates the tooling for ASP.NET MVC 3 and not the runtime. There are no changes to System.Web.Mvc.dll or any of its other assemblies that ship as part of the ASP.NET MVC 3 Framework. Instead, given that we just released ASP.NET MVC 3 this past January, we focused on improvements to the tools and project templates that we wanted to ship in time for Mix.

To drive this point home, here’s a screenshot of the Programs and Features dialog with ASP.NET MVC 3 RTM installed.



And here’s one with the Tools Update installed.



Did you see what changed? If not, I’ll help you. Smile


What’s new in this release?

We’ve added a lot of improvements to the tooling experience in this release. For more details, check out the release notes.

  • New Intranet Project Template that enables Windows Authentication and does not include the AccountController.
  • HTML 5 checkbox to enable HTML 5 versions of project templates.
  • Add Controller Dialog now supports full automatic scaffolding of Create, Read, Update, and Delete controller actions and corresponding views. By default, this scaffolds data access code using EF Code First.
  • Add Controller Dialog supports extensible scaffolds via NuGet packages such as MvcScaffolding. This allows plugging in custom scaffolds into the dialog which would allow you to create scaffolds for other data access technologies such as NHibernate or even JET with ODBCDirect if you’re so inclined!
  • JavaScript libraries within project templates are updatable via NuGet! (We included them as pre-installed NuGet packages.)
  • Includes Modernizr 1.7. This provides compatibility support for HTML 5 and CSS 3 in down-level browsers.
  • Includes EF Code First 4.1 as a pre-installed NuGet package.

We’ve also made several other small changes and fixed several bugs in the MVC tooling for Visual Studio:

  • We did some major cleanup to the AccountController in the Internet project template
  • We now have more “sticky” options that remember their settings even when you restart Visual Studio
  • We have much smarter model type filtering logic in the Add View and Add Controller dialogs

NuGet 1.2 Included

Around 12 days ago, we released NuGet 1.2. If you don’t already have NuGet 1.2 installed, ASP.NET MVC 3 Tools Update will install it for you. In fact, it requires it because of the pre-installed NuGet packages feature I mentioned earlier. When you create a new ASP.NET MVC 3 Tools Update project, the script libraries such as jQuery are installed as NuGet packages so that it’s easy to update them after the fact.

Give it a spin and let us know what you think!

nuget, open source, code comments edit

Hi there, it’s time to shine the bat-signal, or better yet, the NuGet-Signal!

The NuGet community needs your help! We’re wrestling with some interesting wide-ranging design decisions and we need data to test out our assumptions and help us make the best possible choices. I won’t go into too much detail about the specific issue as I don’t want to bias the results of the following survey. I simply want to gather information about common practices by asking a set of questions that mostly have empirical answers.

I think it’s a given that most Visual Studio solutions consist of multiple projects. What I’m not so sure about is how often those solutions consist of multiple applications.

For example, is it more common for your solution to have a single core app and all of the other projects support that app?


Or is it more common to have a solution with two different apps such as two WinForm apps or two web apps?


So please answer the following questions:

This survey requires using a browser that supports iframes.

As an example for that last question, here’s the packages folder for a sample solution I created. It has four packages where there are multiple versions.


Thanks for taking the time to answer these questions. I’ll follow up later with more details on what we’re working on.

And feel free to elaborate in the comments if you have more to say!

nuget, code comments edit

As you may know, NuGet supports aggregating packages from multiple package sources. You can simply point NuGet at a folder containing packages or at a NuGet OData service.

A while back I wrote up a guide to hosting your own NuGet feed. Well, we’ve made it way easier to set one up now! And, surprise surprise, it involves NuGet. Smile I’ll provide step by step instructions here. But first, make sure you’re running NuGet 1.2!

Step 1: Create a new Empty Web Application in Visual Studio

Go to the File | New | Project menu option (or just hit CTRL + SHIFT + N) which will bring up the new project dialog and select “ASP.NET Empty Web Application” as in the following screenshot (click to enlarge).


This results in a very empty project template.


Step 2: Install the NuGet.Server Package

Now right click on the References node and select Add Library Package Reference to launch the NuGet dialog (alternatively, you can use the Package Manager Console instead and type Install-Package NuGet.Server).

Click the Online tab and then type NuGet.Server in the top right search box. Click Install on the NuGet.Server package as shown in the following image (click to enlarge).


Step 3: Add Packages to the Packages folder

That’s it! The NuGet.Server package just converted your empty website into a site that’s ready to serve up the OData package feed. Just add packages into the Packages folder and they’ll show up.

In the following screenshot, you can see that I’ve added a few packages to the Packages folder.


Step 4: Deploy and run your brand new Package Feed!

I can hit CTRL + F5 to run the site and it’ll provide some instructions on what to do next.


Clicking on “here” shows the OData over ATOM feed of packages.


Now all I need to do is deploy this website as I would any other site and then I can click the Settings button and add this feed to my set of package sources as in the following screenshot (click to enlarge).


Note that the URL you need to put in is http://yourdomain/nuget/ depending on how you deploy the site.

Yes, it’s that easy! Note that this feed is “read-only” in the sense that it doesn’t support publishing to it via the NuGet.exe command line tool. Instead, you need to add packages to the Packages folder and they are automatically syndicated.

nuget, code, open source comments edit

I’m happy to announce the release of NuGet 1.2. It took us a bit longer than we expected to get this release out there, and I’ll explain why later, but for now, go install it!

Upgrade! If you go into Visual Studio and select Tools | Extension Manager, click on the Update tab and you’ll see that this new version of NuGet should be available as an update. It only takes a minute and it really is that easy to upgrade.

For more details about what’s in this release, check out the announcement on

There’s also a new version of NuGet.exe corresponding with this release as well as the Package Explorer. If you have a fairly recent version of NuGet.exe, you can upgrade it by simply running the following command:

NuGet.exe u


Thanks to everyone who helped make this happen. I’ll be writing about our plans for 1.3 fairly soon. Smile

code comments edit

A colleague of mine from the Data and Modeling Group mentioned that they have a new senior developer position open working on their new datajs project.

This developer would be responsible for defining how modern web and mobile applications use and interact with data on JavaScript platforms. We deal with and work on defining a number of standards including HTML5’s IndexedDB and OData, as well as provide a new set of end-to-end experiences for accessing, managing, and storing data in JS.

What’s interesting to me is that datajs appears to be an open source project under the MIT license!

From its description on the datajs CodePlex page,

datajs is a new cross-browser JavaScript library that enables data-centric web applications by leveraging modern protocols such as JSON and OData and HTML5-enabled browser features. It’s designed to be small, fast and easy to use.

For more details about the position, check out the job posting online.

For those of you who are interested but don’t live in Redmond, Washington, this position is in Redmond and requires relocation.

How do I apply?

If you’re interested in the position, please contact Jeff Derstadt. I don’t know anything more about the job than what I posted here in this blog post.

I do know that I’ve worked with people on that team as partners in various capacities and they’re smart folks doing good work. I’m sure my team and your team (should you get the job) would end up working closely together. Smile

mvc, code comments edit

I’m in the beautiful country of Brazil right now (I’ll hopefully blog more about that later) proctoring for the hands-on labs that are part of the Web Camps agenda.

However, the folks here are keeping me on my toes asking me to give impromptu and deeply advanced demos. It almost feels like a form of performance art as I create brand new demos on the fly. Smile

During this time, several people reported issues binding to a decimal value that prompted me to write a new demo and this blog post.

Let’s look at the scenario. Suppose you have the following class (Jogador is a soccer player in Portuguese):

public class Jogador {
    public int ID { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }
}

And you have two controller actions, one that renders a form used to create a Jogador and another action method that receives the POST request.

public ActionResult Create() {
  // Code inside here is not important
  return View();
}

[HttpPost]
public ActionResult Create(Jogador player) {
  // Code inside here is not important
  return View();
}

When you type in a value such as 1234567.55 into the Salary field and try to post it, it works fine. But typically, you would want to type it like 1,234,567.55 (or here in Brazil, you would type it as 1.234.567,55).

In that case, the DefaultModelBinder chokes on the value. This is unfortunate because jQuery Validate allows that value just fine. I’ll talk to the rest of my team about whether we should fix this in the next version of ASP.NET MVC, but for now it’s good to know there’s a workaround.

In general, we recommend folks don’t write custom model binders because they’re difficult to get right and they’re rarely needed. The issue I’m discussing in this post might be one of those cases where it’s warranted.

Here’s the code for my DecimalModelBinder. I should probably write one for the other numeric types too, but I’m lazy.

WARNING: This is sample code! I haven’t tried to optimize it or test all scenarios. I know it works for direct decimal arguments to action methods as well as decimal properties when binding to complex objects.

using System;
using System.Globalization;
using System.Web.Mvc;

public class DecimalModelBinder : IModelBinder {
    public object BindModel(ControllerContext controllerContext, 
        ModelBindingContext bindingContext) {
        ValueProviderResult valueResult = bindingContext.ValueProvider
            .GetValue(bindingContext.ModelName);
        ModelState modelState = new ModelState { Value = valueResult };
        object actualValue = null;
        try {
            actualValue = Convert.ToDecimal(valueResult.AttemptedValue, 
                CultureInfo.CurrentCulture);
        }
        catch (FormatException e) {
            modelState.Errors.Add(e);
        }

        bindingContext.ModelState.Add(bindingContext.ModelName, modelState);
        return actualValue;
    }
}

With this in place, you can easily register this in Application_Start within Global.asax.cs.

protected void Application_Start() {
    ModelBinders.Binders.Add(typeof(decimal), new DecimalModelBinder());

    // All that other stuff you usually put in here...
}

That registers our model binder to only be applied to decimal types, which is good since we wouldn’t want model binding to try and use this model binder when binding any other type.

With this in place, the Salary field will now accept both 1234567.55 and 1,234,567.55.

Hope you find this useful. I’ve had a great time in Buenos Aires, Argentina and São Paulo, Brazil. I’ll probably be swamped when I get back home, but I’ll try to make time to write about my time here.

mvc comments edit

Layouts in Razor serve the same purpose as Master Pages do in Web Forms. They allow you to specify a layout for your site and carve out some placeholder sections for your views to implement.

For example, here’s a simple layout with a main body section and a footer section.

<!DOCTYPE html>
<html>
<head><title>Sample Layout</title></head>
<body>
    @RenderBody()
    <footer>@RenderSection("Footer")</footer>
</body>
</html>

In order to use this layout, your view might look like this:

@{
    Layout = "MyLayout.cshtml";
}
<h1>Main Content!</h1>
@section Footer {
    This is the footer.
}

Notice we use the @section syntax to specify the contents for the defined Footer section.

But what if we have other views that don’t specify content for the Footer section? They’ll throw an exception stating that the “Footer” section wasn’t defined.

To make a section optional, we need to call an overload of RenderSection and specify false for the required parameter.

<!DOCTYPE html>
<html>
<head><title>Sample Layout</title></head>
<body>
    @RenderBody()
    <footer>@RenderSection("Footer", false)</footer>
</body>
</html>

But wouldn’t it be nicer if we could define some default content in the case that the section isn’t defined in the view?

Well here’s one way. It’s a bit ugly, but it works.

<footer>
  @if (IsSectionDefined("Footer")) {
      @RenderSection("Footer")
  }
  else { 
      <span>This is the default yo!</span>   
  }
</footer>

That’s some ugly code. If only there were a way to write a version of RenderSection that could accept some Razor markup as a parameter to the method.

Templated Razor Delegates to the rescue! See, I told you these things would come in handy.

We can write an extension method on WebPageBase that encapsulates this bit of ugly boilerplate code. Here’s the implementation.

public static class Helpers {
  public static HelperResult RenderSection(this WebPageBase webPage, 
      string name, Func<dynamic, HelperResult> defaultContents) {
    if (webPage.IsSectionDefined(name)) {
      return webPage.RenderSection(name);
    }
    return defaultContents(null);
  }
}

What’s more interesting than this code is how we can use it now. My Layout now can do the following to define the Footer section:

  @this.RenderSection("Footer", @<span>This is the default!</span>)

That’s much cleaner! But we can do even better. Notice that ugly this keyword? It’s necessary because when you write an extension method on the current class, you have to call it using the this keyword.

Remember when I wrote about how to change the base type of a Razor view? Here’s a case where that really comes in handy.

What we can do is write our own custom base page type (such as the CustomWebViewPage class I used in that blog post) and add the RenderSection method above as an instance method on that class. I’ll leave this as an exercise for the reader.

The end result will let you do the following:

  @RenderSection("Footer", @<span>This is the default!</span>)

Pretty slick!

You might be wondering why we didn’t just include this feature in Razor. My guess is that we wanted to but just ran out of time. Hopefully this will make it into the next version of Razor.

mvc, code, razor

David Fowler turned me on to a really cool feature of Razor I hadn’t realized made it into 1.0, Templated Razor Delegates. What’s that? I’ll let the code do the speaking.

@{
  Func<dynamic, object> b = @<strong>@item</strong>;
}
<span>This sentence is @b("In Bold").</span>

That could come in handy if you have friends who’ll jump on your case for using the bold tag instead of the strong tag because it’s “not semantic”. Yeah, I’m looking at you Damian :stuck_out_tongue:  I mean, don’t both words signify being forceful? I digress.

Note that the delegate that’s generated is a Func<T, HelperResult>. Also, the @item parameter is a special magic parameter. These delegates are only allowed one such parameter, but the template can call into that parameter as many times as it needs.
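For example (a contrived sketch of my own, not from the original post), the template body can reference @item more than once:

```razor
@{
    Func<dynamic, object> shout = @<span>@item... @item!</span>;
}
<div>@shout("Echo")</div>
```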

The example I showed is pretty trivial. I know what you’re thinking. Why not use a helper? Show me an example where this is really useful. Ok, you got it!

Suppose I wrote this really cool HTML helper method for generating any kind of list.

public static class RazorExtensions {
    public static HelperResult List<T>(this IEnumerable<T> items, 
      Func<T, HelperResult> template) {
        return new HelperResult(writer => {
            foreach (var item in items) {
                template(item).WriteTo(writer);
            }
        });
    }
}
This List method accepts a templated Razor delegate, so we can call it like so.

@{
  var items = new[] { "one", "two", "three" };
}
<ul>
  @items.List(@<li>@item</li>)
</ul>


As I mentioned earlier, notice that the argument to this method, @<li>@item</li>, is automatically converted into a Func<dynamic, HelperResult>, which is what our method requires.

Now this List method is very reusable. Let’s use it to generate a table of comic books.

@{
    var comics = new[] { 
        new ComicBook {Title = "Groo", Publisher = "Dark Horse Comics"},
        new ComicBook {Title = "Spiderman", Publisher = "Marvel"}
    };
}
<table>
  @comics.List(@<tr><td>@item.Title</td><td>@item.Publisher</td></tr>)
</table>

This feature was originally implemented to support the WebGrid helper method, but I’m sure you’ll think of other creative ways to take advantage of it.

If you’re interested in how this feature works under the covers, check out this blog post by Andrew Nurse.

nuget, mvc, code

Renaming a package ID is a potentially destructive action and one we don’t recommend doing. Why? Well if any other packages depend on your package, you’ve effectively broken them if you change your package ID.

For example, today I wanted to rename a poorly named package, MicrosoftWebMvc, to Mvc2Futures. What I ended up doing is recreating the same package with the new ID and uploading it. That way existing packages that depend on MicrosoftWebMvc aren’t broken.
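To see why a rename breaks dependents, recall that a dependency in a .nuspec file references the package by its ID (the version here is illustrative):

```xml
<dependencies>
  <!-- If the MicrosoftWebMvc ID disappears, this dependency can no longer resolve. -->
  <dependency id="MicrosoftWebMvc" version="2.0" />
</dependencies>
```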

But now, I have two packages that have the same functionality, but different IDs. Wouldn’t it be nice to eventually remove the old one? I guess I could if I knew that no other package had a dependency on it.

This is where the benefit of having an OData service over the packages in the gallery comes in quite useful. It allows us to construct ad-hoc queries we hadn’t accounted for in our API via an URL. Here’s the URL that shows me a list of all packages that depend on MicrosoftWebMvc:

$filter=substringof('MicrosoftWebMvc',%20Dependencies)%20eq%20true&$select=Id,Dependencies

Notice that we’re searching the Dependencies node for the substring “MicrosoftWebMvc” anywhere in it. If my package ID was “web”, this would not be a good query to run, so you might need to tweak it for your use case.

Also, this query only detects direct dependencies. It doesn’t detect transitive dependencies. However, in this case, that’s good enough for my needs.

With this list in hand, I can now approach the MvcContrib folks (who are the only ones that depend on it), and suggest they update their existing packages in place to point to the one with the new ID.

If they do this, am I safe to delete MicrosoftWebMvc?

Not necessarily.

I really need to think twice before I remove the MicrosoftWebMvc package because it’s already been downloaded 939 times. For those users who’ve installed it into their applications, they’ll never get updates for that package.

In this particular case, this is not a problem because we never plan to update the Mvc2Futures package. But for a package that’s more widely used and frequently updated, this would be a bigger concern.

In the meanwhile, what I will do is update MicrosoftWebMvc to be an empty package that depends on the correct package. That’s probably a good plan while I wait for packages that depend on it to update.

mvc, razor

Within a Razor view, you have access to a base set of properties (such as Html, Url, Ajax, etc.) each of which provides methods you can use within the view.

For example, in the following view, we use the Html property to access the TextBox method.
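A minimal sketch of such a view (the model type and property name are hypothetical, for illustration only):

```razor
@model MyStore.Product
<p>
    @Html.TextBox("Name")
</p>
```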


Html is a property of type HtmlHelper, and there are a large number of useful extension methods that hang off this type, such as TextBox.

But where did the Html property come from? It’s a property of System.Web.Mvc.WebViewPage, the default base type for all Razor views. If that last phrase doesn’t make sense to you, let me explain.

Unlike many templating engines or interpreted view engines, Razor views are dynamically compiled at runtime into a class and then executed. The class that they’re compiled into derives from WebViewPage. For long time ASP.NET users, this shouldn’t come as a surprise because this is how ASP.NET pages work as well.

Customizing the Base Class

HTML 5 (or is it simply “HTML” now?) is a big topic these days. It’d be nice to write a set of HTML 5 specific helper extension methods, but you’d probably like to avoid adding even more extension methods to the HtmlHelper class because it’s already getting a little crowded in there.


Perhaps what we need is a new property we can access from within Razor. So how do we do that?

What we need to do is change the base type for all Razor views to something we control. Fortunately, that’s pretty easy. When you create a new ASP.NET MVC 3 project, you might have noticed that the Views directory contains a Web.config file.

Look inside that file and you’ll notice the following snippet of XML.

<system.web.webPages.razor>
  <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, 
      System.Web.Mvc, Version=, 
      Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
    </namespaces>
  </pages>
</system.web.webPages.razor>

The thing to notice is the <pages> element, which has a pageBaseType attribute. The value of that attribute specifies the base page type for all Razor views in your application. You can change it by simply replacing that value with your custom class. While it’s not strictly required, the easiest approach is to write a class that derives from WebViewPage.
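For example, if your derived class is MyApp.CustomWebViewPage (the namespace here is hypothetical), the change is a single attribute value:

```xml
<!-- was: pageBaseType="System.Web.Mvc.WebViewPage" -->
<pages pageBaseType="MyApp.CustomWebViewPage">
  <namespaces>
    <add namespace="System.Web.Mvc" />
    <add namespace="System.Web.Mvc.Ajax" />
    <add namespace="System.Web.Mvc.Html" />
    <add namespace="System.Web.Routing" />
  </namespaces>
</pages>
```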

Let’s look at a simple example of this.

public abstract class CustomWebViewPage : WebViewPage {
  public Html5Helper Html5 { get; set; }

  public override void InitHelpers() {
    base.InitHelpers();
    Html5 = new Html5Helper(base.ViewContext, this);
  }
}

Note that our custom class derives from WebViewPage, but adds a new Html5 property of type Html5Helper. I’ll show the code for that helper here. In this case, it pretty much follows the pattern that HtmlHelper does. I’ve left out some properties for brevity, but at this point, you can add whatever you want to this class.

public class Html5Helper {
  public Html5Helper(ViewContext viewContext, 
    IViewDataContainer viewDataContainer)
    : this(viewContext, viewDataContainer, RouteTable.Routes) {
  }

  public Html5Helper(ViewContext viewContext,
     IViewDataContainer viewDataContainer, RouteCollection routeCollection) {
    ViewContext = viewContext;
    ViewData = new ViewDataDictionary(viewDataContainer.ViewData);
  }

  public ViewDataDictionary ViewData {
    get;
    private set;
  }

  public ViewContext ViewContext {
    get;
    private set;
  }
}

Let’s write a simple extension method that takes advantage of this new property first, so we can get the benefits of all this work.

public static class Html5Extensions {
    public static IHtmlString EmailInput(this Html5Helper html, 
        string name, string value) {
        var tagBuilder = new TagBuilder("input");
        tagBuilder.Attributes.Add("type", "email");
        tagBuilder.Attributes.Add("name", name);
        tagBuilder.Attributes.Add("value", value);
        return new HtmlString(tagBuilder.ToString(TagRenderMode.SelfClosing));
    }
}

Now, if we change the pageBaseType to CustomWebViewPage, we can recompile the application and start using the new property within our Razor views.
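For example, a view can now write (the field name is illustrative):

```razor
<p>
    Email: @Html5.EmailInput("email", "")
</p>
```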


Nice! We can now start using our new helpers. Note that if you try this and don’t see your new property in Intellisense right away, try closing and re-opening Visual Studio.

What about Strongly Typed Views

What if I have a Razor view that specifies a strongly typed model like so:

@model Product
@{
    ViewBag.Title = "Home Page";
}


The base class we wrote wasn’t a generic class so how’s this going to work? Not to worry. This is the part of Razor that’s pretty cool. We can simply write a generic version of our class and Razor will inject the model type into that class when it compiles the razor code.

In this case, we’ll need a generic version of both our CustomWebViewPage and our Html5Helper classes. I’ll follow a similar pattern implemented by HtmlHelper<T> and WebViewPage<T>.

public abstract class CustomWebViewPage<TModel> : CustomWebViewPage {
  public new Html5Helper<TModel> Html5 { get; set; }

  public override void InitHelpers() {
    base.InitHelpers();
    Html5 = new Html5Helper<TModel>(base.ViewContext, this);
  }
}

public class Html5Helper<TModel> : Html5Helper {
  public Html5Helper(ViewContext viewContext, IViewDataContainer container)
    : this(viewContext, container, RouteTable.Routes) {
  }

  public Html5Helper(ViewContext viewContext, IViewDataContainer container, 
      RouteCollection routeCollection) : base(viewContext, container,
      routeCollection) {
    ViewData = new ViewDataDictionary<TModel>(container.ViewData);
  }

  public new ViewDataDictionary<TModel> ViewData {
    get;
    private set;
  }
}

Now you can write extension methods of Html5Helper<TModel> which will have access to the model type much like HtmlHelper<TModel> does.
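As a hedged sketch (the method name and attribute handling are my own illustration, not part of MVC), such a model-aware helper might look like:

```csharp
public static class Html5TypedExtensions {
    // Illustrative: renders an <input type="email" /> bound to a field name.
    public static IHtmlString EmailInputFor<TModel>(
        this Html5Helper<TModel> html, string name) {
        var tagBuilder = new TagBuilder("input");
        tagBuilder.Attributes.Add("type", "email");
        tagBuilder.Attributes.Add("name", name);
        // html.ViewData is a ViewDataDictionary<TModel> here, so the helper
        // could pull a default value from the strongly typed model if needed.
        return new HtmlString(tagBuilder.ToString(TagRenderMode.SelfClosing));
    }
}
```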

As usual, if there’s a change you want to make, there’s probably an extensibility point in ASP.NET MVC that’ll let you make it. The tricky part of course, in some cases, is finding the correct point.


It pains me to say it, but ASP.NET MVC 3 introduces a minor regression in routing from ASP.NET MVC 2. The good news is that there’s an easy workaround.

The bug manifests when you have a route with two consecutive optional URL parameters and you attempt to use the route to generate an URL. The incoming request matching behavior is unchanged and continues to work fine.

For example, suppose you have the following route defined:

    routes.MapRoute("by-day", "archive/{month}/{day}",
        new { controller = "Home", action = "Index", 
            month = UrlParameter.Optional, day = UrlParameter.Optional }
    );

Notice that the month and day parameters are both optional.


Now suppose you have the following view code to generate URLs using this route.

@Url.RouteUrl("by-day", new { month = 1, day = 23 })
@Url.RouteUrl("by-day", new { month = 1 })
@Url.RouteUrl("by-day", null)

In ASP.NET MVC 2 the above code (well actually, the equivalent to the above code since Razor didn’t exist in ASP.NET MVC 2) would result in the following URLs as you would expect:

  • /archive/1/23
  • /archive/1
  • /archive

But in ASP.NET MVC 3, you get:

  • /archive/1/23
  • /archive/1

In the last case, the value returned is null because of this bug. The bug occurs when two or more consecutive optional URL parameters don’t have values specified for URL generation.

Let’s look at the workaround first, then we’ll dive deeper into why this bug occurs.

The Workaround

The workaround is simple. Change the existing route so it doesn’t have any optional parameters by removing the default values for month and day. This route now handles only the first URL, where both month and day are specified.

We then add a new route for the other two cases, but this route only has one optional month parameter.

Here are the two routes after we’re done with these changes.

    routes.MapRoute("by-day", "archive/{month}/{day}",
        new { controller = "Home", action = "Index" }
    );

    routes.MapRoute("by-month", "archive/{month}",
        new { controller = "Home", action = "Index", 
            month = UrlParameter.Optional }
    );

And now, we need to change the last two calls to generate URLs to use the by-month route.

@Url.RouteUrl("by-day", new { month = 1, day = 23 })
@Url.RouteUrl("by-month", new { month = 1 })
@Url.RouteUrl("by-month", null)

Just to be clear, this bug affects all the URL generation methods in ASP.NET MVC. So if you were generating action links like so:

@Html.ActionLink("sample", "Index", "Home", new { month = 1, day = 23 }, null)
@Html.ActionLink("sample", "Index", "Home", new { month = 1}, null)
@Html.ActionLink("sample", "Index", "Home")

The last one would be broken without the workaround just provided.

The workaround is not too bad if you happen to follow the practice of centralizing your URL generation. For example, the developers building ran into this problem as well during the upgrade to ASP.NET MVC 3. But rather than having calls to ActionLink all over their views, they have calls to methods that are specific to their application domain, such as ForumDetailUrl. This allowed them to work around the issue by updating a single method.

The Root Cause

For the insanely curious, let’s look at the root cause of this bug. Going back to the original route defined at the top of this post, we never tried generating an URL where only the second optional parameter was specified.

@Url.RouteUrl("by-day", new { day = 23 })

This call really should fail because we didn’t specify a value for the first optional parameter, month. If it’s not clear why it should fail, suppose we allowed this to succeed: what URL would it generate? /archive/23? That’s obviously not correct, because when a request is made for that URL, 23 will be interpreted as the month, not the day.

In ASP.NET MVC 2, if you made this call, you ended up with /archive/System.Web.Mvc.UrlParameter/23. UrlParameter.Optional is a class introduced by ASP.NET MVC 2 which ships on its own schedule outside of the core ASP.NET Framework. What that means is we added this new class which provided this new behavior in ASP.NET MVC, but core routing didn’t know about it.

The way we fixed this in ASP.NET MVC 3 was to make the ToString method of UrlParameter.Optional return an empty string. That solved this bug, but uncovered a bug in core routing where a route with optional parameters having default values behaves incorrectly when two of them don’t have values specified during URL generation. Sound familiar?
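A simplified sketch (my reconstruction of the behavior described above, not the actual MVC source):

```csharp
public sealed class UrlParameter {
    public static readonly UrlParameter Optional = new UrlParameter();

    private UrlParameter() { }

    // MVC 2 inherited Object.ToString(), which yielded
    // "System.Web.Mvc.UrlParameter" in generated URLs.
    // MVC 3 returns an empty string instead.
    public override string ToString() {
        return string.Empty;
    }
}
```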

In hindsight, I think it was a mistake to take this fix because it caused a regression for many applications that had worked around the bug. The bug was found very late in our ship cycle, and this is just one of those challenging decisions you make when building software that sometimes don’t work out the way you hoped or expected. All we can do is learn from it and let the experience factor into the next time we face such a dilemma.

The good news is we have bugs logged against this behavior in core ASP.NET Routing so hopefully this will all get resolved in the next core .NET framework release.