Tags: mvc, nuget, code

Sometimes, despite your best efforts, you encounter a problem with your ASP.NET MVC application that seems impossible to figure out and makes you want to pull out your hair. Or worse, it makes you want to pull out my hair. In some of those situations, it ends up being a PEBKAC issue, but in the interest of avoiding physical harm, I try not to point that out.


Thankfully, in the interest of saving my hair, Brad Wilson (recently featured on This Developer’s Life!) wrote a simple diagnostics web page for ASP.NET MVC that you can drop into any ASP.NET MVC application. When you visit the page in your browser, it provides diagnostics information that can help discover potential problems with your ASP.NET application.

To make it as easy as possible to use it, I created a NuGet package named “MvcDiagnostics”. If you’re not familiar with NuGet, check out my announcement of NuGet as well as our Getting Started guide written by Tim Teebken.

With NuGet, you can use the Add Package Library Dialog to install MvcDiagnostics. Simply type in “MVC” in the search dialog to filter the online entries. Then locate the MvcDiagnostics entry and click “Install”.


Or you can use the Package Manager Console and simply type:

install-package MvcDiagnostics

Either way, this will add the MvcDiagnostics.aspx page to the root of your web application.


You can then visit the page with your browser to get diagnostics information.


With NuGet, it’s much easier to make use of this diagnostics page. Hopefully you’ll rarely need to use it, but it’s nice to know it’s there. Let us know if you have ways to improve the diagnostics page.

Tags: mvc, code

UPDATE: 2011/02/13: This code is now included in the RouteMagic NuGet package! To use this code, simply run Install-Package RouteMagic within the NuGet Package Manager Console.

One thing ASP.NET Routing doesn’t support is washing and detailing my car. I really pushed for that feature, but my coworkers felt it was out of scope. Kill joys.

Another thing Routing doesn’t support out of the box is a way to group a set of routes within another route. For example, suppose I want a set of routes to all live under the same URL path. Today, I’d need to make sure all the routes started with the same URL segment. For example, here’s a set of routes that all live under the “/blog” URL path.

RouteTable.Routes.MapRoute("r1", "blog/posts");
RouteTable.Routes.MapRoute("r2", "blog/posts/{id}");
RouteTable.Routes.MapRoute("r3", "blog/archives");

If I decide I want all these routes to live under something other than “blog” such as in the root or under a completely different name such as “archives”, I have to change each route. Not such a big deal with only three routes, but with a large system with multiple groupings, this can be a hassle.

I suppose one easy way to solve this is to do the following:

string baseUrl = "blog/";
RouteTable.Routes.MapRoute("r1", baseUrl + "posts");
RouteTable.Routes.MapRoute("r2", baseUrl + "posts/{id}");
RouteTable.Routes.MapRoute("r3", baseUrl + "archives");

Bam! Done! Call it a night, Frank.

This is actually a very simple and great solution to the problem I stated. In fact, it probably works better than the alternative I’m about to show you. If this works so well, why am I showing you the alternative?

Well, there’s something unsatisfying about that answer. Suppose a request comes in for /not-blog. Every one of those routes is going to be evaluated even though we already know none of them will match. If we could group them, we could reduce all of those evaluations to a single check. Also, it’s just not as much fun as what I’m about to show you.

What I would like to be able to do is the following.

var blogRoutes = new RouteCollection();
blogRoutes.MapRoute("r1", "posts");
blogRoutes.MapRoute("r2", "posts/{id}");
blogRoutes.MapRoute("r3", "archives");

RouteTable.Routes.Add("blog-routes", new GroupRoute("~/blog", blogRoutes));

In this code snippet, I’ve declared a set of routes and added them to a proposed GroupRoute instance. That group route is then added to the route table. Note that the child routes are not themselves added to the route table and they have no idea what parent path they’ll end up responding to.

With this proposed route, these child routes would then handle requests to /blog/posts and /blog/archives. But if I decide to place them under a different path, I only need to change a single route, the group route; I don’t need to change each child route.


In this section, I’ll describe the implementation of such a group route in broad brush strokes. The goal here is to provide an under the hood look at how routing works and how it can be extended.

Implementing such a grouping route is not trivial. Routes generally work directly off of the current http request to determine whether or not they match.

By themselves, those child routes I defined earlier would not match a request for /blog/posts. Note that the URLs for the child routes don’t start with “blog”. Fortunately though, the request that is supplied to each route is an instance of HttpRequestBase, an abstract base class.

What this means is we can muck around with the request and even change it so that the child routes don’t even know the actual request starts with /blog. That way, when a request comes in for /blog/posts, the group route matches it, but then rewrites the request for its child routes so that they think they’re trying to match /posts.

Please note that what I’m about to show you here is based on internal knowledge of routing and is unsupported and may cause you to lose hair, get a rash, and suffer much grief if you depend on it. Use this approach at your own risk.

The first thing I did was implement my own wrapper classes for the http context class.

public class ChildHttpContextWrapper : HttpContextBase {
  private HttpContextBase _httpContext;
  private HttpRequestBase _request;

  public ChildHttpContextWrapper(HttpContextBase httpContext, 
      string parentVirtualPath, string parentPath) {
    _httpContext = httpContext;
    _request = new ChildHttpRequestWrapper(httpContext.Request, 
      parentVirtualPath, parentPath);
  }

  public override HttpRequestBase Request {
    get {
      return _request;
    }
  }

  // ... All other properties/methods delegate to _httpContext
}

Note that all this does is delegate every method and property to the supplied HttpContextBase instance that it wraps except for the Request property, which returns an instance of my next wrapper class.

public class ChildHttpRequestWrapper : HttpRequestBase {
  private HttpRequestBase _httpRequest;
  private string _path;
  private string _appRelativeCurrentExecutionFilePath;

  public ChildHttpRequestWrapper(HttpRequestBase httpRequest, 
      string parentVirtualPath, string parentPath) {
    if (!parentVirtualPath.StartsWith("~/")) {
      throw new InvalidOperationException(
        "parentVirtualPath must start with ~/");
    }

    if (!httpRequest.AppRelativeCurrentExecutionFilePath
        .StartsWith(parentVirtualPath, StringComparison.OrdinalIgnoreCase)) {
      throw new InvalidOperationException(
        "This request is not valid for the current path.");
    }

    // Strip the parent path off the request so child routes see a
    // request relative to the group route's virtual path.
    _path = httpRequest.Path.Remove(0, parentPath.Length);
    _appRelativeCurrentExecutionFilePath = httpRequest
      .AppRelativeCurrentExecutionFilePath.Remove(1, parentVirtualPath.Length - 1);
    _httpRequest = httpRequest;
  }

  public override string Path { get { return _path; } }

  public override string AppRelativeCurrentExecutionFilePath {
    get { return _appRelativeCurrentExecutionFilePath; }
  }

  // ... all other properties/methods delegate to _httpRequest
}

What this child request does is strip off the portion of the request path that corresponds to its parent’s virtual path. That’s the “~/blog” part supplied by the group route.

It then makes sure that the Path and the AppRelativeCurrentExecutionFilePath properties return this updated URL. Current implementations of routing look at these two properties when matching an incoming request. However, that’s an internal implementation detail of routing that could change, hence my admonition earlier that this is voodoo magic.
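To make that concrete, here’s what the wrapper computes for a request into the /blog group from the earlier example; the values follow directly from the Remove calls above:

// Request for /blog/posts, with parentVirtualPath "~/blog" and parentPath "/blog":
//   Path:                                "/blog/posts"  becomes "/posts"
//   AppRelativeCurrentExecutionFilePath: "~/blog/posts" becomes "~/posts"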

The implementation of request matching for GroupRoute is fairly straightforward then.

public override RouteData GetRouteData(HttpContextBase httpContext) {
  if (!httpContext.Request.AppRelativeCurrentExecutionFilePath
      .StartsWith(VirtualPath, StringComparison.OrdinalIgnoreCase)) {
    return null;
  }

  HttpContextBase childHttpContext = VirtualPath != ApplicationRootPath ? 
    new ChildHttpContextWrapper(httpContext, VirtualPath, _path) : null;

  return ChildRoutes.GetRouteData(childHttpContext ?? httpContext);
}

All we do here is make sure that the group route matches the current request. If so, we create a child http context which, as we saw earlier, looks just like the current http context except that the /blog portion of the request is stripped off. We then pass that to our internal route collection to see if any child route matches. If so, we return the route data from that match and we’re done.

In Part 2 of this series, we’ll look at implementing URL generation. That’s where things get really tricky.

Tags: mvc, code

A question I often receive via my blog and email goes like this:

Hi, I just got an email from a Nigerian prince asking me to hold some money in a bank account for him after which I’ll get a cut. Is this a scam?

The answer is yes. But that’s not the question I wanted to write about. Rather, a question that I often see on StackOverflow and our ASP.NET MVC forums is more interesting to me and it goes something like this:

How do I get the route name for the current route?

My answer is “You can’t”. Bam! End of blog post, short and sweet.

Joking aside, I admit that’s not a satisfying answer and ending it there wouldn’t make for much of a blog post. Not that continuing to expound on this question necessarily will make a good blog post, but expound I will.

It’s not possible to get the route name of the route because the name is not a property of the Route. When adding a route to a RouteCollection, the name is used as an internal unique index for the route so that lookup for the route is extremely fast. This index is never exposed.
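Note that the lookup only goes one way. You can retrieve a route from a collection by its name, but there’s nothing on the route itself to tell you what name it was registered under:

// Lookup by name works because of that internal index...
RouteBase route = RouteTable.Routes["Default"];
// ...but RouteBase has no Name property, so there's no way back
// from the route instance to "Default".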

The reason why the route name can’t be a property becomes more apparent when you consider that it’s possible to add a route to multiple route collections.

var routeCollection1 = new RouteCollection();
var routeCollection2 = new RouteCollection();

var route = new Route("{controller}/{action}", new MvcRouteHandler());

routeCollection1.Add("route-name1", route);
routeCollection2.Add("route-name2", route);

So in this example, we add the same route to two different route collections using two different route names. So we can’t really talk about the name of the route here, because what would it be? Would it be “route-name1” or “route-name2”? I call this the “Route Name Uncertainty Principle” but trust me, I’m alone in this.

Some of you might be thinking that ASP.NET Routing didn’t have to be designed this way. I address that at the end of this blog post. For now, this is the world we live in, so let’s deal with it.

Let’s do it anyways

I’m not one to let logic and an irrefutable mathematical proof stand in the way of me and getting what I want. I want a route’s name, and golly gee wilickers, I’m damn well going to get it.

After all, while in theory I can add a route to multiple route collections, I rarely do that in real life. If I promise to behave and not do that, maybe I can have my route name with my route. How do we accomplish this?

It’s simple really. When we add a route to the route collection, we need to tell the route what the route name is so it can store it in its DataTokens dictionary property. That’s exactly what that property of Route was designed for. Well not for storing the name of the route, but for storing additional metadata about the route that doesn’t affect route matching or URL generation. Any time you need some information stored with a route so that you can retrieve it later, DataTokens is the way to do it.

I wrote some simple extension methods for setting and retrieving the name of a route.

public static string GetRouteName(this Route route) {
    if (route == null) {
        return null;
    }
    return route.DataTokens.GetRouteName();
}

public static string GetRouteName(this RouteData routeData) {
    if (routeData == null) {
        return null;
    }
    return routeData.DataTokens.GetRouteName();
}

public static string GetRouteName(this RouteValueDictionary routeValues) {
    if (routeValues == null) {
        return null;
    }
    object routeName = null;
    routeValues.TryGetValue("__RouteName", out routeName);
    return routeName as string;
}

public static Route SetRouteName(this Route route, string routeName) {
    if (route == null) {
        throw new ArgumentNullException("route");
    }
    if (route.DataTokens == null) {
        route.DataTokens = new RouteValueDictionary();
    }
    route.DataTokens["__RouteName"] = routeName;
    return route;
}

Yeah, besides changing diapers, this is what I do on the weekends. Pretty sad isn’t it?

So now, when I register routes, I just need to remember to call SetRouteName.

routes.MapRoute("rName", "{controller}/{action}").SetRouteName("rName");

BTW, did you know that MapRoute returns a Route? Well now you do. I think we made that change in v2 after I begged for it like a little toddler. But I digress.

Like eating a Turducken, that code doesn’t sit well with me. We’re repeating the route name here, which is prone to error. Ideally, MapRoute would do it for us, but it doesn’t. So we need some new and improved extension methods for mapping routes.

public static Route Map(this RouteCollection routes, string name, 
    string url) {
  return routes.Map(name, url, null, null, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults) {
  return routes.Map(name, url, defaults, null, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults, object constraints) {
  return routes.Map(name, url, defaults, constraints, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults, object constraints, string[] namespaces) {
  return routes.MapRoute(name, url, defaults, constraints, namespaces)
    .SetRouteName(name);
}

These methods correspond to some (but not all, because I’m lazy) of the MapRoute extension methods in the System.Web.Mvc namespace. I called them Map simply because I didn’t want to conflict with the existing MapRoute extension methods.

With this set of methods, I can easily create routes for which I can retrieve the route name.

var route = routes.Map("rName", "url");

// within a controller
string routeName = RouteData.GetRouteName();

With these methods, you can now grab the route name from the route should you need it.

Of course, one question to ask yourself is why do you need to know the route name in the first place? Many times, when people ask this question, what they really are doing is making the route name do double duty. They want it to act as an index for route lookup as well as be a label applied to the route so they can take some custom action based on the name.

In this second case though, the “label” doesn’t have to be the route name. It could be anything stored in data tokens. In a future blog post, I’ll show you an example of a situation where I really do need to know the route name.
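In the meantime, here’s a minimal sketch of storing an arbitrary label in DataTokens; the “category” key and its value are just made-up examples:

var route = routes.MapRoute("admin-home", "admin/{action}");
route.DataTokens = route.DataTokens ?? new RouteValueDictionary();
route.DataTokens["category"] = "admin";

// Later, wherever you have a RouteData in hand (DataTokens are copied
// to the RouteData when the route matches):
string category = routeData.DataTokens["category"] as string;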

Alternate Design Aside

As an aside, why is routing designed this way? I wasn’t there when this particular decision was made, but I believe it has to do with performance and safety. With the current API, once a route has been added to a route collection with a name, the route collection can safely use that name internally as a dictionary key for the route, knowing full well that the name cannot change.

But imagine instead that RouteBase (the base class for all routes) had a Name property and the RouteCollection.Add method used that as the key for route lookup. Well it’s quite possible that the value of the route’s name could change for some reason due to a poor implementation. In that case, the index would be out of sync with the route’s name.

While I agree that the current design is safer, in retrospect I doubt many would screw up a read-only Name property which should never change. We could have documented that the contract for the Name property of Route is that it must never change during the lifetime of the route. But then again, who reads the documentation? After all, I offered $1,000 to the first person who emailed me a hidden message embedded in our ASP.NET MVC 3 release notes and haven’t received one email yet. Also, you’d be surprised how many people screw up GetHashCode(), which effectively would have served the same purpose as a route’s Name property.

And by the way, there are no hidden messages in the release notes. Did I make you look?

Tags: tdd, code

A while back I wrote about mocking successive calls to the same method which returns a sequence of objects. Read that post for more context.

In that post, I had written up an implementation, but quickly was won over by a better extension method implementation from Fredrik Kalseth.

public static class MoqExtensions {
  public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup, 
      params TResult[] results) where T : class {
    setup.Returns(new Queue<TResult>(results).Dequeue);
  }
}

As good as this extension method is, I was able to improve on it today during a coding session. I was writing some code where I needed the second call to the same method to throw an exception and realized this extension wouldn’t allow for that.

However, it wasn’t hard to write an overload that allows for that.

public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup,
    params object[] results) where T : class {
  // Non-generic Queue (System.Collections) so it can hold both
  // TResult values and Exception instances.
  var queue = new Queue(results);
  setup.Returns(() => {
    var result = queue.Dequeue();
    if (result is Exception) {
      throw result as Exception;
    }
    return (TResult)result;
  });
}

So rather than taking a parameter array of TResult, this overload accepts an array of object instances.

Within the method, we create a non generic Queue and then create a lambda that captures that queue in a closure. The lambda is passed to the Returns method so that it’s called every time the mocked method is called, returning the next item in the queue.

Here’s an example of the method in action:

var mock = new Mock<ISomeInterface>();
mock.Setup(r => r.GetNext())
    .ReturnsInOrder(1, 2, new InvalidOperationException());

Console.WriteLine(mock.Object.GetNext()); // prints 1
Console.WriteLine(mock.Object.GetNext()); // prints 2
Console.WriteLine(mock.Object.GetNext()); // throws InvalidOperationException

In this sample code, I mock an interface so that when its GetNext method is called a third time, it will throw an InvalidOperationException.

I’ve found this to be a useful extension to Moq and hope you find some use for it if you’re using Moq.

NOTE: As Richard Reeves pointed out to me in an email, do be careful if you mock a property using this approach. If you evaluate the property while within a debugger, you will dequeue the element, potentially causing maddening debugging difficulty.

Tags: mvc, code

The beginning of wisdom is to call things by their right names – Chinese Proverb

Routing in ASP.NET doesn’t require that you name your routes, and in many cases it works out great. When you want to generate an URL, you grab this bag of values you have lying around, hand it to the routing engine, and let it sort it all out.


For example, suppose an application has the following two routes defined

    name: "Test",
    url: "code/p/{action}/{id}",
    defaults: new { controller = "Section", action = "Index", id = "" }

    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = "" }

To generate a hyperlink to each route, you’d write the following code.

@Html.RouteLink("Test", new {controller="section", action="Index", id=123})

@Html.RouteLink("Default", new {controller="Home", action="Index", id=123})

Notice that these two method calls don’t specify which route to use to generate the links. They simply supply some route values and let ASP.NET Routing figure it out.

In this example, the first one generates a link to the URL /code/p/Index/123 and the second to /Home/Index/123, which should match your expectations.

This is fine for these simple cases, but there are situations where this can bite you. ASP.NET 4 introduced the ability to use routing to route to a Web Form page.  Let’s suppose I add the following page route at the beginning of my list of routes so that the URL /static/url is handled by the page /aspx/SomePage.aspx.

routes.MapPageRoute("new", "static/url", "~/aspx/SomePage.aspx"); 

Note that I can’t put this route at the end of my list of routes because it would never match incoming requests since /static/url would match the default route. Adding it to the beginning of the list seems like the right thing to do here.

If you’re not using Web Forms, you still might run into a case like this if you use routing with a custom route handler, such as the one I blogged about a while ago (with source code). In that blog post, I showed how to use routing to route to standard IHttpHandler instances.

Seems like an innocent enough change, right? For incoming requests, this route will only match requests that exactly matches /static/url but no others, which is great. But if I look at my page, I’ll find that the two URLs I generated earlier are broken.

Now, the two generated URLs are /static/url?controller=section&action=Index&id=123 and /static/url?controller=Home&action=Index&id=123.


This is running into a subtle behavior of routing which is admittedly somewhat of an edge case, but is something that people run into from time to time. In fact, I had to help Scott Hanselman with such an issue when he was preparing his Metaweblog example for his fantastic PDC talk (HD quality MP4).

Typically, when you generate a URL using routing, the route values you supply are used to “fill in” the URL parameters. In case you don’t remember, URL parameters are those placeholders within a route’s URL with the curly braces such as {controller} and {action}.

So when you have a route with the URL {controller}/{action}/{Id}, you’re expected to supply values for controller, action, and Id when generating a URL. During URL generation, you need to supply a route value for each URL parameter so that an URL can be generated. If every route parameter is supplied with a value, that route is considered a match for the purposes of URL generation. If you supply extra parameters above and beyond the URL parameters for the route, those extra values are appended to the generated URL as query string parameters.
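For example, here’s a sketch of those rules using the default {controller}/{action}/{id} route; the page parameter is just an extra value I made up for illustration:

// Supplies values for controller, action, and id, plus one extra: page.
// The route matches, and the extra value spills into the query string,
// generating: /Home/Index/123?page=2
var url = Url.RouteUrl(new { controller = "Home", action = "Index", id = 123, page = 2 });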

In this case, since the new route I mapped doesn’t have any URL parameters, that route matches every URL generation attempt since technically, “a route value is supplied for each URL parameter.” It just so happens in this case there are no URL parameters. That’s why all my existing URLs are broken because every attempt to generate a URL now matches this new route.

There are even more details I’ve glossed over having to do with how a route’s default values figure into URL generation. That’s a topic for another time, but it explains why you don’t run into this problem with routes to controller actions whose URL has no parameters.

This might seem like a big problem, but the fix is actually very simple. Use names for all your routes and always use the route name when generating URLs. Most of the time, letting routing sort out which route you want to use to generate an URL is really leaving it to chance. When generating an URL, you generally know exactly which route you want to link to, so you might as well specify it by name.

Also, by specifying the name of the route, you avoid ambiguities and may even get a bit of a performance improvement since the routing engine can go directly to the named route and attempt to use it for URL generation.

So in the sample above where I have code to generate the two links, the following change fixes the issue (I changed the code to use named parameters to make it clear what the change was).

    linkText: "route: Test", 
    routeName: "test", 
    routeValues: new {controller="section", action="Index", id=123}

    linkText: "route: Default", 
    routeName: "default", 
    routeValues: new {controller="Home", action="Index", id=123}

People’s fates are simplified by their names.  ~Elias Canetti

And the same goes for routing. Smile

Tags: code, nuget, open source

Note, this blog post applies to v1.0 of NuGet and the details are subject to change in a future version.

In general, when you create a NuGet package, the files that you include in the package are not modified in any way but simply placed in the appropriate location within your solution.

However, there are cases where you may want a file to be modified or transformed in some way during installation. NuGet supports two types of transformations during installation of a package:

  • Config transformations
  • Source transformations

Config Transformations

Config transformations provide a simple way for a package to modify a web.config or app.config when the package is installed. Ideally, this type of transformation would be rare, but it’s very useful when needed.

One example of this is ELMAH (Error Logging Modules and Handlers for ASP.NET). ELMAH requires that its http modules and http handlers be registered in the web.config file.

In order to apply a config transform, add a file to your package’s content with the name of the file you want to transform, followed by a .transform extension. For example, in the ELMAH package, there’s a file named web.config.transform.


The contents of that file look like a web.config (or app.config) file, but it contains only the sections that need to be merged into the config file.

            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
            <add verb="POST,GET,HEAD" path="elmah.axd"              type="Elmah.ErrorLogPageFactory, Elmah" />
        <validation validateIntegratedModeConfiguration="false" />
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
            <add name="Elmah" verb="POST,GET,HEAD" path="elmah.axd"              type="Elmah.ErrorLogPageFactory, Elmah" />

When NuGet sees this transformation file, it attempts to merge in the various sections into your existing web.config file. Let’s look at a simple example.

Suppose this is my existing web.config file.

Existing web.config File

            <add name="MyCoolModule" type="Haack.MyCoolModule" />

Now suppose I want my NuGet package to add an entry into the modules section of config. I’d simply add a file named web.config.transform to my package with the following contents.

web.config.transform File

            <add name="MyNuModule" type="Haack.MyNuModule" />

After I install the package, the web.config file will look like

Resulting web.config File

            <add name="MyCoolModule" type="Haack.MyCoolModule" />
            <add name="MyNuModule" type="Haack.MyNuModule" />

Notice that we didn’t replace the modules section, we merged our entry into the modules section.

I’m currently working on documenting the full set of rules for config transformations, which I will post to our NuGet documentation page once I’m done. I just wanted to give you a taste of what you can do today.

Also, in v1 of NuGet we only support these simple transformations. If we hear a lot of customer feedback that more powerful transformations are needed for their packages, we may consider supporting the more powerful web.config transformation language as an alternative to our simple approach.

Source Transformations

NuGet also supports source code transformations in a manner very similar to Visual Studio project templates. These are useful in cases where your NuGet package includes source code to be added to the developer’s project. For example, you may want to include some source code used to initialize your package library, but you want that code to exist in the target project’s namespace. Source transformations help in this case.

To enable source transformations, simply append the .pp file extension to your source file within your package.

Here’s a screenshot of a package I’m currently authoring.


When installed, this package will add four files to the target project’s ~/Models directory. These files will be transformed and the .pp extension will be removed. Let’s take a look at one of these files.

namespace $rootnamespace$.Models {
    public struct CategoryInfo {
        public string categoryid;
        public string description;
        public string htmlUrl;
        public string rssUrl;
        public string title;
    }
}

Notice the $rootnamespace$ token in the namespace declaration. That’s a Visual Studio project property which gets replaced with the current project’s root namespace during installation.

We expect that $rootnamespace$ will be the most commonly used project property, though we support any project property such as $FileName$. The available properties may be specific to the current project type, but this MSDN documentation on project properties is a good starting point for what might be possible.

Tags: nuget, code

My team has been hard at work the past few weeks cranking out code and today we are releasing the second preview of NuGet (which you may have heard referred to as NuPack in the past, but was renamed for CTP 2 by the community). If you’re not familiar with what NuGet is, please read my introductory blog post on the topic.

For a detailed list of what changed, check out the NuGet Release Notes.

To see NuGet in action, watch the talk Scott Hanselman gave at the Professional Developers Conference, which was the highest rated talk there. You can watch it online or download it in HD.

How do I get it?

There are three ways to get NuGet CTP 2.

Via MVC 3

NuGet CTP 2 is included as part of the ASP.NET MVC 3 Release Candidate installation (install via Web PI or download the standalone installer). So when you install ASP.NET MVC 3 RC, you’ll have NuGet installed.

If you want to try out NuGet without installing ASP.NET MVC 3 RC, feel free to install it via the Visual Studio Extension Gallery.


As with all of our releases, we also make the download available on our CodePlex website.

What’s new?

As the release notes point out, we’ve made a lot of improvements. Some of the big ones are changes to the NuSpec package format, so if you have any old .nupkg files laying around, you’ll need to build them with the new CTP 2 NuGet.exe command line tool.

But to be nice, we already updated all the packages in the temporary feed, which is at a new location now, so you won’t need to do that. But if you’re building new packages, be sure to update your copy of NuGet.exe.

The NuSpec format now includes two new fields you should take advantage of if you are creating packages:

  • The iconUrl field specifies the URL for a 32x32 png icon that shows up next to your package entry within the Add Package Dialog. Be sure to set that to distinguish your package.
  • The projectUrl field points to a web page that provides more information about your package.

Another big change we made is that the package feed is now an Open Data Protocol (OData) service endpoint. This makes it easy for clients to write arbitrary queries using LINQ against an IQueryable interface, which is automatically translated to the proper query URL. For example, to see the first 10 packages whose names start with “N”, you can append ?$filter=startswith(Id,'N') eq true&$top=10 to the feed URL.
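To give a flavor of that, here’s a sketch of the client side; the GalleryFeedContext class name and feed URL are hypothetical stand-ins for whatever your generated service reference provides:

// Query the package feed via a WCF Data Services client proxy.
var context = new GalleryFeedContext(new Uri("http://example.com/feed"));
var packages = context.Packages
    .Where(p => p.Id.StartsWith("N"))   // translated to $filter=startswith(Id,'N') eq true
    .Take(10);                          // translated to $top=10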

Also, when using the PowerShell-based Package Manager Console, be sure to note that we renamed the Add-Package command to Install-Package and the Remove-Package command to Uninstall-Package. We felt the new names conveyed the right semantics.

How’s things?

So far, the project has been a lot of fun to work on, in large part due to the enthusiasm and excitement that we’ve seen from the community. As I mentioned in the past, this is truly an Open Source project and we’ve had quite a few community code contributions.

Of course, we still have plenty of items up for grabs if you’re looking for something to work on.


One cool thing we’ve done is integrated the use of ReviewBoard for doing code reviews into our process. For information on that, check out our code review instructions. Our review board is currently hosted at but that domain name will change soon.

Continuous Integration

For those of you who like life in the fast lane, we do have a Team City based Continuous Integration (CI) server hosted at You can get daily builds compiled directly from our source tree. So for those of you who knew about the build server, you would have been playing with the CTP 2 for a while now. Winking

What’s next?

Well our next release is going to be NuGet version 1.0 RTM. A lot of our focus for this iteration will be on applying some spit and polish as well as integration work on our sister project, Gallery Server.

The Gallery Server project is building what will become the official gallery for NuGet (as well as for Orchard modules and other types of galleries). It’s being developed as an Open Source project as well so that anyone can take the source and host their own galleries.

Once the gallery server is completed and hosted, we’ll start to transition from our current temporary feed over to the gallery server. We’ll leave the temporary feed up for a while to allow people time to transition over to whatever the final official gallery location ends up at.

At this point, if you haven’t tried NuGet, give it a try. If you have tried it, let us know what you think. I hope you enjoy using it, I know I do. Smile

Tags: mvc, code, nuget

Today we’re releasing the release candidate for ASP.NET MVC 3. We’re in the home stretch now so it’ll mostly be bug fixes and small tweaks from here on out.

There are two ways to install ASP.NET MVC 3:

  • Install it using the Web Platform Installer
  • Download and run the standalone installer

Also, be sure to check out the ASP.NET MVC 3 web page for information and content about ASP.NET MVC 3 as well as the release notes for this release.

Also, don’t miss Scott Guthrie’s blog post on ASP.NET MVC 3 which provides the usual level of detail on the release.

Razor Intellisense. Ah Yeah!

Probably the most frequently asked question I received when we released the Beta of ASP.NET MVC 3 was “When are we going to get Intellisense for Razor?” Well I’m happy to say the answer to that question is right now!

Not only Intellisense, but syntax highlighting and colorization also works for Razor views. ScottGu’s blog post I mentioned earlier has some screenshots of the Intellisense in action as well as details on some of the other improvements included in ASP.NET MVC 3 RC.


As I wrote earlier, this release of ASP.NET MVC includes an updated version of NuGet, a free and open source Package Manager that integrates nicely into Visual Studio.

What’s Next?

Well if all goes well, we’ll land this plane nicely with an RTM release, and then it’s time to start thinking about ASP.NET MVC 4. There, I said it. Well, actually, I should probably already be thinking about 4, but seriously, can’t a guy catch a break once in a while to breathe for a moment?

Well, since I’m lazy, I’ll probably be asking you very soon for your thoughts on what you’d like to see us focus on for the next version of ASP.NET MVC. Then I can present your best ideas as my own in the next executive review. You don’t mind that at all, do you? Winking

Seriously though, please do provide feedback and I’ll keep you posted on our planning.

Now that we have NuGet in place, one thing we’ll be focusing on is building packages for features that we would have liked to include in ASP.NET MVC but didn’t have the time to implement, or for experimental features that we’d like feedback on. I think building NuGet packages will be a great way to try out new feature ideas, and for the ones we think belong in the product, we can always roll them into ASP.NET MVC core.


This month’s Scientific American has an interesting commentary by Scott Lilienfield entitled Fudge Factor that discusses the fine line between academic misconduct and errors caused by confirmation bias.

For a great description of confirmation bias, read the You Are Not So Smart post on the topic.

The Misconception: Your opinions are the result of years of rational, objective analysis.

The Truth: Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.

The fudge factor article talks about some of the circumstances that contribute to confirmation bias in the sciences.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

Obviously this doesn’t just apply to scientists. I’m sure we all know developers who are equally prone to confirmation bias, present company excluded of course. Winking smile. Pretty much everybody is susceptible. We all probably witnessed an impressive (in magnitude) display of confirmation bias in the recent elections.

However, there’s another contributing factor that the article doesn’t touch upon that I think is worth calling out, our education system. I remember when I was in high school and college, I had a lot of “lab” classes for the various sciences. We’d conduct experiments, take measurements, and plot the measurements on a graph. However, we already knew what the results were supposed to look like. So if a measurement was way off the expected graph, there was a tendency to retake the measurement.

“Whoops, I must’ve nudged the apparatus when I took that measurement, let’s try it again.”

As the article points out (emphasis mine)…

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias.

So how can schools do a better job of teaching scientific methods? I think one interesting thing a teacher can do is have students conduct an experiment where the students think they know what the expected results should be beforehand, but where the actual results will not match up.

I think this would be interesting as an experiment in its own right. I’d be curious to see how many students turn in results which match their expectations rather than what matched their actual observations. That could provide a powerful teaching opportunity about scientific methods and confirmation bias.

Tags: code

It was a dark and stormy coding session; the rain fell in torrents as my eyes were locked to two LCD screens in a furious display of coding …


…sorry sorry, I just can’t continue. It’s all a lie.

This is actually a cautionary tale describing one subtle way you can run afoul of Code Access Security (CAS) when attempting to run an application in partial trust. But who wants to read about that? Right? Right?

Well this isn’t a sordid tale, but if you bear with me, you may just find it interesting. Either that, or you may just take pity on me that I find this type of thing interesting.

I was hacking on NuGet the other day and all I wanted to do was write some code that accessed the version number of the current assembly. This is something we do in Subtext, for example. If you scroll to the very bottom of the admin section, you’ll see the application’s version number displayed.


As you can imagine, the code to get the version number is very straightforward:
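// The one-liner in question (the same call used in the class library below):
Assembly.GetExecutingAssembly().GetName().Version.ToString()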


Or is it!? (cue scary organ music)

What the code does here (besides appearing to smack the Law of Demeter in the mouth) is get the currently executing assembly. From that it gets the Assembly name and extracts the version from the name. What could go wrong? I tested this in medium trust and it received the “works on my machine” seal of approval!

But does it work all the time? Well if it did, I wouldn’t be writing this blog post would I?

Fortunately, my colleague David Fowler caught this latent bug during a code review. Levi (no blog) Broderick was brought in to help explain the whole issue so a dunce like me could understand it. These two co-workers are scary smart and must never be allowed to fall into a life of crime as they would decimate the countryside. Just letting you know.

As it turns out, code exactly like this was the source of a medium trust bug in ASP.NET MVC 2 (that we fortunately caught and fixed before RTM). So what gives?

Well there’s very subtle latent bug with this code. To illustrate, I’ll put the code in context. The following snippet is a class library that makes use of the code I just wrote.

using System.Reflection;
using System.Security; 

[assembly: SecurityTransparent] 

namespace ClassLibrary1 {
  public static class Class1 {
    public static string GetExecutingAssemblyVersion() {
      return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
  }
}

We need an application to reference that code. The following is code for an ASP.NET MVC controller with an action method that calls the method in the class library and returns it as a string. It may seem odd that the action method returns a string rather than an ActionResult, but that’s allowed. ASP.NET MVC simply wraps it in a ContentResult.

using System.Web.Mvc;

namespace MvcApplication1.Controllers {
  public class HomeController : Controller {
    public string ClassLibAssemblyVersion() {
      return ClassLibrary1.Class1.GetExecutingAssemblyVersion();
    }
  }
}

Still with me?

When I run this application and visit /Home/ClassLibAssemblyVersion everything works fine and we see the version number.


Now’s where the party gets a bit wild (but still safe for work). At this point, I’ll put the class library assembly in the GAC and then recompile the application. I’m going to assume you know how to do that. Note that I’ll need to remove the local copy of the class library from the bin directory of my ASP.NET MVC application and also remove the project reference and replace it with a GAC reference.

When I do that and run the application again, I get a SecurityException.


Oh noes!

So what happened here? Reflector to the rescue! Looking at the stack trace, let’s dig into RuntimeAssembly.GetName(Boolean copiedName) method.

public override AssemblyName GetName(bool copiedName) {
    AssemblyName name = new AssemblyName();
    string codeBase = this.GetCodeBase(copiedName);
    // ... snipped for brevity ...

    return name;
}

I’ve snipped out some code so we can focus on the interesting part. This method wants to return a fully populated AssemblyName instance. One of the properties of AssemblyName is CodeBase, which is a path to the assembly.

Once it has this path, it attempts to verify the path by calling VerifyCodeBaseDiscovery. Let’s take a look.

private void VerifyCodeBaseDiscovery(string codeBase) {
    if ((codeBase != null) && 
        (string.Compare(codeBase, 0, "file:", 0, 5,
          StringComparison.OrdinalIgnoreCase) == 0)) {
        URLString str = new URLString(codeBase, true);
        new FileIOPermission(FileIOPermissionAccess.PathDiscovery,
          str.GetFileName()).Demand();
    }
}

Notice that last line of code? It’s making a security demand to check if you have path discovery permissions on the specified path. That’s what’s failing. Why?

Well before you put the assembly in the GAC, the assembly was being loaded from your bin directory. Naturally, even in medium trust, you have rights to discover that path. But now that the class library is in the GAC, it’s being loaded from a subdirectory of c:\Windows\Assembly and guess what. Your medium trust application doesn’t have path discovery permissions to that directory.

As an aside, I think it’s too bad that this particular property doesn’t check its security demand lazily. That would be my kind of property access. My gut feeling is that people don’t often ask for an assembly’s Codebase as much as they ask for the other “safe” properties, like Version!

So how do we fix this? Well the answer is to construct our own AssemblyName instance.

new AssemblyName(typeof(Class1).Assembly.FullName).Version.ToString();

This implementation avoids the security issue I mentioned earlier because we’re generating the AssemblyName instance ourselves and it never has a reference to the disallowed path.

If you want to see this in action, I put together a little demo showing the bad approach and the fixed approach.

You’ll need to GAC the ClassLibrary1 assembly to see the exception occur. I have another action that has the safe implementation. Try it out.

As a tangent, the astute reader may have noticed that I used the assembly level SecurityTransparentAttribute in my class library. Is that a case of my assembly attempting to deal with self esteem issues and shying away from a clamoring public? Why did I put that attribute there? The answer to that, my friends, is a story for another time. Smile

Tags: code, open source, nuget

The polls have closed and we now have a new name for our project, NuGet (pronounced “New Get” and not “Nugget” and not “Noojay” for you hoity-toity types), which had the most votes by a large margin.

For those who missed it, the following posts will get you up to speed on the name change:

Over the next couple of days we’ll start transitioning the project over to the new name. We’ll try to minimize the impact of the change and make sure existing links to the CodePlex project redirect to the new URL. If you have a local clone of the repository with work in progress when we rename the project, don’t worry. All you have to do is push your changes to the new URL for your fork rather than the old one.

Thanks for your participation and support! I’m glad to have this behind us so we can continue to focus on delivering a great product. I’ve even thought of a tagline we can use until one of you comes up with a much better one. Winking

NuGet: A new way to get libraries.

Or: NuGet. The caramel goodness of open source in your projects.

Tags: package manager, NuGet, not-nupack

Tags: nuget, code, open source

Just a quick follow-up to my last posts about naming NuPack. Looks like the community is not content to sit back and let the project be labelled with a lame name. I’ve seen a couple of community inspired names created as new issues in the CodePlex issue tracker.

NFetch has a huge lead, but the community-suggested NRocks is a close second. The name I like the best so far is NuGet.

(vote for it here)

As before, voting still closes on Tuesday 10/26 at 11:59 PM PDT. If you feel strongly enough around a name, rally your friends to vote for one. Smile

Tags: open source, nuget

There are only 2 hard problems in Computer Science. Naming things, cache invalidation and off-by-one errors.

I’m always impressed with the passion of the open source community and nothing brings it out more than a naming exercise. Smile

In my last blog post, I posted about our need to rename NuPack. Needless to say, I got a bit of angry, er, passionate feedback. There have been a lot of questions that keep coming up over and over again, and I thought I would try to address the most common ones here.

Why not stay with the NuPack name? It was just fine!

In the original announcement, we pointed out that:

We want to avoid confusion with another software project that just happens to have the same name. This other project, NUPACK, is a software suite by a group of researchers at Caltech having to do with analysis and design of nucleic acid systems.

Now some of you may be thinking, “Why let that stop you? Many projects in different fields are fine sharing the same name. After all, you named a blog engine Subtext and there’s a Subtext programming language already.”

There’s a profound difference between Microsoft starting an open source project that accepts contributions and some nobody named Phil Haack starting a little blog engine project.

Most likely, the programming language project has never heard of Subtext and Subtext doesn’t garner enough attention for them to care.

As Paula Hunter points out in a comment on the Outercurve blog post:

Sometimes we are victims of our own success, and NuPack has generated so much buzz that it caught CalTech’s attention. They have been using NuPack since 2007 and theoretically could assert their common law right of “first use” (and, they recently filed a TM application). Phil and the project team are doing the right thing in making the change now while the project is young. Did they have to? The answer is debatable, but they want to eliminate confusion and show respect to CalTech’s project team.

Naming is tough, and you can’t please everyone, but a year from now, most won’t remember the old name. How many remember Mozilla “Firebird”?

Apparently, we’re in good company when it comes to open source projects that have had to pick a new name. It’s always a painful process. This time around, we’re following guidelines posted by Paula in a blog post entitled The Naming Game: Things to consider when naming an open source project, which talks about the concept of “first use” she mentioned.

Why not go back to NPack?

There’s already a project on CodePlex with that name.

Why not name it NGem?

Honestly, I’d prefer not to use the N prefix. I know one of the choices we provided had it in the name, but it was one of the better names we could come up with. Also, I’d like to not simply appropriate a name associated with the Ruby community. I think that could cause confusion as well. I’d love to have a name that’s uniquely ours if possible.

Why not name it ****?

In the original announcement, we listed three criteria:

  • Domain name available
  • No other project/product with a name similar to ours in the same field
  • No outstanding trademarks on the name that we could find

Domain name

The reason we wanted to make sure the domain name is available is that if it is, it’s less likely to be the name of an existing product or company. Not only that, we need a decent domain name to help market our project. This is one area where I think the community is telling us to be flexible, and I’m willing to consider being more flexible here, just as long as the name we choose won’t run afoul of the second criterion and we get a decent domain name that doesn’t cause confusion with other projects.

Product/Project With Similar Names

This one is a judgment call, but all it takes is a little time with Google/Bing to assess the risk here. There’s always going to be a risk that the name we pick will conflict with something out there. The point is not to eliminate risk but reduce it to a reasonable level. If you think of a name, try it out in a search engine and see what you find.


Trademarks

This one is tricky. Pretty much, if your search engine doesn’t pull up anything, it’s unlikely there is a trademark. Even so, it doesn’t hurt to put your search through the US Patent office’s Trademark Basic Word Mark Search and make sure it’s clean there. I’m not sure how comprehensive or accurate it is, but if your name is there, you’re facing more risk than if it doesn’t show up.

I have a name that meets your criteria and is way better than the four options you gave us!

Ok, this is not exactly a question, but something I hear a lot. In the original blog post, we said the following:

Can I write in my own suggestion?

Unfortunately no. Again, we want to make sure we can secure the domains for our new project name, so we needed to start with a list that was actually attainable. If you really can’t bring yourself to pick even one, we won’t be offended if you abstain from voting. And don’t worry, the product will continue to function in the same way despite the name change.

However, I don’t want to be completely unreasonable and I think people have found a loophole. We’re conducting voting through our issue tracker and voting closes at 10/26 at 11:59 PM PDT. Our reasoning for not accepting suggestions was we wanted to avoid domain squatting. However, one creative individual created a bug to rename NuPack to a name for which they own the domain name and are willing to assign it over to the Outercurve foundation.

Right now, NFetch is way in the lead. But if some other name were to take the lead and meet all our criteria, I’d consider it. I reserve the right of veto power because I know one of you will put something obscene up there and somehow get a bajillion votes. Yeah, I have my eye on you Rob!

So where does that leave us?

We really don’t want to leave naming the project as an open ended process. So I think it’s good to set a deadline. On the morning of 10/27, for better or worse, you’ll wake up to a new name for the project.

Maybe you’ll hate it. Maybe you’ll love it. Maybe you’ll be ambivalent. Either way, over time, hopefully this mess will fade to a distant memory (much as Firebird has) and the name will start to fit in its new clothes.

As Paul Castle stated over Twitter:

@haacked to me the name is irrelevant the prouduct is ace

No matter what the name is, we’re still committed to delivering the best product we can with your help!

And no, we’re not going to name it:


Tags: nuget, code, open source

UPDATE: The new name is NuGet

The NuPack project is undergoing a rename and we need your help! For details, read the announcement about the rename on the Outercurve Foundation’s blog.

What is the new name?

We don’t know. You tell us! The NuPack project team brainstormed a set of names and narrowed down the list to four names.

I’ve posted a set of names as issues in our NuPack site and will ask you to vote for your favorite name among the lot. Vote for as many as you want, but realize that if you vote for all of them, you’ve just cancelled your vote. Winking

Here are the choices:

Voting will close at 10/26 at 11:59 PM.

Tags: nuget, code, open source

Note: Everything I write here is based on a very early pre-release version of NuGet (formerly known as NuPack) and is subject to change.

A few weeks ago I wrote a blog post introducing the first preview, CTP 1, of NuGet Package Manager. It’s an open source (we welcome contributions!) developer focused package manager meant to make it easy to discover and make use of third party dependencies as well as keep them up to date.

As of CTP 2, NuGet by default points to a temporary OData service endpoint (in CTP 1, this was an ATOM feed).

This feed was set up so that people could try out NuGet, but it’s only temporary. We’ll have a more permanent gallery set up as we get closer to RTM.

If you want to get your package in the temporary feed, follow the instructions at a companion project, NuPackPackages, on CodePlex.

Local Feeds

Some companies keep very tight control over which third party libraries their developers may use. They may not want their developers to point NuGet to arbitrary code over the internet. Or, they may have a set of proprietary libraries they want to make available for developers via NuGet.

NuGet supports these scenarios with a combination of two features:

  1. NuGet can point to any number of different feeds. You don’t have to point it to just our feed.
  2. NuGet can point to a local folder (or network share) that contains a set of packages.

For example, suppose I have a folder on my desktop named packages and drop in a couple of packages that I created like so:


I can add that directory to the NuGet settings. To get to the settings, go to the Visual Studio Tools | Options dialog and scroll down to Package Manager.

A shortcut to get there is to go to the Add Package Dialog and click on the Settings button, or click the button in the Package Manager Console next to the list of package sources. This brings up the Options dialog.


Type in the path to your packages folder and then click the Add button. Your local directory is now added as another package feed source.


When you go back to the Package Manager Console, you can choose this new local package source and list the packages in that source.


You can also install packages from your local directory. If you’re creating packages, this is a great way to test them out without having to publish them online anywhere.
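
To make that concrete, here’s roughly what the console session looks like. Treat this as a sketch against the CTP builds: the verbs were still shifting between releases, the -Source parameter is my assumption for bypassing the package source drop-down, and the path and package id below are just examples.

# List the packages sitting in the local folder source
# (the path is the example desktop folder from above).
List-Package -Source 'C:\Users\Phil\Desktop\packages'

# Install one of them into the current project straight from disk.
# "MyTestPackage" is a made-up id standing in for one of your own packages.
Install-Package MyTestPackage -Source 'C:\Users\Phil\Desktop\packages'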

Note that if you launch the Add Package Reference Dialog, you won’t see the local package feed unless you’ve made it the default package source. This limitation is only temporary as we’re changing the dialog to allow you to select the package source.


Now when you launch the Add Package Reference Dialog, you’ll see your local packages.


Please note, as of CTP 1, if one of these local packages has a dependency on a package in another registered feed, it won’t work. However, we are tracking this issue and plan to implement this feature in the next release.

Custom Read Only Feeds

Let’s suppose that what you really want to do is host a feed at a URL rather than a package folder. Perhaps you’re known for your great taste in music and package selection, and you want to host your own curated NuGet feed of the packages you think are great.

Well you can do that with NuGet. For step by step instructions, check out this follow-up blog post, Hosting a Simple “Read Only” NuGet Package Feed.

We imagine that the primary usage of NuGet will be to point it to our main online feed. But the flexibility of NuGet to allow for private local feeds as well as curated feeds should appeal to many.

Tags: NuGet, Package Manager, OData

nuget, code, open source comments suggest edit

A couple days ago I wrote a blog post entitled, Running Open Source In A Distributed World which outlined some thoughts I had about how managing core contributors to an open source project changes when you move from a centralized version control repository to distributed version control.

The post was really a way for me to probe for ideas on how best to handle feature contributions. In the post, I asked this question,

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

None other than Karl Fogel, whose book has served me well to this point and whose book I was critiquing, provided a great answer,

One simple way is, just get the agreement from each contributor the first time any change of theirs is approved for incorporation into the code. No matter whether it’s a large feature or a small bugfix – the contributor form is a small, one-time effort, so even for a tiny bugfix it’s still worth it (on the theory that the person is likely to contribute again, and the cost of collecting the form is amortized over all the contributions that person ever makes anyway).

So simple I’ll smack myself every hour for a week for not thinking of it. :)

Unfortunately, the process for accepting a contributor agreement is not yet fully automated (the Outercurve Foundation is working on it), so we won’t be doing this for small bug fixes. But we will do it for any feature contributions.

I’ve updated our guide to Contributing to NuGet based on this feedback. I welcome feedback on how we can improve the guide. I really want to make sure we can make it easy to contribute while still ensuring the integrity of the intellectual property. Thanks!

nuget, open source comments suggest edit

In my last post, I described how we’re trying to improve and streamline contributor guidelines to make it easy for others to contribute to NuGet.

Like all product cycles anywhere, we’re always running under tight time constraints. This helps us maintain a tight focus on the product. We don’t want the product to do anything and everything. However, we do want to deliver everything needed (along with double rainbows and unicorns) to meet our vision for this first release.

The best way to meet those goals is to get more contributions from outside the core team. And the best way to do that is to remove as many roadblocks as possible for those interested in contributing.

What’s Up For Grabs?!

When approaching a new project, it can be really challenging to even know what bugs to tackle. So much is happening so quickly and you don’t want to step on any toes.

So we’re trying an experiment where we mark issues in our bug tracker with the tag “UpForGrabs”. The idea here is that any item marked that way is something the core team will leave alone if someone else will take it. Some of these are assigned to core team members, but we hope that someone external will come along, start a discussion, and say, “Yeah, I’ll handle that and provide a pull request with high quality code.”

That would so rock!

So how do you find out about our Up for Grabs items? It’s really easy.

  1. Visit our issue tracker.
  2. Search for “UpForGrabs” (sans quotes) as in the screenshot below.

Searching for Up For Grabs

Once you find an item you’d like to tackle, start a discussion and let everyone know. That way, if anyone else has already started on it, you can work together or decide to choose something else.

Note that if the status is “Active”, it’s likely that someone has already started on it.

Another way to search for these items is to use the Advanced View on the issue tracker and add a filter with the word “UpForGrabs” and set the status to “Proposed”.



This reminds me, it’d be really nice if I could create a URL that takes you directly to this filtered view of our issue tracker. That’s something I need to log with the team.

In the meantime, we have a list of issues we’d love for you to vote up that would help us manage NuGet more effectively. :)

nuget, code, open source comments suggest edit

When it comes to running an open source project, the book Producing Open Source Software - How to Run a Successful Free Software Project by Karl Fogel (free pdf available) is my bible (see my review and summary of the book).

The book is based on Karl Fogel’s experiences as the leader of the Subversion project and has heavily influenced how I run the projects I’m involved in. Lately though, I’ve noticed one problem with some of his advice. It’s so Subversion-y.

Take a look at this snippet on Committers.

As the only formally distinct class of people found in all open source projects, committers deserve special attention here. Committers are an unavoidable concession to discrimination in a system which is otherwise as non-discriminatory as possible. But “discrimination” is not meant as a pejorative here. The function committers perform is utterly necessary, and I do not think a project could succeed without it.

A Committer in this sense is someone who has direct commit access to the source code repository. This makes sense in a world where your source control is completely centralized as it would be with a Subversion repository. But what about a world in which you’re using a completely decentralized version control like Git or Mercurial? What does it mean to be a “committer” when anyone can clone the repository, commit to their local copy, and then send a pull request?

In the book, Mercurial: The Definitive Guide, Bryan O’Sullivan discusses different collaboration models. The one the Linux kernel uses for example is such that Linus Torvalds maintains the “master” repository and only pulls from his “trusted lieutenants”.

At first glance, it might seem reasonable that a project could allow anyone to send a pull request to main and thus focus the “discrimination” Karl mentions on the technical merits of each pull request rather than on the history of a person’s involvement in the project.

On one level, that seems even more merit-based and egalitarian, but you have to wonder whether it scales. Based on the Linux kernel model, it clearly does not. As Karl points out,

Quality control requires, well, control. There are always many people who feel competent to make changes to a program, and some smaller number who actually are. The project cannot rely on people’s own judgement; it must impose standards and grant commit access only to those who meet them.

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

On one hand, if someone has a great feature idea, wouldn’t it be nice if we could just pull in their work without making them jump through hoops? On the other hand, if we have a hundred people go through this paperwork process, but only one actually ends up contributing anything, what a waste of our time. I would love to hear your thoughts on this.

NuGet, a package manager project I work on, is currently following the latter approach as described in our guide to becoming a core contributor, but we’re open to refinements and improvements. I should point out that a hosted Mercurial solution does support the centralized committer model where we provide direct commit access. It just so happens that while some developers on the NuGet project have direct commit access, most don’t and shouldn’t make use of it per project policy, as we’re still following a distributed model. We’re not letting the technical abilities or limitations of our source control system or project hosting define our collaboration model.

I know I’m late to the game when it comes to distributed source control, but it’s really striking to me how it’s turned the concept of committers on its head. In the centralized source control world, being a contributor was enforced via a technical gate: either you had commit access or you didn’t. With distributed version control, it’s become more a matter of social contract and project policies.

mvc, code, open source, nuget comments suggest edit

NuGet (recently renamed from NuPack) is a free open source developer focused package manager intent on simplifying the process of incorporating third party libraries into a .NET application during development.

After several months of work, the Outercurve Foundation (formerly CodePlex Foundation) today announced the acceptance of the NuGet project to the ASP.NET Open Source Gallery. This is another contribution to the foundation by the Web Platform and Tools (WPT) team at Microsoft.

Also be sure to read Scott Guthrie’s announcement post and Scott Hanselman’s NuGet walkthrough. There’s also a video interview with me on Web Camps TV where I talk about NuGet.

Just to warn you, the rest of this blog post is full of blah blah blah about NuGet, so if you’re a person of action, feel free to go:

Now back to my blabbing. I have to tell you, I’m really excited to finally be able to talk about this in public as we’ve been incubating this for several months now. During that time, we collaborated with various influential members of the .NET open source community including the Nu team in order to gather feedback on delivering the right project.

What Does NuGet Solve?

The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available easy to use manner of discovering and incorporating these libraries into a project.

Take ELMAH, for example. For the most part, this is a very simple library to use. Even so, it may take the following steps to get started:

  1. You first need to discover ELMAH somehow.
  2. The download page for ELMAH includes multiple zip files. You need to make sure you choose the correct one.
  3. After downloading the zip file, don’t forget to unblock it.
  4. If you’re really careful, you’ll verify the hash of the downloaded file against the hash provided by the download page.
  5. The package needs to be unzipped, typically into a lib folder within the solution.
  6. You’ll then add an assembly reference to the assembly from within the Visual Studio solution explorer.
  7. Finally, you need to figure out the correct configuration settings and apply them to the web.config file.

That’s a lot of steps for a simple library, and it doesn’t even take into account what you might do if the library itself depends on multiple other libraries.

NuGet automates all of these common and tedious tasks, allowing you to spend more time using the library than getting it set up in your project.
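
Here’s the same ELMAH scenario with NuGet. One hedge: I’m assuming the package is published under the id “elmah” (the actual id on the temporary feed may differ), and early CTP builds used Add-Package where later builds use Install-Package.

# One command replaces all seven steps: it downloads the package,
# unpacks it into the solution's packages folder, adds the assembly
# reference, and merges ELMAH's settings into web.config.
Install-Package elmah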

NuGet Guiding Principles

I remember several months ago, hot on the heels of shipping ASP.NET MVC 2, I was in a meeting with Scott Guthrie (aka “The Gu”) reviewing plans for ASP.NET MVC 3 when he threw down the gauntlet and said it was time to ship a package manager for .NET developers. The truth was, it was long overdue.

I set about doing some research, looking at existing package management systems on other platforms for inspiration, such as RubyGems, apt-get, and Maven. Package management is well-trodden ground, and we have a lot to learn from what’s come before.

After this research, I came up with a set of guiding principles for the design of NuGet that I felt specifically addressed the needs of .NET developers.

  1. Works with your source code. This is an important principle which serves to meet two goals: the changes that NuGet makes can be committed to source control, and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC, as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual Studio. It leaves those tasks to other package managers such as the Visual Studio Extension Manager and the Web Platform Installer.
  2. Works against a well-known central feed. As part of this project, we plan to host a central feed that contains (or points to) NuGet packages. Package authors will be able to create an account and start adding packages to the feed. The NuGet client tools will know about this feed by default.
  3. No central approval process for adding packages. When you upload a package to the NuGet Package Gallery (which doesn’t exist yet), you won’t have to wait around for days or weeks waiting for someone to review it and approve it. Instead, we’ll rely on the community to moderate and police itself when it comes to the feed. This is in the spirit of how and work.
  4. Anyone can host a feed. While we will host a central feed, we wanted to make sure that anyone who wants to can also host a feed. I would imagine that some companies might want to host an internal feed of approved open source libraries, for example. Or you may want to host a feed containing your curated list of the best open source libraries. Who knows! The important part is that the NuGet tools are not hard-coded to a single feed but support pointing them to multiple feeds.
  5. Command line and GUI based user interfaces. It was important to us to support the productivity of a command line based console interface. Thus NuGet ships with the PowerShell based Package Manager Console, which I believe will appeal to power users. Likewise, NuGet also includes an easy to use GUI dialog for adding packages. (A short console sketch follows this list.)
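
As a quick illustration of principles 4 and 5: nothing ties the console to the central feed. This is only a sketch, though; the feed URL below is invented, and the -Source parameter is my assumption about how a per-command feed override surfaces in the tooling.

# Query a company-internal feed instead of the default central one.
List-Package -Source 'http://packages.example.com/feed'

# The same override works when installing; everything still lands
# inside the solution folder, per principle 1.
Install-Package InternalUtilities -Source 'http://packages.example.com/feed'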

NuGet’s Primary Goal

In my mind, the primary goal of NuGet is to help foster a vibrant open source community on the .NET platform by providing a means for .NET developers to easily share and make use of open source libraries.

As an open source developer myself, this goal is something that is near and dear to my heart. It also reflects the evolution of open source in DevDiv (the division I work in) as this is a product that will ship with other Microsoft products, but also accepts contributions. Given the primary goal that I stated, it only makes sense that NuGet itself would be released as a truly open source product.

There’s one feature I want to call out that’s particularly helpful to me as an open source developer. I run an open source blog engine called Subtext that makes use of around ten to fifteen other open source libraries. Before every release, I go through the painful process of checking each of these libraries for new updates and incorporating them into our codebase.

With NuGet, this is one simple command: List-Package -Updates. The dialog also displays which packages have updates available. Nice!
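
Here’s what that session looks like in practice. A minimal sketch: “SomePackage” is a placeholder id, and I’m assuming the companion Update-Package command from the same builds.

# Show installed packages that have newer versions available on the feed.
List-Package -Updates

# Then bring a specific package (and its dependencies) up to date.
Update-Package SomePackage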

And keep in mind, while the focus is on open source, NuGet works just fine with any kind of package. So you can create a network share at work, put all your internal packages in there, and tell your co-workers to point NuGet to that directory. No need to set up a NuGet server.
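
Setting that up is little more than file copying. A minimal sketch, assuming a share named \\server\packages and a package you’ve built yourself:

# Drop your built package onto a plain network share...
Copy-Item .\InternalLogging.1.0.nupkg \\server\packages

# ...and co-workers can install straight from the share.
Install-Package InternalLogging -Source \\server\packages

They can also register the share as a package source in Tools | Options, just like the local folder shown earlier.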

Get Involved!

So in the fashion of all true open source projects, this is the part where I beg for your help. ;)

It is still early in the development cycle for NuGet. For example, the Add Package Dialog is really just a prototype intended to be rewritten from scratch. We kept it in the codebase so people can try out the user interface workflow and provide feedback.

We have yet to release our first official preview (though it’s coming soon). What we have today is closer in spirit to a nightly build (we’re working on getting a Continuous Integration (CI) server in place).

So go over to the NuGet website on CodePlex and check out our guide to contributing to NuGet. I’ve been working hard to try and get documentation in place, but I could sure use some help.

With your help, I hope that NuGet becomes a wildly successful example of how building products in collaboration with the open source community benefits our business and the community.

Tags: NuGet, Package Manager, Open Source

mvc, code comments suggest edit

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Wow! It’s been a busy two months and change since we released Preview 1 of ASP.NET MVC 3. Today I’m happy (and frankly, relieved) to announce the Beta release of ASP.NET MVC 3. Be sure to read Scott Guthrie’s announcement as well.

Credits: Image from ICanHazCheezburger

Yes, you heard me right, we’re jumping straight to Beta with this release! To try it out…

As always, be sure to read the release notes (also available as a Word doc if you prefer that sort of thing) for all the juicy details about what’s new in ASP.NET MVC 3.

A big part of this release focuses on polishing and improving features started in Preview 1. We’ve made a lot of improvements (and changes) to our support for Dependency Injection allowing you to control how ASP.NET MVC creates your controllers and views as well as services that it needs.

One big change in this release is that client validation is now built on top of jQuery Validation in an unobtrusive manner. In ASP.NET MVC 3, jQuery Validation is the default client validation script. It’s pretty slick, so give it a try and let us know what you think.

Likewise, our Ajax features, such as Ajax.ActionLink, are now built on top of jQuery. There’s a way to switch back to the old behavior if you need to, but moving forward, we’ll be leveraging jQuery for this sort of thing.

Where’s the Razor Syntax Highlighting and Intellisense?

This is probably a good point to stop and provide a little bit of bad news. One of the most frequently asked questions I hear is when we’re going to get Razor syntax highlighting and Intellisense. Unfortunately, it’s not ready for this release, but the Razor editor team is hard at work on it and we will see it in a future release.

I know it’s a bummer (believe me, I’m bummed about it) but I think it’ll make it that much sweeter when the feature arrives and you get to try it out the first time! See, I’m always looking for that silver lining. ;)

What’s this NuPack Thing?

That’s been the other major project I’ve been working on which has been keeping me very busy. I’ll be posting a follow-up blog post that talks about that.

What’s Next?

The plan is for our next release to be a Release Candidate. I’ve updated the Roadmap to provide an idea of some of the features coming in the RC. For the most part, we try not to add too many features between Beta and RC, preferring to focus on bug fixing and polish.