Tags: personal, mvc

Along with James Senior, I’ll be speaking at a couple of free Web Camps events in South America in March 2011.


Buenos Aires, Argentina – March 14-15, 2011

São Paulo, Brazil – March 18-19, 2011

Update: Registration is open! Register for Argentina. Register for Brazil. For a list of all upcoming Web Camps events, see the events list.

If you’re not familiar with Web Camps, the website provides the following description, emphasis mine:

Microsoft’s 2 day Web Camps are events that allow you to learn and build websites using ASP.NET MVC, WebMatrix, OData and more. Register today at a location near you! These events will cover all 3 topics and will have presentations on day 1 with hands on development on day 2. They will be available at 7 countries worldwide with dates and locations confirming soon.

Did I mention that these events are free? The description neglects to mention NuGet, but you can bet I’ll talk about that as well. :)

I’m really excited to visit South America (this will be my first time) and I hope that schedules align in a way that I can catch a Fútbol/Futebol game or two. I also need to brush up on my Spanish for Argentina and learn a bit of Portuguese for Brazil.

One interesting thing I’m learning is that visiting Brazil requires a visa (Argentina does not), and the process is heavyweight. According to the instructions I received, it takes a minimum of 40 business days to receive the visa! Wow. I’m sure the churrascaria will be worth it though.

Good thing I’m getting started on this early. Hey Brazil, I promise not to trash your country. So feel free to make my application go through more quickly.

Tags: webcamps, aspnetmvc, brazil, argentina

Tags: code, nuget

We could have done better. That’s the thought that goes through my mind when looking back on this past year and reflecting on NuGet.

Overall, I think we did pretty well with the product. Nobody died from using it, we received a lot of positive feedback, and users seem genuinely happy to use the product. So why start off with a negative review?

It’s just my way. If you can’t look back on every project you do and say to yourself “I could have done better”, then you weren’t paying attention and you weren’t learning. For example, why stop at double rainbows when we could have gone for triple?


When starting out on NuGet, we hoped to accomplish even more in our first full release. Like many projects, we have iteration milestones which each culminate in a public release. Ours tended to be around two months in duration, though our last one was one month.

Because we were a bit short staffed in the QA department, at the end of each milestone our one lone QA person, Drew Miller, would work like, well, a mad Ninja on fire trying to verify all the fixed bugs and test implemented features. Keep in mind that the developers do test out their own code and write unit tests before checking the code in, but it’s still important to manually test code with an eye towards thinking like a user of the software.

This, my friends, does not scale.

When we looked back on this past year, we came to the conclusion that our current model was not working out as well as it could. We weren’t achieving the level of quality that we would have liked, and continuing in this fashion would burn Drew out.

I came to the realization that we need to assume we’ll never be fully staffed on the QA side. Given this, it became obvious that we need a new approach.

This was apparent to the developers too. David Fowler noted to me that we needed to have features tested closer to when they were implemented. As we discussed this, I remember a radical notion that Drew told me about when he first joined our QA team. He told me that he wants to eliminate dedicated testers. Not actually kill them mind you, just get rid of the position.

An odd stance for someone who is joining the QA profession. But as he explained it to me in more detail over time, it started to make more sense. In the most effective place he worked, every developer was responsible for testing. After implementing a feature and unit testing it (both manually and via automated tests), the developer would call over another developer and fully test the feature end-to-end as a pair. So it wasn’t that there was no QA there, it was that QA was merely a role that every developer would pitch in to help out with. In other words, everyone on the team is responsible for QA.

So as we were discussing these concepts recently, something clicked in my thick skull. They echoed some of the concepts I learned attending a fantastic presentation back in 2009 at the Norwegian Developer’s Conference by Mary Poppendieck. Her set of talks focused on the concept of a problem solving organization and the principles of Lean. She gave a fascinating account of how the Empire State Building finished in around a year and under budget by employing principles that became known as Lean. I remember thinking to myself that I would love to learn more about this and how to apply it at work.

Well fast forward a year and I think the time is right. Earlier in the year, I had discussed much more conservative changes we could make. But in many ways, by being an external open source project with a team open to trying new ideas out, the NuGet team is well positioned to try out something different than we’ve done before as an experiment. We gave ourselves around two months starting in January with this new approach and we’ll evaluate it at the end of those two months to see how it’s working for us.

We decided to go with an approach where each feature was itself a micro-iteration. In other words, a feature was not considered “done” until it was fully done and able to be shipped.

So if I am a developer working on a feature, I don’t get to write a few tests, implement the feature, try it out a bit, check it in, and move on to the next feature. Instead, developers need to call over Drew or another available developer and pair test the feature end-to-end. Once the feature is fully tested, only then does it get checked into the main branch of our main fork.

Note that under this model, every developer also wears the QA hat throughout the development cycle. This allows us to scale out testing whether we have two dedicated QA, one dedicated QA, or even zero. You’ll notice we’re still planning to keep Drew as a dedicated QA person while we experiment with this new approach so that he can help guide the overall QA process and look at system level testing that might slip by the pair testing. Over time, we really want to get to a point where most of our QA effort is spent in preventing defects in the first place, not just finding them.

Once a feature has been pair tested, that feature should be in a state that it can be shipped publicly, if we so choose.

We’re also planning to have a team iteration meeting every two weeks where we demonstrate the features that we implemented in the past two weeks. This serves both to review the overall user experience of the features as well as to ensure that everyone on the team is aware of what we implemented.

You’ll note that I’m careful not to call what we’re doing “Lean” with a capital “L”. Drew cautioned me to use lower-case “lean” as opposed to capital “Lean” because he wasn’t familiar with Lean when he worked this model at his previous company. We wouldn’t want to tarnish the good name of Lean with our own misconceptions about what it is.

This is where I have to confess something to you. Honestly, I’m not really that interested in Lean. What I’m really interested in is getting better results. It just seems to me that the principles of Lean are a very good approach to achieving those results in our situation.

I’m not one to believe in one true method of software development that works for all situations. What works for the start-up doesn’t work for the Space Shuttle and vice versa. But from what I understand, NuGet seems to be a great candidate for gaining benefits from applying lean principles.

So when I said I’m not interested in Lean, yeah, that was a bit of a fib. I definitely am interested in learning more about Lean (and I imagine I’ll learn a lot from many of you). But I am much more interested in the better results we hope to achieve by applying lean principles.

Tags: personal

At some point, everybody and every team makes a mistake they regret and wish they could take back. During our regular status meetings, I sometimes make the mistake of saying something like “if I could go back in time, I’d tell myself not to make that decision.”

(Flux capacitor image from the greenhead.)

That tees it up for our lead developer who’s so smart even his ass is smart. You might say he’s a smart ass. His response is usually “Really? I can think of a lot better things I would do with a time machine.”

Which got me thinking. Hypothetically speaking of course, if I did have a time machine, how exactly would I maximize my profit?

Often, time travel questions fixate on boring topics such as if you could meet anyone in history, who would it be?

Lincoln? Snore. Jesus? Yah, maybe he can do something about that chronic rash you got going down there.

I think the question of how you’d become rich is much more interesting and potentially creative. Let’s put some constraints on the question to make it more interesting.

  1. The time machine is the size of a room and located in your house. Thus you can’t travel with the time machine.
  2. The time machine can only transport you back in time and then return you. It can’t transport you through space to another location. So if you live in Seattle, Washington, you can travel back in time to any year, say 1951, but you’ll still be in Seattle.
  3. You get one trip and one trip only, and you can only bring yourself and the clothes on your back. I’d recommend a backpack.
  4. You have the resources available to you today. You can’t go back in time and buy a million shares of some stock unless you could actually afford to buy that stock.

Keep in mind the consequences of your action. It might seem like it’d be easy to go back in time around ten years, go to a public library, log in to your old E-Trade account, and buy a bunch of stock. But the you of ten years ago would probably notice, assume you’d been hacked, and perhaps sell immediately.

Also, if you travel to a time before you were born, consider that you probably won’t have proper identification and papers, unless you forge them. Could you buy stocks without identification?

So my friends, I ask you. Given these constraints, how would you maximize profit with a single trip back in time?

Again, this is purely hypothetical. I don’t have a time machine in the garage. Let’s just say, I like to be prepared just in case.

Tags: mvc, nuget, code

Almost exactly one month ago, we released the Release Candidate for ASP.NET MVC 3. And today we learn why we use the term “Candidate”.

As Scott writes, Visual Studio 2010 SP1 Beta was released just this week and as we were testing it we found a few incompatibilities with it and the ASP.NET MVC 3 RC that we had just released.

That’s when we, in the parlance of the military, scrambled the jets to get another release candidate prepared.

You can install it directly using the Web Platform Installer (Web PI), or download the installer yourself from here.

Be sure to read the release notes for known issues and breaking changes in this release. I’m not saying whether I put an Easter egg in there or not, but you’ll have to read all the notes to find out.

In particular, there are two issues I want to call out.

Breaking Change Alert!

The first is a breaking change. Remember way back when I wrote about Dynamic Methods in ViewData? Near the end of that post I wrote an Addendum about the property name mismatch between ViewModel and View.

Well we finally resolved that mismatch. The new property name, both in the controller and in the view is ViewBag. This may break many of your existing ASP.NET MVC 3 pre-release applications.
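In practice the fix is a mechanical rename. Here’s a minimal sketch of the before and after (this controller is a hypothetical example, not code from the release):

```csharp
public class HomeController : Controller {
    public ActionResult Index() {
        // Before RC 2: ViewModel.Message = "Hello";
        ViewBag.Message = "Hello"; // dynamic property bag, backed by ViewData
        return View();
    }
}
```

The view-side rename is the same: `@View.Message` style references become `@ViewBag.Message`.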

NuGet Upgrade Alert

The other issue I want to call out is that if you already have NuGet installed, running the ASP.NET MVC 3 RC 2 installer will not upgrade it. Instead, you need to go to the Visual Studio Extension Manager dialog (via the Tools | Extensions menu option) and click on the Updates tab. You should see NuGet listed there:


The NuGet.exe command line tool for creating packages is available on CodePlex.

Overall, this release consists mostly of bug fixes along with some fit and finish work for ASP.NET MVC 3. We’ve updated the version of jQuery and jQuery Validation that we include in the project templates and now also include jQuery UI, a library that builds on top of jQuery to provide animation, advanced effects, as well as themeable widgets.

In terms of NuGet, this release contains a significant amount of work. I’ll try and follow up soon with more details on the NuGet release along with release notes.

Tags: nuget, code

I don’t normally post lists of links as it’s really not my style. But there are a lot of great NuGet blog posts I want to call out, so I thought I’d try my hand at it.

Hey! Here’s a random picture of a goat.


I also tend to post links from my Twitter account.

Well that’s it for now. If you found this helpful, let me know and I’ll try to do it once in a while. Either once a quarter or once a month depending on interest. :)

Tags: mvc, nuget, code

Sometimes, despite your best efforts, you encounter a problem with your ASP.NET MVC application that seems impossible to figure out and makes you want to pull out your hair. Or worse, it makes you want to pull out my hair. In some of those situations, it ends up being a PEBKAC issue, but in the interest of avoiding physical harm, I try not to point that out.


Thankfully, in the interest of saving my hair, Brad Wilson (recently featured on This Developer’s Life!) wrote a simple diagnostics web page for ASP.NET MVC that you can drop into any ASP.NET MVC application. When you visit the page in your browser, it provides diagnostics information that can help discover potential problems with your ASP.NET application.

To make it as easy as possible to use it, I created a NuGet package named “MvcDiagnostics”. If you’re not familiar with NuGet, check out my announcement of NuGet as well as our Getting Started guide written by Tim Teebken.

With NuGet, you can use the Add Package Library Dialog to install MvcDiagnostics. Simply type in “MVC” in the search dialog to filter the online entries. Then locate the MvcDiagnostics entry and click “Install”.


Or you can use the Package Manager Console and simply type:

Install-Package MvcDiagnostics

Either way, this will add the MvcDiagnostics.aspx page to the root of your web application.


You can then visit the page with your browser to get diagnostics information.


With NuGet, it’s much easier to make use of this diagnostics page. Hopefully you’ll rarely need to use it, but it’s nice to know it’s there. Let us know if you have ways to improve the diagnostics page.

Tags: mvc, code

UPDATE: 2011/02/13: This code is now included in the RouteMagic NuGet package! To use this code, simply run Install-Package RouteMagic within the NuGet Package Manager Console.

One thing ASP.NET Routing doesn’t support is washing and detailing my car. I really pushed for that feature, but my coworkers felt it was out of scope. Kill joys.

Another thing Routing doesn’t support out of the box is a way to group a set of routes within another route. For example, suppose I want a set of routes to all live under the same URL path. Today, I’d need to make sure all the routes started with the same URL segment. For example, here’s a set of routes that all live under the “/blog” URL path.

RouteTable.Routes.MapRoute("r1", "blog/posts");
RouteTable.Routes.MapRoute("r2", "blog/posts/{id}");
RouteTable.Routes.MapRoute("r3", "blog/archives");

If I decide I want all these routes to live under something other than “blog” such as in the root or under a completely different name such as “archives”, I have to change each route. Not such a big deal with only three routes, but with a large system with multiple groupings, this can be a hassle.

I suppose one easy way to solve this is to do the following:

string baseUrl = "blog/";
RouteTable.Routes.MapRoute("r1", baseUrl + "posts");
RouteTable.Routes.MapRoute("r2", baseUrl + "posts/{id}");
RouteTable.Routes.MapRoute("r3", baseUrl + "archives");

Bam! Done! Call it a night Frank.

This is actually a very simple and great solution to the problem I stated. In fact, it probably works better than the alternative I’m about to show you. If this works so well, why am I showing you the alternative?

Well, there’s something unsatisfying about that answer. Suppose a request comes in for /not-blog. Every one of those routes is going to be evaluated even though we already know none of them will match. If we could group them, we could reduce the check to just one check. Also, it’s just not as much fun as what I’m about to show you.

What I would like to be able to do is the following.

var blogRoutes = new RouteCollection();
blogRoutes.MapRoute("r1", "posts");
blogRoutes.MapRoute("r2", "posts/{id}");
blogRoutes.MapRoute("r3", "archives");

RouteTable.Routes.Add("blog-routes", new GroupRoute("~/blog", blogRoutes));

In this code snippet, I’ve declared a set of routes and added them to a proposed GroupRoute instance. That group route is then added to the route table. Note that the child routes are not themselves added to the route table and they have no idea what parent path they’ll end up responding to.

With this proposed route, these child routes would then handle requests to /blog/posts and /blog/archives. But if I decide to place them under a different path, I can simply change a single route, the group route, and I don’t need to change each child route.


In this section, I’ll describe the implementation of such a group route in broad brush strokes. The goal here is to provide an under the hood look at how routing works and how it can be extended.

Implementing such a grouping route is not trivial. Routes in general work directly off of the current http request in order to determine if they match a request or not.

By themselves, those child routes I defined earlier would not match a request for /blog/posts. Note that the URLs for the child routes don’t start with “blog”. Fortunately though, the request that is supplied to each route is an instance of HttpRequestBase, an abstract base class.

What this means is we can muck around with the request and even change it so that the child routes don’t even know the actual request starts with /blog. That way, when a request comes in for /blog/posts, the group route matches it, but then rewrites the request only for its child routes so that they think they’re trying to match /posts.

Please note that what I’m about to show you here is based on internal knowledge of routing and is unsupported and may cause you to lose hair, get a rash, and suffer much grief if you depend on it. Use this approach at your own risk.

The first thing I did was implement my own wrapper classes for the http context class.

public class ChildHttpContextWrapper : HttpContextBase {
  private HttpContextBase _httpContext;
  private HttpRequestBase _request;

  public ChildHttpContextWrapper(HttpContextBase httpContext, 
      string parentVirtualPath, string parentPath) {
    _httpContext = httpContext;
    _request = new ChildHttpRequestWrapper(httpContext.Request, 
      parentVirtualPath, parentPath);
  }

  public override HttpRequestBase Request {
    get {
      return _request;
    }
  }

  // ... All other properties/methods delegate to _httpContext
}

Note that all this does is delegate every method and property to the supplied HttpContextBase instance that it wraps except for the Request property, which returns an instance of my next wrapper class.

public class ChildHttpRequestWrapper : HttpRequestBase {
  HttpRequestBase _httpRequest;
  string _path;
  string _appRelativeCurrentExecutionFilePath;

  public ChildHttpRequestWrapper(HttpRequestBase httpRequest, 
      string parentVirtualPath, string parentPath) {
    if (!parentVirtualPath.StartsWith("~/")) {
      throw new InvalidOperationException(
        "parentVirtualPath must start with ~/");
    }

    if (!httpRequest.AppRelativeCurrentExecutionFilePath
        .StartsWith(parentVirtualPath, StringComparison.OrdinalIgnoreCase)) {
      throw new InvalidOperationException(
        "This request is not valid for the current path.");
    }

    _path = httpRequest.Path.Remove(0, parentPath.Length);
    _appRelativeCurrentExecutionFilePath = httpRequest
      .AppRelativeCurrentExecutionFilePath
      .Remove(1, parentVirtualPath.Length - 1);
    _httpRequest = httpRequest;
  }

  public override string Path { get { return _path; } }

  public override string AppRelativeCurrentExecutionFilePath {
    get { return _appRelativeCurrentExecutionFilePath; }
  }

  // All other properties/methods delegate to _httpRequest
}

What this child request does is strip off the portion of the request path that corresponds to its parent’s virtual path. That’s the “~/blog” part supplied by the group route.

It then makes sure that the Path and the AppRelativeCurrentExecutionFilePath properties return this updated URL. Current implementations of routing look at these two properties when matching an incoming request. However, that’s an internal implementation detail of routing that could change, hence my admonition earlier that this is voodoo magic.
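To make the rewriting concrete, here is the string arithmetic those two overrides perform for a hypothetical request to /blog/posts grouped under ~/blog:

```csharp
var path = "/blog/posts";             // httpRequest.Path
var appRelativePath = "~/blog/posts"; // httpRequest.AppRelativeCurrentExecutionFilePath
var parentVirtualPath = "~/blog";     // the group route's virtual path
var parentPath = "/blog";             // same path, app-rooted

var childPath = path.Remove(0, parentPath.Length);
// "/posts"
var childAppRelativePath = appRelativePath.Remove(1, parentVirtualPath.Length - 1);
// "~/posts"
```

The child routes then match against /posts, none the wiser that the real request started with /blog.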

The implementation of request matching for GroupRoute is fairly straightforward then.

public override RouteData GetRouteData(HttpContextBase httpContext) {
  if (!httpContext.Request.AppRelativeCurrentExecutionFilePath
      .StartsWith(VirtualPath, StringComparison.OrdinalIgnoreCase)) {
    return null;
  }

  HttpContextBase childHttpContext = VirtualPath != ApplicationRootPath ? 
    new ChildHttpContextWrapper(httpContext, VirtualPath, _path) : null;

  return ChildRoutes.GetRouteData(childHttpContext ?? httpContext);
}
All we do here is make sure that the group route matches the current request. If so, we create a child http context which, as we saw earlier, looks just like the current http context, only with the /blog portion of the request stripped off. We then pass that to our internal route collection to see if any child route matches. If so, we return the route data from that match and we’re done.

In Part 2 of this series, we’ll look at implementing URL generation. That’s where things get really tricky.

Tags: mvc, code

A question I often receive via my blog and email goes like this:

Hi, I just got an email from a Nigerian prince asking me to hold some money in a bank account for him after which I’ll get a cut. Is this a scam?

The answer is yes. But that’s not the question I wanted to write about. Rather, a question that I often see on StackOverflow and our ASP.NET MVC forums is more interesting to me and it goes something like this:

How do I get the route name for the current route?

My answer is “You can’t”. Bam! End of blog post, short and sweet.

Joking aside, I admit that’s not a satisfying answer and ending it there wouldn’t make for much of a blog post. Not that continuing to expound on this question necessarily will make a good blog post, but expound I will.

It’s not possible to get the route name of the route because the name is not a property of the Route. When adding a route to a RouteCollection, the name is used as an internal unique index for the route so that lookup for the route is extremely fast. This index is never exposed.

The reason why the route name can’t be a property becomes more apparent when you consider that it’s possible to add a route to multiple route collections.

var routeCollection1 = new RouteCollection();
var routeCollection2 = new RouteCollection();

var route = new Route("{controller}/{action}", new MvcRouteHandler());

routeCollection1.Add("route-name1", route);
routeCollection2.Add("route-name2", route);

So in this example, we add the same route to two different route collections using two different route names when we added the route. So we can’t really talk about the name of the route here because what would it be? Would it be “route-name1” or “route-name2”? I call this the “Route Name Uncertainty Principle” but trust me, I’m alone in this.

Some of you might be thinking that ASP.NET Routing didn’t have to be designed this way. I address that at the end of this blog post. For now, this is the world we live in, so let’s deal with it.

Let’s do it anyways

I’m not one to let logic and an irrefutable mathematical proof stand in the way of me and getting what I want. I want a route’s name, and golly gee wilickers, I’m damn well going to get it.

After all, while in theory I can add a route to multiple route collections, I rarely do that in real life. If I promise to behave and not do that, maybe I can have my route name with my route. How do we accomplish this?

It’s simple really. When we add a route to the route collection, we need to tell the route what the route name is so it can store it in its DataTokens dictionary property. That’s exactly what that property of Route was designed for. Well not for storing the name of the route, but for storing additional metadata about the route that doesn’t affect route matching or URL generation. Any time you need some information stored with a route so that you can retrieve it later, DataTokens is the way to do it.

I wrote some simple extension methods for setting and retrieving the name of a route.

public static string GetRouteName(this Route route) {
    if (route == null) {
        return null;
    }
    return route.DataTokens.GetRouteName();
}

public static string GetRouteName(this RouteData routeData) {
    if (routeData == null) {
        return null;
    }
    return routeData.DataTokens.GetRouteName();
}

public static string GetRouteName(this RouteValueDictionary routeValues) {
    if (routeValues == null) {
        return null;
    }
    object routeName = null;
    routeValues.TryGetValue("__RouteName", out routeName);
    return routeName as string;
}

public static Route SetRouteName(this Route route, string routeName) {
    if (route == null) {
        throw new ArgumentNullException("route");
    }
    if (route.DataTokens == null) {
        route.DataTokens = new RouteValueDictionary();
    }
    route.DataTokens["__RouteName"] = routeName;
    return route;
}
Yeah, besides changing diapers, this is what I do on the weekends. Pretty sad isn’t it?

So now, when I register routes, I just need to remember to call SetRouteName.

routes.MapRoute("rName", "{controller}/{action}").SetRouteName("rName");

BTW, did you know that MapRoute returns a Route? Well now you do. I think we made that change in v2 after I begged for it like a little toddler. But I digress.

Like eating a Turducken, that code doesn’t sit well with me. We’re repeating the route name twice here which is prone to error. Ideally, MapRoute would do it for us, but it doesn’t. So we need some new and improved extension methods for mapping routes.

public static Route Map(this RouteCollection routes, string name, 
    string url) {
  return routes.Map(name, url, null, null, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults) {
  return routes.Map(name, url, defaults, null, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults, object constraints) {
  return routes.Map(name, url, defaults, constraints, null);
}

public static Route Map(this RouteCollection routes, string name, 
    string url, object defaults, object constraints, string[] namespaces) {
  return routes.MapRoute(name, url, defaults, constraints, namespaces)
    .SetRouteName(name);
}
These methods correspond to some (but not all, because I’m lazy) of the MapRoute extension methods in the System.Web.Mvc namespace. I called them Map simply because I didn’t want to conflict with the existing MapRoute extension methods.

With this set of methods, I can easily create routes for which I can retrieve the route name.

var route = routes.Map("rName", "url");

// within a controller
string routeName = RouteData.GetRouteName();

With these methods, you can now grab the route name from the route should you need it.

Of course, one question to ask yourself is why do you need to know the route name in the first place? Many times, when people ask this question, what they really are doing is making the route name do double duty. They want it to act as an index for route lookup as well as be a label applied to the route so they can take some custom action based on the name.

In this second case though, the “label” doesn’t have to be the route name. It could be anything stored in data tokens. In a future blog post, I’ll show you an example of a situation where I really do need to know the route name.
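For example, if all you really need is a label, you can stash an arbitrary key in DataTokens yourself, no route name required (the “category” key and the filter context usage here are made-up illustrations):

```csharp
var route = routes.MapRoute("admin-products", "admin/products/{action}");
route.DataTokens = route.DataTokens ?? new RouteValueDictionary();
route.DataTokens["category"] = "admin"; // any metadata you like; not the route name

// Later, perhaps inside an action filter:
var category = filterContext.RouteData.DataTokens["category"] as string;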

Alternate Design Aside

As an aside, why is routing designed this way? I wasn’t there when this particular decision was made, but I believe it has to do with performance and safety. With the current API, once a route has been added to a route collection with a name, internally the route collection can safely use that name as a dictionary key for the route, knowing full well that the name cannot change.

But imagine instead that RouteBase (the base class for all routes) had a Name property and the RouteCollection.Add method used that as the key for route lookup. Well it’s quite possible that the value of the route’s name could change for some reason due to a poor implementation. In that case, the index would be out of sync with the route’s name.

While I agree that the current design is safer, in retrospect I doubt many would screw up a read-only name property which should never change. We could have documented that the contract for the Name property of Route is that it should never change during the lifetime of the route. But then again, who reads the documentation? After all, I offered $1,000 to the first person who emailed me a hidden message embedded in our ASP.NET MVC 3 release notes and haven’t received one email yet. Also, you’d be surprised how many people screw up GetHashCode(), which effectively would have the same purpose as a route’s Name property.

And by the way, there are no hidden messages in the release notes. Did I make you look?

Tags: tdd, code

A while back I wrote about mocking successive calls to the same method which returns a sequence of objects. Read that post for more context.

In that post, I had written up an implementation, but quickly was won over by a better extension method implementation from Fredrik Kalseth.

public static class MoqExtensions {
  public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup, 
      params TResult[] results) where T : class {
    setup.Returns(new Queue<TResult>(results).Dequeue);
  }
}

As good as this extension method is, I was able to improve on it today during a coding session. I was writing some code where I needed the second call to the same method to throw an exception and realized this extension wouldn’t allow for that.

However, it wasn’t hard to write an overload that allows for that.

public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup,
    params object[] results) where T : class {
  var queue = new Queue(results);
  setup.Returns(() => {
    var result = queue.Dequeue();
    if (result is Exception) {
      throw result as Exception;
    }
    return (TResult)result;
  });
}

So rather than taking a parameter array of TResult, this overload accepts an array of object instances.

Within the method, we create a non generic Queue and then create a lambda that captures that queue in a closure. The lambda is passed to the Returns method so that it’s called every time the mocked method is called, returning the next item in the queue.

Here’s an example of the method in action:

var mock = new Mock<ISomeInterface>();
mock.Setup(r => r.GetNext())
    .ReturnsInOrder(1, 2, new InvalidOperationException());

Console.WriteLine(mock.Object.GetNext()); // prints 1
Console.WriteLine(mock.Object.GetNext()); // prints 2
Console.WriteLine(mock.Object.GetNext()); // throws InvalidOperationException

In this sample code, I mock an interface so that when its GetNext method is called a third time, it will throw an InvalidOperationException.

I’ve found this to be a helpful and useful extension to Moq and hope you find some use for it if you’re using Moq.

NOTE: As Richard Reeves pointed out to me in an email, do be careful if you mock a property using this approach. If you evaluate the property within a debugger, you will dequeue an element, potentially causing maddening debugging difficulty.

mvc, code 0 comments suggest edit

The beginning of wisdom is to call things by their right names – Chinese Proverb

Routing in ASP.NET doesn’t require that you name your routes, and in many cases it works out great. When you want to generate an URL, you grab this bag of values you have lying around, hand it to the routing engine, and let it sort it all out.


For example, suppose an application has the following two routes defined

routes.MapRoute(
    name: "Test",
    url: "code/p/{action}/{id}",
    defaults: new { controller = "Section", action = "Index", id = "" }
);

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = "" }
);

To generate a hyperlink to each route, you’d write the following code.

@Html.RouteLink("Test", new {controller="section", action="Index", id=123})

@Html.RouteLink("Default", new {controller="Home", action="Index", id=123})

Notice that these two method calls don’t specify which route to use to generate the links. They simply supply some route values and let ASP.NET Routing figure it out.

In this example, the first one generates a link to the URL /code/p/Index/123 and the second to /Home/Index/123, which should match your expectations.

This is fine for these simple cases, but there are situations where this can bite you. ASP.NET 4 introduced the ability to use routing to route to a Web Form page.  Let’s suppose I add the following page route at the beginning of my list of routes so that the URL /static/url is handled by the page /aspx/SomePage.aspx.

routes.MapPageRoute("new", "static/url", "~/aspx/SomePage.aspx"); 

Note that I can’t put this route at the end of my list of routes because it would never match incoming requests since /static/url would match the default route. Adding it to the beginning of the list seems like the right thing to do here.

If you’re not using Web Forms, you still might run into a case like this if you use routing with a custom route handler, such as the one I blogged about a while ago (with source code). In that blog post, I showed how to use routing to route to standard IHttpHandler instances.

Seems like an innocent enough change, right? For incoming requests, this route will only match requests that exactly match /static/url and no others, which is great. But if I look at my page, I’ll find that the two URLs I generated earlier are broken.

Now, the two URLs are /static/url?controller=section&action=Index&id=123 and /static/url?controller=Home&action=Index&id=123.


This is running into a subtle behavior of routing which is admittedly somewhat of an edge case, but is something that people run into from time to time. In fact, I had to help Scott Hanselman with such an issue when he was preparing his Metaweblog example for his fantastic PDC talk (HD quality MP4).

Typically, when you generate a URL using routing, the route values you supply are used to “fill in” the URL parameters. In case you don’t remember, URL parameters are those placeholders within a route’s URL with the curly braces such as {controller} and {action}.

So when you have a route with the URL {controller}/{action}/{Id}, you’re expected to supply values for controller, action, and Id when generating a URL. During URL generation, you need to supply a route value for each URL parameter so that an URL can be generated. If every route parameter is supplied with a value, that route is considered a match for the purposes of URL generation. If you supply extra parameters above and beyond the URL parameters for the route, those extra values are appended to the generated URL as query string parameters.

In this case, since the new route I mapped doesn’t have any URL parameters, that route matches every URL generation attempt since technically, “a route value is supplied for each URL parameter.” It just so happens in this case there are no URL parameters. That’s why all my existing URLs are broken because every attempt to generate a URL now matches this new route.
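Here’s a small self-contained sketch of the URL generation matching rule described above. To be clear, this is not the actual System.Web.Routing implementation, and it ignores default values entirely:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class RouteSketch {
    // Returns a generated URL if every URL parameter in the pattern is
    // supplied a value; otherwise returns null (the route "doesn't match").
    // Values beyond the URL parameters become query string parameters.
    public static string TryGenerateUrl(string urlPattern,
            IDictionary<string, string> values) {
        var usedKeys = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        var segments = new List<string>();
        foreach (string segment in urlPattern.Split('/')) {
            if (segment.StartsWith("{") && segment.EndsWith("}")) {
                string key = segment.Trim('{', '}');
                string value;
                if (!values.TryGetValue(key, out value)) {
                    return null; // URL parameter without a value: no match
                }
                segments.Add(value);
                usedKeys.Add(key);
            } else {
                segments.Add(segment); // literal segment copied as-is
            }
        }
        var extras = values.Keys.Where(k => !usedKeys.Contains(k))
                           .Select(k => k + "=" + values[k]);
        string query = extras.Any() ? "?" + string.Join("&", extras) : "";
        return "/" + string.Join("/", segments) + query;
    }
}
```

With the pattern static/url there are no URL parameters to fail on, so this sketch "matches" every attempt and pushes all the supplied route values into the query string, which is exactly how the broken URLs above come about.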

There are even more details I’ve glossed over having to do with how a route’s default values figure into URL generation. That’s a topic for another time, but it explains why you don’t run into this problem with routes to controller actions whose URLs have no parameters.

This might seem like a big problem, but the fix is actually very simple. Use names for all your routes and always use the route name when generating URLs. Most of the time, letting routing sort out which route you want to use to generate an URL is really leaving it to chance. When generating an URL, you generally know exactly which route you want to link to, so you might as well specify it by name.

Also, by specifying the name of the route, you avoid ambiguities and may even get a bit of a performance improvement since the routing engine can go directly to the named route and attempt to use it for URL generation.

So in the sample above where I have code to generate the two links, the following change fixes the issue (I changed the code to use named parameters to make it clear what the change was).

@Html.RouteLink(
    linkText: "route: Test", 
    routeName: "test", 
    routeValues: new {controller="section", action="Index", id=123}
)

@Html.RouteLink(
    linkText: "route: Default", 
    routeName: "default", 
    routeValues: new {controller="Home", action="Index", id=123}
)

People’s fates are simplified by their names.  ~Elias Canetti

And the same goes for routing. Smile

code, nuget, open source 0 comments suggest edit

Note, this blog post applies to v1.0 of NuGet and the details are subject to change in a future version.

In general, when you create a NuGet package, the files that you include in the package are not modified in any way but simply placed in the appropriate location within your solution.

However, there are cases where you may want a file to be modified or transformed in some way during installation. NuGet supports two types of transformations during installation of a package:

  • Config transformations
  • Source transformations

Config Transformations

Config transformations provide a simple way for a package to modify a web.config or app.config when the package is installed. Ideally, this type of transformation would be rare, but it’s very useful when needed.

One example of this is ELMAH (Error Logging Modules and Handlers for ASP.NET). ELMAH requires that its http modules and http handlers be registered in the web.config file.

In order to apply a config transform, add a file to your package’s content with the name of the file you want to transform followed by a .transform extension. For example, in the ELMAH package, there’s a file named web.config.transform.


The contents of that file looks like a web.config (or app.config) file, but it only contains the sections that need to be merged into the config file.

<configuration>
    <system.web>
        <httpModules>
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
        </httpModules>
        <httpHandlers>
            <add verb="POST,GET,HEAD" path="elmah.axd"
                type="Elmah.ErrorLogPageFactory, Elmah" />
        </httpHandlers>
    </system.web>
    <system.webServer>
        <validation validateIntegratedModeConfiguration="false" />
        <modules>
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
        </modules>
        <handlers>
            <add name="Elmah" verb="POST,GET,HEAD" path="elmah.axd"
                type="Elmah.ErrorLogPageFactory, Elmah" />
        </handlers>
    </system.webServer>
</configuration>

When NuGet sees this transformation file, it attempts to merge in the various sections into your existing web.config file. Let’s look at a simple example.

Suppose this is my existing web.config file.

Existing web.config File

<configuration>
    <system.web>
        <httpModules>
            <add name="MyCoolModule" type="Haack.MyCoolModule" />
        </httpModules>
    </system.web>
</configuration>

Now suppose I want my NuGet package to add an entry into the modules section of config. I’d simply add a file named web.config.transform to my package with the following contents.

web.config.transform File

<configuration>
    <system.web>
        <httpModules>
            <add name="MyNuModule" type="Haack.MyNuModule" />
        </httpModules>
    </system.web>
</configuration>

After I install the package, the web.config file will look like this:

Resulting web.config File

<configuration>
    <system.web>
        <httpModules>
            <add name="MyCoolModule" type="Haack.MyCoolModule" />
            <add name="MyNuModule" type="Haack.MyNuModule" />
        </httpModules>
    </system.web>
</configuration>

Notice that we didn’t replace the modules section, we merged our entry into the modules section.
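This kind of merge can be sketched with LINQ to XML. The code below is only an illustration of the merge semantics just described, not NuGet’s actual implementation:

```csharp
using System;
using System.Xml.Linq;

// Rough sketch of a section-merging transform: sections that already exist in
// the target are merged recursively; leaf entries (like <add ... />) and
// missing sections are appended to the target.
public static class ConfigMergeSketch {
    public static void Merge(XElement target, XElement transform) {
        foreach (var child in transform.Elements()) {
            var existing = target.Element(child.Name);
            if (existing != null && child.HasElements) {
                Merge(existing, child);          // merge into existing section
            } else {
                target.Add(new XElement(child)); // append new entry or section
            }
        }
    }
}
```

Running this with the two documents above leaves MyCoolModule in place and appends MyNuModule to the same httpModules section.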

I’m currently working on documenting the full set of rules for config transformations, which I will post to our NuGet documentation page once I’m done. I just wanted to give you a taste of what you can do today.

Also, in v1 of NuGet we only support these simple transformations. If we hear a lot of customer feedback that more powerful transformations are needed for their packages, we may consider supporting the more powerful web.config transformation language as an alternative to our simple approach.

Source Transformations

NuGet also supports source code transformations in a manner very similar to Visual Studio project templates. These are useful in cases where your NuGet package includes source code to be added to the developer’s project. For example, you may want to include some source code used to initialize your package library, but you want that code to exist in the target project’s namespace. Source transformations help in this case.

To enable source transformations, simply append the .pp file extension to your source file within your package.

Here’s a screenshot of a package I’m currently authoring.


When installed, this package will add four files to the target project’s ~/Models directory. These files will be transformed and the .pp extension will be removed. Let’s take a look at one of these files.

namespace $rootnamespace$.Models {
    public struct CategoryInfo {
        public string categoryid;
        public string description;
        public string htmlUrl;
        public string rssUrl;
        public string title;
    }
}

Notice the highlighted section that has the token $rootnamespace$. That’s a Visual Studio project property which gets replaced with the current project’s root namespace during installation.

We expect that $rootnamespace$ will be the most commonly used project property, though we support any project property such as $FileName$. The available properties may be specific to the current project type, but this MSDN documentation on project properties is a good starting point for what might be possible.
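Conceptually the transformation is simple token replacement. Here’s a minimal sketch, assuming the project properties arrive as a plain dictionary (the real implementation asks Visual Studio’s project system for the property values):

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class SourceTransformSketch {
    // Replaces $propertyName$ tokens with project property values and
    // leaves any unrecognized tokens untouched.
    public static string Transform(string source,
            IDictionary<string, string> properties) {
        return Regex.Replace(source, @"\$(\w+)\$", match => {
            string value;
            return properties.TryGetValue(match.Groups[1].Value, out value)
                ? value
                : match.Value; // unknown token: leave it alone
        });
    }
}
```

So a line like namespace $rootnamespace$.Models becomes namespace MvcApplication1.Models when installed into a project whose root namespace is MvcApplication1.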

nuget, code 0 comments suggest edit

My team has been hard at work the past few weeks cranking out code and today we are releasing the second preview of NuGet (which you may have heard referred to as NuPack in the past, but was renamed for CTP 2 by the community). If you’re not familiar with what NuGet is, please read my introductory blog post on the topic.

For a detailed list of what changed, check out the NuGet Release Notes.

To see NuGet in action, watch the talk Scott Hanselman gave at the Professional Developers Conference, which was the highest rated talk of the conference. You can watch it online or download it in HD.

How do I get it?

There are three ways to get NuGet CTP 2.

Via MVC 3

NuGet CTP 2 is included as part of the ASP.NET MVC 3 Release Candidate installation (install via Web PI or download the standalone installer). So when you install ASP.NET MVC 3 RC, you’ll have NuGet installed.

If you want to try out NuGet without installing ASP.NET MVC 3 RC, feel free to install it via the Visual Studio Extension Gallery.


As with all of our releases, we also make the download available on our CodePlex website.

What’s new?

As the release notes point out, we’ve made a lot of improvements. Some of the big ones are changes to the NuSpec package format, so if you have any old .nupkg files lying around, you’ll need to rebuild them with the new CTP 2 NuGet.exe command line tool.

But to be nice, we’ve already updated all the packages in the temporary feed, which is now at a new location, so you won’t need to do that. But if you’re building new packages, be sure to update your copy of NuGet.exe.

The NuSpec format now includes two new fields you should take advantage of if you are creating packages:

  • The iconUrl field specifies the URL for a 32x32 png icon that shows up next to your package entry within the Add Package Dialog. Be sure to set that to distinguish your package.
  • The projectUrl field points to a web page that provides more information about your package.

Another big change we made is that the package feed is now an Open Data Protocol (OData) service endpoint. This makes it easy for clients to write arbitrary queries using LINQ against an IQueryable interface, which are automatically translated to the proper query URL. For example, to see the first 10 packages whose Id starts with “N”: $filter=startswith(Id,’N’) eq true&$top=10
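To show the shape of such a query, here’s the same LINQ run against an in-memory IQueryable. The Package class and sample data below are made up for illustration; a real client would query a generated OData service context instead:

```csharp
using System;
using System.Linq;

class Package {
    public string Id { get; set; }
}

class Program {
    static void Main() {
        // Stand-in for the feed's IQueryable data source.
        var packages = new[] {
            new Package { Id = "NuGet.Sample" },
            new Package { Id = "Ninject" },
            new Package { Id = "Moq" }
        }.AsQueryable();

        // Against the OData endpoint, this query shape translates to:
        //   ?$filter=startswith(Id,'N') eq true&$top=10
        var results = packages
            .Where(p => p.Id.StartsWith("N"))
            .Take(10)
            .ToList();

        foreach (var p in results) {
            Console.WriteLine(p.Id);
        }
    }
}
```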

Also, when using the Powershell based Package Manager Console, be sure to note that we renamed the Add-Package command to Install-Package and the Remove-Package command to Uninstall-Package. We felt the new names conveyed the right semantics.

How’s things?

So far, the project has been a lot of fun to work on, in large part due to the enthusiasm and excitement that we’ve seen from the community. As I mentioned in the past, this is truly an Open Source project and we’ve had quite a few community code contributions.

Of course, we still have plenty of items up for grabs if you’re looking for something to work on.


One cool thing we’ve done is integrated the use of ReviewBoard for doing code reviews into our process. For information on that, check out our code review instructions. Our review board is currently hosted at but that domain name will change soon.

Continuous Integration

For those of you who like life in the fast lane, we do have a Team City based Continuous Integration (CI) server hosted at You can get daily builds compiled directly from our source tree. So for those of you who knew about the build server, you would have been playing with the CTP 2 for a while now. Winking

What’s next?

Well our next release is going to be NuGet version 1.0 RTM. A lot of our focus for this iteration will be on applying some spit and polish as well as integration work on our sister project, Gallery Server.

The Gallery Server project is building what will become the official gallery for NuGet (as well as for Orchard modules and other types of galleries). It’s being developed as an Open Source project as well so that anyone can take the source and host their own galleries.

Once the gallery server is completed and hosted, we’ll start to transition from our current temporary feed over to the gallery server. We’ll leave the temporary feed up for a while to allow people time to transition over to whatever the final official gallery location ends up at.

At this point, if you haven’t tried NuGet, give it a try. If you have tried it, let us know what you think. I hope you enjoy using it, I know I do. Smile

mvc, code, nuget 0 comments suggest edit

Today we’re releasing the release candidate for ASP.NET MVC 3. We’re in the home stretch now so it’ll mostly be bug fixes and small tweaks from here on out.

There are two ways to install ASP.NET MVC 3: install it via the Web Platform Installer, or download the standalone installer.

Also, be sure to check out the ASP.NET MVC 3 web page for information and content about ASP.NET MVC 3 as well as the release notes for this release.

Also, don’t miss Scott Guthrie’s blog post on ASP.NET MVC 3 which provides the usual level of detail on the release.

Razor Intellisense. Ah Yeah!

Probably the most frequently asked question I received when we released the Beta of ASP.NET MVC 3 was “When are we going to get Intellisense for Razor?” Well I’m happy to say the answer to that question is right now!

Not only Intellisense, but syntax highlighting and colorization also works for Razor views. ScottGu’s blog post I mentioned earlier has some screenshots of the Intellisense in action as well as details on some of the other improvements included in ASP.NET MVC 3 RC.


As I wrote earlier, this release of ASP.NET MVC includes an updated version of NuGet, a free and open source Package Manager that integrates nicely into Visual Studio.

What’s Next?

Well if all goes well, we’ll land this plane nicely with an RTM release, and then it’s time to start thinking about ASP.NET MVC 4. There, I said it. Well, actually, I should probably already be thinking about 4, but seriously, can’t a guy catch a break once in a while to breathe for a moment?

Well, since I’m lazy, I’ll probably be asking you very soon for your thoughts on what you’d like to see us focus on for the next version of ASP.NET MVC. Then I can present your best ideas as my own in the next executive review. You don’t mind that at all, do you? Winking

Seriously though, please do provide feedback and I’ll keep you posted on our planning.

Now that we have NuGet in place, one thing we’ll be focusing on is building packages for features that we would have liked to include in ASP.NET MVC but didn’t have time to implement, or perhaps for experimental features that we’d like feedback on. I think building NuGet packages will be a great way to try out new feature ideas, and for the ones we think belong in the product, we can always roll them into ASP.NET MVC core.

0 comments suggest edit

This month’s Scientific American has an interesting commentary by Scott Lilienfield entitled Fudge Factor that discusses the fine line between academic misconduct and errors caused by confirmation bias.

For a great description of confirmation bias, read the You Are Not So Smart post on the topic.

The Misconception: Your opinions are the result of years of rational, objective analysis.

The Truth: Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.

The fudge factor article talks about some of the circumstances that contribute to confirmation bias in the sciences.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

Obviously this doesn’t just apply to scientists. I’m sure we all know developers who are equally prone to confirmation bias, present company excluded of course. Winking smile Pretty much everybody is susceptible. We all probably witnessed an impressive (in magnitude) display of confirmation bias in the recent elections.

However, there’s another contributing factor that the article doesn’t touch upon that I think is worth calling out, our education system. I remember when I was in high school and college, I had a lot of “lab” classes for the various sciences. We’d conduct experiments, take measurements, and plot the measurements on a graph. However, we already knew what the results were supposed to look like. So if a measurement was way off the expected graph, there was a tendency to retake the measurement.

“Whoops, I must’ve nudged the apparatus when I took that measurement, let’s try it again.”

As the article points out (emphasis mine)…

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias.

So how can schools do a better job of teaching scientific methods? I think one interesting thing a teacher can do is have students conduct an experiment where the students think they know what the expected results should be beforehand, but where the actual results will not match up.

I think this would be interesting as an experiment in its own right. I’d be curious to see how many students turn in results which match their expectations rather than what matched their actual observations. That could provide a powerful teaching opportunity about scientific methods and confirmation bias.

code 0 comments suggest edit

It was a dark and stormy coding session; the rain fell in torrents as my eyes were locked to two LCD screens in a furious display of coding …


…sorry sorry, I just can’t continue. It’s all a lie.

This is actually a cautionary tale describing one subtle way that you can run afoul of Code Access Security (CAS) when attempting to run an application in partial trust. But who wants to read about that? Right? Right?

Well this isn’t a sordid tale, but if you bear with me, you may just find it interesting. Either that, or you may just take pity on me that I find this type of thing interesting.

I was hacking on NuGet the other day and all I wanted to do was write some code that accessed the version number of the current assembly. This is something we do in Subtext, for example. If you scroll to the very bottom of the admin section, you’ll see the following.

Subtext Admin - Feedback - Google

As you can imagine, the code to get the version number is very straightforward:

Assembly.GetExecutingAssembly().GetName().Version.ToString();

Or is it!? (cue scary organ music)

What the code does here (besides appearing to smack the Law of Demeter in the mouth) is get the currently executing assembly. From that it gets the Assembly name and extracts the version from the name. What could go wrong? I tested this in medium trust and it received the “works on my machine” seal of approval!

But does it work all the time? Well if it did, I wouldn’t be writing this blog post would I?

Fortunately, my colleague David Fowler caught this latent bug during a code review. Levi (no blog) Broderick was brought in to help explain the whole issue so a dunce like me could understand it. These two co-workers are scary smart and must never be allowed to fall into a life of crime as they would decimate the countryside. Just letting you know.

As it turns out, code exactly like this was the source of a medium trust bug in ASP.NET MVC 2 (that we fortunately caught and fixed before RTM). So what gives?

Well there’s a very subtle latent bug with this code. To illustrate, I’ll put the code in context. The following snippet is a class library that makes use of the code I just wrote.

using System.Reflection;
using System.Security;

[assembly: SecurityTransparent]

namespace ClassLibrary1 {
  public static class Class1 {
    public static string GetExecutingAssemblyVersion() {
      return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
  }
}
We need an application to reference that code. The following is code for an ASP.NET MVC controller with an action method that calls the method in the class library and returns it as a string. It may seem odd that the action method returns a string rather than an ActionResult, but that’s allowed. ASP.NET MVC simply wraps it in a ContentResult.

using System.Web.Mvc;

namespace MvcApplication1.Controllers {
  public class HomeController : Controller {
    public string ClassLibAssemblyVersion() {
      return ClassLibrary1.Class1.GetExecutingAssemblyVersion();
    }
  }
}

Still with me?

When I run this application and visit /Home/ClassLibAssemblyVersion everything works fine and we see the version number.

(Screenshot: http://localhost:29519/home/ClassLibAssemblyVersion displaying the version number in the browser.)

Now’s where the party gets a bit wild (but still safe for work). At this point, I’ll put the class library assembly in the GAC and then recompile the application. I’m going to assume you know how to do that. Note that I’ll need to remove the local copy of the class library from the bin directory of my ASP.NET MVC application and also remove the project reference and replace it with a GAC reference.

When I do that and run the application again, I get a SecurityException.


Oh noes!

So what happened here? Reflector to the rescue! Looking at the stack trace, let’s dig into RuntimeAssembly.GetName(Boolean copiedName) method.

public override AssemblyName GetName(bool copiedName) {
    AssemblyName name = new AssemblyName();
    string codeBase = this.GetCodeBase(copiedName);
    // ... snipped for brevity ...

    return name;
}
I’ve snipped out some code so we can focus on the interesting part. This method wants to return a fully populated AssemblyName instance. One of the properties of AssemblyName is CodeBase, which is a path to the assembly.

Once it has this path, it attempts to verify the path by calling VerifyCodeBaseDiscovery. Let’s take a look.

private void VerifyCodeBaseDiscovery(string codeBase) {
    if ((codeBase != null) &&
        (string.Compare(codeBase, 0, "file:", 0, 5,
          StringComparison.OrdinalIgnoreCase) == 0)) {
        URLString str = new URLString(codeBase, true);
        new FileIOPermission(FileIOPermissionAccess.PathDiscovery,
          str.GetFileName()).Demand();
    }
}

Notice that last line of code? It’s making a security demand to check if you have path discovery permissions on the specified path. That’s what’s failing. Why?

Well before you put the assembly in the GAC, the assembly was being loaded from your bin directory. Naturally, even in medium trust, you have rights to discover that path. But now that the class library is in the GAC, it’s being loaded from a subdirectory of c:\Windows\Assembly and guess what. Your medium trust application doesn’t have path discovery permissions to that directory.

As an aside, I think it’s too bad that this particular property doesn’t check its security demand lazily. That would be my kind of property access. My gut feeling is that people ask for an assembly’s CodeBase far less often than they ask for the other “safe” properties, like Version!

So how do we fix this? Well the answer is to construct our own AssemblyName instance.

new AssemblyName(typeof(Class1).Assembly.FullName).Version.ToString();

This implementation avoids the security issue I mentioned earlier because we’re generating the AssemblyName instance ourselves and it never has a reference to the disallowed path.
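To contrast the two approaches side by side, here’s a small sketch (the helper class and method names below are mine, purely for illustration):

```csharp
using System;
using System.Reflection;

public static class VersionHelper {
    // Risky in partial trust: GetName() populates CodeBase, which demands
    // path discovery permission on the assembly's location (e.g. the GAC).
    public static string UnsafeVersion(Type typeInAssembly) {
        return typeInAssembly.Assembly.GetName().Version.ToString();
    }

    // Safe: parse the version out of the assembly's full name; no CodeBase
    // is ever computed, so no path discovery demand is made.
    public static string SafeVersion(Type typeInAssembly) {
        return new AssemblyName(typeInAssembly.Assembly.FullName)
            .Version.ToString();
    }
}
```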

If you want to see this in action, I put together a little demo showing the bad approach and the fixed approach.

You’ll need to GAC the ClassLibrary1 assembly to see the exception occurred. I have another action that has the safe implementation. Try it out.

As a tangent, the astute reader may have noticed that I used the assembly level SecurityTransparentAttribute in my class library. Is that a case of my assembly attempting to deal with self esteem issues and shying away from a clamoring public? Why did I put that attribute there? The answer to that, my friends, is a story for another time. Smile

code, open source, nuget 0 comments suggest edit

The polls have closed and we now have a new name for our project, NuGet (pronounced “New Get” and not “Nugget” and not “Noojay” for you hoity-toity) which had the most votes by a large margin.

For those who missed it, the following posts will get you up to speed on the name change:

Over the next couple of days we’ll start transitioning the project over to the new name. We’ll try to minimize the impact of the change and make sure existing links to the CodePlex project redirect to the new URL. If you have a local clone of the repository with work in progress when we rename the project, don’t worry. All you have to do is push your changes to the new URL for your fork rather than the old one.

Thanks for your participation and support! I’m glad to have this behind us so we can continue to focus on delivering a great product. I’ve even thought of a tagline we can use until one of you come up with a much better one. Winking

NuGet: A new way to get libraries.

OR NuGet The caramel goodness of open source in your projects.

Tags: package manager, NuGet, not-nupack

nuget, code, open source 0 comments suggest edit

Just a quick follow-up to my last posts about naming NuPack. Looks like the community is not content to sit back and let the project be labelled with a lame name. I’ve seen a couple of community inspired names created as new issues in the CodePlex issue tracker.

NFetch currently has a huge lead, but the community-inspired NRocks is a close second. The name I like the best so far is NuGet.

(vote for it here)

As before, voting still closes on Tuesday 10/26 at 11:59 PM PDT. If you feel strongly enough around a name, rally your friends to vote for one. Smile

open source, nuget 0 comments suggest edit

There are only 2 hard problems in Computer Science. Naming things, cache invalidation and off-by-one errors.

I’m always impressed with the passion of the open source community and nothing brings it out more than a naming exercise. Smile

In my last blog post, I posted about our need to rename NuPack. Needless to say, I got a bit of passionate (and occasionally angry) feedback. There have been a lot of questions that keep coming up over and over again, and I thought I would try to address the most common ones here.

Why not stay with the NuPack name? It was just fine!

In the original announcement, we pointed out that:

We want to avoid confusion with another software project that just happens to have the same name. This other project, NUPACK, is a software suite by a group of researchers at Caltech having to do with analysis and design of nucleic acid systems.

Now some of you may be thinking, “Why let that stop you? Many projects in different fields are fine sharing the same name. After all, you named a blog engine Subtext and there’s a Subtext programming language already.”

There’s a profound difference between Microsoft starting an open source project that accepts contributions and some nobody named Phil Haack starting a little blog engine project.

Most likely, the programming language project has never heard of Subtext and Subtext doesn’t garner enough attention for them to care.

As Paula Hunter points out in a comment on the Outercurve blog post:

Sometimes we are victims of our own success, and NuPack has generated so much buzz that it caught CalTech’s attention. They have been using NuPack since 2007 and theoretically could assert their common law right of “first use” (and, they recently filed a TM application). Phil and the project team are doing the right thing in making the change now while the project is young. Did they have to? The answer is debatable, but they want to eliminate confusion and show respect to CalTech’s project team.

Naming is tough, and you can’t please everyone, but a year from now, most won’t remember the old name. How many remember Mozilla “Firebird”?

Apparently, we’re in good company when it comes to open source projects that have had to pick a new name. It’s always a painful process. This time around, we’re following guidelines posted by Paula in a blog post entitled The Naming Game: Things to consider when naming an open source project, which talks about this concept of “first use” Paula mentioned.

Why not go back to NPack?

There’s already a project on CodePlex with that name.

Why not name it NGem?

Honestly, I’d prefer not to use the N prefix. I know one of the choices we provided had it in the name, but it was one of the better names we could come up with. Also, I’d like to not simply appropriate a name associated with the Ruby community. I think that could cause confusion as well. I’d love to have a name that’s uniquely ours if possible.

Why not name it ****?

In the original announcement, we listed three criteria:

  • Domain name available
  • No other project/product with a name similar to ours in the same field
  • No outstanding trademarks on the name that we could find

Domain name

The reason we wanted to make sure the domain name is available is that if it is, it’s less likely to be the name of an existing product or company. Not only that, we need a decent domain name to help market our project. This is one area where I think the community is telling us to be flexible, and I’m willing to consider being more flexible about it, as long as the name we choose doesn’t run afoul of the second criterion and we get a decent domain name that doesn’t cause confusion with other projects.

Product/Project With Similar Names

This one is a judgment call, but all it takes is a little time with Google/Bing to assess the risk here. There’s always going to be a risk that the name we pick will conflict with something out there. The point is not to eliminate risk but reduce it to a reasonable level. If you think of a name, try it out in a search engine and see what you find.


Trademarks
This one is tricky. If your search engine doesn’t pull up anything, it’s unlikely there’s a trademark. Even so, it doesn’t hurt to run the name through the US Patent and Trademark Office’s Trademark Basic Word Mark Search and make sure it’s clean there. I’m not sure how comprehensive or accurate that search is, but if the name does show up there, you’re facing more risk than if it doesn’t.

I have a name that meets your criteria and is way better than the four options you gave us!

Ok, this is not exactly a question, but something I hear a lot. In the original blog post, we said the following:

Can I write in my own suggestion?

Unfortunately no. Again, we want to make sure we can secure the domains for our new project name, so we needed to start with a list that was actually attainable. If you really can’t bring yourself to pick even one, we won’t be offended if you abstain from voting. And don’t worry, the product will continue to function in the same way despite the name change.

However, I don’t want to be completely unreasonable, and I think people have found a loophole. We’re conducting voting through our issue tracker, and voting closes 10/26 at 11:59 PM PDT. Our reasoning for not accepting suggestions was that we wanted to avoid domain squatting. However, one creative individual created a bug to rename NuPack to a name for which they own the domain name and are willing to assign it over to the Outercurve Foundation.

Right now, NFetch is way in the lead. But if some other name were to take the lead and meet all our criteria, I’d consider it. I reserve the right of veto power because I know one of you will put something obscene up there and somehow get a bajillion votes. Yeah, I have my eye on you Rob!

So where does that leave us?

We really don’t want to leave naming the project as an open ended process. So I think it’s good to set a deadline. On the morning of 10/27, for better or worse, you’ll wake up to a new name for the project.

Maybe you’ll hate it. Maybe you’ll love it. Maybe you’ll be ambivalent. Either way, over time, hopefully this mess will fade to a distant memory (much as Firebird has) and the name will start to fit in its new clothes.

As Paul Castle stated over Twitter:

@haacked to me the name is irrelevant the prouduct is ace

No matter what the name is, we’re still committed to delivering the best product we can with your help!

And no, we’re not going to name it:


nuget, code, open source

UPDATE: The new name is NuGet

The NuPack project is undergoing a rename and we need your help! For details, read the announcement about the rename on the Outercurve Foundation’s blog.

What is the new name?

We don’t know. You tell us! The NuPack project team brainstormed a set of names and narrowed down the list to four names.

I’ve posted a set of names as issues in our NuPack site and will ask you to vote for your favorite name among the lot. Vote for as many as you want, but realize that if you vote for all of them, you’ve just cancelled your vote. Winking

Here are the choices:

Voting will close at 10/26 at 11:59 PM.

nuget, code, open source

Note: Everything I write here is based on a very early pre-release version of NuGet (formerly known as NuPack) and is subject to change.

A few weeks ago I wrote a blog post introducing the first preview, CTP 1, of NuGet Package Manager. It’s an open source (we welcome contributions!) developer focused package manager meant to make it easy to discover and make use of third party dependencies as well as keep them up to date.

As of CTP 2, NuGet by default points to an OData service at a temporary location (in CTP 1, this was an ATOM feed).

This feed was set up so that people could try out NuGet, but it’s only temporary. We’ll have a more permanent gallery set up as we get closer to RTM.

If you want to get your package in the temporary feed, follow the instructions at the companion project, NuPackPackages.

Local Feeds

Some companies keep very tight control over which third party libraries their developers may use. They may not want their developers to point NuGet to arbitrary code over the internet. Or, they may have a set of proprietary libraries they want to make available for developers via NuGet.

NuGet supports these scenarios with a combination of two features:

  1. NuGet can point to any number of different feeds. You don’t have to point it to just our feed.
  2. NuGet can point to a local folder (or network share) that contains a set of packages.

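To give a concrete picture of what registered package sources look like, here’s a sketch of a NuGet.config-style settings file with both a remote feed and a local folder registered. Note the file name, schema, feed URL, and folder path shown here are illustrative; the CTP builds may store these settings differently.

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The default remote feed (URL is a placeholder) -->
    <add key="Default" value="http://example.com/nuget/feed" />
    <!-- A local folder of .nupkg files on my desktop -->
    <add key="Local Packages" value="C:\Users\phil\Desktop\packages" />
  </packageSources>
</configuration>
```

Each `add` entry is just a friendly name paired with a location, which is why a plain folder or network share works as a source just as well as a URL.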
For example, suppose I have a folder on my desktop named packages and drop in a couple of packages that I created like so:


I can add that directory to the NuGet settings. To get to the settings, go to the Visual Studio Tools | Options dialog and scroll down to Package Manager.

A shortcut to get there is to go to the Add Package Dialog and click on the Settings button or click the button in the Package Manager Console next to the list of package sources. This brings up the Options dialog (click to see larger image).


Type in the path to your packages folder and then click the Add button. Your local directory is now added as another package feed source.


When you go back to the Package Manager Console, you can choose this new local package source and list the packages in that source.
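In the console, that looks something like the following. I’m using the Get-Package/Install-Package command names from later NuGet releases here; the CTP builds used slightly different verbs, and both the folder path and the package id are hypothetical.

```powershell
# List the packages available in the local folder source
Get-Package -ListAvailable -Source "C:\Users\phil\Desktop\packages"

# Install one of them into the current project (package id is made up)
Install-Package MyLocalPackage -Source "C:\Users\phil\Desktop\packages"
```

Passing `-Source` explicitly means you don’t even have to change the default package source to try out a local folder.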


You can also install packages from your local directory. If you’re creating packages, this is a great way to test them out without having to publish them online anywhere.

Note that if you launch the Add Package Reference Dialog, you won’t see the local package feed unless you’ve made it the default package source. This limitation is only temporary as we’re changing the dialog to allow you to select the package source.


Now when you launch the Add Package Reference Dialog, you’ll see your local packages.


Please note, as of CTP 1, if one of these local packages has a dependency on a package in another registered feed, it won’t work. However, we are tracking this issue and plan to implement this feature in the next release.

Custom Read Only Feeds

Let’s suppose that what you really want to do is host a feed at a URL rather than a package folder. Perhaps you’re known for your great taste in music and package selection, and you want to host your own curated NuGet feed of the packages you think are great.

Well, you can do that with NuGet. For step-by-step instructions, check out this follow-up blog post, Hosting a Simple “Read Only” NuGet Package Feed.

We imagine that the primary usage of NuGet will be to point it to our main online feed. But the flexibility of NuGet to allow for private local feeds as well as curated feeds should appeal to many.

Tags: NuGet, Package Manager, OData