nuget, code

My team has been hard at work the past few weeks cranking out code and today we are releasing the second preview of NuGet (which you may have heard referred to as NuPack in the past, but was renamed for CTP 2 by the community). If you’re not familiar with what NuGet is, please read my introductory blog post on the topic.

For a detailed list of what changed, check out the NuGet Release Notes.

To see NuGet in action, watch the talk Scott Hanselman gave at the Professional Developers Conference, which was the highest-rated talk of the conference. You can watch it online or download it in HD.

How do I get it?

There are three ways to get NuGet CTP 2.

Via MVC 3

NuGet CTP 2 is included as part of the ASP.NET MVC 3 Release Candidate installation (install it via Web PI or download the standalone installer). So when you install ASP.NET MVC 3 RC, you’ll have NuGet installed.

If you want to try out NuGet without installing ASP.NET MVC 3 RC, feel free to install it via the Visual Studio Extension Gallery.

Via CodePlex.com

As with all of our releases, we also make the download available on our CodePlex website.

What’s new?

As the release notes point out, we’ve made a lot of improvements. Some of the big ones are changes to the NuSpec package format, so if you have any old .nupkg files lying around, you’ll need to rebuild them with the new CTP 2 NuGet.exe command line tool.

But to be nice, we’ve already updated all the packages in the temporary feed (which is now at a new location), so you won’t need to do that. But if you’re building new packages, be sure to update your copy of NuGet.exe.

The NuSpec format now includes two new fields you should take advantage of if you are creating packages:

  • The iconUrl field specifies the URL for a 32×32 PNG icon that shows up next to your package entry within the Add Package Dialog. Be sure to set it to distinguish your package.
  • The projectUrl field points to a web page that provides more information about your package.

Another big change we made is that the package feed is now an Open Data Protocol (OData) service endpoint. This makes it easy for clients to write arbitrary queries using LINQ against an IQueryable interface, which are automatically translated into the proper query URL. For example, to see the first 10 packages that start with “N”:

http://feed.nuget.org/current/odata/v1/Packages?$filter=startswith(Id,'N') eq true&$top=10
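
Since the feed is just an OData endpoint, you can also issue that query from code. Here’s a rough sketch using the WCF Data Services client; the Package class and its properties are a hypothetical stand-in for the feed’s actual schema:

using System;
using System.Linq;
using System.Data.Services.Client; // WCF Data Services client
using System.Data.Services.Common;

// Hypothetical type matching the shape of a package entry in the feed.
[DataServiceKey("Id")]
public class Package {
  public string Id { get; set; }
  public string Version { get; set; }
}

public class FeedQuerySketch {
  public static void Main() {
    var context = new DataServiceContext(
      new Uri("http://feed.nuget.org/current/odata/v1"));

    // When enumerated, this LINQ query is translated into a query URL
    // with the same $filter and $top options shown above.
    var packages = context.CreateQuery<Package>("Packages")
      .Where(p => p.Id.StartsWith("N"))
      .Take(10);

    foreach (var package in packages) {
      Console.WriteLine(package.Id);
    }
  }
}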

Also, when using the PowerShell-based Package Manager Console, be sure to note that we renamed the Add-Package command to Install-Package and the Remove-Package command to Uninstall-Package. We felt the new names conveyed the right semantics.

How’s things?

So far, the project has been a lot of fun to work on, in large part due to the enthusiasm and excitement that we’ve seen from the community. As I mentioned in the past, this is truly an Open Source project and we’ve had quite a few community code contributions.

Of course, we still have plenty of items up for grabs if you’re looking for something to work on.

ReviewBoard

One cool thing we’ve done is integrate ReviewBoard into our code review process. For information on that, check out our code review instructions. Our ReviewBoard instance is currently hosted at http://reviewboard.nupack.com/ but that domain name will change soon.

Continuous Integration

For those of you who like life in the fast lane, we have a TeamCity-based Continuous Integration (CI) server hosted at http://ci.nuget.org:8080/. You can get daily builds compiled directly from our source tree. If you knew about the build server, you could have been playing with CTP 2 for a while now. ;)

What’s next?

Well, our next release is going to be NuGet version 1.0 RTM. A lot of our focus for this iteration will be on applying some spit and polish, as well as on integration work with our sister project, Gallery Server.

The Gallery Server project is building what will become the official gallery for NuGet (as well as for Orchard modules and other types of galleries). It’s being developed as an Open Source project as well so that anyone can take the source and host their own galleries.

Once the gallery server is completed and hosted, we’ll start to transition from our current temporary feed over to the gallery server. We’ll leave the temporary feed up for a while to allow people time to transition over to whatever the final official gallery location ends up being.

At this point, if you haven’t tried NuGet, give it a try. If you have tried it, let us know what you think. I hope you enjoy using it; I know I do. :)

asp.net, asp.net mvc, code, nuget

Today we’re releasing the release candidate for ASP.NET MVC 3. We’re in the home stretch now so it’ll mostly be bug fixes and small tweaks from here on out.

There are two ways to install ASP.NET MVC 3:

  • Install it using the Web Platform Installer
  • Download and run the standalone installer

Also, be sure to check out the ASP.NET MVC 3 web page for information and content about ASP.NET MVC 3 as well as the release notes for this release.

Also, don’t miss Scott Guthrie’s blog post on ASP.NET MVC 3 which provides the usual level of detail on the release.

Razor Intellisense. Ah Yeah!

Probably the most frequently asked question I received when we released the Beta of ASP.NET MVC 3 was “When are we going to get Intellisense for Razor?” Well I’m happy to say the answer to that question is right now!

Not only Intellisense, but syntax highlighting and colorization also work for Razor views. ScottGu’s blog post I mentioned earlier has some screenshots of the Intellisense in action, as well as details on some of the other improvements included in ASP.NET MVC 3 RC.

NuGet

As I wrote earlier, this release of ASP.NET MVC includes an updated version of NuGet, a free and open source Package Manager that integrates nicely into Visual Studio.

What’s Next?

Well if all goes well, we’ll land this plane nicely with an RTM release, and then it’s time to start thinking about ASP.NET MVC 4. There, I said it. Well, actually, I should probably already be thinking about 4, but seriously, can’t a guy catch a break once in a while to breathe for a moment?

Well, since I’m lazy, I’ll probably be asking you very soon for your thoughts on what you’d like to see us focus on for the next version of ASP.NET MVC. Then I can present your best ideas as my own in the next executive review. You don’t mind that at all, do you? ;)

Seriously though, please do provide feedback and I’ll keep you posted on our planning.

Now that we have NuGet in place, one thing we’ll be focusing on is building packages for features that we would have liked to include in ASP.NET MVC but didn’t have time to implement. Or perhaps simply for experimental features that we’d like feedback on. I think building NuGet packages will be a great way to try out new feature ideas, and for the ones we think belong in the product, we can always roll them into the ASP.NET MVC core.


This month’s Scientific American has an interesting commentary by Scott Lilienfeld entitled Fudge Factor that discusses the fine line between academic misconduct and errors caused by confirmation bias.

For a great description of confirmation bias, read YouAreNotSoSmart.com’s post on the topic.

The Misconception: Your opinions are the result of years of rational, objective analysis.

The Truth: Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.

The Fudge Factor article talks about some of the circumstances that contribute to confirmation bias in the sciences.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

Obviously this doesn’t just apply to scientists. I’m sure we all know developers who are equally prone to confirmation bias, present company excluded of course. ;) Pretty much everybody is susceptible. We all probably witnessed an impressive (in magnitude) display of confirmation bias in the recent elections.

However, there’s another contributing factor that the article doesn’t touch upon that I think is worth calling out: our education system. I remember when I was in high school and college, I had a lot of “lab” classes for the various sciences. We’d conduct experiments, take measurements, and plot the measurements on a graph. However, we already knew what the results were supposed to look like. So if a measurement was way off the expected graph, there was a tendency to retake the measurement.

“Whoops, I must’ve nudged the apparatus when I took that measurement, let’s try it again.”

As the article points out (emphasis mine)…

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias.

So how can schools do a better job of teaching scientific methods? I think one interesting thing a teacher can do is have students conduct an experiment where the students think they know what the expected results should be beforehand, but where the actual results will not match up.

I think this would be interesting as an experiment in its own right. I’d be curious to see how many students turn in results which match their expectations rather than what matched their actual observations. That could provide a powerful teaching opportunity about scientific methods and confirmation bias.

code, asp.net

It was a dark and stormy coding session; the rain fell in torrents as my eyes were locked to two LCD screens in a furious display of coding …


…sorry sorry, I just can’t continue. It’s all a lie.

This is actually a cautionary tale describing one subtle way that you can run afoul of Code Access Security (CAS) when attempting to run an application in partial trust. But who wants to read about that? Right? Right?

Well this isn’t a sordid tale, but if you bear with me, you may just find it interesting. Either that, or you may just take pity on me that I find this type of thing interesting.

I was hacking on NuGet the other day and all I wanted to do was write some code that accessed the version number of the current assembly. This is something we do in Subtext, for example. If you scroll to the very bottom of the admin section, you’ll see the following.

[Screenshot: the Subtext admin section showing the version number at the bottom of the page]

As you can imagine, the code to get the version number is very straightforward:

System.Reflection.Assembly.GetExecutingAssembly().GetName().Version

Or is it!? (cue scary organ music)

What the code does here (besides appearing to smack the Law of Demeter in the mouth) is get the currently executing assembly. From that it gets the AssemblyName and extracts the version from the name. What could go wrong? I tested this in medium trust and it received the “works on my machine” seal of approval!

But does it work all the time? Well if it did, I wouldn’t be writing this blog post would I?

Fortunately, my colleague David Fowler caught this latent bug during a code review. Levi (no blog) Broderick was brought in to help explain the whole issue so a dunce like me could understand it. These two co-workers are scary smart and must never be allowed to fall into a life of crime as they would decimate the countryside. Just letting you know.

As it turns out, code exactly like this was the source of a medium trust bug in ASP.NET MVC 2 (that we fortunately caught and fixed before RTM). So what gives?

Well, there’s a very subtle latent bug in this code. To illustrate, I’ll put the code in context. The following snippet is a class library that makes use of the code I just wrote.

using System.Reflection;
using System.Security; 
[assembly: SecurityTransparent] 
namespace ClassLibrary1 {
  public static class Class1 {
    public static string GetExecutingAssemblyVersion() {
        return Assembly.GetExecutingAssembly().GetName().Version.ToString();
    }
  }
}

We need an application to reference that code. The following is code for an ASP.NET MVC controller with an action method that calls the method in the class library and returns it as a string. It may seem odd that the action method returns a string rather than an ActionResult, but that’s allowed. ASP.NET MVC simply wraps it in a ContentResult.

using System.Web.Mvc;

namespace MvcApplication1.Controllers {
  public class HomeController : Controller {
        public string ClassLibAssemblyVersion() {
            return ClassLibrary1.Class1.GetExecutingAssemblyVersion();
        }
    }
}

Still with me?

When I run this application and visit /Home/ClassLibAssemblyVersion, everything works fine and we see the version number.

[Screenshot: the version number displayed in the browser]

Now here’s where the party gets a bit wild (but still safe for work). At this point, I’ll put the class library assembly in the GAC and then recompile the application. I’m going to assume you know how to do that. Note that I’ll need to remove the local copy of the class library from the bin directory of my ASP.NET MVC application and also remove the project reference and replace it with a GAC reference.

When I do that and run the application again, I get:

[Screenshot: a SecurityException error page]

Oh noes!

So what happened here? Reflector to the rescue! Looking at the stack trace, let’s dig into the RuntimeAssembly.GetName(Boolean copiedName) method.

[SecuritySafeCritical]
public override AssemblyName GetName(bool copiedName) {
    AssemblyName name = new AssemblyName();
    string codeBase = this.GetCodeBase(copiedName);
    this.VerifyCodeBaseDiscovery(codeBase);
    
    // ... snipped for brevity ...

    return name;
}

I’ve snipped out some code so we can focus on the interesting part. This method wants to return a fully populated AssemblyName instance. One of the properties of AssemblyName is CodeBase, which is a path to the assembly.

Once it has this path, it attempts to verify the path by calling VerifyCodeBaseDiscovery. Let’s take a look.

[SecurityCritical]
private void VerifyCodeBaseDiscovery(string codeBase)
{
    if ((codeBase != null) && 
      (string.Compare(codeBase, 0, "file:", 0, 5
        , StringComparison.OrdinalIgnoreCase) == 0))
    {
        URLString str = new URLString(codeBase, true);
        new FileIOPermission(FileIOPermissionAccess.PathDiscovery
          , str.GetFileName()).Demand();
    }
}

Notice that last line of code? It’s making a security demand to check if you have path discovery permissions on the specified path. That’s what’s failing. Why?

Well, before you put the assembly in the GAC, the assembly was being loaded from your bin directory. Naturally, even in medium trust, you have rights to discover that path. But now that the class library is in the GAC, it’s being loaded from a subdirectory of c:\Windows\Assembly and guess what: your medium trust application doesn’t have path discovery permission for that directory.
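
If you’re curious, you can see the path the demand checks by inspecting the CodeBase property yourself under full trust. Here’s a quick sketch (the paths in the comments are hypothetical examples):

var assemblyName = typeof(ClassLibrary1.Class1).Assembly.GetName();

// Bin-deployed, this prints something like
//   file:///C:/MyApp/bin/ClassLibrary1.DLL
// which a medium trust application is allowed to discover. Once the
// assembly is GAC'd, the path points under c:\Windows\Assembly, which
// medium trust may not discover, so the Demand() above throws a
// SecurityException.
Console.WriteLine(assemblyName.CodeBase);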

As an aside, I think it’s too bad that this particular property doesn’t check its security demand lazily. That would be my kind of property access. My gut feeling is that people don’t ask for an assembly’s CodeBase nearly as often as they ask for the other “safe” properties, like Version!

So how do we fix this? Well the answer is to construct our own AssemblyName instance.

new AssemblyName(typeof(Class1).Assembly.FullName).Version.ToString();

This implementation avoids the security issue I mentioned earlier because we’re generating the AssemblyName instance ourselves and it never has a reference to the disallowed path.
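
Applied to the ClassLibrary1 example from earlier, the fixed method looks like this:

using System.Reflection;
using System.Security;

[assembly: SecurityTransparent]

namespace ClassLibrary1 {
  public static class Class1 {
    public static string GetExecutingAssemblyVersion() {
      // Parse the version from the assembly's display name rather than
      // calling GetName(), which demands path discovery permission on
      // the assembly's codebase.
      return new AssemblyName(typeof(Class1).Assembly.FullName)
        .Version.ToString();
    }
  }
}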

If you want to see this in action, I put together a little demo showing the bad approach and the fixed approach.

You’ll need to GAC the ClassLibrary1 assembly to see the exception occur. I have another action that has the safe implementation. Try it out.

As a tangent, the astute reader may have noticed that I used the assembly-level SecurityTransparentAttribute in my class library. Is that a case of my assembly attempting to deal with self-esteem issues and shying away from a clamoring public? Why did I put that attribute there? The answer to that, my friends, is a story for another time. :)

code, open source, nuget

The polls have closed and we now have a new name for our project, NuGet (pronounced “New Get” and not “Nugget” and not “Noojay” for you hoity-toity types), which had the most votes by a large margin.

For those who missed it, the following posts will get you up to speed on the name change:

Over the next couple of days we’ll start transitioning the project over to the new name. We’ll try to minimize the impact of the change and make sure existing links to the CodePlex project redirect to the new URL. If you have a local clone of the repository with work in progress when we rename the project, don’t worry. All you have to do is push your changes to the new URL for your fork rather than the old one.

Thanks for your participation and support! I’m glad to have this behind us so we can continue to focus on delivering a great product. I’ve even thought of a tagline we can use until one of you comes up with a much better one. ;)

NuGet: A new way to get libraries.

Or: NuGet: The caramel goodness of open source in your projects.

Tags: package manager, NuGet, not-nupack

nuget, code, open source

Just a quick follow-up to my last posts about naming NuPack. Looks like the community is not content to sit back and let the project be labelled with a lame name. I’ve seen a couple of community-inspired names created as new issues in the CodePlex issue tracker.

NFetch has a huge lead, but the community-chosen NRocks is a close second. The name I like the best so far is NuGet.

(vote for it here)

As before, voting still closes on Tuesday 10/26 at 11:59 PM PDT. If you feel strongly enough about a name, rally your friends to vote for one. :)

open source, nuget

There are only 2 hard problems in Computer Science. Naming things, cache invalidation and off-by-one errors.

I’m always impressed with the passion of the open source community and nothing brings it out more than a naming exercise. :)

In my last blog post, I posted about our need to rename NuPack. Needless to say, I got a bit of angry passionate feedback. There have been a lot of questions that keep coming up over and over again, and I thought I would try to address the most common questions here.

Why not stay with the NuPack name? It was just fine!

In the original announcement, we pointed out that:

We want to avoid confusion with another software project that just happens to have the same name. This other project, NUPACK, is a software suite by a group of researchers at Caltech having to do with analysis and design of nucleic acid systems.

Now some of you may be thinking, “Why let that stop you? Many projects in different fields are fine sharing the same name. After all, you named a blog engine Subtext and there’s a Subtext programming language already.”

There’s a profound difference between Microsoft starting an open source project that accepts contributions and some nobody named Phil Haack starting a little blog engine project.

Most likely, the programming language project has never heard of Subtext and Subtext doesn’t garner enough attention for them to care.

As Paula Hunter points out in a comment on the Outercurve blog post:

Sometimes we are victims of our own success, and NuPack has generated so much buzz that it caught CalTech’s attention. They have been using NuPack since 2007 and theoretically could assert their common law right of “first use” (and, they recently filed a TM application). Phil and the project team are doing the right thing in making the change now while the project is young. Did they have to? The answer is debatable, but they want to eliminate confusion and show respect to CalTech’s project team.

Naming is tough, and you can’t please everyone, but a year from now, most won’t remember the old name. How many remember Mozilla “Firebird”?

Apparently, we’re in good company when it comes to open source projects that have had to pick a new name. It’s always a painful process. This time around, we’re following guidelines posted by Paula in a blog post entitled The Naming Game: Things to consider when naming an open source project, which talks about the concept of “first use” Paula mentioned.

Why not go back to NPack?

There’s already a project on CodePlex with that name.

Why not name it NGem?

Honestly, I’d prefer not to use the N prefix. I know one of the choices we provided had it in the name, but it was one of the better names we could come up with. Also, I’d like to not simply appropriate a name associated with the Ruby community. I think that could cause confusion as well. I’d love to have a name that’s uniquely ours if possible.

Why not name it ****?

In the original announcement, we listed three criteria:

  • Domain name available
  • No other project/product with a name similar to ours in the same field
  • No outstanding trademarks on the name that we could find

Domain name

The reason we wanted to make sure the domain name is available is that if it is, it’s less likely to be the name of an existing product or company. Not only that, we need a decent domain name to help market our project. This is one area where I think the community is telling us to be flexible. And I’m willing to consider being more flexible about this, just as long as the name we choose won’t run afoul of the second criterion and we get a decent domain name that doesn’t cause confusion with other projects.

Product/Project With Similar Names

This one is a judgment call, but all it takes is a little time with Google/Bing to assess the risk here. There’s always going to be a risk that the name we pick will conflict with something out there. The point is not to eliminate risk but to reduce it to a reasonable level. If you think of a name, try it out in a search engine and see what you find.

Trademarks

This one is tricky. Pretty much, if your search engine doesn’t pull up anything, it’s unlikely there is a trademark. Even so, it doesn’t hurt to run your search through the US Patent Office’s Trademark Basic Word Mark Search and make sure it’s clean there. I’m not sure how comprehensive or accurate it is, but if the name shows up there, you’re facing more risk than if it doesn’t.

I have a name that meets your criteria and is way better than the four options you gave us!

Ok, this is not exactly a question, but something I hear a lot. In the original blog post, we said the following:

Can I write in my own suggestion?

Unfortunately no. Again, we want to make sure we can secure the domains for our new project name, so we needed to start with a list that was actually attainable. If you really can’t bring yourself to pick even one, we won’t be offended if you abstain from voting. And don’t worry, the product will continue to function in the same way despite the name change.

However, I don’t want to be completely unreasonable, and I think people have found a loophole. We’re conducting voting through our issue tracker and voting closes on 10/26 at 11:59 PM PDT. Our reasoning for not accepting suggestions was that we wanted to avoid domain squatting. However, one creative individual created a bug to rename NuPack to a name for which they own the domain name and are willing to assign it over to the Outercurve Foundation.

Right now, NFetch is way in the lead. But if some other name were to take the lead and meet all our criteria, I’d consider it. I reserve the right of veto power because I know one of you will put something obscene up there and somehow get a bajillion votes. Yeah, I have my eye on you Rob!

So where does that leave us?

We really don’t want to leave naming the project as an open ended process. So I think it’s good to set a deadline. On the morning of 10/27, for better or worse, you’ll wake up to a new name for the project.

Maybe you’ll hate it. Maybe you’ll love it. Maybe you’ll be ambivalent. Either way, over time, hopefully this mess will fade to a distant memory (much as Firebird has) and the name will start to fit in its new clothes.

As Paul Castle stated over Twitter:

@haacked to me the name is irrelevant the prouduct is ace

No matter what the name is, we’re still committed to delivering the best product we can with your help!

And no, we’re not going to name it:

[Image: the unpronounceable symbol once used by Prince]

nuget, code, open source

UPDATE: The new name is NuGet

The NuPack project is undergoing a rename and we need your help! For details, read the announcement about the rename on the Outercurve Foundation’s blog.

What is the new name?

We don’t know. You tell us! The NuPack project team brainstormed a set of names and narrowed down the list to four names.

I’ve posted a set of names as issues on our NuPack CodePlex.com site and will ask you to vote for your favorite name among the lot. Vote for as many as you want, but realize that if you vote for all of them, you’ve just cancelled your vote. ;)

Here are the choices:

Voting will close on 10/26 at 11:59 PM.

nuget, code, open source

Note: Everything I write here is based on a very early pre-release version of NuGet (formerly known as NuPack) and is subject to change.

A few weeks ago I wrote a blog post introducing the first preview, CTP 1, of the NuGet Package Manager. It’s an open source (we welcome contributions!), developer-focused package manager meant to make it easy to discover and make use of third party dependencies, as well as keep them up to date.

As of CTP 2, NuGet by default points to an OData service temporarily located at http://go.microsoft.com/fwlink/?LinkID=204820 (in CTP 1 this was an ATOM feed located at http://go.microsoft.com/fwlink/?LinkID=199193).

This feed was set up so that people could try out NuGet, but it’s only temporary. We’ll have a more permanent gallery set up as we get closer to RTM.

If you want to get your package in the temporary feed, follow the instructions at a companion project, NuPackPackages on CodePlex.com.

Local Feeds

Some companies keep very tight control over which third party libraries their developers may use. They may not want their developers to point NuGet to arbitrary code over the internet. Or, they may have a set of proprietary libraries they want to make available for developers via NuGet.

NuGet supports these scenarios with a combination of two features:

  1. NuGet can point to any number of different feeds. You don’t have to point it to just our feed.
  2. NuGet can point to a local folder (or network share) that contains a set of packages.

For example, suppose I have a folder on my desktop named packages and drop in a couple of packages that I created like so:

[Screenshot: a packages folder containing a couple of .nupkg files]

I can add that directory to the NuGet settings. To get to the settings, go to the Visual Studio Tools | Options dialog and scroll down to Package Manager.

A shortcut to get there is to go to the Add Package Dialog and click the Settings button, or click the button in the Package Manager Console next to the list of package sources. Either brings up the Options dialog.

[Screenshot: the Package Manager Options dialog]

Type in the path to your packages folder and then click the Add button. Your local directory is now added as another package feed source.

[Screenshot: the Options dialog with the local source added]

When you go back to the Package Manager Console, you can choose this new local package source and list the packages in that source.

[Screenshot: the Package Manager Console listing packages from the local source]

You can also install packages from your local directory. If you’re creating packages, this is a great way to test them out without having to publish them online anywhere.

[Screenshot: installing a package from the local source in the Package Manager Console]

Note that if you launch the Add Package Reference Dialog, you won’t see the local package feed unless you’ve made it the default package source. This limitation is only temporary as we’re changing the dialog to allow you to select the package source.

[Screenshot: setting the local source as the default package source]

Now when you launch the Add Package Reference Dialog, you’ll see your local packages.

[Screenshot: the Add Package Reference dialog showing the local packages]

Please note, as of CTP 1, if one of these local packages has a dependency on a package in another registered feed, it won’t work. However, we are tracking this issue and plan to implement this feature in the next release.

Custom Read Only Feeds

Let’s suppose that what you really want to do is host a feed at a URL rather than in a package folder. Perhaps you are known for your great taste in music and package selection and you want to host your own curated NuGet feed of the packages you think are great.

Well you can do that with NuGet. For step by step instructions, check out this follow-up blog post, Hosting a Simple “Read Only” NuGet Package Feed.

We imagine that the primary usage of NuGet will be to point it to our main online feed. But the flexibility of NuGet to allow for private local feeds as well as curated feeds should appeal to many.

Tags: NuGet, Package Manager, OData

nuget, code, open source

A couple days ago I wrote a blog post entitled, Running Open Source In A Distributed World which outlined some thoughts I had about how managing core contributors to an open source project changes when you move from a centralized version control repository to distributed version control.

The post was really a way for me to probe for ideas on how best to handle feature contributions. In the post, I asked this question,

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

None other than Karl Fogel, whose book has served me well to this point (and whose book I was critiquing), provided a great answer:

One simple way is, just get the agreement from each contributor the first time any change of theirs is approved for incorporation into the code. No matter whether it’s a large feature or a small bugfix – the contributor form is a small, one-time effort, so even for a tiny bugfix it’s still worth it (on the theory that the person is likely to contribute again, and the cost of collecting the form is amortized over all the contributions that person ever makes anyway).

So simple I’ll smack myself every hour for a week for not thinking of it. :)

Unfortunately, the process for accepting a contributor agreement is not yet fully automated (the Outercurve Foundation is working on it), so we won’t be doing this for small bug fixes. But we will do it for any feature contributions.

I’ve updated our guide to Contributing to NuGet based on this feedback. I welcome feedback on how we can improve the guide. I really want to make sure we can make it easy to contribute while still ensuring the integrity of the intellectual property. Thanks!

nuget, open source

In my last post, I described how we’re trying to improve and streamline contributor guidelines to make it easy for others to contribute to NuGet.

Like all product cycles anywhere, we’re always running on tight time constraints. This helps us maintain a tight focus on the product. We don’t want the product to do anything and everything. However, we do want to deliver everything needed (along with double rainbows and unicorns) to meet our vision for this first release.

The best way to meet those goals is to get more contributions from outside the core team. And the best way to do that is to remove as many roadblocks as possible for those interested in contributing.

What’s Up For Grabs?!

When approaching a new project, it can be really challenging to even know what bugs to tackle. So much is happening so quickly and you don’t want to step on any toes.

So we’re trying an experiment where we mark issues in our bug tracker with the tag “UpForGrabs”. The idea here is that any item marked in such a way is something that none of the core team will work on if someone else will take it. Some of these are assigned to core team members, but we hope that someone externally will come along and start a discussion and say, “Yeah, I’ll handle that and provide a pull request with high quality code.

That would so rock!

So how do you find out about our Up for Grabs items? It’s really easy.

  1. Visit our issue tracker.
  2. Search for “UpForGrabs” (sans quotes) as in the screenshot below.

[Screenshot: searching the issue tracker for UpForGrabs items]

Once you find an item you’d like to tackle, start a discussion and let everyone know. That way, if anyone else has started on it, you can work together or decide to choose something else.

Note that if the status is “Active”, it’s likely that someone has already started on it.

Another way to search for these items is to use the Advanced View on the issue tracker and add a filter with the word “UpForGrabs” and set the status to “Proposed”.

[Screenshot: the issue tracker’s Advanced View filtered to UpForGrabs items]

Improving CodePlex.com

This reminds me, it’d be really nice if I could create a URL that takes you directly to this filtered view of our issue tracker. That’s something I need to log with the CodePlex.com team.

In the meanwhile, we have a list of issues we’d love for you to vote up that would help us manage NuGet more effectively on CodePlex.com. :)

nuget, code, open source

When it comes to running an open source project, the book Producing Open Source Software - How to Run a Successful Free Software Project by Karl Fogel (free pdf available) is my bible (see my review and summary of the book).

The book is based on Karl Fogel’s experiences as the leader of the Subversion project and has heavily influenced how I run the projects I’m involved in. Lately though, I’ve noticed one problem with some of his advice. It’s so Subversion-y.

Take a look at this snippet on Committers.

As the only formally distinct class of people found in all open source projects, committers deserve special attention here. Committers are an unavoidable concession to discrimination in a system which is otherwise as non-discriminatory as possible. But “discrimination” is not meant as a pejorative here. The function committers perform is utterly necessary, and I do not think a project could succeed without it.

A Committer in this sense is someone who has direct commit access to the source code repository. This makes sense in a world where your source control is completely centralized as it would be with a Subversion repository. But what about a world in which you’re using a completely decentralized version control like Git or Mercurial? What does it mean to be a “committer” when anyone can clone the repository, commit to their local copy, and then send a pull request?

In the book, Mercurial: The Definitive Guide, Bryan O’Sullivan discusses different collaboration models. The one the Linux kernel uses for example is such that Linus Torvalds maintains the “master” repository and only pulls from his “trusted lieutenants”.

At first glance, it might seem reasonable that a project could allow anyone to send a pull request to main and thus focus the “discrimination” that Karl mentions on the technical merits of each pull request rather than the history of a person’s involvement in the project.

On one level, that seems even more merit-based and egalitarian, but you start to wonder whether it’s scalable. Based on the Linux kernel model, it clearly is not. As Karl points out,

Quality control requires, well, control. There are always many people who feel competent to make changes to a program, and some smaller number who actually are. The project cannot rely on people’s own judgement; it must impose standards and grant commit access only to those who meet them.

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

On one hand, if someone has a great feature idea, wouldn’t it be nice if we could just pull in their work without making them jump through hoops? On the other hand, if we have a hundred people go through this paperwork process, but only one actually ends up contributing anything, what a waste of our time. I would love to hear your thoughts on this.

NuGet, a package manager project I work on, is currently following the latter approach as described in our guide to becoming a core contributor, but we’re open to refinements and improvements. I should point out that a hosted Mercurial solution does support the centralized committer model where we provide direct commit access. It just so happens that while some developers on the NuGet project have direct commit access, most don’t and shouldn’t make use of it per project policy, as we’re still following a distributed model. We’re not letting the technical abilities/limitations of our source control system or project hosting define our collaboration model.

I know I’m late to the game when it comes to distributed source control, but it’s really striking to me how it’s turned the concept of committers on its head. In the centralized source control world, being a contributor was enforced via a technical gate, either you had commit access or you didn’t. With distributed version control it’s become more a matter of social contract and project policies.

asp.net mvc, code, open source, nuget

NuGet (recently renamed from NuPack) is a free and open source, developer-focused package manager aimed at simplifying the process of incorporating third party libraries into a .NET application during development.

After several months of work, the Outercurve Foundation (formerly CodePlex Foundation) today announced the acceptance of the NuGet project to the ASP.NET Open Source Gallery. This is another contribution to the foundation by the Web Platform and Tools (WPT) team at Microsoft.

Also be sure to read Scott Guthrie’s announcement post and Scott Hanselman’s NuGet walkthrough. There’s also a video interview with me on Web Camps TV where I talk about NuGet.

nuget-229x64Just to warn you, the rest of this blog post is full of blah blah blah about NuGet so if you’re a person of action, feel free to go:

Now back to my blabbing. I have to tell you, I’m really excited to finally be able to talk about this in public as we’ve been incubating this for several months now. During that time, we collaborated with various influential members of the .NET open source community including the Nu team in order to gather feedback on delivering the right project.

What Does NuGet Solve?

The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available, easy-to-use way to discover and incorporate these libraries into a project.

Take ELMAH, for example. For the most part, this is a very simple library to use. Even so, it may take the following steps to get started:

  1. You first need to discover ELMAH somehow.
  2. The download page for ELMAH includes multiple zip files. You need to make sure you choose the correct one.
  3. After downloading the zip file, don’t forget to unblock it.
  4. If you’re really careful, you’ll verify the hash of the downloaded file against the hash provided by the download page.
  5. The package needs to be unzipped, typically into a lib folder within the solution.
  6. You’ll then add an assembly reference to the assembly from within the Visual Studio solution explorer.
  7. Finally, you need to figure out the correct configuration settings and apply them to the web.config file.

That’s a lot of steps for a simple library, and it doesn’t even take into account what you might do if the library itself depends on multiple other libraries.

NuGet automates all of these common and tedious tasks, allowing you to spend more time using the library than getting it set up in your project.

NuGet Guiding Principles

I remember several months ago, hot on the heels of shipping ASP.NET MVC 2, I was in a meeting with Scott Guthrie (aka “The Gu”) reviewing plans for ASP.NET MVC 3 when he laid down the gauntlet and said it was time to ship a package manager for .NET developers. The truth was, it was long overdue.

I set about doing some research, looking at existing package management systems on other platforms for inspiration, such as Ruby Gems, Apt-Get, and Maven. Package management is well trodden ground and we have a lot to learn from what’s come before.

After this research, I came up with a set of guiding principles for the design of NuGet that I felt specifically addressed the needs of .NET developers.

  1. Works with your source code. This is an important principle which serves to meet two goals: the changes that NuGet makes can be committed to source control, and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC, as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual Studio. It leaves those tasks to other package managers such as the Visual Studio Extension Manager and the Web Platform Installer.
  2. Works against a well-known central feed. As part of this project, we plan to host a central feed that contains (or points to) NuGet packages. Package authors will be able to create an account and start adding packages to the feed. The NuGet client tools will know about this feed by default.
  3. No central approval process for adding packages. When you upload a package to the NuGet Package Gallery (which doesn’t exist yet), you won’t have to wait around for days or weeks waiting for someone to review it and approve it. Instead, we’ll rely on the community to moderate and police itself when it comes to the feed. This is in the spirit of how CodePlex.com and RubyGems.org work.
  4. Anyone can host a feed. While we will host a central feed, we wanted to make sure that anyone who wants to can also host a feed. I would imagine that some companies might want to host an internal feed of approved open source libraries, for example. Or you may want to host a feed containing your curated list of the best open source libraries. Who knows! The important part is that the NuGet tools are not hard-coded to a single feed but support pointing them to multiple feeds.
  5. Command Line and GUI based user interfaces. It was important to us to support the productivity of a command line based console interface. Thus NuGet ships with the PowerShell based Package Manager Console which I believe will appeal to power users. Likewise, NuGet also includes an easy to use GUI dialog for adding packages.

NuGet’s Primary Goal

In my mind, the primary goal of NuGet is to help foster a vibrant open source community on the .NET platform by providing a means for .NET developers to easily share and make use of open source libraries.

As an open source developer myself, this goal is something that is near and dear to my heart. It also reflects the evolution of open source in DevDiv (the division I work in) as this is a product that will ship with other Microsoft products, but also accepts contributions. Given the primary goal that I stated, it only makes sense that NuGet itself would be released as a truly open source product.

There’s one feature in particular I want to call out that’s particularly helpful to me as an open source developer. I run an open source blog engine called Subtext that makes use of around ten to fifteen other open source libraries. Before every release, I go through the painful process of checking each of these libraries for new updates and incorporating them into our codebase.

With NuGet, this is one simple command: List-Package -Updates. The dialog also displays which packages have updates available. Nice!

And keep in mind, while the focus is on open source, NuGet works just fine with any kind of package. So you can create a network share at work, put all your internal packages in there, and tell your co-workers to point NuGet to that directory. No need to set up a NuGet server.

Get Involved!

So in the fashion of all true open source projects, this is the part where I beg for your help. ;)

It is still early in the development cycle for NuGet. For example, the Add Package Dialog is really just a prototype intended to be rewritten from scratch. We kept it in the codebase so people can try out the user interface workflow and provide feedback.

We have yet to release our first official preview (though it’s coming soon). What we have today is closer in spirit to a nightly build (we’re working on getting a Continuous Integration (CI) server in place).

So go over to the NuGet website on CodePlex and check out our guide to contributing to NuGet. I’ve been working hard to try and get documentation in place, but I could sure use some help.

With your help, I hope that NuGet becomes a wildly successful example of how building products in collaboration with the open source community benefits our business and the community.

Tags: NuGet, Package Manager, Open Source

asp.net mvc, asp.net, code

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Wow! It’s been a busy two months and change since we released Preview 1 of ASP.NET MVC 3. Today I’m happy (and frankly, relieved) to announce the Beta release of ASP.NET MVC 3. Be sure to read Scott Guthrie’s announcement as well.

[Image credit: ICanHazCheezburger, http://icanhascheezburger.com/tag/onward/]

Yes, you heard me right, we’re jumping straight to Beta with this release! To try it out…

As always, be sure to read the release notes (also available as a Word doc if you prefer that sort of thing) for all the juicy details about what’s new in ASP.NET MVC 3.

A big part of this release focuses on polishing and improving features started in Preview 1. We’ve made a lot of improvements (and changes) to our support for Dependency Injection allowing you to control how ASP.NET MVC creates your controllers and views as well as services that it needs.

One big change in this release is that client validation is now built on top of jQuery Validation in an unobtrusive manner. In ASP.NET MVC 3, jQuery Validation is the default client validation script. It’s pretty slick, so give it a try and let us know what you think.

Likewise, our Ajax features, such as Ajax.ActionLink, are now built on top of jQuery. There’s a way to switch back to the old behavior if you need to, but moving forward, we’ll be leveraging jQuery for this sort of thing.

Where’s the Razor Syntax Highlighting and Intellisense?

This is probably a good point to stop and provide a little bit of bad news. One of the most frequently asked questions I hear is when are we going to get syntax highlighting? Unfortunately, it’s not yet ready for this release, but the Razor editor team is hard at work on it and we will see it in a future release.

I know it’s a bummer (believe me, I’m bummed about it) but I think it’ll make it that much sweeter when the feature arrives and you get to try it out the first time! See, I’m always looking for that silver lining. ;)

What’s this NuPack Thing?

That’s been the other major project I’ve been working on which has been keeping me very busy. I’ll be posting a follow-up blog post that talks about that.

What’s Next?

The plan is to have our next release be a Release Candidate. I’ve updated the Roadmap to provide an idea of some of the features that will be coming in the RC. For the most part, we try not to add too many features between Beta and RC preferring to focus on bug fixing and polish.

asp.net, subtext, code

By now, you’re probably aware of a serious ASP.NET vulnerability going around. The ASP.NET team has been working around the clock to address this. Quite literally: last weekend I came in twice (to work on something unrelated) and both times found people working to address the exploit.

Recently, Scott Guthrie posted a follow-up blog post with an additional recommended mitigation you should apply to your servers. I’ve seen a lot of questions about these mitigations, as well as a lot of bad advice. The best advice I’ve seen is this: if you’re running an ASP.NET application, follow the advice in Scott’s blog to the letter. Better to assume your site is vulnerable than to second-guess the mitigation.

In the follow-up post, Scott recommends installing the handy dandy UrlScan IIS module and applying a specific configuration setting. I’ve used UrlScan in the past and have found it extremely useful in dealing with DoS attacks.

However, when I installed UrlScan, my blog broke. Specifically, all the styles were gone and many images were broken. It took me a while to notice because of my blog cache. It wasn’t until someone commented that my new site design was a tad bland that I hit CTRL+F5 to hard refresh my browser and see the changes.

I looked at the URLs for my CSS and I knew they existed physically on disk, but when I tried to visit them directly, I received a 404 error with some message in the URL about being blocked by UrlScan.

I opened up the UrlScan.ini file located at:

%windir%\system32\inetsrv\urlscan\UrlScan.ini

And started scanning it. One of the entries that caught my eye was this one.

AllowDotInPath=0         ; If 1, allow dots that are not file
                         ; extensions. The default is 0. Note that
                         ; setting this property to 1 will make checks
                         ; based on extensions unreliable and is
                         ; therefore not recommended other than for
                         ; testing.

That’s when I had a hunch. I started digging around and remembered that I have a custom skin in my blog named “haacked-3.0”. I viewed source and noticed my CSS files and many images were in a URL that looked like:

https://haacked.com/skins/haacked-3.0/style/foo.css

Aha! Notice the dot in the URL segment there?

What I should have done next was go and rename my skin. Unfortunately, I have many blog posts with a dot in the slug (and thus in the blog post URL). So I changed AllowDotInPath to 1 and restarted my web server. There’s a small risk of making my site slightly less secure by doing so, but I’m willing to take that risk, as I can’t easily go through and fix every blog post that has a dot in the URL right now.

So if you’ve run into the same problem, it may be that you have dots in your URL that UrlScan is blocking. The best and recommended solution is to remove the dots from the URL if you are able to.

asp.net, asp.net mvc, code

I was drawn to an interesting question on StackOverflow recently about how to override a request for a non-existent .svc request using routing.

One useful feature of routing in ASP.NET is that requests for files that exist on disk are ignored by routing. Thus requests for static files and for .aspx and .svc files don’t run through the routing system.

In this particular scenario, the developer wanted to replace an existing .svc service with a call to an ASP.NET MVC controller. So he deletes the .svc file and adds the following route:

routes.MapRoute(
  "UpdateItemApi",
  "Services/api.svc/UpdateItem",
  new { controller = "LegacyApi", action = "UpdateItem" }
);
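
For completeness, here’s a minimal sketch of the controller that route maps to. The action body is hypothetical; the point is just that the controller and action names line up with the route defaults:

using System.Web.Mvc;

namespace MvcApplication1.Controllers {
  public class LegacyApiController : Controller {
    // Handles requests that used to be served by the .svc endpoint.
    public ActionResult UpdateItem() {
      // Hypothetical replacement logic for the old service goes here.
      return Content("Item updated");
    }
  }
}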

Since api.svc is not a physical file on disk, at first glance, this should work just fine. But I tried it out myself with a brand new project, and sure enough, it doesn’t work.

Baffling!

So I started digging into it. First, I looked in event viewer and saw the following exception.

System.ServiceModel.EndpointNotFoundException: The service '/Services/api.svc' does not exist.

Ok, so there’s probably something special about the .svc file extension. So I opened up the machine web.config file located here on my machine:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config

And I found this interesting entry within the buildProviders section.

<add extension=".svc" 
  type="System.ServiceModel.Activation.ServiceBuildProvider, 
  System.ServiceModel.Activation,
  Version=4.0.0.0, Culture=neutral, 
  PublicKeyToken=31bf3856ad364e35" 
/>

Ah! There’s a default build provider registered for the .svc extension. And as we all know, build providers allow for runtime compilation of requests for ASP.NET files and run very early in the handling of a request.

The fix I came up with was to simply remove this registration within my application’s web.config file.

  <system.web>
    <compilation debug="true" targetFramework="4.0">
      <buildProviders>
        <remove extension=".svc"/>
      </buildProviders>
    </compilation>
    <!-- ... -->
  </system.web>

Doing that now allowed my route with the .svc extension to work. Of course, if I have other .svc services that should continue to work, I’ve pretty much disabled all of them by doing this. However, if those services are in a common subfolder (for example, a folder named services), we may be able to get around this by adding the build provider back in a web.config file within that common subfolder.

In any case, I thought the question was interesting as it demonstrated the delicate interplay between routing and build providers.

personal, humor

Our eye in the sky reports two angry, evil (but devilishly good looking) cyborg units, XSP 2000 and TRS-80, are fast approaching Black Rock City. They are considered very armed and dangerous. In fact, they are mostly armed and not much else.

[Image: cyborg battle]

These cyborgs do not come in peace. I repeat, they are to be considered hostile. However, we’ve received a secret communiqué that reveals a weakness built into these cyborg models. Due to a lack of TDD during development, a bug in their FOF (friend or foe) system causes them to view anyone offering a frosty beverage as a friend, not a foe.

Any attempts to engage with these hostiles will result in calamity unless you offer them an ice cold beverage. For the sake of your beloved city, I suggest stocking up.

Intelligence confirms they are headed towards their evil cyborg camp at 8:15 and Kyoto on the Playa and are predicted to arrive on Tuesday morning. If we band together, we may be able to save our fair city by, once again, offering frosty alcoholic beverages in order to confuse their FOF system.

You’ve been duly warned.

This blog (and associated Twitter account) will go quiet for at least a week as communication systems are nonexistent within the Black Rock City area.

code

On Twitter yesterday I made the following comment:

We’re not here to write software, we’re here to ship products and deliver value. Writing code is just a fulfilling means to that end :)

[Image: binary code. All I see now is blonde, brunette, redhead.]

For the most part, I received a lot of tweets in agreement, but there were a few who disagreed with me:

While I agree in principle, the stated sentiment “justifies” the pervasive lack of quality in development

Doctors with this mentality don’t investigate root causes, because patients don’t define that as valuable

That’s BS. If you live only, or even primarily, for end results you’re probably zombie. We’re here to write code AND deliver value.

I have no problem with people disagreeing with me. Eventually they’ll learn I’m always right. ;) In this particular case, I think an important piece of context was missing.

What’s that you say? Context missing in a 140 character limited tweet? That could never happen, right? Sure, you keep telling yourself that while I pop a beer over here with Santa Claus.

The tweet was a rephrasing of something I told a Program Manager candidate during a phone interview. It just so happens that the role of a program manager at Microsoft is not focused on writing code like developers. But that wasn’t the point I was making. I’ve been a developer in the past (and I still play at being a developer in my own time) and I still think this applies.

What I really meant to say was that we’re not paid to write code. I absolutely love writing code, but in general, it’s not what I’m paid to do and I don’t believe it ever was what I was paid to do even when I was a consultant.

For example, suppose a customer calls me up and says,

“Hey man, I need software that allows me to write my next book. I want to be able to print the book and save the book to disk. Can you do that for me?”

I’m not going to be halfway through writing my first unit test in Visual Studio by the end of that phone call. Hell no! I’ll step away from the IDE and hop over to Best Buy to purchase a copy of Microsoft Word. I’ll then promptly sell it to the customer with a nice markup for my troubles and go sip Pina Coladas on the beach the rest of the day. Because that’s what I do. I sip on Pina Coladas.

At the end of the day, I get paid to provide products to my customers that meet their needs and provide them real value, whether by writing code from scratch or by finding something else that already does what they need.

Yeah, that’s a bit of a cheeky example, so let’s look at another one. Suppose a customer really needs a custom software product. I could write the cleanest, most well-crafted code the world has ever seen (what a guy like me might produce during a prototype session on an off night), but if it doesn’t ship, I don’t get paid. The customer doesn’t care how much time I spent writing that code. They’re not going to pay me until I deliver.

Justifying lack of quality

Now, I don’t think, as one Twitterer suggested, that this “justifies a pervasive lack of quality in development” by any means.

Quality in development is important, but it has to be scaled appropriately. Hear that? That’s the sound of a bunch of eggs lofted at my house in angry disagreement. But hear me out before chucking.

A lot of people will suggest that all software should be written with the utmost quality. But the reality is that we all scale the quality of our code to the needs of the product. If that weren’t true, we’d all use Cleanroom Software Engineering processes like those employed by the Space Shuttle developers.

So why don’t we use these same processes? Because there are factors more important than quality in building a product. While even the Space Shuttle coders have to deal with changing requirements from time to time, in general, the laws of physics don’t change much, last I checked. And their requirements certainly don’t undergo the level of churn faced by developers trying to satisfy business needs in a rapidly changing business climate. Hence the rise of agile methodologies, which recognize the need to embrace change.

Writing software that meets changing business needs and provides value is more important than writing zero-defect code. While this might seem like I’m giving quality short shrift, another way to look at it is that I’m taking a higher view of what defines quality in the first place. Quality isn’t just the defect count of the code. How well the code meets the business needs also defines the “quality” of the overall product.

The debunking of the “Betamax is better than VHS” myth is a great example of this idea. While Betamax might have been technically superior to VHS in some ways, when you looked at the “whole product”, it didn’t satisfy customer needs as well as VHS did.

Nate Kohari had an interesting insight on how important delivering value is when he writes about the lessons learned building Agile Zen, a product I think is of wonderful quality.

It also completely changed the way that I look at software. I’ve tried to express this to others since, but I think you just have to experience it firsthand in order to really understand. It’s a unique experience to build a product of your own, from scratch, with no paycheck or deferred responsibility or venture capital to save you — you either create real value for your customers, or you fail. And I don’t like to fail.

Update: Dare Obasanjo wrote a timely blog post that dovetails nicely with the point I’m making. He writes that Google Wave and REST vs SOAP provide a cautionary tale for those who focus too much on solving hard technical problems and miss solving their customers’ actual problems. Sometimes, when we think we’re paid to code, we write way too much code. Sometimes, less code solves the actual problems we’re concerned with just fine.

Code is a part of the whole

The Betamax vs VHS point leads into another point I had in mind when I made the original statement. As narcissistic developers (c’mon admit it. You are all narcissists!), we tend to see the code as being the only thing that matters. But the truth is, it’s one part of the whole that makes a product.

There are many other components that go into a product. A lot of time is spent identifying future business needs to look for areas where software can provide value. After all, there’s no point in writing the code if nobody wants to use it or it doesn’t provide any value.

Not to mention, at Microsoft, we put a lot of effort into localization and globalization, ensuring that the software is translated into multiple languages. On top of this, we have writers who produce documentation, legal teams who work on licenses, marketing teams who market the product, and the list goes on. A lot goes into a product beyond just the code. There are also many factors outside the product that determine its success, such as the community ecosystem, the availability of add-ons, etc.

I love to code

Now don’t go running to tell on me to my momma.

“Your son is talking trash about writing code!”

It’d break her heart and it’d be completely untrue. I love to code! There, I said it. In fact, I love it so much, I tried to marry it, but then got a much better offer from a very lovely woman. But I digress.

Yes, I love coding so much I often do it for free in my spare time.

I wasn’t trying to make a point that writing code isn’t important and doesn’t provide value. It absolutely does. In fact, I firmly believe that writing code is a huge part of providing that value or we wouldn’t be doing it in the first place. This importance is why we spend so much time and effort trying to elevate the craft and debating the finer points of how to write good software. It’s an essential ingredient to building great software products.

The point I was making is simply that while writing code is a huge factor in providing value, it’s not the part we get paid for. Customers pay to receive value. And they only get that value when the code is in their hands.

Tags: software development

code comments edit

In my last blog post, I covered some challenges with versioning methods that differ only by optional parameters. If you haven’t read it, go read it. If I do say so myself, it’s kind of interesting. ;) In this post, I want to cover another very subtle versioning issue with using optional parameters.

At the very end of that last post, I made the following comment.

By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before.

However, this can lead to subtle bugs. Let’s walk through a scenario. Imagine that some class library has the following method in version 1.0.

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

And you have a client application which calls this method like so:

ClassName.Foo("one", "two");

That’s just fine, right? You don’t need to supply a value for the argument s3 because it’s optional. Everything is hunky-dory!

But now, the class library author decides to release version 2 of the library and adds the following overload.

public static void Foo(string s1, string s3 = "v2") {
    Console.WriteLine("version 2");
}

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

Notice that they’ve added an overload that only has two parameters. It differs from the existing method by one required parameter, which is allowed.

As I mentioned before, you’re always allowed to add overloads and maintain binary compatibility. So if you upgrade the class library without recompiling your client application, you’ll still get the following output when you run the application.

version 1

But what happens when you recompile your client application against version 2 of the class library and run it again with no source code changes? The output becomes:

version 2

Wow, that’s pretty subtle.

It may not seem so bad in this contrived example, but let’s consider a real-world scenario. Let’s suppose there’s a very commonly used utility method in the .NET Framework that follows this pattern in .NET 4. And in the next version of the framework, a new overload is added with one less required parameter.

Suddenly, when you recompile your application, every call site that used to bind to the original method now calls the new overload.

Now, I’m not one to be alarmist. Realistically, this is probably very unlikely in the .NET Framework because of stringent backwards compatibility requirements. Very likely, if such a method overload was introduced, calling it would be backwards compatible with calling the original.

But the same discipline might not apply to every library that you depend on today. It’s not hard to imagine that such a subtle versioning issue might crop up in a commonly used 3rd party open source library and it would be very hard for you to even know it exists without testing your application very thoroughly.

The moral of the story is, you do write unit tests, dontcha? Well dontcha?! If not, now’s a good time to start.
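
For what it’s worth, even a crude test that pins down which overload the client expects would catch this at recompile time. Here’s a minimal sketch (test framework details omitted; the assertion style is hypothetical):

public static void TwoArgumentCall_BindsToOriginalOverload() {
    // Capture console output so we can observe which overload ran.
    var writer = new System.IO.StringWriter();
    System.Console.SetOut(writer);

    ClassName.Foo("one", "two");

    // This fails after recompiling against version 2, because the call
    // silently binds to the new two-parameter overload.
    if (writer.ToString().Trim() != "version 1")
        throw new System.Exception("Foo bound to a different overload!");
}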

code comments edit

One nice new feature introduced in C# 4 is support for named and optional arguments. While these two features are often discussed together, they really are orthogonal concepts.

Let’s look at a quick example of these two concepts at work. Suppose we have a class with one method having the following signature.

  // v1
  public static void Redirect(string url, string protocol = "http");

This hypothetical library contains a single method that takes in two parameters, a required string url and an optional string protocol.

The following shows the six possible ways this method can be called.

HttpHelpers.Redirect("https://haacked.com/");
HttpHelpers.Redirect(url: "https://haacked.com/");
HttpHelpers.Redirect("https://haacked.com/", "https");
HttpHelpers.Redirect("https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(url: "https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(protocol: "https", url: "https://haacked.com/");

Notice that whether or not a parameter is optional, you can choose to refer to it by name. In the last example, the parameters are specified out of order; in that case, using named parameters is required.

The Next Version

One apparent benefit of using optional parameters is that you can reduce the number of overloads in your API. However, relying on optional parameters has its quirks, which you need to be aware of when it comes to versioning.

Let’s suppose we’re ready to make version two of our awesome HttpHelpers library and we add an optional parameter to the existing method.

// v2
public static void Redirect(string url, string protocol = "http", bool permanent = false);

What happens when we run the existing client application against version 2 without recompiling it?

We get the following exception message.

Unhandled Exception: System.MissingMethodException: Method not found: 'Void HttpLib.HttpHelpers.Redirect(System.String, System.String)'....

Whoops! By changing the method signature, we caused a runtime breaking change. That’s not good.
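
To see why, remember that the compiler bakes the default values of optional parameters into the call site. Here’s a sketch of roughly what the v1-compiled client contains:

// The client was compiled against v1, so the compiler substituted the
// default for protocol at the call site. The resulting IL references
// the exact two-parameter signature:
HttpHelpers.Redirect("https://haacked.com/", "http");

// In v2, no Redirect(string, string) method exists anymore (it now takes
// three parameters), so the runtime throws MissingMethodException.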

Let’s try to avoid a runtime breaking change by adding an overload instead of changing the existing method.

// v2.1
public static void Redirect(string url, string protocol = "http");
public static void Redirect(string url, string protocol = "http",   bool permanent = false);

Now, when we run our client application, it works fine. It’s still calling the two parameter version of the method. Adding overloads is never a runtime breaking change.

But let’s suppose we’re now ready to update the client application and we attempt to recompile it. Uh oh!

The call is ambiguous between the following methods or properties: 'HttpLib.HttpHelpers.Redirect(string, string)' and 'HttpLib.HttpHelpers.Redirect(string, string, bool)'

While adding an overload is not a runtime breaking change, it can result in a compile time breaking change. Doh!

Talk about a catch-22! If we add an overload, we break in one way. If we instead add an argument to the existing method, we’re broken in another way.

Why Is This Happening?

When I first heard about optional parameter support, I falsely assumed it was implemented as a feature of the CLR which might allow dynamic dispatch to the method. This was perhaps very naive of me.

My co-worker Levi (no blog still) broke it down for me as follows. Keep in mind, he’s glossing over a lot of details, but at a high level, this is roughly what’s going on.

When optional parameters are in use, the C# compiler follows a simple algorithm to determine which overload of a method you actually meant to call. It considers as a candidate *every* overload of the method, then one by one it eliminates overloads that can’t possibly work for the particular parameters you’re passing in.

Consider these overloads:

public static void Blah(int i);
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

Suppose you make the following method call: Blah(0).

The last candidate is eliminated since the parameter types are incorrect, which leaves us with the first two.

public static void Blah(int i); // Candidate
public static void Blah(int i, int j = 5); // Candidate
public static void Blah(string i = "Hello");  // Eliminated

At this point, the compiler needs to perform a conflict resolution. The conflict resolution is very simple: if one of the candidates has the same number of parameters as the call site, it wins. Otherwise the compiler bombs with an error.

In the case of Blah(0), the first overload is chosen since the number of parameters is exactly one.

public static void Blah(int i); //WINNER!!!
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

This allows you to take an existing method that doesn’t have optional parameters and add overloads that have optional parameters without breaking anybody (except in Visual Basic which has a slightly different algorithm).

But what happens if you need to version an API that already has optional parameters?  Consider this example:

public static void Helper(int i = 2, int j = 3);            // v1
public static void Helper(int i = 2, int j = 3, int k = 4); // added in v2

And say that the call site is Helper(j: 10). Both candidates still exist after the elimination process, but since neither candidate has exactly one parameter to match the single argument at the call site, the compiler will not prefer one over the other. This leads to the compilation error we saw earlier about the call being ambiguous.
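
Interestingly, given the simplified rule above, a caller can sidestep the ambiguity by supplying as many arguments as the desired overload has parameters. A quick sketch against the hypothetical Helper overloads:

Helper(j: 10);       // ambiguous: neither candidate has exactly one parameter
Helper(2, 10);       // compiles: the two-parameter overload matches exactly
Helper(2, 10, k: 4); // compiles: only the three-parameter overload applies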

Conclusion

The reason that optional parameters were introduced to C# 4 in the first place was to support COM interop. That’s it. And now, we’re learning about the full implications of this fact.

If you have a method with optional parameters, you can never add an overload with additional optional parameters out of fear of causing a compile-time breaking change. And you can never remove an existing overload, as this has always been a runtime breaking change.

You pretty much need to treat it like an interface. Your only recourse in this case is to write a new method with a new name.
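
For example, if version 2 really needs that extra optional parameter, something like this avoids both the runtime and compile-time breaks (HelperEx is just a hypothetical name, not a convention):

public static void Helper(int i = 2, int j = 3);              // v1, left untouched
public static void HelperEx(int i = 2, int j = 3, int k = 4); // v2 functionality, new name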

So be aware of this if you plan to use optional arguments in your APIs.

UPDATE: By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before. However, this can lead to other subtle versioning issues as my follow-up post describes.