nuget, code, open source

UPDATE: The new name is NuGet

The NuPack project is undergoing a rename and we need your help! For details, read the announcement about the rename on the Outercurve Foundation’s blog.

What is the new name?

We don’t know. You tell us! The NuPack project team brainstormed a set of names and narrowed down the list to four names.

I’ve posted a set of names as issues on our NuPack site and will ask you to vote for your favorite name among the lot. Vote for as many as you want, but realize that if you vote for all of them, you’ve just cancelled your vote. ;)

Here are the choices:

Voting will close on 10/26 at 11:59 PM.

nuget, code, open source

Note: Everything I write here is based on a very early pre-release version of NuGet (formerly known as NuPack) and is subject to change.

A few weeks ago I wrote a blog post introducing the first preview, CTP 1, of the NuGet Package Manager. It’s an open source (we welcome contributions!), developer-focused package manager meant to make it easy to discover and make use of third-party dependencies, as well as keep them up to date.

As of CTP 2, NuGet by default points to a temporary OData service (in CTP 1, this was an Atom feed).

This feed was set up so that people could try out NuGet, but it’s only temporary. We’ll have a more permanent gallery set up as we get closer to RTM.

If you want to get your package into the temporary feed, follow the instructions at the companion project, NuPackPackages.

Local Feeds

Some companies keep very tight control over which third party libraries their developers may use. They may not want their developers to point NuGet to arbitrary code over the internet. Or, they may have a set of proprietary libraries they want to make available for developers via NuGet.

NuGet supports these scenarios with a combination of two features:

  1. NuGet can point to any number of different feeds. You don’t have to point it to just our feed.
  2. NuGet can point to a local folder (or network share) that contains a set of packages.

For example, suppose I have a folder on my desktop named packages and drop in a couple of packages that I created like so:
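The folder itself is nothing special: just a flat directory of .nupkg files. A sketch of what it might look like (the package names here are hypothetical):

```text
C:\Users\Phil\Desktop\packages\
    MyUtilities.1.0.nupkg
    MyWebHelpers.0.9.nupkg
```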


I can add that directory to the NuGet settings. To get to the settings, go to the Visual Studio Tools | Options dialog and scroll down to Package Manager.

A shortcut to get there is to go to the Add Package Dialog and click the Settings button, or click the button in the Package Manager Console next to the list of package sources. This brings up the Options dialog (click to see a larger image).

Package Manager

Type in the path to your packages folder and then click the Add button. Your local directory is now added as another package feed source.


When you go back to the Package Manager Console, you can choose this new local package source and list the packages in that source.

MvcApplication7 - Microsoft Visual Studio (Administrator)
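With the local source selected, the same console commands work as against the online feed. Here’s a sketch using the CTP-era command names (List-Package and Add-Package, which have changed in later builds); the package name is hypothetical:

```powershell
# List packages available from the currently selected (local) package source
PM> List-Package

# Install one of them into the current project
PM> Add-Package MyUtilities
```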

You can also install packages from your local directory. If you’re creating packages, this is a great way to test them out without having to publish them online anywhere.

MvcApplication7 - Microsoft Visual Studio (Administrator)

Note that if you launch the Add Package Reference Dialog, you won’t see the local package feed unless you’ve made it the default package source. This limitation is only temporary as we’re changing the dialog to allow you to select the package source.


Now when you launch the Add Package Reference Dialog, you’ll see your local packages.


Please note, as of CTP 1, if one of these local packages has a dependency on a package in another registered feed, it won’t work. However, we are tracking this issue and plan to implement this feature in the next release.

Custom Read Only Feeds

Let’s suppose that what you really want to do is host a feed at a URL rather than in a package folder. Perhaps you are known for your great taste in music and package selection, and you want to host your own curated NuGet feed of the packages you think are great.

Well you can do that with NuGet. For step by step instructions, check out this follow-up blog post, Hosting a Simple “Read Only” NuGet Package Feed.

We imagine that the primary usage of NuGet will be to point it to our main online feed. But the flexibility of NuGet to allow for private local feeds as well as curated feeds should appeal to many.

Tags: NuGet, Package Manager, OData

nuget, code, open source

A couple days ago I wrote a blog post entitled, Running Open Source In A Distributed World which outlined some thoughts I had about how managing core contributors to an open source project changes when you move from a centralized version control repository to distributed version control.

The post was really a way for me to probe for ideas on how best to handle feature contributions. In the post, I asked this question,

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

None other than Karl Fogel, whose book has served me well to this point and whose book I was critiquing, provided a great answer,

One simple way is, just get the agreement from each contributor the first time any change of theirs is approved for incorporation into the code. No matter whether it’s a large feature or a small bugfix – the contributor form is a small, one-time effort, so even for a tiny bugfix it’s still worth it (on the theory that the person is likely to contribute again, and the cost of collecting the form is amortized over all the contributions that person ever makes anyway).

So simple I’ll smack myself every hour for a week for not thinking of it. :)

Unfortunately, the process for accepting a contributor agreement is not yet fully automated (the Outercurve Foundation is working on it), so we won’t be doing this for small bug fixes. But we will do it for any feature contributions.

I’ve updated our guide to Contributing to NuGet based on this feedback. I welcome feedback on how we can improve the guide. I really want to make sure we can make it easy to contribute while still ensuring the integrity of the intellectual property. Thanks!

nuget, open source

In my last post, I described how we’re trying to improve and streamline contributor guidelines to make it easy for others to contribute to NuGet.

Like all product cycles anywhere, we’re always running under tight time constraints. This helps us maintain a tight focus on the product. We don’t want the product to do anything and everything. However, we do want to deliver everything needed (along with double rainbows and unicorns) to meet our vision for this first release.

The best way to meet those goals is to get more contributions from outside the core team. And the best way to do that is to remove as many roadblocks as possible for those interested in contributing.

What’s Up For Grabs?!

When approaching a new project, it can be really challenging to even know what bugs to tackle. So much is happening so quickly and you don’t want to step on any toes.

So we’re trying an experiment where we mark issues in our bug tracker with the tag “UpForGrabs”. The idea is that the core team won’t work on any item marked this way if someone else will take it. Some of these are assigned to core team members, but we hope that someone external will come along, start a discussion, and say, “Yeah, I’ll handle that and provide a pull request with high quality code.”

That would so rock!

So how do you find out about our Up for Grabs items? It’s really easy.

  1. Visit our issue tracker.
  2. Search for “UpForGrabs” (sans quotes) as in the screenshot below.

Searching for Up For Grabs

Once you find an item you’d like to tackle, start a discussion and let everyone know. That way, if anyone else has already started on it, you can work together or decide to choose something else.

Note that if the status is “Active”, it’s likely that someone has already started on it.

Another way to search for these items is to use the Advanced View on the issue tracker and add a filter with the word “UpForGrabs” and set the status to “Proposed”.

NuPack - Issue Tracker - Windows Internet


This reminds me, it’d be really nice if I could create a URL that took you directly to this filtered view of our issue tracker. That’s something I need to log with the team.

In the meantime, we have a list of issues we’d love for you to vote up that would help us manage NuGet more effectively. :)

nuget, code, open source

When it comes to running an open source project, the book Producing Open Source Software - How to Run a Successful Free Software Project by Karl Fogel (free pdf available) is my bible (see my review and summary of the book).

The book is based on Karl Fogel’s experiences as the leader of the Subversion project and has heavily influenced how I run the projects I’m involved in. Lately though, I’ve noticed one problem with some of his advice. It’s so Subversion-y.

Take a look at this snippet on Committers.

As the only formally distinct class of people found in all open source projects, committers deserve special attention here. Committers are an unavoidable concession to discrimination in a system which is otherwise as non-discriminatory as possible. But “discrimination” is not meant as a pejorative here. The function committers perform is utterly necessary, and I do not think a project could succeed without it.

A Committer in this sense is someone who has direct commit access to the source code repository. This makes sense in a world where your source control is completely centralized as it would be with a Subversion repository. But what about a world in which you’re using a completely decentralized version control like Git or Mercurial? What does it mean to be a “committer” when anyone can clone the repository, commit to their local copy, and then send a pull request?

In the book, Mercurial: The Definitive Guide, Bryan O’Sullivan discusses different collaboration models. The one the Linux kernel uses for example is such that Linus Torvalds maintains the “master” repository and only pulls from his “trusted lieutenants”.

At first glance, it might seem reasonable that a project could allow anyone to send a pull request to main, and thus focus the “discrimination” that Karl mentions on the technical merits of each pull request rather than on the history of a person’s involvement in the project.

On one level, that seems even more merit-based and egalitarian, but you start to wonder whether it is scalable. Based on the Linux kernel model, it clearly is not. As Karl points out,

Quality control requires, well, control. There are always many people who feel competent to make changes to a program, and some smaller number who actually are. The project cannot rely on people’s own judgement; it must impose standards and grant commit access only to those who meet them.

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

On one hand, if someone has a great feature idea, wouldn’t it be nice if we could just pull in their work without making them jump through hoops? On the other hand, if we have a hundred people go through this paperwork process, but only one actually ends up contributing anything, what a waste of our time. I would love to hear your thoughts on this.

NuGet, a package manager project I work on, is currently following the latter approach as described in our guide to becoming a core contributor, but we’re open to refinements and improvements. I should point out that a hosted Mercurial solution does support the centralized committer model where we provide direct commit access. It just so happens that while some developers in the NuGet project have direct commit access, most don’t, and by project policy they shouldn’t make use of it, as we’re still following a distributed model. We’re not letting the technical abilities/limitations of our source control system or project hosting define our collaboration model.

I know I’m late to the game when it comes to distributed source control, but it’s really striking to me how it’s turned the concept of committers on its head. In the centralized source control world, being a contributor was enforced via a technical gate: either you had commit access or you didn’t. With distributed version control, it’s become more a matter of social contract and project policies.

mvc, code, open source, nuget

NuGet (recently renamed from NuPack) is a free open source developer focused package manager intent on simplifying the process of incorporating third party libraries into a .NET application during development.

After several months of work, the Outercurve Foundation (formerly CodePlex Foundation) today announced the acceptance of the NuGet project to the ASP.NET Open Source Gallery. This is another contribution to the foundation by the Web Platform and Tools (WPT) team at Microsoft.

Also be sure to read Scott Guthrie’s announcement post and Scott Hanselman’s NuGet walkthrough. There’s also a video interview with me on Web Camps TV where I talk about NuGet.

Just to warn you, the rest of this blog post is full of blah blah blah about NuGet, so if you’re a person of action, feel free to go:

Now back to my blabbing. I have to tell you, I’m really excited to finally be able to talk about this in public as we’ve been incubating this for several months now. During that time, we collaborated with various influential members of the .NET open source community including the Nu team in order to gather feedback on delivering the right project.

What Does NuGet Solve?

The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available, easy-to-use way of discovering these libraries and incorporating them into a project.

Take ELMAH, for example. For the most part, this is a very simple library to use. Even so, it may take the following steps to get started:

  1. You first need to discover ELMAH somehow.
  2. The download page for ELMAH includes multiple zip files. You need to make sure you choose the correct one.
  3. After downloading the zip file, don’t forget to unblock it.
  4. If you’re really careful, you’ll verify the hash of the downloaded file against the hash provided by the download page.
  5. The package needs to be unzipped, typically into a lib folder within the solution.
  6. You’ll then add an assembly reference to the assembly from within the Visual Studio solution explorer.
  7. Finally, you need to figure out the correct configuration settings and apply them to the web.config file.

That’s a lot of steps for a simple library, and it doesn’t even take into account what you might do if the library itself depends on multiple other libraries.

NuGet automates all of these common and tedious tasks, allowing you to spend more time using the library than getting it set up in your project.
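To make that concrete, the seven steps above reduce to a single console command. This is a sketch using the CTP-era command verb (Add-Package, which has changed in later builds) and assumes the package id on the feed is elmah:

```powershell
# Downloads the package, verifies it, adds the assembly reference,
# and applies the necessary web.config settings in one step
PM> Add-Package elmah
```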

NuGet Guiding Principles

I remember several months ago, hot on the heels of shipping ASP.NET MVC 2, I was in a meeting with Scott Guthrie (aka “The Gu”) reviewing plans for ASP.NET MVC 3 when he laid the gauntlet down and said it was time to ship a package manager for .NET developers. The truth was, it was long overdue.

I set about doing some research looking at existing package management systems on other platforms for inspiration such as Ruby Gems, Apt-Get, and Maven. Package Management is well trodden ground and we have a lot to learn from what’s come before.

After this research, I came up with a set of guiding principles for the design of NuGet that I felt specifically addressed the needs of .NET developers.

  1. Works with your source code. This is an important principle which serves two goals: the changes that NuGet makes can be committed to source control, and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC, as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual Studio. It leaves those tasks to other package managers such as the Visual Studio Extension Manager and the Web Platform Installer.
  2. Works against a well-known central feed. As part of this project, we plan to host a central feed that contains (or points to) NuGet packages. Package authors will be able to create an account and start adding packages to the feed. The NuGet client tools will know about this feed by default.
  3. No central approval process for adding packages. When you upload a package to the NuGet Package Gallery (which doesn’t exist yet), you won’t have to wait days or weeks for someone to review and approve it. Instead, we’ll rely on the community to moderate and police itself when it comes to the feed. This is in the spirit of how other community-run package galleries work.
  4. Anyone can host a feed. While we will host a central feed, we wanted to make sure that anyone who wants to can also host a feed. I would imagine that some companies might want to host an internal feed of approved open source libraries, for example. Or you may want to host a feed containing your curated list of the best open source libraries. Who knows! The important part is that the NuGet tools are not hard-coded to a single feed but support pointing them to multiple feeds.
  5. Command Line and GUI based user interfaces. It was important to us to support the productivity of a command line based console interface. Thus NuGet ships with the PowerShell based Package Manager Console which I believe will appeal to power users. Likewise, NuGet also includes an easy to use GUI dialog for adding packages.
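To make principle 1 concrete, here’s a sketch of what a solution might look like after installing a package. The exact layout details are illustrative, but the key point is that everything lands inside the solution folder, so it can be committed to source control and x-copy deployed:

```text
MySolution\
    MySolution.sln
    packages\                  <- everything NuGet adds lives here, not in the GAC
        elmah.1.1\
            lib\
                Elmah.dll
    MyWebApp\
        MyWebApp.csproj        <- assembly reference points into packages\
        web.config             <- configuration settings applied here
```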

NuGet’s Primary Goal

In my mind, the primary goal of NuGet is to help foster a vibrant open source community on the .NET platform by providing a means for .NET developers to easily share and make use of open source libraries.

As an open source developer myself, this goal is something that is near and dear to my heart. It also reflects the evolution of open source in DevDiv (the division I work in) as this is a product that will ship with other Microsoft products, but also accepts contributions. Given the primary goal that I stated, it only makes sense that NuGet itself would be released as a truly open source product.

There’s one feature in particular I want to call out that’s particularly helpful to me as an open source developer. I run an open source blog engine called Subtext that makes use of around ten to fifteen other open source libraries. Before every release, I go through the painful process of checking each of these libraries for new updates and incorporating them into our codebase.

With NuGet, this is one simple command: List-Package -Updates. The dialog also displays which packages have updates available. Nice!

And keep in mind, while the focus is on open source, NuGet works just fine with any kind of package. So you can create a network share at work, put all your internal packages in there, and tell your co-workers to point NuGet to that directory. No need to set up a NuGet server.

Get Involved!

So in the fashion of all true open source projects, this is the part where I beg for your help. ;)

It is still early in the development cycle for NuGet. For example, the Add Package Dialog is really just a prototype intended to be rewritten from scratch. We kept it in the codebase so people can try out the user interface workflow and provide feedback.

We have yet to release our first official preview (though it’s coming soon). What we have today is closer in spirit to a nightly build (we’re working on getting a Continuous Integration (CI) server in place).

So go over to the NuGet website on CodePlex and check out our guide to contributing to NuGet. I’ve been working hard to try and get documentation in place, but I could sure use some help.

With your help, I hope that NuGet becomes a wildly successful example of how building products in collaboration with the open source community benefits our business and the community.

Tags: NuGet, Package Manager, Open Source

mvc, code

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Wow! It’s been a busy two months and change since we released Preview 1 of ASP.NET MVC 3. Today I’m happy (and frankly, relieved) to announce the Beta release of ASP.NET MVC 3. Be sure to read Scott Guthrie’s announcement as well.

Credits: Image from ICanHazCheezburger

Yes, you heard me right, we’re jumping straight to Beta with this release! To try it out…

As always, be sure to read the release notes (also available as a Word doc if you prefer that sort of thing) for all the juicy details about what’s new in ASP.NET MVC 3.

A big part of this release focuses on polishing and improving features started in Preview 1. We’ve made a lot of improvements (and changes) to our support for Dependency Injection allowing you to control how ASP.NET MVC creates your controllers and views as well as services that it needs.

One big change in this release is that client validation now is built on top of jQuery Validation in an unobtrusive manner. In ASP.NET MVC 3, jQuery Validation is the default client validation script. It’s pretty slick so give it a try and let us know what you think.

Likewise, our Ajax features such as the Ajax.ActionLink etc. are now built on top of jQuery. There’s a way to switch back to the old behavior if you need to, but moving forward, we’ll be leveraging jQuery for this sort of thing.

Where’s the Razor Syntax Highlighting and Intellisense?

This is probably a good point to stop and provide a little bit of bad news. One of the most frequently asked questions I hear is when are we going to get syntax highlighting? Unfortunately, it’s not yet ready for this release, but the Razor editor team is hard at work on it and we will see it in a future release.

I know it’s a bummer (believe me, I’m bummed about it) but I think it’ll make it that much sweeter when the feature arrives and you get to try it out the first time! See, I’m always looking for that silver lining. ;)

What’s this NuPack Thing?

That’s been the other major project I’ve been working on which has been keeping me very busy. I’ll be posting a follow-up blog post that talks about that.

What’s Next?

The plan is to have our next release be a Release Candidate. I’ve updated the Roadmap to provide an idea of some of the features that will be coming in the RC. For the most part, we try not to add too many features between Beta and RC, preferring to focus on bug fixing and polish.

subtext, code

By now, you’re probably aware of a serious ASP.NET Vulnerability going around. The ASP.NET team has been working around the clock to address it. Quite literally: I came in twice last weekend (to work on something unrelated) and both times found people working to address the exploit.

Recently, Scott Guthrie posted a follow-up blog post with an additional recommended mitigation you should apply to your servers. I’ve seen a lot of questions about these mitigations, as well as a lot of bad advice. The best advice I’ve seen is this - if you’re running an ASP.NET application, follow the advice in Scott’s blog to the letter. Better to assume your site is vulnerable than to second-guess the mitigation.

In the follow-up post, Scott recommends installing the handy dandy UrlScan IIS Module and applying a specific configuration setting. I’ve used UrlScan in the past and have found it extremely useful in dealing with DoS attacks.

However, when I installed UrlScan, my blog broke. Specifically, all the styles were gone and many images were broken. It took me a while to notice because of my blog cache. It wasn’t until someone commented that my new site design was a tad bit bland that I hit CTRL+F5 to hard refresh my browser and see the changes.

I looked at the URLs for my CSS and I knew they existed physically on disk, but when I tried to visit them directly, I received a 404 error with some message in the URL about being blocked by UrlScan.

I opened up the UrlScan.ini file, located at:


And started scanning it. One of the entries that caught my eye was this one.

AllowDotInPath=0         ; If 1, allow dots that are not file
                         ; extensions. The default is 0. Note that
                         ; setting this property to 1 will make checks
                         ; based on extensions unreliable and is
                         ; therefore not recommended other than for
                         ; testing.

That’s when I had a hunch. I started digging around and remembered that I have a custom skin in my blog named “haacked-3.0”. I viewed source and noticed my CSS files and many images were in a URL that looked like:

Aha! Notice the dot in the URL segment there?

What I should have done next was go and rename my skin. Unfortunately, I have many blog posts with a dot in the slug (and thus in the blog post URL). So I changed that setting to be 1 and restarted my web server. There’s a small risk of making my site slightly less secure by doing so, but I’m willing to take that risk as I can’t easily go through and fix every blog post that has a dot in the URL right now.
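For reference, the change is just flipping the setting quoted above in UrlScan.ini and restarting the web server:

```ini
; Allow dots in URL segments that are not file extensions
; (slightly weakens extension-based checks; see the caveat above)
AllowDotInPath=1
```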

So if you’ve run into the same problem, it may be that you have dots in your URL that UrlScan is blocking. The best and recommended solution is to remove the dots from the URL if you are able to.

mvc, code

I was drawn to an interesting question on StackOverflow recently about how to handle a request for a non-existent .svc file using routing.

One useful feature of routing in ASP.NET is that requests for files that exist on disk are ignored by routing. Thus requests for static files and for .aspx and .svc files don’t run through the routing system.

In this particular scenario, the developer wanted to replace an existing .svc service with a call to an ASP.NET MVC controller. So he deletes the .svc file and adds the following route:

  routes.MapRoute(
      "LegacyApi",                                  // route name (illustrative)
      "Services/api.svc",                           // the URL previously handled by the .svc file
      new { controller = "LegacyApi", action = "UpdateItem" }
  );

Since api.svc is not a physical file on disk, at first glance, this should work just fine. But I tried it out myself with a brand new project, and sure enough, it doesn’t work.


So I started digging into it. First, I looked in event viewer and saw the following exception.

System.ServiceModel.EndpointNotFoundException: The service '/Services/api.svc' does not exist.

Ok, so there’s probably something special about the .svc file extension. So I opened up the machine web.config file located here on my machine:


And I found this interesting entry within the buildProviders section.

<add extension=".svc" 
  type="System.ServiceModel.Activation.ServiceBuildProvider, System.ServiceModel.Activation, 
  Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

Ah! There’s a default build provider registered for the .svc extension. And as we all know, build providers allow for runtime compilation of requests for ASP.NET files and occur very early in response to a request.

The fix I came up with was to simply remove this registration within my application’s web.config file.

    <compilation debug="true" targetFramework="4.0">
        <buildProviders>
            <remove extension=".svc"/>
        </buildProviders>
    </compilation>

Doing that now allowed my route with the .svc extension to work. Of course, if I have other .svc services that should continue to work, I’ve pretty much disabled all of them by doing this. However, if those services are in a common subfolder (for example, a folder named services), we may be able to get around this by adding the build provider in a web.config file within that common subfolder.
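A sketch of that subfolder idea follows. This is untested speculation on my part; the type attribute simply mirrors the .NET 4 machine-level registration, and the folder name is illustrative:

```xml
<!-- Services\web.config: re-enable the .svc build provider for this subfolder only -->
<configuration>
  <system.web>
    <compilation>
      <buildProviders>
        <add extension=".svc"
             type="System.ServiceModel.Activation.ServiceBuildProvider, System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      </buildProviders>
    </compilation>
  </system.web>
</configuration>
```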

In any case, I thought the question was interesting as it demonstrated the delicate interplay between routing and build providers.

personal, humor

Our eye in the sky reports two angry, evil (but devilishly good looking) cyborg units, XSP 2000 and TRS-80, are fast approaching Black Rock City. They are considered very armed and dangerous. In fact, they are mostly armed and not much else.


These cyborgs do not come in peace. I repeat, they are to be considered hostiles. However, we’ve received a secret communiqué that reveals a weakness built into these cyborg models. Due to a lack of TDD during development, a bug in their FOF (friend or foe) system causes them to view anyone offering a frosty beverage as a friend, not a foe.

Any attempts to engage with these hostiles will result in calamity unless you offer them an ice cold beverage. For the sake of your beloved city, I suggest stocking up.

Intelligence confirms they are headed towards their evil cyborg camp at 8:15 and Kyoto on the Playa and are predicted to arrive on Tuesday morning. If we band together, we may be able to save our fair city by, once again, offering frosty alcoholic beverages in order to confuse their FOF system.

You’ve been duly warned.

This blog (and associated Twitter account) will go quiet for at least a week as communication systems are nonexistent within the Black Rock City area.

code

On Twitter yesterday I made the following comment:

We’re not here to write software, we’re here to ship products and deliver value. Writing code is just a fulfilling means to that end :)

All I see now is blonde, brunette, redhead.

For the most part, I received a lot of tweets in agreement, but there were a few who disagreed with me:

While I agree in principle, the stated sentiment “justifies” the pervasive lack of quality in development

Doctors with this mentality don’t investigate root causes, because patients don’t define that as valuable

That’s BS. If you live only, or even primarily, for end results you’re probably zombie. We’re here to write code AND deliver value.

I have no problem with people disagreeing with me. Eventually they’ll learn I’m always right. ;) In this particular case, I think an important piece of context was missing.

What’s that you say? Context missing in a 140 character limited tweet? That could never happen, right? Sure, you keep telling yourself that while I pop a beer over here with Santa Claus.

The tweet was a rephrasing of something I told a Program Manager candidate during a phone interview. It just so happens that the role of a program manager at Microsoft is not focused on writing code like developers. But that wasn’t the point I was making. I’ve been a developer in the past (and I still play at being a developer in my own time) and I still think this applies.

What I really meant to say was that we’re not paid to write code. I absolutely love writing code, but in general, it’s not what I’m paid to do and I don’t believe it ever was what I was paid to do even when I was a consultant.

For example, suppose a customer calls me up and says,

“Hey man, I need software that allows me to write my next book. I want to be able to print the book and save the book to disk. Can you do that for me?”

I’m not going to be half way through writing my first unit test in Visual Studio by the end of that phone call. Hell no! I’ll step away from the IDE and hop over to Best Buy to purchase a copy of Microsoft Word. I’ll then promptly sell it to the customer with a nice markup for my troubles and go and sip Pina Coladas on the beach the rest of the day. Because that’s what I do. I sip on Pina Coladas.

At the end of the day, I get paid to provide products to my customers that meet their needs and provide them real value, whether by writing code from scratch or finding something else that already does what they need.

Yeah, that’s a bit of a cheeky example, so let’s look at another one. Suppose a customer really needs a custom software product. I could write the cleanest, most well-crafted code the world has ever seen (what a guy like me might produce during a prototype session on an off night), but if it doesn’t ship, I don’t get paid. The customer doesn’t care how much time I spent writing that code. They’re not going to pay me until I deliver.

Justifying lack of quality

Now, I don’t think, as one Twitterer suggested, that this “justifies a pervasive lack of quality in development” by any means.

Quality in development is important, but it has to be scaled appropriately. Hear that? That’s the sound of a bunch of eggs lofted at my house in angry disagreement. But hear me out before chucking.

A lot of people will suggest that all software should be written with the utmost of quality. But the reality is that we all scale the quality of our code to the needs of the product. If that weren’t true, we’d all use Cleanroom Software Engineering processes like those employed by the Space Shuttle developers.

So why don’t we use these same processes? Because there are factors more important than quality in building a product. While even the Space Shuttle coders have to deal with changing requirements from time to time, in general, the laws of physics don’t change much over time last I checked. And certainly, their requirements don’t undergo the level of churn that developers trying to satisfy business needs under a rapidly changing business climate would face. Hence the rise of agile methodologies which recognize the need to embrace change.

Writing software that meets changing business needs and provides value is more important than writing zero defect code. While this might seem like I’m giving quality short shrift, another way to look at it is that I’m taking a higher view of what defines quality in the first place. Quality isn’t just the defect count of the code. It’s also how well the code meets the business needs that defines the “quality” of an overall product.

The debunking of the Betamax is better than VHS myth is a great example of this idea. While Betamax might have been technically superior to VHS in some ways, when you looked at the “whole product”, it didn’t satisfy customer needs as well as VHS did.

Nate Kohari had an interesting insight on how important delivering value is when he writes about the lessons learned building Agile Zen, a product I think is of wonderful quality.

It also completely changed the way that I look at software. I’ve tried to express this to others since, but I think you just have to experience it firsthand in order to really understand. It’s a unique experience to build a product of your own, from scratch, with no paycheck or deferred responsibility or venture capital to save you — you either create real value for your customers, or you fail. And I don’t like to fail.

Update: Dare Obasanjo wrote a timely blog that dovetails nicely with the point I’m making. He writes that Google Wave and REST vs SOAP provide a cautionary tale for those who focus too much on solving hard technical problems and miss solving their customers’ actual problems. Sometimes, when we think we’re paid to code, we write way too much code. Sometimes, less code solves the actual problems we’re concerned with just fine.

Code is a part of the whole

The Betamax vs VHS point leads into another point I had in mind when I made the original statement. As narcissistic developers (c’mon admit it. You are all narcissists!), we tend to see the code as being the only thing that matters. But the truth is, it’s one part of the whole that makes a product.

There are many other components that go into a product. A lot of time is spent identifying future business needs to look for areas where software can provide value. After all, there’s no point in writing the code if nobody wants to use it or it doesn’t provide any value.

Not to mention, at Microsoft, we put a lot of effort into localization and globalization ensuring that the software is translated into multiple languages. On top of this, we have writers who produce documentation, legal teams who work on licenses, marketing teams who market the product, and the list goes on. A lot goes into a product beyond just the code. There are also a lot of factors outside the product that determine its success such as community ecosystem, availability of add-ons, etc.

I love to code

Now don’t go running to tell on me to my momma.

“Your son is talking trash about writing code!”

It’d break her heart and it’d be completely untrue. I love to code! There, I said it. In fact, I love it so much, I tried to marry it, but then got a much better offer from a very lovely woman. But I digress.

Yes, I love coding so much I often do it for free in my spare time.

I wasn’t trying to make a point that writing code isn’t important and doesn’t provide value. It absolutely does. In fact, I firmly believe that writing code is a huge part of providing that value or we wouldn’t be doing it in the first place. This importance is why we spend so much time and effort trying to elevate the craft and debating the finer points of how to write good software. It’s an essential ingredient to building great software products.

The point I was making is simply that while writing code is a huge factor in providing value, it’s not the part we get paid for. Customers pay to receive value. And they only get that value when the code is in their hands.

Tags: software development

code comments edit

In my last blog post, I covered some challenges with versioning methods that differ only by optional parameters. If you haven’t read it, go read it. If I do say so myself, it’s kind of interesting. ;) In this post, I want to cover another very subtle versioning issue with using optional parameters.

At the very end of that last post, I made the following comment.

By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before.

However, this can lead to subtle bugs. Let’s walk through a scenario. Imagine that some class library has the following method in version 1.0.

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

And you have a client application which calls this method like so:

ClassName.Foo("one", "two");

That’s just fine, right? You don’t need to supply a value for the argument s3 because it’s optional. Everything is hunky dory!

But now, the class library author decides to release version 2 of the library and adds the following overload.

public static void Foo(string s1, string s3 = "v2") {
    Console.WriteLine("version 2");
}

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

Notice that they’ve added an overload that only has two parameters. It differs from the existing method by one required parameter, which is allowed.

As I mentioned before, you’re always allowed to add overloads and maintain binary compatibility. So if you don’t recompile your client application and upgrade the class library, you’ll still get the following output when you run the application.

version 1

But what happens when you recompile your client application against version 2 of the class library and run it again with no source code changes? The output becomes:

version 2

Wow, that’s pretty subtle.

It may not seem so bad in this contrived example, but let’s contemplate a real world scenario. Let’s suppose there’s a very commonly used utility method in the .NET Framework that follows this pattern in .NET 4. And in the next version of the framework, a new overload is added with one less required parameter.

Suddenly, when you recompile your application, every call to the original method is now calling the new one.

Now, I’m not one to be alarmist. Realistically, this is probably very unlikely in the .NET Framework because of stringent backwards compatibility requirements. Very likely, if such a method overload was introduced, calling it would be backwards compatible with calling the original.

But the same discipline might not apply to every library that you depend on today. It’s not hard to imagine that such a subtle versioning issue might crop up in a commonly used 3rd party open source library and it would be very hard for you to even know it exists without testing your application very thoroughly.

The moral of the story is, you do write unit tests, dontcha? Well, dontcha?! If not, now’s a good time to start.
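To see the switch happen in one file, here’s a minimal console sketch. The Foo overloads mirror the hypothetical class library above; the class name and the marker strings are illustrative, not from any real library.

```csharp
using System;

static class OverloadDemo {
    // Version 2's new overload: one fewer required parameter.
    public static string Foo(string s1, string s3 = "v2") {
        return "version 2";
    }

    // The original version 1 method.
    public static string Foo(string s1, string s2, string s3 = "v1") {
        return "version 1";
    }

    static void Main() {
        // With both overloads visible at compile time, the two-parameter
        // overload wins because it needs no optional-parameter fill-in.
        Console.WriteLine(Foo("one", "two")); // prints "version 2"
    }
}
```

Compile this against only the three-parameter Foo and the same call site prints "version 1"; recompile with both overloads present and it silently switches, which is exactly the subtlety described above.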

code comments edit

One nice new feature introduced in C# 4 is support for named and optional arguments. While these two features are often discussed together, they really are orthogonal concepts.

Let’s look at a quick example of these two concepts at work. Suppose we have a class with one method having the following signature.

  // v1
  public static void Redirect(string url, string protocol = "http");

This hypothetical library contains a single method that takes in two parameters, a required string url and an optional string protocol.

The following shows the six possible ways this method can be called.

HttpHelpers.Redirect("");
HttpHelpers.Redirect(url: "");
HttpHelpers.Redirect("", "https");
HttpHelpers.Redirect("", protocol: "https");
HttpHelpers.Redirect(url: "", protocol: "https");
HttpHelpers.Redirect(protocol: "https", url: "");

Notice that whether or not a parameter is optional, you can choose to refer to the parameter by name or not. In the last case, notice that the parameters are specified out of order. In this case, using named parameters is required.

The Next Version

One apparent benefit of using optional parameters is that you can reduce the number of overloads your API has. However, relying on optional parameters does have its quirks you need to be aware of when it comes to versioning.

Let’s suppose we’re ready to make version two of our awesome HttpHelpers library and we add an optional parameter to the existing method.

// v2
public static void Redirect(string url, string protocol = "http", bool permanent = false);

What happens when we run the client application without recompiling it?

We get the following exception message.

Unhandled Exception: System.MissingMethodException: Method not found: 'Void HttpLib.HttpHelpers.Redirect(System.String, System.String)'....

Whoops! By changing the method signature, we caused a runtime breaking change to occur. That’s not good.

Let’s try to avoid a runtime breaking change by adding an overload instead of changing the existing method.

// v2.1
public static void Redirect(string url, string protocol = "http");
public static void Redirect(string url, string protocol = "http", bool permanent = false);

Now, when we run our client application, it works fine. It’s still calling the two parameter version of the method. Adding overloads is never a runtime breaking change.

But let’s suppose we’re now ready to update the client application and we attempt to recompile it. Uh oh!

The call is ambiguous between the following methods or properties: 'HttpLib.HttpHelpers.Redirect(string, string)' and 'HttpLib.HttpHelpers.Redirect(string, string, bool)'

While adding an overload is not a runtime breaking change, it can result in a compile time breaking change. Doh!

Talk about a catch-22! If we add an overload, we break in one way. If we instead add an argument to the existing method, we’re broken in another way.

Why Is This Happening?

When I first heard about optional parameter support, I falsely assumed it was implemented as a feature of the CLR which might allow dynamic dispatch to the method. This was perhaps very naive of me.

My co-worker Levi (no blog still) broke it down for me as follows. Keep in mind, he’s glossing over a lot of details, but at a high level, this is roughly what’s going on.

When optional parameters are in use, the C# compiler follows a simple algorithm to determine which overload of a method you actually meant to call. It considers as a candidate *every* overload of the method, then one by one it eliminates overloads that can’t possibly work for the particular parameters you’re passing in.

Consider these overloads:

public static void Blah(int i);
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

Suppose you make the following method call: Blah(0).

The last candidate is eliminated since the parameter types are incorrect, which leaves us with the first two.

public static void Blah(int i); // Candidate
public static void Blah(int i, int j = 5); // Candidate
public static void Blah(string i = "Hello");  // Eliminated

At this point, the compiler needs to perform a conflict resolution. The conflict resolution is very simple: if one of the candidates has the same number of parameters as the call site, it wins. Otherwise the compiler bombs with an error.

In the case of Blah(0), the first overload is chosen since the number of parameters is exactly one.

public static void Blah(int i); //WINNER!!!
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

This allows you to take an existing method that doesn’t have optional parameters and add overloads that have optional parameters without breaking anybody (except in Visual Basic which has a slightly different algorithm).
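As a sanity check, the Blah(0) resolution above can be observed directly. This sketch uses the same overloads as above, returning a marker string instead of doing real work:

```csharp
using System;

static class BlahDemo {
    public static string Blah(int i) { return "one parameter"; }
    public static string Blah(int i, int j = 5) { return "two parameters"; }
    public static string Blah(string i = "Hello") { return "string"; }

    static void Main() {
        // The int overload with exactly one parameter wins the tie-break.
        Console.WriteLine(Blah(0)); // prints "one parameter"
    }
}
```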

But what happens if you need to version an API that already has optional parameters?  Consider this example:

public static void Helper(int i = 2, int j = 3);            // v1
public static void Helper(int i = 2, int j = 3, int k = 4); // added in v2

And say that the call site is Helper(j: 10). Both candidates still exist after the elimination process, but since neither candidate has exactly one parameter to match the single argument, the compiler will not prefer one over another. This leads to the compilation error we saw earlier about the call being ambiguous.


The reason that optional parameters were introduced to C# 4 in the first place was to support COM interop. That’s it. And now, we’re learning about the full implications of this fact.

If you have a method with optional parameters, you can never add an overload with additional optional parameters out of fear of causing a compile-time breaking change. And you can never remove an existing overload, as this has always been a runtime breaking change.

You pretty much need to treat it like an interface. Your only recourse in this case is to write a new method with a new name.
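A sketch of that recourse, with HelperEx standing in as a hypothetical new name (the original Helper signature is left untouched so existing call sites keep compiling):

```csharp
using System;

static class HttpLibDemo {
    // v1 surface, shipped with optional parameters: treat it as frozen.
    public static string Helper(int i = 2, int j = 3) {
        return "Helper(" + i + ", " + j + ")";
    }

    // v2: a new name instead of a new overload, so existing calls
    // like Helper(j: 10) stay unambiguous.
    public static string HelperEx(int i = 2, int j = 3, int k = 4) {
        return "HelperEx(" + i + ", " + j + ", " + k + ")";
    }

    static void Main() {
        Console.WriteLine(Helper(j: 10));   // existing v1 call site, unchanged
        Console.WriteLine(HelperEx(j: 10)); // new call sites opt in explicitly
    }
}
```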

So be aware of this if you plan to use optional arguments in your APIs.

UPDATE: By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before. However, this can lead to other subtle versioning issues as my follow-up post describes.

code comments edit

UPDATE: A reader named Matthias pointed out there is a flaw in my code. Thanks Matthias! I’ve corrected it in my GitHub Repository. The code would break if your attribute had an array property or constructor argument.

I’ve been working on a lovely little prototype recently but ran into a problem where my code receives a collection of attributes and needs to change them in some way and then pass the changed collection along to another method that consumes the collection.


I want to avoid changing the attributes directly, because when you use reflection to retrieve attributes, those attributes may be cached by the framework. So changing an attribute is not a safe operation, as you may be changing the attribute for everyone else who retrieves it.

What I really wanted to do is create a copy of all these attributes, and pass the collection of copied attributes along. But how do I do that?


Brad Wilson and David Ebbo to the rescue! In a game of geek telephone, David told Brad a while back, who then recently told me, about a little class in the framework called CustomAttributeData.

This class takes advantage of a feature of the framework known as a Reflection-Only context. This allows you to examine an assembly without instantiating any of its types. This is useful, for example, if you need to examine an assembly compiled against a different version of the framework or a different platform.

Copying an Attribute

As you’ll find out, it’s also useful when you need to copy an attribute. This might raise the question in your head, “if you have an existing attribute instance, why can’t you just copy it?” The problem is that a given attribute might not have a default constructor. So then you’re left with the challenge of figuring out how to populate the parameters of a constructor from an existing instance of an attribute. Let’s look at a sample attribute.

[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public class SomethingAttribute : Attribute {
  public SomethingAttribute(string readOnlyProperty) {
      ReadOnlyProperty = readOnlyProperty;
  }

  public string ReadOnlyProperty { get; private set; }
  public string NamedProperty { get; set; }
  public string NamedField;
}

And here’s an example of this attribute applied to a class a couple of times.

[Something("ROVal1", NamedProperty = "NVal1", NamedField = "Val1")]
[Something("ROVal2", NamedProperty = "NVal2", NamedField = "Val2")]
public class Character {
}

Given an instance of this attribute, I might be able to figure out how the constructor argument should be populated by assuming a convention of using the property with the same name as the argument. But what if the attribute had a constructor argument that had no corresponding property? Keep in mind, I want this to work with arbitrary attributes, not just ones that I wrote.

CustomAttributeData saves the day!

This is where CustomAttributeData comes into play. An instance of this class tells you everything you need to know about the attribute and how to construct it. It provides access to the constructor, the constructor parameters, and the named parameters used to declare the attribute.

Let’s look at a method that will create an attribute instance given an instance of CustomAttributeData.

public static Attribute CreateAttribute(this CustomAttributeData data) {
  var arguments = from arg in data.ConstructorArguments
                  select arg.Value;

  var attribute = data.Constructor.Invoke(arguments.ToArray())
      as Attribute;

  foreach (var namedArgument in data.NamedArguments) {
    var propertyInfo = namedArgument.MemberInfo as PropertyInfo;
    if (propertyInfo != null) {
      propertyInfo.SetValue(attribute, namedArgument.TypedValue.Value, null);
    }
    else {
      var fieldInfo = namedArgument.MemberInfo as FieldInfo;
      if (fieldInfo != null) {
        fieldInfo.SetValue(attribute, namedArgument.TypedValue.Value);
      }
    }
  }

  return attribute;
}

The code sample demonstrates how we use the information within the CustomAttributeData instance to figure out how to create an instance of the attribute described by the data.

So how did we get the CustomAttributeData instance in the first place? That’s pretty easy: we call the CustomAttributeData.GetCustomAttributes() method. With these pieces in hand, it’s pretty straightforward now to copy the attributes on a type or member. Here’s a set of extension methods I wrote to do just that.

NOTE: The following code does not handle array properties and constructor arguments correctly. Check out my repository for the correct code.

public static IEnumerable<Attribute> GetCustomAttributesCopy(this Type type) {
  return CustomAttributeData.GetCustomAttributes(type).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(
    this Assembly assembly) {
  return CustomAttributeData.GetCustomAttributes(assembly).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(
    this MemberInfo memberInfo) {
  return CustomAttributeData.GetCustomAttributes(memberInfo).CreateAttributes();
}

public static IEnumerable<Attribute> CreateAttributes(
    this IEnumerable<CustomAttributeData> attributesData) {
  return from attributeData in attributesData
         select attributeData.CreateAttribute();
}

And here’s a bit of code I wrote in a console application to demonstrate the usage.

foreach (var instance in typeof(Character).GetCustomAttributesCopy()) {
  var somethingAttribute = instance as SomethingAttribute;
  Console.WriteLine("ReadOnlyProperty: " + somethingAttribute.ReadOnlyProperty);
  Console.WriteLine("NamedProperty: " + somethingAttribute.NamedProperty);
  Console.WriteLine("NamedField: " + somethingAttribute.NamedField);
}

And there you have it, I can grab the attributes from a type and produce a copy of those attributes.
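For the curious, the array flaw mentioned in the update comes down to how CustomAttributeTypedArgument represents array values. Here’s a sketch of the idea (my reconstruction, not the exact code from the repository):

```csharp
using System;
using System.Collections.ObjectModel;
using System.Reflection;

static class TypedArgumentHelper {
    // Array-typed arguments come back as a ReadOnlyCollection of
    // CustomAttributeTypedArgument rather than as the raw array, so
    // they must be unwrapped element by element before being passed
    // to the constructor or a property setter.
    public static object UnwrapValue(CustomAttributeTypedArgument argument) {
        var elements = argument.Value
            as ReadOnlyCollection<CustomAttributeTypedArgument>;
        if (elements == null) {
            return argument.Value; // scalar value: usable as-is
        }

        var array = Array.CreateInstance(
            argument.ArgumentType.GetElementType(), elements.Count);
        for (int i = 0; i < elements.Count; i++) {
            array.SetValue(UnwrapValue(elements[i]), i);
        }
        return array;
    }
}
```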

With this out of the way, I can hopefully continue with my original prototype which led me down this rabbit hole in the first place. It always seems to happen this way, where I start a blog post, only to start writing a blog post to support that blog post, and then a blog post to support that one. Much like a dream within a dream within a dream. ;)

mvc, code comments edit

In ASP.NET MVC 3 Preview 1, we introduced some syntactic sugar for creating and accessing view data using new dynamic properties.

Sugar, it’s not just for breakfast.

Within a controller action, the ViewModel property of Controller allows setting and accessing view data via property accessors that are resolved dynamically at runtime. From within a view, the View property provides the same thing (see the addendum at the bottom of this post for why these property names do not match).


This blog post talks about ASP.NET MVC 3 Preview 1, which is a pre-release version. Specific technical details may change before the final release of MVC 3. This release is designed to elicit feedback on features with enough time to make meaningful changes before MVC 3 ships, so please comment on this blog post if you have comments.

Let’s take a look at the old way and the new way of doing this:

The old way

The following is some controller code that adds a string to the view data.

public ActionResult Index() {
  ViewData["Message"] = "Some Message";
  return View();
}

The following is code within a view that accesses the view data we supplied in the controller action.

<h1><%: ViewData["Message"] %></h1>

The new way

This time around, we use the ViewModel property which is typed as dynamic. We use it like we would any property.

public ActionResult Index() {
  ViewModel.Message = "Some Message";
  return View();
}

And we reference it in a Razor view (this works in a WebForms view too):

<h1>@View.Message</h1>
Note that View.Message is equivalent to View["Message"].

Going beyond properties

However, what might not be clear to everyone is that you can also store and call methods using the same approach. Just for fun, I wrote an example of doing this.

In the controller, I defined a lambda expression that takes in an index and two strings. It returns the first string if the index is even, and the second string if the index is odd. It’s very simple.

The next thing I do is assign that lambda to the Cycle property of ViewModel, which is created on the spot since ViewModel is dynamic.

public ActionResult Index() {
  ViewModel.Message = "Welcome to ASP.NET MVC!";

  Func<int, string, string, string> cycleMethod = 
    (index, even, odd) => index % 2 == 0 ? even : odd;
  ViewModel.Cycle = cycleMethod;

  return View();
}

Now, I can dynamically call that method from my view.

@for (var i = 0; i < 10; i++) {
    <tr class="@View.Cycle(i, "even-css", "odd-css")">
        @* row content elided *@
    </tr>
}

As a fan of dynamic languages, I find this technique to be pretty slick. :)

The point of this blog post was to show that this is possible, but it raises the question, “why would anyone want to do this over writing a custom helper method?”

Very good question! Right now, it’s mostly a curiosity to me, but I can imagine cases where this might come in handy. However, if you re-use such view functionality or really need Intellisense, I’d highly recommend making it a helper method. I think this approach works well for rapid prototyping and maybe for one time use helper functions.

Perhaps you’ll find even better uses I didn’t think of at all.

Addendum: The Property name mismatch

Earlier in this post I mentioned the mismatch between property names, ViewModel vs View. I also talked about this in a video I recorded for MvcConf on MVC 3 Preview 1. Originally, we wanted to pick a nice terse name for this property so when referencing it in the view, there is minimal noise. We liked the property View for this purpose and implemented it for our view page first.

But when we went to port this property over to the Controller, we realized it wouldn’t work. Anyone care to guess why? Yep, that’s right. Controller already has a method named View so it can’t also have a property named the same thing. So we called it ViewModel for the time being and figured we’d change it once we came up with a better name.

So far, we haven’t come up with a better name that’s both short and descriptive. And before you suggest it, the acronym of “View Data” is not an option.

If you have a better name, do suggest it. :)

Addendum 2: Unit Testing

Someone on Twitter asked me how you would unit test this action method. Here’s an example of a unit test that shows you can simply call this dynamic method directly from within a unit test (see the act section below).

public void CanCallCycle() {
  // arrange
  var controller = new HomeController();

  // act
  string even = controller.ViewModel.Cycle(0, "even", "odd");

  // assert
  Assert.AreEqual("even", even);
}

Tags: aspnetmvc, dynamic, viewdata

code, mvc comments edit

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Feels like just yesterday that we released ASP.NET MVC 2 to the world and here I am already talking about an early preview. In a way, we’re right on schedule. It was almost exactly a year ago that we released Preview 1 of ASP.NET MVC 2.

Today I’m happy to announce that ASP.NET MVC 3 Preview 1 is available for download. Give it a try and let us know what you think. Some key notes before you give it a whirl:

  • ASP.NET MVC 3 Preview 1 tooling requires Visual Studio 2010
  • ASP.NET MVC 3 Preview 1 runtime requires the ASP.NET 4 runtime

As usual, to find out what’s in this release, check out the release notes. Also at the recent MVCConf, a virtual conference about ASP.NET MVC, I recorded a talk that provided a sneak peek at ASP.NET MVC 3 Preview 1. The audio quality isn’t great, but I do demo some of the key new features so be sure to check it out.

So what’s in this release that I’m excited about? Here’s a small sampling:

  • Razor View Engine, which ScottGu wrote about recently. Note that for Preview 1, we only support the C# version (CSHTML). In later previews, we will add support for the VB.NET version (VBHTML). Also, Intellisense support for Razor syntax in Visual Studio 2010 will be released later.
  • Dependency Injection hooks using service locator interface. Brad Wilson should have a few blog posts on this over the next few days.
  • Support for .NET 4 Data Annotation and Validation attributes.
  • Add View dialog support for multiple view engines including custom view engines.
  • Global Action Filters

In the next few days you should see more details about each of these areas start to show up in various blog posts. I’ll try to keep this blog post updated with relevant blog posts so you can find them all. Enjoy!

mvc, code comments edit

I wanted to confirm something about how to upload a file or set of files with ASP.NET MVC and the first search result for the phrase “uploading a file with mvc” is Scott Hanselman’s blog post on the topic.

His blog post is very thorough and helps provide a great understanding of what’s happening under the hood. The only complaint I have is that the code could be much simpler since we’ve made improvements to ASP.NET MVC 2. I write this blog post in the quixotic hopes of knocking his post from the #1 spot.

Uploading a single file

Let’s start with the view. Here’s a form that will post back to the current action.

<form action="" method="post" enctype="multipart/form-data">
  <label for="file">Filename:</label>
  <input type="file" name="file" id="file" />

  <input type="submit" />
</form>

Here’s the action method that this view will post to which saves the file into a directory in the App_Data folder named “uploads”.

public ActionResult Index(HttpPostedFileBase file) {
  if (file.ContentLength > 0) {
    var fileName = Path.GetFileName(file.FileName);
    var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
    file.SaveAs(path);
  }
  return RedirectToAction("Index");
}

Notice that the argument to the action method is an instance of HttpPostedFileBase. ASP.NET MVC 2 introduces a new value providers feature which I’ve covered before.

Whereas model binders are used to bind incoming data to an object model, value providers provide an abstraction for the incoming data itself.

In this case, there’s a default value provider called the HttpFileCollectionValueProvider which supplies the uploaded files to the model binder. Also notice that the argument name, file, is the same name as the name of the file input. This is important for the model binder to match up the uploaded file to the action method argument.

Uploading multiple files

In this scenario, we want to upload a set of files. We can simply have multiple file inputs all with the same name.

<form action="" method="post" enctype="multipart/form-data">
  <label for="file1">Filename:</label>
  <input type="file" name="files" id="file1" />
  <label for="file2">Filename:</label>
  <input type="file" name="files" id="file2" />

  <input type="submit" />
</form>

Now, we just tweak our controller action to accept an IEnumerable of HttpPostedFileBase instances. Once again, notice that the argument name matches the name of the file inputs.

public ActionResult Index(IEnumerable<HttpPostedFileBase> files) {
  foreach (var file in files) {
    if (file.ContentLength > 0) {
      var fileName = Path.GetFileName(file.FileName);
      var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
      file.SaveAs(path);
    }
  }
  return RedirectToAction("Index");
}

Yes, it’s that easy. :)

Tags: aspnetmvc, upload

mvc, razor comments edit

UPDATE: Check out my Razor View Syntax Quick Reference for a nice quick reference to Razor.

There’s an old saying, “Good things come to those who wait.” I remember when I first joined the ASP.NET MVC project, I (and many customers) wanted to include a new streamlined custom view engine. Unfortunately at the time, it wasn’t in the cards since we had higher priority features to implement.

Well the time for a new view engine has finally come as announced by Scott Guthrie in this very detailed blog post.

Photo by "clix"

While I’m very excited about the new streamlined syntax, there’s a lot under the hood I’m also excited about.

Andrew Nurse, who writes the parser for the Razor syntax, provides more under-the-hood details in this blog post. Our plan for the next version of ASP.NET MVC is to make this the new default view engine, but for backwards compatibility we’ll keep the existing WebForm based view engine.

As part of that work, we’re also focusing on making sure ASP.NET MVC tooling supports any view engine. In ScottGu’s blog post, if you look carefully, you’ll see Spark listed in the view engines drop down in the Add View dialog. We’ll make sure it’s trivially easy to add Spark, Haml, whatever, to an ASP.NET MVC project. :)

Going back to Razor, one benefit that I look forward to is that unlike an ASPX page, it’s possible to fully compile a CSHTML page without requiring the ASP.NET pipeline. So while you can allow views to be compiled via the ASP.NET runtime, it may be possible to fully compile a site using T4 for example. A lot of cool options are opened up by a cleanly implemented parser.

In the past several months, our team has been working with other teams around the company to take a more holistic view of the challenges of developing web applications. ScottGu recently blogged about the results of some of this work:

  • SQLCE 4 – Medium trust x-copy deployable database for ASP.NET.
  • IIS Express – A replacement for Cassini that does the right thing.

The good news is there’s a lot more coming! In some cases, we had to knock some heads together (our heads and the heads of other teams) to drive focus on what developers really want and need rather than too much pie in the sky architectural astronomy.

I look forward to talking more about what I’ve been working on when the time is right. :)

subtext, personal comments edit

My son and I returned last night from a week-long vacation to visit my parents in Anchorage, Alaska. Apparently, having the boys out of the house was quite the vacation for my wife as well. :)

We had a great time watching the World Cup and going on outings to the zoo as well as hiking.


Well, at least one of us was hiking while another was just enjoying the ride. We hiked up a trail to Flattop which has spectacular views of Anchorage. Unfortunately, we didn’t make it all the way to the top as the trail became a bit too much while carrying a toddler who was more interested in watching Go, Diego, Go episodes on his iPod.


Funny how all that “hiking” works up an appetite.


Also, while in Alaska I gave a talk on ASP.NET MVC 2 to the local .NET User Group. It was their second meeting ever and somehow, in the delirium of perpetual sunlight, I spent two hours talking! It was slated to be a one hour talk.


I didn’t see a hint of resentment in the group, though, as they peppered me with great questions after the talk. Apparently, some of them are fans of .NET. ;)

The other thing I was able to do while in Alaska was finish up a bug fix release of Subtext in the wake of our big 2.5 release. There were some high priority bugs in that release. Simone has the details and breakdown on the Subtext 2.5.1 release.

code comments edit

In my last blog post, I wrote about the proper way to check for empty enumerations and proposed an IsNullOrEmpty method for collections which sparked a lot of discussion.

This post covers a similar issue, but from a different angle. A very long time ago, I wrote about my love for the null coalescing operator. However, over time, I’ve found it to be not quite as useful as it could be when dealing with strings. For example, here’s the code I might want to write:

public static void DoSomething(string argument) {
  var theArgument = argument ?? "defaultValue";
}

But here’s the code I actually end up writing:

public static void DoSomething(string argument) {
  var theArgument = argument;
  if (String.IsNullOrWhiteSpace(theArgument)) {
    theArgument = "defaultValue";
  }
}

The issue here is that I want to treat an argument that consists only of whitespace as if the argument is null and replace the value with my default value. This is something the null coalescing operator won’t help me with.
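To make the gap concrete, here's a tiny self-contained sketch showing where `??` helps and where it doesn't:

```csharp
using System;

class NullCoalescingDemo {
    static void Main() {
        string missing = null;
        string blank = "   ";

        // ?? kicks in for null...
        Console.WriteLine(missing ?? "defaultValue"); // prints "defaultValue"

        // ...but a whitespace-only string sails right through.
        Console.WriteLine(blank ?? "defaultValue");   // prints "   "
    }
}
```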

This led me to jokingly propose a null or empty coalescing operator on Twitter with the syntax ???. This would allow me to write something like:

var s = argument ??? "default";

Of course, that doesn’t go far enough because wouldn’t I also need a null or whitespace coalescing operator???? ;)

Perhaps a better approach than the PERLification of C# is to write an extension method that normalizes strings in such a way that you can use the tried and true (and existing!) null coalescing operator.

Thus I present to you the AsNullIfEmpty and AsNullIfWhiteSpace methods!

Here’s my previous example refactored to use these methods.

public static void DoSomething(string argument) {
  var theArgument = argument.AsNullIfWhiteSpace() ?? "defaultValue";
}


You can also take the same approach with collections.

public static void DoSomething(IEnumerable<string> argument) {
  var theArgument = argument.AsNullIfEmpty() ?? new string[] { "default" };
}


The following is the code for these simple methods.

public static class EnumerationExtensions {
  public static string AsNullIfEmpty(this string items) {
    if (String.IsNullOrEmpty(items)) {
      return null;
    }
    return items;
  }

  public static string AsNullIfWhiteSpace(this string items) {
    if (String.IsNullOrWhiteSpace(items)) {
      return null;
    }
    return items;
  }

  public static IEnumerable<T> AsNullIfEmpty<T>(this IEnumerable<T> items) {
    if (items == null || !items.Any()) {
      return null;
    }
    return items;
  }
}

Another approach that some commenters on my last post recommended is to write a Coalesce method. That’s also a pretty straightforward approach which I leave as an exercise to the reader. :)
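If you want a head start on that exercise, here's one possible shape such a method might take. The name and signature are my own sketch, not something from the comments; it returns the first candidate that isn't null or whitespace:

```csharp
using System;

public static class StringCoalesceExtensions {
    // Returns the first value that is neither null nor whitespace,
    // or null if no candidate qualifies.
    public static string Coalesce(this string value, params string[] fallbacks) {
        if (!String.IsNullOrWhiteSpace(value)) {
            return value;
        }
        foreach (var candidate in fallbacks) {
            if (!String.IsNullOrWhiteSpace(candidate)) {
                return candidate;
            }
        }
        return null;
    }
}
```

With that in place, `var s = argument.Coalesce("default");` reads about as cleanly as the ??? operator would have, no new syntax required.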