
Good news! I have contributed my T4 template to the .less project. You can access it from the big download button on their homepage.

Pain is often a great motivator for invention, unless you become dull to the pain. I think CSS is one of those cases where there’s a lot of pain that we as web developers often take in stride.

Fortunately not everyone accepts that pain and efforts such as LESS are born. As the home page states:

LESS extends CSS with: variables, mixins, operations and nested rules.

Best of all, LESS uses existing CSS syntax. This means you can rename your current .css files to .less and they’ll just work.
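For the unfamiliar, here’s a small contrived snippet (my own, not one of the .less homepage samples) showing all four features at once:

```less
@brand-color: #4D926F;
@base-padding: 10px;

.rounded {
  border-radius: 5px;
}

#header {
  color: @brand-color;           /* variable */
  padding: @base-padding * 2;    /* operation */
  .rounded;                      /* mixin */

  a {                            /* nested rule */
    text-decoration: none;
  }
}
```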

LESS solves a lot of the pain of duplication when writing CSS. Originally written as a Ruby gem, Christopher Owen, Erik van Brakel and Daniel Hoelbing ported a version to .NET, not surprisingly called .less.

.Less implements LESS as an HttpHandler you add to your website mapped to the .less extension. I think this is a great approach when combined with the proper cache headers in the response.

However, sometimes I just want to have static CSS files. So I decided to write a T4 template for .less. Simply drop it in a folder that contains .less files and it will generate a .css file for each .less file.

Not only that, I also added a setting to minify the CSS using the YUI Compressor for .NET. This allows you to write your CSS using clear, readable .less syntax, but deploy minified CSS. To turn off minification, just edit the T4 file and change the value of the _minified variable.

Usage is very easy. The following screenshot shows a folder containing a .less file for each sample on the .less homepage. Notice that I don’t actually reference the YUI Compressor or .less libraries from the project. I just added them to a solution folder so you could see them (and so the T4 template can find them).


With this in place, I can simply drop my file into the Css directory and voila!


You can see the converted CSS files by expanding the node in the solution explorer as shown in the screenshot above.

In order to produce multiple output files, I used the approach Damien Guard describes in his post, Multiple outputs from T4 made easy – revisited. Basically I just embedded his T4 in mine.

You’ll notice there’s an extra file, T4CSS.log, that’s generated. Unfortunately using Damien’s approach, it’s not possible to suppress the default file generated by the T4 template, so I used that to log the operations.

Also note that at the time of this writing, there’s a bug in .LESS that causes one of the samples to throw an exception. I reported it to the developers so hopefully that gets fixed soon.

Lastly, there have been T4 changes in Visual Studio 2010. I didn’t try this template with VS10 yet. When importing the assemblies, you may need to either add them to the GAC or reference them with a full path. When I get some time I’ll play around with that to see how to get it to work.



Just wanted to highlight a couple of podcasts whose hosts were gracious enough (suckers!) to have me as a guest.



In this podcast I join the fellas at HerdingCode. Although this podcast came out after the Hanselminutes one, it was actually recorded a long time prior to that one. Jon Galloway, who does the editing, probably has some lame excuse about life changing events keeping him busy.

I spent most of the time covering what’s new with ASP.NET MVC 2 Preview 2, how the community influences our project, and how we prioritize bugs. I also finally reveal the dirty truth about Rob Conery’s departure from Microsoft.

What’s notable about this episode is the introduction of a brand new segment, “Abusive Questions from Twitter”. I was having problems with my Borg implant and there was a disruption in the transmission of marketing answers from the mother ship. Hear me stumble and ramble through the answer to that one, when all I really meant to say was “I think ASP.NET MVC has a lot going for it, but use what makes you happy.” These guys are great at putting you on the spot. :)

Listen: Herding Code 64: Phil Haack on MVC 2


In this one I chat with my friend and co-worker Scott Hanselman, who is as great an interviewer as he is a speaker.

We spend most of the interview talking about ASP.NET MVC 2 Beta.

This one is notable as the interview is interrupted by a call from the one and only ScottGu, who says some things about me (audible on the podcast) that you never want to hear said about you by your VP.

He was only joking. (Right Scott? Ha ha ha… It was a joke, right? Ha ha ha…. Ha?)

Listen: Hanselminutes Show #188: ASP.NET MVC 2 Beta with Phil Haack

I hope you enjoy them, and if you hated them, I welcome constructive criticism. It’s so much easier to gather your thoughts when writing than when being interviewed. I’m impressed that these guys manage to stay quick-witted and come up with great questions time and time again.

Technorati Tags: aspnetmvc, podcast


UPDATE: I’ve updated this post to cover changes to client validation made in ASP.NET MVC 2 RC 2.

This is the third post in my series ASP.NET MVC 2 Beta and its new features.

  1. ASP.NET MVC 2 Beta Released (Release Announcement)
  2. Html.RenderAction and Html.Action
  3. ASP.NET MVC 2 Custom Validation

In this post I will cover validation.

No, not that kind of validation, though I do think you’re good enough, you’re smart enough, and doggone it, people like you.

Rather, I want to cover building a custom validation attribute using the base classes available in System.ComponentModel.DataAnnotations. ASP.NET MVC 2 has built-in support for data annotation validation attributes for doing validation on a server. For details on how data annotations work with ASP.NET MVC 2, check out Brad’s blog post.

But I won’t stop there. I’ll then cover how to hook into ASP.NET MVC 2’s client validation extensibility so you can have validation logic run as JavaScript on the client.

Finally, I will cover some of the changes we still want to make for the release candidate.

Of course, the first thing I need is a contrived scenario. Due to my lack of imagination, I’ll build a PriceAttribute that validates that a value is greater than the specified price and that it ends in 99 cents. Thus $20.00 is not valid, but $19.99 is valid.

Here’s the code for the attribute:

public class PriceAttribute : ValidationAttribute {
  public double MinPrice { get; set; }

  public override bool IsValid(object value) {
    if (value == null) {
      return true;
    }
    // Convert to decimal to avoid double rounding surprises
    // (19.99 as a raw double is slightly less than 19.99).
    decimal price = Convert.ToDecimal(value);
    if (price < (decimal)MinPrice) {
      return false;
    }
    decimal cents = price - Math.Truncate(price);
    if (cents < 0.99m || cents >= 0.995m) {
      return false;
    }
    return true;
  }
}
Notice that if the value is null, we return true. This attribute is not intended to validate required fields. I’ll defer to the RequiredAttribute to validate whether the value is required or not. This allows me to place this attribute on an optional value and not have it show an error when the user leaves the field blank.

We can test this out quickly by creating a view model and applying this attribute to the model. Here’s an example of the model.

public class ProductViewModel {
  [Price(MinPrice = 1.99)]
  public double Price { get; set; }

  [Required]
  public string Title { get; set; }
}

And let’s quickly write a view (Index.aspx) that will display an edit form which we can use to edit the product.

<%@ Page Language="C#" Inherits="ViewPage<ProductViewModel>" %>

<% using (Html.BeginForm()) { %>

  <%= Html.TextBoxFor(m => m.Title) %>
    <%= Html.ValidationMessageFor(m => m.Title) %>
  <%= Html.TextBoxFor(m => m.Price) %>
    <%= Html.ValidationMessageFor(m => m.Price) %>
    <input type="submit" />
<% } %>

Now we just need a controller with two actions, one which will render the edit view and the other which will receive the posted ProductViewModel. For the sake of demonstration, these methods are exceedingly simple and don’t do anything useful really.

public class HomeController : Controller {
  public ActionResult Index() {
    return View();
  }

  [HttpPost]
  public ActionResult Index(ProductViewModel model) {
    return View(model);
  }
}
We haven’t enabled client validation yet, but let’s see what happens when we view this page and try to submit some values.


As expected, it posts the form to the server and we see the error messages.

Making It Work In The Client

Great, now we have it working on the server, but how do we get this working with client validation?

The first step is to reference the appropriate scripts. In Site.master, I’ve added the following two script references.

<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>

The next step is to enable client validation for the form by calling EnableClientValidation before we call BeginForm. Under the hood, this sets a flag in the new FormContext which lets the BeginForm method know that client validation is enabled. That way, if you set an id for the form, we’ll know which id to use when hooking up client validation. If you don’t supply one, BeginForm will render an id for you.

<%@ Page Language="C#" Inherits="ViewPage<ProductViewModel>" %>

<% Html.EnableClientValidation(); %>
<% using (Html.BeginForm()) { %>

  <%= Html.TextBoxFor(m => m.Title) %>
    <%= Html.ValidationMessageFor(m => m.Title) %>
  <%= Html.TextBoxFor(m => m.Price) %>
    <%= Html.ValidationMessageFor(m => m.Price) %>
    <input type="submit" />
<% } %>

If you try this now, you’ll notice that the Title field validates on the client, but the Price field doesn’t. We need to take advantage of the validation extensibility available to hook in a client validation function for the price validation attribute we wrote earlier.

The first step is to write a ModelValidator associated with the attribute. Since the attribute is a data annotation, I can simply derive from DataAnnotationsModelValidator<PriceAttribute> like so.

public class PriceValidator : DataAnnotationsModelValidator<PriceAttribute> {
  double _minPrice;
  string _message;

  public PriceValidator(ModelMetadata metadata, ControllerContext context,
      PriceAttribute attribute)
      : base(metadata, context, attribute) {
    _minPrice = attribute.MinPrice;
    _message = attribute.ErrorMessage;
  }

  public override IEnumerable<ModelClientValidationRule> GetClientValidationRules() {
    var rule = new ModelClientValidationRule {
      ErrorMessage = _message,
      ValidationType = "price"
    };
    rule.ValidationParameters.Add("min", _minPrice);

    return new[] { rule };
  }
}
The GetClientValidationRules method returns a set of ModelClientValidationRule instances. Each of these represents metadata for a validation rule that is written in JavaScript and will run on the client. This is purely metadata at this point; the set gets serialized into JSON and emitted in the page so that client validation can hook up all the correct rules.

In this case, we only have one rule and we are calling its validation type “price”. This fact will come into play later.

The next step is for us to register this validator. Since we wrote this as a Data Annotations validator, we can register it in Application_Start as demonstrated by the following code snippet. If you’re using another model validation provider, such as the one for the Enterprise Library’s Validation Block, it might have its own means of registration.

protected void Application_Start() {
  DataAnnotationsModelValidatorProvider
    .RegisterAdapter(typeof(PriceAttribute), typeof(PriceValidator));
}

At this point, we still need to write the actual JavaScript validation logic as well as the hookup to the JSON metadata. For the purposes of this demo, I’ll put the script inline with the view.

<script type="text/javascript">
  Sys.Mvc.ValidatorRegistry.validators["price"] = function(rule) {
    // initialization code can go here.
    var minValue = rule.ValidationParameters["min"];

    // we return the function that actually does the validation
    return function(value, context) {
      if (value > minValue) {
        var cents = value - Math.floor(value);
        if (cents >= 0.99 && cents < 0.995) {
          return true; /* success */
        }
      }
      return rule.ErrorMessage;
    };
  };
</script>
Now when I run the demo, I can see validation take effect as I tab out of each field. Note that to get the required field validation to fire, you’ll need to type something in the field and then clear it before tabbing out.

Let’s pause for a moment and take a deeper look at what’s going on the code above. At a high level, we’re adding a client validator to a dictionary of validators using the key “price”. You may recall that “price” is the validation type we defined when writing the PriceValidator. That’s how we hook up this client function to the server validation attribute.

You’ll notice that the function we add to the validators itself returns a function which does the actual validation. Why is there this seemingly extra level of indirection? Why not simply add a function that does the validation directly to the dictionary?

This approach allows us to run some initialization code at the time the validator is being hooked up to the metadata (as opposed to every time validation occurs). This is helpful if you have expensive initialization logic. The validation function we return from that initialization code may get called multiple times while the form is being validated.

Notice that in this case, the initialization code grabs the min value from the ValidationParameters dictionary. This is the same dictionary created in the PriceValidator class, but now living on the client.

We then run through logic similar to the server-side validation code. The difference here is that we return true to indicate success and return the error message when validation fails.

Validation using jQuery Validation?

Client validation in ASP.NET MVC is meant to be extremely extensible. At the core, we emit some JSON metadata describing what fields to validate and what type of validation to perform. This makes it possible to build adapters which can hook up any client validation library to ASP.NET MVC.

For example, if you’re a fan of jQuery, it’s quite easy to use our adapter to hook up the jQuery Validation library to perform client validation.

First, reference the following scripts.

<script src="/Scripts/jquery-1.4.1.js" type="text/javascript"></script>
<script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcJQueryValidation.js" type="text/javascript"></script>

When we emit the JSON in the page, we define it as part of an array which we declare inline. If you view source you’ll see something like this (truncated for brevity):

<script type="text/javascript">
if (!window.mvcClientValidationMetadata) {
  window.mvcClientValidationMetadata = [];
}
...
  [{"ErrorMessage":"This field is required", ..., "ValidationType":"required"}]},
...
</script>
Note the array named window.mvcClientValidationMetadata.

Simply by referencing the MicrosoftMvcJQueryValidation script, you’ve hooked up jQuery Validation to that metadata. Both of the validation adapter scripts look for the existence of the special array and consume the JSON within it.
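To make that hookup concrete, here’s an illustrative sketch of what an adapter conceptually does with the metadata. The property names Fields, FieldName and ValidationRules are hypothetical placeholders; only ErrorMessage and ValidationType appear in the snippet above, so check the actual emitted JSON for the real shape.

```javascript
// Illustrative only: walks a metadata array shaped roughly like the
// emitted JSON and pairs each field with the validation type it needs.
// The "Fields"/"FieldName"/"ValidationRules" names are assumptions.
function listRuleHookups(metadata) {
  var hookups = [];
  metadata.forEach(function (formEntry) {
    (formEntry.Fields || []).forEach(function (field) {
      (field.ValidationRules || []).forEach(function (rule) {
        hookups.push(field.FieldName + " -> " + rule.ValidationType);
      });
    });
  });
  return hookups;
}
```

A real adapter would use this sort of traversal to register each rule with jQuery Validation rather than just listing it.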

How about a demo?

And before I forget, here’s a demo application demonstrating the attribute described in this post.

Tags: aspnetmvc, validation, client validation


One of the upcoming new features being added to ASP.NET MVC 2 Beta is a little helper method called Html.RenderAction and its counterpart, Html.Action. This has been a part of our ASP.NET MVC Futures library for a while, but is now being added to the core product.

Both of these methods allow you to call into an action method from a view and output the results of the action in place within the view. The difference between the two is that Html.RenderAction will render the result directly to the Response (which is more efficient if the action returns a large amount of HTML) whereas Html.Action returns a string with the result.

For the sake of brevity, I’ll use the term RenderAction to refer to both of these methods. Here’s a quick look at how you might use this method. Suppose you have the following controller.

public class MyController : Controller {
  public ActionResult Index() {
    return View();
  }

  [ChildActionOnly]
  public ActionResult Menu() {
    var menu = GetMenuFromSomewhere();
    return PartialView(menu);
  }
}

The Menu action grabs the Menu model and returns a partial view with just the menu.

<%@ Control Inherits="System.Web.Mvc.ViewUserControl<Menu>" %>
<% foreach(var item in Model.MenuItem) { %>
  <li><%= item %></li>
<% } %>

In your Index.aspx view, you can now call into the Menu action to display the menu:

<%@ Page %>
  <%= Html.Action("Menu") %>
  <h1>Welcome to the Index View</h1>

Notice that the Menu action is marked with a ChildActionOnlyAttribute. This attribute indicates that the action should not be callable directly via the URL. However, it’s not required in order for an action to be callable via RenderAction.

We also added a new property to ControllerContext named IsChildAction. This lets you know whether the action method is being called via a RenderAction call or via the URL.

This is used by some of our action filters which should not run when applied to an action being called via RenderAction, such as AuthorizeAttribute and OutputCacheAttribute.

Passing Values With RenderAction

Because these methods are being used to call action methods much like an ASP.NET Request does, it’s possible to specify route values when calling RenderAction. What’s really cool about this is you can pass in complex objects.

For example, suppose we want to supply the menu with some options. We can define a new class, MenuOptions like so.

public class MenuOptions {
    public int Width { get; set; }
    public int Height { get; set; }
}

Next, we’ll change the Menu action method to accept this as a parameter.

public ActionResult Menu(MenuOptions options) {
    return PartialView(options);
}

And now we can pass in menu options from our action call in the view.

<%= Html.Action("Menu", 
  new { options = new MenuOptions { Width=400, Height=500} })%>

Cooperating With The ActionName Attribute

Another thing to note is that RenderAction honors the ActionName attribute when resolving the action. Thus if you annotate the action like so:

[ActionName("CoolMenu")]
public ActionResult Menu(MenuOptions options) {
    return PartialView(options);
}

You’ll need to make sure to use “CoolMenu” as the action name and not “Menu” when calling RenderAction.

Cooperating With Output Caching

Note that in previous previews of the RenderAction method, there was an issue where calling RenderAction to render an action method that had the OutputCache attribute would cause the whole view to be cached. We fixed that issue by changing the OutputCache attribute to not cache if it’s part of a child request.

If you want to output cache the portion of the page rendered by the call to RenderAction, you can use a technique I mentioned here where you place the call to RenderAction in a ViewUserControl which has its OutputCache directive set.
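In sketch form, that workaround looks something like this (the control contents, cache duration, and action name are illustrative):

```aspx
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<%@ OutputCache Duration="60" VaryByParam="None" %>
<% Html.RenderAction("Menu"); %>
```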


Let us know how this feature works for you. I think it could really help simplify some scenarios when composing a user interface from small parts.


This is the first in a series on ASP.NET MVC 2 Beta.

  1. ASP.NET MVC 2 Beta Released (Release Announcement)
  2. Html.RenderAction and Html.Action
  3. ASP.NET MVC 2 Custom Validation

Today at PDC09 (the keynote was streaming live), Bob Muglia announced the release of ASP.NET MVC 2 Beta. Feel free to download it right away! While you do that I want to present this public service message.


The Beta release includes tooling for Visual Studio 2008 SP1. We did not ship updated tooling for Visual Studio 2010 because ASP.NET MVC 2 is now included as a part of VS10, which is on its own schedule.

Unfortunately, because Visual Studio 2010 Beta 2 and ASP.NET MVC 2 Beta share components which are currently not in sync, running ASP.NET MVC 2 Beta on VS10 Beta 2 is not supported.

Here are some highlights of what’s new in ASP.NET MVC 2.

  • RenderAction (and Action)
  • AsyncController
  • Expression Based Helpers (TextBoxFor, TextAreaFor, etc.)
  • Client Validation Improvements (validation summary)
  • Add Area Dialog
  • Empty Project Template
  • And More!

Go Live

ASP.NET MVC 2 Beta also includes an explicit go-live clause within the EULA. You should make sure to read it; it has an interesting clause which references the operation of nuclear facilities, aircraft navigation, etc. ;)

More Details Please!

You can find more details about this release in the release notes. Also be on the lookout for one of ScottGu’s trademark blog posts covering what’s new.

I’ve started working on a series of blog posts where I will cover features of ASP.NET MVC 2 in more detail. I’ll start publishing these posts one at a time soon.

Next Stop: RC

Our next release is going to be the release candidate hopefully before the year’s end. The work from now to RC will consist almost solely of bug fixes with a few minor feature improvements and changes.

Please do play with the Beta. If you run into an issue that’s serious enough, there’s still time to consider changes for RC. Otherwise it will have to wait for ASP.NET MVC 3 which I’m just starting to think about.

I’m thinking, “man, I can’t believe I’m already thinking about version 3!”

Tags: aspnetmvc,


I learned something new yesterday about interface inheritance in .NET as compared to implementation inheritance. To illustrate this difference, here’s a simple demonstration.

I’ll start with two concrete classes, one which inherits from the other. Each class defines a property. In this case, we’re dealing with implementation inheritance.

public class Person {
    public string Name { get; set; }
}

public class SuperHero : Person {
    public string Alias { get; set; }
}

We can now use two different techniques to print out the properties of the SuperHero type: type descriptors and reflection. Here’s a little console app that does this. Note the code I’m showing below doesn’t include a few Console.WriteLine calls that I have in the actual app.

static void Main(string[] args) {
  // type descriptor
  var properties = TypeDescriptor.GetProperties(typeof(SuperHero));
  foreach (PropertyDescriptor property in properties) {
    // Console.WriteLine calls omitted
  }

  // reflection
  var reflectedProperties = typeof(SuperHero).GetProperties();
  foreach (var property in reflectedProperties) {
    // Console.WriteLine calls omitted
  }
}

Let’s look at the output of this code.


No surprises there.

The SuperHero type has two properties, Alias defined on SuperHero and the Name property inherited from its base type.

But now, let’s change these classes into interfaces so that we’re dealing with interface inheritance. Notice that ISuperHero now derives from IPerson.

public interface IPerson {
  string Name { get; set; }
}

public interface ISuperHero : IPerson {
  string Alias { get; set; }
}

I’ve also made the corresponding changes to the console app.

// type descriptor
var properties = TypeDescriptor.GetProperties(typeof(ISuperHero));
foreach (PropertyDescriptor property in properties) {
  // Console.WriteLine calls omitted
}

// reflection
var reflectedProperties = typeof(ISuperHero).GetProperties();
foreach (var property in reflectedProperties) {
  // Console.WriteLine calls omitted
}

Before looking at the next screenshot, take a moment to answer the question, what is the output of the program now?

Well, it should be obvious that the output is different; otherwise I wouldn’t be writing this blog post in the first place, right?

When I first tried this out, I found the behavior surprising. However, it’s probably not surprising to anyone who has an encyclopedic knowledge of the ECMA-335 Common Language Infrastructure specification (PDF) such as Levi, one of the ASP.NET MVC developers who pointed me to section 8.9.11 of the spec when I asked about this behavior:

8.9.11 Interface type derivation

Interface types can require the implementation of one or more other interfaces. Any type that implements support for an interface type shall also implement support for any required interfaces specified by that interface. This is different from object type inheritance in two ways:

  • Object types form a single inheritance tree; interface types do not.
  • Object type inheritance specifies how implementations are inherited; required interfaces do not, since interfaces do not define implementation. Required interfaces specify additional contracts that an implementing object type shall support.

To highlight the last difference, consider an interface, IFoo, that has a single method. An interface, IBar, which derives from it, is requiring that any object type that supports IBar also support IFoo. It does not say anything about which methods IBar itself will have.

The last paragraph provides a great example of why the code I wrote behaves as it does. The fact that ISuperHero inherits from IPerson doesn’t mean the ISuperHero interface type inherits the properties of IPerson because interfaces do not define implementation.

Rather, what it means is that any class that implements ISuperHero must also implement the IPerson interface. Thus if I wrote an implementation of ISuperHero such as:

public class Groo : ISuperHero {
  public string Name { get; set; }
  public string Alias { get; set; }
}

The Groo type must implement both ISuperHero and IPerson, and iterating over its properties would show both properties.
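If you ever need the flattened property list for an interface hierarchy yourself, you can walk the required interfaces explicitly. This is just a reflection sketch of that idea (my own example, not something the framework does for you):

```csharp
using System;
using System.Linq;

public interface IPerson {
    string Name { get; set; }
}

public interface ISuperHero : IPerson {
    string Alias { get; set; }
}

static class Program {
    static void Main() {
        // GetProperties on the interface type alone only sees Alias.
        var direct = typeof(ISuperHero).GetProperties().Select(p => p.Name);
        Console.WriteLine(string.Join(", ", direct.ToArray())); // Alias

        // To see the full contract, include the required interfaces too.
        var all = new[] { typeof(ISuperHero) }
            .Concat(typeof(ISuperHero).GetInterfaces())
            .SelectMany(t => t.GetProperties())
            .Select(p => p.Name)
            .Distinct()
            .OrderBy(n => n);
        Console.WriteLine(string.Join(", ", all.ToArray())); // Alias, Name
    }
}
```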

Implications for ASP.NET MVC Model Binding

You probably could have guessed this part was coming. Let’s say you’re trying to use model binding to bind the Name property of an ISuperHero. Since our model binder uses type descriptors under the hood, we won’t be able to bind that property for the reasons stated above.

I learned of this detail while investigating a bug reported on StackOverflow. It turns out this behavior is by design. In the context of sending a view model to the view, that view model should be a simple carrier of data. Thus it makes sense to use concrete types for your view model, in contrast to your domain models, which are more likely to be interface based.


I was stepping through some code in a debugger today and noticed a neat little feature of Visual Studio 2010 that I hadn’t noticed before.

When debugging, you can easily examine the value of a variable by highlighting it with your mouse. Nothing new there. But then I noticed a little pin next to it, which I’d never seen before.


So what do you do when you see a pin? You click on it!


As you might expect, that pins the quick watch in place. So now I hit the play button, continue running my app in the debugger, and the next time I hit that breakpoint:


I can clearly see the value changed since the last time. I think this may come in useful when walking through code as a way of seeing the values of important variables right next to where they are declared. I thought that was pretty neat.

This code has been incorporated into a new RouteMagic library I wrote, which includes source code as well as a NuGet package!

I saw a bug on Connect today in which someone offers the suggestion that the PageRouteHandler (new in ASP.NET 4) should handle IHttpHandler as well as Page.

I don’t really agree with the suggestion because while a Page is an IHttpHandler, an IHttpHandler is not a Page. What this person really wants is a new route handler specifically for http handlers. Let’s give it the tongue twisting name: IHttpHandlerRouteHandler.

Unfortunately, it’s too late to add this for ASP.NET 4, but it turns out such a thing is trivially easy to write. In fact, here it is.

public class HttpHandlerRouteHandler<THandler> 
    : IRouteHandler where THandler : IHttpHandler, new() {
  public IHttpHandler GetHttpHandler(RequestContext requestContext) {
    return new THandler();
  }
}
Of course, by itself it’s not all that useful. We need extension methods to make it really easy to register routes for http handlers. I wrote a set of those, but will only post two examples here on my blog. To get the full set download the sample project at the very end of this post.

public static class HttpHandlerExtensions {
  public static void MapHttpHandler<THandler>(this RouteCollection routes,
      string url) where THandler : IHttpHandler, new() {
    routes.MapHttpHandler<THandler>(null, url, null, null);
  }

  public static void MapHttpHandler<THandler>(this RouteCollection routes, 
      string name, string url, object defaults, object constraints) 
      where THandler : IHttpHandler, new() { 
    var route = new Route(url, new HttpHandlerRouteHandler<THandler>());
    route.Defaults = new RouteValueDictionary(defaults);
    route.Constraints = new RouteValueDictionary(constraints);
    routes.Add(name, route);
  }
}
This now allows me to register a route which is handled by an IHttpHandler very easily. In this case, I’m registering a route that will use my SampleHttpHandler to handle any two-segment URL.

public static void RegisterRoutes(RouteCollection routes) {
  routes.MapHttpHandler<SampleHttpHandler>("{foo}/{bar}");
}
And here’s the code for SampleHttpHandler for completeness. All it does is print out the route values.

public class SampleHttpHandler : IHttpHandler {
  public bool IsReusable {
    get { return false; }
  }

  public void ProcessRequest(HttpContext context) {
    var routeValues = context.Request.RequestContext.RouteData.Values;
    string message = "I saw foo='{0}' and bar='{1}'";
    message = string.Format(message, routeValues["foo"], routeValues["bar"]);
    context.Response.Write(message);
  }
}

When I make a request for /testing/yo I’ll see the message

I saw foo='testing' and bar='yo'

in my browser. Very cool.


One limitation here is that my http handler has to have a parameterless constructor. That’s not so bad a limitation, since registering an HTTP handler the old way also required the handler to have an empty constructor.

However, this code that I wrote for this blog post is based on code that I added to Subtext. In that code, I am passing an IKernel (I’m using Ninject) to my HttpRouteHandler. That way, my route handler will use Ninject to instantiate the http handler and thus my http handlers aren’t required to have a parameterless constructor.
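Here’s a sketch of that idea (my own reconstruction, not the actual Subtext code): let the container build the handler so constructor dependencies are injected and no parameterless constructor is required.

```csharp
using System.Web;
using System.Web.Routing;
using Ninject;

// Sketch: a route handler that resolves the http handler through
// the Ninject kernel instead of calling new THandler().
public class NinjectHttpHandlerRouteHandler<THandler> : IRouteHandler
    where THandler : IHttpHandler {
  private readonly IKernel _kernel;

  public NinjectHttpHandlerRouteHandler(IKernel kernel) {
    _kernel = kernel;
  }

  public IHttpHandler GetHttpHandler(RequestContext requestContext) {
    // The kernel satisfies THandler's constructor dependencies.
    return _kernel.Get<THandler>();
  }
}
```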

Try it out!

The RouteMagic solution includes a sample project that demonstrates all this.

This is the second in a three part series related to HTML encoding blocks, aka the <%: ... %> syntax.

In a recent blog post, I introduced ASP.NET 4’s new HTML Encoding code block syntax as well as the corresponding IHtmlString interface and HtmlString class. I also mentioned that ASP.NET MVC 2 would support this new syntax when running on ASP.NET 4.

In fact, you can try it out now by downloading and installing Visual Studio 2010 Beta 2.

I’ve also mentioned in the past that we are not conditionally compiling ASP.NET MVC 2 for each platform. Instead, we’re building System.Web.Mvc.dll against ASP.NET 3.5 SP1 and including that one assembly in both VS2008 and VS2010. Thus when you’re running ASP.NET MVC 2 on ASP.NET 4, it’s byte-for-byte the same assembly you would run on ASP.NET 3.5 SP1.

This fact ought to raise a question in your mind. If ASP.NET MVC 2 is built against ASP.NET 3.5 SP1, how the heck does it take advantage of the new HTML encoding blocks which require that you implement an interface introduced in ASP.NET 4?

The answer involves a tiny bit of voodoo black magic we’re doing in ASP.NET MVC 2.

We introduced a new type MvcHtmlString which is created via a factory method, MvcHtmlString.Create. When this method determines that it is being called from an ASP.NET 4 application, it uses Reflection.Emit to dynamically generate a derived type which implements IHtmlString.

If you look at the source code for ASP.NET MVC 2 Preview 2, you’ll see the following method call when we are instantiating an MvcHtmlString:

Type dynamicType = DynamicTypeGenerator.GenerateType("DynamicMvcHtmlString",
    typeof(MvcHtmlString), new Type[] { typeof(IHtmlString) });

Note that we’re using a new internal class, DynamicTypeGenerator, to generate a brand new type named DynamicMvcHtmlString. This type derives from MvcHtmlString and implements IHtmlString. We’ll return this instance instead of a standard MvcHtmlString when running on ASP.NET 4.

When running on ASP.NET 3.5 SP1, we simply new up an MvcHtmlString and return that, completely bypassing the Reflection.Emit logic. Note that we only generate this type once per AppDomain, so you only pay the Reflection.Emit cost once.

The code in DynamicTypeGenerator is standard Reflection.Emit stuff: at runtime it creates an assembly, adds this new type to it, and returns a lambda used to instantiate the new type. If you’ve never seen Reflection.Emit code, it’s worth a look.
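If you want a feel for the technique, here’s a self-contained sketch of emitting a subclass that picks up an interface from its base class. This is simplified stand-in code, not the actual MVC source; on .NET 3.5 you would create the assembly via AppDomain.CurrentDomain.DefineDynamicAssembly rather than the static AssemblyBuilder method used here.

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

// Stand-ins for the real ASP.NET types so the sketch compiles on its own.
public interface IHtmlString
{
    string ToHtmlString();
}

public class MvcHtmlString
{
    private readonly string _value;
    public MvcHtmlString(string value) { _value = value ?? String.Empty; }

    // Public (and here virtual) so the emitted subclass can satisfy
    // IHtmlString with this existing base-class method.
    public virtual string ToHtmlString() { return _value; }
    public override string ToString() { return _value; }
}

public static class DynamicTypeGeneratorSketch
{
    // Emits: public class {name} : {baseType}, {interfaces}
    // with a pass-through (string) constructor.
    public static Type GenerateType(string name, Type baseType, Type[] interfaces)
    {
        var asmBuilder = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("DynamicSketch"), AssemblyBuilderAccess.Run);
        var module = asmBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = module.DefineType(
            name, TypeAttributes.Public, baseType, interfaces);

        // ctor(string value) : base(value)
        var baseCtor = baseType.GetConstructor(new[] { typeof(string) });
        var ctor = typeBuilder.DefineConstructor(
            MethodAttributes.Public, CallingConventions.Standard,
            new[] { typeof(string) });
        var il = ctor.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldarg_1);
        il.Emit(OpCodes.Call, baseCtor);
        il.Emit(OpCodes.Ret);

        return typeBuilder.CreateType();
    }
}
```

A factory method can then cache the generated Type once per AppDomain and instantiate it with Activator.CreateInstance or a compiled lambda.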

In general, we really frown on this sort of “tricky” code as it’s often hard to maintain and a potential bug magnet. For example, since System.Web.Mvc.dll is security transparent, we needed to make sure that the assembly we generate is marked with the SecurityTransparentAttribute. This is something that would be easy to overlook until you start testing in medium trust scenarios.

However, in this case, the type we’re generating is very small and very simple. Not only that, we only need to keep this code for one version of ASP.NET MVC. ASP.NET MVC 3 will be compiled against ASP.NET 4 only (no support for ASP.NET 3.5 planned) and we’ll be able to remove this “clever” code and have much more straightforward code. I’m looking forward to that. :)

In any case, the point of this post was to fulfill a promise I made in an earlier post where I said I’d give some more details on how ASP.NET MVC 2 works with the new Html encoding block feature.

This is all behind-the-scenes detail that’s not necessary to understand to use ASP.NET MVC, but might be interesting to some of you, especially those who ever find themselves in a situation where you need to support forward compatibility.

Have you ever needed to quickly spawn a web server against a local folder to preview a web application? If not, what would you say you do here?

This is actually quite common for me since I receive a lot of zip files containing web applications which reproduce a bug. After I unzip the repro, I need a way to quickly point a web server at the folder and run the web site.

A while back I wrote about a useful registry hack to do just this. It adds a right click menu to start a web server (Cassini) pointing to any folder. This was based on a shell extension by Robert McLaws.

Well that was soooo 2008. It’s almost 2010 and Visual Studio 2010 Beta 2 is out which means it’s time to update this shell extension to run an ASP.NET 4 web server.


Obviously this is not rocket science as I merely copied my old settings and updated the paths. But if you’re too lazy to look up the new file paths, you can just copy these settings (changes are in bold).

32 bit (x86)

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer]
@="ASP.NET 4 Web Server Here"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer\command]
@="C:\\Program Files\\Common Files\\microsoft shared\\DevServer\\10.0\\Webdev.WebServer40.exe /port:8081 /path:\"%1\""

64 bit (x64)

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer]
@="ASP.NET 4 Web Server Here"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer\command]
@="C:\\Program Files (x86)\\Common Files\\microsoft shared\\DevServer\\10.0\\Webdev.WebServer40.exe /port:8081 /path:\"%1\""

I chose a different port and name for this shell extension so that it lives side-by-side with my other one.

Of course, you don’t even need to copy these settings from this blog post since I’ve conveniently zipped up .reg files you can run.

You probably don’t need me to tell you that Visual Studio 2010 Beta 2 has been released as it’s been blogged to death all over the place. Definitely check out the many blog posts out there if you want more details on what’s included.

This post will focus more on what Visual Studio 2010 means to ASP.NET MVC and vice versa.

Important: If you installed ASP.NET MVC for Visual Studio 2010 Beta 1, make sure to uninstall it (and VS10 Beta 1) before installing Beta 2.

In the box baby!

Well one of the first things you’ll notice is that ASP.NET MVC 2 Preview 2 is included in VS10 Beta 2. When you select the File | New menu option, you’ll be greeted with an ASP.NET MVC 2 project template option under the Web node.


Note that when you create your ASP.NET MVC 2 project with Visual Studio 2010, you can choose whether you wish to target ASP.NET 3.5 or ASP.NET 4.


If you choose to target ASP.NET 4, you’ll be able to take advantage of the new HTML encoding code blocks with ASP.NET MVC which I wrote about earlier.

As an aside, you might find it interesting that the System.Web.Mvc.dll assembly we shipped in VS10 is the exact same binary we shipped out-of-band for VS2008 and .NET 3.5. How then does that assembly implement an interface that is new in ASP.NET 4? That’s a subject for another blog post.

What about ASP.NET MVC 1.0?

Unfortunately, we have no plans to support ASP.NET MVC 1.0 tooling in Visual Studio 2010. When we were going through planning, we realized it would’ve taken a lot of work to update our 1.0 project templates. We felt that time would be better spent focused on ASP.NET MVC 2.

However, that doesn’t mean you can’t develop an ASP.NET MVC 1.0 application with Visual Studio 2010! All it means is you’ll have to do so without the nice ASP.NET MVC specific tooling such as the Add Controller and Add View dialogs. After all, at its core, an ASP.NET MVC project is a Web Application Project.

Eilon Lipton, the lead dev for ASP.NET MVC, wrote a blog post a while back describing how to open an ASP.NET MVC project without having ASP.NET MVC installed. All it requires is for you to edit the .csproj file and remove the following GUID from the <ProjectTypeGuids> element.


Once you do that, you’ll be able to open, code, and debug your project from VS10.
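For reference, the element lives near the top of the .csproj file and looks something like this after the edit. The two GUIDs that remain are the standard Web Application Project and C# project type GUIDs; the MVC-specific GUID (elided above) is the one you delete.

```xml
<!-- Before the edit there are three GUIDs; remove the ASP.NET MVC one. -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
```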

Upgrading ASP.NET MVC 1.0 to ASP.NET MVC 2

Another option is to upgrade your ASP.NET MVC 1.0 application to ASP.NET MVC 2 and then open the upgraded project with Visual Studio 2010 Beta 2.

Eilon has your back again as he’s written a handy little tool for upgrading existing ASP.NET MVC 1.0 applications to version 2.

After using this tool, your project will still be a Visual Studio 2008 project. But you can then open it with VS10 and it knows how to open and upgrade the project to be a VS10 project.

What about automatic upgrades?

We are investigating implementing a more automatic process for upgrading ASP.NET MVC 1.0 applications to ASP.NET MVC 2 when you try to open the existing project in Visual Studio 2010. We plan to have something in place by the RTM of VS10.

Ideally, when you try to open an ASP.NET MVC 1.0 project, instead of showing an error dialog, VS10 will provide a wizard to upgrade the project which will be somewhat based on the sample Eilon provided. So be sure to supply feedback on his wizard soon!

Tags: aspnetmvc, visual studio 2010, visual studio

Despite what your well intentioned elementary school teachers would have liked you to believe, there is such a thing as a stupid question, and you probably get them all the time via email or IM.

You also know that in half the time it takes to type the question, the person pestering you could have typed the query in their favorite search engine and received an answer immediately.

Let me Google that for you addressed this little annoyance by providing a passive aggressive means to tell annoying question askers to bugger off while at the same time teaching them the power of using a search engine to help themselves.

When I first heard about Microsoft’s new search engine, Bing, I jumped at purchasing the domain name (though I was remiss in not also registering as well. If you own that domain, may I buy it off of you?)

Unfortunately, being way too busy, I left the domain name unused and gathering dust until I put out a call on Twitter for help. Not long after, Maarten Balliauw and Juliën Hanssens answered the call and put together the bulk of this ASP.NET MVC application using jQuery and jQuery UI.

I really like what they did in that the site’s background image changes daily to match the one on Bing. I finally found some time to review the code, do a bit of clean-up, fix some minor issues, and test it, so I am now ready to deploy it.

Keep in mind that even though I’m employed by Microsoft, this site is a pet project I’m doing on the side in collaboration with Maarten and Juliën and is not associated with Microsoft or Bing in any official capacity. We’re just some folks doing this for fun.

Now go try it out and release your inner snarkiness.


A little while ago, Scott Guthrie announced the launch of the Microsoft Ajax CDN. In his post he talked about how ASP.NET 4 will have support for the CDN as well as the list of scripts that are included.

The good news today is due to the hard work of Stephen Walther and the ASP.NET Ajax team, they’ve added a couple of new scripts to the CDN which are near and dear to my heart, the ASP.NET MVC 1.0 scripts. The following code snippet shows how you can start using them today.

<script src=""></script>
<script src=""></script>

Debug versions are also available on the CDN.

<script src=""></script>
<script src=""></script>

As ScottGu wrote,

The Microsoft AJAX CDN makes it really easy to add the jQuery and ASP.NET AJAX script libraries to your web sites, and have them be automatically served from one of our thousands of geo-located edge-cache servers around the world.

We currently don’t have the ASP.NET MVC 2 scripts available on the CDN, but that’s something we can consider as we get closer and closer to RTM.


If you’re a manufacturing plant, one way to maximize profit is to keep costs as low as possible. One way to do that is to cut corners. Go ahead and dump that toxic waste into the river and pollute the heck out of the air with your smoke stacks. These options are much cheaper than installing smoke scrubbers or trucking waste to proper disposal sites.


Of course, economists have long known that this does not paint the entire picture. Taking these shortcuts incurs other costs; it’s just that these costs are not borne by the manufacturing plant. The term externalities describes such spillover costs.

In economics an externality or spillover of an economic transaction is an impact on a party that is not directly involved in the transaction. In such a case, prices do not reflect the full costs or benefits in production or consumption of a product or service.

Thus the full cost of manufacturing includes the hospital bills of those who get sick by drinking the tainted water, the cost of the crops damaged by the acid rain, etc.

Software is the same way. I got to thinking about this after reading Ted’s latest post, Agile is treating the symptoms, not the disease, in which he disparages the complexity that Agile introduces and holds up Access as one example of a great “simple” way to develop applications.

I agree that Access is great when you’re building a little database to track Billy’s baseball cards. However, the real world doesn’t stay that simple. As the second law of thermodynamics states (paraphrasing here), entropy tends to increase over time, which is something that Ted doesn’t address in his discussion.

I’m all for simplicity in our tools and methodologies as I think we still have a lot of room for improvement in reducing accidental complexity. Unfortunately, the business processes for which we build software are not simple at all and are full of inherent complexity. Oh sure, they may start off as a simple Access database, but they never stay that simple. Every business I’ve ever interacted with had very complex sets of business processes, some seemingly cargo-cultish in origin.

Ted mentions friends of his who’ve made a healthy living using simple tools to build simple line-of-business apps for customers. And I’m sure they did a fine job of it. But I also made a healthy living in the past coming in to clean up the externalities left by such applications.

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and the difficulty of changing the monolithic procedural application code.

I also remember helping a teachers union who started off with a simple attendance tracker style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed Excel spreadsheets to paper, then manually entered the data into another application. I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place? I don’t engage in developing custom school curricula. An even simpler option is to buy some off-the-shelf software or simply use a wiki, but I digress.

These were apps that would make The Daily WTF look like paragons of good software development in comparison.

The core problem here is that while it’s fine to push for simpler tools to reduce accidental complexity, at one point or another we are going to have to deal with inherent complexity caused by entropy. Business processes are inherently complex, usually more than they need to be, and this is not a problem that will be solved by any software. Most are not only inherently complex, but chock-full of accidental complexity as well. Your line of business app won’t solve that. It takes systemic change in the organization to make that happen.

Not only that, but business processes get more complex over time as entropy sets in. The applications I mentioned dealt with this entropy and reached a point where the current solution could not scale to meet that new level of complexity (a different sort of scaling up), so they started to drown in it, the original authors of the applications long gone off to create new apps with new externalities.

Fortunately, the externalities of these applications didn’t cause cancer, but rather kept guys like me employed. Of course, it was a negative externality for the company who kept pumping cash to fix these applications.

Ted paraphrases Billy suggesting that Agile requires even more complex tools, story cards, continuous integration servers, etc. This is an unfair characterization and misses the point of Agile. Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application.

A higher principle of agile is YAGNI (You Ain’t Gonna Need It): don’t build it until you need it. For example, the 1 to 2 guys in a garage probably don’t need a continuous integration server, stand up meetings, etc., and any real agilist worth his or her salt would recognize that and not try to force unnecessary procedures on a team that doesn’t need them.

However, as the two garage dwellers start to grow the business and need to coordinate with more developers, such tools come in handy. As you grow a team beyond two people, the lines of communication grow quadratically (n(n-1)/2 lines for n people), thus creating inherent complexity. Looking at the cost of developing and maintaining an application over time is where you start to get a full picture of the true cost of building an application.

As Robert Glass pointed out in Facts and Fallacies of Software Engineering, research shows that maintenance typically consumes from 40 to 80 percent of software costs, typically making it the dominant life cycle phase of a software project. Thus these so called “simple” solutions need to factor that in, or the customers will continually be left with the clean-up duty while the polluters have long since moved on.


This morning at 3:17 AM, Mia Yokoyama Haack was born weighing in at 7lb 8.5 oz. Now my world domination crew is complete!

Mia is a fast little one as labor started around 11 PM and she was delivered only four hours later!

This time around, we did a water birth at a birthing center which involves the momma sitting in a big tub for the last part of labor and delivery, which made for a much more comfortable experience than last time. I think she’d definitely recommend it.


We were back home by 6:30 AM which just amazes me. Momma and Baby are doing well. I’m still getting over my cold, but I think the adrenaline of the whole experience helped a lot. :)


Today we just released ASP.NET MVC 2 Preview 2 for Visual Studio 2008 SP1 (and ASP.NET 3.5 SP1), which builds on top of the work we did in Preview 1 released two months ago.

Some of the cool new features we’ve added to Preview 2 include:

  • Client-Side Validation – ASP.NET MVC 2 includes the jQuery validation library to provide client-side validation based on the model’s validation metadata. It is possible to hook in alternative client-side validation libraries by writing an adapter which adapts the client library to the JSON metadata in a manner similar to the xVal validation framework.
  • Areas – Preview 2 includes in-the-box support for single project areas for developers who wish to organize their application without requiring multiple projects. Registration of areas has also been streamlined.
  • Model Validation Providers - allow hooking in alternative validation logic to provide validation during model binding. The default validation provider uses Data Annotations.
  • Metadata Providers - allow hooking in alternative sources of metadata for model objects. The default metadata provider uses Data Annotations.

Based on this list, you’ll notice a theme: whereas in Preview 1 we tied much functionality directly to Data Annotation attributes, in Preview 2 we introduced abstractions around our usage of Data Annotations, which allow hooking in custom implementations of validation and metadata providers.

This will allow you to do things like swap out our default validation with the Enterprise Library Validation Block, for example. With a bit of work, it also allows providing implementations where model metadata is stored somewhere other than in attributes.
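To make the shape of that extensibility concrete, here’s a tiny self-contained sketch of the provider idea. The type names are modeled loosely on MVC 2’s ModelValidatorProvider rather than taken from it; the Data Annotations calls, however, are the real System.ComponentModel.DataAnnotations API.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

// The abstraction the framework binds against.
public abstract class ValidatorProviderSketch
{
    public abstract IEnumerable<string> Validate(object model);
}

// Default implementation: evaluate [Required], [Range], etc.
// via the Data Annotations Validator.
public class DataAnnotationsValidatorSketch : ValidatorProviderSketch
{
    public override IEnumerable<string> Validate(object model)
    {
        var context = new ValidationContext(model, null, null);
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(model, context, results, true);
        return results.Select(r => r.ErrorMessage);
    }
}

public class Person
{
    [Required(ErrorMessage = "First name is required.")]
    public string FirstName { get; set; }
}

public static class ValidationRegistry
{
    // Swap this out for, say, an Enterprise Library-backed provider
    // without touching the code that calls Validate.
    public static ValidatorProviderSketch Current =
        new DataAnnotationsValidatorSketch();
}
```

The framework only ever talks to the abstract provider, so the source of the rules (attributes, XML, a database) becomes an implementation detail.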

What About Visual Studio 2010?

The tools for this particular release only work in Visual Studio 2008 SP1. The version of ASP.NET MVC 2 Preview 2 for Visual Studio 2010 will be released in-the-box with Visual Studio 2010 Beta 2. You won’t need to go anywhere else, it’ll just be there waiting for you. Likewise, the RTM of ASP.NET MVC 2 will be included with the RTM of Visual Studio 2010.

Therefore, if you want to try out the new HTML encoding code blocks with ASP.NET MVC 2 Preview 2, you’ll have to wait till Visual Studio 2010 Beta 2 is released. But for now, you can try out Preview 2 on VS 2008 and start providing feedback.


UPDATE: For a better approach, check out MoQ Sequences Revisited.

One area where using MoQ is confusing is when mocking successive calls to the same method of an object.

For example, I was writing some tests for legacy code where I needed to fake out multiple calls to a data reader. You remember data readers, don’t you?

Here’s a snippet of the code I was testing. Ignore the map method and focus on the call to reader.Read.

while (reader.Read()) {
  yield return map(reader);
}

Notice that there are multiple calls to reader.Read. The first couple times, I wanted Read to return true. The last time, it should return false. And here’s the code I hoped to write to fake this using MoQ:

reader.Setup(r => r.Read()).Returns(true);
reader.Setup(r => r.Read()).Returns(true);
reader.Setup(r => r.Read()).Returns(false);

Unfortunately, MoQ doesn’t work that way. The last call wins and nullifies the previous two calls. Fortunately, there are many overloads of the Returns method, some of which accept functions used to return the value when the method is called.

That’s the approach I found on Matt Hamilton’s blog post (Mad Props indeed!) where he describes his clever solution to this issue involving a Queue:

var pq = new Queue<IDbDataParameter>(new[] { param1, param2 }); // queued in the order they should be returned
mockCommand.Expect(c => c.CreateParameter()).Returns(() => pq.Dequeue());

Each time the method is called, it will return the next value in the queue.

One cool thing I stumbled on is that the syntax can be made even cleaner and more succinct by passing in a method group. Here’s my MoQ code for the original IDataReader issue I mentioned above.

var reader = new Mock<IDataReader>();
reader.Setup(r => r.Read())
  .Returns(new Queue<bool>(new[] { true, true, false }).Dequeue);

I’m defining a Queue inline and then passing what is effectively a pointer to its Dequeue method. Notice the lack of parentheses at the end of Dequeue which is how you can tell that I’m passing the method itself and not the result of the method.

Using this approach, MoQ will call Dequeue each time it calls r.Read(), grabbing the next value from the queue. Thanks to Matt for posting his solution! This is a great technique for dealing with sequences using MoQ.

UPDATE: There’s a great discussion in the comments to this post. Fredrik Kalseth proposed an extension method to make this pattern even simpler to apply and much more understandable. Why didn’t I think of this?! Here’s the extension method he proposed (but renamed to the name that Matt proposed because I like it better).

public static class MoqExtensions
{
  public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup,
    params TResult[] results) where T : class
  {
    setup.Returns(new Queue<TResult>(results).Dequeue);
  }
}
Now with this extension method, I can rewrite my above test to be even more readable.

var reader = new Mock<IDataReader>();
reader.Setup(r => r.Read()).ReturnsInOrder(true, true, false);

In the words of Borat, Very Nice!

Tags: TDD, unit testing, MoQ

This is the first in a three part series related to HTML encoding blocks, aka the <%: ... %> syntax.

One great new feature being introduced in ASP.NET 4 is a new code block (often called a Code Nugget by members of the Visual Web Developer team) syntax which provides a convenient means to HTML encode output in an ASPX page or view.

<%: CodeExpression %>

I often tell people it’s <%= but with the = seen from the front.

Let’s look at an example of how this might be used in an ASP.NET MVC view. Suppose you have a form which allows the user to submit their first and last name. After submitting the form, the same view is used to display the submitted values.

First Name: <%: Model.FirstName %>
Last Name: <%: Model.LastName %>

<form method="post">
  <%: Html.TextBox("FirstName") %>
  <%: Html.TextBox("LastName") %>
</form>

By using the new syntax, Model.FirstName and Model.LastName are properly HTML encoded, which helps in mitigating Cross Site Scripting (XSS) attacks.

Expressing Intent with the new IHtmlString interface

If you’re paying close attention, you might be asking yourself “Html.TextBox is supposed to return HTML that is already sanitized. Wouldn’t using this syntax with Html.TextBox cause double encoding?”

ASP.NET 4 also introduces a new interface, IHtmlString along with a default implementation, HtmlString. Any method that returns a value that implements the IHtmlString interface will not get encoded by this new syntax.

In ASP.NET MVC 2, all helpers which return HTML now take advantage of this new interface, which means that when you’re writing a view, you can simply use this new syntax all the time and it will just work. By adopting this habit, you’ve effectively changed the act of HTML encoding from an opt-in model to an opt-out model.
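Conceptually, the encode-or-pass-through decision behind <%: ... %> boils down to something like the following. This is a self-contained sketch with stand-in types and a toy encoder, not the actual ASP.NET source (the real page uses HttpUtility.HtmlEncode).

```csharp
using System;

// Stand-ins for the ASP.NET 4 types.
public interface IHtmlString
{
    string ToHtmlString();
}

public class HtmlString : IHtmlString
{
    private readonly string _value;
    public HtmlString(string value) { _value = value; }
    public string ToHtmlString() { return _value; }
}

public static class CodeNugget
{
    // Roughly what <%: expr %> does: encode the value, unless the
    // value declares "I'm already safe HTML" via IHtmlString.
    public static string Write(object value)
    {
        var html = value as IHtmlString;
        if (html != null)
            return html.ToHtmlString();

        // Toy encoder for the sketch only; order matters ('&' first).
        return (value ?? "").ToString()
            .Replace("&", "&amp;")
            .Replace("<", "&lt;")
            .Replace(">", "&gt;")
            .Replace("\"", "&quot;");
    }
}
```

So a helper that returns HtmlString (or any IHtmlString) skips re-encoding, which is why <%: Html.TextBox("FirstName") %> doesn’t double-encode.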

The Goals

There were four primary goals we wanted to satisfy with the new syntax.

  1. Obvious at a glance. When you look at a page or a view, it should be immediately obvious which code blocks are HTML encoded and which are not. You shouldn’t have to refer back to flags in web.config or the page directive (which could turn encoding on or off) to figure out whether the code is actually being encoded. Also, it’s not uncommon to review code changes via check-in emails which only show a DIFF. This is one reason we didn’t reuse existing syntax.

    Not only that, code review becomes a bit easier with this new syntax. For example, it would be easy to do a global search for <%= in a code base and review those lines with more scrutiny (though we hope there won’t be any to review). Also, when you receive a check-in email which shows a DIFF, you have most of the context you need to review that code.

  2. Evokes a similar meaning to <%=. We could have used something entirely new, but we didn’t have the time to drastically change the syntax. We also wanted something that had a similar feel to <%= which evokes the sense that it’s related to output. Yeah, it’s a bit touchy feely and arbitrary, but I think it helps people feel immediately familiar with the syntax.

  3. Replaces the old syntax and allows developers to show their intent. One issue with the current implementation of output code blocks is that there’s no way for developers to indicate that a method is returning already sanitized HTML. Having this in place helps enable our goal of completely replacing the old syntax with this new syntax in practice.

    This also means we need to work hard to make sure all new samples, books, blog posts, etc. eventually use the new syntax when targeting ASP.NET 4.

    Hopefully, the next generation of ASP.NET developers will experience this as being the default output code block syntax and <%= will just be a bad memory for us old-timers like punch cards, manual memory allocations, and Do While Not rs.EOF.

  4. Make it easy to migrate from ASP.NET 3.5. We strongly considered just changing the existing <%= syntax to encode by default. We eventually decided against this for several reasons, some of which are listed in the above goals. Doing so would make it tricky and painful to upgrade an existing application from earlier versions of ASP.NET.

    Also, we didn’t want to impose an additional burden for those who already do practice good encoding. For those who don’t already practice good encoding, this additional burden might prevent them from porting their app and thus they wouldn’t get the benefit anyways.

When Can I Use This?

This is a new feature of ASP.NET 4. If you’re developing on ASP.NET 3.5, you will have to continue to use the existing <%= syntax and remember to encode the output yourself.

In ASP.NET 4 Beta 2, you will have the ability to try this out yourself with ASP.NET MVC 2 Preview 2. If you’re running on ASP.NET 3.5, you’ll have to use the old syntax.

What about ASP.NET MVC 2?

As mentioned, ASP.NET MVC 2 supports this new syntax in its helper when running on ASP.NET 4.

In order to make this possible, we are making a breaking change such that the relevant helper methods (ones that return HTML as a string) will return a type that implements IHtmlString.

In a follow-up blog post, I’ll write about the specifics of that change. It was an interesting challenge given that IHtmlString is new to ASP.NET 4, but ASP.NET MVC 2 is actually compiled against ASP.NET 3.5 SP1. :)


In my last post, I presented a general overview of the CodePlex foundation and talked a bit about what it means to the .NET OSS developer, admittedly without much in the way of details. I plan to fix some of that in this post.

Before I continue, I encourage you to read Scott Bellware’s great analysis of the CodePlex foundation which covers some of the points I planned to make (making my life easier). It’s a must-read to better understand the potential and opportunity presented by the foundation.

There’s one particular point he makes which I’d like to expound upon.

The CodePlex Foundation will bring influential open source projects under its auspices. The details aren’t clear yet, but it’s reasonable to assume that the foundation will support its projects the way that other software foundations support their projects, with protection for these projects as they are used in corporate and commercial contexts and who knows, maybe even some financial support will be part of the deal.

I talked to Bill Staples recently and he pointed out that The Apache Foundation is one source (among many) of inspiration for the CodePlex Foundation. If you go to the Apache FAQ, you’ll find the answer to the following question, “How does the ASF help its projects?” (emphasis mine)

As a corporate entity, the Apache Software Foundation is able to be a party to contracts, such as for technical services or guarantee-bonds for conferences. It can also accept donations on behalf of its projects, clarifying the associated tax issues, and create additional self-funded services via community-building activities, such as Apache-related T-shirts and user conferences.

In addition, the Foundation provides a framework for limiting the legal exposure of individual volunteers while they work on behalf of one of the ASF projects. In the past, these volunteers have been personally vulnerable to lawsuits, whether legitimate or frivolous, which impaired many activities that might have significantly improved contributions to the projects and benefited our users.

The first paragraph is what I alluded to in my last post, and this is something that the CodePlex Foundation would like to do in the long run, but as I mentioned before, it all depends on the level of participation and sponsorship funding. In an ideal world, the foundation would be able to add some level of funding of projects to this list of benefits for a member project.

The second paragraph is something that the CodePlex Foundation definitely wants to do right off the bat.

This is great news for those of us hosting open source projects. It’s generally not a worry for many small .NET open source projects, but the risk is always there that if a project starts to get noticed, some company may come along and sue the project owner for patent infringement etc. Typical projects may not have any money to go after, but I can imagine a commercial company going after a competing OSS product simply to shutter it.

Assigning your project’s copyright to the CodePlex Foundation would afford some level of legal protection against this sort of thing, similar to the way it works with the Apache Foundation.

One nice thing about the CodePlex Foundation is you have the option to assign copyright to the foundation or license your code to the foundation. I’m not a lawyer so I don’t understand if one provides more legal protection than the other. Honestly, once the foundation starts accepting projects at large, I would want to assign Subtext’s copyright over so that my name doesn’t appear as the big red bulls-eye in the Subtext copyright notice! ;)

And if you’re wondering, “am I losing control over my project by assigning copyright over?”, you are not. As I wrote in my post Who Owns The Copyright For An Open Source Project (part of my series called the Developer’s Guide To Copyright Law), you’d be assigning it under the open source license of your choice (yes, the CodePlex Foundation is more or less license agnostic; it doesn’t require a specific license to join), which always gives you the freedom to fork it should the foundation suddenly be overtaken by evil Ninjas.

As I said before, many of these details are still being hashed out and I’m guessing some of them won’t be finalized until the final board of directors is in place. But in the meanwhile, I think understanding the sources of inspiration for this new foundation will help provide insight into the direction it may take.

I hope this provides more concrete details than my last post.


UPDATE: Be sure to read my follow-up post on this topic as well.

Yesterday, Microsoft announced some exciting news about the formation of the CodePlex Foundation (not to be confused with the CodePlex project hosting website, despite the unfortunately similar name) whose mission is to “enable the exchange of code and understanding among software companies and open source communities”.


This is a 501(c)(6) organization completely independent of Microsoft. For example, search the by-laws for mentions of Microsoft and you’ll find zero. Zilch.

One thing to keep in mind about this organization is that it’s very early in its formation. There was debate on trying to hash out all the details first and perhaps announcing the project some time further in the future, but that sort of goes against the open source ethos. As the main website states (emphasis mine):

We don’t have it all figured out yet. We know that commercial software developers are under-represented on open source projects. We know that commercial software companies face very specific challenges in determining how to engage with open source communities. We know that there are misunderstandings on both sides. Our aim is to advance the IT industry for both commercial software companies and open source communities by helping to meet these challenges.

Meeting these challenges is a collaborative process. We want your participation.

I’m personally excited about this as I’ve been a proponent of open source on the Microsoft stack for a long time and have called for Microsoft to get more involved in the past. I remember, way back when, Scott Hanselman suggested Microsoft form an INETA-like organization for open source as an editorial aside in his post on NDoc.

How does it benefit .NET OSS projects?

However, all is not roses just yet. If you read the mission statement carefully, it’s a very broad statement. In fact, it’s not specific to the Microsoft open source ecosystem, though obviously Microsoft will benefit from the mission statement being carried out.

If you look at it from Microsoft’s perspective, there are many legal and other challenges to participating in open source more fully. While Microsoft has contributed to Linux, collaborated closely with PHP, and so on, each engagement presents a unique set of challenges.

If the foundation succeeds in its mission, I believe it will open the doors for Microsoft to collaborate with and encourage the .NET open source ecosystem in a more meaningful manner. I don’t know what shape that will take in the end, but I believe that removing roadblocks to Microsoft’s participation is required and a great first step.

I’m honored to serve as an advisor to the board. In our first advisory board conference call, the first question I asked was, “what does this mean for those running open source projects on the .NET platform?” After all, while I’m a Microsoft employee by day, I also run an open source project at night and I have my own motivations as such.

I’m happy to see the mission statement take such a broad stance as it seems to be focused on the greater good and not focused on Microsoft specifically, but I am personally interested in seeing more details on why this is good for the open source developer who runs a project on the .NET platform. For example, can the foundation provide something more than moral support to .NET OSS projects such as MSDN licenses or more direct funding?

These are all interesting questions and I don’t know the answers. Microsoft put some skin in the game by seeding the foundation with a million dollars for the first year. The foundation, as an independent organization, will be looking for more sponsors to also pony up money. They will have to find the right balance in how they spend that money so that they can continue to operate. I imagine the answers to these questions will depend on how successful they are in finding sponsors and operating within their budget. As an advisor, I’ll be pushing for more clarity around this.

The full details for what the foundation will do are still being hashed out. The interim board has 100 days to choose a more permanent board of directors. Now is the time to get involved if you want to help make sure it continues in the right direction.