
This is the first in a series on ASP.NET MVC 2 Beta

  1. ASP.NET MVC 2 Beta Released (Release Announcement)
  2. Html.RenderAction and Html.Action
  3. ASP.NET MVC 2 Custom Validation

Today at PDC09 (the keynote was streaming live), Bob Muglia announced the release of ASP.NET MVC 2 Beta. Feel free to download it right away! While you do that I want to present this public service message.


The Beta release includes tooling for Visual Studio 2008 SP1. We did not ship updated tooling for Visual Studio 2010 because ASP.NET MVC 2 is now included as a part of VS10, which is on its own schedule.

Unfortunately, because Visual Studio 2010 Beta 2 and ASP.NET MVC 2 Beta share components which are currently not in sync, running ASP.NET MVC 2 Beta on VS10 Beta 2 is not supported.

Here are some highlights of what’s new in ASP.NET MVC 2.

  • RenderAction (and Action)
  • AsyncController
  • Expression Based Helpers (TextBoxFor, TextAreaFor, etc.)
  • Client Validation Improvements (validation summary)
  • Add Area Dialog
  • Empty Project Template
  • And More!

Go Live

ASP.NET MVC 2 Beta also includes an explicit go-live clause within the EULA. Make sure to read it; it has an interesting clause which references the operation of nuclear facilities, aircraft navigation, etc. ;)

More Details Please!

You can find more details about this release in the release notes. Also be on the look out for one of ScottGu’s trademark blog posts covering what’s new

I’ve started working on a series of blog posts where I will cover features of ASP.NET MVC 2 in more detail. I’ll start publishing these posts one at a time soon.

Next Stop: RC

Our next release is going to be the release candidate hopefully before the year’s end. The work from now to RC will consist almost solely of bug fixes with a few minor feature improvements and changes.

Please do play with the Beta. If you run into an issue that’s serious enough, there’s still time to consider changes for RC. Otherwise it will have to wait for ASP.NET MVC 3 which I’m just starting to think about.

I’m thinking, “man, I can’t believe I’m already thinking about version 3!”

Tags: aspnetmvc


I learned something new yesterday about interface inheritance in .NET as compared to implementation inheritance. To illustrate this difference, here’s a simple demonstration.

I’ll start with two concrete classes, one which inherits from the other. Each class defines a property. In this case, we’re dealing with implementation inheritance.

public class Person {
    public string Name { get; set; }
}

public class SuperHero : Person {
    public string Alias { get; set; }
}

We can now use two different techniques to print out the properties of the SuperHero type: type descriptors and reflection. Here’s a little console app that does this. Note the code I’m showing below doesn’t include a few Console.WriteLine calls that I have in the actual app.

static void Main(string[] args) {
  // type descriptor
  var properties = TypeDescriptor.GetProperties(typeof(SuperHero));
  foreach (PropertyDescriptor property in properties) {
    // Console.WriteLine call omitted
  }

  // reflection
  var reflectedProperties = typeof(SuperHero).GetProperties();
  foreach (var property in reflectedProperties) {
    // Console.WriteLine call omitted
  }
}

Let’s look at the output of this code.


No surprises there.

The SuperHero type has two properties, Alias defined on SuperHero and the Name property inherited from its base type.
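For concreteness, here’s a hedged, self-contained version of that console app with the Console.WriteLine calls filled back in; both approaches report the same two property names:

```csharp
using System;
using System.ComponentModel;
using System.Linq;

public class Person {
    public string Name { get; set; }
}

public class SuperHero : Person {
    public string Alias { get; set; }
}

public static class Program {
    public static void Main() {
        // Type descriptor: walks the full implementation inheritance chain.
        var descriptorNames = TypeDescriptor.GetProperties(typeof(SuperHero))
            .Cast<PropertyDescriptor>()
            .Select(p => p.Name)
            .OrderBy(n => n);

        // Reflection: same result for concrete classes.
        var reflectedNames = typeof(SuperHero).GetProperties()
            .Select(p => p.Name)
            .OrderBy(n => n);

        Console.WriteLine(string.Join(", ", descriptorNames)); // Alias, Name
        Console.WriteLine(string.Join(", ", reflectedNames));  // Alias, Name
    }
}
```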

But now, let’s change these classes into interfaces so that we’re dealing with interface inheritance. Notice that ISuperHero now derives from IPerson.

public interface IPerson {
  string Name { get; set; }
}

public interface ISuperHero : IPerson {
  string Alias { get; set; }
}

I’ve also made the corresponding changes to the console app.

// type descriptor
var properties = TypeDescriptor.GetProperties(typeof(ISuperHero));
foreach (PropertyDescriptor property in properties) {
  // Console.WriteLine call omitted
}

// reflection
var reflectedProperties = typeof(ISuperHero).GetProperties();
foreach (var property in reflectedProperties) {
  // Console.WriteLine call omitted
}

Before looking at the next screenshot, take a moment to answer the question, what is the output of the program now?

Well, it should be obvious that the output is different; otherwise I wouldn’t be writing this blog post in the first place, right?

When I first tried this out, I found the behavior surprising. However, it’s probably not surprising to anyone who has an encyclopedic knowledge of the ECMA-335 Common Language Infrastructure specification (PDF) such as Levi, one of the ASP.NET MVC developers who pointed me to section 8.9.11 of the spec when I asked about this behavior:

8.9.11 Interface type derivation

Interface types can require the implementation of one or more other interfaces. Any type that implements support for an interface type shall also implement support for any required interfaces specified by that interface. This is different from object type inheritance in two ways:

  • Object types form a single inheritance tree; interface types do not.
  • Object type inheritance specifies how implementations are inherited; required interfaces do not, since interfaces do not define implementation. Required interfaces specify additional contracts that an implementing object type shall support.

To highlight the last difference, consider an interface, IFoo, that has a single method. An interface, IBar, which derives from it, is requiring that any object type that supports IBar also support IFoo. It does not say anything about which methods IBar itself will have.

The last paragraph provides a great example of why the code I wrote behaves as it does. The fact that ISuperHero inherits from IPerson doesn’t mean the ISuperHero interface type inherits the properties of IPerson because interfaces do not define implementation.

Rather, what it means is that any class that implements ISuperHero must also implement the IPerson interface. Thus if I wrote an implementation of ISuperHero such as:

public class Groo : ISuperHero {
  public string Name { get; set; }
  public string Alias { get; set; }
}

The Groo type must implement both ISuperHero and IPerson and iterating over its properties would show both properties.
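If you do need every property across an interface hierarchy, one common workaround (a sketch; this helper isn’t part of the framework) is to walk the interface plus all the interfaces it requires:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public interface IPerson { string Name { get; set; } }
public interface ISuperHero : IPerson { string Alias { get; set; } }

public static class InterfaceReflection {
    // Gathers properties declared on the interface itself plus all
    // of its required (inherited) interfaces.
    public static IEnumerable<PropertyInfo> GetAllProperties(Type interfaceType) {
        return new[] { interfaceType }
            .Concat(interfaceType.GetInterfaces())
            .SelectMany(t => t.GetProperties());
    }
}
```

With this helper, passing typeof(ISuperHero) yields both Alias and Name, whereas typeof(ISuperHero).GetProperties() on its own yields only Alias.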

Implications for ASP.NET MVC Model Binding

You probably could have guessed this part was coming. Let’s say you’re trying to use model binding to bind the Name property of an ISuperHero. Since our model binder uses type descriptors under the hood, we won’t be able to bind that property for the reasons stated above.

I learned of this detail investigating a bug reported on StackOverflow. It turns out this behavior is by design. In the context of sending a view model to the view, that view model should be a simple carrier of data. Thus it makes sense to use concrete types for your view model, in contrast to your domain models, which will more likely be interface based.


I was stepping through some code in a debugger today and noticed a neat little feature of Visual Studio 2010 that I hadn’t noticed before.

When debugging, you can easily examine the value of a variable by highlighting it with your mouse. Nothing new there. But then I noticed a little pin next to it, which I’d never seen before.


So what do you do when you see a pin? You click on it!


As you might expect, that pins the quick watch in place. So now I hit the play button, continue running my app in the debugger, and the next time I hit that breakpoint:


I can clearly see that the value changed since the last time. I think this will come in handy when walking through code as a way of seeing the values of important variables right next to where they are declared. I thought that was pretty neat.

This code has been incorporated into a new RouteMagic library I wrote, which includes source code as well as a NuGet package!

I saw a bug on Connect today in which someone offers the suggestion that the PageRouteHandler (new in ASP.NET 4) should handle IHttpHandler as well as Page.

I don’t really agree with the suggestion because while a Page is an IHttpHandler, an IHttpHandler is not a Page. What this person really wants is a new route handler specifically for HTTP handlers. Let’s give it the tongue-twisting name IHttpHandlerRouteHandler.

Unfortunately, it’s too late to add this for ASP.NET 4, but it turns out such a thing is trivially easy to write. In fact, here it is.

public class HttpHandlerRouteHandler<THandler>
    : IRouteHandler where THandler : IHttpHandler, new() {
  public IHttpHandler GetHttpHandler(RequestContext requestContext) {
    return new THandler();
  }
}

Of course, by itself it’s not all that useful. We need extension methods to make it really easy to register routes for http handlers. I wrote a set of those, but will only post two examples here on my blog. To get the full set download the sample project at the very end of this post.

public static class HttpHandlerExtensions {
  public static void MapHttpHandler<THandler>(this RouteCollection routes,
      string url) where THandler : IHttpHandler, new() {
    routes.MapHttpHandler<THandler>(null, url, null, null);
  }

  public static void MapHttpHandler<THandler>(this RouteCollection routes,
      string name, string url, object defaults, object constraints)
      where THandler : IHttpHandler, new() {
    var route = new Route(url, new HttpHandlerRouteHandler<THandler>());
    route.Defaults = new RouteValueDictionary(defaults);
    route.Constraints = new RouteValueDictionary(constraints);
    routes.Add(name, route);
  }
}

This now allows me to register a route which is handled by an IHttpHandler very easily. In this case, I’m registering a route that will use my SampleHttpHandler to handle any two-segment URL.

public static void RegisterRoutes(RouteCollection routes) {
  routes.MapHttpHandler<SampleHttpHandler>("{foo}/{bar}");
}

And here’s the code for SampleHttpHandler for completeness. All it does is print out the route values.

public class SampleHttpHandler : IHttpHandler {
  public bool IsReusable {
    get { return false; }
  }

  public void ProcessRequest(HttpContext context) {
    var routeValues = context.Request.RequestContext.RouteData.Values;
    string message = "I saw foo='{0}' and bar='{1}'";
    message = string.Format(message, routeValues["foo"], routeValues["bar"]);
    context.Response.Write(message);
  }
}


When I make a request for /testing/yo I’ll see the message

I saw foo=’testing’ and bar=’yo’

in my browser. Very cool.


One limitation here is that my HTTP handler has to have a parameterless constructor. That’s not much of a limitation, though, since registering an HTTP handler the old way also required the handler to have a parameterless constructor.

However, this code that I wrote for this blog post is based on code that I added to Subtext. In that code, I am passing an IKernel (I’m using Ninject) to my HttpRouteHandler. That way, my route handler will use Ninject to instantiate the http handler and thus my http handlers aren’t required to have a parameterless constructor.
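I’m not reproducing the Subtext version here, but its shape is roughly the following sketch (the type and member names are my assumptions for illustration, not the actual Subtext code):

```csharp
using System.Web;
using System.Web.Routing;
using Ninject;

// Sketch: resolve the handler through the Ninject kernel instead of
// calling new(), so handlers may take constructor dependencies.
public class NinjectHttpHandlerRouteHandler<THandler> : IRouteHandler
    where THandler : IHttpHandler {
  private readonly IKernel _kernel;

  public NinjectHttpHandlerRouteHandler(IKernel kernel) {
    _kernel = kernel;
  }

  public IHttpHandler GetHttpHandler(RequestContext requestContext) {
    return _kernel.Get<THandler>(); // Ninject builds the handler and its dependencies
  }
}
```

The trade-off is that the route registration code now needs access to the kernel, but in exchange the handlers themselves stay testable and free of the new() constraint.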

Try it out!

The RouteMagic solution includes a sample project that demonstrates all this.

This is the second in a three part series related to HTML encoding blocks, aka the <%: ... %> syntax.

In a recent blog post, I introduced ASP.NET 4’s new HTML Encoding code block syntax as well as the corresponding IHtmlString interface and HtmlString class. I also mentioned that ASP.NET MVC 2 would support this new syntax when running on ASP.NET 4.

In fact, you can try it out now by downloading and installing Visual Studio 2010 Beta 2.

I’ve also mentioned in the past that we are not conditionally compiling ASP.NET MVC 2 for each platform. Instead, we’re building System.Web.Mvc.dll against ASP.NET 3.5 SP1 and simply including that one build in both VS08 and VS10. Thus when you’re running ASP.NET MVC 2 on ASP.NET 4, it’s byte-for-byte the same assembly you would run on ASP.NET 3.5 SP1.

This fact ought to raise a question in your mind. If ASP.NET MVC 2 is built against ASP.NET 3.5 SP1, how the heck does it take advantage of the new HTML encoding blocks which require that you implement an interface introduced in ASP.NET 4?

The answer involves a tiny bit of voodoo black magic we’re doing in ASP.NET MVC 2.

We introduced a new type MvcHtmlString which is created via a factory method, MvcHtmlString.Create. When this method determines that it is being called from an ASP.NET 4 application, it uses Reflection.Emit to dynamically generate a derived type which implements IHtmlString.

If you look at the source code for ASP.NET MVC 2 Preview 2, you’ll see the following method call when we are instantiating an MvcHtmlString:

Type dynamicType = DynamicTypeGenerator.GenerateType("DynamicMvcHtmlString",
    typeof(MvcHtmlString), new Type[] { iHtmlStringType });

Note that we’re using a new internal class, DynamicTypeGenerator, to generate a brand new type named DynamicMvcHtmlString. This type derives from MvcHtmlString and implements IHtmlString. We’ll return this instance instead of a standard MvcHtmlString when running on ASP.NET 4.

When running on ASP.NET 3.5 SP1, we simply new up an MvcHtmlString and return that, completely bypassing the Reflection.Emit logic. Note that we only generate this type once per AppDomain, so you only pay the Reflection.Emit cost once.

The code in DynamicTypeGenerator is standard Reflection.Emit fare: at runtime it creates an assembly, adds the new type to it, and returns a lambda used to instantiate the new type. If you’ve never seen Reflection.Emit code, it’s worth a look.
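If you haven’t, here’s a minimal sketch of the general technique (not the actual MVC source): emit a type at runtime that derives from a base class and implements an interface. I’m using the modern AssemblyBuilder entry point; on .NET 3.5 you’d call AppDomain.CurrentDomain.DefineDynamicAssembly instead.

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public interface IMarker { }
public class BaseThing { }

public static class EmitDemo {
    public static Type GenerateType() {
        // An in-memory, run-only dynamic assembly.
        var assemblyBuilder = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("EmitDemoAssembly"), AssemblyBuilderAccess.Run);
        var module = assemblyBuilder.DefineDynamicModule("EmitDemoModule");

        // A public type deriving from BaseThing and implementing IMarker.
        // TypeBuilder supplies a default constructor automatically.
        var typeBuilder = module.DefineType(
            "DynamicThing",
            TypeAttributes.Public,
            typeof(BaseThing),
            new[] { typeof(IMarker) });

        return typeBuilder.CreateType();
    }
}
```

The MVC version does more work than this (it implements IHtmlString’s member and marks the assembly security transparent), but the skeleton is the same.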

In general, we really frown on this sort of “tricky” code as it’s often hard to maintain and a potential bug magnet. For example, since System.Web.Mvc.dll is security transparent, we needed to make sure that the assembly we generate is marked with the SecurityTransparentAttribute. This is something that would be easy to overlook until you start testing in medium trust scenarios.

However, in this case, the type we’re generating is very small and very simple. Not only that, we only need to keep this code for one version of ASP.NET MVC. ASP.NET MVC 3 will be compiled against ASP.NET 4 only (no support for ASP.NET 3.5 planned) and we’ll be able to remove this “clever” code and have much more straightforward code. I’m looking forward to that. :)

In any case, the point of this post was to fulfill a promise I made in an earlier post where I said I’d give some more details on how ASP.NET MVC 2 works with the new Html encoding block feature.

This is all behind-the-scenes detail that’s not necessary to understand in order to use ASP.NET MVC, but it might be interesting to some of you, especially those who ever find themselves in a situation where they need to support forward compatibility.

Have you ever needed to quickly spawn a web server against a local folder to preview a web application? If not, what would you say you do here?

This is actually quite common for me since I receive a lot of zip files containing web applications which reproduce a bug. After I unzip the repro, I need a way to quickly point a web server at the folder and run the web site.

A while back I wrote about a useful registry hack to do just this. It adds a right click menu to start a web server (Cassini) pointing to any folder. This was based on a shell extension by Robert McLaws.

Well that was soooo 2008. It’s almost 2010 and Visual Studio 2010 Beta 2 is out which means it’s time to update this shell extension to run an ASP.NET 4 web server.


Obviously this is not rocket science as I merely copied my old settings and updated the paths. But if you’re too lazy to look up the new file paths, you can just copy these settings (changes are in bold).

32 bit (x86)

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer]
@="ASP.NET 4 Web Server Here"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer\command]
@="C:\\Program Files\\Common Files\\microsoft shared\\DevServer\\10.0\\Webdev.WebServer40.exe /port:8081 /path:\"%1\""

64 bit (x64)

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer]
@="ASP.NET 4 Web Server Here"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\VS2010 WebServer\command]
@="C:\\Program Files (x86)\\Common Files\\microsoft shared\\DevServer\\10.0\\Webdev.WebServer40.exe /port:8081 /path:\"%1\""

I chose a different port and name for this shell extension so that it lives side-by-side with my other one.

Of course, you needn’t bother copying these settings from this blog post since I’ve conveniently zipped up .reg files you can run.

You probably don’t need me to tell you that Visual Studio 2010 Beta 2 has been released as it’s been blogged to death all over the place. Definitely check out the many blog posts out there if you want more details on what’s included.

This post will focus more on what Visual Studio 2010 means to ASP.NET MVC and vice versa.

Important: If you installed ASP.NET MVC for Visual Studio 2010 Beta 1, make sure to uninstall it (and VS10 Beta 1) before installing Beta 2.

In the box baby!

Well one of the first things you’ll notice is that ASP.NET MVC 2 Preview 2 is included in VS10 Beta 2. When you select the File | New menu option, you’ll be greeted with an ASP.NET MVC 2 project template option under the Web node.


Note that when you create your ASP.NET MVC 2 project with Visual Studio 2010, you can choose whether you wish to target ASP.NET 3.5 or ASP.NET 4.


If you choose to target ASP.NET 4, you’ll be able to take advantage of the new HTML encoding code blocks with ASP.NET MVC which I wrote about earlier.

As an aside, you might find it interesting that the System.Web.Mvc.dll assembly we shipped in VS10 is the exact same binary we shipped out-of-band for VS2008 and .NET 3.5. How then does that assembly implement an interface that is new in ASP.NET 4? That’s a subject for another blog post.

What about ASP.NET MVC 1.0?

Unfortunately, we have no plans to support ASP.NET MVC 1.0 tooling in Visual Studio 2010. When we were going through planning, we realized it would’ve taken a lot of work to update our 1.0 project templates. We felt that time would be better spent focused on ASP.NET MVC 2.

However, that doesn’t mean you can’t develop an ASP.NET MVC 1.0 application with Visual Studio 2010! All it means is you’ll have to do so without the nice ASP.NET MVC specific tooling such as the Add Controller and Add View dialogs. After all, at its core, an ASP.NET MVC project is a Web Application Project.

Eilon Lipton, the lead dev for ASP.NET MVC, wrote a blog post a while back describing how to open an ASP.NET MVC project without having ASP.NET MVC installed. All it requires is for you to edit the .csproj file and remove the following GUID from the <ProjectTypeGuids> element.


Once you do that, you’ll be able to open, code, and debug your project from VS10.

Upgrading ASP.NET MVC 1.0 to ASP.NET MVC 2

Another option is to upgrade your ASP.NET MVC 1.0 application to ASP.NET MVC 2 and then open the upgraded project with Visual Studio 2010 Beta 2.

Eilon has your back again as he’s written a handy little tool for upgrading existing ASP.NET MVC 1.0 applications to version 2.

After using this tool, your project will still be a Visual Studio 2008 project, but you can then open it with VS10, which knows how to upgrade it to a VS10 project.

What about automatic upgrades?

We are investigating implementing a more automatic process for upgrading ASP.NET MVC 1.0 applications to ASP.NET MVC 2 when you try to open the existing project in Visual Studio 2010. We plan to have something in place by the RTM of VS10.

Ideally, when you try to open an ASP.NET MVC 1.0 project, instead of showing an error dialog, VS10 will provide a wizard to upgrade the project which will be somewhat based on the sample Eilon provided. So be sure to supply feedback on his wizard soon!

Tags: aspnetmvc, visual studio 2010, visual studio


Despite what your well intentioned elementary school teachers would have liked you to believe, there is such a thing as a stupid question, and you probably get them all the time via email or IM.

You also know that in half the time it takes to type the question, the person pestering you could have typed the query in their favorite search engine and received an answer immediately.

Let me Google that for you addressed this little annoyance by providing a passive aggressive means to tell annoying question askers to bugger off while at the same time teaching them the power of using a search engine to help themselves.

When I first heard about Microsoft’s new search engine, Bing, I jumped at purchasing the domain name (though I was remiss in not also registering as well. If you own that domain, may I buy it off of you?)

Unfortunately, being way too busy caused me to leave the domain name unused gathering dust until I put out a call on Twitter for help. Not long after Maarten Balliauw and Juliën Hanssens answered the call and put together the bulk of this ASP.NET MVC application using jQuery and jQuery UI.

I really like what they did in that the background image for the site changes daily to match the one on Bing. I finally found some time to review the code, do a bit of clean-up, fix some minor issues, and test it, so I am now ready to deploy it.

Keep in mind that even though I’m employed by Microsoft, this site is a pet project I’m doing on the side in collaboration with Maarten and Juliën and is not associated with Microsoft nor Bing in any official capacity. We’re just some folks doing this for fun.

Now go try it out and release your inner snarkiness.


A little while ago, Scott Guthrie announced the launch of the Microsoft Ajax CDN. In his post he talked about how ASP.NET 4 will have support for the CDN as well as the list of scripts that are included.

The good news today is that, thanks to the hard work of Stephen Walther and the ASP.NET Ajax team, a couple of new scripts have been added to the CDN which are near and dear to my heart: the ASP.NET MVC 1.0 scripts. The following code snippet shows how you can start using them today.

<script src=""
<script src=""

Debug versions are also available on the CDN.

<script src=""
<script src=""

As ScottGu wrote,

The Microsoft AJAX CDN makes it really easy to add the jQuery and ASP.NET AJAX script libraries to your web sites, and have them be automatically served from one of our thousands of geo-located edge-cache servers around the world.

We currently don’t have the ASP.NET MVC 2 scripts available on the CDN, but that’s something we can consider as we get closer and closer to RTM.


If you’re a manufacturing plant, one way to maximize profit is to keep costs as low as possible. One way to do that is to cut corners. Go ahead and dump that toxic waste into the river and pollute the heck out of the air with your smoke stacks. These options are much cheaper than installing smoke scrubbers or trucking waste to proper disposal sites.


Of course, economists have long known that this does not paint the entire picture. Taking these shortcuts incurs other costs; it’s just that these costs are not borne by the manufacturing plant. The term externalities describes such spillover costs.

In economics an externality or spillover of an economic transaction is an impact on a party that is not directly involved in the transaction. In such a case, prices do not reflect the full costs or benefits in production or consumption of a product or service.

Thus the full cost of manufacturing includes the hospital bills of those who get sick by drinking the tainted water, the cost of the crops damaged by the acid rain, etc.

Software is the same way. I got to thinking about this after reading Ted’s latest post, “Agile is treating the symptoms, not the disease,” in which he disparages the complexity that Agile introduces and holds up Access as one example of a great “simple” way to develop applications.

I agree that Access is great when you’re building a little database to track Billy’s baseball cards. However, the real world doesn’t stay that simple. As the second law of thermodynamics states (paraphrasing here), entropy tends to increase over time, which is something that Ted doesn’t address in his discussion.

I’m all for simplicity in our tools and methodologies, as I think we still have a lot of room for improvement in reducing accidental complexity. Unfortunately, the business processes for which we build software are not simple at all and are full of inherent complexity. Oh sure, they may start off as a simple Access database, but they never stay that simple. Every business I’ve ever interacted with had very complex sets of business processes, some seemingly cargo-cultish in origin.

Ted mentions friends of his who’ve made a healthy living using simple tools to build simple line-of-business apps for customers. And I’m sure they did a fine job of it. But I also made a healthy living in the past coming in to clean up the externalities left by such applications.

I remember one rescue operation for a company drowning in the complexity of a “simple” Access application they used to run their business. It was simple until they started adding new business processes they needed to track. It was simple until they started emailing copies around and were unsure which was the “master copy”. Not to mention all the data integrity issues and difficulty in changing the monolithic procedural application code.

I also remember helping a teachers’ union that started off with a simple attendance-tracker-style app (to use an example Ted mentions) and just scaled it up to an atrociously complex Access database with stranded data and manual processes where they printed Excel spreadsheets to paper, then manually entered the data into another application. I have to wonder, why is that little school district in western Pennsylvania engaging in custom software development in the first place? I don’t engage in developing custom school curricula. An even simpler option is to buy some off-the-shelf software or simply use a wiki, but I digress.

These were apps that would make The Daily WTF look like paragons of good software development in comparison.

The core problem here is that while it’s fine to push for simpler tools to reduce accidental complexity, at one point or another we are going to have to deal with inherent complexity caused by entropy. Business processes are inherently complex, usually more than they need to be, and this is not a problem that will be solved by any software. Most are not only inherently complex, but chock-full of accidental complexity as well. Your line of business app won’t solve that. It takes systemic change in the organization to make that happen.

Not only that, but business processes get more complex over time as entropy sets in. The applications I mentioned dealt with this entropy and reached a point where the current solution could not scale to meet that new level of complexity (a different sort of scaling up), so they started to drown in it, the original authors of the applications long gone off to create new apps with new externalities.

Fortunately, the externalities of these applications didn’t cause cancer, but rather kept guys like me employed. Of course, it was a negative externality for the company who kept pumping cash to fix these applications.

Ted paraphrases Billy suggesting that Agile requires even more complex tools, story cards, continuous integration servers, etc. This is an unfair characterization and misses the point of Agile. Agile is less about managing the complexity of an application itself and more about managing the complexity of building an application.

A core principle of agile is YAGNI (You Ain’t Gonna Need It): don’t build it until you need it. For example, the one or two folks in a garage probably don’t need a continuous integration server, stand-up meetings, etc., and any real agilist worth his or her salt would recognize that and not try to force unnecessary process on a team that doesn’t need it.

However, as the two garage dwellers start to grow the business and need to coordinate with more developers, such tools come in handy. As you grow a team beyond two people, the number of lines of communication grows quadratically (n(n-1)/2 for a team of n), creating inherent complexity. Looking at the cost of developing and maintaining an application over time is where you start to get a full picture of the true cost of building an application.

As Robert Glass pointed out in Facts and Fallacies of Software Engineering, research shows that maintenance typically consumes from 40 to 80 percent of software costs, typically making it the dominant life cycle phase of a software project. Thus these so called “simple” solutions need to factor that in, or the customers will continually be left with the clean-up duty while the polluters have long since moved on.


This morning at 3:17 AM, Mia Yokoyama Haack was born weighing in at 7lb 8.5 oz. Now my world domination crew is complete!

Mia is a fast little one, as labor started around 11 PM and she was delivered only four hours later!

This time around, we did a water birth at a birthing center which involves the momma sitting in a big tub for the last part of labor and delivery, which made for a much more comfortable experience than last time. I think she’d definitely recommend it.


We were back home by 6:30 AM which just amazes me. Momma and Baby are doing well. I’m still getting over my cold, but I think the adrenaline of the whole experience helped a lot. :)


Today we just released ASP.NET MVC 2 Preview 2 for Visual Studio 2008 SP1 (and ASP.NET 3.5 SP1), which builds on top of the work we did in Preview 1 released two months ago.

Some of the cool new features we’ve added to Preview 2 include:

  • Client-Side Validation – ASP.NET MVC 2 includes the jQuery validation library to provide client-side validation based on the model’s validation metadata. It is possible to hook in alternative client-side validation libraries by writing an adapter which adapts the client library to the JSON metadata in a manner similar to the xVal validation framework.
  • Areas – Preview 2 includes in-the-box support for single project areas for developers who wish to organize their application without requiring multiple projects. Registration of areas has also been streamlined.
  • Model Validation Providers - allow hooking in alternative validation logic to provide validation when model binding. The default validation provider uses Data Annotations.
  • Metadata Providers - allow hooking in alternative sources of metadata for model objects. The default metadata provider uses Data Annotations.

Based on this list, you’ll notice a theme: where Preview 1 tied much functionality directly to Data Annotation attributes, Preview 2 inserts abstractions around our usage of Data Annotations, which allow hooking in custom implementations of validation and metadata providers.

This will allow you to do things like swapping out our default validation with the Enterprise Library Validation Block for example. It also allows providing implementations where model metadata is stored in alternative locations rather than via attributes, with a bit of work.

What About Visual Studio 2010?

The tools for this particular release only work in Visual Studio 2008 SP1. The version of ASP.NET MVC 2 Preview 2 for Visual Studio 2010 will be released in-the-box with Visual Studio 2010 Beta 2. You won’t need to go anywhere else, it’ll just be there waiting for you. Likewise, the RTM of ASP.NET MVC 2 will be included with the RTM of Visual Studio 2010.

Therefore, if you want to try out the new HTML encoding code blocks with ASP.NET MVC 2 Preview 2, you’ll have to wait till Visual Studio 2010 Beta 2 is released. But for now, you can try out Preview 2 on VS 2008 and start providing feedback.

Tags: code, tdd

UPDATE: For a better approach, check out MoQ Sequences Revisited.

One area where using MoQ is confusing is when mocking successive calls to the same method of an object.

For example, I was writing some tests for legacy code where I needed to fake out multiple calls to a data reader. You remember data readers, don’t you?

Here’s a snippet of the code I was testing. Ignore the map method and focus on the call to reader.Read.

while(reader.Read()) {
  yield return map(reader);
}

Notice that there are multiple calls to reader.Read. The first couple times, I wanted Read to return true. The last time, it should return false. And here’s the code I hoped to write to fake this using MoQ:

reader.Setup(r => r.Read()).Returns(true);
reader.Setup(r => r.Read()).Returns(true);
reader.Setup(r => r.Read()).Returns(false);

Unfortunately, MoQ doesn’t work that way. The last call wins and nullifies the previous two calls. Fortunately, there are many overloads of the Returns method, some of which accept functions used to return the value when the method is called.

That’s the approach I found on Matt Hamilton’s blog post (Mad Props indeed!) where he describes his clever solution to this issue involving a Queue:

var pq = new Queue<IDbDataParameter>(new[] { param1, param2 /* the parameters to hand out, in order */ });
mockCommand.Expect(c => c.CreateParameter()).Returns(() => pq.Dequeue());

Each time the method is called, it will return the next value in the queue.

One cool thing I stumbled on is that the syntax can be made even cleaner and more succinct by passing in a method group. Here’s my MoQ code for the original IDataReader issue I mentioned above.

var reader = new Mock<IDataReader>();
reader.Setup(r => r.Read())
  .Returns(new Queue<bool>(new[] { true, true, false }).Dequeue);

I’m defining a Queue inline and then passing what is effectively a pointer to its Dequeue method. Notice the lack of parentheses at the end of Dequeue which is how you can tell that I’m passing the method itself and not the result of the method.

Using this approach, MoQ will call Dequeue each time it calls r.Read(), grabbing the next value from the queue. Thanks to Matt for posting his solution! This is a great technique for dealing with sequences using MoQ.

UPDATE: There’s a great discussion in the comments to this post. Fredrik Kalseth proposed an extension method to make this pattern even simpler to apply and much more understandable. Why didn’t I think of this?! Here’s the extension method he proposed (but renamed to the name that Matt proposed because I like it better).

public static class MoqExtensions {
  public static void ReturnsInOrder<T, TResult>(this ISetup<T, TResult> setup, 
      params TResult[] results) where T : class {
    setup.Returns(new Queue<TResult>(results).Dequeue);
  }
}

Now with this extension method, I can rewrite my above test to be even more readable.

var reader = new Mock<IDataReader>();
reader.Setup(r => r.Read()).ReturnsInOrder(true, true, false);

In the words of Borat, Very Nice!

Tags: TDD, unit testing, MoQ, code, mvc

This is the first in a three part series related to HTML encoding blocks, aka the <%: ... %> syntax.

One great new feature being introduced in ASP.NET 4 is a new code block (often called a Code Nugget by members of the Visual Web Developer team) syntax which provides a convenient means to HTML encode output in an ASPX page or view.

<%: CodeExpression %>

I often tell people it’s <%= but with the = seen from the front.

Let’s look at an example of how this might be used in an ASP.NET MVC view. Suppose you have a form which allows the user to submit their first and last name. After submitting the form, the same view is used to display the submitted values.

First Name: <%: Model.FirstName %>
Last Name: <%: Model.LastName %>

<form method="post">
  <%: Html.TextBox("FirstName") %>
  <%: Html.TextBox("LastName") %>
</form>

By using the new syntax, Model.FirstName and Model.LastName are properly HTML encoded, which helps mitigate Cross Site Scripting (XSS) attacks.

Expressing Intent with the new IHtmlString interface

If you’re paying close attention, you might be asking yourself “Html.TextBox is supposed to return HTML that is already sanitized. Wouldn’t using this syntax with Html.TextBox cause double encoding?”

ASP.NET 4 also introduces a new interface, IHtmlString along with a default implementation, HtmlString. Any method that returns a value that implements the IHtmlString interface will not get encoded by this new syntax.
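To make that concrete, here’s a sketch of a hypothetical helper (the Bold method is made up for illustration; HtmlString and HttpUtility are the real ASP.NET 4 types). By encoding its input once and wrapping the result in an HtmlString, the helper signals that its output is already safe, so the new code nugget emits it verbatim:

```csharp
using System.Web;

public static class SampleHelpers {
  // Hypothetical helper: encodes its input exactly once, then wraps
  // the markup in HtmlString (which implements IHtmlString) so that
  // the <%: %> code nugget will not encode it a second time.
  public static IHtmlString Bold(string text) {
    return new HtmlString("<strong>" + HttpUtility.HtmlEncode(text) + "</strong>");
  }
}
```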

In ASP.NET MVC 2, all helpers which return HTML now take advantage of this new interface, which means that when you’re writing a view, you can simply use this new syntax all the time and it will just work. By adopting this habit, you’ve effectively changed the act of HTML encoding from an opt-in model to an opt-out model.

The Goals

There were four primary goals we wanted to satisfy with the new syntax.

  1. Obvious at a glance. When you look at a page or a view, it should be immediately obvious which code blocks are HTML encoded and which are not. You shouldn’t have to refer back to flags in web.config or the page directive (which could turn encoding on or off) to figure out whether the code is actually being encoded. Also, it’s not uncommon to review code changes via check-in emails which only show a DIFF. This is one reason we didn’t reuse existing syntax.

    Not only that, code review becomes a bit easier with this new syntax. For example, it would be easy to do a global search for <%= in a code base and review those lines with more scrutiny (though we hope there won’t be any to review). Also, when you receive a check-in email which shows a DIFF, you have most of the context you need to review that code.

  2. Evokes a similar meaning to <%=. We could have used something entirely new, but we didn’t have the time to drastically change the syntax. We also wanted something that had a similar feel to <%= which evokes the sense that it’s related to output. Yeah, it’s a bit touchy feely and arbitrary, but I think it helps people feel immediately familiar with the syntax.

  3. Replaces the old syntax and allows developers to show their intent. One issue with the current implementation of output code blocks is there’s no way for developers to indicate that a method is returning already sanitized HTML. Having this in place helps enable our goal of completely replacing the old syntax with this new syntax in practice.

    This also means we need to work hard to make sure all new samples, books, blog posts, etc. eventually use the new syntax when targeting ASP.NET 4.

    Hopefully, the next generation of ASP.NET developers will experience this as being the default output code block syntax and <%= will just be a bad memory for us old-timers like punch cards, manual memory allocations, and Do While Not rs.EOF.

  4. Make it easy to migrate from ASP.NET 3.5. We strongly considered just changing the existing <%= syntax to encode by default. We eventually decided against this for several reasons, some of which are listed in the above goals. Doing so would make it tricky and painful to upgrade an existing application from earlier versions of ASP.NET.

    Also, we didn’t want to impose an additional burden for those who already do practice good encoding. For those who don’t already practice good encoding, this additional burden might prevent them from porting their app and thus they wouldn’t get the benefit anyways.

When Can I Use This?

This is a new feature of ASP.NET 4. If you’re developing on ASP.NET 3.5, you will have to continue to use the existing <%= syntax and remember to encode the output yourself.

In ASP.NET 4 Beta 2, you will have the ability to try this out yourself with ASP.NET MVC 2 Preview 2. If you’re running on ASP.NET 3.5, you’ll have to use the old syntax.

What about ASP.NET MVC 2?

As mentioned, ASP.NET MVC 2 supports this new syntax in its helpers when running on ASP.NET 4.

In order to make this possible, we are making a breaking change such that the relevant helper methods (ones that return HTML as a string) will return a type that implements IHtmlString.

In a follow-up blog post, I’ll write about the specifics of that change. It was an interesting challenge given that IHtmlString is new to ASP.NET 4, but ASP.NET MVC 2 is actually compiled against ASP.NET 3.5 SP1. :)


In my last post, I presented a general overview of the CodePlex foundation and talked a bit about what it means to the .NET OSS developer, admittedly without much in the way of details. I plan to fix some of that in this post.

Before I continue, I encourage you to read Scott Bellware’s great analysis of the CodePlex foundation which covers some of the points I planned to make (making my life easier). It’s a must-read to better understand the potential and opportunity presented by the foundation.

There’s one particular point he makes which I’d like to expound upon.

The CodePlex Foundation will bring influential open source projects under its auspices. The details aren’t clear yet, but it’s reasonable to assume that the foundation will support its projects the way that other software foundations support their projects, with protection for these projects as they are used in corporate and commercial contexts and who knows, maybe even some financial support will be part of the deal.

I talked to Bill Staples recently and he pointed out that The Apache Foundation is one source (among many) of inspiration for the CodePlex Foundation. If you go to the Apache FAQ, you’ll find the answer to the following question, “How does the ASF help its projects?” (emphasis mine)

As a corporate entity, the Apache Software Foundation is able to be a party to contracts, such as for technical services or guarantee-bonds for conferences. It can also accept donations on behalf of its projects, clarifying the associated tax issues, and create additional self-funded services via community-building activities, such as Apache-related T-shirts and user conferences.

In addition, the Foundation provides a framework for limiting the legal exposure of individual volunteers while they work on behalf of one of the ASF projects. In the past, these volunteers have been personally vulnerable to lawsuits, whether legitimate or frivolous, which impaired many activities that might have significantly improved contributions to the projects and benefited our users.

The first paragraph is what I alluded to in my last post, and this is something that the CodePlex Foundation would like to do in the long run, but as I mentioned before, it all depends on the level of participation and sponsorship funding. In an ideal world, the foundation would be able to add some level of funding of projects to this list of benefits for a member project.

The second paragraph is something that the CodePlex Foundation definitely wants to do right off the bat.

This is great news for those of us hosting open source projects. It’s generally not a worry for many small .NET open source projects, but the risk is always there that if a project starts to get noticed, some company may come along and sue the project owner for patent infringement etc. Typical projects may not have any money to go after, but I can imagine a commercial company going after a competing OSS product simply to shutter it.

Assigning your project’s copyright to the CodePlex Foundation would afford some level of legal protection against this sort of thing, similar to the way it works with the Apache Foundation.

One nice thing about the CodePlex Foundation is you have the option to assign copyright to the foundation or license your code to the foundation. I’m not a lawyer so I don’t understand if one provides more legal protection than the other. Honestly, once the foundation starts accepting projects at large, I would want to assign Subtext’s copyright over so that my name doesn’t appear as the big red bulls-eye in the Subtext copyright notice! ;)

And if you’re wondering, “am I losing control over my project by assigning copyright over?”, you are not. As I wrote in my post Who Owns The Copyright For An Open Source Project (part of my series called the Developer’s Guide To Copyright Law), you’d be assigning it under the open source license of your choice (yes, the CodePlex Foundation is more or less license agnostic; it doesn’t require a specific license to join), which always gives you the freedom to fork it should the foundation suddenly be overtaken by evil Ninjas.

As I said before, many of these details are still being hashed out and I’m guessing some of them won’t be finalized until the final board of directors is in place. But in the meanwhile, I think understanding the sources of inspiration for this new foundation will help provide insight into the direction it may take.

I hope this provides more concrete details than my last post.


UPDATE: Be sure to read my follow-up post on this topic as well.

Yesterday, Microsoft announced some exciting news about the formation of the CodePlex Foundation (not to be confused with the CodePlex project hosting website, despite the unfortunately confusing shared name), whose mission is to “enable the exchange of code and understanding among software companies and open source communities”.


This is a 501(c)(6) organization completely independent of Microsoft. For example, search the by-laws for mentions of Microsoft and you’ll find zero. Zilch.

One thing to keep in mind about this organization is that it’s very early in its formation. There was debate on trying to hash out all the details first and perhaps announcing the project some time further in the future, but that sort of goes against the open source ethos. As the main website states (emphasis mine):

We don’t have it all figured out yet. We know that commercial software developers are under-represented on open source projects. We know that commercial software companies face very specific challenges in determining how to engage with open source communities. We know that there are misunderstandings on both sides. Our aim is to advance the IT industry for both commercial software companies and open source communities by helping to meet these challenges.

Meeting these challenges is a collaborative process. We want your participation.

I’m personally excited about this as I’ve been a proponent of open source on the Microsoft stack for a long time and have called for Microsoft to get more involved in the past. I remember way back then, Scott Hanselman suggested Microsoft form an INETA like organization for open source as an editorial aside in his post on NDoc.

How does it benefit .NET OSS projects?

However, all is not roses just yet. If you read the mission statement carefully, it’s a very broad statement. In fact, it’s not specific to the Microsoft open source ecosystem, though obviously Microsoft will benefit from the mission statement being carried out.

If you look at it from Microsoft’s perspective, there are many legal and other challenges to participating in open source more fully. While Microsoft has made contributions to Linux, has collaborated closely with PHP, and so on, each instance presents a unique set of challenges.

If the foundation succeeds in its mission, I believe it will open the doors for Microsoft to collaborate with and encourage the .NET open source ecosystem in a more meaningful manner. I don’t know what shape that will take in the end, but I believe that removing roadblocks to Microsoft’s participation is required and a great first step.

I’m honored to serve as an advisor to the board. In our first advisory board conference call, my first question was, “what does this mean for those running open source projects on the .NET platform?” After all, while I’m a Microsoft employee by day, I also run an open source project at night and I have my own motivations as such.

I’m happy to see the mission statement take such a broad stance as it seems to be focused on the greater good and not focused on Microsoft specifically, but I am personally interested in seeing more details on why this is good for the open source developer who runs a project on the .NET platform. For example, can the foundation provide something more than moral support to .NET OSS projects such as MSDN licenses or more direct funding?

These are all interesting questions and I don’t know the answers. Microsoft put some skin in the game by seeding the foundation with a million dollars for the first year. The foundation, as an independent organization, will be looking for more sponsors to also pony up money. They will have to find the right balance in how they spend that money so that they can continue to operate. I imagine the answer to these questions will depend in how successful they are in finding sponsors and operating within their budget. As an advisor, I’ll be pushing for more clarity around this.

The full details for what the foundation will do are still being hashed out. The interim board has 100 days to choose a more permanent board of directors. Now is the time to get involved if you want to help make sure it continues in the right direction.


My last post on the new dynamic keyword sparked a range of reactions which are not uncommon when discussing a new language keyword or feature. Many are excited by it, but there are those who feel a sense of…well…grief when their language is “marred” by a new keyword.

C#, for example, has seen it with the var keyword and now with the dynamic keyword. I don’t know, maybe there’s something to this idea that developers go through the seven stages of grief when their favorite programming language adds new stuff (Disclaimer: Actually, I’m totally making this crap up)

1. Shock and denial.

With the introduction of a new keyword, initial reactions include shock and denial.

No way are they adding lambdas to the language! I had a hard enough time with the delegate keyword!

What is this crazy ‘expression of funky tea’ syntax? I’ll just ignore it and hope it goes away.

Generics will never catch on! Mark my words.

2. Longing for the past

Immediately, even before the new feature is even released, developers start to wax nostalgic remembering a past that never was.

I loved language X 10 years ago when it wasn’t so bloated, man.

They forget that the past also meant managing your own memory allocations, punch cards, and dying of the black plague, which totally sucks.

3. Anger and FUD

Soon this nostalgia turns to anger and FUD.

Check out this reaction to adding the goto keyword to PHP, emphasis mine.

This is a problem. Seriously, PHP has made it this far without goto, why turn the language into a public menace?

Yes Robin, PHP is a menace terrorizing Gotham City. To the Batmobile!

The dynamic keyword elicited similar anger with comments like:

C# was fine as a static language. If I wanted a dynamic language, I’d use something else!


I’ll never use that feature!

It’s never long before anger turns to spreading FUD (Fear, Uncertainty, Doubt). The var keyword in C# is a prime example of this. Many developers wrote erroneously that using it would mean that your code was no longer strongly typed and would lead to all hell breaking loose.

My friend used the var keyword in his program and it formatted his hard drive, irradiated his crotch, and caused the recent economic crash. True story.

Little did they know that the dynamic keyword was on its way which really would fulfill all those promises. ;)

Pretty much the new feature will destroy life on the planet as we know it and make for some crappy code.

4. Depression, reflection, and wondering about its performance

Sigh. I now have to actually learn this new feature. I wonder how well it performs.

This one always gets me. It’s almost always the first question developers ask about a new language feature: “Does it perform?”

I think wondering about its performance is a waste of time. For your website which gets 100 visitors a day, yeah, it probably performs just fine.

The better question to ask is “Does my application perform well enough for my requirements?” And if it doesn’t then you start measuring, find the bottlenecks, and then optimize. Odds are your performance problems are not due to language features but to common higher level mistakes such as the Select N+1 problem.

5. The upward turn

Ok, my hard drive wasn’t formatted by this keyword. Maybe it’s not so bad.

At this point, developers start to realize that the new feature doesn’t eat kittens for breakfast and just might not be evil incarnate after all. Hey! It might even have some legitimate uses.

This is the stage where I think you see a lot of experimentation with the feature as developers give it a try and try to figure out where it does and doesn’t work well.

6. Code gone wild! Everything is a nail

I think we all go through this phase from time to time. At some point, you realize that this new feature is really very cool so you start to go hog wild with it. In your hands the feature is the Hammer of Thor and every line of code looks like a nail ready to be smitten.

Things can get ugly at this stage in a fit of excitement. Suddenly every object is anonymous, every callback is a lambda, and every method is generic, whether it should be or not.

It’s probably a good idea to resist this, but once in a while, you have to let yourself give in and have a bit of fun with the feature. Just remember the following command.

svn revert -R .

Or whatever the alternative is with your favorite source control system.

7. Acceptance and obliviousness

At this point, the developer has finally accepted the language feature as simply another part of the language like the class or public keyword. There is no longer a need to gratuitously use or over-use the keyword. Instead the developer only uses the keyword occasionally in cases where it’s really needed and serves a useful purpose.

It’s become a hammer in a world where not everything is a nail. Or maybe it’s an awl. I’m not sure what an awl is used for, but I’m sure some of you out there do and you probably don’t use it all the time, but you use it properly when the need arises. Me, I never use one, but that’s perfectly normal, perfectly fine.

For the most part, the developer becomes oblivious to the feature much as developers are oblivious to the using keyword. You only think about the keyword when it’s the right time to use it.


Thanks to everyone on Twitter who provided examples of language keywords that provoked pushback. It was helpful.

Tags: code, mvc

UPDATE: Looks like the CLR already has something similar to what I did here. Meet the latest class with a superhero-sounding name, ExpandoObject.

Warning: What I’m about to show you is quite possibly an abuse of the C# language. Then again, maybe it’s not. ;) You’ve been warned.

Ruby has a neat feature that allows you to hook into method calls for which the method is not defined. In such cases, Ruby will call a method on your class named method_missing. I showed an example of this using IronRuby a while back when I wrote about monkey patching CLR objects.

Typically, this sort of wild chicanery is safely contained within the world of those wild and crazy dynamic language aficionados, far away from the peaceful waters of those who prefer statically typed languages.

Until now suckas! (cue heart pounding rock music with a fast beat)

C# 4 introduces the new dynamic keyword which adds dynamic capabilities to the once staid and statically typed language. Don’t be afraid, nobody is going to force you to use this (except maybe me). In fact, I believe the original purpose of this feature is to make COM interoperability much easier. But phooey on the intention of this feature, I want to have some fun!

I figured I’d try and implement something similar to method_missing.

The first toy I wrote is a simple dynamic dictionary which uses property accessors as the means of adding and retrieving values from the dictionary by using the property name as the key. Here’s an example of the usage:

static void Main(string[] args) {
  dynamic dict = new DynamicDictionary();

  dict.Foo = "Some Value";  // Compare to dict["Foo"] = "Some Value";
  dict.Bar = 123;           // Compare to dict["Bar"] = 123;
  Console.WriteLine("Foo: {0}, Bar: {1}", dict.Foo, dict.Bar);
}

That’s kind of neat, and the code is very simple. To make a dynamic object, you have the choice of either implementing the IDynamicMetaObjectProvider interface or simply deriving from DynamicObject. I chose this second approach in this case because it was less work. Here’s the code.

public class DynamicDictionary : DynamicObject {
  Dictionary<string, object> 
    _dictionary = new Dictionary<string, object>();

  public override bool TrySetMember(SetMemberBinder binder, object value) {
    _dictionary[binder.Name] = value;
    return true;
  }

  public override bool TryGetMember(GetMemberBinder binder, 
      out object result) {
    return _dictionary.TryGetValue(binder.Name, out result);
  }
}
All I’m doing here is overriding the TrySetMember method which is invoked when attempting to set a field to a value on a dynamic object. I can grab the name of the field and use that as the key to my dictionary. I also override TryGetMember to grab values from the dictionary. It’s really simple.

One thing to note, in Ruby, there really aren’t properties and methods. Everything is a method, hence you only have to worry about method_missing. There’s no field_missing method, for example. With C# there is a difference, which is why there’s another method you can override, TryInvokeMember, to handle dynamic method calls.
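As a rough sketch (this class and its behavior are made up for illustration), overriding TryInvokeMember lets you intercept calls to methods that don’t exist, which is the closest C# analog to method_missing:

```csharp
using System.Dynamic;

public class MethodMissing : DynamicObject {
  // Invoked for calls like thing.AnyMethodName(args) when no such
  // method is actually defined on the object.
  public override bool TryInvokeMember(InvokeMemberBinder binder,
      object[] args, out object result) {
    result = string.Format("called {0} with {1} argument(s)",
      binder.Name, args.Length);
    return true; // report the call as handled
  }
}
```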

What havoc can we wreak with MVC?

So I have this shiny new hammer in my hand, let’s go looking for some nails!

While I’m a fan of using strongly typed view data with ASP.NET MVC, I sometimes like to toss some ancillary data in the ViewDataDictionary. Of course, doing so adds syntactic overhead that I’d love to reduce. Here’s what we have today.

// store in ViewData
ViewData["Message"] = "Hello World";

// pull out of view data
<%= Html.Encode(ViewData["Message"]) %>

Sounds like a job for dynamic dictionary!

Before I show you the code, let me show you the end result first. I created a new property for Controller and for ViewPage called Data instead of ViewData (just to keep it short and because I didn’t want to call it VD).

Here’s the controller code.

public ActionResult Index() {
    Data.Message = "<cool>Welcome to ASP.NET MVC!</cool> (encoded)";
    Data.Body = "<strong>This is not encoded</strong>.";
    return View();
}

Note that Message and Body are not actually properties of Data. They are keys to the dictionary via the power of the dynamic keyword. This is equivalent to setting ViewData["Message"] = "<cool>…</cool>".

In the view, I created my own convention where all access to the Data object will be html encoded unless you use an underscore.

<asp:Content ContentPlaceHolderID="MainContent" runat="server">
  <h2><%= Data.Message %></h2>
  <%= Data._Body %>
</asp:Content>

Keep in mind that Data.Message here is equivalent to ViewData["Message"].

Here’s a screenshot of the end result.


Here’s how I did it. I started by writing a new DynamicViewData class.

public class DynamicViewData : DynamicObject {
  private ViewDataDictionary _viewData;

  public DynamicViewData(ViewDataDictionary viewData) {
    _viewData = viewData;
  }

  public override bool TrySetMember(SetMemberBinder binder, object value) {
    _viewData[binder.Name] = value;
    return true;
  }

  public override bool TryGetMember(GetMemberBinder binder,
      out object result) {
    string key = binder.Name;
    bool encoded = true;
    if (key.StartsWith("_")) {
      key = key.Substring(1);
      encoded = false;
    }
    result = _viewData.Eval(key);
    if (encoded) {
      result = System.Web.HttpUtility.HtmlEncode(result.ToString());
    }
    return true;
  }
}

If you look closely, you’ll notice I’m doing a bit of transformation within the body of TryGetMember. This is where I create my convention for not HTML encoding the content when the property name starts with an underscore. I then strip off the underscore before trying to get the value from the view data dictionary.

The next step was to create my own DynamicController

public class DynamicController : Controller {
  dynamic _viewData = null;

  public dynamic Data {
    get {
      _viewData = _viewData ?? new DynamicViewData(ViewData);
      return _viewData;
    }
  }
}

and DynamicViewPage, both of which make use of this new type.

public class DynamicViewPage : ViewPage {
  dynamic _viewData = null;

  public dynamic Data {
    get {
      _viewData = _viewData ?? new DynamicViewData(ViewData);
      return _viewData;
    }
  }
}

In the Views directory, I updated the web.config file to make DynamicViewPage be the default base class for views instead of ViewPage. You can make this change by setting the pageBaseType attribute of the <pages> element (I talked about this a bit in my post on putting your views on a diet).
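For reference, the relevant Views\web.config change looks something like this (the namespace here is made up; point it at wherever your DynamicViewPage actually lives):

```xml
<pages pageBaseType="MyApp.Web.DynamicViewPage">
  <!-- controls, namespaces, etc. as generated by the project template -->
</pages>
```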

I hope you found this to be a fun romp through a new language feature of C#. I imagine many will find this to be an abuse of the language (language abuser!) while others might see other potential uses in this technique. Happy coding!

Tags: dynamic, aspnetmvc, C#, DLR


The .NET Framework provides support for managing transactions from code via the System.Transactions infrastructure. Performing database operations in a transaction is as easy as writing a using block with the TransactionScope class.

using(TransactionScope transaction = new TransactionScope()) {
  // perform transactional operations here...

  transaction.Complete();
}

At the end of the using block, Dispose is called on the transaction scope. If the transaction has not been completed (in other words, transaction.Complete was not called), then the transaction is rolled back. Otherwise it is committed to the underlying data store.

The typical reason a transaction might not be completed is that an exception is thrown within the using block and thus the Complete method is not called.

This pattern is simple, but I was looking at it the other day with a co-worker wondering if we could make it even simpler. After all, if the only reason a transaction fails is because an exception is thrown, why must the developer remember to complete the transaction? Can’t we do that for them?

My idea was to write a method that accepts an Action which contains the code you wish to run within the transaction. I’m not sure if people would consider this simpler, so you tell me. Here’s the usage pattern.

public void SomeMethod() {
  Transaction.Do(() => {
    // perform transactional operations here...
  });
}
Yay! I saved one whole line of code! :P

Kidding aside, we don’t save much in code reduction, but I think it makes the concept slightly simpler. I figured someone has already done this as it’s really not rocket science, but I didn’t see anything after a quick search. Here’s the code.

public static class Transaction {
  public static void Do(Action action) {
    using (TransactionScope transaction = new TransactionScope()) {
      action();
      transaction.Complete();
    }
  }
}

So you tell me, does this seem useful at all?

By the way, there are several overloads to the TransactionScope constructor. I would imagine that if you used this pattern in a real application, you’d want to provide corresponding overloads to the Transaction.Do method.
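For example (a sketch; TransactionScopeOption is the real System.Transactions enum, but this overload is hypothetical), a corresponding overload might simply forward the option along to the scope’s constructor:

```csharp
using System;
using System.Transactions;

public static class Transaction {
  // Hypothetical overload mirroring the
  // TransactionScope(TransactionScopeOption) constructor.
  public static void Do(Action action, TransactionScopeOption option) {
    using (var transaction = new TransactionScope(option)) {
      action();
      transaction.Complete();
    }
  }
}
```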

UPDATE: What if you don’t want to rely on an exception to determine whether the transaction is successful?

In general, I tend to think of a failed transaction as an exceptional situation. I assume transactions will succeed, so I’m usually fine with an exception being the trigger that a transaction fails.

However, Omer Van Kloeten pointed out on Twitter that this can be a performance problem in cases where transaction failures are common and that returning true or false might make more sense.

It’s trivial to provide an overload that takes in a Func<bool>. When you use this overload, you simply return true if the transaction succeeds or false if it doesn’t, which is kind of nice. Here’s an example of usage.

Transaction.Do(() => {
  if (SaveWorkToDatabaseSuccessful()) {
    return true;
  }
  return false;
});
The implementation is pretty similar to what we have above.

public static void Do(Func<bool> action) {
  using (TransactionScope transaction = new TransactionScope()) {
    // Complete the transaction only if the action reports success.
    if (action()) {
      transaction.Complete();
    }
  }
}

When building a web application, it's common to want to expose a simple Web API alongside the HTML user interface, whether to enable mash-up scenarios or simply to make the application's structured data easy to access.

A common question that comes up is when to use ASP.NET MVC to build RESTful services and when to use WCF. I've answered the question before, but not as well as Ayende does (while discussing a different topic). This is what I tried to express:

In many cases, the application itself is the only reason for development [of the service]

“[of the service]” added by me. In other words, when the only reason for the service's existence is to serve the one application you're currently building, it may make more sense to stick with the simple case of using ASP.NET MVC. This is commonly the case when the only client of your JSON service is your web application's own Ajax code.

When your service is intended to serve multiple clients (not just your one application) or hit large scale usage, then moving to a real services layer such as WCF may be more appropriate.

However, there is now a third, hybrid choice that blends ASP.NET MVC and WCF. The WCF team saw that many developers building ASP.NET MVC apps are more comfortable with the ASP.NET MVC programming model, but still want to expose richer RESTful services from their web applications. So the WCF team put together an SDK and samples for building REST services using ASP.NET MVC.

You can download the samples and SDK from the ASP.NET MVC 1.0 page on CodePlex.

Do read through the overview document, as it describes the changes you'll need to make to an application to take advantage of this framework. The zip file also includes several sample movie applications that demonstrate various scenarios and compare them against a baseline that doesn't use the REST approach.

At this point, this is a sample and SDK hosted on our CodePlex site, but many of its features are under consideration for a future release of ASP.NET MVC (no specifics yet).

This is where you come in. We are keenly interested in hearing your feedback on this SDK. Is it important to you or not? Does it do what you need? Does it need improvement? Let us know what you think. Thanks!