
This post is now outdated

I apologize for not blogging this over the weekend as I had planned, but the weather this weekend was just fantastic so I spent a lot of time outside with my son.

If you haven’t heard yet, Visual Studio 2010 Beta 1 is now available for MSDN subscribers to download. It will be more generally available on Wednesday, according to Soma.

Included is a great whitepaper which describes what is new for web developers in ASP.NET 4.

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

Right now, if you try and open an MVC project with VS 2010 Beta 1, you’ll get some error message about the project type not being supported. The easy fix for now is to remove the ASP.NET MVC ProjectTypeGuid entry as described by this post.
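The edit amounts to removing the ASP.NET MVC entry from the ProjectTypeGuids element in the .csproj file. The before/after below is illustrative — double-check the GUIDs against your own project file before editing:

```xml
<!-- Before: the first GUID marks the project as ASP.NET MVC -->
<ProjectTypeGuids>{603c0e0b-db56-11dc-be95-000d561079b0};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

<!-- After: with the MVC GUID removed, VS2010 Beta 1 opens it as a plain web application project -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
```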

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

A while back, I wrote about Donut Caching in ASP.NET MVC for the scenario where you want to cache an entire view except for a small bit of it. The more technical term for this technique is probably “cache substitution” as it makes use of the Response.WriteSubstitution method, but I think “Donut Caching” really describes it well — you want to cache everything but the hole in the middle.

However, what happens when you want to do the inverse? Suppose you want to cache the donut hole instead of the donut?


I think we should nickname all of our software concepts after tasty food items, don’t you agree?

In other words, suppose you want to cache a portion of the view in a different manner (for example, with a different duration) than the entire view? It hasn’t been exactly clear how to do this with ASP.NET MVC.

For example, the Html.RenderPartial method ignores any OutputCache directives on the view user control. If you happen to use Html.RenderAction from MVC Futures which attempts to render the output from an action inline within another view, you might run into this bug in which the entire view is cached if the target action has an OutputCacheAttribute applied.

I did a little digging into this today and it turns out that when you specify the OutputCache directive on a control (or page for that matter), the output caching is not handled by the control itself. Rather, it appears that the compilation system for ASP.NET pages kicks in, interprets that directive, and does the necessary gymnastics to make it work.

In plain English, this means that what I’m about to show you will only work for the default WebFormViewEngine, though I have some ideas on how to get it to work for all view engines. I just need to chat with the members of the ASP.NET team who really understand the deep grisly guts of ASP.NET to figure it out exactly.

With the default WebFormViewEngine, it’s actually pretty easy to get partial output cache working. Simply add a ViewUserControl declaratively to a view and put your call to RenderAction or RenderPartial inside of that ViewUserControl. If you’re using RenderAction, you’ll need to remove the OutputCache attribute from the action you’re pointing to.

Keep in mind that ViewUserControls inherit the ViewData of the view they’re in. So if you’re using a strongly typed view, just make the generic type argument for ViewUserControl have the same type as the page.

If that last paragraph didn’t make sense to you, perhaps an example is in order. Suppose you have the following controller action.

public ActionResult Index() {
  var jokes = new[] { 
    new Joke {Title = "Two cannibals are eating a clown"},
    new Joke {Title = "One turns to the other and asks"},
    new Joke {Title = "Does this taste funny to you?"}
  };

  return View(jokes);
}
And suppose you want to produce a list of jokes in the view. Normally, you’d create a strongly typed view and within that view, you’d iterate over the model and print out the joke titles.

We’ll still create that strongly typed view, but that view will contain a view user control in place of where we would have had the code to iterate the model (note that I omitted the namespaces within the Inherits attribute value for brevity).

<%@ Page Language="C#" Inherits="ViewPage<IEnumerable<Joke>>" %>
<%@ Register Src="~/Views/Home/Partial.ascx" TagPrefix="mvc" TagName="Partial" %>
<mvc:Partial runat="server" />

Within that control, we do what we would have done in the main view and we specify the output cache values. Note that the ViewUserControl is generically typed with the same type argument that the view is, IEnumerable<Joke>. This allows us to move the exact code we would have had in the view to this control. We also specify the OutputCache directive here.

<%@ Control Language="C#" Inherits="ViewUserControl<IEnumerable<Joke>>" %>
<%@ OutputCache Duration="10000" VaryByParam="none" %>

<% foreach(var joke in Model) { %>
    <li><%= Html.Encode(joke.Title) %></li>
<% } %>

Now, this portion of the view will be cached, while the rest of your view will continue to not be cached. Within this view user control, you could have calls to RenderPartial and RenderAction to your heart’s content.
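For instance, the cached control could itself pull the output of another action inline via RenderAction (from MVC Futures). The action and controller names here are made up purely for illustration:

```
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<%@ OutputCache Duration="60" VaryByParam="none" %>

<% Html.RenderAction("LatestJoke", "Home"); %>
```

Remember, as noted above, the OutputCache attribute must be removed from the target action itself for this to work correctly.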

Note that if you are trying to cache the result of RenderPartial, this technique doesn’t buy you much unless rendering that partial is expensive.

Since the output caching doesn’t happen until the view rendering phase, if the view data intended for the partial view is costly to put together, then you haven’t really saved much because the action method which provides the data to the partial view will run on every request and thus recreate the partial view data each time.

In that case, you want to hand cache the data for the partial view so you don’t have to recreate it each time. One crazy idea we might consider (thinking out loud here) is to allow associating output cache metadata to some bit of view data. That way, you could create a bit of view data specifically for a partial view and the partial view would automatically output cache itself based on that view data.

This would have to work in tandem with some means to specify that the bit of view data intended for the partial view is only recreated when the output cache is expired for that partial view, so we don’t incur the cost of creating it on every request.
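To make that thinking-out-loud idea concrete, such an API might look something like the following. This is purely hypothetical — none of these types or methods exist in ASP.NET MVC:

```csharp
// Hypothetical API sketch: associate output cache metadata with a bit of
// view data so the partial output caches itself, and the data factory
// only runs again when that cache expires. AddCached and
// PartialCacheSettings are invented names for illustration only.
ViewData.AddCached("jokes",
    () => jokeRepository.GetFunniest(10),   // runs only on cache miss
    new PartialCacheSettings { Duration = TimeSpan.FromMinutes(10) });
```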

In the RenderAction case, you really do get all the benefits of output caching because the action method you are rendering inline won’t get called from the view if the ViewUserControl is output cached.

I’ve put together a small demo which demonstrates this concept in case the instructions here are not clear enough. Enjoy!


A while back a young developer emailed me asking for advice on what it takes to become a successful developer. I started to respond,

I don’t know. Why don’t you go ask a successful developer?

But then I thought, that’s kind of snarky, isn’t it? And who am I kidding with all that false modesty? After all, the concept of a “clever hack” was named after me, but the person who came up with it didn’t have 1/10 of my awesomeness which is exceedingly apparent given the off-by-one error that dropped one of the “a”s from the phrase when it was coined. This, of course, was all before I invented TDD, the Internet, and breathing air.

(note to the humor impaired, I didn’t really invent TDD)

But as I’m apt to do, I digress…

I started thinking about the question a bit and said to myself, “If I were a successful developer, what would I have done to become that?” As I started brainstorming ideas, one thing that really stood out was joining an open source project.

If one thing in my career has paid dividends, it was getting involved with open source projects. It exposed me to such a diverse set of problems and technologies that I wouldn’t normally get a chance to work on at work.

Now before I go further, this post is not the post where I answer the young developer’s question. No, that’s a post for another time. I’ll probably give it some trite and pompous title like “Advice for a young developer.” I mean, c’mon! How pathetic and self absorbed is that title? “Get over yourself!” I’ll say to the mirror. But the guy in the mirror will probably do it anyways, but in reverse.

No, this is not that post. Rather, this post is a digression from that post, because if I’m good at one thing, it’s digressing.

As I thought about the open source thing, I got to thinking about the first open source project I ever worked on – RSS Bandit (w00t w00t!). RSS Bandit is a kick butt RSS aggregator developed by Dare Obasanjo and Torsten Rendelmann. I had just started to get into blogging at the time and was really impressed by Dare’s outspoken and yet very thoughtful blog as well as by his baby at the time, RSS Bandit (he has a real baby now, congrats man!).

I hadn’t done much Windows client development back then. I was mostly building web applications in classic ASP and then early versions of ASP.NET. I figured that it would be exciting to cut my teeth on RSS Bandit and learn Winforms development in the process. The idea of a stateful programming model had me positively giddy with excitement. This was going to be so cool.

Many new developers approaching an open source project have grand visions of implementing shiny amazing new features that will have the crowds roaring, the President naming a holiday after them, and all their enemies realizing the errors of their ways and naming their children after them.

But a good contributor swallows his or her pride and starts off slowly with something smaller in scope and more grunt-work-like in nature. Most OSS projects have a real need for documentation, partly because all the glamour is in implementing features, so nobody wants to write the documentation.

That’s where I started. I wrote an article for the docs on getting started with RSS Bandit. Dare took notice and asked if I would contribute to the documentation, which I gladly agreed to do. He gave me commit access (I believe I was the third, after Dare and Torsten, to get commit rights) and I started working very hard on the documentation. In fact, much of what I wrote is still there, as you can see in my narcissistic application screenshots. ;)

Over time, I gained more and more trust and was allowed to work on some bug fixes and features. My first main feature was implementing configurable keyboard shortcuts, which was really neat to implement.

(A bit of trivia. I worked with these guys for years on RSS Bandit, but never met Dare in person until this past Mix conference in Las Vegas. Seriously! I’ve yet to meet Torsten who lives in Germany.)

I really loved working on RSS Bandit and it became quite a hobby that took up what little was left of my free time. I guess you could say it kept me out of gangs in the hard streets of Los Angeles, not that I tried to join nor would they accept me. Over time though, I learned something. Despite all that initial giddiness over finally getting to program in a stateful environment…

I realized I didn’t like it.

In fact, I found it quite foreign and challenging. I kept running into weird problems where controls still retained their previous state after a user clicked a button. I would think to myself, “why do I have to clear that state myself? Why doesn’t it just go away when the user takes an action?” I realized my problem was that I was thinking like a web programmer, not a client programmer who took these things for granted.

As challenging as a client programmer finds the web, where you have to recreate the state on each request because the web is stateless, a developer who primarily programs the web sometimes finds client development challenging because the state is like that ugly sidekick next to the hot one at a bar – it…just…won’t…go…away.

I realized then, that I’m just a web developer at heart and I’d rather make web, not war. It was around that time that I started the Subtext project where I felt more in my element working on a web application. Eventually, I stopped using RSS Bandit preferring a web based solution in Google Reader, ironically, because the state of my feeds is always there, in the cloud, without me needing to synchronize or install an app when I’m at a new computer.

So while I actually like (or maybe am just accustomed to) the stateless programming model of the web, I’m also attracted to the statefulness of web applications as a whole in that the state of my data is not tied to any one machine but it’s stored centrally where I can easily get to it from anywhere (which yes, has its own concerns and problems such as when the net is down).

At the same time, I do check in now and then to see how RSS Bandit is progressing. There are very cool features that it has that I miss out on with Google Reader such as the ability to comment directly from the aggregator via the Comment API and the ability to subscribe to authenticated feeds. And I think Dare’s taking RSS Bandit into compelling new directions.

All this is to say that if you want to become a better developer, join an open source project (such as this one :) because it might just show you exactly what type of developer you are at heart. As I learned, I’m a web developer at heart.


As I’m sure you know, we developers are very particular people and we like to have things exactly our way. How else can you explain long winded impassioned debates over curly brace placement?

So it comes as no surprise that developers really care about what goes in (and behind) their .aspx files, whether they be pages in Web Forms or views in ASP.NET MVC.

For example, some developers are adamant that a page should not include server side script blocks, while others don’t want their views to contain Web Form controls. Wouldn’t it be great if you could have your views reject such code constructs?

Fortunately, ASP.NET is full of lesser known extensibility gems which can help in such situations, such as the PageParserFilter. MSDN describes this class as such:

Provides an abstract base class for a page parser filter that is used by the ASP.NET parser to determine whether an item is allowed in the page at parse time.

In other words, implementing this class allows you to go along for the ride as the page parser parses the .aspx file and gives you a chance to hook into that parsing.

For example, here’s a very simple filter which blocks any script tags with the runat="server" attribute set within a page.

using System;
using System.Web.UI;

public class MyPageParserFilter : PageParserFilter {
  public override bool ProcessCodeConstruct(CodeConstructType codeType
    , string code) {
    if (codeType == CodeConstructType.ScriptTag) {
      throw new InvalidOperationException("Say NO to server script blocks!");
    }
    return base.ProcessCodeConstruct(codeType, code);
  }

  public override bool AllowCode {
    get { return true; }
  }

  public override bool AllowControl(Type controlType, ControlBuilder builder) {
    return true;
  }

  public override bool AllowBaseType(Type baseType) {
    return true;
  }

  public override bool AllowServerSideInclude(string includeVirtualPath) {
    return true;
  }

  public override bool AllowVirtualReference(string referenceVirtualPath
    , VirtualReferenceType referenceType) {
    return true;
  }

  public override int NumberOfControlsAllowed {
    get { return -1; }
  }

  public override int NumberOfDirectDependenciesAllowed {
    get { return -1; }
  }
}

Notice that we had to override some defaults for other properties we’re not interested in such as NumberOfControlsAllowed or we’d get the default of 0 which is not what we want in this case.

To apply this filter, just specify it in the <pages /> section of web.config like so:

<pages pageParserFilterType="Namespace.MyPageParserFilter, AssemblyName" />

Applying a parse filter for Views in ASP.NET MVC is a bit trickier because it already has a parse filter registered, ViewTypeParserFilter, which handles part of the voodoo black magic in order to remove the need for code-behind in views when using a generic model type. Remember those particular developers I was talking about?

Suppose we want to prevent developers from using server controls which make no sense in the context of an ASP.NET MVC view. Ideally, we could simply inherit from ViewTypeParserFilter and make our change so we don’t lose the existing view functionality.

That type is internal so we can’t simply inherit it. Fortunately, what we can do is simply grab the ASP.NET MVC source code for that type, rename the type and namespace, and then change it to meet our needs. Once we’re done, we can even share those changes with others. This is one of the benefits of having an open source license for ASP.NET MVC.

WARNING: The fact that we implement a ViewTypeParserFilter is an implementation detail. The goal is that in the future, we wouldn’t need this filter to provide the nice generic syntax. So what I’m about to show you might be made obsolete in the future and should be done at your own risk. It’s definitely running with scissors.

In my demo, I copied the following files to my project:

  • ViewTypeParserFilter
  • ViewTypeControlBuilder
  • ViewPageControlBuilder
  • ViewUserControlControlBuilder

I then created a new parser filter which inherits the ViewTypeParserFilter and overrode the AllowControl method like so:

public override bool AllowControl(Type controlType, ControlBuilder builder) {
  return (controlType == typeof(HtmlHead) 
    || controlType == typeof(HtmlTitle)
    || controlType == typeof(ContentPlaceHolder)
    || controlType == typeof(Content)
    || controlType == typeof(HtmlLink));
}

This will block adding any control except for those necessary in creating a typical view. You can imagine later adding some easy way of configuring that list in case you do later allow other controls.

Once we’ve implemented this new filter, we can edit the Web.config file within the Views directory to set the parser filter to this one.
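The change to Views\Web.config might look roughly like this — the type and assembly names are placeholders, and the other attributes already on the <pages /> element in that file should be left as they are:

```xml
<system.web>
  <pages pageParserFilterType="MyApp.LockedDownViewTypeParserFilter, MyApp">
    <!-- existing controls/namespaces registrations stay as-is -->
  </pages>
</system.web>
```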

This is a powerful tool for hooking into the parsing of a web page, so do be careful with it. As you might expect, I have a very simple demo of this feature here.


At long last, the book that I worked on with Scott Hanselman, Rob Conery, and Scott Guthrie is now in stock.

To commemorate the book being available, the two Scotts worked very hard to convert the free eBook of chapter 1 (the end-to-end walkthrough) from the PDF into a series of HTML articles.

This is a great series which walks through the construction of the NerdDinner website. It touches upon most of the day-to-day aspects of ASP.NET MVC that you’ll want to know. It’s a great way to start understanding how the pieces largely fit together.

The rest of the book is for those who want to drill deep into the details of how the framework works. We tried to pepper the book with notes and anecdotes from the product team in a style similar to the annotations in the Framework Design Guidelines book. If you’re looking for reasons not to buy the book, see Rob’s post.

The bad news is that despite our heroic efforts, in which we cajoled, begged, pleaded, and rent our clothes asunder, we were not able to convince our editors to produce a special platinum extended forehead edition of the book. I really thought we could charge extra for the limited edition cover and make gangbusters. I’ll just post it here so you can see for yourself. Click on it for a larger view.


Say you’re building a web application and you want, against your better judgment perhaps, to allow end users to easily customize the look and feel – a common scenario within a blog engine or any hosted application.

With ASP.NET, view code tends to be some complex declarative markup stuck in a file on disk which gets compiled by ASP.NET into an assembly. Most system administrators would first pluck out their own toenail rather than allow an end user permission to modify such files.

It’s possible to store such files in the database and use a VirtualPathProvider to load them, but that requires your application (and thus their views) to run in full trust. Is there a way you could safely store such views in the database in an application running in medium trust where the code in the view is approachable?

At the ALT.NET conference a little while back, Jimmy Schementi and John Lam gave a talk about the pattern of hosting a scripting language within a larger application. For example, many modern 3-D Games have their high performance core engine written in C++ and Assembly. However, these games often use a scripting language, such as Lua, to write the scripts for the behaviors of characters and objects.

An example that might be more familiar to more people is the use of VBA to write macros for Excel. In both of these cases, the larger application hosts a scripting environment that allows end users to customize the application using a simpler, lighter weight language than the one the core app is written in.

A long while back, I wrote a blog post about defining ASP.NET MVC Views in IronRuby, followed by a full IronRuby ASP.NET MVC stack. While there was some passionate interest by a few, in general, I was met with the thunderous sound of crickets. Why the huge lack of interest? Probably because I didn’t really sell the benefit and explain the pain it solves. I’m sure many of you were asking, Why bother? What’s in it for me?

After thinking about it some more, I realized that my prototypes appeared to suggest that if you want to take advantage of IronRuby, you would need to make some sort of wholesale switch to a new foreign language, not something to be undertaken lightly.

This is why I really like Jimmy and John’s recent efforts to focus on showing the benefits of hosting the DLR for scripting scenarios like the ones mentioned above. It makes total sense to me when I look at it in this perspective. The way I see it, most developers spend a huge bulk of their time in a single core language, typically their “language of choice”. For me, I spend the bulk of my time writing C# code.

However, I don’t think twice about the fact that I also write tons of JavaScript when I do web development, and I’ll write the occasional VB code when I need a new Macro for Visual Studio or Excel. I also write SQL when I need to. I’m happy to pick up and use a new language when it will enable me to do the job at hand more efficiently and naturally than C# does. I imagine many developers feel this way. The occasional use of a scripting languages is fine when it gets the job done and I can still spend most of my time in my favorite language.

So I started thinking about how that might work in a web application. What if you could write all your business logic and controller logic in your language of choice, but have your views written in a lightweight scripting language? If my web application were to host a scripting engine, I could actually store the view code in any medium I want, such as the database. Having them in the database makes it very easy for end users to modify it since it wouldn’t require file upload permissions into the web root.

This is where hosting the DLR is a nice fit. I put together a proof of concept for these ideas. This is just a prototype intended to show how such a workflow might work. In this prototype, you go about creating your models and controllers the way you normally would.
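At its core, hosting the DLR boils down to just a few lines. Here is a minimal sketch of evaluating a Ruby snippet from a string, which could just as easily come from a database; note that the prototype also translates ERB-style markup into Ruby first, a step elided here:

```csharp
// A minimal sketch of hosting IronRuby in-process via the DLR hosting API.
using IronRuby;
using Microsoft.Scripting.Hosting;

public class RubyViewRenderer {
  public string Render(string rubySource, object model) {
    ScriptEngine engine = Ruby.CreateEngine();
    ScriptScope scope = engine.CreateScope();

    // Expose the view's model to the script as a variable named "model".
    scope.SetVariable("model", model);

    // Assumes rubySource is a Ruby snippet that evaluates to a string.
    return engine.Execute<string>(rubySource, scope);
  }
}
```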

For example, here’s a controller that returns some structured data to the view in the form of an anonymous type.

public ActionResult FunWithScripting() {
  var someData = new { 
    salutation = "Are you having fun with scripting yet?", 
    theDate = DateTime.Now,
    numbers = new int[] { 1, 2, 3, 4 } 
  };

  return View(someData);
}

Once you write your controller, but before you create your view, you compile the app and then go visit the URL.

View does not exist

We haven’t created the view yet, so let’s follow the instructions and log in. Afterwards, we see this:


Since the view doesn’t exist, I hooked in and provided a temporary view for the controller action which contains a view editor. Notice that at the bottom of the screen, you can see the current property names and values being passed to the view. For example, there’s an enumeration of integers as one property, so I was able to use the Ruby each method to print them out in the view.

The sweet little browser-based source code editor is named EditArea, created by Christophe Dolivet. Unfortunately, at the time I write this, it doesn’t yet have support for ERB-style syntax highlighting schemes. That’s why the <% and %> aren’t highlighted in yellow.

When I click Create View, I get taken back to the request for the same action, but now I can see the view I just created (click to enlarge).

Fun with scripting

In the future, I should be able to host C# views in this way. Mono already has a tool for dynamically compiling C# code passed in as a string which I could try and incorporate.
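Even without Mono, the BCL’s CodeDom support can compile C# handed to it as a string, which hints at how that might work. This is a rough sketch with error handling, caching, and security concerns omitted:

```csharp
// Compile C# source from a string into an in-memory assembly via CodeDom.
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public static class DynamicCompiler {
  public static CompilerResults Compile(string source) {
    var provider = new CSharpCodeProvider();
    var options = new CompilerParameters { GenerateInMemory = true };
    options.ReferencedAssemblies.Add("System.dll");
    return provider.CompileAssemblyFromSource(options, source);
  }
}
```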

I’m seriously thinking of making this the approach for building skins in a future version of Subtext. That would make skin installation drop dead simple and not require any file directory access. Let me know if you make use of this technique in your applications.

If you try and run this prototype, please note that there are some quirky caching issues with editing existing views in the prototype. It’ll seem like your view is not being edited, but it’s a result of how views are being cached. It might take a bit of time before your edits show up. I’m sure there are other bugs I’m still in the process of fixing. But for the most part, the general principle is sound.

You can download the prototype here.


Because of all the travel I did last year as well as the impending new addition to the family this year, I drastically cut down on my travel this year. There are only two conferences outside of Redmond I planned to speak at, one was Mix (see the links to videos of my talks) and the next one is the Norwegian Developer Conference also known as the NDC.

NDC_logo-2009Hanselman spoke at this conference last year and tells me it’s a good one. Besides, it’s in Norway! I’ve travelled through Norway once during college, taking a train from Oslo to Bergen, riding a boat on the fjords, and enjoying the profound natural beauty of the country. I guess I have a thing for cold places. :)

I’m pretty excited about the speaker line-up which includes a lot of great .NET and ALT.NET speakers you know and love. But what’s really got me perked up are the speakers outside of the typical .NET conference lineup.

One of my favorite conferences last year was Google IO which was a refreshing change of pace for me. For the NDC, I requested to stay an extra day so I could make sure to catch sessions by Mary Poppendieck, Robert “Uncle Bob” Martin, and Michael Feathers among others.

It looks like all my talks are on Day 1 of the conference. I’ll be updating my Black Belt Ninja Tips ASP.NET MVC talk, talking about Ajax in the context of ASP.NET MVC, and giving a joint talk with Scott Hanselman, which we’re still figuring out the exact details on.

My only concern is whether I need to worry if Jeremy Miller is going to try and express his man love for me while there. ;) Kidding aside, I’m approaching with a mind ready to absorb knowledge. If you’re in the area, definitely consider this as a conference to check out. It should be fun!


What responsibility do we have as software professionals when we post code out there for public consumption?

I don’t have a clear cut answer in my mind, but maybe you can help me formulate one. :)

For example, I recently posted a sample on my blog intended to show how to use jQuery Grid with ASP.NET MVC.

The point of the sample was to demonstrate shaping a JSON result for the jQuery grid’s consumption. For the sake of illustration, I wanted the action method to be relatively self contained so that a reader would quickly understand what’s going on in the code without having to jump around a lot.

Thus the code takes some shortcuts with data access, lack of exception handling, and lack of input validation. It’s pretty horrific!

Now before we grab the pitchforks (and I did say “we” intentionally as I’ll join you) to skewer me, I did preface the code with a big “warning, DEMO CODE AHEAD” disclaimer and so far, nobody’s beaten me up too bad about it, though maybe by writing this I’m putting myself in the crosshairs.

Even so, it did give me pause to post the code the way I did. Was I making the right trade-off in sacrificing code quality for the sake of blog post demo clarity and brevity?

In this particular case, I felt it was worth it as I tend to categorize code into several categories. I’m not saying these are absolutely correct, just opening up my cranium and giving you a peek in my head about how I think about this:

  • Prototype Code – Code used to hash out an idea to see if it’s feasible or as a means of learning a new technology. Often very ugly throwaway code with little attention paid to good design.
  • Demo Code – Code used to illustrate a concept, especially in a public setting. Like prototype code, solid design is sometimes sacrificed for clarity, but these sacrifices are deliberate and intentional, which is very important. My jQuery Grid demo above is an example of what I mean.
  • Sample Code – Very similar to demo code, the difference being that good design principles should be demonstrated for the code relevant to the concept the sample is demonstrating. Code irrelevant to the core concept might be fine to leave out or have lower quality. For example, if the sample is showing a data access technique, you might still leave out exception handling, caching, etc… since it’s not the goal of the sample to demonstrate those concepts.
  • Production Code – Code you’re running your business on, or selling. Should be as high quality as possible given your constraints. Sometimes, shortcuts are taken in the short run (incurring technical debt) with the intention of paying down the debt ASAP.
  • Reference Code – This is code that is intended to demonstrate the correct way to build an application and should be almost idealized in its embrace of good design practices.

As you might expect, the quality the audience might expect from these characterizations is not hard and fast, but dependent on context. For example, for the Space Shuttle software, I expect the Production Code to be much higher quality than production code for some intranet application.

Likewise, I think where the code is posted and by whom can affect perception. We might expect much less from some blowhard posting code to his personal blog, ummm, like this one.

Then again, if the person claims that his example is a best practice, which is a dubious claim in the first place, we may tend to hold it to much higher standards.

Now if instead of a person, the sample is posted on an official website of a large company, say Microsoft, the audience may expect a lot more than from a personal blog post. In fact, the audience may not make the distinction between sample and reference application. This appears to be the case recently with Kobe and in the past with Oxite.

Again, this is my perspective on these things. But my views have been challenged recently via internal and external discussions with many people. So I went to the font of all knowledge where all your wildest questions are answered: Twitter. I posed the following two questions:

Do you have different quality expectations for a sample app vs a reference app?

What if the app is released by MS? Does that change your expectations?

The answers varied widely. Here’s a small sampling that represents the general tone of the responses I received.

Yes. A sample app should be quick and dirty. A reference app should exhibit best practices (error checking, logging, etc)

No, same expectations… Even I ignore what is the difference between both.

Regardless of who releases the app, my expectations don’t change.

Yes being from MS raises the bar of necessary quality, because it carries with it the weight of a software development authority.

I don’t think I have ever thought about what the difference in the two is, isn’t a sample app basically a reference app?

I don’t think most people discriminate substantively betw the words “sample” and “reference.”

Everyone, Microsoft included, should expect to be judged by everything they produce, sample or otherwise.

yes, samples do not showcase scalability or security, but ref apps do… i.e ref apps are more “enterprisey”

IMHO, sample implies a quick attempt; mostly throw-away. Ref. implies a proposed best practice; inherently higher quality.

No. We as a community should understand the difference. However MS needs to apply this notion consistently to its examples.

Whatever you release as sample code, is *guaranteed* to be copy-pasted everywhere - ask Windows AppCompat if you don’t believe me

Note that this is a very unscientific sampling, but there is a lot of diversity in the views being expressed here. Some people make no distinction between sample and reference while others do. Some hold Microsoft to higher standards while others hold everybody to the same standard.

I found this feedback to be very helpful because I think we tend to operate under one assumption about how our audience sees our samples, but your audience might have a completely different view. This might explain why there may be miscommunication and confusion about the community reaction to a sample.

The last two responses make up the core dichotomy in my head regarding releasing samples.

On the one hand, I have tended to lean towards the first viewpoint. If code has the proper disclaimer, shouldn’t we take personal responsibility in understanding the difference?

Ever since starting work on ASP.NET MVC, we’ve been approached by more and more teams at Microsoft who are interested in sharing yet more code on CodePlex (or otherwise) and want to hear about our experiences and challenges in doing so.

When you think about it, this is a great change in what has been an otherwise closed culture. There are a lot of teams at Microsoft and the quality of the code and intent of the code will vary from team to team and project to project. I would hate to slow down that stream of sample code flowing out because some people will misunderstand its purpose and intent and cut and paste it. Yes, some of the code will be very bad, but some of it will still be worth putting out there. After all, I tend to think that if we stop giving the bad programmers bad code to cut and paste, they’ll simply write the bad code themselves. Yes, posting good code is even better, but I think that will be a byproduct of getting more code out there.

On the other hand, there’s the macro view of things to consider. People should also know not to use a hair dryer in the shower, yet hair dryers still have those funny warning labels for a reason. The fact that people shouldn’t do something doesn’t change the fact that they may still do it. We can’t simply ignore that fact and the impact it may have. No matter how many disclaimers we put on our code, people will cut and paste it. It’s not so bad that a bad programmer uses bad code, but that as it propagates, the code gets confused with the right way and spreads to many programmers.

Furthermore, the story is complicated even more by the inconsistent labels applied to all this sample code, not to mention the inconsistent quality.

So What’s the Solution?

Stop shipping samples.

Nah, I’m just kidding. ;)

Some responses were along the lines of Microsoft should just post good code. I agree, I would really love it if every sample was of superb quality. I’d also like to play in a World Cup and fly without wings, but I don’t live in that world.

Obviously, this is what we should be striving for, but what do we do in the meantime? Stop shipping samples? I hope not.

Again, I don’t claim to have the answers, but I think there are a few things that could help. One twitter response made a great point:

a reference app is going to be grilled. Even more if it comes from the mothership. Get the community involved *before* it gets pub

Getting the community involved is a great means of having your code reviewed to make sure you’re not doing anything obviously stupid. Of course, even in this, there’s a challenge. Jeremy Miller made this great point recently:

We don’t have our own story straight yet.  We’re still advancing our craft.  By no means have we reached some sort of omega point in our own development efforts. 

In other words, even with community involvement, you’re probably going to piss someone off. But avoiding piss is not really the point anyways (though it’s much preferred to the alternative). The point is to be a participant in advancing the craft alongside the larger community. Others might disagree with some of your design decisions, but hopefully they can see that your code is well considered via your involvement with the community in the design process.

This also helps in avoiding the perception of arrogance, a fault that some feel is the root cause of why some of our sample apps are of poor quality. Any involvement with the community will help make it very clear that there’s much to learn from the community just as there is much to teach.

While I think getting community involved is important, I’m still on the fence on whether it must happen before it’s published. After all, isn’t publishing code a means of getting community involvement in the first place? As Dare says:

getting real feedback from customers by shipping is more valuable than any amount of talking to or about them beforehand

Personally, I would love for there to be a way for teams to feel free to post samples (using the definition I wrote), without fear of misconstrued intent and bad usage. Ideally in a manner where it’s clear that the code is not meant for cut and paste into real apps.

Can we figure out a way to post code samples that are not yet the embodiment of good practices in a responsible manner with the intent to improve the code quality based on community feedback? Is this even a worthy goal or should Microsoft samples just get it right the first time, as mentioned before, or don’t post at all?

Perhaps both of those are pipe dreams. I’m definitely interested in hearing your thoughts. :)

Another question I struggle with is what causes people to not distinguish between reference apps and sample apps? Is there no distinction to make? Or is this a perception problem that can be corrected with a concerted effort to make such labels consistently applied, perhaps? Or via some other means.

As you can see, I have my own preconceived notions about those things, but I’m putting them out there and challenging them based on what I’ve read recently. Please do comment and let me know your thoughts.


Tim Davis posted an updated version of this solution on his blog. His includes the following:

  • jqGrid 3.8.2
  • .NET 4.0 Updates
  • VS2010
  • jQuery 1.4.4
  • jQuery UI 1.8.7

Continuing in my pseudo-series of posts based on my ASP.NET MVC Ninjas on Fire Black Belt Tips Presentation at Mix (go watch it!), this post covers a demo I did not show because I ran out of time. It was a demo I held in my back pocket just in case I went too fast and needed one more demo.

A common scenario when building web user interfaces is providing a pageable and sortable grid of data. Even better if it uses AJAX to make it more responsive and snazzy. Since ASP.NET MVC includes jQuery, I figured it’d be fun to use a jQuery plugin for this demo, so I chose jQuery Grid.

After creating a standard ASP.NET MVC project, the first step was to download the plugin and to unzip the contents to my scripts directory per the Installation instructions.


For the purposes of this demo, I’ll just implement this using the Index controller action and view within the HomeController.

With the scripts in place, go to the Index view and add the proper call to initialize the jQuery grid. There are three parts to this:

First, make sure to add the required script and CSS declarations.

<link rel="stylesheet" type="text/css" href="/scripts/themes/coffee/grid.css" 
  title="coffee" media="screen" />
<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="/Scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="/Scripts/js/jqModal.js" type="text/javascript"></script>
<script src="/Scripts/js/jqDnR.js" type="text/javascript"></script>

Notice that the first line contains a reference to the “coffee” CSS file. There are multiple themes included and when you choose a theme, you need to be sure to include the theme’s CSS file. I chose coffee, because I drink a lot of it.

The Second step is to initialize the grid with a bit of JavaScript. This looks a bit funky if you’re not used to jQuery, but I assure you, it’s pretty straightforward.

<script type="text/javascript">
    jQuery(document).ready(function() {
      jQuery('#list').jqGrid({
        url: '/Home/GridData/',
        datatype: 'json',
        mtype: 'GET',
        colNames: ['Id', 'Votes', 'Title'],
        colModel: [
          {name:'Id', index:'Id', width:40, align:'left' },
          {name:'Votes', index:'Votes', width:40, align:'left' },
          {name:'Title', index:'Title', width:200, align:'left'}],
        pager: jQuery('#pager'),
        rowNum: 10,
        rowList: [5, 10, 20, 50],
        sortname: 'Id',
        sortorder: "desc",
        viewrecords: true,
        imgpath: '/scripts/themes/coffee/images',
        caption: 'My first grid'
      });
    });
</script>

There are a few things you’ll have to be sure to configure here. First is the url property which points to the URL that will provide the JSON data. Notice that the value is /Home/GridData which means we’ll be implementing an action method named GridData soon. During the course of this post, we’ll change that property to point to different action methods.

The colNames property contains the display names for each column, separated by commas. It should match up with the items in the colModel property.

The colModel property is an array that is used to configure each column of the grid, allowing you to specify the width, alignment, and sortability of a column. The index property of a column is an important one as that is the value that is sent to the server when sorting on a column.

See the documentation for more details on the HTML and JavaScript used to configure the grid.
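One detail worth calling out: with each Ajax request, jqGrid appends its paging and sorting state to the url as query string parameters named page, rows, sidx, and sord by default. A rough sketch of the query string it builds (illustrative code, not taken from the plugin):

```javascript
// Sketch: assemble the query string jqGrid sends with each Ajax request,
// assuming the default parameter names (page, rows, sidx, sord).
function buildGridQuery(opts) {
  var params = {
    page: opts.page || 1,    // 1-based page number
    rows: opts.rows || 10,   // page size
    sidx: opts.sidx || '',   // sort column (the colModel index value)
    sord: opts.sord || 'asc' // sort direction
  };
  return Object.keys(params)
    .map(function (key) { return key + '=' + encodeURIComponent(params[key]); })
    .join('&');
}
```

So clicking the Votes column header while on page 2 would produce something like `page=2&rows=10&sidx=Votes&sord=desc` tacked onto the grid’s url. These names will matter again when we write the action method.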

The Third step is to add a bit of HTML to the page which will house the grid.

<h2>My Grid Data</h2>
<table id="list" class="scroll" cellpadding="0" cellspacing="0"></table>
<div id="pager" class="scroll" style="text-align:center;"></div>

With this in place, it’s time to implement the GridData action method to return the JSON in the proper format.

But first, let’s take a look at the JSON format expected by the grid. From the documentation, you can see it will look something like:

{
  total: "xxx", 
  page: "yyy", 
  records: "zzz",
  rows : [
    {id:"1", cell:["cell11", "cell12", "cell13"]},
    {id:"2", cell:["cell21", "cell22", "cell23"]},
    {id:"3", cell:["cell31", "cell32", "cell33"]}
  ]
}
The documentation I linked to also provides some gnarly looking PHP code you can use to generate the JSON data. Fortunately, you won’t have to deal with that. By using the Json helper method with an anonymous object, we can write some relatively clean looking code which looks almost just like the spec. Here’s my first cut of the action method, just to get it to display some fake data.

public ActionResult GridData(string sidx, string sord, int page, int rows) {
  var jsonData = new {
    total = 1, // we'll implement later 
    page = page,
    records = 3, // implement later 
    rows = new[]{
      new {id = 1, cell = new[] {"1", "-7", "Is this a good question?"}},
      new {id = 2, cell = new[] {"2", "15", "Is this a blatant ripoff?"}},
      new {id = 3, cell = new[] {"3", "23", "Why is the sky blue?"}}
    }
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

A couple of things to point out. The arguments to the action methods are named according to the query string parameter names that jQuery grid sends via the Ajax request. I didn’t choose those names.

By naming the arguments to the action method exactly the same as what is in the query string, we have a very convenient way to retrieve these values. Remember, arguments passed to an action method should be treated with care. Never trust user input!

In this example, we statically create some JSON data and use the Json helper method to return the data back to the grid and Voila! It works!


Yeah, this is great for a simple demo, but I use a real database to store my data! Understood. It’s time to hook this up to a real database. As you might guess, I’ll use the HaackOverflow database for this demo as well as LinqToSql.

I’ll assume you know how to add a database and create a LinqToSql model already. If not, look at the source code I’ve included. Once you’ve done that, it’s pretty easy to transform the data we get back into the proper JSON format.

public ActionResult LinqGridData(string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();

  var jsonData = new {
    total = 1, //todo: calculate
    page = page,
    records = context.Questions.Count(),
    rows = (
      from question in context.Questions.Take(rows)
      select new {
        id = question.Id,
        cell = new string[] { 
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
      }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Note that the method is a tiny bit busier, but it follows the same basic structure as the JSON data. After changing the JavaScript code in the view to point to this action instead of the other, we can now see the first ten records from the database in the grid.

But we’re not done yet. At this point, we want to implement paging and sorting. Paging is pretty easy, but sorting is a bit tricky. After all, what we get passed into the action method is the name of the sort column. At that point, we want to dynamically create a LINQ expression that sorts by that column.

One easy way to do this is to use the Dynamic Linq Query library which ScottGu wrote about a while back. This library adds extension methods which make it easy to create more dynamic Linq queries based on strings. Of course, with great power comes great responsibility. Make sure to validate the strings before you pass them into the methods. With this in place, we rewrite the action method to be (warning, DEMO CODE AHEAD!):

public ActionResult DynamicGridData
    (string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();
  int pageIndex = Convert.ToInt32(page) - 1;
  int pageSize = rows;
  int totalRecords = context.Questions.Count();
  int totalPages = (int)Math.Ceiling((float)totalRecords / (float)pageSize);

  var questions = context.Questions
    .OrderBy(sidx + " " + sord)
    .Skip(pageIndex * pageSize)
    .Take(pageSize);

  var jsonData = new {
    total = totalPages,
    page = page,
    records = totalRecords,
    rows = (
      from question in questions
      select new {
        id = question.Id,
        cell = new string[] {
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
      }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Some things to note: The first part of this method does some initial calculations to figure out the number of pages we’re dealing with based on the page size (passed in) and the total record count.
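That arithmetic, pulled out on its own (JavaScript here purely for illustration; the action method does the same thing with Math.Ceiling and integer math):

```javascript
// Paging arithmetic used by the action method, sketched standalone.
function totalPages(totalRecords, pageSize) {
  // Round up so a partial final page still counts as a page.
  return Math.ceil(totalRecords / pageSize);
}

function skipCount(page, pageSize) {
  // jqGrid sends a 1-based page number; Skip() needs a 0-based offset.
  return (page - 1) * pageSize;
}
```

For 23 records and a page size of 10, that works out to 3 pages, and requesting page 3 skips the first 20 records.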

Then given that info, we use the Dynamic Linq extension methods to do the actual paging and sorting via the line:

var questions = context.Questions.OrderBy(…).Skip(…).Take(…);

Once we have that, we can simply transform that into the array that jQuery grid expects and place that in the larger JSON payload represented by the jsonData variable.
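One caveat before moving on: sidx and sord come straight off the query string and are concatenated into the OrderBy expression, so they really should be checked against a whitelist first. Here is the kind of check I mean, sketched in JavaScript for illustration (the helper and its names are hypothetical, not code from the sample):

```javascript
// Hypothetical whitelist check for the sort parameters before they are
// concatenated into a dynamic OrderBy expression.
var allowedColumns = ['Id', 'Votes', 'Title'];
var allowedOrders = ['asc', 'desc'];

function sanitizeSort(sidx, sord) {
  // Fall back to safe defaults when either value is unrecognized.
  var column = allowedColumns.indexOf(sidx) !== -1 ? sidx : 'Id';
  var order = allowedOrders.indexOf(String(sord).toLowerCase()) !== -1
    ? String(sord).toLowerCase() : 'asc';
  return column + ' ' + order; // safe to hand to the dynamic query
}
```

Anything not on the list collapses to a safe default rather than flowing into the query.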

With all this in place, you now have a pretty snazzy approach to paging and sorting data using AJAX. Now go forth and wow your customers. ;)

And before I forget, here’s the sample project that uses all three approaches.


Every good developer knows to always have a backup. For example, over two years ago, I announced my world domination plans. But there was a single point of failure in me putting all my world domination plans on the tiny shoulders of just one progeny. My boy needs a partner in crime.

So my wife and I conspired together and we’re happy to announce that baby #2 is on the way. Together, the two of them will be unstoppable!

My wife is past her first trimester and we expect the baby to RTF (Release To Family) around October.

This second time around has been a bit more challenging. My poor wife, bless her heart, has had to deal with much more severe nausea this time around.

Notice the crinkle in the ultrasound photo. My son did that. ;) He’s trying to destroy the evidence.

Many of you who have more than one child might be able to relate to this, but I really considered not writing a blog announcement for my second child. There was the feeling that it was such a novel thing the first time, but now it’s becoming old hat (not really!).

But then I mentally fast forwarded 16 years later and pictured my future daughter finding the firstborn announcement and not finding her own blog announcement. Try explaining that to a kid. I can deal without the drama.

Well I won’t have to! Hi honey, here’s your announcement. Now you be good while daddy goes shopping for a shotgun and a shovel. :)

There are a couple of peculiarities worth understanding when dealing with title tags and master pages within Web Forms and ASP.NET MVC. These assume you are using the HtmlHead control, aka <head runat="server" />.

The first peculiarity involves a common approach where one puts a ContentPlaceHolder inside of a title tag like we do with the default template in ASP.NET MVC:

<%@ Master ... %>
<head runat="server">
    <title><asp:ContentPlaceHolder ID="titleContent" runat="server" /></title>
</head>

What’s nice about this approach is you can set the title tag from within any content page.

<asp:Content ContentPlaceHolderID="titleContent" runat="server">
  My Page Title
</asp:Content>

But what happens if you want to set part of the title from within the master page? For example, you might want the title of every page to end with a suffix, “ – MySite”.

If you try this (notice the – MySite tacked on):

<%@ Master ... %>
<head runat="server">
    <title><asp:ContentPlaceHolder ID="titleContent" runat="server" /> - MySite</title>
</head>

And run the page, you’ll find that the – MySite is not rendered. This appears to be a quirk of the HtmlHead control. This is because the title tag within the HtmlHead control is now itself a control. This will be familiar to those who understand how the AddParsedSubObject method works. Effectively, the only content allowed within the body of the HtmlHead control are other controls.

The fix is pretty simple. Add your text to a LiteralControl like so.

<%@ Master ... %>
<head runat="server">
    <title>
      <asp:ContentPlaceHolder ID="titleContent" runat="server" />
      <asp:LiteralControl runat="server" Text=" - MySite" />
    </title>
</head>

The second peculiarity has to do with how the HtmlHead control really wants to produce valid HTML markup.

If you leave the <head runat="server"></head> tag empty, and then view source at the rendered output, you’ll notice that it renders an empty <title> tag for you. It looked at its child controls collection and saw that it didn’t contain an HtmlTitle control so it rendered one for you.

This can cause problems when attempting to use a ContentPlaceHolder to render the title tag for you. For example, a common layout I’ve seen is the following.

<%@ Master ... %>
<head runat="server">
  <asp:ContentPlaceHolder ID="headContent" runat="server">
    <title>Default Title</title>
  </asp:ContentPlaceHolder>
</head>

This approach is neat because it allows you to not only set the title tag from within any content page, but any other content you want within the <head> tag.

However, if you view source on the rendered output, you’ll see two <title> tags, one that you specified and one that’s empty.

Going back to what I wrote earlier, the reason becomes apparent. The HtmlHead control checks to see if it contains a child title control. When it doesn’t find one, it renders an empty one. However, it doesn’t look within the content placeholders defined within it to see if they’ve rendered a title tag.

This makes sense when you consider how the HtmlHead tag works. It only allows placing controls inside of it. However, a ContentPlaceHolder allows adding literal text in there. So while it looks the same, the title tag within the ContentPlaceHolder is not an HtmlTitle control. It’s just some text, and the HtmlHead control doesn’t want to parse all the rendered text from its children.

This is why I tend to take the following approach with my own master pages.

<%@ Master ... %>
<head runat="server">
  <title><asp:ContentPlaceHolder ID="titleContent" runat="server" /></title>
  <asp:ContentPlaceHolder ID="headContent" runat="server">
  </asp:ContentPlaceHolder>
</head>

Happy Titling!

In my last blog post, I walked step by step through a Cross-site request forgery (CSRF) attack against an ASP.NET MVC web application. This attack is the result of how browsers handle cookies and cross domain form posts and is not specific to any one web platform. Many web platforms thus include their own mitigations to the problem.

It might seem that if you’re using Web Forms, you’re automatically safe from this attack. While Web Forms has many mitigations turned on by default, it turns out that it does not automatically protect your site against this specific form of attack.

In the same sample bank transfer application I provided in the last post, I also included an example written using Web Forms which demonstrates the CSRF attack. After you log in to the site, you can navigate to /BankWebForm/default.aspx to try out the Web Form version of the transfer money page. It works just like the MVC version.

To simulate the attack, make sure you are running the sample application locally, make sure you are logged in, and then visit the attack page.

Here’s the code for that page:

<html xmlns="http://www.w3.org/1999/xhtml">
<body>
  <form name="badform" method="post"
    action="http://localhost:54607/BankWebForm/default.aspx">
    <input type="hidden" name="ctl00$MainContent$amountTextBox"
      value="1000" />
    <input type="hidden" name="ctl00$MainContent$destinationAccountDropDown"
      value="2" />
    <input type="hidden" name="ctl00$MainContent$submitButton"
      value="Transfer" />
    <input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET"
      value="" />
    <input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT"
      value="" />
    <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
      value="/wEP...0ws8kIw=" />
    <input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION"
      value="/wEWBwK...+FaB85Nc" />
  </form>
  <script type="text/javascript">
    document.badform.submit();
  </script>
</body>
</html>

It’s a bit more involved, but it does the trick. It mocks up all the proper hidden fields required to execute a bank transfer on my silly demo site.

The mitigation for this attack is pretty simple and described thoroughly in this article by Dino Esposito as well as this post by Scott Hanselman. The change I made to my code behind based on Dino’s recommendation is the following:

protected override void OnInit(EventArgs e) {
  ViewStateUserKey = Session.SessionID;
  base.OnInit(e);
}
With this change in place, the CSRF attack I put in place no longer works.

When you go to a real bank site, you’ll learn they have all sorts of protections in place above and beyond what I described here. Hopefully this post and the previous one provided some insight into why they do all the things they do. :)


A Cross-site request forgery attack, also known as CSRF or XSRF (pronounced sea-surf) is the less well known, but equally dangerous, cousin of the Cross Site Scripting (XSS) attack. Yeah, they come from a rough family.

CSRF is a form of confused deputy attack. Imagine you’re a malcontent who wants to harm another person in a maximum security jail. You’re probably going to have a tough time reaching that person due to your lack of proper credentials. A potentially easier approach to accomplish your misdeed is to confuse a deputy to misuse his authority to commit the dastardly act on your behalf. That’s a much more effective strategy for causing mayhem!

In the case of a CSRF attack, the confused deputy is your browser. After logging into a typical website, the website will issue your browser an authentication token within a cookie. Each subsequent request sends the cookie back to the site to let the site know that you are authorized to take whatever action you’re taking.

Suppose you visit a malicious website soon after visiting your bank website. Your session on the previous site might still be valid (though most bank websites guard against this carefully). Thus, visiting a carefully crafted malicious website (perhaps you clicked on a spam link) could cause a form post to the previous website. Your browser would send the authentication cookie back to that site and appear to be making a request on your behalf, even though you did not intend to do so.

Let’s take a look at a concrete example to make this clear. This example is the same one I demonstrated as part of my ASP.NET MVC Ninjas on Fire Black Belt Tips talk at Mix in Las Vegas. Feel free to download the source for this sample and follow along.

Here’s a simple banking website I wrote. If your banking site looks like this one, I recommend running away.

The site properly blocks anonymous users from taking any action. You can see that in the code for the controller:

[Authorize]
public class HomeController : Controller {
  // ... action methods ...
}

Notice that we use the AuthorizeAttribute on the controller (without specifying any roles) to specify that all actions of this controller require the user to be authenticated.

After logging in, we get a simple form that allows us to transfer money to another account in the bank. Note that for the sake of the demo, I’ve included an information disclosure vulnerability by allowing you to see the balance for other bank members. ;)


To transfer money to my Bookie, for example, I can enter an amount of $1000, select the Bookie account, and then click Transfer. The following shows the HTTP POST that is sent to the website (slightly edited for brevity):

POST /Home/Transfer HTTP/1.1
Referer: http://localhost:54607/
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 34
Cookie: .ASPXAUTH=98A250...03BB37

Amount=1000&destinationAccountId=3
There are three important things to notice here. We are posting to a well known URL, /Home/Transfer, we are sending a cookie, .ASPXAUTH, which lets the site know we are already logged in, and we are posting some data (Amount=1000&destinationAccountId=3), namely the amount we want to transfer and the account id we want to transfer to. Let’s briefly look at the code that executes the transfer.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Transfer(int destinationAccountId, double amount) {
  string username = User.Identity.Name;
  Account source = _context.Accounts.First(a => a.Username == username);
  Account destination = _context.Accounts.FirstOrDefault(
    a => a.Id == destinationAccountId);
  source.Balance -= amount;
  destination.Balance += amount;
  return RedirectToAction("Index");
}
Disclaimer: Do not write code like this. This code is for demonstration purposes only. For example, I don’t ensure that the amount is non-negative, which means you can enter a negative value to transfer money from another account. Like I said, if you see a bank website like this, run!

The code is straightforward. We simply transfer money from one account to another. At this point, everything looks fine. We’re making sure the user is logged in before we transfer money. And we are making sure that this method can only be called from a POST request and not a GET request (this last point is important. Never allow changes to data via a GET request). So what could go wrong?

Well BadGuy, another bank user has an idea. He sets up a website that has a page with the following code:

    <form name="badform" method="post"
        action="http://localhost:54607/Home/Transfer">
        <input type="hidden" name="destinationAccountId" value="2" />
        <input type="hidden" name="amount" value="1000" />
    </form>
    <script type="text/javascript">
        document.badform.submit();
    </script>

What he’s done here is create an HTML page that replicates the fields in bank transfer form as hidden inputs and then runs some JavaScript to submit the form. The form has its action set to post to the bank’s URL.

When you visit this page it makes a form post back to the bank site. If you want to try this out, I am hosting this HTML here. You have to make sure the website sample code is running on your machine before you click that link to see it working.

Let’s look at the contents of that form post.

POST /Home/Transfer HTTP/1.1
Referer: http://localhost:54607/csrf-mvc.html
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 34
Cookie: .ASPXAUTH=98A250...03BB37

destinationAccountId=2&amount=1000
It looks exactly the same as the one before, except the Referer is different. When the unsuspecting bank user visited the bad guy’s website, it recreated a form post to transfer funds, and the browser unwittingly sent the still active session cookie containing the user’s authentication information.

The end result is that I’m out of $1000 and BadGuy has his bank account increased by $1000. Drat!

It might seem that you could rely on checking the Referer field to prevent this attack, but some proxy servers strip out the Referer field in order to maintain privacy, and there may be ways to spoof it. Another mitigation is to constantly change the URL used for performing sensitive operations like this.

In general, the standard approach to mitigating CSRF attacks is to render a “canary” in the form (typically a hidden input) that the attacker couldn’t know or compute. When the form is submitted, the server validates that the submitted canary is correct. Now this assumes that the browser is trusted since the point of the attack is to get the general public to misuse their own browser’s authority.

It turns out this is mostly a reasonable assumption since browsers do not allow using XmlHttp to make a cross-domain GET request. This makes it difficult for the attacker to obtain the canary using the current user’s credentials. However, a bug in an older browser, or in a browser plugin, might allow alternate means for the bad guy’s site to grab the current user’s canary.

The mitigation in ASP.NET MVC is to use the AntiForgery helpers. Steve Sanderson has a great post detailing their usage.

The first step is to add the ValidateAntiForgeryTokenAttribute to the action method. This will validate the “canary”.

[AcceptVerbs(HttpVerbs.Post)]
[ValidateAntiForgeryToken]
public ActionResult Transfer(int destinationAccountId, double amount) {
  //... code you've already seen ...
}

The next step is to add the canary to the form in your view via the Html.AntiForgeryToken() method.

The following shows the relevant section of the view.

<% using (Html.BeginForm("Transfer", "Home")) { %>
    <label for="Amount">Amount:</label>
    <%= Html.TextBox("Amount")%>
    <label for="destinationAccountId">
      Destination Account:
    </label>
    <%= Html.DropDownList("destinationAccountId", "Select an Account") %>
    <%= Html.AntiForgeryToken() %>
    <input type="submit" value="transfer" />
<% } %>

When you view source, you’ll see the following hidden input.

<input name="__RequestVerificationToken" type="hidden"
  value="WaE634+3jjeuJFgcVB7FMKNzOxKrPq/WwQmU7iqD7PxyTtf8H8M3hre+VUZY1Hxf" />

At the same time, we also issue a cookie with that value encrypted. When the form post is submitted, we compare the cookie value to the submitted verification token and ensure that they match.
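Conceptually, the server-side check reduces to comparing the two submitted copies of the token. The sketch below is a simplification of that idea, not the real helper code (the actual helpers also encrypt the cookie copy and validate more than a plain string match):

```csharp
using System;

static class AntiForgeryCheck
{
    // True only when the token posted in the hidden form field matches
    // the token carried by the user's cookie. An attacker's cross-site
    // form cannot supply a matching pair.
    public static bool TokensMatch(string cookieToken, string formToken)
    {
        if (string.IsNullOrEmpty(cookieToken) || string.IsNullOrEmpty(formToken))
            return false;
        return string.Equals(cookieToken, formToken, StringComparison.Ordinal);
    }
}
```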

Should you be worried?

The point of this post is not to be alarmist, but to raise awareness. Most sites will never really have to worry about this attack in the first place. If your site is not well known or doesn’t manage valuable resources that can be transferred to others, then it’s not as likely to be targeted by a mass phishing attack by those looking to make a buck.

Of course, financial gain is not the only motivation for a CSRF attack. Some people are just a-holes and like to grief large popular sites. For example, a bad guy might use this attack to try and post stories on a popular link aggregator site like Digg.

One point I would like to stress is that it is very important to never allow any changes to data via GET requests. To understand why, check out this post as well as this story about the Google Web Accelerator.
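In ASP.NET MVC, the way to enforce this is to decorate state-changing actions with [AcceptVerbs(HttpVerbs.Post)]. Stripped of the framework, the rule reduces to a guard like the following sketch (the class and method names are mine, not an MVC API):

```csharp
using System;

static class RequestGuard
{
    // A state-changing operation should only proceed for a POST (or
    // another non-safe verb such as PUT or DELETE); GET and HEAD must
    // remain side-effect free.
    public static bool AllowStateChange(string httpMethod)
    {
        return !string.Equals(httpMethod, "GET", StringComparison.OrdinalIgnoreCase)
            && !string.Equals(httpMethod, "HEAD", StringComparison.OrdinalIgnoreCase);
    }
}
```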

What about Web Forms?

It turns out Web Forms are not immune to this attack by default. I have a follow-up post that talks about this and the mitigation.

If you missed the link to the sample code before, you can download the source here (compiled against ASP.NET MVC 2).

Technorati Tags: aspnetmvc, csrf, security

code, humor

I’ve been relatively quiet on my blog lately in part because of all the work on ASP.NET MVC. However, the ASP.NET team is a relatively small team so we often are required to work on multiple features at the same time. So part of the reason I’ve been so busy is that while we were wrapping up ASP.NET MVC, I was also busy working on a core .NET Framework feature we plan to get into the next version (it was a feature that originated with our team, but we realized it belongs in the BCL).

The goal of the feature is to help deal with the very common task of handling string input. In many cases, the point is to convert the input into another type, such as an int or float. But how do you deal with the fact that the string might not be convertible to the other type?

We realized we needed a type to handle this situation. A type that would represent the situation after the user has submitted input, but before you attempt the conversion. At this point, you have a string or another type.


For more details on the StringOr<T> Community Technology Preview (CTP), please see details on lead developer Eilon Lipton’s Blog (he’s a big fan of cats as you can see). He provides source code and unit tests for download. As always, please do provide feedback as your feedback is extremely important in helping shape this nascent technology.

Tags: framework, .net

mvc

First let me begin by assuring you, this is not an April Fool’s joke.

Exciting news! Scott Guthrie announced today that we have released the source code for ASP.NET MVC 1.0 under the Ms-PL license, an OSI approved Open Source license with all the rights that license entails.

You can download the Ms-PL licensed source package from the download details page here; just scroll down and look for the source package file. My baby is growing up!

A big thanks must go out to everyone involved in making this happen and to those who approved it. It’s truly a team effort. When I joined Microsoft, I remember walking into ScottGu’s office to try and plant the seed for releasing ASP.NET MVC under the Ms-PL license. I came in armed with reasons why we should, but found him to be immediately receptive, if not already thinking along those lines. In fact, a lot of people such as Brian Goldfarb, my management chain, our LCA contact, etc… were completely on board, which was unexpected (though maybe it should not have been) and encouraging to me.

However, there’s a difference between agreeing to do something and actually doing it. It still took a lot of people doing the leg-work to make it happen. It personally kept me very busy in the days leading up to the official RTM release. Let’s just say I feel like I’m one course away from getting a law degree.

I know one of the first questions some of you will ask is will we accept source code contributions (I’ve already seen the question on Twitter :). Unfortunately, at this time the answer is no, we do not accept patches. Please don’t let that stop you from contributing in other ways. The terms of the license do mean we need to stay on our toes to keep putting out compelling releases and we will work hard not to disappoint.

Personally (and this is totally my own opinion), I’d like to reach the point where we could accept patches. There are many hurdles in the way, but if you went back in time several years and told people that Microsoft would release several open source projects (Ajax Control Toolkit, MEF, DLR, IronPython and IronRuby, etc.), you’d have been laughed back to the present. Perhaps if we could travel a few years into the future, we’d see a completely different landscape from today.

However, it is a complex issue and I don’t want to downplay that, but there are many of us who are looking for novel solutions and trying to push things forward. I really think in the long run, it is good for us and for our customers; otherwise we wouldn’t care.

But coming back to the present, I’m extremely pleased with where we are now and look forward to what will happen in the future. Someone once expressed disappointment that my involvement in open source projects seriously declined after joining Microsoft. It was my hope at the time that, once ASP.NET MVC was released, it would be clear that technically, I had been working on OSS. :)

subtext

Simo beat me to the punch in writing about this. After many long years of being hosted on SourceForge, the Subtext submarine is moving into a new project hosting port.

We’ve finally moved off of SourceForge and onto Google Code’s project hosting. Our main site (primarily for end users) is still at the same address, and I’ve hopefully updated every place it points to SourceForge to now point to Google Code.

Subtext-moves Image stolen from Simo’s blog. ;)

This was a very tough decision between CodePlex and Google Code. CodePlex is a great platform and I really like what they’ve done with being able to vote on issues etc… They seem to be innovating and adding new features at a rapid clip. I host Subkismet, a smaller project, on CodePlex and probably would choose it for a brand new project.

My one big complaint with CodePlex is that we really want native Subversion access, not a Subversion bridge to TFS. For example, I was able to run the svnsync command to get the entire SVN history for Subtext into Google Code. That’s not something I could do today with CodePlex.

One other thing I really like with Google Code is that it’s fast. When you go to our project page, and click on the tabs, notice how fast the transition is. Click on an issue and see how fast you get there. Make a change and save it and it just snaps back. I spend a lot of time triaging and organizing issues etc… so this snappiness is really important to me.

Another great feature I love is how well code review is integrated into Google Code. For example, you can use the web interface to look at any revision in our repository. Take a look at r3406 for example.

Click on the diff link next to each file that was changed. For example, the diff for AkismetClient.cs. You get a nice side-by-side diff of the changes. You can double click on any line of code to leave a comment. Scroll down to line 160 and take a look at a comment. Don’t worry, I was the original author of that file so I’m not offending anybody but myself with that comment.

So there’s a lot I’d love to see improved with Google Code, but I’m pretty happy with the usability of the site overall. It’s a vast improvement over SourceForge, where I had started to viscerally hate managing issues and doing any sort of administrative task.

We’re now using Google Groups for our Subtext discussion list and we have a separate group for notification emails such as commit emails. I’m going to leave the tracker and file releases at SourceForge for a while longer until we’ve moved everything over. Unfortunately, there’s no automatic import from SourceForge for bug reports. But if you’re interested in keeping tabs on the progress Subtext is making, feel free to join the groups.

code

Recently, I tried to accomplish a simple task on a website which frustrated me because what should have been simple, was not. All I wanted to do was go to the Mix website and quickly find links to my sessions so I could post them here. Even I should be able to figure this out.

As a note, I’m using the Mix site as my illustration here, but I do so out of love, not mean-spiritedness. Mix is my favorite conference, but its website leaves something to be desired.

It seems particularly interesting that I’d run into this with the Mix site because the whole conference caters to a web design audience as well as web developers. It just goes to show that even the best designers sometimes lose sight of what makes a user interface usable in the pursuit of cool, flashy design. After all, designers are often trying to impress their designer friends when they should be creating a site that helps its audience accomplish something.

Let’s take a look at my experience. Here’s the front page. Where do I start?


Notice that there’s no search feature (I already tried Google). Fine, so I click all sessions and get this timeline view. There’s a search button there but when I type “MVC” and hit GO, nothing happens. I also try my last name. No go.

Sessions - Windows Internet Explorer

Near the bottom, I click on SPEAKERS and find my name. I then try clicking on one of my sessions, which is highlighted as a link, and nothing happens (it’s since been fixed). When I highlight the session, a pop-up which appears to have links shows up, but I can’t move my mouse over it because it disappears before my mouse gets there.

Speakers - Windows Internet Explorer

After a moment of pondering, it occurs to me that maybe the video of my talk was not yet available at the time (it is now as I write this). It would have been nice if the site clearly indicated this, but instead, they were providing a link that went nowhere, giving me the impression that the video was available. Very confusing!

Now contrast that to this great post by Greg Duncan.

Mix 09 Quick Video Link List - Greg’s Cool [Insert Clever Name] of the Day - Windows Internet Explorer

Notice that it’s the height of simplicity. It’s a list of all sessions (after all, the list of sessions wasn’t all that long) with headings showing you which video format the session is available in. I would have loved to see something like this on the Mix site.

A quick F3 to bring up my browser’s search feature, type in MVC, and boom goes the dynamite, I’ve found my sessions and am happily dissecting my speaking performance in disgust.

Kudos to Greg for putting this together. The fact that he even needed to put this together is a major indication that there’s a problem over at the Mix site.

The problem here is that in the pursuit of cool design, designers sometimes confuse ornamentation with good design. The Mix site has all sorts of flashy elements showing off a mastery of jQuery. Whoopee! And typically, I love those elements because when done well, they can make a site more usable. But in this case, I much prefer Greg’s simpler approach.

There’s not that much data to display, so rather than chop it up into pages, he just puts it all in one page which allows me, the reader, to make use of the familiar tools built into my browser such as F3 to search. Bam! Done! I can move on now.

Keep that in mind next time as you put together your next great user interface. How can you make it simpler and leverage the tools already in the client?

mvc

After my critical post about the Mix website, I found this other site, which should have been prominently linked to from the main site, because it has a working search bar and is fairly usable and flashy!

I gave two sessions on ASP.NET MVC at Mix.

As you can see, we tried to have a bit of fun with the session titles. If you’re not tired of hearing me talk about MVC, you can also see an interview I did with Adam Kinney on Channel 9.

I will be posting my demos on my blog sometime soon hopefully. I want to give a few of them the full blog treatment. Partly because all of my demos are built live and I don’t keep the final product around, so I need to redo the steps. And I’m lazy. There. I said it.


One of the cool products announced at Mix that I’m personally excited about is the updated Web Platform Installer.

I’m not going to lie. Part of the reason I’m excited about it is that it includes the latest version of Subtext! The Web PI tool is a really nice way of installing and trying out the various free and open source applications out there. It installs everything you need to get Subtext up and running on your local machine.

All you have to do is go to the Web App Gallery, find an application and click the Install button and it will install the application (if you have Web PI already installed). If you don’t have Web PI installed, it will prompt you to install Web PI and then install the app.

In this screenshot, you can see the dependencies needed to run Subtext are already listed.


Subtext 2.1.1 is a very minor update to Subtext with a few bug fixes. The major fix is that the “forgot password” feature now works properly.

This was a little last-minute surprise for the Subtext team, as I literally put the required install package together in the last minutes leading up to Mix. In the meanwhile, major refactoring work is ongoing in our Subversion repository. For example, the trunk now uses a custom routing implementation (not yet built on ASP.NET MVC, but moving that way). Feel free to join in and help fix bugs and test. :)

mvc

newdotnetlogo_2_thumb Today I’m happy to write that ASP.NET MVC 1.0 RTW (Release To Web) is now officially released.

This was one of several announcements ScottGu made at the Mix 09 conference today, which I unfortunately missed because I was on a plane to Vegas en route to Mix 09. I was busy back at the mother ship making sure everything was in order for the release.


It’s been nearly a year and a half since I joined Microsoft and started working on it and what a ride it’s been.

Some highlights during that time:

With ASP.NET MVC, we wanted to release early and often providing a lot of transparency with our design process. We made a lot of changes based on user feedback. All in all, I counted 10 releases:

  1. December 2007 CTP
  2. Preview 2
  3. Preview 2.5 (April CodePlex release)
  4. Preview 3
  5. CodePlex Preview 4
  6. Preview 5
  7. Beta
  8. RC 1
  9. RC 2
  10. RTM!

A great way to learn about ASP.NET MVC is to go to the official website. Also be sure to check out the free eBook, which contains Chapter 1 of Professional ASP.NET MVC. This chapter contains an end-to-end walkthrough of building an application with ASP.NET MVC.

We also have an updated set of documents on MSDN worth checking out.

A great way to install ASP.NET MVC is via the new Web Platform Installer. I highly recommend it. Not only can you install ASP.NET MVC with it, but also many other free applications, such as one of my favorites, Subtext!

Technorati Tags: aspnetmvc, subtext