code comments edit

Every now and then some email or website comes along promising to prove Fred Brooks wrong about that crazy idea he wrote in The Mythical Man-Month (highly recommended reading!): that there is no silver bullet which by itself will provide a tenfold improvement in productivity, reliability, and simplicity within a decade.

This time around, the promise was much like others, but they felt the need to note that their revolutionary new application/framework/doohickey will allow business analysts to directly build applications 10 times as fast without the need for programmers!

![revenge-nerds](http://haacked.com/images/haacked_com/WindowsLiveWriter/AndGetRidOfThosePeskyProgrammers_A0BE/revenge-nerds_thumb.jpg "revenge-nerds")

Ah yeah! Get rid of those foul smelling pesky programmers! We don’t need em!

Now wait one dag-burn minute! Seriously?!

I’m going to try real hard for a moment to forget they said that and not indulge my natural knee-jerk reaction, which is to flip the bozo bit immediately. If I were a more reflective person, this would raise a disturbing question:

Why are these business types so eager to get rid of us programmers?

It’s easy to blame the suits for not understanding software development and forcing us into a Tom Smykowski moment having to defend what it is we do around here.

Well-well look. I already told you: I deal with the god damn customers so the engineers don’t have to. I have people skills; I am good at dealing with people. Can’t you understand that? What the hell is wrong with you people?

Maybe, as Steven “Doc” List quotes from Cool Hand Luke in his latest End Bracket article on effective communication for MSDN Magazine,

What we’ve got here is a failure to communicate.

Leon Bambrick (aka SecretGeek) recently wrote about this phenomenon in his post entitled The Better You Program, The Worse You Communicate, in which he outlines how the techniques that make us effective software developers do not apply to communicating with other humans.

After all, we can sometimes be hard to work with. We’re often so focused on the technical aspects and limitations of a solution that we unknowingly confuse the stakeholders with jargon and annoy them by calling their requirements “ludicrous”. Sometimes, we fail to deeply understand their business and resort to making fun of our stakeholders rather than truly understanding their needs. No wonder they want to do the programming themselves!

Ok, ok. It’s not always like this. Not every programmer is like this and it isn’t fair to lay all the blame at our feet. I’m merely trying to empathize and understand the viewpoint that would lead to this idea that moving programmers out of the picture would be a good thing.

Some blame does deserve to lie squarely at the feet of these snake oil salespeople because, at the moment, they’re selling a lie. What they’d like customers to believe is that your average business analyst simply describes the business in their own words to the software, and it spits out an application.

The other day, I started an internal email thread describing in hand-wavy terms some feature I thought might be interesting. A couple hours later, my co-worker had an implementation ready to show off.

Now that, my friends, is the best type of declarative programming. I merely declared my intentions, waited a bit, and voila! Code! Perhaps that’s along the lines of what these types of applications hope to accomplish, but there’s one problem. In the scenario I described, the requirements were fed to a human. If I had sent that email to some software, it would have had no idea what to do with it.

At some point, something close to this might be possible, but only when software has reached the point where it can exhibit sophisticated artificial intelligence and really deal with fuzziness. In other words, when the software itself becomes the programmer, only then might you really get rid of the human programmer. But I’m sorry to say, you’re still working with a programmer, just one who doesn’t scoff at your requirements arrogantly (at least not in your face while it plots to take over the world, carrot-top).

Until that day, when a business analyst wires together an application with Lego-like precision using such frameworks, that analyst has in essence become a programmer. That work requires many of the same skills that developers need. At this point, you really haven’t gotten rid of programmers; you’ve just converted a business type into a programmer, albeit one who happens to know the business very well.

In the end, no matter how “declarative” a system you build, and no matter how foolproof you make it so that a non-programmer can build applications by dragging doohickeys around a screen, there’s very little room for imprecision and fuzziness, something humans handle well but computers do not, as Spock demonstrated so well in an episode of Star Trek.

“Computer, compute the last digit of PI” - Spock

Throw into the mix that the bulk of the real work of building an application is not the coding, but all the work surrounding that, as Udi Dahan points out in his post on The Fallacy of ReUse.

This is not to say that I don’t think we should continue to invest in building better and better tools. After all, the history of software development is the history of building ever higher-level tools to make developers more productive. I think the danger lies in trying to remove the discipline and traits that will always be required when using these tools to build applications.

Even when you can tell the computer what you want in human terms, and it figures it out, it’s important to still follow good software development principles, ensure quality checks, tests, etc…

The lesson for us programmers, I believe, is two-fold. First, we have to educate our stakeholders about how software production really works. Even if they won’t listen, a little knowledge and understanding here goes a long way. Be patient, don’t be condescending, and hope for the best. Second, we have to educate ourselves deeply about the business so that we are seen as valuable business partners who happen to write the code that matters.

comments edit

A little while ago I announced our plans for ASP.NET MVC as it relates to Visual Studio 2010. ASP.NET MVC wasn’t included as part of Beta 1, which raised a few concerns (if not conspiracy theories!) among some ;). The reason for this was simple, as I pointed out:

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

Today I’m happy to announce that we’re done with the work I described and the installer is now available on CodePlex. Be sure to give it a try as many of the new VS10 features intended to support the TDD workflow fit very nicely with ASP.NET MVC, which ScottGu will describe in an upcoming blog post.

If you run into problems with the installer, try out this troubleshooting guide by Jacques, the developer who did the installer work, and do provide feedback.

You’ll notice that the installer says this is ASP.NET MVC 1.1, but as the readme notes point out, this is really ASP.NET MVC 1.0 retargeted for Visual Studio 2010. The 1.1 is just a placeholder version number. We bumped up the version number to avoid runtime conflicts with ASP.NET MVC 1.0. All of this and more is described in the Release Notes.

When VS10 Beta 2 comes out, you won’t need to download a separate standalone installer to get ASP.NET MVC (though a standalone installer will be made available for VS2008 users that will run on ASP.NET 3.5 SP1). A pre-release version of ASP.NET MVC 2 will be included as part of the Beta 2 installer as described in the …

Roadmap

Road Blur: Photo credit: arinas74 on stock.xchng

I recently published the Roadmap for ASP.NET MVC 2 which gives a high level look at what features we plan to do for ASP.NET MVC 2. The features are noticeably lacking in details as we’re deep in the planning phase trying to gather pain points.

Right now, we’re avoiding focusing on the implementation details as much as possible. When designing software, it’s very easy to have preconceived notions about what the solution should be, even when we don’t yet have a full grasp of the problem that needs to be solved.

Rather than guiding people towards what we think the solution is, I hope to focus on making sure we understand the problem domain and what people want to accomplish with the framework. That leaves us free to try out alternative approaches that we might not have considered before such as alternatives to expression based URL helpers. Maybe the alternative will work out, maybe not. Ideally, I’d like to have several design alternatives to choose from for each feature.

As we get further along the process, I’ll be sure to flesh out more and more details in the Roadmap and share them with you.

Snippets

One cool new feature of VS10 is that snippets now work in the HTML editor. Jeff King from the Visual Web Developer team sent me the snippets we plan to include in the next version. They are also downloadable from the CodePlex release page. Installation is very simple:

Installation Steps:

1) Unzip “ASP.NET MVC Snippets.zip” into “C:\Users\<username>\Documents\Visual Studio 10\Code Snippets\Visual Web Developer\My HTML Snippets”, where “C:\” is your OS drive.
2) Visual Studio will automatically detect these new files.

Try them out and let us know if you have ideas for snippets that will help you be more productive.


comments edit

One of the features contained in the MVC Futures project is the ability to generate action links in a strongly typed fashion using expressions. For example:

<%= Html.ActionLink<HomeController>(c => c.Index()) %>

This will generate a link to the Index action of the HomeController.

It’s a pretty slick approach, but it is not without its drawbacks. First, the syntax is not one you’d want to take as your prom date. I guess you can get used to it, but a lot of people who see it for the first time kind of recoil at it.

The other problem with this approach is performance as seen in this slide deck I learned about from Brad Wilson. One of the pain points the authors of the deck found was that the compilation of the expressions was very slow.

I had thought that we might be able to mitigate these performance issues via some sort of caching of the compiled expressions, but that might not work very well. Consider the following case:

<% for(int i = 0; i < 20; i++) { %>

  <%= Html.ActionLink<HomeController>(c => c.Foo(i)) %>

<% } %>

Each time through that loop, the expression is the same: c => c.Foo(i)

But the value of the captured “i” is different each time. If we try to cache the compiled expression, what happens?
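To see why, here’s a minimal console sketch (my own illustration, not MVC code) of a hypothetical cache keyed off the expression’s string form. The compiled delegate stays bound to the closure instance from the first iteration, so every cache hit returns a stale value:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

class ExpressionCacheDemo {
  // Hypothetical cache keyed by the expression's string representation,
  // which is identical for every iteration of the loop.
  static readonly Dictionary<string, Func<int>> _cache =
    new Dictionary<string, Func<int>>();

  static Func<int> GetOrCompile(Expression<Func<int>> expr) {
    string key = expr.ToString();
    Func<int> compiled;
    if (!_cache.TryGetValue(key, out compiled)) {
      compiled = expr.Compile();
      _cache[key] = compiled;
    }
    return compiled;
  }

  static void Main() {
    for (int i = 0; i < 3; i++) {
      int captured = i; // each iteration gets its own closure instance
      Func<int> link = GetOrCompile(() => captured);

      // Prints 0, 0, 0: the cached delegate is still bound to the
      // closure from the first iteration, not the current one.
      Console.WriteLine(link());
    }
  }
}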

So I started thinking about an alternative approach using code generation against the controllers, and circulated an email internally. One approach was to code-gen action-specific link methods. Thus the About link for the home controller (assuming we add an id parameter for demonstration purposes) would be:

<%= HomeAboutLink(123) %>

Brad had mentioned many times that while he likes expressions, he’s no fan of using them for links and he tends to write specific action link methods just like the above. So what if we could generate them for you so you didn’t have to write them by hand?

A couple hours after starting the email thread, David Ebbo had an implementation ready to show off. He probably had it done earlier for all I know; I was stuck in meetings. Talk about the best kind of declarative programming. I declared what I wanted roughly, with hand waving, and a little while later, the code just appears! ;)

David’s approach uses a BuildProvider to reflect over the Controllers and Actions in the solution and generate custom action link methods for each one. There’s plenty of room for improvement, such as ensuring that it honors the ActionNameAttribute and generating overloads, but it’s a neat proof of concept.
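To make that concrete, the generated helpers presumably end up looking roughly like this hand-written sketch (the names and exact shape are my assumption, not necessarily what the BuildProvider emits):

using System.Web.Mvc;
using System.Web.Mvc.Html;

// One strongly named helper per discovered controller action.
public static class GeneratedActionLinkExtensions {
  public static string HomeAboutLink(this HtmlHelper html, int id) {
    // Plain old ActionLink under the hood; no expression
    // compilation cost at render time.
    return html.ActionLink("About", "About", "Home", new { id }, null);
  }
}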

One disadvantage of this approach compared to the expression based helpers is that there’s no refactoring support. However, if you rename an action method, you will get a compilation error rather than a runtime error, which is better than what you get without either. One advantage of this approach is that it performs fast and doesn’t rely on the funky expression syntax.

These are some interesting tradeoffs we’ll be looking closely at for the next version of ASP.NET MVC.

comments edit

ASP.NET Pages are designed to stream their output directly to a response stream. This can be a huge performance benefit for large pages as it doesn’t require buffering and allocating very large strings before rendering. Allocating large strings can put them on the Large Object Heap which means they’ll be sticking around for a while.

However, there are many cases in which you really want to render a page to a string so you can perform some post processing. I wrote about one means of doing this, using a Response filter, eons ago.

However, I recently learned about a method of the Page class I had never noticed, one which allows for a much lighter weight approach to this problem.

The method in question is CreateHtmlTextWriter which is protected, but also virtual.

So here’s an example of the code-behind for a page that leverages this method to filter the output before it’s sent to the browser.

public partial class FilterDemo : System.Web.UI.Page
{
  HtmlTextWriter _oldWriter = null;
  StringWriter _stringWriter = new StringWriter();

  protected override HtmlTextWriter CreateHtmlTextWriter(TextWriter tw)
  {
    _oldWriter = base.CreateHtmlTextWriter(tw);
    return base.CreateHtmlTextWriter(_stringWriter);
  }

  protected override void Render(HtmlTextWriter writer)
  {
    base.Render(writer);
    string html = _stringWriter.ToString();
    html = html.Replace("REPLACE ME!", "IT WAS REPLACED!");
    _oldWriter.Write(html);
  }
}

In the CreateHtmlTextWriter method, we simply use the original logic to create the HtmlTextWriter and store it away in an instance variable.

Then we use the same logic to create a new HtmlTextWriter, but this one has our own StringWriter as the underlying TextWriter. The HtmlTextWriter passed into the Render method is the one we created. We call Render on that and grab the output from the StringWriter and now can do all the replacements we want. We finally write the final output to the original HtmlTextWriter which is hooked up to the response.

A lot of caveats apply in using this technique. First, as I mentioned before, for large pages, you could be killing scalability and performance by doing this. Also, I haven’t tested this with output caching, async pages, etc… etc…, so your mileage may vary.

Note, if you want to call one page from another, and get the output as a string within the first page, you can pass your own TextWriter to Server.Execute, so this technique is not necessary in that case.
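For reference, that Server.Execute technique looks something like this minimal sketch (the page path is a placeholder):

// Inside a page or handler (requires using System.IO):
var writer = new StringWriter();
Server.Execute("~/SomePage.aspx", writer);
string output = writer.ToString();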

personal comments edit

Being that it’s a glorious Memorial Day Weekend up here in the Northwest, my co-worker Eilon (developer lead for ASP.NET MVC) and I decided to go on a hike to Mt Si where we had a bit of a scary moment.

I first learned about Mt Si at the company picnic last year; it’s visible behind me and Cody in this photo. I remember seeing the imposing cliff face and thinking to myself, I want to climb up there. I imagined the view would be quite impressive.

Mt Si is a moderately strenuous 8-mile round-trip hike with an elevation gain of 3,100 feet, taking you to about 3,600 feet, according to the Washington Trails Association website. Given that it is a very popular hike and that this was a three-day weekend, we figured we’d get an early start by heading over there at 7 AM.

That ended up being a good idea as the parking lot had quite a few cars already, but it wasn’t full by any means. This is a picture of the trail head which starts the hike off under a nice canopy of green.

Right away, the no-nonsense trail starts you off huffing uphill amongst a multitude of trees.


Along the way, there are the occasional diversions. For example, this one won me $10 as the result of a bet that I wouldn’t walk to the edge of the tree overhanging the drop off.


When you get to the top, there’s a great lookout with amazing views. But what caught our attention is a rock outcropping called the “Haystack”, which takes you up another 500 feet. Near the base of the Haystack is a small memorial for those who’ve died from plummeting off its rocky face. It’s not a trivial undertaking, but I demanded we try.

Mount Si

Unfortunately, there’s nothing in the above picture to provide a better sense of scale for this scramble. In the following picture you can see some people pretty much scooting down the steep slope on their butts.

Mount Si

Once they were down, we started up and reached around two thirds of the way when I made the mistake of looking back and remarking about how much more difficult it was going to be going down. That started getting us nervous, because it’s always easier going up than down.

It would have probably been best if I hadn’t made that remark because the climb wasn’t really that difficult, but introducing a bit of nervousness into the mix can really sabotage one’s confidence, which you definitely want on a climb.

At that point, the damage was done and we decided we had enough and started heading back down. Better to try again another day when we felt more confident. At that moment, a couple heading down told us we were almost there and it wasn’t so bad. Our success heading back down and their comments started to bolster our confidence to the point where I was ready to head back up, until I noticed that my shoe felt odd.

What I hadn’t noticed while climbing on the steep face was that my sole had almost completely detached from my hiking boot during the climb. Fortunately, Eilon had some duct tape on hand allowing me to make this ghetto looking patch job.

MacGyver Repair Job

At this point I had a mild panic because I worried that the duct tape would cause me to lose grip with my boots on the way down. And frankly, I was pissed off as well, as I’ve had these boots for a few years but haven’t hiked in them all that often. What a perfect time for them to completely fall apart!

Fortunately, I didn’t have much problem climbing back down and we stopped at the first summit to take some pictures and have a brief snack.

Not having the guts today to climb the big rock, I scrambled up a much smaller one and got this great view of Mt Rainier in its full splendor.


The view from the top is quite scenic and using binoculars, I was able to check on my family back in Bellevue (joke).

Going back down was much quicker than the way up and we had a blast practically trail running the first part, until my other shoe gave out.

Guess the warranty must have run out yesterday. ;) Fortunately, Eilon, who came prepared with the duct tape, also had all-terrain sandals with him, which I wore the rest of the way. Next time, I think I’ll ditch the Salomon boots and try Merrells, which other hikers I ran into were wearing.

Despite the mishaps, the hike was really a fun romp in the woods and I highly recommend that anyone in the Seattle area give it a try. Go early to avoid the crowds. I doubled my $10 in an over/under bet in which I took the over on 140 cars in the lot. We stopped counting at around 170 cars when we left.

Mount Si

This is one last look at Mt Si on our way back home. Eilon put together a play-by-play using Live Maps Bird’s Eye view (click for larger).

The path we took

For more info on the Mt Si hike, check out the Washington Trails Association website.

asp.net, code, asp.net mvc comments edit

This post is now outdated

I apologize for not blogging this over the weekend as I had planned, but the weather this weekend was just fantastic so I spent a lot of time outside with my son.

If you haven’t heard yet, Visual Studio 2010 Beta 1 is now available for MSDN subscribers to download. It will be more generally available on Wednesday, according to Soma.

You can find a great whitepaper describing what is new for web developers in ASP.NET 4, which is included in the release.

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

Right now, if you try and open an MVC project with VS 2010 Beta 1, you’ll get some error message about the project type not being supported. The easy fix for now is to remove the ASP.NET MVC ProjectTypeGuid entry as described by this post.
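For illustration, that fix amounts to deleting the ASP.NET MVC entry from the ProjectTypeGuids element in the .csproj file. The GUIDs below are my recollection of a typical MVC 1.0 project file, so check your own project for the exact values:

<!-- Before: the first GUID marks this as an ASP.NET MVC project -->
<ProjectTypeGuids>{603c0e0b-db56-11dc-be95-000d561079b0};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

<!-- After: remove it, leaving the web application and C# project types -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>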

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

asp.net, code, asp.net mvc comments edit

A while back, I wrote about Donut Caching in ASP.NET MVC for the scenario where you want to cache an entire view except for a small bit of it. The more technical term for this technique is probably “cache substitution” as it makes use of the Response.WriteSubstitution method, but I think “Donut Caching” really describes it well — you want to cache everything but the hole in the middle.
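As a quick refresher, donut caching looks something like this sketch in Web Forms terms (the duration here is arbitrary): the OutputCache directive caches the entire response, while the substitution callback is re-evaluated on every request.

<%@ OutputCache Duration="60" VaryByParam="none" %>

<p>Cached when the page was first rendered: <%= DateTime.Now %></p>

<p>The hole, always fresh:
<% Response.WriteSubstitution(context => DateTime.Now.ToString()); %>
</p>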

However, what happens when you want to do the inverse? Suppose you want to cache the donut hole instead of the donut?

Photo credit: House of Sims Photostream

I think we should nickname all of our software concepts after tasty food items, don’t you agree?

In other words, suppose you want to cache a portion of the view in a different manner (for example, with a different duration) than the entire view? It hasn’t been exactly clear how to do this with ASP.NET MVC.

For example, the Html.RenderPartial method ignores any OutputCache directives on the view user control. If you happen to use Html.RenderAction from MVC Futures which attempts to render the output from an action inline within another view, you might run into this bug in which the entire view is cached if the target action has an OutputCacheAttribute applied.

I did a little digging into this today and it turns out that when you specify the OutputCache directive on a control (or a page for that matter), the output caching is not handled by the control itself. Rather, the compilation system for ASP.NET pages kicks in, interprets that directive, and does the necessary gymnastics to make it work.

In plain English, this means that what I’m about to show you will only work for the default WebFormViewEngine, though I have some ideas on how to get it to work for all view engines. I just need to chat with the members of the ASP.NET team who really understand the deep grisly guts of ASP.NET to figure it out exactly.

With the default WebFormViewEngine, it’s actually pretty easy to get partial output cache working. Simply add a ViewUserControl declaratively to a view and put your call to RenderAction or RenderPartial inside of that ViewUserControl. If you’re using RenderAction, you’ll need to remove the OutputCache attribute from the action you’re pointing to.

Keep in mind that ViewUserControls inherit the ViewData of the view they’re in. So if you’re using a strongly typed view, just make the generic type argument for ViewUserControl have the same type as the page.

If that last paragraph didn’t make sense to you, perhaps an example is in order. Suppose you have the following controller action.

public ActionResult Index() {
  var jokes = new[] { 
    new Joke {Title = "Two cannibals are eating a clown"},
    new Joke {Title = "One turns to the other and asks"},
    new Joke {Title = "Does this taste funny to you?"}
  };

  return View(jokes);
}

And suppose you want to produce a list of jokes in the view. Normally, you’d create a strongly typed view and within that view, you’d iterate over the model and print out the joke titles.

We’ll still create that strongly typed view, but that view will contain a view user control in place of where we would have had the code to iterate the model (note that I omitted the namespaces within the Inherits attribute value for brevity).

<%@ Page Language="C#" Inherits="ViewPage<IEnumerable<Joke>>" %>
<%@ Register Src="~/Views/Home/Partial.ascx" TagPrefix="mvc" TagName="Partial" %>
<mvc:Partial runat="server" />

Within that control, we do what we would have done in the main view and we specify the output cache values. Note that the ViewUserControl is generically typed with the same type argument that the view is, IEnumerable<Joke>. This allows us to move the exact code we would have had in the view to this control. We also specify the OutputCache directive here.

<%@ Control Language="C#" Inherits="ViewUserControl<IEnumerable<Joke>>" %>
<%@ OutputCache Duration="10000" VaryByParam="none" %>

<ul>
<% foreach(var joke in Model) { %>
    <li><%= Html.Encode(joke.Title) %></li>
<% } %>
</ul>

Now, this portion of the view will be cached, while the rest of your view will continue to not be cached. Within this view user control, you could have calls to RenderPartial and RenderAction to your heart’s content.

Note that if you are trying to cache the result of RenderPartial this technique doesn’t buy you much unless the cost to render that partial is expensive.

Since the output caching doesn’t happen until the view rendering phase, if the view data intended for the partial view is costly to put together, then you haven’t really saved much because the action method which provides the data to the partial view will run on every request and thus recreate the partial view data each time.

In that case, you want to hand cache the data for the partial view so you don’t have to recreate it each time. One crazy idea we might consider (thinking out loud here) is to allow associating output cache metadata to some bit of view data. That way, you could create a bit of view data specifically for a partial view and the partial view would automatically output cache itself based on that view data.

This would have to work in tandem with some means to specify that the bit of view data intended for the partial view is only recreated when the output cache is expired for that partial view, so we don’t incur the cost of creating it on every request.
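To illustrate the hand-caching idea with the joke example above, a minimal sketch might look like this (the cache key, expiration, and data access call are arbitrary choices for illustration):

public ActionResult Index() {
  // Reuse the expensive-to-build view data until the cache expires,
  // roughly mirroring the OutputCache duration on the partial.
  var jokes = (Joke[])HttpContext.Cache["jokes"];
  if (jokes == null) {
    jokes = LoadJokesFromDatabase(); // hypothetical expensive call
    HttpContext.Cache.Insert("jokes", jokes, null,
      DateTime.UtcNow.AddSeconds(10000),
      System.Web.Caching.Cache.NoSlidingExpiration);
  }
  return View(jokes);
}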

In the RenderAction case, you really do get all the benefits of output caching because the action method you are rendering inline won’t get called from the view if the ViewUserControl is outputcached.

I’ve put together a small demo which demonstrates this concept in case the instructions here are not clear enough. Enjoy!

comments edit

A while back a young developer emailed me asking for advice on what it takes to become a successful developer. I started to respond,

I don’t know. Why don’t you go ask a successful developer?

Photo credit: http://www.sxc.hu/photo/885310

But then I thought, that’s kind of snarky, isn’t it? And who am I kidding with all that false modesty? After all, the concept of a “clever hack” was named after me, but the person who came up with it didn’t have 1/10 of my awesomeness, which is exceedingly apparent given the off-by-one error that dropped one of the “a”s from the phrase when it was coined. This, of course, was all before I invented TDD, the Internet, and breathing air.

(note to the humor impaired, I didn’t really invent TDD)

But as I’m apt to do, I digress…

I started thinking about the question a bit and said to myself, “If I were a successful developer, what would have I done to become that?” As I started brainstorming ideas, one thing that really stood out was joining an open source project.

If one thing in my career has paid dividends, it was getting involved with open source projects. It exposed me to such a diverse set of problems and technologies that I wouldn’t normally get a chance to work on at work.

Now before I go further, this post is not the post where I answer the young developer’s question. No, that’s a post for another time. I’ll probably give it some trite and pompous title like “Advice for a young developer.” I mean, c’mon! How pathetic and self absorbed is that title? “Get over yourself!” I’ll say to the mirror. But the guy in the mirror will probably do it anyways, but in reverse.

No, this is not that post. Rather, this post is a digression from that post, because if I’m good at one thing, it’s digressing.

As I thought about the open source thing, I got to thinking about the first open source project I ever worked on – RSS Bandit (w00t w00t!). RSS Bandit is a kick butt RSS aggregator developed by Dare Obasanjo and Torsten Rendelmann. I had just started to get into blogging at the time and was really impressed by Dare’s outspoken and yet very thoughtful blog as well as by his baby at the time, RSS Bandit (he has a real baby now, congrats man!).

I hadn’t done much Windows client development back then. I was mostly building web applications in classic ASP and then early versions of ASP.NET. I figured that it would be exciting to cut my teeth on RSS Bandit and learn Winforms development in the process. The idea of a stateful programming model had me positively giddy with excitement. This was going to be so cool.

Many new developers approaching an open source project have grand visions of implementing shiny amazing new features that will have the crowds roaring, the President naming a holiday after you, and all your enemies realizing the errors of their ways and naming their children after you.

But a good contributor swallows his or her pride and starts off slowly with something smaller in scope and more grunt-work-like in nature. Most OSS projects have a real need for documentation, partly because all the glamour is in implementing features, so nobody wants to write the documentation.

That’s where I started. I wrote an article for the docs on getting started with RSS Bandit. Dare took notice and asked if I would contribute to the documentation, which I gladly agreed to do. He gave me commit access (I believe I was the third after Dare and Torsten to get commit rights) and I started working very hard on the documentation. In fact, much of what I wrote is still there, as you can see from the narcissistic application screenshots I used. ;)

Over time, I gained more and more trust and was allowed to work on some bug fixes and features. My first main feature was implementing configurable keyboard shortcuts, which was really neat to implement.

(A bit of trivia. I worked with these guys for years on RSS Bandit, but never met Dare in person until this past Mix conference in Las Vegas. Seriously! I’ve yet to meet Torsten who lives in Germany.)

I really loved working on RSS Bandit and it became quite a hobby that took up what little was left of my free time. I guess you could say it kept me out of gangs in the hard streets of Los Angeles, not that I tried to join nor would they accept me. Over time though, I learned something. Despite all that initial giddiness over finally getting to program in a stateful environment…

I realized I didn’t like it.

In fact, I found it quite foreign and challenging. I kept running into weird problems where controls retained their previous state after a user clicked a button. I would think to myself, “why do I have to clear that state myself? Why doesn’t it just go away when the user takes an action?” I realized my problem was that I was thinking like a web programmer, not like a client programmer who took these things for granted.

As challenging as a client programmer finds the web, where you have to recreate the state on each request because the web is stateless, a developer who primarily programs the web sometimes finds client development challenging because the state is like that ugly sidekick next to the hot one at a bar – it…just…won’t…go…away.

I realized then, that I’m just a web developer at heart and I’d rather make web, not war. It was around that time that I started the Subtext project where I felt more in my element working on a web application. Eventually, I stopped using RSS Bandit preferring a web based solution in Google Reader, ironically, because the state of my feeds is always there, in the cloud, without me needing to synchronize or install an app when I’m at a new computer.

So while I actually like (or maybe am just accustomed to) the stateless programming model of the web, I’m also attracted to the statefulness of web applications as a whole in that the state of my data is not tied to any one machine but it’s stored centrally where I can easily get to it from anywhere (which yes, has its own concerns and problems such as when the net is down).

At the same time, I do check in now and then to see how RSS Bandit is progressing. There are very cool features that it has that I miss out on with Google Reader such as the ability to comment directly from the aggregator via the Comment API and the ability to subscribe to authenticated feeds. And I think Dare’s taking RSS Bandit into compelling new directions.

All this is to say that if you want to become a better developer, join an open source project (such as this one :) because it might just show you exactly what type of developer you are at heart. As I learned, I’m a web developer at heart.

comments edit

As I’m sure you know, we developers are very particular people and we like to have things exactly our way. How else can you explain the long-winded, impassioned debates over curly brace placement?

So it comes as no surprise that developers really care about what goes in (and behind) their .aspx files, whether they be pages in Web Forms or views in ASP.NET MVC.

For example, some developers are adamant that a page should not include server side script blocks, while others don’t want their views to contain Web Form controls. Wouldn’t it be great if you could have your views reject such code constructs?

Fortunately, ASP.NET is full of lesser known extensibility gems which can help in such situations such as the PageParseFilter. MSDN describes this class as such:

Provides an abstract base class for a page parser filter that is used by the ASP.NET parser to determine whether an item is allowed in the page at parse time.

In other words, implementing this class allows you to go along for the ride as the page parser parses the .aspx file and gives you a chance to hook into that parsing.

For example, here’s a very simple filter which blocks any script tags with the runat="server" attribute set within a page.

using System;
using System.Web.UI;

public class MyPageParserFilter : PageParserFilter {
  public override bool ProcessCodeConstruct(CodeConstructType codeType
    , string code) {
    if (codeType == CodeConstructType.ScriptTag) {
      throw new InvalidOperationException("Say NO to server script blocks!");
    }
    return base.ProcessCodeConstruct(codeType, code);
  }

  public override bool AllowCode {
    get {
      return true;
    }
  }

  public override bool AllowControl(Type controlType, ControlBuilder builder)   {
    return true;
  }

  public override bool AllowBaseType(Type baseType) {
    return true;
  }

  public override bool AllowServerSideInclude(string includeVirtualPath) {
    return true;
  }

  public override bool AllowVirtualReference(string referenceVirtualPath
    , VirtualReferenceType referenceType) {
    return true;
  }

  public override int NumberOfControlsAllowed {
    get {
      return -1;
    }
  }

  public override int NumberOfDirectDependenciesAllowed {
    get {
      return -1;
    }
  }
}

Notice that we had to override some defaults for other properties we’re not interested in such as NumberOfControlsAllowed or we’d get the default of 0 which is not what we want in this case.

To apply this filter, just specify it in the <pages /> section of web.config like so:

<pages 
  pageParserFilterType="Namespace.MyPageParserFilter, AssemblyName" />

Applying a parse filter for Views in ASP.NET MVC is a bit trickier because it already has a parse filter registered, ViewTypeParserFilter, which handles part of the voodoo black magic in order to remove the need for code-behind in views when using a generic model type. Remember those particular developers I was talking about?

Suppose we want to prevent developers from using server controls which make no sense in the context of an ASP.NET MVC view. Ideally, we could simply inherit from ViewTypeParserFilter and make our change so we don’t lose the existing view functionality.

That type is internal so we can’t simply inherit it. Fortunately, what we can do is simply grab the ASP.NET MVC source code for that type, rename the type and namespace, and then change it to meet our needs. Once we’re done, we can even share those changes with others. This is one of the benefits of having an open source license for ASP.NET MVC.

WARNING: The fact that we implement a ViewTypeParserFilter is an implementation detail. The goal is that in the future, we wouldn’t need this filter to provide the nice generic syntax. So what I’m about to show you might be made obsolete in the future and should be done at your own risk. It’s definitely running with scissors.

In my demo, I copied the following files to my project:

  • ViewTypeParserFilter
  • ViewTypeControlBuilder
  • ViewPageControlBuilder
  • ViewUserControlControlBuilder

I then created a new parser filter which inherits the ViewTypeParserFilter and overrode the AllowControl method like so:

public override bool AllowControl(Type controlType, ControlBuilder builder) {
  return (controlType == typeof(HtmlHead) 
    || controlType == typeof(HtmlTitle)
    || controlType == typeof(ContentPlaceHolder)
    || controlType == typeof(Content)
    || controlType == typeof(HtmlLink));
}

This will block adding any control except for those necessary in creating a typical view. You can imagine later adding some easy way of configuring that list in case you do later allow other controls.

Once we’ve implemented this new filter, we can edit the Web.config file within the Views directory to set the parser filter to this one.
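For example, the relevant bit of the Views\Web.config might look like this (the type and assembly names are placeholders for whatever you named your filter):

<pages 
  pageParserFilterType="MyApp.LockedDownViewParserFilter, MyApp" />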

This is a powerful tool for hooking into the parsing of a web page, so do be careful with it. As you might expect, I have a very simple demo of this feature here.

comments edit

At long last, the book that I worked on with Scott Hanselman, Rob Conery, and Scott Guthrie is in stock at Amazon.com.

To commemorate the book being available, the two Scotts worked very hard to convert the free eBook of chapter 1 (the end-to-end walkthrough) from the PDF into a series of HTML articles.

This is a great series which walks through the construction of the NerdDinner website. It touches upon most of the day-to-day aspects of ASP.NET MVC that you’ll want to know. It’s a great way to start understanding how the pieces largely fit together.

The rest of the book is for those who want to drill deep into the details of how the framework works. We tried to pepper the book with notes and anecdotes from the product team in a style similar to the annotations in the Framework Design Guidelines book. If you’re looking for reasons not to buy the book, see Rob’s post.

The bad news is that despite our heroic efforts, in which we cajoled, begged, pleaded, and rent our clothes asunder, we were not able to convince our editors to produce a special platinum extended forehead edition of the book. I really thought we could charge extra for the limited edition cover and make gangbusters. I’ll just post it here so you can see for yourself. Click on it for a larger view.

code, asp.net mvc, dlr comments edit

Say you’re building a web application and you want, against your better judgment perhaps, to allow end users to easily customize the look and feel – a common scenario within a blog engine or any hosted application.

With ASP.NET, view code tends to be some complex declarative markup stuck in a file on disk which gets compiled by ASP.NET into an assembly. Most system administrators would first pluck out their own toenail rather than allow an end user permission to modify such files.

It’s possible to store such files in the database and use a VirtualPathProvider to load them, but that requires your application (and thus their views) to run in full trust. Is there a way you could safely store such views in the database in an application running in medium trust where the code in the view is approachable?

At the ALT.NET conference a little while back, Jimmy Schementi and John Lam gave a talk about the pattern of hosting a scripting language within a larger application. For example, many modern 3-D Games have their high performance core engine written in C++ and Assembly. However, these games often use a scripting language, such as Lua, to write the scripts for the behaviors of characters and objects.

An example that might be more familiar to more people is the use of VBA to write macros for Excel. In both of these cases, the larger application hosts a scripting environment that allow end users to customize the application using a simpler lighter weight language than the one the core app is written in.

A long while back, I wrote a blog post about defining ASP.NET MVC Views in IronRuby followed by a full IronRuby ASP.NET MVC stack. While there was some passionate interest by a few, in general, I was met with the thunderous sound of crickets. Why the huge lack of interest? Probably because I didn’t really sell the benefit and the explain the pain it solves. I’m sure many of you were asking, Why bother? What’s in it for me?

After thinking about it some more, I realized that my prototypes appeared to suggest that if you want to take advantage of IronRuby, you would need to make some sort of wholesale switch to a new foreign language, not something to be undertaken lightly.

This is why I really like Jimmy and John’s recent efforts to focus on showing the benefits of hosting the DLR for scripting scenarios like the ones mentioned above. It makes total sense to me when I look at it in this perspective. The way I see it, most developers spend a huge bulk of their time in a single core language, typically their “language of choice”. For me, I spend the bulk of my time writing C# code.

However, I don’t think twice about the fact that I also write tons of JavaScript when I do web development, and I’ll write the occasional VB code when I need a new Macro for Visual Studio or Excel. I also write SQL when I need to. I’m happy to pick up and use a new language when it will enable me to do the job at hand more efficiently and naturally than C# does. I imagine many developers feel this way. The occasional use of a scripting languages is fine when it gets the job done and I can still spend most of my time in my favorite language.

So I started thinking about how that might work in a web application. What if you could write all your business logic and controller logic in your language of choice, but have your views written in a lightweight scripting language? If my web application were to host a scripting engine, I could store the view code in any medium I want, such as the database. Having views in the database makes it very easy for end users to modify them, since doing so wouldn’t require file upload permissions to the web root.

This is where hosting the DLR is a nice fit. I put together a proof of concept for these ideas. This is just a prototype intended to show how such a workflow might work. In this prototype, you go about creating your models and controllers the way you normally would.
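Before walking through the prototype, here’s a minimal sketch of the core mechanism: evaluating Ruby source supplied as a string via the DLR hosting API. The IronRuby hosting calls here are from memory, so treat the exact details as an assumption:

using System;
using IronRuby;
using Microsoft.Scripting.Hosting;

class RubyViewHost {
  static void Main() {
    // Create an IronRuby engine via the DLR hosting API.
    ScriptEngine engine = Ruby.CreateEngine();

    // The "view" source could come from anywhere, such as a database row.
    string viewCode = "\"Hello, #{model}!\"";

    // Expose view data to the script through a scope.
    ScriptScope scope = engine.CreateScope();
    scope.SetVariable("model", "world");

    // Evaluate the template and grab the rendered result.
    string rendered = engine.Execute<string>(viewCode, scope);
    Console.WriteLine(rendered); // Hello, world!
  }
}

With that core mechanism in mind, the prototype workflow looks like the following.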

For example, here’s a controller that returns some structured data to the view in the form of an anonymous type.

public ActionResult FunWithScripting()
{
  var someData = new { 
    salutation = "Are you having fun with scripting yet?", 
    theDate = DateTime.Now,
    numbers = new int[] { 1, 2, 3, 4 } 
  };

  return View(someData);
}

Once you write your controller, but before you create your view, you compile the app and then go visit the URL.

View does not exist

We haven’t created the view yet, so let’s follow the instructions and log in. Afterwards, we see this:

View editor

Since the view doesn’t exist, I hooked in and provided a temporary view for the controller action which contains a view editor. Notice that at the bottom of the screen, you can see the current property names and values being passed to the view. For example, there’s an enumeration of integers as one property, so I was able to use the Ruby each method to print them out in the view.
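For instance, the loop over that property in the view might have looked something like this (my reconstruction of the ERB-style syntax, not the exact code from the screenshot):

<ul>
<% numbers.each do |n| %>
  <li><%= n %></li>
<% end %>
</ul>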

The sweet little browser-based source code editor is named Edit Area, created by Christophe Dolivet. Unfortunately, at the time I write this, it doesn’t yet have support for ERB-style syntax highlighting schemes. That’s why the <% and %> aren’t highlighted in yellow.

When I click Create View, I get taken back to the request for the same action, but now I can see the view I just created (click to enlarge).

Fun with scripting

In the future, I should be able to host C# views in this way. Mono already has a tool for dynamically compiling C# code passed in as a string which I could try and incorporate.

I’m seriously thinking of making this the approach for building skins in a future version of Subtext. That would make skin installation drop dead simple and not require any file directory access. Let me know if you make use of this technique in your applications.

If you try and run this prototype, please note that there are some quirky caching issues with editing existing views in the prototype. It’ll seem like your view is not being edited, but it’s a result of how views are being cached. It might take a bit of time before your edits show up. I’m sure there are other bugs I’m still in the process of fixing. But for the most part, the general principle is sound.

You can download the prototype here.

comments edit

Because of all the travel I did last year as well as the impending new addition to the family this year, I drastically cut down on my travel this year. There are only two conferences outside of Redmond I planned to speak at, one was Mix (see the links to videos of my talks) and the next one is the Norwegian Developer Conference also known as the NDC.

Hanselman spoke at this conference last year and tells me it’s a good one. Besides, it’s in Norway! I’ve travelled through Norway once during college, taking a train from Oslo to Bergen, riding a boat on the fjords, and enjoying the profound natural beauty of the country. I guess I have a thing for cold places. :)

I’m pretty excited about the speaker line-up which includes a lot of great .NET and ALT.NET speakers you know and love. But what’s really got me perked up are the speakers outside of the typical .NET conference lineup.

One of my favorite conferences last year was Google IO which was a refreshing change of pace for me. For the NDC, I requested to stay an extra day so I could make sure to catch sessions by Mary Poppendieck, Robert “Uncle Bob” Martin, and Michael Feathers among others.

It looks like all my talks are on Day 1 of the conference. I’ll be updating my Black Belt Ninja Tips ASP.NET MVC talk, talking about Ajax in the context of ASP.NET MVC, and giving a joint talk with Scott Hanselman, which we’re still figuring out the exact details on.

My only concern is whether I need to worry if Jeremy Miller is going to try and express his man love for me while there. ;) Kidding aside, I’m approaching with a mind ready to absorb knowledge. If you’re in the area, definitely consider this as a conference to check out. It should be fun!

comments edit

What responsibility do we have as software professionals when we post code out there for public consumption?

I don’t have a clear cut answer in my mind, but maybe you can help me formulate one. :)

For example, I recently posted a sample on my blog intended to show how to use jQuery Grid with ASP.NET MVC.

The point of the sample was to demonstrate shaping a JSON result for the jQuery grid’s consumption. For the sake of illustration, I wanted the action method to be relatively self contained so that a reader would quickly understand what’s going on in the code without having to jump around a lot.

Thus the code takes some shortcuts with data access, lack of exception handling, and lack of input validation. It’s pretty horrific!

Now before we grab the pitchforks (and I did say “we” intentionally as I’ll join you) to skewer me, I did preface the code with a big “warning, DEMO CODE AHEAD” disclaimer and so far, nobody’s beaten me up too bad about it, though maybe by writing this I’m putting myself in the crosshairs.

Even so, it did give me pause to post the code the way I did. Was I making the right trade-off in sacrificing code quality for the sake of blog post demo clarity and brevity?

In this particular case, I felt it was worth it as I tend to categorize code into several categories. I’m not saying these are absolutely correct, just opening up my cranium and giving you a peek in my head about how I think about this:

  • Prototype Code – Code used to hash out an idea to see if it’s feasible or as a means of learning a new technology. Often very ugly throwaway code with little attention paid to good design.
  • Demo Code – Code used to illustrate a concept, especially in a public setting. Like prototype code, solid design is sometimes sacrificed for clarity, but these sacrifices are deliberate and intentional, which is very important. My jQuery Grid demo above is an example of what I mean.
  • Sample Code – Very similar to demo code, the difference being that good design principles should be demonstrated for the code relevant to the concept the sample is demonstrating. Code irrelevant to the core concept might be fine to leave out or have lower quality. For example, if the sample is showing a data access technique, you might still leave out exception handling, caching, etc… since it’s not the goal of the sample to demonstrate those concepts.
  • Production Code – Code you’re running your business on, or selling. Should be as high quality as possible given your constraints. Sometimes, shortcuts are taken in the short run (incurring technical debt) with the intention of paying down the debt ASAP.
  • Reference Code – This is code that is intended to demonstrate the correct way to build an application and should be almost idealized in its embracement of good design practices.

As you might expect, the quality the audience might expect from these characterizations is not hard and fast, but dependent on context. For example, for the Space Shuttle software, I expect the Production Code to be much higher quality than production code for some intranet application.

Likewise, I think where the code is posted, and by whom, can affect perception. We might expect much less from some blowhard posting code to his personal blog, ummm, like this one.

Then again, if the person claims that his example is a best practice, which is a dubious claim in the first place, we may tend to hold it to much higher standards.

Now if instead of a person, the sample is posted on an official website of a large company, say Microsoft, the audience may expect a lot more than from a personal blog post. In fact, the audience may not make the distinction between sample and reference application. This appears to be the case recently with Kobe and in the past with Oxite.

Again, this is my perspective on these things. But my views have been challenged recently via internal and external discussions with many people. So I went to the font of all knowledge where all your wildest questions are answered: Twitter. I posed the following two questions:

Do you have different quality expectations for a sample app vs a reference app?

What if the app is released by MS? Does that change your expectations?

The answers varied widely. Here’s a small sampling that represents the general tone of the responses I received.

Yes. A sample app should be quick and dirty. A reference app should exhibit best practices (error checking, logging, etc)

No, same expectations… Even I ignore what is the difference between both.

Regardless of who releases the app, my expectations don’t change.

Yes being from MS raises the bar of necessary quality, because it carries with it the weight of a software development authority.

I don’t think I have ever thought about what the difference in the two is, isn’t a sample app basically a reference app?

I don’t think most people discriminate substantively betw the words “sample” and “reference.”

Everyone, Microsoft included, should expect to be judged by everything they produce, sample or otherwise.

yes, samples do not showcase scalability or security, but ref apps do… i.e ref apps are more “enterprisey”

IMHO, sample implies a quick attempt; mostly throw-away. Ref. implies a proposed best practice; inherently higher quality.

No. We as a community should understand the difference. However MS needs to apply this notion consistently to its examples.

Whatever you release as sample code, is *guaranteed* to be copy-pasted everywhere - ask Windows AppCompat if you don’t believe me

Note that this is a very unscientific sampling, but there is a lot of diversity in the views being expressed here. Some people make no distinction between sample and reference while others do. Some hold Microsoft to higher standards while others hold everybody to the same standard.

I found this feedback to be very helpful because I think we tend to operate under one assumption about how our audience sees our samples, but your audience might have a completely different view. This might explain why there may be miscommunication and confusion about the community reaction to a sample.

I highlighted the last two responses because they make up the core dichotomy in my head regarding releasing samples.

On the one hand, I have tended to lean towards the first viewpoint. If code has the proper disclaimer, shouldn’t we take personal responsibility in understanding the difference?

Ever since starting work on ASP.NET MVC, we’ve been approached by more and more teams at Microsoft who are interested in sharing yet more code on CodePlex (or otherwise) and want to hear about our experiences and challenges in doing so.

When you think about it, this is a great change in what has been an otherwise closed culture. There are a lot of teams at Microsoft, and the quality and intent of the code will vary from team to team and project to project. I would hate to slow down that stream of sample code flowing out because some people will misunderstand its purpose and intent and cut and paste it. Yes, some of the code will be very bad, but some of it will still be worth putting out there. After all, I tend to think that if we stop giving the bad programmers bad code to cut and paste, they’ll simply write the bad code themselves. Yes, posting good code is even better, but I think that will be a byproduct of getting more code out there.

On the other hand, there’s the macro view of things to consider. People should also know not to use a hair dryer in the shower, yet hair dryers still carry those funny warning labels for a reason. The fact that people shouldn’t do something doesn’t change the fact that they may still do it. We can’t simply ignore that fact and the impact it may have. No matter how many disclaimers we put on our code, people will cut and paste it. It’s not so bad that a bad programmer uses bad code; the real problem is that as it propagates, the code gets confused with the right way and spreads to many programmers.

Furthermore, the story is complicated even more by the inconsistent labels applied to all this sample code, not to mention the inconsistent quality.

So What’s the Solution?

Stop shipping samples.

Nah, I’m just kidding. ;)

Some responses were along the lines of Microsoft should just post good code. I agree, I would really love it if every sample was of superb quality. I’d also like to play in a World Cup and fly without wings, but I don’t live in that world.

Obviously, this is what we should be striving for, but what do we do in the meantime? Stop shipping samples? I hope not.

Again, I don’t claim to have the answers, but I think there are a few things that could help. One twitter response made a great point:

a reference app is going to be grilled. Even more if it comes from the mothership. Get the community involved *before* it gets pub

Getting the community involved is a great means of having your code reviewed to make sure you’re not doing anything obviously stupid. Of course, even in this, there’s a challenge. Jeremy Miller made this great point recently:

We don’t have our own story straight yet.  We’re still advancing our craft.  By no means have we reached some sort of omega point in our own development efforts. 

In other words, even with community involvement, you’re probably going to piss someone off. But avoiding piss is not really the point anyways (though it’s much preferred to the alternative). The point is to be a participant in advancing the craft alongside the larger community. Others might disagree with some of your design decisions, but hopefully they can see that your code is well considered via your involvement with the community in the design process.

This also helps in avoiding the perception of arrogance, a fault that some feel is the root cause of why some of our sample apps are of poor quality. Any involvement with the community will help make it very clear that there’s much to learn from the community just as there is much to teach.

While I think getting the community involved is important, I’m still on the fence on whether it must happen before the code is published. After all, isn’t publishing code a means of getting community involvement in the first place? As Dare says:

getting real feedback from customers by shipping is more valuable than any amount of talking to or about them beforehand

Personally, I would love for there to be a way for teams to feel free to post samples (using the definition I wrote), without fear of misconstrued intent and bad usage. Ideally in a manner where it’s clear that the code is not meant for cut and paste into real apps.

Can we figure out a responsible way to post code samples that are not yet the embodiment of good practices, with the intent to improve the code quality based on community feedback? Is this even a worthy goal, or should Microsoft samples just get it right the first time, as mentioned before, or not be posted at all?

Perhaps both of those are pipe dreams. I’m definitely interested in hearing your thoughts. :)

Another question I struggle with is what causes people to not distinguish between reference apps and sample apps. Is there no distinction to make? Or is this a perception problem that could be corrected with a concerted effort to apply such labels consistently, or via some other means?

As you can see, I have my own preconceived notions about those things, but I’m putting them out there and challenging them based on what I’ve read recently. Please do comment and let me know your thoughts.

code, asp.net mvc comments edit

Tim Davis posted an updated version of this solution on his blog. His version includes the following:

  • jqGrid 3.8.2
  • .NET 4.0 Updates
  • VS2010
  • jQuery 1.4.4
  • jQuery UI 1.8.7

Continuing in my pseudo-series of posts based on my ASP.NET MVC Ninjas on Fire Black Belt Tips Presentation at Mix (go watch it!), this post covers a demo I did not show because I ran out of time. It was a demo I held in my back pocket just in case I went too fast and needed one more demo.

A common scenario when building web user interfaces is providing a pageable and sortable grid of data. Even better if it uses AJAX to make it more responsive and snazzy. Since ASP.NET MVC includes jQuery, I figured it’d be fun to use a jQuery plugin for this demo, so I chose jQuery Grid.

After creating a standard ASP.NET MVC project, the first step was to download the plugin and to unzip the contents to my scripts directory per the Installation instructions.

jquery-grid-scripts

For the purposes of this demo, I’ll just implement this using the Index controller action and view within the HomeController.

With the scripts in place, go to the Index view and add the proper call to initialize the jQuery grid. There are three parts to this:

First, make sure to add the required script and CSS declarations.

<link rel="stylesheet" type="text/css" href="/scripts/themes/coffee/grid.css" 
  title="coffee" media="screen" />
<script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="/Scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="/Scripts/js/jqModal.js" type="text/javascript"></script>
<script src="/Scripts/js/jqDnR.js" type="text/javascript"></script>

Notice that the first line contains a reference to the “coffee” CSS file. There are multiple themes included and when you choose a theme, you need to be sure to include the theme’s CSS file. I chose coffee, because I drink a lot of it.

The second step is to initialize the grid with a bit of JavaScript. This looks a bit funky if you’re not used to jQuery, but I assure you, it’s pretty straightforward.

<script type="text/javascript">
    jQuery(document).ready(function(){ 
      jQuery("#list").jqGrid({
        url:'/Home/GridData/',
        datatype: 'json',
        mtype: 'GET',
        colNames:['Id','Votes','Title'],
        colModel :[
          {name:'Id', index:'Id', width:40, align:'left' },
          {name:'Votes', index:'Votes', width:40, align:'left' },
          {name:'Title', index:'Title', width:200, align:'left'}],
        pager: jQuery('#pager'),
        rowNum:10,
        rowList:[5,10,20,50],
        sortname: 'Id',
        sortorder: "desc",
        viewrecords: true,
        imgpath: '/scripts/themes/coffee/images',
        caption: 'My first grid'
      }); 
    }); 
</script>

There are a few things you’ll have to be sure to configure here. First is the url property which points to the URL that will provide the JSON data. Notice that the value is /Home/GridData which means we’ll be implementing an action method named GridData soon. During the course of this post, we’ll change that property to point to different action methods.

The colNames property contains the display names for each column, separated by commas. Ideally it should match up with the items in the colModel property.

The colModel property is an array that is used to configure each column of the grid, allowing you to specify the width, alignment, and sortability of a column. The index property of a column is an important one as that is the value that is sent to the server when sorting on a column.

See the documentation for more details on the HTML and JavaScript used to configure the grid.

The third step is to add a bit of HTML to the page which will house the grid.

<h2>My Grid Data</h2>
<table id="list" class="scroll" cellpadding="0" cellspacing="0"></table>
<div id="pager" class="scroll" style="text-align:center;"></div>

With this in place, it’s time to implement the GridData action method to return the JSON in the proper format.

But first, let’s take a look at the JSON format expected by the grid. From the documentation, you can see it will look something like:

{ 
  total: "xxx", 
  page: "yyy", 
  records: "zzz",
  rows : [
    {id:"1", cell:["cell11", "cell12", "cell13"]},
    {id:"2", cell:["cell21", "cell22", "cell23"]},
      ...
  ]
}

The documentation I linked to also provides some gnarly looking PHP code you can use to generate the JSON data. Fortunately, you won’t have to deal with that. By using the Json helper method with an anonymous object, we can write relatively clean code that looks almost exactly like the spec. Here’s my first cut of the action method, just to get it to display some fake data.

public ActionResult GridData(string sidx, string sord, int page, int rows) {
  var jsonData = new {
    total = 1, // we'll implement later 
    page = page,
    records = 3, // implement later 
    rows = new[]{
      new {id = 1, cell = new[] {"1", "-7", "Is this a good question?"}},
      new {id = 2, cell = new[] {"2", "15", "Is this a blatant ripoff?"}},
      new {id = 3, cell = new[] {"3", "23", "Why is the sky blue?"}}
    }
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

A couple of things to point out. The arguments to the action methods are named according to the query string parameter names that jQuery grid sends via the Ajax request. I didn’t choose those names.

By naming the arguments to the action method exactly the same as what is in the query string, we have a very convenient way to retrieve these values. Remember, arguments passed to an action method should be treated with care. Never trust user input!
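To make that concrete, here’s a minimal defensive sketch of the kind of up-front checks you might add. The bounds are arbitrary choices for illustration, not anything jqGrid requires:

// A sketch of basic input validation for the grid parameters.
// The 1-100 row clamp is an arbitrary bound chosen for this demo.
private static void ValidateGridInput(ref int page, ref int rows, ref string sord) {
  if (page < 1) page = 1;                            // pages are 1-based
  if (rows < 1 || rows > 100) rows = 10;             // clamp the page size
  if (sord != "asc" && sord != "desc") sord = "asc"; // whitelist sort order
}

Calling ValidateGridInput(ref page, ref rows, ref sord) at the top of the action keeps obviously bogus values from flowing any further.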

In this example, we statically create some JSON data and use the Json helper method to return the data back to the grid and Voila! It works!

jquery-grid-demo

Yeah, this is great for a simple demo, but I use a real database to store my data! Understood. It’s time to hook this up to a real database. As you might guess, I’ll use the HaackOverflow database for this demo as well as LinqToSql.

I’ll assume you know how to add a database and create a LinqToSql model already. If not, look at the source code I’ve included. Once you’ve done that, it’s pretty easy to transform the data we get back into the proper JSON format.

public ActionResult LinqGridData(string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();

  var jsonData = new {
    total = 1, //todo: calculate
    page = page,
    records = context.Questions.Count(),
    rows = (
      from question in context.Questions
      select new {
        id = question.Id,
        cell = new string[] { 
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
      }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Note that the method is a tiny bit busier, but it follows the same basic structure as the JSON data. After changing the JavaScript code in the view to point to this action instead of the other, we can now see the first ten records from the database in the grid.

But we’re not done yet. At this point, we want to implement paging and sorting. Paging is pretty easy, but sorting is a bit tricky. After all, what we get passed into the action method is the name of the sort column. At that point, we want to dynamically create a LINQ expression that sorts by that column.

One easy way to do this is to use the Dynamic Linq Query library which ScottGu wrote about a while back. This library adds extension methods which make it easy to create more dynamic Linq queries based on strings. Of course, with great power comes great responsibility. Make sure to validate the strings before you pass them into the methods. With this in place, we rewrite the action method to be (warning, DEMO CODE AHEAD!):

public ActionResult DynamicGridData
    (string sidx, string sord, int page, int rows) {
  var context = new HaackOverflowDataContext();
  int pageIndex = Convert.ToInt32(page) - 1;
  int pageSize = rows;
  int totalRecords = context.Questions.Count();
  int totalPages = (int)Math.Ceiling((float)totalRecords / (float)pageSize);

  var questions = context.Questions
    .OrderBy(sidx + " " + sord)
    .Skip(pageIndex * pageSize)
    .Take(pageSize);

  var jsonData = new {
    total = totalPages,
    page = page,
    records = totalRecords,
    rows = (
      from question in questions
      select new {
        id = question.Id,
        cell = new string[] {
          question.Id.ToString(), question.Votes.ToString(), question.Title 
        }
    }).ToArray()
  };
  return Json(jsonData, JsonRequestBehavior.AllowGet);
}

Some things to note: The first part of this method does some initial calculations to figure out the number of pages we’re dealing with based on the page size (passed in) and the total record count.

Then given that info, we use the Dynamic Linq extension methods to do the actual paging and sorting via the line:

var questions = context.Questions.OrderBy(…).Skip(…).Take(…);

Once we have that, we can simply transform that into the array that jQuery grid expects and place that in the larger JSON payload represented by the jsonData variable.
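And to follow through on the earlier warning about validating the strings: since sidx and sord are concatenated directly into the OrderBy call, a simple whitelist goes a long way. Here’s a minimal sketch; the column names are just the ones this demo’s grid exposes, and it assumes a using System.Linq directive for Contains:

private static readonly string[] _sortableColumns = { "Id", "Votes", "Title" };

private static string SafeOrderBy(string sidx, string sord) {
  // Only known column names and sort directions make it into the
  // ordering expression; anything else falls back to a default.
  string column = _sortableColumns.Contains(sidx) ? sidx : "Id";
  string direction = sord == "desc" ? "desc" : "asc";
  return column + " " + direction;
}

The ordering line then becomes context.Questions.OrderBy(SafeOrderBy(sidx, sord)).Skip(pageIndex * pageSize).Take(pageSize).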

With all this in place, you now have a pretty snazzy approach to paging and sorting data using AJAX. Now go forth and wow your customers. ;)

And before I forget, here’s the sample project that uses all three approaches.

personal comments edit

Every good developer knows to always have a backup. For example, over two years ago, I announced my world domination plans. But there was a single point of failure in me putting all my world domination plans on the tiny shoulders of just one progeny. My boy needs a partner in crime.

mia

So my wife and I conspired together and we’re happy to announce that baby #2 is on the way. Together, the two of them will be unstoppable!

My wife is past her first trimester and we expect the baby to RTF (Release To Family) around October.

This second time around has been a bit more challenging. My poor wife, bless her heart, has had to deal with much more severe nausea than before.

Notice the crinkle in the ultrasound photo. My son did that. ;) He’s trying to destroy the evidence.

Many of you who have more than one child might be able to relate to this, but I really considered not writing a blog announcement for my second child. There was the feeling that it was such a novel thing the first time, but now it’s becoming old hat (not really!).

But then I mentally fast-forwarded 16 years and pictured my future daughter finding the firstborn announcement and not finding her own blog announcement. Try explaining that to a kid. I can do without the drama.

Well I won’t have to! Hi honey, here’s your announcement. Now you be good while daddy goes shopping for a shotgun and a shovel. :)

asp.net, asp.net mvc comments edit

There are a couple of peculiarities worth understanding when dealing with title tags and master pages within Web Forms and ASP.NET MVC. These assume you are using the HtmlHead control, aka <head runat="server" />.

The first peculiarity involves a common approach where one puts a ContentPlaceHolder inside of a title tag like we do with the default template in ASP.NET MVC:

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" />
  </title>
</head>
...

What’s nice about this approach is you can set the title tag from within any content page.

<asp:Content ContentPlaceHolderID="titleContent" runat="server">
  Home
</asp:Content>

But what happens if you want to set part of the title from within the master page? For example, you might want the title of every page to end with a suffix, “ – MySite”.

If you try this (notice the – MySite tacked on):

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" /> - MySite
  </title>
</head>
...

And run the page, you’ll find that the – MySite is not rendered. This appears to be a quirk of the HtmlHead control: the title tag within the HtmlHead control is now itself a control. This will be familiar to those who understand how the AddParsedSubObject method works. Effectively, the only content allowed within the body of the HtmlHead control is other controls.

The fix is pretty simple. Add your text to a LiteralControl like so.

<%@ Master ... %>
<html>
<head runat="server">
  <title>
    <asp:ContentPlaceHolder ID="titleContent" runat="server" /> 
    <asp:LiteralControl runat="server" Text=" - MySite" />
  </title>
</head>
...

The second peculiarity has to do with how the HtmlHead control really wants to produce valid HTML markup.

If you leave the <head runat="server"></head> tag empty, and then view source at the rendered output, you’ll notice that it renders an empty <title> tag for you. It looked at its child controls collection and saw that it didn’t contain an HtmlTitle control so it rendered one for you.

This can cause problems when attempting to use a ContentPlaceHolder to render the title tag for you. For example, a common layout I’ve seen is the following.

<%@ Master ... %>
<html>
<head runat="server">
  <asp:ContentPlaceHolder ID="headContent" runat="server"> 
    <title>Testing</title>  
  </asp:ContentPlaceHolder>
</head>
...

This approach is neat because it allows you to not only set the title tag from within any content page, but also add any other content you want within the <head> tag.

However, if you view source on the rendered output, you’ll see two <title> tags, one that you specified and one that’s empty.

Going back to what I wrote earlier, the reason becomes apparent. The HtmlHead control checks to see if it contains a child title control. When it doesn’t find one, it renders an empty one. However, it doesn’t look within the content placeholders defined within it to see if they’ve rendered a title tag.

This makes sense when you consider how the HtmlHead tag works. It only allows placing controls inside of it. However, a ContentPlaceHolder allows adding literal text in there. So while it looks the same, the title tag within the ContentPlaceHolder is not an HtmlTitle control. It’s just some text, and the HtmlHead control doesn’t want to parse all the rendered text from its children.

This is why I tend to take the following approach with my own master pages.

<%@ Master ... %>
<html>
<head runat="server">
  <title><asp:ContentPlaceHolder ID="titleContent" runat="server" /></title>
  <asp:ContentPlaceHolder ID="headContent" runat="server"> 
  </asp:ContentPlaceHolder>
</head>
...
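As an aside, since the head is a server control, a content page can also set its title programmatically from code-behind via the Page.Title property, which writes to the HtmlTitle control. A minimal sketch; note that this doesn’t mix well with the ContentPlaceHolder-inside-title approach shown above, so treat it as an alternative rather than a complement:

protected void Page_Load(object sender, EventArgs e) {
  // Page.Title writes to the HtmlTitle control inside <head runat="server">,
  // so the suffix shows up without any LiteralControl tricks.
  Page.Title = "Home - MySite";
}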

Happy Titling!

asp.net comments edit

In my last blog post, I walked step by step through a Cross-site request forgery (CSRF) attack against an ASP.NET MVC web application. This attack is the result of how browsers handle cookies and cross domain form posts and is not specific to any one web platform. Many web platforms thus include their own mitigations to the problem.

It might seem that if you’re using Web Forms, you’re automatically safe from this attack. While Web Forms has many mitigations turned on by default, it turns out that it does not automatically protect your site against this specific form of attack.

In the same sample bank transfer application I provided in the last post, I also included an example written using Web Forms which demonstrates the CSRF attack. After you log in to the site, you can navigate to /BankWebForm/default.aspx to try out the Web Form version of the transfer money page. It works just like the MVC version.

To simulate the attack, make sure you are running the sample application locally and make sure you are logged in and then click on http://haacked.com/demos/csrf-webform.html.

Here’s the code for that page:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
    <title></title>
</head>
<body>
  <form name="badform" method="post"
    action="http://localhost:54607/BankWebForm/Default.aspx">
    <input type="hidden" name="ctl00$MainContent$amountTextBox"
      value="1000" />
    <input type="hidden" name="ctl00$MainContent$destinationAccountDropDown"
      value="2" />
    <input type="hidden" name="ctl00$MainContent$submitButton"
      value="Transfer" />
    <input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET"
      value="" />
    <input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT"
      value="" />
    <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"
      value="/wEP...0ws8kIw=" />
    <input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION"
      value="/wEWBwK...+FaB85Nc" />
    </form>
    <script type="text/javascript">
        document.badform.submit();
    </script>
</body>
</html>

It’s a bit more involved, but it does the trick. It mocks up all the proper hidden fields required to execute a bank transfer on my silly demo site.

The mitigation for this attack is pretty simple and described thoroughly in this article by Dino Esposito as well as this post by Scott Hanselman. The change I made to my code behind based on Dino’s recommendation is the following:

protected override void OnInit(EventArgs e) {
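  // Key the view state MAC to the current user's session so that
  // view state captured from another session fails validation here.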
  ViewStateUserKey = Session.SessionID;
  base.OnInit(e);
}

With this change in place, the CSRF attack I put in place no longer works: because the view state is now keyed to the user’s session, the __VIEWSTATE blob the attacker captured from his own session fails validation when submitted with the victim’s session.

When you go to a real bank site, you’ll learn they have all sorts of protections in place above and beyond what I described here. Hopefully this post and the previous one provided some insight into why they do all the things they do. :)

Technorati Tags: asp.net,security

asp.net, code, asp.net mvc comments edit

A Cross-site request forgery attack, also known as CSRF or XSRF (pronounced sea-surf) is the less well known, but equally dangerous, cousin of the Cross Site Scripting (XSS) attack. Yeah, they come from a rough family.

CSRF is a form of confused deputy attack. Imagine you’re a malcontent who wants to harm another person in a maximum security jail. You’re probably going to have a tough time reaching that person due to your lack of proper credentials. A potentially easier approach to accomplish your misdeed is to confuse a deputy to misuse his authority to commit the dastardly act on your behalf. That’s a much more effective strategy for causing mayhem!

In the case of a CSRF attack, the confused deputy is your browser. After logging into a typical website, the website will issue your browser an authentication token within a cookie. Each subsequent request sends the cookie back to the site to let the site know that you are authorized to take whatever action you’re taking.

Suppose you visit a malicious website soon after visiting your bank website. Your session on the previous site might still be valid (though most bank websites guard against this carefully). Thus, visiting a carefully crafted malicious website (perhaps you clicked on a spam link) could cause a form post to the previous website. Your browser would send the authentication cookie back to that site and appear to be making a request on your behalf, even though you did not intend to do so.

Let’s take a look at a concrete example to make this clear. This example is the same one I demonstrated as part of my ASP.NET MVC Ninjas on Fire Black Belt Tips talk at Mix in Las Vegas. Feel free to download the source for this sample and follow along.

Here’s a simple banking website I wrote. If your banking site looks like this one, I recommend running away.

banking-login-page

The site properly blocks anonymous users from taking any action. You can see that in the code for the controller:

[Authorize]
public class HomeController : Controller
{
  //...
}

Notice that we use the AuthorizeAttribute on the controller (without specifying any roles) to specify that all actions of this controller require the user to be authenticated.

After logging in, we get a simple form that allows us to transfer money to another account in the bank. Note that for the sake of the demo, I’ve included an information disclosure vulnerability by allowing you to see the balance for other bank members. ;)

bank-transfer-screen

To transfer money to my Bookie, for example, I can enter an amount of $1000, select the Bookie account, and then click Transfer. The following shows the HTTP POST that is sent to the website (slightly edited for brevity):

POST /Home/Transfer HTTP/1.1
Referer: http://localhost:54607/csrf-mvc.html
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Host: 127.0.0.1:54607
Content-Length: 34
Cookie: .ASPXAUTH=98A250...03BB37

Amount=1000&destinationAccountId=3

There are three important things to notice here. We are posting to a well known URL, /Home/Transfer, we are sending a cookie, .ASPXAUTH, which lets the site know we are already logged in, and we are posting some data (Amount=1000&destinationAccountId=3), namely the amount we want to transfer and the account id we want to transfer to. Let’s briefly look at the code that executes the transfer.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Transfer(int destinationAccountId, double amount) {
  string username = User.Identity.Name;
  Account source = _context.Accounts.First(a => a.Username == username);
  Account destination = _context.Accounts.FirstOrDefault(
    a => a.Id == destinationAccountId);
            
  source.Balance -= amount;
  destination.Balance += amount;
  _context.SubmitChanges();
  return RedirectToAction("Index");
}

Disclaimer: Do not write code like this. This code is for demonstration purposes only. For example, I don’t ensure that the amount is non-negative, which means you can enter a negative value to transfer money from another account. Like I said, if you see a bank website like this, run!

The code is straightforward. We simply transfer money from one account to another. At this point, everything looks fine. We’re making sure the user is logged in before we transfer money. And we are making sure that this method can only be called from a POST request and not a GET request (this last point is important: never allow changes to data via a GET request). So what could go wrong?

Well, BadGuy, another bank user, has an idea. He sets up a website that has a page with the following code:

<html>
<head>
    <title></title>
</head>
<body>
    <form name="badform" method="post"
     action="http://localhost:54607/Home/Transfer">
        <input type="hidden" name="destinationAccountId" value="2" />
        <input type="hidden" name="amount" value="1000" />
    </form>
    <script type="text/javascript">
        document.badform.submit();
    </script>
</body>
</html>

What he’s done here is create an HTML page that replicates the fields of the bank transfer form as hidden inputs and then runs some JavaScript to submit the form. The form has its action set to post to the bank’s URL.

When you visit this page it makes a form post back to the bank site. If you want to try this out, I am hosting this HTML here. You have to make sure the website sample code is running on your machine before you click that link to see it working.

Let’s look at the contents of that form post.

POST /Home/Transfer HTTP/1.1
Referer: http://haacked.com/demos/csrf-mvc.html
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Host: 127.0.0.1:54607
Content-Length: 34
Cookie: .ASPXAUTH=98A250...03BB37

Amount=1000&destinationAccountId=2

It looks exactly the same as the one before, except the Referer is different. When the unsuspecting bank user visited the bad guy’s website, it recreated a form post to transfer funds, and the browser unwittingly sent the still active session cookie containing the user’s authentication information.

The end result is that I’m out of $1000 and BadGuy has his bank account increased by $1000. Drat!

It might seem that you could rely on checking the Referer to prevent this attack, but some proxy servers etc… will strip out the Referer field in order to maintain privacy. Also, there may be ways to spoof the Referer field. Another mitigation is to constantly change the URL used for performing sensitive operations like this.

In general, the standard approach to mitigating CSRF attacks is to render a “canary” in the form (typically a hidden input) that the attacker couldn’t know or compute. When the form is submitted, the server validates that the submitted canary is correct. Now this assumes that the browser is trusted since the point of the attack is to get the general public to misuse their own browser’s authority.

It turns out this is mostly a reasonable assumption since browsers do not allow using XmlHttp to make a cross-domain GET request. This makes it difficult for the attacker to obtain the canary using the current user’s credentials. However, a bug in an older browser, or in a browser plugin, might allow alternate means for the bad guy’s site to grab the current user’s canary.

The mitigation in ASP.NET MVC is to use the AntiForgery helpers. Steve Sanderson has a great post detailing their usage.

The first step is to add the ValidateAntiForgeryTokenAttribute to the action method. This will validate the “canary”.

[ValidateAntiForgeryToken]
public ActionResult Transfer(int destinationAccountId, double amount) {
  ///... code you've already seen ...
}

The next step is to add the canary to the form in your view via the Html.AntiForgeryToken() method.

The following shows the relevant section of the view.

<% using (Html.BeginForm("Transfer", "Home")) { %>
<p>
    <label for="Amount">Amount:</legend>
    <%= Html.TextBox("Amount")%>
</p>
<p>
    <label for="destinationAccountId">
      Destination Account:
    </legend>
    <%= Html.DropDownList("destinationAccountId", "Select an Account") %>
</p>
<p>
    <%= Html.AntiForgeryToken() %>
    <input type="submit" value="transfer" />
</p>
<% } %>

When you view source, you’ll see the following hidden input.

<input name="__RequestVerificationToken" 
  type="hidden" 
  value="WaE634+3jjeuJFgcVB7FMKNzOxKrPq/WwQmU7iqD7PxyTtf8H8M3hre+VUZY1Hxf" />

At the same time, we also issue a cookie with that value encrypted. When the form post is submitted, we compare the cookie value to the submitted verification token and ensure that they match.
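Conceptually, the server-side check boils down to something like the sketch below. To be clear, this is an illustration rather than the actual MVC implementation: the token names are simplified, and TokensMatch is a hypothetical helper standing in for the decryption and comparison the real AntiForgery helpers perform.

protected void ValidateCanary(HttpRequestBase request) {
  // The browser still sends the cookie on a cross-site post, but the
  // attacker can neither read it nor compute a matching form token,
  // so a forged request fails this comparison.
  HttpCookie cookie = request.Cookies["__RequestVerificationToken"];
  string formToken = request.Form["__RequestVerificationToken"];

  if (cookie == null || string.IsNullOrEmpty(formToken) ||
      !TokensMatch(cookie.Value, formToken)) { // hypothetical helper
    throw new HttpAntiForgeryException();
  }
}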

Should you be worried?

The point of this post is not to be alarmist, but to raise awareness. Most sites will never really have to worry about this attack in the first place. If your site is not well known or doesn’t manage valuable resources that can be transferred to others, then it’s not as likely to be targeted by a mass phishing attack by those looking to make a buck.

Of course, financial gain is not the only motivation for a CSRF attack. Some people are just a-holes and like to grief large popular sites. For example, a bad guy might use this attack to try and post stories on a popular link aggregator site like Digg.

One point I would like to stress is that it is very important to never allow any changes to data via GET requests. To understand why, check out this post as well as this story about the Google Web Accelerator.

What about Web Forms?

It turns out Web Forms are not immune to this attack by default. I have a follow-up post that talks about this and the mitigation.

If you missed the link to the sample code before, you can download the source here (compiled against ASP.NET MVC 2).

Technorati Tags: aspnetmvc,asp.net,csrf,security

code, humor comments edit

I’ve been relatively quiet on my blog lately in part because of all the work on ASP.NET MVC. However, the ASP.NET team is a relatively small team so we often are required to work on multiple features at the same time. So part of the reason I’ve been so busy is that while we were wrapping up ASP.NET MVC, I was also busy working on a core .NET Framework feature we plan to get into the next version (it was a feature that originated with our team, but we realized it belongs in the BCL).

The goal of the feature is to help deal with the very common task of handling string input. In many cases, the point is to convert the input into another type, such as an int or float. But how do you deal with the fact that the string might not be convertible to the other type?

We realized we needed a type to handle this situation. A type that would represent the situation after the user has submitted input, but before you attempt the conversion. At this point, you have a string or another type.

clip_image004_thumb

For more details on the StringOr<T> Community Technology Preview (CTP), please see details on lead developer Eilon Lipton’s Blog (he’s a big fan of cats as you can see). He provides source code and unit tests for download. As always, please do provide feedback as your feedback is extremely important in helping shape this nascent technology.

Tags: framework , .net

asp.net mvc, asp.net comments edit

First let me begin by assuring you, this is not an April Fool’s joke.

2871423645_2f690a0c61

Exciting news! Scott Guthrie announced today that we have released the source code for ASP.NET MVC 1.0 under the Ms-PL license, an OSI approved Open Source license with all the rights that license entails.

You can download the Ms-PL licensed source package from the download details page here. Just scroll down and look for the file named AspNetMvc1.Ms-PL.source.zip. My baby is growing up!

A big thanks must go out to everyone involved in making this happen and to those who approved it. It’s truly a team effort. When I joined Microsoft, I remember walking into ScottGu’s office to try and plant the seed for releasing ASP.NET MVC under the Ms-PL license. I came in armed with reasons why we should, but found him to be immediately receptive, if not already thinking along those lines. In fact, a lot of people such as Brian Goldfarb, my management chain, our LCA contact, etc… were completely on board, which was unexpected (though maybe it should not have been) and encouraging to me.

However, there’s a difference between agreeing to do something and actually doing it. It still took a lot of people doing the leg-work to make it happen. It personally kept me very busy in the days leading up to the official RTM release. Let’s just say I feel like I’m one course away from getting a law degree.

I know one of the first questions some of you will ask is whether we will accept source code contributions (I’ve already seen the question on Twitter :). Unfortunately, at this time the answer is no, we do not accept patches. Please don’t let that stop you from contributing in other ways. The terms of the license do mean we need to stay on our toes to keep putting out compelling releases, and we will work hard not to disappoint.

Personally (and this is totally my own opinion), I’d like to reach the point where we could accept patches. There are many hurdles in the way, but if you went back in time several years and told people that Microsoft would release several open source projects (Ajax Control Toolkit, MEF, DLR, IronPython and IronRuby, etc….) you’d have been laughed back to the present. Perhaps if we could travel to the future a few years, we’ll see a completely different landscape from today.

However, it is a complex issue and I don’t want to downplay that, but there are many of us who are looking for novel solutions and trying to push things forward. I really think in the long run it is good for us and for our customers, otherwise we wouldn’t care.

But coming back to the present, I’m extremely pleased with where we are now and look forward to what will happen in the future. Someone once expressed disappointment that my involvement in open source projects seriously declined after joining Microsoft. It was my hope at the time that by the time ASP.NET MVC was released, it would be clear that, technically, I had been working on OSS. :)