
The .NET Framework provides support for managing transactions from code via the System.Transactions infrastructure. Performing database operations in a transaction is as easy as writing a using block with the TransactionScope class.

using (TransactionScope transaction = new TransactionScope())
{
  DoSomeWork();
  SaveWorkToDatabase();

  transaction.Complete();
}

At the end of the using block, Dispose is called on the transaction scope. If the transaction has not been completed (in other words, transaction.Complete was not called), then the transaction is rolled back. Otherwise it is committed to the underlying data store.

The typical reason a transaction might not be completed is that an exception is thrown within the using block and thus the Complete method is not called.

This pattern is simple, but I was looking at it the other day with a co-worker wondering if we could make it even simpler. After all, if the only reason a transaction fails is because an exception is thrown, why must the developer remember to complete the transaction? Can’t we do that for them?

My idea was to write a method that accepts an Action which contains the code you wish to run within the transaction. I’m not sure if people would consider this simpler, so you tell me. Here’s the usage pattern.

public void SomeMethod()
{
  Transaction.Do(() => {
    DoSomeWork();
    SaveWorkToDatabase();
  });
}

Yay! I saved one whole line of code! :P

Kidding aside, we don’t save much in code reduction, but I think it makes the concept slightly simpler. I figured someone has already done this as it’s really not rocket science, but I didn’t see anything after a quick search. Here’s the code.

public static class Transaction 
{
  public static void Do(Action action) 
  {
    using (TransactionScope transaction = new TransactionScope())
   {
      action();
      transaction.Complete();
    }
  }
}

So you tell me, does this seem useful at all?

By the way, there are several overloads to the TransactionScope constructor. I would imagine that if you used this pattern in a real application, you’d want to provide corresponding overloads to the Transaction.Do method.
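
For example, one such overload might simply pass a TransactionScopeOption and TransactionOptions through to the scope. Here’s a sketch of what that could look like:

public static void Do(Action action, 
    TransactionScopeOption scopeOption, 
    TransactionOptions options) 
{
  using (TransactionScope transaction = 
      new TransactionScope(scopeOption, options))
  {
    action();
    transaction.Complete();
  }
}

// Example usage (the isolation level here is chosen arbitrarily).
Transaction.Do(() => {
    DoSomeWork();
    SaveWorkToDatabase();
  },
  TransactionScopeOption.RequiresNew,
  new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });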

UPDATE: What if you don’t want to rely on an exception to determine whether the transaction is successful?

In general, I tend to think of a failed transaction as an exceptional situation: I assume transactions will succeed, and when one doesn’t, something exceptional has happened. In other words, I’m usually fine with an exception being the trigger that rolls back a transaction.

However, Omer Van Kloeten pointed out on Twitter that this can be a performance problem in cases where transaction failures are common and that returning true or false might make more sense.

It’s trivial to provide an overload that takes in a Func<bool>. When you use this overload, you return true if the work succeeded and the transaction should be committed, or false if it should be rolled back. Here’s an example of usage.

Transaction.Do(() => {

  DoSomeWork();
  if(SaveWorkToDatabaseSuccessful()) {
    return true;
  }
  return false;
});

The implementation is pretty similar to what we have above.

public static void Do(Func<bool> action) {
  using (TransactionScope transaction = new TransactionScope()) {
    if (action()) {
      transaction.Complete();
    }
  }
}


When building a web application, it’s a common desire to want to expose a simple Web API along with the HTML user interface to enable various mash-up scenarios or to simply make accessing structured data easy from the same application.

A common question that comes up is when to use ASP.NET MVC to build out REST-ful services and when to use WCF? I’ve answered the question before, but not as well as Ayende does (when discussing a different topic). This is what I tried to express.

In many cases, the application itself is the only reason for development [of the service]

The “[of the service]” was added by me. In other words, when the only reason for the service’s existence is to serve the one application you’re currently building, it may make more sense to stick with the simple case of using ASP.NET MVC. This is commonly the case when the only client to your JSON service is your web application’s Ajax code.

When your service is intended to serve multiple clients (not just your one application) or hit large scale usage, then moving to a real services layer such as WCF may be more appropriate.

However, there is now a third hybrid choice that blends ASP.NET and WCF. The WCF team saw that many developers building ASP.NET MVC apps are more comfortable with the ASP.NET MVC programming model, but still want to supply more rich RESTful services from their web applications. So the WCF team put together an SDK and samples for building REST services using ASP.NET MVC.

You can download the samples and SDK from the ASP.NET MVC 1.0 page on CodePlex.

Do read through the overview document as it describes the changes you’ll need to make to an application to make use of this framework. Also, the zip file includes several sample movie applications which demonstrate various scenarios and compares them to the baseline of not using the REST approach.

At this point in time, this is a sample and SDK hosted on our CodePlex site, but many of the features are in consideration for a future release of ASP.NET MVC (no specifics yet).

This is where you come in. We are keenly interested in hearing feedback on this SDK. Is it important to you or is it not? Does it do what you need? Does it need improvement? Let us know what you think. Thanks!


In a recent post, The Law of Demeter Is Not A Dot Counting Exercise, I wanted to peer into the dark depths of the Law of Demeter to understand its real purpose. In the end I concluded that the real goal of the guideline is to reduce coupling, not dots, which was a relief because I’m a big fan of dots (and stripes too, judging by my shirt collection).

However, one thing that puzzled me was that there are in essence two distinct formulations of the law, the object form and the class form. Why are there two forms and how do they differ in a practical sense?

Let’s find an example of where the law seems to break down and perhaps apply these forms to solve the conundrum as a means of gaining better understanding of the law.

Rémon Sinnema has a great example of where the law seems to break down that can serve as a starting point for this discussion.

Code that violates the Law of Demeter is a candidate for Hide Delegate, e.g. manager = john.getDepartment().getManager() can be refactored to manager = john.getManager(), where the Employee class gets a new getManager() method.

However, not all such refactorings make as much sense. Consider, for example, someone who’s trying to kiss up to his boss: sendFlowers(john.getManager().getSpouse()). Applying Hide Delegate here would yield a getManagersSpouse() method in Employee. Yuck.

This is an example of one common drawback of following LoD to the letter. You can end up with a lot of one-off wrapper methods that merely propagate a property or method to the caller. In fact, this is so common there’s a term for such a wrapper. It’s called a Demeter Transmogrifier!


Who knew that Calvin was such a rock star software developer?

Too many of these one-off “transmogrifier” methods can clutter your API like a tornado in a paper factory, but like most things in software, it’s a trade-off that has to be weighed against the benefits of applying LoD in any given situation. These sorts of judgment calls are part of the craft of software development, and there’s just no “one size fits all, follow the checklist” solution.

While this criticism of LoD may be valid at times, it may not be so in this particular case. Is this another case of dot counting?

For example, suppose the getManager method returns an instance of Manager and Manager implements the IEmployee interface. Also suppose that the IEmployee interface includes the getSpouse() method. Since John is also an IEmployee, shouldn’t he be free to call the getSpouse() method of his manager without violating LoD? After all, they are both instances of IEmployee.
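
To make that scenario concrete, here’s a minimal C# sketch of the types in question (the names are my own and only for illustration; the original example uses Java-style getters):

public class Person 
{
  public string Name { get; set; }
}

public interface IEmployee 
{
  IEmployee GetManager();
  Person GetSpouse();
}

public class Employee : IEmployee 
{
  public IEmployee Manager { get; set; }
  public Person Spouse { get; set; }

  public IEmployee GetManager() { return Manager; }
  public Person GetSpouse() { return Spouse; }
}

public class Manager : Employee { }

// The question: is sendFlowers(john.GetManager().GetSpouse()) still an LoD
// violation when john and his manager are both IEmployee instances?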

Let’s take another look at the general formulation of the law:

Each unit should have only limited knowledge about other units: only units “closely” related to the current unit. Or: Each unit should only talk to its friends; Don’t talk to strangers.

Notice that the word closely is in quotes. What exactly does it mean that one unit is closely related to another? In the short form of the law, Don’t talk to strangers, we learn we shouldn’t talk to strangers. But who exactly is a stranger? Great questions, if I do say so myself!

The formal version of the law focuses on sending messages to objects. For example, a method of an object can always call methods of itself, methods of an object it created, or methods of passed in arguments. But what about types? Can an object always call methods of an object that is the same type as the calling object? In other words, if I am a Person object, is another Person object a stranger to me?

According to the general formulation, there is a class form of LoD which applies to statically typed languages and seems to indicate that yes, this is the case. It seems it’s fair to say that for a statically typed language, an object has knowledge of the inner workings of another object of the same type.

Please note that I am qualifying that statement with “seems” and “fair to say” because I’m not an expert here. This is what I’ve pieced together in my own reading and am open to someone with more expertise here clearing up my understanding or lack thereof.


One of the complaints I often hear about our default view engine and its pages is that there’s all this extra cruft in there with the whole page directive and such. But it turns out that you can get rid of a lot of it. Credit goes to David Ebbo, the oracle of all hidden gems within the inner workings of ASP.NET, for pointing me in the right direction on this.

First, let me show you the before and after of our default Index view (reformatted to fit the width of this blog).

Before

<%@ Page Language="C#" 
  MasterPageFile="~/Views/Shared/Site.Master" 
  Inherits="System.Web.Mvc.ViewPage" %>

<asp:Content ID="indexTitle" 
  ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>

<asp:Content ID="indexContent" 
  ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>
        To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" 
        title="ASP.NET MVC Website">http://asp.net/mvc</a>.
    </p>
</asp:Content>

After

<asp:Content ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>

<asp:Content ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>
        To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" 
        title="ASP.NET MVC Website">http://asp.net/mvc</a>.
    </p>
</asp:Content>

That ain’t your pappy’s Web Form view. I can see your reaction now:

Where’s the page declaration!? Where’s all the Content IDs!? Where’s the Master Page declaration!? Oh good, at least runat=”server” is still there to anchor my sanity and comfort me at night.

It turns out that ASP.NET provides ways to set many of the defaults within Web.config. What I’ve done here (and which you can do in an ASP.NET MVC project or Web Forms project) is to set several of these defaults.

In the case of ASP.NET MVC, I opened up the Web.config file hiding away in the Views directory, not to be confused with the Web.config in your application root.


This Web.config is placed here because it applies to all of the views. I then made the following changes:

  1. Set the compilation element’s defaultLanguage attribute to “C#”.
  2. Set the pages element’s masterPageFile attribute to point to ~/Views/Shared/Site.master.
  3. Set the pages element’s pageBaseType attribute to System.Web.Mvc.ViewPage (in this case, it was already set as part of the default ASP.NET MVC project template).

Below is what the web.config file looks like with my changes (I removed some details like other elements and attributes just to show the gist):

<configuration>
  <system.web>
    <compilation defaultLanguage="C#" />
    <pages
        masterPageFile="~/Views/Shared/Site.master"
        pageBaseType="System.Web.Mvc.ViewPage"
        userControlBaseType="System.Web.Mvc.ViewUserControl">
    </pages>
  </system.web>
</configuration>

With this in place, as long as my views don’t deviate from these settings, I won’t have to declare the Page directive.

Of course, if you’re using strongly typed views, you’ll need the Page directive to specify the ViewPage type, but that’s it.
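
For example, a strongly typed view’s directive shrinks down to a single attribute (the model type here is just a made-up example):

<%@ Page Inherits="System.Web.Mvc.ViewPage<MyApp.Models.Product>" %>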

Also, don’t forget that you can get rid of all them ugly Register declarations by registering custom controls in Web.config.
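
For instance, a registration in Web.config might look something like the following (the tag prefix, namespace, assembly, and user control path are placeholders):

<configuration>
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="haack" namespace="Haack.Mvc.Controls" 
          assembly="Haack.Mvc" />
        <add tagPrefix="haack" tagName="Menu" src="~/Controls/Menu.ascx" />
      </controls>
    </pages>
  </system.web>
</configuration>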

You can also get rid of those ugly Import directives by importing namespaces in Web.config.

<configuration>
  <system.web>
    <pages>
      <namespaces>
        <add namespace="Haack.Mvc.Helpers" />
      </namespaces>
    </pages>
  </system.web>
</configuration>

By following these techniques, you can get rid of a lot of cruft within your pages and views and keep them slimmer and fitter. Of course, what I’ve shown here is merely putting your views on a syntax diet. The more important diet for your views is to keep the amount of code minimal and restricted to presentation concerns, but that’s a post for another day as Rob has already covered it.

Sadly, there is no getting rid of runat="server" yet short of switching another view engine. But at this point, I like to think of him as that obnoxious friend from high school your wife hates but you still keep around to remind you of your roots. ;)

Hope you enjoy these tips and use them to put your views on a diet. After all, isn’t it the view’s job to look good for the beach?

Tags: asp.net,aspnetmvc,views


Note, this blog post is based on Preview 1 of ASP.NET MVC 2 and details are subject to change. I’ll try to get back to normal ASP.NET MVC 1.0 content soon. :)

While in a meeting yesterday with “The Gu”, the topic of automatic views came up. Imagine if you could simply instantiate a model object within a controller action, return it to the “view”, and have ASP.NET MVC provide simple scaffolded edit and details views for the model automatically.

That’s when the light bulb went on for Scott and he briefly mentioned an idea for an approach that would work. I was excited by this idea and decided to prototype it tonight. Before I discuss that approach, let me lead in with a bit of background.

One of the cool features of ASP.NET MVC is that any views in our ~/Views/Shared folder are shared among all controllers. For example, suppose you wanted a default Index view for all controllers. You could simply add a view named Index to the Shared views folder.


Thus any controller with an action named Index would automatically use the Index view in the Shared folder unless there was also an Index view in the controller’s own view folder.

Perhaps we can use this to our advantage when building simple CRUD (Create, Read, Update, Delete) pages. What if we included default views within the Shared folder named after the basic CRUD operations? What would we place in these views? Well, calls to our new Templated Helpers, of course! That way, when you add a new action method which follows the convention, you’d automatically have a scaffolded view without having to create the view yourself!

I prototyped this up tonight as a demonstration. The first thing I did was add three new views to the Shared folder, Details, Edit, and Create.

Let’s take a look at the Details view to see how simple it is.

<%@ Page Inherits="System.Web.Mvc.ViewPage"%>
<asp:Content ContentPlaceHolderID="TitleContent" runat="server">
    Details for <%= Html.Encode(ViewData.Eval("Title")) %>
</asp:Content>

<asp:Content ContentPlaceHolderID="MainContent" runat="server">

    <fieldset class="default-view">
        <legend><%= Html.Encode(ViewData.Eval("Title")) %></legend>
    
        <% ViewData["__MyModel"] = Model; %>
        <%= Html.Display("__MyModel") %>
    </fieldset>
</asp:Content>

What we see here is a non-generic ViewPage. Since this View can be used for multiple controller views and we won’t know what the model type is until runtime, we can’t use a strongly typed view here, but we can use the non-generic Html.Display method to display the model.

One thing you’ll notice is that this required a hack where I take the model and add it to ViewData using an arbitrary key, and then I call Html.Display using the same view data key. This is due to an apparent bug in Preview 1 in which Html.Display("") doesn’t work against the current model. I’m confident we’ll fix this in a future preview.

Html.DisplayFor(m => m) also doesn’t work here because the expression works against the declared type of the Model, not the runtime type, which in this case, is object.

With these views in place, I now have the basic default CRUD (well Create, Edit, Details to be exact) views in place. So the next time I create an action method named the same as these templates, I won’t have to create a view.

Let’s see this in action. I love NerdDinner, but I’d like to use another domain for this sample for a change. Let’s try Ninjas!

First, we create a simple Ninja class.

public class Ninja
{
    public string Name { get; set; }
    public int ShurikenCount { get; set; }
    public int BlowgunDartCount { get; set; }
    public string  Clan { get; set; }
}

Next we’ll add a new NinjaController using the Add Controller dialog by right clicking on the Controllers folder, selecting Add, and choosing Controller.


This brings up a dialog which allows you to name the controller and choose to scaffold some simple action methods (completely configurable of course using T4 templates).


Within the newly added Ninja controller, I create a sample Ninja (as a static variable for demonstration purposes) and return it from the Details action.

static Ninja _ninja = new Ninja { 
    Name = "Ask a Ninja", 
    Clan = "Yokoyama", 
    BlowgunDartCount = 23, 
    ShurikenCount = 42 };

public ActionResult Details(int id)
{
  ViewData["Title"] = "A Very Cool Ninja";
  return View(_ninja);
}

Note that I also place a title in ViewData since I know the view will display that title. I could also have created a NinjaViewModel, complete with a Title property, and passed that to the view instead, but I chose to do it this way for demo purposes.

Now, when I visit the Ninja details page, I see:

(Screenshot: “Details for One awesome Ninja” rendered by the shared Details view)

With these default templates in place, I can quickly create other action methods without having to worry about the view yet. I’ll just get a default scaffolded view.

If I need to make minor customizations to the scaffolded view, I can always apply data annotation attributes to provide hints to the templated helper on how to display the model. For example, let’s add some spaces to the fields via the DisplayNameAttribute.

public class Ninja
{
    public string Name { get; set; }
    [DisplayName("Shurikens")]
    public int ShurikenCount { get; set; }
    [DisplayName("Blowgun Darts")]
    public int BlowgunDartCount { get; set; }
    public string  Clan { get; set; }
}

If it concerns you that I’m adding these presentation concerns to the model, let’s pretend this is actually a view-specific model for the moment and set those concerns aside. Also, in the future we hope to provide a way to supply this metadata via other means so it doesn’t have to be applied directly to the model but can be stored elsewhere.

Now when I recompile and refresh the page, I see my updated labels.


Alternatively, I can create a display template for Ninjas. All I need to do is add a folder named DisplayTemplates to the Shared views folder and add my Ninja template there.

Then I right click on that folder and select the Add View dialog, making sure to check Create a strongly-typed view. In this case, since I know I’m making a template specifically for Ninjas, I can create a strongly typed partial view and select Ninja as model type.


When I’m done, I should see the following template in the DisplayTemplates folder. I can go in there and make any edits I like now to provide much more detailed customization.

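After trimming the scaffolded markup down, a Ninja display template might end up looking roughly like this (a sketch; the labels and layout are my own choices, and I’m assuming the Ninja type’s namespace is imported):

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Ninja>" %>
<p><strong>Name:</strong> <%= Html.Encode(Model.Name) %></p>
<p><strong>Clan:</strong> <%= Html.Encode(Model.Clan) %></p>
<p><strong>Shurikens:</strong> <%= Html.Encode(Model.ShurikenCount) %></p>
<p><strong>Blowgun Darts:</strong> <%= Html.Encode(Model.BlowgunDartCount) %></p>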

Now I just recompile and then refresh my details page and see:

(Screenshot: the details page rendered with the custom Ninja display template)

Finally, if I need even more control, I can simply add a Details view to the Ninja views folder, which provides absolute control and overrides the default Details view in the Shared folder.


So that’s the neat idea which I’m calling “default templated views” for now. This walkthrough not only shows you the idea, but how to implement it yourself! You can easily take this idea and have it fit your own conventions.

At the time that he mentioned this idea, Scott exclaimed “Why didn’t I think of this before, it’s so obvious.” (or something to that effect, I wasn’t taking notes).

I was thinking the same thing until I just realized, we didn’t have Templated Helpers before, so having default CRUD views would not have been all that useful in ASP.NET MVC 1.0. ;)

But ASP.NET MVC 2 Preview 1 does have Templated Helpers and this post provides a neat means to provide scaffolded views while you build your application.

And before I forget, here’s a download containing my sample Ninja project.


UPDATE: This post is now obsolete. Single project areas are a core part of ASP.NET MVC 2.

Preview 1 of ASP.NET MVC 2 introduces the concept of Areas. Areas provide a means of dividing a large web application into multiple projects, each of which can be developed in relative isolation. The goal of this feature is to help manage complexity when developing a large site by factoring the site into multiple projects, which get combined back into the main site before deployment. Despite the multiple projects, it’s all logically one web application.

One piece of feedback I’ve already heard from several people is that they don’t want to manage multiple projects and simply want areas within a single project as a means of organizing controllers and views, much like I had it in my prototype for ASP.NET MVC 1.0.


Well, the bad news is that the areas layout I had in that prototype doesn’t work right out of the box. The good news is that it is very easy to enable that scenario. All of the components necessary are in the box; we just need to tweak things slightly.

We’ve added a few area-specific properties to VirtualPathProviderViewEngine, the base class for our WebFormViewEngine and others. Properties such as AreaViewLocationFormats allow specifying an array of format strings used by the view engines to locate a view. The default format strings for areas don’t match the structure that I used before, but it’s not hard to tweak things a bit so they do.

The approach I took was to simply create a new view engine that had the area view location formats that I cared about and inserted it first into the view engines collection.

public class SingleProjectAreasViewEngine : WebFormViewEngine {
    public SingleProjectAreasViewEngine() : this(
        new[] {
            "~/Areas/{2}/Views/{1}/{0}.aspx",
            "~/Areas/{2}/Views/{1}/{0}.ascx",
            "~/Areas/{2}/Shared/{0}.aspx",
            "~/Areas/{2}/Shared/{0}.ascx"
        },
        null,
        new[] {
            "~/Areas/{2}/Views/{1}/{0}.master",
            "~/Areas/{2}/Views/Shared/{0}.master",
        }
        ) {
    }

    public SingleProjectAreasViewEngine(
            IEnumerable<string> areaViewLocationFormats, 
            IEnumerable<string> areaPartialViewLocationFormats, 
            IEnumerable<string> areaMasterLocationFormats) : base() {
        this.AreaViewLocationFormats = areaViewLocationFormats.ToArray();
        this.AreaPartialViewLocationFormats = (areaPartialViewLocationFormats ?? 
            areaViewLocationFormats).ToArray();
        this.AreaMasterLocationFormats = areaMasterLocationFormats.ToArray();
    }
}

The constructor of this view engine simply specifies different format strings. Here’s a case where I wish the Framework had a String.Format method that efficiently worked with named formats.

This sample is made slightly more complicated by the fact that I have another constructor that accepts all these formats. That makes it possible to change the formats when registering the view engine if you so choose.

In Global.asax, I then registered this view engine like so:

protected void Application_Start() {
    RegisterRoutes(RouteTable.Routes);
    ViewEngines.Engines.Insert(0, new SingleProjectAreasViewEngine());
}

Note that I’m inserting it first so it takes precedence. I could have cleared the collection and added this as the only one, but I wanted the existing areas format for multi-project solutions to continue to work just in case. It’s really your call.

Now I can register my area routes using a new MapAreaRoute extension method.

public static void RegisterRoutes(RouteCollection routes) {
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapAreaRoute("Blogs", "blogs_area", 
        "blog/{controller}/{action}/{id}", 
        new { controller = "Home", action = "Index", id = "" }, 
        new string[] { "SingleProjectAreas.Areas.Blogs.Controllers" });
    
    routes.MapAreaRoute("Forums", 
        "forums_area", 
        "forum/{controller}/{action}/{id}", 
        new { controller = "Home", action = "Index", id = "" }, 
        new string[] { "SingleProjectAreas.Areas.Forums.Controllers" });
    
    routes.MapAreaRoute("Main", "default_route", 
        "{controller}/{action}/{id}", 
        new { controller = "Home", action = "Index", id = "" }, 
        new string[] { "SingleProjectAreas.Controllers" });
}

And I’m good to go. Notice that I no longer have a default route. Instead, I mapped an area named “Main” to serve as the “main” project. The Route URL pattern there is what you’d typically see in the default template.

If you prefer this approach or would like to see both approaches supported, let me know. We are looking at having the single project approach supported out of the box as a possibility for Preview 2.

If you want to see this in action, download the following sample.


UPDATE: This post is outdated. ASP.NET MVC 2 RTM was released in March.

Four and a half months after my team released ASP.NET MVC 1.0, I am very happy to announce that the first Preview of version 2 of ASP.NET MVC is now available for download. Go download it immediately and enjoy its coolness. :) Don’t be afraid to install it, as it will sit nicely side-by-side with ASP.NET MVC 1.0.

The release notes provide more details on what’s in this release and I’ve also updated the Roadmap on CodePlex, which describes the work we want to do in Preview 2 and beyond.

After shipping ASP.NET MVC 1.0, the team and I spent time pitching in on ASP.NET 4 which was a nice diversion for me personally as I got a chance to work on something different for a while and it let ideas for ASP.NET MVC 2 percolate.

But now I’m very happy to be back in the saddle going full bore working on ASP.NET MVC again. As mentioned in the roadmap and elsewhere, ASP.NET MVC 2 will run on both ASP.NET 3.5 SP1 and ASP.NET 4. We will be shipping ASP.NET MVC 2 in the box with Visual Studio 2010 and be making a separate installer for Visual Studio 2008 SP 1 available via download.

Templated Helpers

One of my favorite new additions in Preview 1 is what we call the Templated Helpers. You can watch a short Channel 9 Video that Scott Hanselman filmed of me giving a last minute impromptu demo of Templated Helpers.

Templated Helpers allow you to automatically associate templates for editing and displaying values based on the data type. For example, a date picker UI element can be automatically rendered every time data of type System.DateTime is used.
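
For a rough sketch of how that might look (assuming an EditorTemplates folder analogous to the DisplayTemplates folder shown earlier, and a date-picker script wired up to the CSS class, both of which are assumptions on my part), a shared template named DateTime.ascx could render the text box the picker attaches to:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %>
<%= Html.TextBox("", Model.ToShortDateString(), new { @class = "date-picker" }) %>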

If you’re familiar with Field Templates in ASP.NET Dynamic Data, then this is very similar to that, but specific to ASP.NET MVC.

To find out more about the helpers, check out the pre-release documentation for a walkthrough of using Templated Helpers.

We also include support for Areas and Data Annotations, along with various bug fixes and minor API improvements. Everything is detailed in the Release Notes.

The Team

I have to say, I really like being part of a team that I feel is working very well together, and I am proud of the work they’ve done. Some of them already have blogs, such as Eilon (rarely updated) and Brad Wilson. The QA guys recently started a podcast. But others (Levi, looking at you) really need to start a blog. ;) Great work, fellas!

Be sure to let us know what you think and provide feedback in our forums!

Related Links

Note: the official name of this version of the product is ASP.NET MVC 2 and not ASP.NET MVC 2.0 as some might expect. Maybe it’s part of a new marketing initiative to get rid of dots in product names. I guess they didn’t read my Law of Demeter post to understand it’s not about reducing dots ;).


A member of the Subtext team discovered a security vulnerability due to our integration with the FCKEditor control as well as the FreeTextBox control. This vulnerability would potentially allow unauthenticated users to upload files using the file upload tools included with these editors.

The Fix

If you’re running the latest version of Subtext (Subtext 2.1.1), the quickest way to patch your installation is to copy the following web.config file…

<configuration>
    <system.web>
        <authorization>
            <allow roles="Admins" />
            <deny roles="HostAdmins"/>
            <deny users="*" />
        </authorization>
    </system.web>
</configuration>

…to the following directories within the Providers\BlogEntryEditor directory.

  • FCKeditor\editor\filemanager\browser\default\connectors\aspx\
  • FCKeditor\editor\filemanager\upload\aspx\
  • FTB\

If you’re running an older version or would rather not have to hunt through your installation, upgrade to Subtext 2.1.2. The only difference between this version and 2.1.1 is the change mentioned above.

Notes

This is the second time we’ve been bitten by integration issues with these rich text editors. The Subtext team takes security very seriously, and we regret that this vulnerability made it into a release. We’ll take a hard look at these integration points and may consider turning them off by default or applying other mitigations. I have a feeling that most of our users use Windows Live Writer or some other such application to post to their blog anyway.

You might wonder why we don’t simply include that web.config file within the Providers directory. I tested that out and unfortunately it breaks FCKEditor for no good reason that I could deduce.

Again, I feel terrible that this happened and we’ll work hard to ensure it doesn’t again. My thanks goes to Si Philp who found the issue and discreetly reported it.

Download

The URL to the new version of Subtext is here.


Recently I read a discussion on an internal mailing list on whether or not it would be worthwhile to add a null dereferencing operator to C#.

For example, one proposed idea would allow the following expression.

object a = foo.?bar.?baz.?qux;

This would assign the variable a the value null if any one of foo, bar, or baz is null, instead of throwing a NullReferenceException. It’s a small, but potentially helpful, mitigation for the billion dollar mistake.
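
Without such an operator, the null-safe equivalent is fairly verbose, which is the whole motivation. Here’s roughly what you have to write today (member names taken from the expression above):

object a = null;
if (foo != null && foo.bar != null && foo.bar.baz != null)
{
    a = foo.bar.baz.qux;
}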

Sure enough, it did not take long for someone to claim that this syntax would be unnecessary if the code here were not violating the sacred Law of Demeter (or LoD for short). I think this phenomenon is an analog to Godwin’s Law and deserves its own name. Let’s call it the “LoD Dot Counting Law”:

As a discussion of a code expression with more than one dot grows longer, the probability that someone claims a Law of Demeter violation approaches 1.

Count the dots and win a prize!

What is wrong with the claim that the above expression violates LoD? To answer that let’s briefly cover the Law of Demeter. I’m not going to cover it in detail but rather point to posts that describe it in much better detail than I would.

The Many Forms of Demeter

The formal object form of the law can be summarized as:

A method of an object may only call methods of:

  1. The object itself.
  2. An argument of the method.
  3. Any object created within the method.
  4. Any direct properties/fields of the object.

A general formulation of the law is:

Each unit should have only limited knowledge about other units: only units “closely” related to the current unit. Or: Each unit should only talk to its friends; Don’t talk to strangers.

This of course leads to the succinct form of the law:

Don’t talk to strangers

In other words, try to avoid calling methods of an object that was returned by calling another method. Often, people shorten the law to simply state “use only one dot”.

One of the key benefits of applying LoD is low coupling via encapsulation. In his paper, The Paperboy, The Wallet, and The Law of Demeter (PDF) (it’s a relatively quick read so go ahead, I’ll be here), David Bock provides a great illustration of this law with an analogy of a paperboy and a customer. Rather than having a customer hand over his wallet to pay the paperboy, he instead has the paperboy request payment from the customer.

In answer to “Why is this better?” David Bock gives these three reasons.

The first reason that this is better is because it better models the real world scenario…  The Paperboy code is now ‘asking’ the customer for a payment.  The paperboy does not have direct access to the wallet.  

The second reason that this is better is because the Wallet class can now change, and the paperboy is completely isolated from that change…

The third, and probably most ‘object-oriented’ answer is that we are now free to change the implementation of ‘getPayment()’.

Note that the first benefit is an improvement not only in encapsulation but the abstraction is also improved.

Dot Counting Is Not The Point

You’ll note that David doesn’t list “50% less dots in your code!” as a benefit of applying LoD. The focus is on reduced coupling and improved encapsulation.

So going back to the initial expression, does foo.bar.baz.qux violate LoD? Like most things, it depends.

For example, suppose that foo is of type Something and it contains properties named Bar, Baz, and Qux which each simply return this.

In this semi-contrived example, the expression is not an LoD violation because each property returns the object itself and according to the first rule of LoD, “you do not talk about LoD” … wait … sorry… “a method is free to call any properties of the object itself” (in a future post, I will cover the class form of LoD which seems to indicate that if Bar, Baz, and Qux return the same type, whether it’s the same object or not, LoD is preserved).
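
Here’s a quick sketch of that semi-contrived example (class and property names taken from the expression above):

public class Something
{
    public Something Bar { get { return this; } }
    public Something Baz { get { return this; } }
    public Something Qux { get { return this; } }
}

// foo.Bar.Baz.Qux has three dots, but it never reaches beyond the original
// object, so no "stranger" is ever involved.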

This pattern is actually quite common with fluent interfaces where each method in a calling chain might return the object itself, but transformed in some way.

So we see that counting dots is not enough to indicate an LoD violation. But let’s dig deeper. Are there other cases where counting dots does not indicate an LoD violation? More importantly, is it always a bad thing to violate LoD? Are there cases where an LoD violation might even be the right thing to do?

Go Directly to Jail, Do Not Pass Go

Despite its name, violating the Law of Demeter will not get you on an episode of Cops nor is it some inviolable law of nature.

As the original paper points out, it was developed during design and implementation of the Demeter system (hence the name) and was held to be a law for the developers of that system.

The designers of the system felt that this practice ensured good Object-Oriented design:

The motivation behind the Law of Demeter is to ensure that the software is as modular as possible. The law effectively reduces the occurrences of nested message sendings (function calls) and simplifies the methods.

However, while it was a law in the context of the Demeter system, whether it should hold the weight that calling it a Law implies is open to debate.

David Bock refers to it as an idiom:

This paper is going to talk about a particluar (sic) ‘trick’ I like, one that is probably better classified as an ‘idiom’ than a design pattern (although it is a component in many different design patterns).

Martin Fowler suggests (emphasis mine):

I’d prefer it to be called the Occasionally Useful Suggestion of Demeter.

Personally, I think most developers are guilty of bad encapsulation and tight coupling. I’m a bit more worried about that than applying this law inappropriately (though I worry about that too). Those who have deep understanding of this guideline are the ones who are likely to know when it shouldn’t be applied.

For the rest of us mere mortals, I think it’s important to at least think about this guideline and be intentional about applying or not applying it.

I Fought The Law and The Law Won

So what are the occasions when the Law of Demeter doesn’t necessarily apply? There’s some debate out there on the issue.

In his post, Misunderstanding the Law of Demeter, Daniel Manges argues that web page views aren’t domain objects and thus shouldn’t be subject to the Law of Demeter. His argument hinges on a Rails example where you send an Order object to the view, but the view needs to display the customer’s name.

<%= @order.customer.name %>

Counting two dots, he considers the change that would make it fit LoD:

<%= @order.customer_name %>

He then asks:

Why should an order have a customer_name? We’re working with objects, an order should have a customer who has a name.

…when rendering a view, it’s natural and expected that the view needs to branch out into the domain model.

Alex Blabs of Pivotal Labs takes issue with Daniel’s post and argues that views are domain objects and an order ought to have a customer_name property.

It’s an interesting argument, but the following snippet of a comment by Zak Tamsen summarizes where I currently am on this subject (though my mind is open).

because they don’t. the primary job of the views (under discussion) is to expose the internal state of objects for display purposes. that is, they are expressly for data showing, not data hiding. and there’s the rub: these kind of views flagrantly violate encapsulation, LoD is all about encapsulation, and no amount of attribute delegation can reconcile this.

The problem as I see it with Alex’s approach is where do you stop? Does the Order object encapsulate every property of Customer? What about sub-properties of the Customer’s properties? It seems the decision to encapsulate the Customer’s name is driven by the view’s need to display it. I wouldn’t want my domain object’s interface to be driven by the needs of the view as that would violate separation of concerns.

There’s another option, which might be more common in the ASP.NET MVC world than in the Rails world, I’m not sure. Why not have a view-specific model object? This would effectively be the bridge between the domain objects and the view and could encapsulate many of the properties that the view needs to display.
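
For the order example above, such a view-specific model might look something like this (a hypothetical sketch; the property names are my own):

public class OrderDisplayModel
{
    public string CustomerName { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}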

Another case where an LoD violation might not be such a bad idea is in cases where the object structure is public and unlikely to change. While in Norway, I had the opportunity to briefly chat with Michael Feathers about LoD and he pointed out the example of Excel’s object model for tables and cells. If LoD is about encapsulation (aka information hiding) then why would you hide the structure of an object where the structure is exactly what people are interested in and unlikely to change?

Use It or Lose It

When I learn a new guideline or principle, I really like to dig into where the guideline breaks down. Knowing where a guideline works, and what its advantages are is only half the story in really understanding it. When I can explain where it doesn’t work and what its disadvantages are, only then do I feel I’m starting to gain understanding.

However, in writing about my attempts at understanding, it may come across that I’m being critical of the guideline. I want to be clear that I think the Law of Demeter is a very useful guideline and it applies in more cases than not. It’s one of the few principles backed by an empirical study that may point to its efficacy.

In a Validation of Object-Oriented Design Metrics as Quality Indicators, the authors of the study provide evidence that suggests The Law of Demeter reduces the probability of software faults.

Still, I would hope that those who apply it don’t do it blindly by counting dots. Dot counting can help you find where to look for violations, but always keep in mind that the end goal is reducing coupling, not dots.


Ok, I haven’t had a good track record with making up jokes before. Just see exhibit A, this groaner of an MVC joke.

However, I think I did okay recently with a few geeky Your Momma jokes that I can’t just leave to Twitter alone. Here they are:

Your momma so fat, Bloatware is her clothing line.

Your momma so fat I called her and got a stack overflow.

Your momma so ugly it’s just best to forego the “V” in MVC with her.

So now it’s your turn. Please post your best ones in the comments to this post. :)


When you visit Norway, it takes a week to recover. Ok, at least when I visit Norway, it takes a week. But that’s just a testament to the good time I had. As they say, what happens in Vegas stays in Vegas, but what happens in Oslo gets recorded as a .NET Rocks Live episode.

The week before last, I spent the week in Oslo, Norway attending and speaking at the Norwegian Developers Conference (NDC 09). This was not the typical Microsoft conference I usually attend, but rather a conference on .NET with a heavy agile software bent.


Just looking at the speaker line-up will tell you that. Scott Bellware tweeted a blurb recently that succinctly summarized my impression of the conference:

how to know you’re at a good conference: the speakers are going to sessions (at least the ones who aren’t working on their sessions)

That is definitely true. I didn’t attend as many talks as I would have liked, but I did manage to attend two by Mary Poppendieck which really sparked my imagination and got me excited about the concept of a problem solving organization and learning more about Lean. She had promised to put her slides on her site, but I can’t for the life of me find them! ;)

While there, I gave three talks, one of them being a joint talk with the Hanselnator (aka Mr. Hanselman).

**Black Belt Ninja Tips ASP.NET MVC**
This covered several tips on getting more out of ASP.NET MVC and included the first public demonstration of David Ebbo’s T4 template.

**ASP.NET MVC + AJAX = meant for each other**
This covered the Ajax helpers included with ASP.NET MVC and drilled into some of the lesser known client aspects of these helpers, such as showing the Sys.Mvc.AjaxContext object and how to leverage it. The talk then moved into a demonstration of the client templating feature of ASP.NET Ajax 4 Preview 4. I showed off some of the work Jonathan Carter and I (mostly Jonathan) did to make two-way data binding work with ASP.NET MVC. The audience really dug it.

**The Haacked and Hanselman Show**
So named because we didn’t have any agenda until about a week before the conference, this ended up being a web security talk where Scott would present a common “secure” implementation of a feature, I would then proceed to “Haack” the feature, and then Scott would fix the feature, all the while explaining what was going on. I think this was a very big hit as we saw messages on Twitter like “I’m now too afraid to build a web application”. ;) Of course, I hope more attendees felt empowered rather than fearful. :P

The conference was held in an indoor soccer stadium since it was a venue large enough for all the attendees. They curtained off sections of the bleachers to create rooms for giving the talks. On the outside of the curtains was a large screen which allowed attendees to walk around from talk to talk on the conference floor with a headset on if they didn’t feel like sitting in the bleachers.

Plenty of bean bags on the floor provided a comfortable place to relax and listen in. In fact, that’s where I would often find some of my new friends lounging around, such as the crazy Irishman.

On the second night of the conference, we all rocked out at the big attendee party featuring the band, DataRock which played some rocking music with geek friendly lyrics like:

I ran into her on computer camp
(Was that in 84?)
Not sure
I had my commodore 64
Had to score

– DataRock, Computer Camp Love

Many thanks to Kjetil Klaussen for posting those lyrics in his NDC 09 highlights post because I had forgotten pretty much every lyric. :) After DataRock, we all went upstairs for the after party to enjoy a more intimate setting with LoveShack, an 80s cover band.

One interesting highlight of the show was a live recording of .NET Rocks. The show was originally going to feature just Scott Hanselman, but while we were hanging out in the speakers’ lounge, Carl Franklin, one of the hosts of the show, suggested I join in the fun too.

While it was great fun, Scott is a big, no, ginormous personality and I rarely got a word in edgewise, except for the few times I swooped right in just in time to put my foot in my mouth to apparently great comedic effect. In any case, you can listen to the results yourself, though I hope they post the video soon to get the full effect of how much fun everyone was having. :) Be warned, there’s not a lot of real software development content in the show.

The conference ended on Friday leaving all of Saturday for me to relax and actually get out and see Oslo. On Saturday, I headed out to Vigeland Statue Park with an eclectic group of people, Ted Neward, Rocky Lhotka and his wife, Jeremy Miller, and Anna K{Something With Too Many Syllables in a Row}, a conference organizer herself.

The park was very beautiful and I took a ton of pictures, but unfortunately I lost my camera on the flight home from Norway. :( So instead, I’ll just include this Creative Commons licensed picture taken by Cebete from Flickr. The main difference was the sky was a deep blue when we visited.

That evening, Sondre Bjellås, an attendee, was kind enough to invite several of us over to his flat for a little gathering. I headed over with Bellware and Anna since everyone else was pretty much flattened by the previous week’s activities. It was great to meet non-techie Norwegians such as his wife and friends in order to get a different perspective on what it’s like to live in Norway. The answer: expensive!

In an odd coincidence, on my connecting flight in Philadelphia, I ran into my good friend Walter who happened to be flying home from Belgium. In fact, we were on the same flight in the same exit row with seats right next to each other. How’s that for a funny coincidence?

Show me the Code!

Rune, one of the organizers of the conference, assures me that the videos of the talks will be posted online soon, so you’ll get to see them if you’d like. I’ve also posted my powerpoint slides and code samples here.

Please note that my talks tend to be heavy on demos, so the PowerPoint decks don’t have much content in them. Likewise, the code samples represent the “before” state of my talks, not the “after” state. I usually write up a checklist for each talk which I use to remind myself where I am in those cases where I have a total brain fart and forget my own name under the pressure of presenting.

Other NDC 09 Posts


In my last post, I wrote about the hijacking of JSON arrays. Near the end of the post, I mentioned a comment whereby someone suggests that what really should happen is that browsers should be more strict about honoring content types and not execute code with the content type of application/json.

I totally agree! But then again, browsers haven’t had a good track record with being strict with such standards and it’s probably too much to expect browsers to suddenly start tightening ship, not to mention potentially breaking the web in the process.

Another potential solution that came to mind was this: Can we simply change JSON? Is it too late to do that or has that boat left the harbor?


Let me run an idea by you. What if everyone got together and decided to version the JSON standard and change it in such a way that when the entire JSON response is an array, the format is no longer an executable script? Note that I’m not referring to an array which is a property of a JSON object. I’m referring to the case when the entire JSON response is an array.

One way to do this, and I’m just throwing this out there, is to make it such that the JSON package must always begin and end with a curly brace. JSON objects already fulfill this requirement, so their format would remain unchanged.

But when the response is a JSON array, we would go from here:

[{"Id":1,"Amt":3.14},{"Id":2,"Amt":2.72}]

to here:

{[{"Id":1,"Amt":3.14},{"Id":2,"Amt":2.72}]}

Client code would simply check to see if the JSON response starts with {[ to determine whether it’s an array or an object. There are many alternatives, such as simply wrapping ALL JSON responses in some new characters to keep it simple.
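
As a rough sketch of what a client-side parser might do under this hypothetical scheme (using eval only because that’s how most JSON was parsed at the time; JSON.parse would be preferable where available):

function parseVersionedJson(text) {
  // If the response is a wrapped array ("{[ ... ]}"), strip the outer braces.
  var isWrappedArray = text.charAt(0) === '{' && text.charAt(1) === '[';
  var json = isWrappedArray ? text.substring(1, text.length - 1) : text;
  return eval('(' + json + ')');
}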

It’d be possible to do this without breaking every site out there by simply giving all the client libraries a head start. We would update the JavaScript libraries which parse JSON to recognize this new syntax, but still support the old syntax. That way, they’d work with servers which haven’t yet upgraded to the new syntax.

As far as I know, most sites that make use of JSON are using it for Ajax scenarios, so the site developer is in control of the client and server anyway. For sites that provide JSON as a cross-site service, upgrading the server before the clients are ready could be problematic, but not the end of the world.

So what do you think? Is this worth pursuing? Not that I have any idea on how I would convince or even who I would need to convince. ;)

UPDATE: 6/26 10:39 AM Scott Koon points out this idea is not new (I didn’t think it would be) and points to a great post that gives more detail on the specifics of executable JSON as it relates to the ECMAScript Specification.


A while back I wrote about a subtle JSON vulnerability which could result in the disclosure of sensitive information. That particular exploit involved overriding the JavaScript Array constructor to disclose the payload of a JSON array, something which most browsers do not support now.

However, there’s another related exploit that seems to affect many more browsers. It was brought to my attention recently by someone at Microsoft and Scott Hanselman and I demonstrated it at the Norwegian Developers Conference last week, though it has been demonstrated against Twitter in the past.


Before I go further, let me give you the punch line first in terms of what this vulnerability affects.

This vulnerability requires that you are exposing a JSON service which…

  • …returns sensitive data.
  • …returns a JSON array.
  • …responds to GET requests.
  • …the browser making the request has JavaScript enabled (very likely the case).
  • …the browser making the request supports the __defineSetter__ method.

Thus if you never send sensitive data in JSON format, or you only send JSON in response to a POST request, etc. then your site is probably not vulnerable to this particular vulnerability (though there could be others).

I’m terrible with Visio, but I thought I’d give it my best shot and try to diagram the attack as best I could. In the first step, we see the unwitting victim logging into the vulnerable site, and the vulnerable site issues an authentication cookie, which the browser holds onto.


At some point, either in the past, or the near future, the bad guy spams the victim with an email promising a hilariously funny video of a hamster on a piano.


But the link actually points to the bad guy’s website. When the victim clicks on the link, the next two steps happen in quick succession. First, the victim’s browser makes a request for the bad guy’s website.


The website responds with some HTML containing some JavaScript along with a script tag. When the browser sees the script tag, it makes another GET request back to the vulnerable site to load the script, sending the auth cookie along.


The bad guy has tricked the victim’s browser to issue a request for the JSON containing sensitive information using the browser’s credentials (aka the auth cookie). This loads the JSON array as executable JavaScript and now the bad guy has access to this data.

To gain a deeper understanding, it may help to see actual code (which you can download and run) which demonstrates this attack.

Note that the following demonstration is not specific to ASP.NET or ASP.NET MVC in any way, I just happen to be using ASP.NET MVC to demonstrate it. Suppose the Vulnerable Website returns JSON with sensitive data via an action method like this.

[Authorize]
public JsonResult AdminBalances() {
  var balances = new[] {
    new {Id = 1, Balance=3.14}, 
    new {Id = 2, Balance=2.72},
    new {Id = 3, Balance=1.62}
  };
  return Json(balances);
}

Assuming this is a method of HomeController, you can access this action via a GET request for /Home/AdminBalances which returns the following JSON:

[{"Id":1,"Balance":3.14},{"Id":2,"Balance":2.72},{"Id":3,"Balance":1.62}]

Notice that I’m requiring authentication via the AuthorizeAttribute on this action method, so an anonymous GET request will not be able to view this sensitive data.

The fact that this is a JSON array is important. It turns out that a script that contains a JSON array is a valid JavaScript script and can thus be executed. A script that just contains a JSON object is not a valid JavaScript file. For example, if you had a JavaScript file that contained the following JSON:

{"Id":1, "Balance":3.14}

And you had a script tag that referenced that file:

<script src="http://example.com/SomeJson"></script>

You would get a JavaScript error in your HTML page (the braces of a bare object literal are parsed as a code block, so it isn’t valid as a script on its own). However, through an unfortunate coincidence, if you have a script tag that references a file containing only a JSON array, that is considered valid JavaScript and the array gets executed.

Now let’s look at the HTML page that the bad guy hosts on his/her own server:

<html> 
...
<body> 
    <script type="text/javascript"> 
        Object.prototype.__defineSetter__('Id', function(obj){alert(obj);});
    </script> 
    <script src="http://example.com/Home/AdminBalances"></script> 
</body> 
</html>

What’s happening here? Well the bad guy is changing the prototype for Object using the special __defineSetter__ method which allows overriding what happens when a property setter is being called.

In this case, any time a property named Id is being set on any object, an anonymous function is called which displays the value of the property using the alert function. Note that the script could just as easily post the data back to the bad guy, thus disclosing sensitive data.

As mentioned before, the bad guy needs to get you to visit his malicious page shortly after logging into the vulnerable site while your session on that site is still valid. Typically a phishing attack via email containing a link to the evil site does the trick.

If by blind bad luck you’re still logged into the original site when you click through to the link, the browser will send your authentication cookie to the website when it loads the script referenced in the script tag. As far as the original site is concerned, you’re making a valid authenticated request for the JSON data and it responds with the data, which now gets executed in your browser. This may sound familiar as it is really a variant of a Cross Site Request Forgery (CSRF) attack which I wrote about before.

If you want to see it for yourself, you can grab the CodeHaacks solution from GitHub and run the JsonHijackDemo project locally (right click on the project and select Set as StartUp Project). Just follow the instructions on the home page of the project to see the attack in action. It will tell you to visit http://demo.haacked.com/security/JsonAttack.html.

Note that this attack does not work on IE 8 which will tell you that __defineSetter__ is not a valid method. Last I checked, it does work on Chrome and Firefox.

The mitigation is simple. Either never send JSON arrays OR always require an HTTP POST to get that data (except in the case of non-sensitive data in which case you probably don’t care). For example, with ASP.NET MVC, you could use the AcceptVerbsAttribute to enforce this like so:

[Authorize]
[AcceptVerbs(HttpVerbs.Post)]
public JsonResult AdminBalances() {
  var balances = new[] {
    new {Id = 1, Balance=3.14}, 
    new {Id = 2, Balance=2.72},
    new {Id = 3, Balance=1.62}
  };
  return Json(balances);
}

One issue with this approach is that many JavaScript libraries such as jQuery request JSON using a GET request by default, not POST. For example, $.getJSON issues a GET request by default. So when calling into this JSON service, you need to make sure you issue a POST request with your client library.

ASP.NET and WCF JSON service endpoints actually wrap their JSON in an object with the “d” property as I wrote about a while back. While it might seem odd to have to go through this property to get access to your data, this awkwardness is eased by the fact that the generated client proxies for these services strip the “d” property so the end-user doesn’t need to know it was ever there.

With ASP.NET MVC (and other similar frameworks), a significant number of developers are not using client generated proxies (we don’t have them) but instead using jQuery and other such libraries to call into these methods, making the “d” fix kind of awkward.
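For what it's worth, if you did want to apply that object-wrapping approach in an MVC action, it might look roughly like this (a sketch; the action name and the "d" property name are just illustrative, since the real point is that the top-level JSON token is an object rather than an array):

[Authorize]
public JsonResult AdminBalancesWrapped() {
  var balances = new[] {
    new {Id = 1, Balance=3.14},
    new {Id = 2, Balance=2.72},
    new {Id = 3, Balance=1.62}
  };
  // Wrapping the array in an object means a <script src="..."> include of this
  // endpoint no longer evaluates as an executable array literal.
  return Json(new { d = balances });
}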

What About Checking The Header?

Some of you might be wondering, “why not have the JSON service check for a special header such as the X-Requested-With: XMLHttpRequest or Content-Type: application/json before serving it up in response to a GET request?” I too thought this might be a great mitigation because most client libraries send one or the other of these headers, but a browser’s GET request in response to a script tag would not.
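For illustration, such a check might be written as a filter along these lines (a hypothetical sketch with a made-up attribute name; as the next paragraph explains, this is not a sufficient mitigation on its own):

public class RequireXhrHeaderAttribute : AuthorizeAttribute {
  protected override bool AuthorizeCore(HttpContextBase httpContext) {
    // Only serve the JSON when the request looks like it came from an
    // XMLHttpRequest-based client library rather than a <script> tag.
    return base.AuthorizeCore(httpContext)
      && httpContext.Request.Headers["X-Requested-With"] == "XMLHttpRequest";
  }
}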

The problem with this (as a couple of co-workers pointed out to me) is that at some point in the past, the user may have made a legitimate GET request for that JSON in which case it may well be cached in the user’s browser or in some proxy server in between the victim’s browser and the vulnerable website. In that case, when the browser makes the GET request for the script, the request might get fulfilled from the browser cache or proxy cache. You could try setting No-Cache headers, but at that point you’re trusting that the browser and all proxy servers correctly implement caching and that the user can’t override that accidentally.

Of course, this particular caching issue isn’t a problem if you’re serving up your JSON using SSL.

The real issue?

There’s a post at the Mozilla Developer Center which states that object and array initializers should not invoke setters when evaluated, which at this point, I tend to agree with, though a comment to that post argues that perhaps browsers really shouldn’t execute scripts regardless of their content type, which is also a valid complaint.

But at the end of the day, assigning blame doesn’t make your site more secure. These types of browser quirks will continue to crop up from time to time, and we as web developers need to deal with them. Chrome 2.0.172.31 and Firefox 3.0.11 were both vulnerable to this. IE 8 was not, because it doesn’t support this method. I didn’t try it in IE 7 or IE 6.

It seems to me that to be secure by default, the default behavior for accessing JSON should probably be POST and you should opt-in to GET, rather than the other way around as is done with the current client libraries. What do you think? And how do other platforms you’ve worked with handle this? I’d love to hear your thoughts.

In case you missed it, here are the repro steps again: grab the CodeHaacks solution from GitHub and run the JsonHijackDemo project locally (right click on the project and select Set as StartUp Project). Just follow the instructions on the home page of the project to see the attack in action. To see a successful attack, you’ll need to do this in a vulnerable browser such as Firefox 3.0.11.

I followed up this post with a proposal to fix JSON to prevent this particular issue.

Tags: aspnetmvc, json, javascript, security, browsers

code comments suggest edit

Every now and then some email or website comes along promising to prove Fred Brooks wrong about this crazy idea he wrote in The Mythical Man Month (highly recommended reading!) that there is no silver bullet which by itself will provide a tenfold improvement in productivity, reliability, and simplicity within a decade.

This time around, the promise was much like others, but they felt the need to note that their revolutionary new application/framework/doohickey will allow business analysts to directly build applications 10 times as fast without the need for programmers!

![revenge-nerds](https://haacked.com/images/haacked_com/WindowsLiveWriter/AndGetRidOfThosePeskyProgrammers_A0BE/revenge-nerds_thumb.jpg "revenge-nerds")

Ah yeah! Get rid of those foul smelling pesky programmers! We don’t need em!

Now wait one dag-burn minute! Seriously?!

I’m going to try real hard for a moment to forget they said that and not indulge my natural knee jerk reaction, which is to flip the bozo bit immediately. If I were a more reflective person, this would raise a disturbing question:

Why are these business types so eager to get rid of us programmers?

It’s easy to blame the suits for not understanding software development and forcing us into a Tom Smykowski moment having to defend what it is we do around here.

Well-well look. I already told you: I deal with the god damn customers so the engineers don’t have to. I have people skills; I am good at dealing with people. Can’t you understand that? What the hell is wrong with you people?

Maybe, as Steven “Doc” List quotes from Cool Hand Luke in his latest End Bracket article on effective communication for MSDN Magazine,

What we’ve got here is a failure to communicate.

Leon Bambrick (aka SecretGeek) recently wrote about this phenomenon in his post entitled The Better You Program, The Worse You Communicate, in which he outlines how techniques that make us effective software developers do not apply to communicating with other humans.

After all, we can sometimes be hard to work with. We’re often so focused on the technical aspects and limitations of a solution that we unknowingly confuse the stakeholders with jargon and annoy them by calling their requirements “ludicrous”. Sometimes, we fail to deeply understand their business and resort to making fun of our stakeholders rather than truly understanding their needs. No wonder they want to do the programming themselves!

Ok, ok. It’s not always like this. Not every programmer is like this and it isn’t fair to lay all the blame at our feet. I’m merely trying to empathize and understand the viewpoint that would lead to this idea that moving programmers out of the picture would be a good thing.

Some blame does deserve to lie squarely at the feet of these snake oil salespeople, because at the moment, they’re selling a lie. What they’d like customers to believe is your average business analyst simply describes the business in their own words to the software, and it spits out an application.

The other day, I started an internal email thread describing in hand-wavy terms some feature I thought might be interesting. A couple hours later, my co-worker had an implementation ready to show off.

Now that my friends, is the best type of declarative programming. I merely declared my intentions, waited a bit, and voila!  Code! Perhaps that’s along the lines of what these types of applications hope to accomplish, but there’s one problem. In the scenario I described, it required feeding requirements to a human. If I had sent that email to some software, it would have no idea what to do with it.

At some point, something close to this might be possible, but only when software has reached the point where it can exhibit sophisticated artificial intelligence and really deal with fuzziness. In other words, when the software itself becomes the programmer, only then might you really get rid of the human programmer. But I’m sorry to say, you’re still working with a programmer, just one who doesn’t scoff at your requirements arrogantly (at least not in your face while it plots to take over the world, carrot-top).

Until that day, when a business analyst wires together an application with Lego-like precision using such frameworks, that analyst has in essence become a programmer. That work requires many of the same skills that developers require. At this point, you really haven’t gotten rid of programmers; you’ve just converted a business type into a programmer, but one who happens to know the business very well.

In the end, no matter how “declarative” a system you build and how foolproof it is such that a non-programmer can build applications by dragging some doohickeys around a screen, there’s very little room for imprecision and fuzziness, something humans handle well, but computers do not, as Spock demonstrated so well in an episode of Star Trek.

“Computer, compute the last digit of PI” - Spock

Throw into the mix that the bulk of the real work of building an application is not the coding, but all the work surrounding that, as Udi Dahan points out in his post on The Fallacy of ReUse.

This is not to say that I don’t think we should continue to invest in building better and better tools. After all, the history of software development is about building better and better higher level tools to make developers more productive. I think the danger lies in trying to remove the discipline and traits that will always be required when using these tools to build applications.

Even when you can tell the computer what you want in human terms, and it figures it out, it’s important to still follow good software development principles, ensure quality checks, tests, etc…

The lesson for us programmers, I believe, is two-fold. First, we have to educate our stakeholders about how software production really works. Even if they won’t listen, a little knowledge and understanding here goes a long way. Be patient, don’t be condescending, and hope for the best. Second, we have to educate ourselves about the business in a deep manner so that we are seen as valuable business partners who happen to write the code that matters.

comments suggest edit

A little while ago I announced our plans for ASP.NET MVC as it relates to Visual Studio 2010. ASP.NET MVC wasn’t included as part of Beta 1, which raised a few concerns by some (if not conspiracy theories!) ;). The reason for this was simple as I pointed out:

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

Today I’m happy to announce that we’re done with the work I described and the installer is now available on CodePlex. Be sure to give it a try as many of the new VS10 features intended to support the TDD workflow fit very nicely with ASP.NET MVC, which ScottGu will describe in an upcoming blog post.

If you run into problems with the installer, try out this troubleshooting guide by Jacques, the developer who did the installer work, and do provide feedback.

You’ll notice that the installer says this is ASP.NET MVC 1.1, but as the readme notes point out, this is really ASP.NET MVC 1.0 retargeted for Visual Studio 2010. The 1.1 is just a placeholder version number. We bumped up the version number to avoid runtime conflicts with ASP.NET MVC 1.0. All of this and more is described in the Release Notes.

When VS10 Beta 2 comes out, you won’t need to download a separate standalone installer to get ASP.NET MVC (though a standalone installer will be made available for VS2008 users that will run on ASP.NET 3.5 SP1). A pre-release version of ASP.NET MVC 2 will be included as part of the Beta 2 installer as described in the …

Roadmap

Road Blur photo credit: arinas74 on stock.xchng

I recently published the Roadmap for ASP.NET MVC 2 which gives a high level look at what features we plan to do for ASP.NET MVC 2. The features are noticeably lacking in details as we’re deep in the planning phase trying to gather pain points.

Right now, we’re avoiding focusing on implementation details as much as possible. When designing software, it’s very easy to have preconceived notions about what the solution should be, even when we really don’t have a full grasp of the problem that needs to be solved.

Rather than guiding people towards what we think the solution is, I hope to focus on making sure we understand the problem domain and what people want to accomplish with the framework. That leaves us free to try out alternative approaches that we might not have considered before such as alternatives to expression based URL helpers. Maybe the alternative will work out, maybe not. Ideally, I’d like to have several design alternatives to choose from for each feature.

As we get further along the process, I’ll be sure to flesh out more and more details in the Roadmap and share them with you.

Snippets

One cool new feature of VS10 is that snippets now work in the HTML editor. Jeff King from the Visual Web Developer team sent me the snippets we plan to include in the next version. They are also downloadable from the CodePlex release page. Installation is very simple:

Installation Steps:

1. Unzip "ASP.NET MVC Snippets.zip" into "C:\Users\<username>\Documents\Visual Studio 10\Code Snippets\Visual Web Developer\My HTML Snippets", where "C:\" is your OS drive.
2. Visual Studio will automatically detect these new files.

Try them out and let us know if you have ideas for snippets that will help you be more productive.

Important Links:

comments suggest edit

One of the features contained in the MVC Futures project is the ability to generate action links in a strongly typed fashion using expressions. For example:

<%= Html.ActionLink<HomeController>(c => c.Index()) %>

Will generate a link to the Index action of the HomeController.

It’s a pretty slick approach, but it is not without its drawbacks. First, the syntax is not one you’d want to take as your prom date. I guess you can get used to it, but a lot of people who see it for the first time kind of recoil at it.

The other problem with this approach is performance as seen in this slide deck I learned about from Brad Wilson. One of the pain points the authors of the deck found was that the compilation of the expressions was very slow.

I had thought that we might be able to mitigate these performance issues via some sort of caching of the compiled expressions, but that might not work very well. Consider the following case:

<% for(int i = 0; i < 20; i++) { %>

  <%= Html.ActionLink<HomeController>(c => c.Foo(i)) %>

<% } %>

Each time through that loop, the expression is the same: c => c.Foo(i)

But the value of the captured “i” is different each time. If we try to cache the compiled expression, what happens?

So I started thinking about an alternative approach using code generation against the controllers and circulated an email internally. One approach was to code-gen action-specific action link methods. Thus the About link for the HomeController (assuming we add an id parameter for demonstration purposes) would be:

<%= HomeAboutLink(123) %>

Brad had mentioned many times that while he likes expressions, he’s no fan of using them for links and he tends to write specific action link methods just like the above. So what if we could generate them for you so you didn’t have to write them by hand?
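To make that concrete, here’s roughly what one of those hand-written, action-specific helpers might look like (a hypothetical sketch written as an HtmlHelper extension method, so in a view you’d call Html.HomeAboutLink(123)):

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class HomeControllerLinks {
  public static string HomeAboutLink(this HtmlHelper html, int id) {
    // The strong typing lives in this method's signature; internally it just
    // delegates to the ordinary string-based ActionLink helper.
    return html.ActionLink("About", "About", "Home", new { id }, null);
  }
}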

A couple hours after starting the email thread, David Ebbo had an implementation of this ready to show off. He probably had it done earlier for all I know; I was stuck in meetings. Talk about the best kind of declarative programming. I declared what I wanted roughly with hand waving, and a little while later, the code just appears! ;)

David’s approach uses a BuildProvider to reflect over the Controllers and Actions in the solution and generate custom action link methods for each one. There’s plenty of room for improvement, such as ensuring that it honors the ActionNameAttribute and generating overloads, but it’s a neat proof of concept.

One disadvantage of this approach compared to the expression based helpers is that there’s no refactoring support. However, if you rename an action method, you will get a compilation error rather than a runtime error, which is better than what you get without either. One advantage of this approach is that it performs fast and doesn’t rely on the funky expression syntax.

These are some interesting tradeoffs we’ll be looking closely at for the next version of ASP.NET MVC.

comments suggest edit

ASP.NET Pages are designed to stream their output directly to a response stream. This can be a huge performance benefit for large pages as it doesn’t require buffering and allocating very large strings before rendering. Allocating large strings can put them on the Large Object Heap which means they’ll be sticking around for a while.

However, there are many cases in which you really want to render a page to a string so you can perform some post processing. I wrote about one means using a Response filter eons ago.

However, recently, I learned about a method of the Page class I never noticed which allows me to use a much lighter weight approach to this problem.

The method in question is CreateHtmlTextWriter which is protected, but also virtual.

So here’s an example of the code-behind for a page that can leverage this method to filter the output before it’s sent to the browser.

using System.IO;
using System.Web.UI;

public partial class FilterDemo : System.Web.UI.Page
{
  HtmlTextWriter _oldWriter = null;
  StringWriter _stringWriter = new StringWriter();

  protected override HtmlTextWriter CreateHtmlTextWriter(TextWriter tw)
  {
    // Hold on to a writer hooked up to the real response stream...
    _oldWriter = base.CreateHtmlTextWriter(tw);
    // ...but hand the page a writer backed by our own StringWriter.
    return base.CreateHtmlTextWriter(_stringWriter);
  }

  protected override void Render(HtmlTextWriter writer)
  {
    // "writer" here is the one we created over the StringWriter.
    base.Render(writer);
    string html = _stringWriter.ToString();
    html = html.Replace("REPLACE ME!", "IT WAS REPLACED!");
    // Send the post-processed markup to the original response writer.
    _oldWriter.Write(html);
  }
}

In the CreateHtmlTextWriter method, we simply use the original logic to create the HtmlTextWriter and store it away in an instance variable.

Then we use the same logic to create a new HtmlTextWriter, but this one has our own StringWriter as the underlying TextWriter. The HtmlTextWriter passed into the Render method is the one we created. We call Render on that and grab the output from the StringWriter and now can do all the replacements we want. We finally write the final output to the original HtmlTextWriter which is hooked up to the response.

A lot of caveats apply in using this technique. First, as I mentioned before, for large pages, you could be killing scalability and performance by doing this. Also, I haven’t tested this with output caching, async pages, etc… etc…, so your mileage may vary.

Note, if you want to call one page from another, and get the output as a string within the first page, you can pass your own TextWriter to Server.Execute, so this technique is not necessary in that case.
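For example, a minimal sketch of that approach (the page path here is just a placeholder):

// Renders the target page and captures its output instead of streaming it
// to the response ("SomeOtherPage.aspx" is a placeholder path).
StringWriter writer = new StringWriter();
Server.Execute("SomeOtherPage.aspx", writer);
string html = writer.ToString();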

personal comments suggest edit

Being that it’s a glorious Memorial Day Weekend up here in the Northwest, my co-worker Eilon (developer lead for ASP.NET MVC) and I decided to go on a hike to Mt Si where we had a bit of a scary moment.

I first learned about Mt Si at the company picnic last year, seen behind me and Cody in this photo. I remember seeing the imposing cliff face and thinking to myself, I want to climb up there. I imagined the view would be quite impressive.

Mt Si is a moderately strenuous hike 8 miles round trip with an elevation gain of 3100 feet taking you to about 3600 feet, according to the Washington Trails Association website. Given that it is a very popular hike and that this was a three-day weekend, we figured we’d get an early start by heading over there at 7 AM.

That ended up being a good idea as the parking lot had quite a few cars already, but it wasn’t full by any means. This is a picture of the trail head which starts the hike off under a nice canopy of green.

Right away, the no-nonsense trail starts you off huffing uphill amongst a multitude of trees.

Along the way, there are the occasional diversions. For example, this one won me $10 as the result of a bet that I wouldn’t walk to the edge of the tree overhanging the drop off.

When you get to the top, there’s a great lookout with amazing views. But what caught our attention is a rock outcropping called the “Haystack”, which takes you up another 500 feet. Near the base of the Haystack is a small memorial for those who’ve died from plummeting off its rocky face. It’s not a trivial undertaking, but I demanded we try.

Unfortunately, there’s nothing in the above picture to provide a better sense of scale for this scramble. In the following picture you can see some people pretty much scooting down the steep slope on their butts.

Once they were down, we set off and got around two-thirds of the way up when I made the mistake of looking back and remarking about how much more difficult it was going to be going down. That started getting us nervous, because it’s always easier going up than coming down.

It would have probably been best if I hadn’t made that remark because the climb wasn’t really that difficult, but introducing a bit of nervousness into the mix can really sabotage one’s confidence, which you definitely want on a climb.

At that point, the damage was done and we decided we had enough and started heading back down. Better to try again another day when we felt more confident. At that moment, a couple heading down told us we were almost there and it wasn’t so bad. Our success heading back down and their comments started to bolster our confidence to the point where I was ready to head back up, until I noticed that my shoe felt odd.

What I hadn’t noticed while climbing on the steep face was that my sole had almost completely detached from my hiking boot during the climb. Fortunately, Eilon had some duct tape on hand allowing me to make this ghetto looking patch job.

At this point I had a mild panic because I worried that the duct tape would cause me to lose grip with my boots on the way down. And frankly, I was pissed off as well, as I’ve had these boots for a few years, but haven’t hiked in them all that often. What a perfect time for them to completely fall apart!

Fortunately, I didn’t have much problem climbing back down and we stopped at the first summit to take some pictures and have a brief snack.

Not having the guts today to climb the big rock, I scrambled up a much smaller one and got this great view of Mt Rainier in its full splendor.

The view from the top is quite scenic and using binoculars, I was able to check on my family back in Bellevue (joke).

Going back down was much quicker than the way up and we had a blast practically trail running the first part, until my other shoe gave out.

Guess the warranty must have run out yesterday. ;) Fortunately, Eilon, who was prepared with the duct tape, also had all-terrain sandals with him, which I wore the rest of the way. Next time, I think I’ll ditch the Salomon boots and try Merrells, which other hikers I ran into were wearing.

Despite the mishaps, the hike was really a fun romp in the woods and I highly recommend that anyone in the Seattle area give it a try. Go early to avoid the crowds. I doubled my $10 in an over/under bet where I took the over on 140 cars in the lot. We stopped counting at around 170 cars when we left.

This is one last look at Mt Si on our way back home. Eilon put together a play-by-play using Live Maps Bird’s Eye view (click for larger).

For more info on the Mt Si hike, check out the Washington Trails Association website.

asp.net, code, asp.net mvc comments suggest edit

This post is now outdated

I apologize for not blogging this over the weekend as I had planned, but the weather this weekend was just fantastic so I spent a lot of time outside with my son.

If you haven’t heard yet, Visual Studio 2010 Beta 1 is now available for MSDN subscribers to download. It will be more generally available on Wednesday, according to Soma.

You can find a great whitepaper which describes what’s new for web developers in ASP.NET 4, which is included.

One thing you’ll notice is that ASP.NET MVC is not included in Beta 1. The reason for this is that Beta 1 started locking down before MVC 1.0 shipped. ASP.NET MVC will be included as part of the package in VS10 Beta 2.

Right now, if you try and open an MVC project with VS 2010 Beta 1, you’ll get some error message about the project type not being supported. The easy fix for now is to remove the ASP.NET MVC ProjectTypeGuid entry as described by this post.

We’re working hard to have an out-of-band installer which will install the project templates and tooling for ASP.NET MVC which works with VS2010 Beta 1 sometime in June on CodePlex. Sorry for the inconvenience. I’ll blog about it once it is ready.

asp.net, code, asp.net mvc comments suggest edit

A while back, I wrote about Donut Caching in ASP.NET MVC for the scenario where you want to cache an entire view except for a small bit of it. The more technical term for this technique is probably “cache substitution” as it makes use of the Response.WriteSubstitution method, but I think “Donut Caching” really describes it well — you want to cache everything but the hole in the middle.
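As a quick refresher, cache substitution looks roughly like this (a minimal sketch): everything around the call is served from the output cache, but the callback runs on every request.

<%-- Inside an output-cached page or view: the surrounding output is cached,
     but the substitution callback executes on each request. --%>
<% Response.WriteSubstitution(context => DateTime.Now.ToString()); %>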

However, what happens when you want to do the inverse. Suppose you want to cache the donut hole, instead of the donut?

Photo credit: House of Sims Photostream

I think we should nickname all of our software concepts after tasty food items, don’t you agree?

In other words, what if you want to cache a portion of the view in a different manner (for example, with a different duration) than the entire view? It hasn’t been exactly clear how to do this with ASP.NET MVC.

For example, the Html.RenderPartial method ignores any OutputCache directives on the view user control. If you happen to use Html.RenderAction from MVC Futures which attempts to render the output from an action inline within another view, you might run into this bug in which the entire view is cached if the target action has an OutputCacheAttribute applied.

I did a little digging into this today and it turns out that when you specify the OutputCache directive on a control (or page for that matter), the output caching is not handled by the control itself. Rather, it appears that the compilation system for ASP.NET pages kicks in, interprets that directive, and does the necessary gymnastics to make it work.

In plain English, this means that what I’m about to show you will only work for the default WebFormViewEngine, though I have some ideas on how to get it to work for all view engines. I just need to chat with the members of the ASP.NET team who really understand the deep grisly guts of ASP.NET to figure it out exactly.

With the default WebFormViewEngine, it’s actually pretty easy to get partial output cache working. Simply add a ViewUserControl declaratively to a view and put your call to RenderAction or RenderPartial inside of that ViewUserControl. If you’re using RenderAction, you’ll need to remove the OutputCache attribute from the action you’re pointing to.

Keep in mind that ViewUserControls inherit the ViewData of the view they’re in. So if you’re using a strongly typed view, just make the generic type argument for ViewUserControl have the same type as the page.

If that last paragraph didn’t make sense to you, perhaps an example is in order. Suppose you have the following controller action.

public ActionResult Index() {
  var jokes = new[] { 
    new Joke {Title = "Two cannibals are eating a clown"},
    new Joke {Title = "One turns to the other and asks"},
    new Joke {Title = "Does this taste funny to you?"}
  };

  return View(jokes);
}

And suppose you want to produce a list of jokes in the view. Normally, you’d create a strongly typed view and within that view, you’d iterate over the model and print out the joke titles.

We’ll still create that strongly typed view, but that view will contain a view user control in place of where we would have had the code to iterate the model (note that I omitted the namespaces within the Inherits attribute value for brevity).

<%@ Page Language="C#" Inherits="ViewPage<IEnumerable<Joke>>" %>
<%@ Register Src="~/Views/Home/Partial.ascx" TagPrefix="mvc" TagName="Partial" %>
<mvc:Partial runat="server" />

Within that control, we do what we would have done in the main view and we specify the output cache values. Note that the ViewUserControl is generically typed with the same type argument that the view is, IEnumerable<Joke>. This allows us to move the exact code we would have had in the view to this control. We also specify the OutputCache directive here.

<%@ Control Language="C#" Inherits="ViewUserControl<IEnumerable<Joke>>" %>
<%@ OutputCache Duration="10000" VaryByParam="none" %>

<ul>
<% foreach(var joke in Model) { %>
    <li><%= Html.Encode(joke.Title) %></li>
<% } %>
</ul>

Now, this portion of the view will be cached, while the rest of your view will continue to not be cached. Within this view user control, you could have calls to RenderPartial and RenderAction to your heart’s content.

Note that if you are trying to cache the result of RenderPartial this technique doesn’t buy you much unless the cost to render that partial is expensive.

Since the output caching doesn’t happen until the view rendering phase, if the view data intended for the partial view is costly to put together, then you haven’t really saved much because the action method which provides the data to the partial view will run on every request and thus recreate the partial view data each time.

In that case, you want to hand cache the data for the partial view so you don’t have to recreate it each time. One crazy idea we might consider (thinking out loud here) is to allow associating output cache metadata to some bit of view data. That way, you could create a bit of view data specifically for a partial view and the partial view would automatically output cache itself based on that view data.

This would have to work in tandem with some means to specify that the bit of view data intended for the partial view is only recreated when the output cache is expired for that partial view, so we don’t incur the cost of creating it on every request.
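Going back to the hand-caching approach, here’s a rough sketch of what that might look like for the jokes example above (the cache key and duration are illustrative):

public ActionResult Index() {
  // Try the ASP.NET cache first so the partial's data is only rebuilt
  // when the cached copy expires.
  var jokes = (Joke[])HttpContext.Cache["jokes"];
  if (jokes == null) {
    jokes = new[] {
      new Joke {Title = "Two cannibals are eating a clown"},
      new Joke {Title = "One turns to the other and asks"},
      new Joke {Title = "Does this taste funny to you?"}
    };
    // Cache for roughly the same duration as the partial's OutputCache directive.
    HttpContext.Cache.Insert("jokes", jokes, null,
      DateTime.UtcNow.AddSeconds(10000), System.Web.Caching.Cache.NoSlidingExpiration);
  }
  return View(jokes);
}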

In the RenderAction case, you really do get all the benefits of output caching because the action method you are rendering inline won’t get called from the view if the ViewUserControl is outputcached.

I’ve put together a small demo which demonstrates this concept in case the instructions here are not clear enough. Enjoy!