personal

The internet access I had at my mother-in-law’s last time I was in Japan turned out to be a fluke. I am at a Japanese Manga and Internet cafe (because those three things go so well together) right now typing this out. I’ve received a lot of comments and questions via my blog and once I get to Hong Kong, I will do my best to answer.

No promises though as I hear that the pool at the hotel is nice and I do have three talks to prepare. I must admit that not having daily internet access is probably a good thing for me as I’m a total online junkie. :)

The next time we visit, I need to remember to bring more reading material. I brought two books on Poker and am tired of reading about it. ;)

Technorati Tags: japan, tokyo

dlr

This afternoon we released a refresh of our DLR/IronPython support for ASP.NET, now called “ASP.NET Dynamic Language Support”, on our CodePlex site.

This was originally part of our July 2007 ASP.NET Futures package, along with several other features. As updates to these features were made available, we would have liked to remove them from the package, but we wanted to wait till everything within the package was updated.

Well that time has come. This CodePlex release contains two exceedingly simple sample applications, one for WebForms and one for ASP.NET MVC. It’s compiled against the latest DLR assemblies, and our goal is to continue to push it forward fixing bugs here and there. Keep in mind that this initial refresh is pretty barebones and doesn’t contain everything that the original package contained because certain features (such as the project system) are still being updated.

I won’t go too deeply into the specifics of how to use it. Instead, be sure to check out David Ebbo’s whitepaper on IronPython and ASP.NET, which was written a while ago but is still mostly relevant. Also, Jimmy Schementi from the DLR team has written a nice, brief write-up on this release.

I have the pleasure of taking over as the PM for this feature (in MS parlance we’d say I “own” this feature now) which nicely complements my duties as the PM for ASP.NET MVC. If you’ve followed my blog, you know I have an interest in dynamic languages and now I can channel that interest into work time, rather than on my own time. :)

This initial release only has IronPython support, but IronRuby support will be coming soon. This gives me an opportunity to learn a bit about Python, and let me tell you, the fact that whitespace matters in this language can be nice within a normal code file, but a real pain within a view.

One nice thing about this implementation, above and beyond my old IronRuby prototype, is that it has true support for a Global.py file, the IronPython equivalent of Global.asax.cs. This allowed me to define my routes in IronPython directly in that file rather than reading them in from a separate file. I did implement some helper methods in C# that make it easy to define routes using a Python dictionary.
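To give a flavor of the idea, here is a minimal sketch in plain Python of routes defined with dictionaries. The map_route helper and the key names are hypothetical stand-ins for illustration, not the helper methods from the actual release:

```python
# Hypothetical sketch: route defaults expressed as Python dictionaries.
# map_route and the key names are illustrative, not the real helper API.

route_table = []

def map_route(url, defaults):
    """Register a URL pattern along with its default route values."""
    route_table.append({"url": url, "defaults": defaults})

map_route("{controller}/{action}/{id}",
          {"controller": "home", "action": "index", "id": ""})
```

A dictionary literal maps naturally onto the route-values dictionary that routing itself builds, which is part of what makes Python a comfortable fit here.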

personal

If you happen to be in Asia around October 8-10, I’ll be speaking at Tech-Ed Hong Kong. Come by and say hi. I’m giving three talks, one on each day.

Date         Time                  Talk
October 8    11:45 AM – 1:00 PM    ASP.NET MVC – An alternative approach to building Web Applications
October 9    11:00 AM – 12:15 PM   Developing Data Driven Applications Using ASP.NET Dynamic Data
October 10   9:30 AM – 10:45 AM    Write better designed code with Test Driven Development

I’m hoping to have a little time after some of my talks to do a little sight-seeing around Hong Kong. The trip to Hong Kong is actually a side trip from Japan, where my wife and kid will stay while I go and speak at this conference. I’m looking forward to the vacation, but wish I had scheduled the vacation part after the conference rather than before. Lesson learned!

Technorati Tags: conferences, aspnetmvc, tdd, dynamic data

mvc

First of all, I want to congratulate Jeff Atwood, Joel Spolsky, and their team for the release of StackOverflow. If you haven’t tried it out, I highly recommend giving it a shot. Be prepared, it’s addicting.

Besides my 959 reputation score (which is actually pretty weak), the other thing about StackOverflow that excites me is that it’s built using ASP.NET MVC. So far, Jeff has mostly praised the experience of using ASP.NET MVC, though he’s had a few pain points that I’m now well aware of. :)

I like StackOverflow so much that I asked Jeff to take a 10-15 minute slice of my ASP.NET MVC talk at PDC to talk about his experiences building StackOverflow using ASP.NET MVC. Having a demonstration of a real-world application using ASP.NET MVC will be a nice complement to my overview talk. For the rest of the talk, he’ll be my code monkey.

By the way, if you’re planning to attend PDC, please navigate to the link to my talk and leave a comment with any requests for things you’d love to hear me talk about. I am currently planning to give the general overview, but perhaps some of you want to hear more anecdotes from the product team, or more details about specific features.

Technorati Tags: aspnetmvc, stackoverflow, pdc2008

mvc

UPDATE: The MVC Futures assembly, Microsoft.Web.Mvc, is available on CodePlex.

Wanted to provide a quick heads up about the MvcFutures assembly within ASP.NET MVC CodePlex Preview 5. As mentioned in various places, this assembly contains various experimental features we are considering for future versions of ASP.NET MVC.

When we release the BETA for ASP.NET MVC, it will not automatically be included in the project template by the installer. We’ve included it in the various previews for convenience, but we want the BETA installer to be as close to the RTM installer experience as possible.

We will make sure that the assembly remains available on CodePlex. I just wanted to make you aware of this so there are no surprises regarding it when we release the Beta. Thanks!

Technorati Tags: aspnetmvc

mvc

This is one of them “coming of age” stories, about how a lowly method becomes a full-fledged Action in ASP.NET MVC. You might think the two are the same thing, but that’s not the case. Not just any method gets to take on the mantle of being an Action method.


Like any good story, it all begins at the beginning, with Routing. By default, one of the routes defined in the MVC project template has the following URL pattern:

{controller}/{action}/{id}

When a request comes in and matches that route, we populate a dictionary of route values (accessible via the RequestContext) based on this route. For example, suppose a request comes in for:

/home/list

We add the key “action” with the value “list” to the route values dictionary. (We’ve also added “home” as the value for “controller”, but that’s for another story. This is the story of the action.) At the heart of it, an action is just a string. That’s how it starts out, after all, as a substring of the URL.

Later on, when the request is handed off to MVC, MVC interprets the “action” route value as the action name. In this case, it knows that the request should be handled by the action “list”. Contrary to popular belief, this does not necessarily mean that a method named List will handle the request, as we’ll soon see.
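To make the mapping concrete, here is a toy parser for the default “{controller}/{action}/{id}” pattern. This is an illustration only; the actual routing engine is far more general than this sketch:

```csharp
using System;
using System.Collections.Generic;

// Toy illustration of how "{controller}/{action}/{id}" turns a request
// path into the route values dictionary described above. Not the real
// routing engine; just the idea.
public static class RouteValuesDemo
{
    public static Dictionary<string, string> Parse(string path)
    {
        var segments = path.Trim('/')
            .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

        var values = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        // Fall back to the route's defaults when a segment is missing.
        values["controller"] = segments.Length > 0 ? segments[0] : "home";
        values["action"]     = segments.Length > 1 ? segments[1] : "index";
        if (segments.Length > 2)
            values["id"] = segments[2];

        return values;
    }
}
```

Given “/home/list”, Parse returns a dictionary containing “home” for “controller” and “list” for “action”, matching the example above.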

Action Method Selection

Once we’ve identified the name of the action, we need to identify a method that can respond to that action. This is the job of the ControllerActionInvoker.

By default, the invoker simply uses reflection to find a public method on a class that derives from Controller which has the same name (case insensitive) as the current action.
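That default lookup can be sketched with a few lines of reflection. Treat this as a simplified illustration; the real ControllerActionInvoker does considerably more:

```csharp
using System;
using System.Linq;
using System.Reflection;

public class HomeController
{
    public string List() { return "the list view"; }
}

public static class ActionLookupDemo
{
    // Find a public instance method whose name matches the action name,
    // ignoring case -- the essence of the default behavior described above.
    public static MethodInfo FindActionMethod(Type controllerType, string actionName)
    {
        return controllerType
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .FirstOrDefault(m => string.Equals(
                m.Name, actionName, StringComparison.OrdinalIgnoreCase));
    }
}
```

Asking for the action “list” (or “LIST”) finds the List method; asking for an action with no matching method yields null.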

Like many things within this framework, you can tweak this default behavior.


ActionNameAttribute

Introduced in ASP.NET MVC CodePlex Preview 5, which we just released, applying this attribute to a method allows you to specify the action that the method handles.

For example, suppose you want an action named View; this would conflict with the View method of Controller. An easy way to work around this issue, without having to futz with routing or method hiding, is to do the following:

[ActionName("View")]
public ActionResult ViewSomething(string id)
{
  return View();
}

The ActionNameAttribute redefines the name of this action to be “View”. Thus this method is invoked in response to requests for /home/view, but not for /home/viewsomething. In the latter case, as far as the action invoker is concerned, an action method named “ViewSomething” does not exist.

One consequence of this is that if you’re using our conventional approach to locate the view that corresponds to this action, the view should be named after the action, not after the method. In the above example (assuming this is a method of HomeController), we would look for the view ~/Views/Home/View.aspx by default.

This attribute is not required on an action method. Implicitly, the name of a public method serves as the action name for that method.


ActionSelectionAttribute

We’re not done yet matching the action to a method. Once we’ve identified all methods of the Controller class that match the current action name, we need to whittle the list down further by looking at all instances of the ActionSelectionAttribute applied to those methods.

This attribute is an abstract base class for attributes that provide fine-grained control over which requests an action method can respond to. The API for this attribute is quite simple and consists of a single method.

public abstract class ActionSelectionAttribute : Attribute
{
  public abstract bool IsValidForRequest(ControllerContext controllerContext,
    MethodInfo methodInfo);
}

At this point, the invoker looks for any methods in the list which contain attributes which derive from this attribute and calls the IsValidForRequest() method on each attribute. If any attribute returns false, the method that the attribute is applied to is removed from the list of potential action methods for the current request.

At the end, we should be left with one method in the list, which the invoker then invokes. If more than one method can handle the current request, the invoker throws an exception indicating the problem. If no method can handle the request, the invoker calls HandleUnknownAction() on the controller.
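The whittling process can be sketched in isolation. The attribute and invoker below are simplified stand-ins for ActionSelectionAttribute and ControllerActionInvoker (the real IsValidForRequest takes a ControllerContext, not a raw verb string):

```csharp
using System;
using System.Linq;
using System.Reflection;

// Simplified stand-in for ActionSelectionAttribute: valid or not per request.
public abstract class SelectorAttribute : Attribute
{
    public abstract bool IsValidForRequest(string httpMethod);
}

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class OnlyVerbAttribute : SelectorAttribute
{
    private readonly string verb;
    public OnlyVerbAttribute(string verb) { this.verb = verb; }

    public override bool IsValidForRequest(string httpMethod)
    {
        return string.Equals(verb, httpMethod, StringComparison.OrdinalIgnoreCase);
    }
}

public class SampleController
{
    [OnlyVerb("GET")]  public string Edit() { return "render the form"; }
    [OnlyVerb("POST")] public string Edit(string form) { return "save"; }
}

public static class InvokerDemo
{
    public static MethodInfo SelectAction(Type controller, string action, string verb)
    {
        var candidates = controller
            .GetMethods(BindingFlags.Public | BindingFlags.Instance)
            .Where(m => string.Equals(m.Name, action, StringComparison.OrdinalIgnoreCase))
            // Drop any method whose selector attributes reject this request.
            .Where(m => m.GetCustomAttributes(typeof(SelectorAttribute), true)
                         .Cast<SelectorAttribute>()
                         .All(a => a.IsValidForRequest(verb)))
            .ToList();

        if (candidates.Count > 1)
            throw new InvalidOperationException("Ambiguous action: " + action);

        // The real invoker calls HandleUnknownAction() when nothing matches.
        return candidates.Count == 1 ? candidates[0] : null;
    }
}
```

Selecting “edit” for a POST yields the two-parameter overload; for a GET it yields the parameterless one, mirroring the behavior AcceptVerbsAttribute provides.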

The ASP.NET MVC framework includes one implementation of this base attribute, the AcceptVerbsAttribute.


AcceptVerbsAttribute

This is a concrete implementation of ActionSelectionAttribute which uses the current HTTP request’s method (aka verb) to determine whether or not a method is the action that should handle the current request.

This allows two methods with the same name (but different parameters, of course) to both be actions, each responding to a different HTTP verb.

For example, we may want two versions of the Edit method, one which renders the edit form, and the other which handles the request when that form is posted.

[AcceptVerbs("GET")]
public ActionResult Edit(string id)
{
  return View();
}

[AcceptVerbs("POST")]
public ActionResult Edit(string id, FormCollection form)
{
  // Save the item and redirect…
}

When a POST request for /home/edit is received, the action invoker creates a list of all methods of the controller that match the “edit” action name. In this case, we would end up with a list of two methods. Afterwards, the invoker looks at all of the ActionSelectionAttribute instances applied to each method and calls the IsValidForRequest() method on each. If each attribute returns true, then the method is considered valid for the current action.

For example, in this case, when we ask the first method if it can handle a POST request, it would respond with false because it only handles GET requests. The second method responds with true because it can handle the POST request and it is the one selected to handle the action.


One consequence to keep in mind when using helpers that generate URLs via our routing API is that these helpers take the action name, not the method name. So if I want to render the URL to the following action:

[ActionName("List")]
public ActionResult ListSomething()

Use “List” and not “ListSomething” as the action name.

<!-- WRONG! -->
<%= Url.Action("ListSomething") %>

<!-- RIGHT! -->
<%= Url.Action("List") %>

This is one reason the MVC team has been resistant to including helper methods, such as Url&lt;T&gt;(…), that use an expression to define the URL of an action. The action is not necessarily equivalent to the method on the class with the same name.


So in the end, an action is a logical concept that represents an event caused by the user (such as clicking a link or posting a form) which is eventually mapped to a method which handles that user event.

It’s convenient to think of an action as a method of the same name, but they are distinct concepts. A lowly method can become an action by the power of its own name (aka name dropping), but in this egalitarian framework, any method, no matter its name, can handle a particular action, by merely using the ActionNameAttribute.

Technorati Tags: aspnetmvc, routing

mvc, code

Download the MSI and Release notes here.

Last night we released ASP.NET MVC CodePlex Preview 5 on CodePlex. Be on the lookout for one of those famous epic blog posts from ScottGu describing the release. In the meanwhile, the release notes contain short write-ups of what has changed.

We didn’t originally plan to have another preview. However, we implemented a few significant chunks of functionality and were dying to get feedback so that we could incorporate it into the product before Beta. It helps that, with five or so of these interim releases behind us, we’ve become pretty efficient at producing them.

We plan to have our next release be our official Beta, which means we’ll have a lot more test passes to produce and run before we release the next one.

In the meanwhile, take the code for a test drive and let us know what you think. Some of the naming needs to be cleaned up, so you can expect some name changes and improvements to the API from here to Beta, along with a lot of bug fixes and a few more features. Naming classes is tough, so we appreciate good suggestions there. :)

One change that I think I forgot to mention in the release notes is that the Ajax helpers no longer accept inline script; they take method names instead. Those helpers are now all extension methods in their own namespace, which allows you to completely swap them out with ones of your own.

If you’re interested in more details about how our action method selection works, be sure to read my post entitled How a Method Becomes an Action. Be sure to keep an eye on Brad Wilson’s blog too, as he put in some work on this feature and will describe the view engine changes.

UPDATE: Brad just blogged about partial rendering and view engines in ASP.NET MVC.

Technorati Tags: aspnetmvc

tdd, code

I admit, up until now I’ve largely ignored the BDD (Behavior Driven Development) Context/Specification style of writing unit tests. It’s been touted as a more approachable way to learn TDD (Test Driven Development) and as a more natural transition from user stories to the actual code design. I guess my hesitation to give it a second thought was that I felt I didn’t need a more approachable form of TDD.

Recently, my Subtext partner in crime, Steve Harman, urged me to take a fresh look at BDD Context/Specification style tests. I trust Steve’s opinion, so I took another look, and in doing so, the benefits of this approach dawned on me. I realized that it wasn’t BDD itself I didn’t like; after all, I did enjoy writing specs using minispec and IronRuby. The part I didn’t really like was the .NET implementations of this style. Keep in mind that I do not claim to be an expert in TDD or BDD. I’m just a student, and these are just my observations. I’m sure others will chime in and provide corrections that we can all learn from.

SpecUnit.NET example

For example, let’s take a look at one example pulled from the sample project of Scott Bellware’s SpecUnit.NET project, which provides extensions supporting the BDD-style use with .NET unit testing frameworks and has really pushed this space forward. I trimmed the name of the class slightly by removing a couple articles (“the” and “an”) so it would fit within the format of my blog post.

[Concern("Funds transfer")]
public class when_transfering_amount_greater_than_balance_of_the_from_account
  : behaves_like_context_with_from_account_and_to_account
{
  private Exception _exception;

  protected override void Because()
  {
    _exception = ((MethodThatThrows)delegate
    {
      _fromAccount.Transfer(2m, _toAccount);
    }).GetException();
  }

  [Observation]
  public void should_not_allow_the_transfer()
  {
    _exception.ShouldNotBeNull();
  }

  [Observation]
  public void should_raise_System_Exception()
  {
    _exception.ShouldBeOfType(typeof(Exception));
  }
}
The Because method contains the code with the behavior we’re interested in testing. The two methods annotated with Observation are the specifications. Notice that the names of the classes and methods are meant to be human readable. The output of running these tests removes the underscores and reads like a specification document. It’s all very cool.

What I like about this approach is there’s a crisp focus on having each test class focused on a single behavior, in this case transferring a balance from one account to another. In the past, I might have written something like this as two test methods (which led to duplicating code or putting code in some generic Setup method that seems detached from what I’m trying to test) or as one method with two asserts. This approach helps you think about how to organize tests along the lines of your objects’ responsibilities.

What I don’t like about it, and I admit this is really just a nitpick, is that it looks like someone’s keyboard puked underscores all over the place. I feel like having to encapsulate each observation as a method adds a lot of syntactic overhead when I’m trying to read this class from top to bottom. Maybe that’s just something you get used to.

MSpec example

Switching gears, let’s look at a different example by Aaron Jensen. This is an experiment in which he tried a very different approach. Look at this code sample…

public class Transferring_between_from_account_and_to_account
{
  static Account fromAccount;
  static Account toAccount;

  Context before_each = () =>
  {
    fromAccount = new Account { Balance = 1m };
    toAccount = new Account { Balance = 1m };
  };

  When the_transfer_is_made = () =>
    fromAccount.Transfer(1m, toAccount);

  It should_debit_the_from_account_by_the_amount_transferred = () =>
    fromAccount.Balance.ShouldEqual(0m);

  It should_credit_the_to_account_by_the_amount_transferred = () =>
    toAccount.Balance.ShouldEqual(2m);
}

There’s still the underscore porn, but it does read a little more like prose from top to bottom, if you can get yourself to ignore that funky operator right there. =()=> Whoa!

When I complained to Steve about all the underscores in these various approaches, he suggested that, being a fan of the more Zen-like Ruby language, he wasn’t bothered by them. I didn’t buy that, as part of the aesthetic of Ruby is its clean, DRY minimalism. Yes, it uses underscores, but it doesn’t generally abuse them. Let’s take a look at a BDD example using RSpec and Ruby. This is an example of a spec in progress from Luke Redpath… (forgive the poor syntax highlighting. I need a Ruby CSS stylesheet. :)

context "A user (in general)" do
  setup do
    @user = User.new
  end

  specify "should be invalid without a username" do
    @user.errors.on(:username).should_equal "is required"
    @user.username = 'someusername'
  end

  specify "should be invalid without an email" do
    @user.errors.on(:email).should_equal "is required"
    @user.email = ''
  end
end

One thing to notice is that we’re not using separate methods and classes here. Ruby doesn’t force you to put code in classes. You can just execute a script top-to-bottom. In this case, the code sets up a context block and within that block there is a setup block and a couple of specify blocks. There’s no need to factor a specification into multiple classes and methods.

Also notice that the context and specifications are described using strings! Now we’re getting somewhere. If it’s meant to be human readable, why don’t we use strings instead of the underscore porn? On Twitter, many blamed the ceremony and vagaries of C# for preventing this approach. While I agree that Ruby has less ceremony than C#, I also think C# doesn’t get its fair shake sometimes. We can certainly strip a C# approach down to the bare metal, with as little syntactic noise as possible, right?


So in true Program Manager at Microsoft fashion, I spec’d out a rough idea of the syntax I would like to use with BDD. I then showed it to Brad Wilson, asking him how I could make it work in xUnit.NET. In true Developer fashion, he ran with it and made it actually work. This blog post is the part where I try to take all the credit. That’s what PMs do at Microsoft: write specs, then take credit for the hard work the developers do in bringing the specs to life. ;) (I kid, I kid)

Here’s an example using this syntax…

public void PushNullSpecifications()
{
  Stack<string> stack = null;

  "Given a new stack".Context(() => stack = new Stack<string>());

  "with null pushed into it".Do(() => stack.Push(null));

  "the stack is not empty".Assert(() => Assert.False(stack.IsEmpty));
  "the popped value is null".Assert(() => Assert.Null(stack.Pop()));
  "Top returns null".Assert(() => Assert.Null(stack.Top));
}

A few things to notice. First, the entire spec is in a single method, which reduces some of the syntactic noise of splitting the spec into multiple methods. Secondly, we’re using strings here to describe the specification and context, rather than method names with underscores for the human readable part.

Lastly, and most importantly, while it may look like we’re committing the sin of multiple asserts in a single test, this is not the case. Via the power of the xUnit.NET extensibility model, Brad was able to generate a test per assertion. That’s why the Assert method (should it be Observe or Fact?) takes in a lambda. We can return these closures to xUnit.NET and it will wrap each one in a separate test. Here’s another look at the same method with some comments to highlight how similar this is to the previous examples. (UPDATE: I also changed the asserts to use the Should style extension methods to demonstrate what it could look like.)

public void PushNullSpecifications()
{
  Stack<string> stack = null;

  // equivalent to before-each
  "Given a new stack".Context(() => stack = new Stack<string>());

  // equivalent to Because()
  "with null pushed into it".Do(() => stack.Push(null));

  // equivalent to [Observation]
  "the stack is not empty".Assert(() => stack.IsEmpty.ShouldBeFalse());
  // equivalent to [Observation]
  "the popped value is null".Assert(() => stack.Pop().ShouldBeNull());
  // equivalent to [Observation]
  "Top returns null".Assert(() => stack.Top.ShouldBeNull());
}

Keep in mind that, at this point, this is merely a proof-of-concept sample that will be included with the samples project in the next version of xUnit.NET, and by the time you read this sentence it may have changed already. You can download this particular changeset here.

The following is a screenshot of the HTML report generated by xUnit.NET when using this syntax, which Brad sent me today.

Despite it being a sample, I tried to give it a catchy name in case this is intriguing to others and worth iterating on to make it better (not to mention that I love putting the prefix “Sub” in front of everything.)

Possible next steps would be to add all the Woulda, Coulda, Shoulda extension methods so popular with this style of testing. For example, that would allow you to replace Assert.False(stack.IsEmpty) with stack.IsEmpty.ShouldBeFalse(). For those of you practicing BDD, I’d be interested in hearing your thoughts, objections, etc… concerning this approach.
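For a flavor of what those might look like, here is a minimal sketch of two such extension methods. These are illustrative only, not the actual implementations from any of the libraries mentioned:

```csharp
using System;

// Illustrative "Should" style assertion extensions, not taken from any
// of the libraries discussed above.
public static class ShouldExtensions
{
    // Throws when the value is true, so a failing spec surfaces as an exception.
    public static void ShouldBeFalse(this bool value)
    {
        if (value)
            throw new InvalidOperationException("Expected false, but was true.");
    }

    public static void ShouldBeNull(this object value)
    {
        if (value != null)
            throw new InvalidOperationException("Expected null, but was: " + value);
    }
}
```

With these in scope, Assert.False(stack.IsEmpty) reads as stack.IsEmpty.ShouldBeFalse().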

For completeness’ sake, here’s another syntax I proposed to Brad. He mentioned it was similar to something else he’s seen, which he might port over to xUnit.NET.

public void When_Transferring_To_An_Account()
{
  Account a = null;
  Account b = null;

  Where("both accounts have positive balances", () => {
    a = new Account { Balance = 1 };
    b = new Account { Balance = 2 };
  });

  When("transfer is made", () =>
    a.Transfer(1, b));

  It("debits account by amount transferred", () => a.Balance.ShouldBe(0));
  It("credits account by amount transferred", () => b.Balance.ShouldBe(3));
}

For those of you completely new to BDD, check out Scott Bellware’s Code Magazine article on the subject.

Technorati Tags: tdd, bdd

personal

As Scott wrote last week, using a punny title I have to admire, he and I (among many others) were both the subject of a DoS (Denial of Service) attack. Looking through my logs, it looks to actually be a DDoS (Distributed Denial of Service) attack coming from multiple IP addresses.

The attack appears to actually be an attempt at a SQL Injection attack, but for his blog, which stores its data in XML files, that is entirely pointless. For my blog, which doesn’t do any inline SQL, it’s also mostly pointless. So far, the SQL injection part of the attack has failed, but it has succeeded in pegging my CPU. Maybe that’s the actual intended goal. Only the attacker knows.

LogParser Queries

The first clue (besides my site being down) is that my log file for today is huge at 9:00 AM.


The next step is to run some queries against my logs using the fantastic LogParser tool. The post entitled Forensic Log Parsing with Microsoft’s LogParser is a great resource for constructing queries, though its focus tends to be more on investigating an actual intrusion. The queries I need are for discovering what kind of DoS attack I’m experiencing. Here’s the query I’m using so far…

  logparser "SELECT c-ip, COUNT(*), STRLEN(cs-uri-query) AS Length, cs-uri-query
  FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex080822.log
  GROUP BY Length, cs-uri-query, c-ip
  HAVING Length > 500
  ORDER BY Length DESC" -rtp:-1 > long-query.txt

Note that I’m running this against a single day’s log file. I could use a wildcard and run it against all my log files. The very last snippet, > long-query.txt, pipes the output to a text file. Here’s a snippet of one of the query strings I’m seeing:


These query strings are all very long. Interestingly enough, there’s no smooth transition in length; for example, there are no query strings of length 500 to 1000.

URL Scan

I then went and installed URLScan 3.0 Beta, which Scott wrote about, and went into the configuration file (located at C:\WINDOWS\system32\inetsrv\urlscan\UrlScan.ini by default) and changed the following setting near the bottom:


From its default of 2048 to another smaller value.

The other setting I changed is to allow dots in the path because I have many URLs that contain dots.
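For reference, here is a hedged sketch of what those two UrlScan.ini entries look like. The setting names come from UrlScan 3.0’s documented options (MaxQueryString defaults to 2048; AllowDotInPath defaults to 0); the lowered value below is only an example, not necessarily the one I actually picked:

```ini
[RequestLimits]
; Lower the maximum query string length from its default of 2048.
MaxQueryString=1024

[Options]
; Permit URLs that contain dots in the path.
AllowDotInPath=1
```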


Technorati Tags: UrlScan, IIS, DoS, DDoS, Security

code, mvc

A commenter on my last post asks the following question:

What is the difference between a beta, a CTP, a fully-supported out of band release, an RTM feature, and a service pack?

The answer you get will differ based on who you ask, but I’ll give you my two cents on what these terms mean.


Beta

Let’s start with Beta. A great starting point is this post by Jeff Atwood entitled Alpha, Beta, and Sometimes Gamma.

The software is complete enough for external testing – that is, by groups outside the organization or community that developed the software. Beta software is usually feature complete, but may have known limitations or bugs. Betas are either closed (private) and limited to a specific set of users, or they can be open to the general public.

With the ASP.NET MVC project, all features we plan to implement for RTM should be complete for our Beta. However, the Beta period can influence this and if it seems extremely important, we may take on small DCRs (Design Change Requests).


CTP

CTP stands for Community Technology Preview. It’s generally an incomplete preview of a new technology in progress. These usually come out before beta and are a way to gather feedback from the community during the development of a product. This is similar to an Alpha release per Jeff’s hierarchy, except that at Microsoft, we generally do put CTPs in a public location.

With the ASP.NET MVC project, we no longer use the term CTP and simply use the term “Preview”. I think this is due to running out of our TLA (Three Letter Acronym) budget for the year. Our previews do still undergo a QA test pass and are released to the ASP.NET website.

Daily Builds / Interim Releases

The commenter didn’t ask about this, but I thought I would mention it. In many open source projects, you can get a daily build of the software directly from the source code repository. For example, with Subtext, if you want to grab the most recent build, you can go to our builds archive. A daily build is really for those who like to play with fire, as daily builds usually are not tested and could represent work in progress that isn’t working at all.

The closest thing the ASP.NET MVC team has to this is our periodic “interim release” (a term we just made up), which is pushed out to CodePlex and not placed on the ASP.NET website because of the more mainstream nature of that site.

As much as these CodePlex releases are for the cutting-edge audience, being Microsoft, we can’t simply put daily builds out there and say you’re on your own. At least not yet. So these CodePlex builds are sanity checked by our QA team and by me, but they do not undergo a full test pass like our Preview releases do. This is an area of experimentation for the ASP.NET team and, so far, it is proving successful.

Fully Supported Out-of-Band release

Internally, we usually call these OOB releases (pronounced “oob” like it’s spelled).

A Fully Supported Out-of-Band release is a release that is not part of the Framework (i.e. it’s not included in an installation of the .NET Framework), but is fully supported as if it were. For example, you can call up PSS (Microsoft’s Tech Support) for support on a fully supported OOB release.

One example of this was “Atlas” which later became Microsoft Ajax and was rolled into ASP.NET 3.5. ASP.NET MVC 1.0 will be an example of an OOB release.

RTM and RTW release

RTM stands for “Released to Manufacturing” and is a throwback to the days when software was mostly released on CDs. When a project went “Gold”, it was released to manufacturing, who then burned a bunch of CDs and packaged them up to be put on store shelves. True, this still goes on today, believe it or not, but this mode of delivery is on the decline for certain types of software.

RTW is a related term that stands for “Released to Web” which is more descriptive of how software is actually shipped these days. For example, while we like to use the term RTM internally out of habit, ASP.NET MVC will actually be RTW.

Service Pack

A Service Pack (or SP) is simply an RTM (or RTW) release of fixes and/or improvements to some software. It used to be that SPs rarely included new features, but it seems to be the norm now that they do. Service Packs tend to include all the hotfixes and patches released since the product originally was released, which is convenient for the end user in not having to install every fix individually.

Technorati Tags: beta, ctp, alpha, rtw, rtm

mvc

I wanted to clear up a bit of confusion I’ve seen around the web about ASP.NET MVC and the .NET Framework 3.5 Service Pack 1. ASP.NET MVC was not released as part of SP1. I repeat, ASP.NET 3.5 SP1 does not include ASP.NET MVC.

What was released with SP1 was the ASP.NET Routing feature, which is in use by both ASP.NET MVC and Dynamic Data. The Routing feature is my first Framework RTM feature to ship at Microsoft! We also shipped a bunch of other features such as Dynamic Data, and this short list of breaking changes.

I hope that clears things up and I apologize for the confusion.

And for my next feat, I’m going to try and read your mind, oooooh! Right now, you’re thinking something along the lines of,

Ok, so ASP.NET MVC didn’t ship as part of SP1. When is it going to ship?!

Good question! Scott Hanselman once quipped that it would ship in a month that ends in “-ber”. He also recently quipped,

Anyway, Phil has always said that MVC is on its own schedule and will ship when its done. Possibly when Duke Nukem Forever ships.

That Scott, he’s so full of quips. ;)

In any case, he’s right in that MVC is pretty much on its own schedule since the first RTM version will be a fully supported out-of-band release, much like Atlas was back in the day.

The MVC team really doesn’t want to rush the first release. We’re taking the time to do the best we can in laying the groundwork for future releases. My hope is that we’ll have very few, if any, moments where we want to make a breaking change because we didn’t provide the right amount of extensibility.

At the same time, we also really want to get ASP.NET MVC in your hands in an RTM form soon so you can start using it for your clients who are uncomfortable working with a beta technology. Trust me, we are not in the business of the “perpetual beta” and are working towards an RTM. As Scott pointed out, our hope is to get it out before the end of the year. But as most of you know about how software scheduling works, anything can happen between now and then.


As we move towards the tail end of the development cycle, we’ve been pushing hard to get our bug/approved change request count down, which I recently twittered about. I asked Carl, our tester, to print out an Excel graph of our bug count over time. It feels really good to walk by his office every day and see the line trending down towards zero (though occasionally, it ticks up a bit). I think it’s a huge motivator to try and fix and close out work items.

At the same time, this graph is for our benefit only and not something we’re being evaluated on by any managers, which is extremely important. One of the dangers of any metric is that developers are smart and they’ll do what they can to optimize the metric. For example, the danger with this metric is that we might be tempted to not log feature requests and bugs. Joel Spolsky wrote about this phenomenon when measuring the performance of knowledge workers a while back,

But in the absence of 100% supervision, workers have an incentive to “work to the measurement,” concerning themselves solely with the measurement and not with the actual value or quality of their work.

Since we’re the only ones who care about this graph (nobody is looking over our shoulder) and QA is very motivated to find bugs, I think it’s safe to use as a fun source of motivation. For the most part, watching the graph move towards zero feels good. Those are the metrics I like, the ones that inspire positive feelings among the team and a sense of forward motion and momentum. :)

Tags: aspnetmvc, aspnet, schedule, mvc

In Preview 2 or Preview 3 of ASP.NET MVC (I forget which), we introduced the concept of Action Filters. Sounds much more exciting than your run-of-the-mill LayOnTheCouchMunchingChipsWatchingInfomercialsFilter, which I originally proposed to the team. Thankfully, that was rejected.

An action filter is an attribute you can slap on an action method in order to run some code before and after the action method executes. Typically, an action filter represents a cross-cutting concern to your action method. Output caching is a good example of a cross-cutting concern.

In CodePlex Preview 4 of ASP.NET MVC, we split out our action filters into four types of filters, each of which is an interface.

  • IAuthorizationFilter
  • IActionFilter
  • IResultFilter
  • IExceptionFilter


Authorization Filters

Authorization filters run before any of the action filters and allow you to cancel the action. If you cancel the action, you can set the ActionResult instance you want rendered in response to the current request.

There should be very few cases (hopefully) that you need to write such a filter of your own. In those rare cases when you do, you’ll be glad to have this interface around.


Action Filters

Action filters allow you to run code before and after an action method is called, but before the result of the action method is executed. This effectively allows you to hook into the rendering of the view, for example.

In the “before” method (OnActionExecuting), you can cancel the action and even supply an action result of your own instead. If you cancel the action, no filters further up the stack are executed; the invoker then calls the “after” method of every action filter whose “before” method had already run (except for the filter that canceled the action).

In the “after” method (OnActionExecuted), you can’t cancel the action (it already ran and we don’t have an ITimeMachineFilter implemented yet), but you can replace or modify the action result before it gets executed.

If an exception was thrown by another action filter or by the action method itself, you can examine that exception from your filter. Your filter can indicate that it handles the exception (seriously, only do this if your filter really can handle it, as it’s generally a bad thing to handle an exception you shouldn’t be handling), in which case the action result will still get executed. If the exception propagates up instead, the result will not get executed.
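To make that concrete, here’s a minimal sketch of an action filter implementing IActionFilter directly. The class name, the messages, and the choice of InvalidOperationException are mine for illustration, and I’m assuming a ContentResult for brevity:

```csharp
using System;
using System.Web.Mvc;

// Hypothetical sketch showing both halves of IActionFilter.
// It derives from FilterAttribute so it can be applied as an attribute.
public class SampleActionFilterAttribute : FilterAttribute, IActionFilter {
    public void OnActionExecuting(ActionExecutingContext filterContext) {
        // Setting a result here cancels the action and short-circuits
        // any filters further up the stack.
        if (!filterContext.HttpContext.Request.IsAuthenticated) {
            filterContext.Result = new ContentResult { Content = "Please log in." };
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext) {
        // Only claim an exception your filter genuinely knows how to recover from.
        if (filterContext.Exception is InvalidOperationException) {
            filterContext.ExceptionHandled = true;
            // The result supplied here will still get executed.
            filterContext.Result = new ContentResult { Content = "Recovered." };
        }
    }
}
```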


Result Filters

Result filters are similar to action filters, except they run after the action method has executed, but before the result returned from the action method has been executed. The “before” method is called OnResultExecuting and the “after” method is called OnResultExecuted.


Exception Filters

The exception filters are all guaranteed to run after all of the action filters and result filters have run. Even if one exception filter indicates that it can handle the exception, the remaining exception filters still run. This is useful for logging scenarios, where you want a filter to always run, no matter what happens, so it can log exceptions.

One interesting thing to note is that exception filters run after result filters. So what can you do from an exception filter? Well, we give you one last-ditch chance to render something to the user by allowing you to set the action result in the exception filter. If that action result throws an exception, you’re SOL and the exception filter does not handle that exception. Well, you’re not totally SOL. The normal ASP.NET web.config settings for custom errors will kick in if you set them.
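For illustration, here’s a hedged sketch of what such a last-ditch exception filter might look like. The class name and error message are hypothetical, and I’m logging to Trace only to keep the sketch self-contained:

```csharp
using System.Diagnostics;
using System.Web.Mvc;

// Hypothetical exception filter: always logs, and provides a
// last-ditch result if nothing else handled the exception.
public class LogErrorsAttribute : FilterAttribute, IExceptionFilter {
    public void OnException(ExceptionContext filterContext) {
        // Runs whether or not another filter already handled the exception,
        // which is what makes exception filters useful for logging.
        Trace.WriteLine(filterContext.Exception);

        if (!filterContext.ExceptionHandled) {
            filterContext.ExceptionHandled = true;
            // One last chance to render something to the user. If *this*
            // result throws, the normal ASP.NET custom errors kick in.
            filterContext.Result = new ContentResult {
                Content = "Sorry, something went wrong."
            };
        }
    }
}
```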

Writing Custom Filters

To write a custom filter, you simply need to create an attribute (aka a class that inherits from FilterAttribute) that also implements one of the four interfaces I mentioned.

It turns out that we think the most common case for custom filters will be those that implement IActionFilter and/or IResultFilter. To support the common case, we included a base attribute, ActionFilterAttribute, which implements both of these interfaces. Yeah, the name isn’t exactly accurate, but we tend to think of action filters as really action and action result filters.
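For example, a hypothetical timing filter (the class name and response header are mine, not part of the framework) might override hooks from both halves of the base class:

```csharp
using System.Diagnostics;
using System.Web.Mvc;

// Hypothetical sketch: times everything from the start of the action
// method to the end of result execution, using ActionFilterAttribute's
// virtual methods rather than implementing the interfaces directly.
public class TimeRequestAttribute : ActionFilterAttribute {
    private Stopwatch _stopwatch;

    public override void OnActionExecuting(ActionExecutingContext filterContext) {
        _stopwatch = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext) {
        _stopwatch.Stop();
        filterContext.HttpContext.Response.AppendHeader(
            "X-Elapsed-Ms", _stopwatch.ElapsedMilliseconds.ToString());
    }
}
```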

For the other two filter types, we did not include a base attribute type. To write your own authorization filter, you simply implement IAuthorizationFilter. For example, here’s a filter I wrote the other day which we will probably include in the MvcFutures assembly. Apply this filter to an action and it will perform request validation of potentially insecure input. (Side Note: This validation is on by default in ASP.NET WebForm applications, but not in ASP.NET MVC applications because it’s implemented by the Page class, which runs too late.)

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, 
  Inherited = true, AllowMultiple = false)]
public sealed class ValidateInputAttribute : FilterAttribute
    , IAuthorizationFilter {
  public void OnAuthorization(AuthorizationContext filterContext) {
    // Turn ASP.NET request validation on for this request.
    filterContext.HttpContext.Request.ValidateInput();
  }
}
While we did not include a base attribute for these filters, we did include concrete implementations of these interfaces. For example, the AuthorizeAttribute is a concrete implementation of an authorization filter. You can (er…will be able to) inherit from this attribute if you want, but you can also simply implement IAuthorizationFilter yourself.

Why Four Filter Types?

We debated this a long time. We could have stuck with just two interfaces, IActionFilter and IResultFilter, and handled all cases.

The problem we ran into is that for attributes that perform some sort of authentication check, you want to be absolutely sure it runs before any of the action filters. And it’s very easy to get this wrong by accident even if you know what you are doing.

The type of thing we wanted to avoid was accidentally running the output cache filter before the authorization filter. That’s a recipe for an information disclosure bug, potentially displaying information to someone who shouldn’t have access to see it, such as photos of your hair piece collection (why do you have so many?). So we decided that there ought to be four distinct filter phases in the life of a controller action: Authorization, Action Execution, Result Execution, and Exception Handling.

If you write an authorization filter, it is guaranteed to run before any other action filters.

Keep in mind though, that these phases merely help guide filter writers into doing the right thing. Because the MVC framework is all about leaving you in control, it is still possible to get it all wrong. For example, I could write a custom output caching filter that implements IAuthorizationFilter and thus runs at the wrong time. Please don’t do this. Code responsibly.

Technorati Tags: aspnetmvc,aspnet,action filters

subtext

One feature of Windows Live Writer that Subtext supports is the ability to edit your post’s URL slug. What is the URL slug associated with a blog post, you ask?

Take a quick look in the address bar and you should notice that the URL ends with “editing-post-slugs.aspx”. That “editing-post-slugs” part is the post slug. It’s a human-friendly URL portion that identifies this blog post, as opposed to using some integer id.

For a long time, Subtext had the ability to automatically convert your blog post title into friendlier URLs. However, as with most automatic efforts, there are cases where it falls a bit short.

For example, suppose I started writing the following post with the following title:


When I post it, the URL ends up being a bit ugly, though Subtext does give a good faith effort.


With Windows Live Writer, there’s a little double hash mark at the bottom that you can click to expand, providing more options. In the Slug: field, enter a cleaner URL.


Now when you publish this post, the URL will end with the slug that you specified.

If you use the Subtext Web Admin to post, we’ve had this feature all along in the Advanced Options section. It’s the Entry Name field (which I think we should call Entry Name Slug, since Slug seems to be the standard term for this).

Of course when we come out with our MVC version, we can get rid of that annoying .aspx at the end. :)

Technorati Tags: subtext,windows live writer,wlw

personal

It’s been a long time coming, but we are finally ready to release Subtext 2.0. As I mentioned in April (was it that long ago!?), this is scaled down a bit from our original 2.0 plans. But even so, we have a lot of new goodness in here. It’s not just a bug fix release, though there are plenty of those too.


With this release, Subtext has top notch support for Windows Live Writer thanks to some check-ins from Tim Heuer.

  • Enhanced MetaWeblog API implementation to support providing a “slug” URL name for the post.  This gives the user the option to use the default URL naming, the “auto-friendly” or now to override that with your own slug name.
  • Fixed a bug in the SiteMap handler for blogs not hosted at root domains.  Would love people to test this out.
  • Added support for WordPress API functions of: newPage, editPage, getPages, newCategory
  • Simple modification to the Windows Live Writer manifest to prevent those who think they can future post :-)
  • Tag-based RSS syndicator

Other highlights

  • New CSS-Based Admin Design That Makes Better Use of Space
  • Ability to set a separate skin for mobile devices
  • Streamlined Installation Process. I tried to remove unnecessary steps and make this more robust.
  • Support for Enclosures (See Simo’s great post on this for more details)
  • CSS and JS optimizations (Simo has more interesting details on this).
  • Setting a date in the future for publishing posts (again, Simo has more details).
  • Login to your blog using OpenID, as well as use your blog as an OpenId Delegate

Notes for new installations

The install package includes a default Subtext2.0.mdf file for SQL Server Express.

  1. If you plan to run your blog off of SQL Express, installation is as easy as copying the install files to your webroot. If you’re not using SQL Express, but plan to use SQL Server 2005, you can attach to the supplied .mdf file and use it as your database.

Notes for upgrading

Feel free to delete the database files in the App_Data folder of the install package. They only apply to new installs.

We also include a zip file with just the SQL upgrade scripts. This is sometimes useful for those who run into problems with the upgrade procedure.

Full upgrade notes are on the Subtext project website.

So what’s next for Subtext?

The Subtext team is fired up to get their feet wet using ASP.NET MVC, and I can’t blame them. So at this point, we’re starting preliminary planning work for Subtext 3.0, the next major version of Subtext, which will be a ground-up rewrite, pulling in as much code from 2.0 as possible along the way, of course.

But that doesn’t mean we’re abandoning the 2.0 line immediately. I would expect to see several small incremental releases of the 2.* line even as we start on 3.0 with fires lit under our butts. Subtext 3.0 is in the very early stages of planning, taking a long-term look into the future.

After all, there’s still a lot of infrastructure decisions to be made, as well as requirements gathering. In what ways do we want to be just like Subtext 2.0? In what ways do we want to completely change the architecture?

Some of the decisions we need to make, just as a start:

  1. Where do we host? Do we stick with SourceForge or go elsewhere?
  2. What data access layer/ORM tooling should we use?
  3. What DI framework do we choose?
  4. What do we use for communication and documentation?
  5. What should our database design look like?
  6. Should we change how we handle multi-tenancy?

In any case, it’s been a fun ride so far, and I hope we can keep our momentum going in producing a great blogging platform for ASP.NET.

And before I forget, here’s the download page link.

Tags: subtext, mvc

In his Practical Review of ASP.NET MVC, Josh Charles provides a helpful review of ASP.NET MVC from a Rails developer’s perspective. It seemed fair and balanced, and the end result is that there’s room for improvement, which we’re taking to heart.

However, that’s not the part that caught my attention. He mentioned that he wrote a cycle method but couldn’t write it as an extension method to HtmlHelper.

this was an instance method that would take two strings and return the one that it didn’t return the last time it was called. In my templates, I used this to change the classes for each row of data, to give them different background colors. I considered writing an extension method to the Html object used for other Html operations in the view page, but this method specifically required the use of an additional private variable, so that would not work.

If you don’t mind cheating a bit, there is a way to write this as an extension method. And while we’re doing that, why stop at only two strings? Why not take an indefinite number? :)

public static string Cycle(this HtmlHelper html, params string[] strings) {
    var context = html.ViewContext.HttpContext;
    int index = Convert.ToInt32(context.Items["cycle_index"]);

    string returnValue = strings[index % strings.Length];

    context.Items["cycle_index"] = ++index;
    return returnValue;
}

Perhaps allowing an indefinite number of strings is overkill (who ever heard of a table with tri-color highlighting?) but I thought it was fun to do regardless. Here’s an example of usage with three different CSS styles:

    .first {background-color: #ddd;}
    .second {background-color: khaki;}
    .third {background-color: #fdd;}

<% for (int i = 0; i < 5; i++) { %>
    <tr class="<%= Html.Cycle("first", "second", "third") %>">
<% } %>

And the output…

    <tr class="first">

    <tr class="second">

    <tr class="third">

    <tr class="first">

    <tr class="second">

With this, go forth and spread tri-color highlighted tables all over the web. Or if you’re a really crazy player, go with four-color highlighting!

Technorati Tags: aspnetmvc, helpers, review, mvc


Recently, Adam Kinney came by my office to interview me for a Channel 9 episode discussing ASP.NET MVC CodePlex Preview 4.

I’ve known Adam for a long time, even before he joined Microsoft. I think we met (in person) at Tech-Ed 2003.

In any case, we talk a bit about ASP.NET MVC and Preview 4, all the while I tried very hard not to put my foot in my mouth. At the end there are some outtakes of me impersonating Scott Hanselman doing an impersonation of Sean Connery. That wasn’t to make fun of Scott, but totally out of love and respect. ;)

On Gaming

In the interview, I mentioned that I used to work at a skill gaming company called SkillJam. We had a tournament engine that allowed users to play games of skill for money. I worked on the back-end technologies such as our mobile gaming infrastructure, which was well reviewed by GameSpot.

SkillJam is no longer around as it was bought by FUN Technologies (yes, I literally worked for “Fun” back then) and subsumed into World Winner.

In any case, one of the sharpest coworkers that I worked with there went out on his own to start a new game development company called CasualCafe. This guy was the one who lived in the world of writing low-level C++ code, anti-cheat techniques, and game engines. If you are into the genre of “Casual Games” (such as Bejeweled, Solitaire, etc.) be sure to check it out.

Books on My Shelf

Via Twitter, James Avery was more interested in the books on my shelf than what I was saying (I don’t blame him). In case you were wondering, here’s a partial list:

There are other books up there, but these are the ones I’ve read cover-to-cover and can recommend.

Technorati Tags: aspnetmvc,books,channel9

tdd, code

A while back I talked about how testable code helps manage complexity. In that post, I mentioned one common rebuttal to certain design decisions made in code in order to make it more testable.

Why would I want to do XYZ just to improve testability?

Recently, I heard one variation of this comment in the comments to my post on unit test boundaries. Several people suggested that it’s fine to have unit tests access the database; after all, the code relies on data from the database, so it should be tested.

Implicit in this statement is the question, “Why would I want to abstract away the data access just to improve testability?”

Keep in mind, I never said you shouldn’t test your code’s interaction with the database. You absolutely should. I merely categorized that sort of test as a different sort of test - an integration test. You might still use your favorite unit testing framework to automate such a test, but I suggest trying to keep it in a separate test suite.
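For example, if you use NUnit, one lightweight way to keep the two kinds of tests separate is to tag the boundary-crossing tests with a category that your fast test run excludes. The fixture, test, and connection-string names here are hypothetical:

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests {
    // Tagged as an integration test so the default (fast) test run
    // can exclude it, e.g. nunit-console /exclude:Integration.
    [Test]
    [Category("Integration")]
    public void CanLoadOrdersFromDatabase() {
        // ...talks to a real database via a test connection string...
    }
}
```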

The authors of The Pragmatic Programmer: From Journeyman to Master have a great answer to this question with their comparison to integrated circuits, which have features designed specifically to enable testability.

The “Design For Test” Wikipedia entry describes the name as encompassing a range of design techniques for adding features to microelectronic hardware in order to make it testable. Examples of these techniques show up as early as the 1940s and 50s. So designing for testability is not some whiz-bangy, flavor-of-the-day methodology the crazy kids are doing.

One key benefit to these techniques is that components can be tested in relative isolation. You don’t have to place them into a product in order to test them, though at the same time, they can be tested while within the product.

So in answer to the original question, I’d ask, “Why wouldn’t we design for testability?”

I think this analogy illustrates one reason why I don’t want my unit tests talking to the database (apart from wanting the tests to run fast). Ideally, someone else down the road, new to the project, should be able to get the latest code from source control and run the unit tests immediately without having to go through the pains of setting up an environment with the correct database.

Another benefit of abstracting away the database so that your code is testable and doesn’t cross boundaries is that your code is then not so dependent on a particular database. I used to argue that there’s no need to insulate your code from the particular database that you are using. I’ve never been on a project where the customer suddenly switches from SQL Server to Oracle. That sort of drastic change very rarely happens.

But it turns out that I have been on projects where we switched from SQL Server 6.5 to 7 (and from 7 to 2000 and so on). Upgrades can be nearly as drastic as choosing a different database vendor. Having your code isolated from your choice of database provides some nice peace of mind here.

Tags: IC, Integrated Circuit, TDD, Unit Testing, mvc

UPDATE: I linked to the wrong post. I corrected the link.

During the recent Insiders summit, Wally cornered me into recording a really short video demonstrating a feature of ASP.NET MVC. I decided to sprinkle a little Ajax in my demo by showing how to use jQuery to call an action that returns a JsonResult.

Specifically, I show how to update a couple of regions in the page (two dom elements) with data pulled from the server. I then add a little sparkle to the demo by implementing the ubiquitous yellow fade when adding the content to the DOM. As you’re watching it, you’ll notice that I’m making it up on the fly based on another demo I did earlier that day.

He’s posted the video here as show #121. That’s a heckuva lot of shows, Wally!

Technorati Tags: aspnetmvc,ajax,jquery

personal, code

It’s a quiet Friday afternoon with all of our devs in training today, so I figured I’d take a breather and respond to this meme I’ve been tagged with by Simone, Keyvan, Steve and others.

How Old Were You When You Started Programming?

Have I even started really programming yet? I guess I got my first taste when I was around eight with my first computer, a TRS-80 Color Computer.


That sucker could display 9 colors, all at once, believe it or not. My programming experience back then was pretty minimal. My dad and I mostly spent hours typing in program listings from books, complete with pages of DATA lines consisting entirely of 0s and 1s. Pretty much typing by hand the equivalent of binary resources. I also wrote simple programs that would draw pictures and my dumb attempts at Zork-like text adventures.

How Did You Get Started In Programming?

Well I would write dumb little programs for my TRS-80, then Commodore 128, then Amiga. But I never wrote any programs of any significance till I took a C and C++ class in college. Even then, those programs were not “real world” programs, but simple assignments. It wasn’t till after college when I had to get a J-O-B that I learned how to really program.

And it was a rough start, writing the most knotty spaghetti ASP code in VBScript ever. It wasn’t till I read Code Complete that I realized that I wasn’t programming yet, I was barfing code.

What Was Your First Language?

English. My first programming language was BASIC for the TRS-80. My first professional language was VBScript. My first Object Oriented language was C++.

What Was The First Real Program You Wrote?

Man, I can barely remember that far back. All I remember was on my first day of my first programming job, a helpful consultant/coworker showing me the ropes. The main thing she taught me that day was when looping through a RecordSet, don’t forget the rs.MoveNext call, or else!

Around that time, I started working on a website for a company called which later became which later got bought by Yahoo. That was the first large website I worked on and the place I had my first and worst major production bug ever. It was a lot of fun because I also signed up on the site and would interact with the other members, since it was a community music site.

What Languages Have You Used Since You Started Programming?

What is this? Some sort of interview? The programming languages I’ve used professionally are: BASIC, Visual Basic, VBScript, Java, J++, Ruby, C#.

What Was Your First Programming Gig?

As I mentioned before, it was right after college I got a job at a consulting firm named Sequoia Softworks in Seal Beach, CA. We had an office right on Main Street near the beach above some antique store or something. We later changed our name to Solien. They’re still around, but their site needs some love.

If You Knew Then What You Know Now, Would You Have Started Programming?

Well in between investing large sums of money in the right stocks and getting out at the right time, yes, absolutely! I love to write code and write about code.

If There is One Thing You Learned Along the Way That You Would Tell New Developers, What Would It Be?

Learn to write and communicate well. Software development is rich in ideas and being able to communicate your ideas well will get you places. And keep an open mind. Things you’re absolutely sure about now, you might not be so sure about tomorrow.

What’s the Most Fun You’ve Ever Had … Programming?

Hacking on Open Source projects such as Subtext. Recently, working on my IronRuby and MVC prototype has been a lot of fun. I find it fun to try out new things as well as making old things better.

Technorati Tags: Personal

tdd, code

One principle to follow when writing a unit test is that a unit test should ideally not cross boundaries.


Michael Feathers takes a harder stance in saying…

A test is not a unit test if:

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can’t run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it

Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.

Speed isn’t the only benefit of following these rules. In order to make sure your tests don’t reach across boundaries, you have to make sure the unit under test is easily decoupled from code across its boundary, which provides benefits for the code being tested.

Suppose you have a function that pulls a list of coordinates from the database and calculates the best-fit line for those coordinates. Your unit test for this method should ideally not make an actual database call, as that is reaching across a boundary and coupling your method to a specific data access layer.

Reaching across a boundary is not the only sin of this method. Data access is an orthogonal concern to calculating the best-fit line of a series of points. In The Pragmatic Programmer, Andrew Hunt and Dave Thomas tout orthogonality as a key trait of well-written code. In an interview, Andy describes orthogonality like so:

The basic idea of orthogonality is that things that are not related conceptually should not be related in the system. Parts of the architecture that really have nothing to do with the other, such as the database and the UI, should not need to be changed together. A change to one should not cause a change to the other. Unfortunately, we’ve seen systems throughout our careers where that’s not the case.

Ideally, you would refactor the method so that the data the method needs is provided to it via some other means (another method passing the data via arguments, dependency injection, whatever). That other means, whatever it is, can perform the necessary data access: that’s not your concern at this moment. You aren’t testing that other means (right now at least, you might later), you’re focused on testing this unit.
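To make that concrete, here’s one possible shape of the refactored method (the class and method names are mine): the least-squares calculation takes its points as arguments and knows nothing about where they came from.

```csharp
using System;

public static class LineFitter {
    // Simple least-squares fit: y = slope * x + intercept.
    // Takes the coordinates as input rather than querying a database,
    // so it can be unit tested without crossing any boundaries.
    public static void BestFitLine(double[] xs, double[] ys,
                                   out double slope, out double intercept) {
        if (xs.Length != ys.Length || xs.Length < 2)
            throw new ArgumentException("Need at least two matching points.");

        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < xs.Length; i++) {
            sumX += xs[i];
            sumY += ys[i];
            sumXY += xs[i] * ys[i];
            sumXX += xs[i] * xs[i];
        }

        int n = xs.Length;
        slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        intercept = (sumY - slope * sumX) / n;
    }
}
```

A unit test can now feed it points directly, while the code that loads coordinates from the database gets its own integration test.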

This isolation enforced by unit tests can be challenging, as it’s easy to get distracted by these other orthogonal concerns. For example, if this method doesn’t do data access, which one does? However, having the discipline to focus on the unit being tested can help shape your code so that it follows the single responsibility principle (SRP for short). If your test needs to access an external resource, it might just be violating SRP.

This provides several key benefits.

  • Your function is no longer tightly coupled to the current system. You could easily move it to another system that happened to have a different data access layer.
  • Your unit test of this function no longer needs to access the database, helping to keep execution of your unit tests extremely fast.
  • It keeps your unit test from being too fragile. Changes to the data access layer will not affect this function, and therefore the unit test of this function.

All this decoupling will provide long term benefits for the maintainability of your code.

Technorati Tags: TDD,Unit Test,Orthogonality,Single Responsibility Principle