personal comments edit

LazyCoder (aka Scott Koon) is organizing a little drinky drink this Wednesday around 6:00 PM-ish at The Three Lions Pub. This is just an informal gathering, not a huge production like the Hanselman Geek Dinner, which requires eating at a mall food court because some three hundred plus geeks show up. (Did you know his last geek dinner was covered by the Seattle Times online?)

No, this will be a smaller intimate affair. Here’s your chance to get me sloshed by buying me beers in order to slip your pet feature into ASP.NET MVC. If it’s a really crazy feature idea, you may have to move up to Scotch. ;)

Seriously though, if you’re looking to blow off some steam on a Wed night (aren’t we all?) swing by and throw back one or a few with us.

Tags: Geek Dinner

code, tdd comments edit

UPDATE: I should have entitled this “Comparing Rhino Mocks and MoQ for State Based Testing”. I tend to prefer state-based testing over interaction based testing except in the few cases where it is absolutely necessary or makes the test much cleaner. When it is necessary, it is nice to have interaction based testing available. So my comparison is incomplete as I didn’t compare interaction based testing between the two frameworks.

For the longest time I’ve been a big fan of Rhino Mocks and have often written about it glowingly. When Moq came on the scene, I remained blissfully ignorant of it because I thought the lambda syntax to be a bit gimmicky. I figured if using lambdas was all it had to offer, I wasn’t interested.

Fortunately for me, several people in my twitter-circle recently heaped praise on Moq. Always willing to be proven wrong, I decided to check out what all the fuss was about. It turns out, the use of lambdas is not the best part of Moq. No, it’s the clean discoverable API design and lack of the record/playback model that really sets it apart.

To show you what I mean, here are two unit tests for a really simple example, one using Rhino Mocks and one using Moq. The tests use the mock frameworks to fake out an interface with a single method.

public void RhinoMocksDemoTest()
{
  MockRepository mocks = new MockRepository();
  var mock = mocks.DynamicMock<ISomethingUseful>();
  SetupResult.For(mock.CalculateSomething(123)).Return(1);
  mocks.ReplayAll();

  var myClass = new MyClass(mock);
  Assert.AreEqual(2, myClass.MethodUnderTest(123));
}

public void MoqDemoTest()
{
  var mock = new Mock<ISomethingUseful>();
  mock.Expect(u => u.CalculateSomething(123)).Returns(1);

  var myClass = new MyClass(mock.Object);
  Assert.AreEqual(2, myClass.MethodUnderTest(123));
}

Notice that the test using Moq only requires four lines of code whereas the test using Rhino Mocks requires six. Lines of code is not the measure of an API, of course, but it is telling in this case. The extra code in the Rhino Mocks test comes from creating the MockRepository instance and calling ReplayAll.

The other aspect of Moq I like is that the expectations are set on the mock itself. Even after all this time, I still get confused when setting up results/expectations using Rhino Mocks. First of all, you have to remember to use the correct static method, either SetupResult or Expect. Secondly, I always get confused between SetupResult.On and SetupResult.For. I feel like the Moq approach is a bit more intuitive and discoverable.

The one minor thing I needed to get used to with Moq is that I kept trying to pass the mock itself rather than mock.Object to the method/ctor that needed it. With Rhino Mocks, when you create the mock, you get a class of the actual type back, not a wrapper. However, I see the benefits with having the wrapper in Moq’s approach and now like it very much.

My only other complaint with Moq is the name. It’s hard to talk about Moq without always saying, “Moq with a Q”. I’d prefer MonQ to MoQ. Anyways, if that’s my only complaint, then I’m a happy camper! You can learn more about MoQ and download it from its Google Code Page.

Nice work Kzu!


The source code for MyClass and the interface for ISomethingUseful are below in case you want to recreate my tests.

public interface ISomethingUseful 
{
  int CalculateSomething(int x);
}

public class MyClass
{
  public MyClass(ISomethingUseful useful)
  {
    this.useful = useful;
  }

  ISomethingUseful useful;

  public int MethodUnderTest(int x)
  {
    //Yah, it's dumb.
    return 1 + useful.CalculateSomething(x);
  }
}

Give it a whirl.

Tags: TDD , MoQ , Rhino Mocks

personal comments edit

My family and I recently moved into our new home after a two month stay in temporary housing. One of the perks of moving is when your stuff is delivered from storage, it feels like Christmas again. “Oooh! Look at all the boxes I get to unwrap. Hey! I have a Stereo just like this one!”.

I have a tendency to get distracted by the things I’m unwrapping. For example, I found a few of my old college math textbooks. I started thumbing through the Complex Analysis, Abstract Algebra, and Number Theory books and they seemed like total gibberish to me.

Artists and writers often get all the credit for creativity. That’s why people call them the creative types. Meanwhile, mathematicians are often unfairly portrayed as humorless, pencil-pushing, uncreative types. This is so not true. Many mathematicians use computers now and love telling jokes like “Can you prove that the integral of e to the x is equal to f sub u of n?” Perhaps if I write out the equation the joke becomes more apparent.

∫e^x = f_u(n)

Hey, I didn’t say mathematicians had a *good* sense of humor. The fact is that this idea that mathematicians lack creativity comes from those who never made it to higher math. Once you hit abstract algebra, number theory, non-Euclidean geometry, etc… you start to wonder if perhaps the giants in the field were smoking a little something something to come up with this stuff.

The creativity involved in some of the proofs (or even the questions the proofs prove) just boggles the mind. I imagine that the proof of Fermat’s Last Theorem (which reportedly took 7 straight years of work to crack) is something that pretty much nobody in the world understands. Not unlike an abstract painting in a modern art museum. But I digress…

Number theory was my favorite class at the time so I thought I’d sit down, crack open the text, and see if I could understand one problem and even make progress on it.

No Dice.

I understood this stuff once, when I was in college, but now, when I’m older and wiser, I am slower to pick it up, even though I understood it once already. So I started thinking: what is different between the now me and the college me? Then it hit me…

Pizza Consumption!


Think about it for a sec. They often say programmers are machines that turn coffee into code. I wonder if college math students are machines that turn pizza into solved problem sets? And what about the mythical lore of those early dot-com boom startups which started off as a small group of recent college students, Pepsi, and a bunch of pizzas and ended up selling for millions? Pizza was the common denominator.

I think I’m on to something here. Perhaps Pizza is brain food. I wonder if it’s the Mozzarella or the Pepperoni. Of course, the problem for me now is that college was a long time ago. While the question of whether Pizza is brain food is mere idle speculation, it is a well established fact that Pizza is definitely gut food. Food for thought.

Technorati Tags: Pizza,Humor,College,Math

mvc, code comments edit

Whew! I’ve held off writing about MVC until I could write a non-MVC post in response to some constructive criticism (It’s not just Sean, Jeff mentioned something to me as well). Now that I’ve posted that, perhaps I’ve bought myself a few MVC related posts in a row before the goodwill runs dry and I have to write something decidedly not MVC related again. ;)

As ScottGu recently posted, the ASP.NET MVC source code is now available via CodePlex. A move like this isn’t as simple as flipping a switch and *boom* it happens. No, it takes a lot of effort behind the scenes. First there is all the planning involved, and Bertrand Le Roy and my boss Simon played a big part in that.

Along with planning is the execution of the plan, which requires coordination among different groups such as the devs, PMs, QA, and the legal team. For that effort, we have our newest PM Scott Galloway to thank. I helped a little bit with the planning and writing the extremely short readme (I didn’t know what to say) and roadmap. One part of this experience that went surprisingly well was working with our legal department. I was expecting a battle, but the person we worked with just got it, really understood what we were trying to do, and was easy to work with.

With that said, I’ve seen a lot of questions about this so I thought I would answer a few here.

Is this the live source repository?

No, the MVC dev team is not committing directly into the CodePlex source code repository for many reasons. One practical reason is that we are trying to reduce interruptions to our progress as much as possible. Changing source code repositories midstream is a big disruption. For now, we’ll periodically ship code to CodePlex when we feel we have something worth putting out there.

Where is the source for Routing?

As I mentioned before, routing is not actually a feature of MVC which is why it is not included. It will be part of the .NET Framework and thus its source will eventually be available much like the rest of the .NET Framework source. It’d be nice to include it in CodePlex, but as I like to say, baby steps.

Where are the unit tests?

Waitaminute! You mean they’re not there?! I better have a talk with Scott. I kid. I kid.  We plan to put the unit tests out there, but the current tests have some dependencies on internal tools which we don’t want to distribute. We’re hoping to rewrite these tests using something we feel comfortable distributing.

When’s the next update to CodePlex?

As I mentioned, we’ll update the source when we have something to show. Hopefully pretty often and soon. We’ll see how it goes.

As a team, we’re pretty excited about this. I wondered if the devs would feel a bit antsy about this level of transparency. Sure, anyone can see the source code for the larger .NET Framework, but that code has already shipped. This is all early work in progress. Can you imagine at your workplace if your boss told you to publish all your work in progress for all the world to critique (if you’re a full-time OSS developer, don’t answer that)? ;) I’m not sure I’d want anyone to see some of my early code using .NET.

Fortunately, the devs on my team buy into the larger benefit of this transparency. It leads to a closer collaboration with our customers and creates a tighter feedback cycle. I am confident it will pay off in the end product. Of course they do have their limits when it comes to transparency. I tried suggesting we take it one step further and publish our credit card numbers in there, but that was a no go.

Technorati Tags: aspnetmvc,codeplex

mvc, code comments edit

UPDATE: I’ve added a NuGet package named “routedebugger” to the NuGet feed, which will make it much easier to install.

UPDATE 2: In newer versions of the NuGet package you don’t need to add code to global.asax as described below. An appSetting <add key="RouteDebugger:Enabled" value="true" /> in web.config suffices.
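For clarity, here is where that setting lives (a minimal web.config sketch; only the appSettings entry matters):

```xml
<configuration>
  <appSettings>
    <!-- Enables the route debugger page; set to false (or remove) for production. -->
    <add key="RouteDebugger:Enabled" value="true" />
  </appSettings>
</configuration>
```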

In Scott Hanselman’s wonderful talk at Mix, he demonstrated a simple little route tester I quickly put together.

Route Debugger

This utility displays the route data pulled from the current request URL in the address bar. So you can type various URLs into the address bar to see which route matches. At the bottom, it shows a list of all defined routes in your application, which allows you to see which of your routes would match the current URL.

The reason this is useful is that sometimes you expect one route to match, but another one higher up the stack matches instead. This tool will show you when that is happening. However, it doesn’t provide any information about why it is happening. Hopefully we can do more to help with that situation in the future.

To use this, simply download the following zip file and place the assembly inside it into your bin folder. Then add one line to the Application_Start method in your Global.asax.cs file.

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);

    // The one line to add:
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

This will update the route handler (IRouteHandler) of all your routes to use a DebugRouteHandler instead of whichever route handler you had previously specified for the route. It also adds a catch-all route to the end to make sure that the debugger always matches any request for the application.
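Conceptually, that rewiring looks something like the following sketch (my own illustration of the idea, not the actual RouteDebugger source):

```csharp
using System.Web.Routing;

public static class RouteDebuggerSketch
{
    public static void RewriteRoutesForTesting(RouteCollection routes)
    {
        using (routes.GetWriteLock())
        {
            // Point every existing route at the debug handler so a match
            // renders diagnostic output instead of the real handler.
            foreach (RouteBase routeBase in routes)
            {
                Route route = routeBase as Route;
                if (route != null)
                    route.RouteHandler = new DebugRouteHandler();
            }

            // Append a catch-all so URLs that match no defined route
            // still display the debug page.
            routes.Add(new Route("{*catchall}", new DebugRouteHandler()));
        }
    }
}
```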

I’m also making available the full source (using the word full makes it sound like there’s a lot, but there’s not all that much) and a demo app that makes use of this route tester. Let me know if this ends up being useful or not for you.

Technorati Tags: aspnetmvc,ASP.NET,routing

code, mvc comments edit

UPDATE: I updated the sample to work with the final version of ASP.NET Routing included with ASP.NET 3.5 SP1. This sample is now being hosted on CodePlex.

Download the demo here

In my last post I described how Routing no longer has any dependency on MVC. The natural question I’ve been asked upon hearing that is “Can I use it with Web Forms?” to which I answer “You sure can, but very carefully.”

Being on the inside, I’ve had a working example of this for a while now based on early access to the bits. Even so, Chris Cavanagh impressively beats me to the punch in blogging his own implementation of routing for Web Forms. Nice!

One of the obvious uses for the new routing mechanism is as a “clean” alternative to URL rewriting (and possibly custom VirtualPathProviders for simple scenarios) for traditional / postback-based ASP.NET sites.  After a little experimentation I found some minimal steps that work pretty well:

  • Create a custom IRouteHandler that instantiates your pages
  • Register new Routes associated with your IRouteHandler
  • That’s it!

He took advantage of the extensibility model by implementing the IRouteHandler interface with his own WebFormRouteHandler class (not surprisingly my implementation uses the same name) ;)

There is one subtle potential security issue to be aware of when using routing with URL Authorization. Let me give an example.

Suppose you have a website and you wish to block unauthenticated access to the admin folder. With a standard site, one way to do so would be to drop the following web.config file in the admin folder…

<?xml version="1.0"?>
<configuration>
  <system.web>
    <authorization>
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>


Ok, I am a bit draconian. I decided to block access to the admin directory for all users. Attempt to navigate to the admin directory and you get an access denied error. However, suppose you use a naive implementation of WebFormRouteHandler to map the URL fizzbucket to the admin directory like so…

RouteTable.Routes.Add(new Route("fizzbucket"
  , new WebFormRouteHandler("~/admin/secretpage.aspx")));

Now, a request for the URL /fizzbucket will display secretpage.aspx in the admin directory. This might be what you want all along. Then again, it might not be.

In general, I believe that users of routing with Web Forms will want to secure the physical directory structure in which the Web Forms are placed using URL authorization. One way to do this is to call UrlAuthorizationModule.CheckUrlAccessForPrincipal on the Web Form’s actual virtual path.

This is one key difference between routing and URL rewriting: routing doesn’t actually rewrite the URL. Another key difference is that routing also provides a means to generate URLs and is thus bidirectional.
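To make the bidirectional point concrete, here is a small sketch of URL generation using the standard routing API (the helper class and names are mine, purely for illustration):

```csharp
using System.Web.Routing;

public static class UrlGenerationSketch
{
    // The reverse of matching: instead of parsing an incoming URL,
    // we hand the route table a set of route values and ask it to
    // build the corresponding URL.
    public static string GenerateUrl(RequestContext requestContext,
        string routeName, RouteValueDictionary values)
    {
        VirtualPathData data = RouteTable.Routes.GetVirtualPath(
            requestContext, routeName, values);
        return data == null ? null : data.VirtualPath;
    }
}
```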

The following code is my implementation of WebFormRouteHandler, which addresses this security issue. This class has a boolean property that allows you to skip URL authorization for the physical path if you’d like (in keeping with the principle of secure by default, this property defaults to true, which means it will always apply URL authorization).

public class WebFormRouteHandler : IRouteHandler
{
  public WebFormRouteHandler(string virtualPath) : this(virtualPath, true)
  {
  }

  public WebFormRouteHandler(string virtualPath, bool checkPhysicalUrlAccess)
  {
    this.VirtualPath = virtualPath;
    this.CheckPhysicalUrlAccess = checkPhysicalUrlAccess;
  }

  public string VirtualPath { get; private set; }

  public bool CheckPhysicalUrlAccess { get; set; }

  public IHttpHandler GetHttpHandler(RequestContext requestContext)
  {
    if (this.CheckPhysicalUrlAccess 
      && !UrlAuthorizationModule.CheckUrlAccessForPrincipal(this.VirtualPath
              , requestContext.HttpContext.User
              , requestContext.HttpContext.Request.HttpMethod))
    {
      throw new SecurityException();
    }

    var page = BuildManager.CreateInstanceFromVirtualPath(this.VirtualPath
        , typeof(Page)) as IHttpHandler;

    if (page != null)
    {
      var routablePage = page as IRoutablePage;
      if (routablePage != null)
        routablePage.RequestContext = requestContext;
    }
    return page;
  }
}
You’ll notice the code here checks to see if the page implements an IRoutablePage interface. If your Web Form page implements this interface, the WebFormRouteHandler class can pass it the RequestContext. In the MVC world, you generally get the RequestContext via the ControllerContext property of Controller, which itself inherits from RequestContext.

The RequestContext is important for calling into API methods for URL generation. Along with the IRoutablePage, I provide a RoutablePage abstract base class that inherits from Page. The code for this interface and the abstract base class that implements it is in the download at the end of this post.

One other thing I did for fun was to play around with fluent interfaces and extension methods for defining simple routes for Web Forms. Since routes with Web Forms tend to be simple, I thought this syntax would work nicely.

public static void RegisterRoutes(RouteCollection routes)
{
  //first one is a named route.
  routes.Map("General", "haha/(unknown).aspx").To("~/forms/haha.aspx");
}

The general idea is that the route URL on the left maps to the Web Form virtual path on the right.
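The extension method and builder aren’t shown above, but a minimal sketch of how Map/To could be implemented might look like this (the names and details are my guesses, not necessarily the WebFormRouting source):

```csharp
using System.Web.Routing;

public static class RouteCollectionExtensions
{
    // Map starts a named route definition and returns a builder
    // so that To can complete it fluently.
    public static WebFormRouteBuilder Map(this RouteCollection routes,
        string name, string url)
    {
        return new WebFormRouteBuilder(routes, name, url);
    }
}

public class WebFormRouteBuilder
{
    private readonly RouteCollection routes;
    private readonly string name;
    private readonly string url;

    public WebFormRouteBuilder(RouteCollection routes, string name, string url)
    {
        this.routes = routes;
        this.name = name;
        this.url = url;
    }

    // To finishes the mapping by attaching a WebFormRouteHandler
    // pointed at the target Web Form's virtual path.
    public void To(string virtualPath)
    {
        routes.Add(name, new Route(url, new WebFormRouteHandler(virtualPath)));
    }
}
```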

I’ve packaged all this up into a solution you can download and try out. The solution contains three projects:

  • WebFormRouting - The class library with the WebFormRouteHandler and helpers…
  • WebFormRoutingDemoWebApp - A website that demonstrates how to use WebFormRouting and also shows off url generation.
  • WebFormRoutingTests - a few non-comprehensive unit tests of the WebFormRouting library.

WARNING: This is prototype code I put together for educational purposes. Use it at your own risk. It is by no means comprehensive, but is a useful start to understanding how to use routing with Web Forms should you wish. Download the demo here.

Technorati Tags: aspnetmvc,Routing,ASP.NET

code, mvc comments edit

At this year’s Mix conference, we announced the availability of the second preview for ASP.NET MVC which you can download from here. Videos highlighting MVC are also available.

Now that I am back from Mix and have time to breathe, I thought I’d share a few (non-exhaustive) highlights of this release as well as my thoughts on the future.

New Assemblies and Routing

Much of the effort and focus of this release was put into routing. If you’ve installed the release, you’ll notice that MVC has been factored into three assemblies:

  • System.Web.Mvc
  • System.Web.Routing
  • System.Web.Abstractions

The key takeaway here is that MVC depends on Routing which depends on Abstractions.

MVC => Routing => Abstractions

Routing is being used by another team here at Microsoft, so we worked on making it a feature independent of MVC. MVC relies heavily on routing, but routing doesn’t have any knowledge of MVC. I’ll write a follow-up post that talks about the implications of that and how you might use routing in a non-MVC context.

Because of the other dependencies on Routing, we spent a lot of time trying to make sure we got the API and code correct and making sure the quality level of routing meets an extremely high bar. Unfortunately, this investment in routing did mean that we didn’t implement everything we wanted to implement for MVC core, but hey, it’s a preview right? ;)

CodePlex Builds

At Mix this year, Scott Hanselman gave a great talk (IMHO) on MVC. One thing he announced during that talk is the vehicle by which we will be making the MVC source code available. Many of you might recall ScottGu’s recent roadmap for MVC in which he mentioned we would be making the source available. At Mix, Scottha announced that we would be deploying our source to CodePlex soon.

Not only that, we hope to push source from our source control to a CodePlex project’s source control server on a semi-regular basis. These builds would only include source (in a buildable form) and would not include the usual hoopla associated with a full Preview or Beta release.

How regular a schedule we keep to remains to be seen, but Scott mentioned in his talk around every four to six weeks. Secretly, between you and me, I’d love to push even more regularly. Just keep in mind that this is an experiment in transparency here at Microsoft, so we’ll start slow (baby steps!) and see how it progresses. In spirit, this would be the equivalent to the daily builds that are common in open source projects, just not daily.

Unit Test Framework Integration

In a recent post, I highlighted some of the work we’re doing around integrating third party unit testing frameworks.

Unit Testing

I’ve been in contact with various unit testing framework developers about integrating their frameworks with the MVC project template. I’m happy to see that MbUnit released updated installers that will integrate MbUnit into this dropdown. Hopefully the others will follow suit soon.

One interesting thing I should point out is that this is a great way to integrate your own totally tricked-out unit testing project template, complete with your favorite mock framework and your own private assembly full of useful unit test helper classes and methods.

If you’re interested in building your own customized test framework project which will show up in this dropdown, Joe Cartano of the Web Tools team posted an updated streamlined walkthrough on how to do so using NUnit and Rhino Mocks as an example.

Upcoming improvements

One area in this preview we need to improve is the testability of the framework. The entire MVC dev team had a pause in which we performed some app building and uncovered some of the same problems being reported in various forums. Problems such as testing controller actions in which a call to RedirectToAction is made. Other problems include the fact that you need to mock ControllerContext even if you’re not using it within the action. There are many others that have been reported by various people in the community and we are listening. We ourselves have encountered many of them and definitely want to address them.

Experiencing the pain ourselves is very important to understanding how we should fix these issues. One valuable lesson I learned is that a framework that is testable does not mean that applications built with that framework are testable. We definitely have to keep that in mind as we move forward.

To that end, we’ll be applying some suggested improvements in upcoming releases that address these problems. One particular refactoring I’m excited about is ways to make the controller class itself more lightweight. Some of us are still recovering from Mix so as we move forward, I hope to provide even more details on specifically what we hope to do rather than this hand-waving approach. Bear with me.

During this next phase, I personally hope to have more time to continuously do app building to make sure these sorts of testing problems don’t crop up again. For the ones that are out there, I take responsibility and apologize. I am as big a fan of TDD as anyone and I hate making life difficult for my brethren. ;)


When do we RTM?

This is probably the most asked question I get and right now, I don’t have a good answer, in part because I’m still trying to sort through massive amounts of feedback regarding this question. Some questions I’ve been asking various people revolve around the very notion of RTM for a project like this:

  • How important is RTM to you?
  • Right now, the license allows you to go live and we’ll provide the source. Is that good enough for your needs?
  • Is it really RTM that you want, or is it the assurance that the framework won’t churn so much?
  • What if we told you what areas are stable and which areas will undergo churn, would you still need the RTM label?
  • Is there a question I should be asking regarding this that I’m not? ;)

I do hope to have a more concrete answer soon based on this feedback. In general, what I have been hearing from people thus far is they would like to see an RTM release sooner rather than later, even if it is not feature rich. I look forward to hearing from more people on this.

Closing Thoughts

In general, we have received an enormous amount of interest and feedback on this project. Please do keep it coming, as the constructive feedback is really shaping the project. Tell us what you like as well as what is causing you problems. We might not respond to every single thing reported as quickly as we would like to, but we are involved in the forums and I am still trying to work through the massive list of emails accrued during Mix.

Technorati Tags: ASP.NET,aspnetmvc,software

personal comments edit

You don’t so much return from Las Vegas as you recover from Las Vegas.

Right now, I am recovering from my Las Vegas trip. Recovering from Vegas Nose caused by the extremely dry air and massive second-hand smoke inhalation. Recovering from the sensory onslaught they call casinos. Recovering from the fake kitschiness and manufactured excitement as people sit like zombies feeding machines their life savings.

Yet despite all that, I still love the place. Maybe because, despite the bad things, Vegas really is the Disneyland for adults, with that slight bad streak inside that yearns to get out once in a while to indulge in a vice or three.

I especially love the Mix conference there. As Jeff Atwood writes (and I agree)…

MIX is by far my favorite Microsoft conference after attending the ’06 and ’07 iterations…

What I love about Mix is that it …

  • is relatively small and intimate, at around 2,000 attendees.
  • seamlessly merges software engineering and design.
  • includes a lot of non-Microsoft folks, even those that are traditionally hostile to Microsoft, so there’s plenty of perspective.

Some highlights from Mix include seeing old friends again such as Jon, Steve, and Miguel. Also enjoyed having time to hang with my coworkers in a non work setting after having imbibed much alcohol. And of course meeting a lot of people I know through the intarweb such as Kevin Dente, Steven Smith, and Dave Laribee among others I’m forgetting at the moment.

I also played Poker for the first time at a casino and I had a blast, leaving with some winnings despite losing four close showdowns, a couple of them in which my pocket Kings were beaten by pocket Aces. Who’d a thunk it?!

The past couple of weeks have been a rough time for me and my family. I missed a few days of work due to a bad fever, and my wife then suffered through the same fever the following week. Fortunately Cody hasn’t seemed to have caught the fever.


In a week, we will finally move into our new home. While our temporary housing has been nice, we’ve been here a long while and it’s just hard to feel settled when we are not in our own home. Once we move into our house and make it our own, I think we’ll finally feel settled into this place and find our rhythm here. Who knows, maybe I’ll have more time for my extracurricular activities I used to do.

Speaking of extracurricular, I have joined two soccer leagues now, the GSSL (Greater Seattle Soccer League) and the MSSL (Microsoft Soccer League). Unfortunately, my foot is injured due to a hard tackle so I didn’t have a good showing in my first MSSL game today.

In the past month, I’ve given several talks, some that went well and one that went…how should I put it…very badly. Hopefully I will be asked to speak again. At least my TechReady (an internal talk) went well with 240 attendees, but probably because Scott Hanselman carried the both of us. ;) That boy can talk!

If you’ve read this far expecting technical content, my apologies, it’s a Saturday. We just had an updated preview of MVC released and I will be writing about that in some upcoming posts. In the meanwhile, the sun just came up outside for once and my mood is lifted after a dark few weeks. Things are starting to look up for me in this town. Hope things are well with you as well.

Technorati Tags: Vegas,Mix,Personal

comments edit

One interesting response to my series on versioning of interfaces and abstract base classes is the suggestion that we should go ahead and break their code from version to version. They’re fine with requiring a recompile when upgrading.

In terms of recompiling, it’s not necessarily recompiling your code that is the worry. It’s the third-party libraries you’re using that pose the problem. Buying a proper third-party library that meets a need your company has can be a huge cost-saving measure in the long run, especially for libraries that perform non-trivial calculations/tasks/etc… that are important to your application. Not being able to use them in the event of an upgrade could be problematic.

Let’s walk through a scenario. You’re using a third-party language translation class library which has a class that implements an interface in the framework. In the next update of the framework, a new property is added to the interface. Along with that property, some new classes are added to the Framework that would give your application a competitive edge if it could use those classes.

So you try to recompile your app, fix any breaks, and everything seems fine. But when you test your app, it breaks because the third-party library does not implement the new interface member. The third-party library is broken by this change.

Now you’re in a bind. You could try to write your own language translation library, but that’s a huge and difficult task and your company is not in the business of writing language translators. You could simply wait for the third party to update their library, but in the meanwhile, your competitors are passing you by.

So in this case, the customer has no choice but to wait. It is my hunch that this is the sort of bind we’d like to limit as much as possible. This is not to say we never break, as we certainly do. But we generally limit breaks to the cases in which there is a security issue.

Again, as I’ve said before, the constraints in building a framework, any framework, are not the same constraints of building an app. Especially when the framework has a large number of users. When Microsoft ships an interface, it has to expect that there will be many many customers out there implementing that interface.

This is why there tends to be an emphasis on Abstract Base Classes as I mentioned before within the .NET Framework. Fortunately, when the ABC keeps most, if not all, methods abstract or virtual, then testability is not affected. It’s quite trivial to mock or fake an abstract base class written in such a manner as you would an interface.
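As a quick illustration of that point, here is a hand-rolled fake of a hypothetical ABC in the HttpContextBase style (the types here are mine, invented for the example):

```csharp
using System;

// A hypothetical ABC in the HttpContextBase style: every member is virtual,
// so a test double only overrides what it needs.
public abstract class ContextBase
{
    public virtual string RequestPath
    {
        get { throw new NotImplementedException(); }
    }
}

// A trivial hand-rolled fake for unit tests.
public class FakeContext : ContextBase
{
    private readonly string path;
    public FakeContext(string path) { this.path = path; }
    public override string RequestPath { get { return path; } }
}

// The code under test depends only on the abstraction.
public static class PathInspector
{
    public static bool IsAdminRequest(ContextBase context)
    {
        return context.RequestPath.StartsWith("/admin",
            StringComparison.OrdinalIgnoreCase);
    }
}
```

A test can then simply call `PathInspector.IsAdminRequest(new FakeContext("/admin/users"))` with no mock framework involved at all.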

Personally, I tend to like interfaces in my own code. I like the way I can quickly describe another concern via an interface and then mock out the interface so I can focus on the code I am currently concerned about. A common example is persistence. Sometimes I want to write code that does something with some data, but I don’t want to have to think about how the data is retrieved or persisted. So I will simply write up an interface that describes what the interaction with the persistence layer will look like and then mock that interface. This allows me to focus on the real implementation of persistence later so I can focus on the code that does stuff with that data now.
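For example, a sketch of what I mean (the interface and names here are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// A narrow persistence interface: just enough surface for the code under test.
public interface IWidgetRepository
{
    IEnumerable<string> GetWidgetNames();
    void Save(string name);
}

// The code I actually care about right now: logic over the data,
// with retrieval and persistence deferred behind the interface.
public class WidgetService
{
    private readonly IWidgetRepository repository;

    public WidgetService(IWidgetRepository repository)
    {
        this.repository = repository;
    }

    public int CountWidgets()
    {
        return repository.GetWidgetNames().Count();
    }
}
```

In a test, IWidgetRepository gets mocked (or faked with an in-memory list), and the real database-backed implementation can be written later.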

Of course I have the luxury of breaking my interfaces whenever I want, as my customers tend to be few and forgiving. ;)

Technorati Tags: Framework,Design,Software,Interfaces

code comments edit

Eilon Lipton recently wrote a bit about context objects in ASP.NET MVC and in an “Oh by the way” moment, tossed out the fact that we changed the IHttpContext interface to the HttpContextBase abstract base class (ABC for short).

Not long after, this spurred debate among the Twitterati. Why did you choose an Abstract Base Class in this case? The full detailed answer would probably break my keyboard in length, so I thought I would try to address it in a series of posts.

In the end, I hope to convince the critics that the real point of contention is about maintaining backwards compatibility, not about choosing an abstract base class in this one instance.

Our Constraints

All engineering problems are about optimizing for constraints. As I’ve written before, there is no perfect design, partly because we’re all optimizing for different constraints. The constraints you have in your job are probably different than the constraints that I have in my job.

For better or worse, these are the constraints my team is dealing with in the long run. You may disagree with these constraints, so be it. We can have that discussion later. I only ask that for the time being, you evaluate this discussion in light of these constraints. In logical terms, these are the premises on which my argument rests.

  • Avoid Breaking Changes at all costs
  • Allow for future changes

Specifically I mean breaking changes in our public API once we RTM. We can make breaking changes while we’re in the CTP/Beta phase.

You Can’t Change An Interface

The first problem we run into is that you cannot change an interface.

Now some might state, “Of course you can change an interface. Watch me! Changing an interface only means some clients will break, but I still changed it.”

The misunderstanding here is that after you ship an assembly with an interface, any changes to that interface result in a new interface. Eric Lippert points this out in this old Joel On Software forum thread:

The key thing to understand regarding “changing interfaces” is that an interface is a _type_.  A type is logically bound to an assembly, and an assembly can have a strong name.

This means that if you correctly version and strong-name your assemblies, there is no “you can’t change this interface” problem.  An interface updated in a new version of an assembly is a _different_ interface from the old one.

(This is of course yet another good reason to get in the habit of strong-naming assemblies.)

Thus trying to make even one tiny change to an interface violates our first constraint. It is a breaking change. You can however add a new virtual method to an abstract base class without breaking existing clients of the class. Hey, it’s not pretty, but it works.
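A brief sketch of that versioning story (the types are hypothetical, not from any shipped framework):

```csharp
// Version 1 of a hypothetical framework type, shipped as an ABC.
public abstract class ContextBase
{
    public abstract string AppName { get; }
}

// A client compiled against version 1.
public class MyContext : ContextBase
{
    public override string AppName { get { return "MyApp"; } }
}

// Version 2 could add a virtual member with a default body:
//
//     public virtual bool IsSecure { get { return false; } }
//
// MyContext still compiles and runs unchanged. Had ContextBase been an
// interface, adding IsSecure would have broken every implementation.
```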

Why Not Use An Interface And an Abstract Base Class?

Why not have a corresponding interface for every abstract base class? This assumes that the purpose of the ABC is simply to provide the default implementation of an interface. This isn’t always the case. Sometimes we may want to use an ABC in the same way we use an interface (all methods are abstract…revised versions of the class may add virtual methods which throw a NotImplementedException).

The reason having a corresponding interface doesn’t necessarily buy us anything in terms of versioning is that you can’t expose the interface. Let me explain with a totally contrived example.

Suppose you have an abstract base class we’ll randomly call HttpContextBase. Let’s also suppose that HttpContextBase implements an IHttpContext interface. Now we want to expose an instance of HttpContextBase via a property of another class, say RequestContext. The question is, what is the type of that property?

Is it…

public IHttpContext HttpContext {get; set;}

? Or is it…

public HttpContextBase HttpContext {get; set;}

If you choose the first option, then we’re back to square one with the versioning issue. If you choose the second option, we don’t gain much by having the interface.

What Is This Versioning Issue You Speak Of?

The versioning issue I speak of relates to clients of the property. Suppose we wish to add a new method or property to IHttpContext. We’ve effectively created a new interface and now all clients need to recompile. Not only that, but any components you might be using that refer to IHttpContext need to be recompiled. This can get ugly.

You could decide to add the new method to the ABC and not change the interface. What this means is that clients who want to call the new method need to perform a type check every time, since the method isn’t on the interface.

public void SomeMethod(IHttpContext context)
{
  HttpContextBase contextAbs = context as HttpContextBase;
  if (contextAbs != null)
  {
    contextAbs.NewMethod(); //the method added to the ABC
  }
  //else fall back to the old behavior
}

In the second case with the ABC, you can add the method as a virtual method and throw NotImplementedException. You don’t get compile time checking with this approach when implementing this ABC, but hey, thems the breaks. Remember, no perfect design.

Adding this method doesn’t break older clients. Newer clients who might need to call this method can recompile and now call this new method if they wish. This is where we get the versioning benefits.

So Why Not Keep Interfaces Small?

It’s not being small that makes an interface resilient to change. What you really want is an interface that is small and cohesive, with very little reason to change. This is probably the best strategy with interfaces and versioning, but even this can run into problems. I’ll address this in more detail in an upcoming post, but for now will provide just one brief argument.

Many times, you want to divide a wide API surface into a group of distinct smaller interfaces. The problem arises when a method needs functionality of several of those interfaces. Now, a change in any one of those interfaces would break the client. In fact, all you’ve really done is spread out one large interface with many reasons to change into many interfaces each with few reasons to change. Overall, it adds up to the same thing in terms of risk of change.
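Here's a small sketch of the problem (all names are hypothetical):

```csharp
// Two small, cohesive interfaces.
public interface IRequestInfo
{
    string Url { get; }
}

public interface IResponseInfo
{
    void Write(string content);
}

// A client method that needs both. A breaking change to *either*
// interface still breaks this client, so splitting one large interface
// into several small ones only redistributed the risk.
public static class EchoModule
{
    public static void Run(IRequestInfo request, IResponseInfo response)
    {
        response.Write(request.Url);
    }
}

// Minimal implementations, e.g. for a unit test.
public class FixedRequest : IRequestInfo
{
    public string Url { get { return "/home"; } }
}

public class CapturingResponse : IResponseInfo
{
    public string Written;
    public void Write(string content) { Written = content; }
}
```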

Alternative Approaches

These issues are among the trade-offs of using statically typed languages. One reason you don’t hear much about this in the Ruby community, for example, is that there really aren’t interfaces in Ruby, though some have proposed approaches to provide something similar. Dynamic typing is really great for resilience to versioning.

One thing I’d love to hear more feedback on is why, in .NET land, we are so tied to interfaces. If the general rule of thumb is to keep interfaces small (I’ve even heard some suggest interfaces should only have one method), why aren’t we using delegates more instead of interfaces? That would provide for even looser coupling than interfaces.
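As a sketch of what I mean (the types here are invented for illustration), compare a one-method interface with a plain delegate dependency:

```csharp
using System;

// A one-method interface...
public interface ITimeProvider
{
    DateTime GetNow();
}

// ...versus a delegate. Greeter depends only on a Func<DateTime>, so any
// matching method or lambda can be passed in -- no interface
// implementation required.
public class Greeter
{
    private readonly Func<DateTime> getNow;

    public Greeter(Func<DateTime> getNow)
    {
        this.getNow = getNow;
    }

    public string Greet()
    {
        return getNow().Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

A test can pin the time with `new Greeter(() => new DateTime(2008, 1, 1, 9, 0, 0))`, no mock framework needed.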

The proposed dynamic keyword and duck-typing features in future versions of C# might provide more resilience. As with dynamically typed languages such as Ruby, the trade-off in these cases is that you forego compile time checking for run-time checking. Personally, I think the evidence is mounting that this may be a worthwhile tradeoff in many cases.

For More On This

The Framework Design Guidelines highlights the issues I covered here well in chapter 4 (starting on page 11). You can read chapter 4 from here. In particular, I found this quote quite interesting as it is based on the experience from other Framework developers.

Over the course of the three versions of the .NET Framework, I have talked about this guideline with quite a few developers on our team. Many of them, including those who initially disagreed with the guideline, have said that they regret having shipped some API as an interface. I have not heard of even one case in which somebody regretted that they shipped a class.

Again, these guidelines are specific to Framework development (for statically typed languages), and not to other types of software development.

What’s Next?


If I’ve done my job well, you by now agree with the conclusions I put forth in this post, given the constraints I laid out. Unless of course there is something I missed, which I would love to hear about.

My gut feeling is that most disagreements will focus on the premise, the constraint, of avoiding breaking changes at all costs. This is where you might find me in some agreement. After all, before I joined Microsoft, I wrote a blog post asking, Is Backward Compatibility Holding Microsoft Back? Now that I am on the inside, I realize the answer requires more nuance than a simple yes or no answer. So I will touch on this topic in an upcoming post.

Other topics I hope to cover:

  • On backwards compatibility and breaking changes.
  • Different criteria for choosing interfaces and abstract base classes.
  • Facts and Fallacies regarding small interfaces.
  • Whatever else crosses my mind.

My last word on this is to keep the feedback coming. It may well turn out that based on experience, HttpContextBase should be an interface while HttpRequest should remain an abstract base class. Who knows?! Frameworks are best extracted from real applications, not simply from guidelines. The guidelines are simply that, a guide based on past experiences. So keep building applications on top of ASP.NET MVC and let us know what needs improvement (and also what you like about it).

Technorati Tags: ASP.NET,Design,Framework

code comments edit

This is part 2 in an ongoing series in which I talk about various design and versioning issues as they relate to Abstract Base Classes (ABC), Interfaces, and Framework design. In part 1 I discussed some ways in which ABCs are more resilient to versioning than interfaces. I haven’t covered the full story yet and will address some great points raised in the comments.

In this part, I want to point out some cases in which Abstract Base Classes fail in versioning. In my last post, I mentioned you could simply add new methods to an Abstract Base Class and not break clients. Well that’s true, it’s possible, but I didn’t emphasize that this is not true for all cases and can be risky. I was saving that for another post (aka this one).

I had been thinking about this particular scenario a while ago, but it was recently solidified in talking to a coworker today (thanks Mike!). Let’s look at the scenario. Suppose there is an abstract base class in a framework named FrameworkContextBase. The framework also provides a concrete implementation.

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
}

Somewhere else in another class in the framework, there is a method that takes in an instance of the base class and calls the method on it for whatever reason.

public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
}

With me so far? Good. Now imagine that you, as a consumer of the Framework, write a concrete implementation of FrameworkContextBase. In the next release of the framework, the framework adds a method to FrameworkContextBase like so…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();

  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }
}

And the Accept method is updated like so…

public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
  arg.MethodTwo();
}

Seems innocuous enough. You might even be lulled into the false sense that all is well in the world and decide to go ahead and upgrade the version of the Framework hosting your application without recompiling. Unfortunately, somewhere in your application, you pass your old implementation of the ABC to the new Accept method. Uh oh! Runtime exception!

The fix sounds easy in theory: when adding a new method to the ABC, the framework developer needs to make sure it has a reasonable default implementation. In my contrived example, the default implementation throws an exception. This seems easy enough to fix. But how can you be sure the implementation is reasonable for all possible implementations of your ABC? You can’t.

This is often why you see guidelines for .NET which suggest making all methods non-virtual unless you absolutely need to. The idea is that the Framework should provide checks before and after to make sure certain invariants are not broken when calling a virtual method since we have no idea what that method will do.

As you might guess, I tend to take the approach of buyer beware. Rather than putting the weight on the Framework to make sure that virtual methods don’t do anything weird, I’d rather put the weight on the developer overriding the virtual method. At least that’s the approach we’re taking with ASP.NET MVC.

Another possible fix is to also add an associated Supports{Method} property when you add a method to an ABC. All code that calls that new method would have to check the property. For example…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();

  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }

  public virtual bool SupportsMethodTwo { get { return false; } }
}

//Some other class in the same framework
public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
  if (arg.SupportsMethodTwo)
  {
    arg.MethodTwo();
  }
}

But it may not be clear to you, the framework developer, what you should do when the instance doesn’t support MethodTwo. The right fallback behavior isn’t always obvious.

This post seems to contradict my last post a bit, but I don’t see it that way. As I stated all along, there is no perfect design, we are simply trying to optimize for constraints. Not only that, I should add that versioning is a hard problem. I am not fully convinced we have made all the right optimizations (so to speak) hence I am writing this series.

Coming up: More on versioning interfaces with real code examples and tradeoffs. More on why breaking changes suck. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Interfaces,Abstract Base Classes,Framework

code, mvc comments edit

From the way my blogging frequency has declined, you can guess I’ve been quite busy here at Microsoft preparing for the next release of ASP.NET MVC.

It’s not just working on specs, design meetings, etc. that keeps me busy. It’s also preparing for several talks, various spec reviews, building hands-on labs, demo and app building, and so on. All the while I am still learning the ropes and dealing with selling a house in L.A. and buying a house up here. There’s a lot that goes into being a PM that I naively didn’t expect, on top of just how much work goes into simply moving.

Not that I’m complaining. It’s still a lot of fun. ScottGu posted an entry on his blog about the fun we’re having in preparing for the upcoming ASP.NET MVC Mix Preview.

Here’s a screenshot that shows a tooling feature I’m particularly excited about.

Unit Testing

I’ve written about the challenges Microsoft faces with bundling Open Source software in the past and what I thought they should do about it…

What I would have liked to have seen is for Team System to provide extensibility points which make it extremely easy to swap out MS Test for another testing framework. MS Test isn’t the money maker for Microsoft, it’s the whole integrated suite that brings in the moolah, so being able to replace it doesn’t hurt the bottom line.

What we have here is similar in spirit to what I hoped for. AFAIK it is not going to be integrated to the level that Visual Studio Team Test is integrated, but it also won’t require VSTS. This is a feature of the ASP.NET MVC project template.

I’m also excited about some of the community feedback we were able to incorporate such as removing the ControllerActionAttribute among other things. A hat or two might be eaten over that one ;).

In any case, there are still some areas I’m not yet happy with (there always will be, won’t there?), so we are not done by any measure. I’ll reserve talking about that until after Mix when you have the code in your hands and can follow along.

Technorati Tags: ASP.NET,aspnetmvc

code, mvc comments edit

UPDATE: I improved this based on some feedback in my comments.

With ASP.NET MVC it is possible for someone to try to navigate directly to a .aspx view. In general, this only leaves them with an ugly error message, as Views typically need ViewData in order to work.

However, one approach that I think will easily work is to create a Web.config file in the root of your Views directory that contains the following.

We need to do more testing on our side to make sure there’s no pathological case in doing this, but so far in my personal testing, it seems to work.

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <remove verb="*" path="*.aspx"/>
      <add path="*.aspx" verb="*"
        type="System.Web.HttpNotFoundHandler"/>
    </httpHandlers>
  </system.web>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <handlers>
      <remove name="PageHandlerFactory-ISAPI-2.0"/>
      <remove name="PageHandlerFactory-ISAPI-1.1"/>
      <remove name="PageHandlerFactory-Integrated"/>
      <add name="BlockViewHandler" path="*.aspx" verb="*"
        preCondition="integratedMode"
        type="System.Web.HttpNotFoundHandler"/>
    </handlers>
  </system.webServer>
</configuration>

Let me know if you run into problems with this.

Technorati Tags: ASP.NET MVC,Tips,ASP.NET

code comments edit

If you know me, you know I go through great pains to write automated unit tests for my code. Some might even call me anal about it. Those people know me too well.

For example, in the active branch of Subtext, we have 882 unit tests, of which I estimate I wrote around 800. Yep, if you’re browsing the Subtext unit test code and something smells bad, chances are I probably dealt it.

Unfortunately, by most definitions of Unit Test, most of these tests are really integration tests. Partly because I was testing legacy code and partly because I was blocked by the framework, not every method under test could be easily tested in isolation.

Whether you’re a TDD fan or not, I think most of us can agree that unit testing your own code is a necessary practice, whether done manually or automated.

I still think it’s worthwhile to take that one step further and automate your unit tests whenever possible and where it makes most sense (for large values of make sense).

When writing an automated unit test, the key is to try and isolate the unit under test (typically a method) by both controlling the external dependencies for the method and being able to capture any side-effects of the method.
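For instance, here's a minimal sketch (all names are hypothetical) where the dependency is expressed as an interface so a test can both control the input and capture the side-effect:

```csharp
using System.Collections.Generic;

// The side-effect (sending mail) hides behind an interface.
public interface IMailSender
{
    void Send(string to, string body);
}

public class OverdueNotifier
{
    private readonly IMailSender sender;

    public OverdueNotifier(IMailSender sender)
    {
        this.sender = sender;
    }

    public void Notify(string user, int daysOverdue)
    {
        if (daysOverdue > 0)
            sender.Send(user, "You are " + daysOverdue + " day(s) overdue.");
    }
}

// A test double that records calls instead of sending anything.
public class RecordingMailSender : IMailSender
{
    public List<string> Sent = new List<string>();
    public void Send(string to, string body) { Sent.Add(to + ": " + body); }
}
```

The test then asserts against RecordingMailSender.Sent rather than inspecting a real mail server.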

Sometimes, though, the external code your code calls into is written in a way that makes your own code challenging to test. Often, you have to resort to building all sorts of Bridge or Adapter classes to abstract away the thing you’re calling. Sometimes you are plain stuck.
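The adapter shuffle usually looks something like this sketch (SealedClock here is a made-up stand-in for whatever sealed type is blocking you):

```csharp
using System;

// Hypothetical stand-in for a sealed framework type you can neither
// subclass nor mock directly.
public sealed class SealedClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// The adapter re-exposes the capability through a type tests can fake.
public abstract class ClockBase
{
    public abstract DateTime Now { get; }
}

public class SealedClockAdapter : ClockBase
{
    private readonly SealedClock clock = new SealedClock();
    public override DateTime Now { get { return clock.Now; } }
}
```

Your code then takes a ClockBase, production passes the adapter, and tests pass a fake — tedious plumbing for something that should be simple.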

What “external code” might exhibit this characteristic of making your own code hard to test? I don’t have anything in particular in mind but…oh…off the top of my head, if you made me pick one totally spontaneously, I might mention by way of example one little piece of code called the .NET Framework.

For the most part, I’ve had very few problems with the Base Class Libraries or other parts of the Framework. Most of my testing woes came when writing code against ASP.NET. The ASP.NET MVC framework hopes to help address some of that.

I’ve been in a lot of internal discussions recently talking with various people and teams about testable code. In order to contribute more value to these discussions, I am trying to gather specific cases and scenarios where testing your code is really painful.

What I am not looking for is feedback such as

“It’s hard to write unit tests when writing a Foo application/control/part.”

or

“Class Queezle should really make method Bloozle public.”

Perhaps Bloozle should be public, but I am not interested in theoretical pains. If the inaccessibility of Bloozle caused a real problem in a real unit test, that’s what I want to hear.

What I am looking for is specifics! Concrete scenarios that are blocked or extremely painful. Including the actual unit test is even better! For example…

When writing ASP.NET code, I want to have helper methods that accept an HttpContext instance and get some values from its various properties and then perform a calculation. Because HttpContext is sealed and is tightly coupled to the ASP.NET stack, I can’t mock it out or replace it with a test specific subclass. This means I always have to write some sort of bridge or adapter that wraps the context which gets tedious.

Only, you can’t use that one, because I already wrote it. Ideally, I’d love to hear feedback from across the board, not just ASP.NET. Got issues with WPF, WCF, Sharepoint Web Parts, etc… tell us about it. Please post them in my comments or on your blog and link to this post. Your input is very valuable and could help shape the future of the Framework, or at least help me to sound like I am clued into customer needs next time I talk to someone internal about this. ;)

Technorati Tags: TDD,Unit Testing

code comments edit

At the risk of embarrassing myself and eliciting more groans, I am going to share a joke I made up for my Code Camp talk. I was inspired to come up with some humor based on Jeff’s Sesame Street Presentation Rule post. I fully understand his post was addressing something deeper than simply telling a gratuitous joke in the presentation.

The context is that I just finished explaining the various dependencies between the Model, View, and Controller within the MVC pattern.

UPDATE: Updating this joke as the feedback comes in. The original is at the bottom.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The View has a few more drinks than the others and turns to the Model and says,

“Can you give me a ride home tonight? I know I always burden you with this, but I can never depend on that guy.”

Ba dum dum.

I am refining it a bit and posting it here in the hopes that I can improve upon it with your feedback. :)

Here’s the original as a point of comparison.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The Model has a few more drinks than the others and turns to the Controller and says,

“Can you give me a ride home tonight? I know I always burden you with this and never ask him, but I can never depend on that guy.”

Thanks for the feedback!

Technorati Tags: Humor,ASP.NET MVC

mvc comments edit

The ASP.NET and Silverlight team are hiring! Brad Abrams (who happens to be my “grand-boss” as in my boss’s boss) posted a developers wanted ad on his blog:

  • Are you a JavaScript guru who has a passion to make Ajax easier for the masses?
  • Are you the resident expert in ASP.NET and consistently think about how data driven applications could be easier to build?
  • Are you a TDD or patterns wonk that sees how ASP.NET MVC is a game-changer?
  • Are you excited about the potential of Silverlight for business applications, and particularly the potential for the ASP.NET+Silverlight combination where we have the great .NET programming model on the server and client?

Some of you out there have provided great insight into how ASP.NET MVC could be better (and then some of you just point out its warts).  Either way, here’s a chance to have an even bigger influence on its design and code. I promise I’m easy to work with.

These are senior developer positions…

My team has several openings for senior, experienced developers with a passion for platform\framework technology. If you have 5+ years of software development industry experience, know how to work effectively in teams, have great software design abilities and are an excellent developer, then we just might have the spot for you!

Oh, and if you do get hired and join my feature team, I like my coffee with milk and sugar. Thank you very much. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Microsoft,Jobs

code comments edit

The Seattle Code Camp (which despite the misleading photo, isn’t a camping trip) is now over and I have nothing but good things to say about it. I didn’t get a chance to see a lot of talks but did enjoy the talk by Jim Newkirk and Brad Wilson. I’m a fan of their approach to providing extensibility and this session provided all the impetus I needed to really give it a try rather than simply talking about trying it. :)

As for my own talk, I had the great pleasure of showing up late to my talk. To this moment, I still don’t know why my alarm didn’t go off. All indications are that it was set properly.

My wife woke me up at 9:00 AM asking, “Don’t you have a talk to give today?” I sure did, at 9:15 AM. I drove like a madman from Mercer Island (sorry to all the people I cut off) to Redmond and ended up being around 10 minutes late I think.

Fortunately the attendees were patiently waiting and despite my frazzled approach, I think the presentation went fairly well. Perhaps Chris Tavares can tell me in private how it really went. ;) Thanks to everyone who attended and refrained from heckling me. I appreciate the restraint and thoughtfulness.

By the way, Chris is part of the P&P group, but he’s also a member of the ASP.NET MVC feature team. I think his role is a P&P liaison or something like that. Mainly he’s there to give great advice and help us out. So definitely give his blog a look for some great software wisdom.

As this was my first Code Camp, I am definitely looking forward to the next one.

Technorati Tags: Code Camp,ASP.NET MVC

personal comments edit

Well today I turn old. 33 years old to be exact. A day after Hanselman turns really old. It seems I’m always following in that guy’s footsteps, doesn’t it?

Oh look, Hanselman gets a blue badge so Phil has to go and get a blue badge. Hanselman has a birthday and Phil has to go and have a birthday on the next day. What a pathetic follower!

What’s interesting is that Rob Conery’s birthday was just a few days ago and we’re all recent hires and involved in MVC in some way. Ok, maybe that isn’t all that interesting after all as it is statistically insignificant, but I thought I’d mention it anyways.

Clustering is a natural occurrence in random sequences. In fact, when looking at a sequence in which there is no clustering, you’d pretty much know it wasn’t random. That’s often what happens when a human tries to make a sequence look random. The person will tend to spread things evenly out. But randomness typically forms clusters. Not only that, but people have a natural tendency to find patterns in sequences and phenomena that don’t exist. But I digress.

Code Camp Seattle

If you’re in the Seattle area this weekend, I’ll be speaking at the Seattle Code Camp. There are several speakers covering various aspects of ASP.NET MVC so I thought I would try not to overlap and maybe cover a bit of an introduction, a few techniques, and some Q & A.

In other words, I’m woefully unprepared because I’ve been so slammed trying to get a plan finalized for MVC that we can talk about. But I’m enlisting Rob’s help to put together some hopefully worthwhile demos. All I really need to do is get a designer involved to make something shiny. ;)

In any case, it’s 2 AM, I need to retire for the evening. I leave you with this photo of my son after he got ahold of a pen while my wife wasn’t looking. It was a brief moment, but that’s all he needed to change his look.

Cody Pen

code, tdd comments edit

In a recent post, Frans Bouma laments the lack of computer science or reference to research papers by agile pundits in various mailing lists (I bet this really applies to nearly all mailing lists, not just the ones he mentions).

What surprised me to no end was the total lack of any reference/debate about computer science research, papers etc. … in the form of CS research. Almost all the debates focused on tools and their direct techniques, not the computer science behind them. In general asking ‘Why’ wasn’t answered with: “research has shown that…” but with replies which were pointing at techniques, tools and patterns, not the reasoning behind these tools, techniques and patterns.

As a fan of the scientific method, I understand the difference between a hypothesis and a theory/law. Experience and anecdotal evidence do not a theory make; as anyone who’s participated in a memory experiment can attest, memory itself is easily manipulated. At the same time though, if a hypothesis works for you every time you’ve tried it, you start to have confidence that the hypothesis holds weight.

So while the lack of research was a slight itch sitting there in the back of my mind, I’ve never been overly concerned because I’ve always felt that the efficacy of TDD would hold weight when put to the test (get it?). It is simply a young hypothesis and it was only a matter of time before experimentation would add evidence to bolster the many claims I’ve made on my blog.

I’ve been having a lot of interesting discussions around TDD internally here lately and wanted to brush up on the key points when I happened upon this paper, published in IEEE Transactions on Software Engineering, entitled On the Effectiveness of Test-first Approach to Programming.

The researchers put together an experiment in which an experiment group and a control group implemented a set of features in an incremental fashion. The experiment group wrote tests first, the control group applied a more conventional approach of writing the tests after the fact. As a result, they found…

We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed.

The interesting result here is that quality is more a factor of the number of tests you write, and not whether you write them first. TDD just happens to be a technique in which the natural inclination (path of least resistance) is to write more tests than less in the same time span. The lesson here is even if you don’t buy TDD, you should still be writing automated unit tests for your code.

Obviously such a controlled experiment on undergraduate students leaves one wondering if the conclusions drawn can really be applied to the workplace. The researchers do address this question of validity…

The external validity of the results could be limited since the subjects were students. Runeson [21] compared freshmen, graduate, and professional developers and concluded that similar improvement trends persisted among the three groups. Replicated experiments by Porter and Votta [22] and Höst et al. [23] suggest that students may provide an adequate model of the professional population.

Other evidence they refer to even suggests that in the case of advance techniques, the benefit that professional developers gain outweighs that of students, which could bolster the evidence they present.

My favorite part of the paper is the section in which they offer their explanations for why they believe that Test-First programming might offer productivity benefits. I won’t dock them for using the word synergistic.

We believe that the observed productivity advantage of Test-First subjects is due to a number of synergistic effects:

  • Better task understanding. Writing a test before implementing the underlying functionality requires the programmer to express the functionality unambiguously.
  • Better task focus. Test-First advances one test case at a time. A single test case has a limited scope. Thus, the programmer is engaged in a decomposition process in which larger pieces of functionality are broken down to smaller, more manageable chunks. While developing the functionality for a single test, the cognitive load of the programmer is lower.
  • Faster learning. Less productive and coarser decomposition strategies are quickly abandoned in favor of more productive, finer ones.
  • Lower rework effort. Since the scope of a single test is limited, when the test fails, rework is easier. When rework immediately follows a short burst of testing and implementation activity, the problem context is still fresh in the programmer’s mind. With a high number of focused tests, when a test fails, the root cause is more easily pinpointed. In addition, more frequent regression testing shortens the feedback cycle. When new functionality interferes with old functionality, this situation is revealed faster. Small problems are detected before they become serious and costly.

Test-First also tends to increase the variation in productivity. This effect is attributed to the relative difficulty of the technique, which is supported by the subjects’ responses to the post-questionnaire and by the observation that higher skill subjects were able to achieve more significant productivity benefits.

So while I don’t expect that those who are resistant to or disparaging of TDD will suddenly change their minds, I am encouraged by this result as it jibes with my own experience. The authors also cite several other studies (future reading material, woohoo!) that for the most part support the benefits of TDD.

So while I’m personally convinced of the benefits of TDD and feel the evidence thus far supports that, I do agree that the evidence is not yet overwhelming. More research is required.

I prefer to take a provisional approach to theories, ready to change my mind if the evidence supports it. Though in this case, I find TDD a rather pleasant and enjoyable way to write code. It would take a massive amount of solid evidence that TDD is a colossal waste of time for me to drop it.

Technorati Tags: TDD,Unit Tests,Research

comments edit

After joining Microsoft and drinking from the firehose a bit, I’m happy to report that I am still alive and well and now residing in the Seattle area along with my family. In meeting with various groups, I’ve been very excited by how many teams here are embracing Test Driven Development (and its close cousin, TAD aka Test After Development). We’ve had great discussions in which we really looked at a design from a TDD perspective and discussed ways to make it more testable. Teams are also starting to really apply TDD in their development process as a team effort, and not just as sporadic individuals.

TDD doesn’t work well in a vacuum. I mean, it can work to adopt TDD as an individual developer, but the effectiveness of TDD is lessened if you don’t consider the organizational changes that should occur in tandem when adopting it.

One obvious example is the fact that TDD works much better when everyone on your team applies it. If only one developer applies TDD, that developer loses the regression benefit of a full unit test suite when making changes that might affect code written by a coworker who doesn’t have unit test coverage.

Another example that was less obvious to me until now was how the role of a dedicated QA (Quality Assurance) team changes subtly when part of a team that does TDD. This wasn’t obvious to me because I’ve rarely been on a team with such talented, technical, and dedicated QA team members.

QA teams at Microsoft invest heavily in test automation from the UI level down to the code level. Typically, the QA team goes through an exploratory testing phase in which they try to break the app and suss out bugs. This is pretty common across the board.

The exploratory testing provides data for the QA team to use in determining what automation tests are necessary and provide the most bang for the buck. Engineering is all about balancing constraints so we can’t write every possible automated test. We want to prioritize them. So if exploratory testing reveals a bug, that will be a high priority area for test automation to help prevent regressions.

However, this appears to have some overlap with TDD. Normally, when someone reports a bug to me, the test driven developer, I will write a unit test (or tests) that should pass but fails because of that bug. I then fix the bug and ensure my test (or tests) now pass because the bug is fixed. This prevents regressions.
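That bug-to-regression-test workflow is language-agnostic; here is a minimal sketch of it in Python (the `slugify` function and its bug are hypothetical, invented purely to illustrate the cycle):

```python
import re

def slugify(text):
    # Fixed implementation: collapse runs of whitespace into one hyphen.
    # The hypothetical bug report was that the old version used r"\s"
    # (no +), so "Hello  World" produced "hello--world".
    return re.sub(r"\s+", "-", text.strip().lower())

def test_regression_double_hyphen():
    # Written *before* the fix: this test failed against the old
    # implementation, then passed once the bug was fixed. It now stays
    # in the suite to prevent the bug from regressing.
    assert slugify("Hello  World") == "hello-world"

test_regression_double_hyphen()
```

The key point is the ordering: the test exposes the bug first, so you know the test is actually capable of catching it.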

In order to reduce duplication of efforts, we’re looking at ways of reducing this overlap and adjusting the QA role slightly in light of TDD to maximize the effectiveness of QA working in tandem with TDD.

In talking with our QA team, here’s what we came up with that seems to be a good guideline for QA in a TDD environment:

  • QA is responsible for exploratory testing along with non-unit tests (system testing, UI testing, integration testing, etc…). TDD often focuses on the task at hand so it may be easy for a developer to miss obscure test cases.
  • Upon finding bugs in exploratory testing, assuming the test case doesn’t require external systems (aka it isn’t a system or integration test), the developer would be responsible for test automation via writing unit tests that expose the bug. Once the test is in place, the developer fixes the bug.
  • In the case where the test requires integrating with external resources and is thus outside the domain of a unit test, the QA team is responsible for test automation.
  • QA takes the unit tests and runs them on all platforms.
  • QA ensures API integrity by testing for API changes.
  • QA might also review unit tests which could provide ideas for where to focus exploratory testing.
  • Much of this is flexible and can be negotiated between Dev and QA as give and take based on project needs.

Again, it is a mistake to assume that TDD should be done in a vacuum or that it negates the need for QA. TDD is only one small part of the testing story, and if you don’t have testers, shame on you ;) . One anecdote a QA tester shared with me was a case where the developer had nice test coverage of exception cases. In these cases, the code threw a nice informative exception. Unfortunately a component higher up the stack swallowed the exception, resulting in a very unintuitive error message for the end user. This might rightly be considered an integration test, but the point is that relying solely on the unit tests allowed an error to go unnoticed.
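The anecdote above describes a gap that unit tests alone can't see. A minimal Python sketch of the same shape (the `parse_config` and `load_settings` functions are hypothetical, not from the anecdote):

```python
def parse_config(line):
    # Well-tested low level: raises an informative error on bad input.
    if "=" not in line:
        raise ValueError(f"Malformed config line: {line!r} (expected key=value)")
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

def load_settings(lines):
    # Higher-level component that swallows the exception, so the user
    # never sees the informative message -- bad lines silently vanish.
    settings = {}
    for line in lines:
        try:
            key, value = parse_config(line)
            settings[key] = value
        except ValueError:
            pass  # the informative exception dies here
    return settings

# The unit test on parse_config passes: the exception is informative...
try:
    parse_config("garbage")
except ValueError as e:
    assert "Malformed config line" in str(e)

# ...but only an integration-level check reveals the swallowed error:
# the bad line disappears without any message reaching the user.
assert load_settings(["timeout=30", "garbage"]) == {"timeout": "30"}
```

The low-level unit tests are green the whole time; it takes a test (or a tester) exercising the full stack to notice the exception never surfaces.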

TDD creates a very integrated and cooperative relationship between developers and QA staff. It puts the two on the same team, rather than in the adversarial relationship the two roles sometimes fall into.

Technorati Tags: TDD,Microsoft