
UPDATE: I updated the sample to work with the final version of ASP.NET Routing included with ASP.NET 3.5 SP1. This sample is now being hosted on CodePlex.

Download the demo here

In my last post I described how Routing no longer has any dependency on MVC. The natural question I’ve been asked upon hearing that is “Can I use it with Web Forms?” to which I answer “You sure can, but very carefully.”

Being on the inside, I’ve had a working example of this for a while now based on early access to the bits. Even so, Chris Cavanagh impressively beat me to the punch by blogging his own implementation of routing for Web Forms. Nice!

One of the obvious uses for the new routing mechanism is as a “clean” alternative to URL rewriting (and possibly custom VirtualPathProviders for simple scenarios) for traditional / postback-based ASP.NET sites.  After a little experimentation I found some minimal steps that work pretty well:

  • Create a custom IRouteHandler that instantiates your pages
  • Register new Routes associated with your IRouteHandler
  • That’s it!

Chris took advantage of the extensibility model by implementing the IRouteHandler interface with his own WebFormRouteHandler class (not surprisingly, my implementation uses the same name). ;)

There is one subtle potential security issue to be aware of when using routing with URL Authorization. Let me give an example.

Suppose you have a website and you wish to block unauthenticated access to the admin folder. With a standard site, one way to do so would be to drop the following web.config file in the admin folder…

<?xml version="1.0"?>
<configuration>
  <system.web>
    <authorization>
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>


Ok, I am a bit draconian. I decided to block access to the admin directory for all users. Attempt to navigate to the admin directory and you get an access denied error. However, suppose you use a naive implementation of WebFormRouteHandler to map the URL fizzbucket to the admin directory like so…

RouteTable.Routes.Add(new Route("fizzbucket"
  , new WebFormRouteHandler("~/admin/secretpage.aspx")));

Now, a request for the URL /fizzbucket will display secretpage.aspx in the admin directory. This might be what you want all along. Then again, it might not be.

In general, I believe that users of routing with Web Forms will want to secure the physical directory structure in which the Web Forms are placed using URL authorization. One way to do this is to call UrlAuthorizationModule.CheckUrlAccessForPrincipal on the actual virtual path of the Web Form.

This is one key difference between routing and URL rewriting: routing doesn’t actually rewrite the URL. Another key difference is that routing provides a means to generate URLs as well, and is thus bidirectional.

The following code is my implementation of WebFormRouteHandler, which addresses this security issue. This class has a boolean property that allows you to skip applying URL authorization to the physical path if you’d like (following the principle of secure by default, this property defaults to true, which means URL authorization is always applied).

public class WebFormRouteHandler : IRouteHandler
{
  public WebFormRouteHandler(string virtualPath) : this(virtualPath, true)
  {
  }

  public WebFormRouteHandler(string virtualPath, bool checkPhysicalUrlAccess)
  {
    this.VirtualPath = virtualPath;
    this.CheckPhysicalUrlAccess = checkPhysicalUrlAccess;
  }

  public string VirtualPath { get; private set; }

  public bool CheckPhysicalUrlAccess { get; set; }

  public IHttpHandler GetHttpHandler(RequestContext requestContext)
  {
    if (this.CheckPhysicalUrlAccess
      && !UrlAuthorizationModule.CheckUrlAccessForPrincipal(this.VirtualPath
              , requestContext.HttpContext.User
              , requestContext.HttpContext.Request.HttpMethod))
    {
      throw new SecurityException();
    }

    var page = BuildManager.CreateInstanceFromVirtualPath(this.VirtualPath
        , typeof(Page)) as IHttpHandler;

    if (page != null)
    {
      var routablePage = page as IRoutablePage;
      if (routablePage != null)
        routablePage.RequestContext = requestContext;
    }
    return page;
  }
}
You’ll notice the code here checks to see if the page implements an IRoutablePage interface. If your Web Form page implements this interface, the WebFormRouteHandler class can pass it the RequestContext. In the MVC world, you generally get the RequestContext via the ControllerContext property of Controller, which itself inherits from RequestContext.

The RequestContext is important for calling into API methods for URL generation. Along with the IRoutablePage, I provide a RoutablePage abstract base class that inherits from Page. The code for this interface and the abstract base class that implements it is in the download at the end of this post.
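To make that concrete, here’s a sketch of what the IRoutablePage interface and the RoutablePage base class might look like. The actual code is in the download; the names follow the post, but the System.Web types are stubbed out here so the shape stands on its own (the real base class would inherit from System.Web.UI.Page).

```csharp
using System;

// Stub standing in for System.Web.Routing.RequestContext.
public class RequestContext { }

// A Web Form page opts in to receiving the RequestContext
// by implementing this interface.
public interface IRoutablePage
{
    RequestContext RequestContext { set; }
}

// Convenience base class; the real one would inherit from Page.
public abstract class RoutablePage : IRoutablePage
{
    // Derived pages use this for URL generation and route values.
    protected RequestContext RequestContext { get; private set; }

    RequestContext IRoutablePage.RequestContext
    {
        set { this.RequestContext = value; }
    }
}

// Example page built on the base class.
public class SampleRoutablePage : RoutablePage
{
    public RequestContext CurrentContext
    {
        get { return this.RequestContext; }
    }
}
```

The route handler only needs the interface; the abstract base class just saves each page from re-implementing the property.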

One other thing I did for fun was to play around with fluent interfaces and extension methods for defining simple routes for Web Forms. Since routes with Web Forms tend to be simple, I thought this syntax would work nicely.

public static void RegisterRoutes(RouteCollection routes)
{
  //first one is a named route.
  routes.Map("General", "haha/(unknown).aspx").To("~/forms/haha.aspx");
}

The general idea is that the route url on the left maps to the webform virtual path to the right.

I’ve packaged all this up into a solution you can download and try out. The solution contains three projects:

  • WebFormRouting - The class library with the WebFormRouteHandler and helpers…
  • WebFormRoutingDemoWebApp - A website that demonstrates how to use WebFormRouting and also shows off url generation.
  • WebFormRoutingTests - a few non-comprehensive unit tests of the WebFormRouting library.

WARNING: This is prototype code I put together for educational purposes. Use it at your own risk. It is by no means comprehensive, but is a useful start to understanding how to use routing with Web Forms should you wish. Download the demo here.

Technorati Tags: aspnetmvc,Routing,ASP.NET

At this year’s Mix conference, we announced the availability of the second preview for ASP.NET MVC which you can download from here. Videos highlighting MVC are also available.

Now that I am back from Mix and have time to breathe, I thought I’d share a few (non-exhaustive) highlights of this release as well as my thoughts on the future.

New Assemblies and Routing

Much of the effort and focus of this release was put into routing. If you’ve installed the release, you’ll notice that MVC has been factored into three assemblies:

  • System.Web.Mvc
  • System.Web.Routing
  • System.Web.Abstractions

The key takeaway here is that MVC depends on Routing which depends on Abstractions.

MVC => Routing => Abstractions

Routing is being used by another team here at Microsoft, so we worked on making it a feature independent of MVC. MVC relies heavily on routing, but routing doesn’t have any knowledge of MVC. I’ll write a follow-up post that talks about the implications of that and how you might use Routing in a non-MVC context.

Because of the other dependencies on Routing, we spent a lot of time trying to make sure we got the API and code correct and making sure the quality level of routing meets an extremely high bar. Unfortunately, this investment in routing did mean that we didn’t implement everything we wanted to implement for MVC core, but hey, it’s a preview right? ;)

CodePlex Builds

At Mix this year, Scott Hanselman gave a great talk (IMHO) on MVC. One thing he announced during that talk is the vehicle by which we will be making the MVC source code available. Many of you might recall ScottGu’s recent roadmap for MVC in which he mentioned we would be making the source available. At Mix, Scott announced that we would be deploying our source to CodePlex soon.

Not only that, we hope to push source from our source control to a CodePlex project’s source control server on a semi-regular basis. These builds would only include source (in a buildable form) and would not include the usual hoopla associated with a full Preview or Beta release.

How regular a schedule we keep to remains to be seen, but Scott mentioned in his talk around every four to six weeks. Secretly, between you and me, I’d love to push even more regularly. Just keep in mind that this is an experiment in transparency here at Microsoft, so we’ll start slow (baby steps!) and see how it progresses. In spirit, this would be the equivalent to the daily builds that are common in open source projects, just not daily.

Unit Test Framework Integration

In a recent post, I highlighted some of the work we’re doing around integrating third party unit testing frameworks.

Unit Testing

I’ve been in contact with various unit testing framework developers about integrating their frameworks with the MVC project template. I’m happy to see that MbUnit released updated installers that will integrate MbUnit into this dropdown. Hopefully the others will follow suit soon.

One interesting thing to point out is that this is also a great way to integrate your own totally tricked-out unit testing project template, complete with your favorite mock framework and your own private assembly full of useful unit test helper classes and methods.

If you’re interested in building your own customized test framework project which will show up in this dropdown, Joe Cartano of the Web Tools team posted an updated streamlined walkthrough on how to do so using NUnit and Rhino Mocks as an example.

Upcoming improvements

One area in this preview we need to improve is the testability of the framework. The entire MVC dev team had a pause in which we performed some app building and uncovered some of the same problems being reported in various forums. Problems such as testing controller actions in which a call to RedirectToAction is made. Other problems include the fact that you need to mock ControllerContext even if you’re not using it within the action. There are many others that have been reported by various people in the community and we are listening. We ourselves have encountered many of them and definitely want to address them.

Experiencing the pain ourselves is very important to understanding how we should fix these issues. One valuable lesson I learned is that a framework that is testable does not mean that applications built with that framework are testable. We definitely have to keep that in mind as we move forward.

To that end, we’ll be applying some suggested improvements in upcoming releases that address these problems. One particular refactoring I’m excited about is ways to make the controller class itself more lightweight. Some of us are still recovering from Mix so as we move forward, I hope to provide even more details on specifically what we hope to do rather than this hand-waving approach. Bear with me.

During this next phase, I personally hope to have more time to continuously do app building to make sure these sorts of testing problems don’t crop up again. For the ones that are out there, I take responsibility and apologize. I am as big a fan of TDD as anyone, and I hate making life difficult for my brethren. ;)


When do we RTM? This is probably the most asked question I get and right now, I don’t have a good answer, in part because I’m still trying to sort through massive amounts of feedback regarding this question. Some questions I’ve been asking various people revolve around the very notion of RTM for a project like this:

  • How important is RTM to you?
  • Right now, the license allows you to go live and we’ll provide the source. Is that good enough for your needs?
  • Is it really RTM that you want, or is it the assurance that the framework won’t churn so much?
  • What if we told you what areas are stable and which areas will undergo churn, would you still need the RTM label?
  • Is there a question I should be asking regarding this that I’m not? ;)

I do hope to have a more concrete answer soon based on this feedback. In general, what I have been hearing from people thus far is they would like to see an RTM release sooner rather than later, even if it is not feature rich. I look forward to hearing from more people on this.

Closing Thoughts

In general, we have received an enormous amount of interest and feedback in this project. Please do keep it coming as the constructive feedback is really shaping the project. Tell us what you like as well as what is causing you problems. We might not respond to every single thing reported as quickly as we would like to, but we are involved in the forums and I am still trying to work through the massive list of emails accrued during Mix.

Technorati Tags: ASP.NET,aspnetmvc,software


You don’t so much return from Las Vegas as you recover from Las Vegas.

Right now, I am recovering from my Las Vegas trip. Recovering from Vegas Nose caused by the extremely dry air and massive secondhand smoke inhalation. Recovering from the sensory onslaught they call casinos. Recovering from the fake kitschiness and manufactured excitement as people sit like zombies feeding machines their life savings.

Yet despite all that, I still love the place. Maybe because, despite the bad things, Vegas is really the Disneyland for adults with a slight bad streak inside that yearns to get out once in a while to indulge in a vice or three.

I especially love the Mix conference there. As Jeff Atwood writes (and I agree)…

MIX is by far my favorite Microsoft conference after attending the ’06 and ’07 iterations…

What I love about Mix is that it …

  • is relatively small and intimate, at around 2,000 attendees.
  • seamlessly merges software engineering and design.
  • includes a lot of non-Microsoft folks, even those that are traditionally hostile to Microsoft, so there’s plenty of perspective.

Some highlights from Mix include seeing old friends again such as Jon, Steve, and Miguel. Also enjoyed having time to hang with my coworkers in a non work setting after having imbibed much alcohol. And of course meeting a lot of people I know through the intarweb such as Kevin Dente, Steven Smith, and Dave Laribee among others I’m forgetting at the moment.

I also played Poker for the first time at a casino and I had a blast, leaving with some winnings despite losing four close showdowns, a couple of them in which my pocket Kings were beaten by pocket Aces. Who’d a thunk it?!

The past couple of weeks have been a rough time for me and my family. I missed a few days of work due to a bad fever, and my wife then suffered through the same fever the following week. Fortunately Cody doesn’t seem to have caught the fever.


In a week, we will finally move into our new home. While our temporary housing has been nice, we’ve been here a long while and it’s just hard to feel settled when we are not in our own home. Once we move into our house and make it our own, I think we’ll finally feel settled into this place and find our rhythm here. Who knows, maybe I’ll have more time for my extracurricular activities I used to do.

Speaking of extracurricular, I have joined two soccer leagues now, the GSSL (Greater Seattle Soccer League) and the MSSL (Microsoft Soccer League). Unfortunately, my foot is injured due to a hard tackle so I didn’t have a good showing in my first MSSL game today.

In the past month, I’ve given several talks, some that went well and one that went…how should I put it…very badly. Hopefully I will be asked to speak again. At least my TechReady (an internal talk) went well with 240 attendees, but probably because Scott Hanselman carried the both of us. ;) That boy can talk!

If you’ve read this far expecting technical content, my apologies, it’s a Saturday. We just had an updated preview of MVC released and I will be writing about that in some upcoming posts. In the meanwhile, the sun just came up outside for once and my mood is lifted after a dark few weeks. Things are starting to look up for me in this town. Hope things are well with you as well.

Technorati Tags: Vegas,Mix,Personal


One interesting response to my series on versioning of interfaces and abstract base classes is the one in which someone suggested that we should go ahead and break their code from version to version. They’re fine with requiring a recompile when upgrading.

In terms of recompiling, it’s not necessarily recompiling your code that is the worry. It’s the third-party libraries you’re using that pose the problem. Buying a proper third-party library that meets a need can be a huge cost-saving measure for your company in the long run, especially for libraries that perform non-trivial calculations/tasks/etc… that are important to your application. Not being able to use them in the event of an upgrade could be problematic.

Let’s walk through a scenario. You’re using a third-party language translation class library which has a class that implements an interface in the framework. In the next update of the framework, a new property is added to the interface. Along with that property, some new classes are added to the Framework that would give your application a competitive edge if it could use those classes.

So you recompile your app, fix any breaks in your app, and everything seems fine. But when you test your app, it breaks because the third-party library does not implement the new interface. The third-party library is broken by this change.

Now you’re in a bind. You could try to write your own language translation library, but that’s a huge and difficult task and your company is not in the business of writing language translators. You could simply wait for the third party to update their library, but in the meanwhile, your competitors are passing you by.

So in this case, the customer has no choice but to wait. It is my hunch that this is the sort of bind we’d like to limit as much as possible. This is not to say we never break, as we certainly do. But we generally limit breaks to the cases in which there is a security issue.

Again, as I’ve said before, the constraints in building a framework, any framework, are not the same constraints of building an app. Especially when the framework has a large number of users. When Microsoft ships an interface, it has to expect that there will be many many customers out there implementing that interface.

This is why there tends to be an emphasis on Abstract Base Classes as I mentioned before within the .NET Framework. Fortunately, when the ABC keeps most, if not all, methods abstract or virtual, then testability is not affected. It’s quite trivial to mock or fake an abstract base class written in such a manner as you would an interface.
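For example, faking an ABC written this way is no harder than faking an interface. A contrived sketch (these types are illustrative, not the real framework classes):

```csharp
using System;

// Framework-style ABC: every member abstract or virtual.
public abstract class RequestBase
{
    public abstract string HttpMethod { get; }

    // Virtual member whose behavior builds on the abstract one.
    public virtual bool IsPost
    {
        get { return HttpMethod == "POST"; }
    }
}

// A hand-rolled fake for tests; a mock framework could generate this instead.
public class FakeRequest : RequestBase
{
    private readonly string _method;

    public FakeRequest(string method)
    {
        _method = method;
    }

    public override string HttpMethod
    {
        get { return _method; }
    }
}
```

The fake overrides exactly the members a test cares about, just as an interface implementation would.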

Personally, I tend to like interfaces in my own code. I like the way I can quickly describe another concern via an interface and then mock out the interface so I can focus on the code I am currently concerned about. A common example is persistence. Sometimes I want to write code that does something with some data, but I don’t want to have to think about how the data is retrieved or persisted. So I will simply write up an interface that describes what the interaction with the persistence layer will look like and then mock that interface. This allows me to focus on the real implementation of persistence later so I can focus on the code that does stuff with that data now.
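A minimal sketch of that persistence scenario (all of the names here are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The persistence concern, described up front as an interface.
public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderTotals(int customerId);
}

// The code I actually want to focus on right now.
public class CustomerReport
{
    private readonly IOrderRepository _repository;

    public CustomerReport(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal TotalSpent(int customerId)
    {
        return _repository.GetOrderTotals(customerId).Sum();
    }
}

// A throwaway in-memory fake; the real implementation comes later.
public class FakeOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderTotals(int customerId)
    {
        return new[] { 10m, 20m, 12.5m };
    }
}
```

The fake lets me exercise CustomerReport today and defer the real data access until I care about it.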

Of course I have the luxury of breaking my interfaces whenever I want, as my customers tend to be few and forgiving. ;)

Technorati Tags: Framework,Design,Software,Interfaces

Eilon Lipton recently wrote a bit about context objects in ASP.NET MVC and in an “Oh by the way” moment, tossed out the fact that we changed the IHttpContext interface to the HttpContextBase abstract base class (ABC for short).

Not long after, this spurred debate among the Twitterati. Why did you choose an Abstract Base Class in this case? The full detailed answer would probably break my keyboard in length, so I thought I would try to address it in a series of posts.

In the end, I hope to convince the critics that the real point of contention is maintaining backwards compatibility, not choosing an abstract base class in this one instance.

Our Constraints

All engineering problems are about optimizing for constraints. As I’ve written before, there is no perfect design, partly because we’re all optimizing for different constraints. The constraints you have in your job are probably different than the constraints that I have in my job.

For better or worse, these are the constraints my team is dealing with in the long run. You may disagree with these constraints, so be it. We can have that discussion later. I only ask that for the time being, you evaluate this discussion in light of these constraints. In logical terms, these are the premises on which my argument rests.

  • Avoid Breaking Changes at all costs
  • Allow for future changes

Specifically I mean breaking changes in our public API once we RTM. We can make breaking changes while we’re in the CTP/Beta phase.

You Can’t Change An Interface

The first problem we run into is that you cannot change an interface.

Now some might state, “Of course you can change an interface. Watch me! Changing an interface only means some clients will break, but I still changed it.”

The misunderstanding here is that after you ship an assembly with an interface, any changes to that interface result in a new interface. Eric Lippert points this out in this old Joel on Software forum thread:

The key thing to understand regarding “changing interfaces” is that an interface is a _type_.  A type is logically bound to an assembly, and an assembly can have a strong name.

This means that if you correctly version and strong-name your assemblies, there is no “you can’t change this interface” problem.  An interface updated in a new version of an assembly is a _different_ interface from the old one.

(This is of course yet another good reason to get in the habit of strong-naming assemblies.)

Thus trying to make even one tiny change to an interface violates our first constraint. It is a breaking change. You can however add a new virtual method to an abstract base class without breaking existing clients of the class. Hey, it’s not pretty, but it works.
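A contrived sketch of why this works: a subclass written against version 1 of an ABC keeps working when version 2 adds a virtual method with a default body.

```csharp
using System;

// Version 2 of a hypothetical framework ABC. Version 1 shipped with
// only Render(); Validate() was added later as a virtual method with
// a default body, so subclasses written against version 1 are unaffected.
public abstract class WidgetBase
{
    public abstract string Render();

    // New in version 2; old subclasses simply inherit the default.
    public virtual bool Validate()
    {
        return true;
    }
}

// Written against version 1: knows nothing about Validate().
public class LegacyWidget : WidgetBase
{
    public override string Render()
    {
        return "legacy";
    }
}
```

Had WidgetBase been an interface, adding Validate() would have broken LegacyWidget: it no longer implements the (new) interface.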

Why Not Use An Interface And an Abstract Base Class?

Why not have a corresponding interface for every abstract base class? This assumes that the purpose of the ABC is simply to provide the default implementation of an interface. This isn’t always the case. Sometimes we may want to use an ABC in the same way we use an interface (all methods are abstract…revised versions of the class may add virtual methods which throw a NotImplementedException).

The reason that having a corresponding interface doesn’t necessarily buy us anything in terms of versioning is that you can’t expose the interface. Let me explain with a totally contrived example.

Suppose you have an abstract base class we’ll randomly call HttpContextBase. Let’s also suppose that HttpContextBase implements an IHttpContext interface. Now we want to expose an instance of HttpContextBase via a property of another class, say RequestContext. The question is, what is the type of that property?

Is it…

public IHttpContext HttpContext {get; set;}

? Or is it…

public HttpContextBase HttpContext {get; set;}

If you choose the first option, then we’re back to square one with the versioning issue. If you choose the second option, we don’t gain much by having the interface.

What Is This Versioning Issue You Speak Of?

The versioning issue I speak of relates to clients of the property. Suppose we wish to add a new method or property to IHttpContext. We’ve effectively created a new interface and now all clients need to recompile. Not only that, but any components you might be using that refer to IHttpContext need to be recompiled. This can get ugly.

You could decide to add the new method to the ABC and not change the interface. What this means is that new clients of this class need to perform an interface check every time they want to call this method.

public void SomeMethod(IHttpContext context)
{
  HttpContextBase contextAbs = context as HttpContextBase;
  if (contextAbs != null)
  {
    // the new method is available; safe to call it here
  }
  // otherwise, fall back to the old behavior
}
In the second case with the ABC, you can add the method as a virtual method and throw NotImplementedException. You don’t get compile time checking with this approach when implementing this ABC, but hey, thems the breaks. Remember, no perfect design.

Adding this method doesn’t break older clients. Newer clients who might need to call this method can recompile and now call this new method if they wish. This is where we get the versioning benefits.

So Why Not Keep Interfaces Small?

It’s not being small that makes an interface resilient to change. What you really want is an interface that is small and cohesive, with very little reason to change. This is probably the best strategy with interfaces and versioning, but even this can run into problems. I’ll address this in more detail in an upcoming post, but for now will provide just one brief argument.

Many times, you want to divide a wide API surface into a group of distinct smaller interfaces. The problem arises when a method needs functionality of several of those interfaces. Now, a change in any one of those interfaces would break the client. In fact, all you’ve really done is spread out one large interface with many reasons to change into many interfaces each with few reasons to change. Overall, it adds up to the same thing in terms of risk of change.
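A quick sketch of what I mean. The moment a client needs two of the small interfaces, a change to either one still breaks it:

```csharp
using System;

// One wide surface split into two "small, cohesive" interfaces...
public interface IReadable
{
    string Read();
}

public interface IWritable
{
    void Write(string value);
}

// ...but this client depends on both, so changing either interface
// still forces it (and everything like it) to recompile.
public static class Copier
{
    public static void Copy<T>(T source, T destination)
        where T : IReadable, IWritable
    {
        destination.Write(source.Read());
    }
}

// A type implementing both halves of the original wide surface.
public class Buffer : IReadable, IWritable
{
    private string _value = "";

    public string Read() { return _value; }
    public void Write(string value) { _value = value; }
}
```

Copier’s exposure to change is the union of the two interfaces, which is exactly what it would have been with the one wide interface.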

Alternative Approaches

These issues are one of the trade-offs of using statically typed languages. One reason you don’t hear much about this in the Ruby community, for example, is there really aren’t interfaces in Ruby, though some have proposed approaches to provide something similar. Dynamic typing is really great for resilience to versioning.

One thing I’d love to hear more feedback on from others is why, in .NET land, we are so tied to interfaces. If the general rule of thumb is to keep interfaces small (I’ve even heard some suggest interfaces should only have one method), why aren’t we using delegates more instead of interfaces? That would provide for even looser coupling than interfaces do.
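To illustrate the thought: a one-method interface and a delegate can express the same contract, and the delegate couples the caller to nothing but a signature (a sketch, not a recommendation):

```csharp
using System;

// Interface version: callers must know about and implement this type.
public interface IPriceCalculator
{
    decimal Calculate(decimal basePrice);
}

// Delegate version: any method or lambda with this shape will do.
public delegate decimal PriceCalculator(decimal basePrice);

public static class Checkout
{
    public static decimal Total(decimal basePrice, PriceCalculator calculator)
    {
        return calculator(basePrice);
    }
}
```

With the delegate, Checkout.Total(100m, p => p * 1.1m) works without the caller declaring a class at all, which is the looser coupling I’m alluding to.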

The proposed dynamic keyword and duck-typing features in future versions of C# might provide more resilience. As with dynamically typed languages such as Ruby, the trade-off in these cases is that you forego compile time checking for run-time checking. Personally, I think the evidence is mounting that this may be a worthwhile tradeoff in many cases.

For More On This

The Framework Design Guidelines highlights the issues I covered here well in chapter 4 (starting on page 11). You can read chapter 4 from here. In particular, I found this quote quite interesting as it is based on the experience from other Framework developers.

Over the course of the three versions of the .NET Framework, I have talked about this guideline with quite a few developers on our team. Many of them, including those who initially disagreed with the guideline, have said that they regret having shipped some API as an interface. I have not heard of even one case in which somebody regretted that they shipped a class.

Again, these guidelines are specific to Framework development (for statically typed languages), and not to other types of software development.

What’s Next?


If I’ve done my job well, you by now agree with the conclusions I put forth in this post, given the constraints I laid out. Unless of course there is something I missed, which I would love to hear about.

My gut feeling is that most disagreements will focus on the premise, the constraint, of avoiding breaking changes at all costs. This is where you might find me in some agreement. After all, before I joined Microsoft, I wrote a blog post asking, Is Backward Compatibility Holding Microsoft Back? Now that I am on the inside, I realize the answer requires more nuance than a simple yes or no answer. So I will touch on this topic in an upcoming post.

Other topics I hope to cover:

  • On backwards compatibility and breaking changes.
  • Different criteria for choosing interfaces and abstract base classes.
  • Facts and Fallacies regarding small interfaces.
  • Whatever else crosses my mind.

My last word on this is to keep the feedback coming. It may well turn out that based on experience, HttpContextBase should be an interface while HttpRequest should remain an abstract base class. Who knows?! Frameworks are best extracted from real applications, not simply from guidelines. The guidelines are simply that, a guide based on past experiences. So keep building applications on top of ASP.NET MVC and let us know what needs improvement (and also what you like about it).

Technorati Tags: ASP.NET,Design,Framework


This is part 2 in an ongoing series in which I talk about various design and versioning issues as they relate to Abstract Base Classes (ABC), Interfaces, and Framework design. In part 1 I discussed some ways in which ABCs are more resilient to versioning than interfaces. I haven’t covered the full story yet and will address some great points raised in the comments.

In this part, I want to point out some cases in which Abstract Base Classes fail in versioning. In my last post, I mentioned you could simply add new methods to an Abstract Base Class and not break clients. Well that’s true, it’s possible, but I didn’t emphasize that this is not true for all cases and can be risky. I was saving that for another post (aka this one).

I had been thinking about this particular scenario a while ago, but it was recently solidified in talking to a coworker today (thanks Mike!). Let’s look at the scenario. Suppose there is an abstract base class in a framework named FrameworkContextBase. The framework also provides a concrete implementation.

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
}

Somewhere else in another class in the framework, there is a method that takes in an instance of the base class and calls the method on it for whatever reason.

public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
}

With me so far? Good. Now imagine that you, as a consumer of the framework, write a concrete implementation of FrameworkContextBase. In the next release, the framework adds a method to FrameworkContextBase like so…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();

  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }
}

And the Accept method is updated like so…

public void Accept(FrameworkContextBase arg)

Seems innocuous enough. You might even be lulled into the false sense that all is well in the world and decide to go ahead and upgrade the version of the Framework hosting your application without recompiling. Unfortunately, somewhere in your application, you pass your old implementation of the ABC to the new Accept method. Uh oh! Runtime exception!

The fix sounds easy in theory: when adding a new method to the ABC, the framework developer needs to make sure it has a reasonable default implementation. In my contrived example, the default implementation throws an exception. This seems easy enough to fix. But how can you be sure the implementation is reasonable for all possible implementations of your ABC? You can’t.

This is often why you see guidelines for .NET which suggest making all methods non-virtual unless you absolutely need to. The idea is that the Framework should provide checks before and after to make sure certain invariants are not broken when calling a virtual method since we have no idea what that method will do.

As you might guess, I tend to take the approach of buyer beware. Rather than putting the weight on the Framework to make sure that virtual methods don’t do anything weird, I’d rather put the weight on the developer overriding the virtual method. At least that’s the approach we’re taking with ASP.NET MVC.

Another possible fix is to also add an associated Supports{Method} property when you add a method to an ABC. All code that calls that new method would have to check the property. For example…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }
  public virtual bool SupportsMethodTwo { get { return false; } }
}

//Some other class in the same framework
public void Accept(FrameworkContextBase arg)
{
  if (arg.SupportsMethodTwo)
  {
    arg.MethodTwo();
  }
  // Otherwise... do what, exactly?
}

But it may not be clear to you, the framework developer, what the calling code should do when the instance doesn’t support MethodTwo. There isn’t always an obvious or straightforward fallback.

This post seems to contradict my last post a bit, but I don’t see it that way. As I’ve stated all along, there is no perfect design; we are simply trying to optimize for constraints. Not only that, versioning is a hard problem. I am not fully convinced we have made all the right optimizations (so to speak), hence I am writing this series.

Coming up: More on versioning interfaces with real code examples and tradeoffs. More on why breaking changes suck. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Interfaces,Abstract Base Classes,Framework

If you’ve noticed that my blogging frequency has declined, you can guess I’ve been quite busy here at Microsoft preparing for the next release of ASP.NET MVC.

It’s not just the specs and design meetings that keep me busy. It’s preparing for several talks, various spec reviews, building hands-on labs, demo and app building, and so on. All the while I am still learning the ropes and dealing with selling a house in L.A. and buying a house up here. There’s a lot that goes into being a PM that I naively didn’t expect, on top of just how much work goes into simply moving.

Not that I’m complaining. It’s still a lot of fun. ScottGu posted an entry on his blog about the fun we’re having in preparing for the upcoming ASP.NET MVC Mix Preview.

Here’s a screenshot that shows a tooling feature I’m particularly excited about.

Unit Testing

I’ve written about the challenges Microsoft faces with bundling Open Source software in the past and what I thought they should do about it…

What I would have liked to have seen is for Team System to provide extensibility points which make it extremely easy to swap out MS Test for another testing framework. MS Test isn’t the money maker for Microsoft, it’s the whole integrated suite that brings in the moolah, so being able to replace it doesn’t hurt the bottom line.

What we have here is similar in spirit to what I hoped for. AFAIK it is not going to be integrated to the level that Visual Studio Team Test is integrated, but it also won’t require VSTS. This is a feature of the ASP.NET MVC project template.

I’m also excited about some of the community feedback we were able to incorporate such as removing the ControllerActionAttribute among other things. A hat or two might be eaten over that one ;).

In any case, there are still some areas I’m not yet happy with (there always will be, won’t there?), so we are not done by any measure. I’ll reserve talking about that until after Mix when you have the code in your hands and can follow along.

Technorati Tags: ASP.NET,aspnetmvc

UPDATE: I improved this based on some feedback in my comments.

With ASP.NET MVC it is possible for someone to try and navigate directly to a .aspx view. In general, this only leaves them with an ugly error message as Views typically need ViewData in order to work.

However, one approach that I think will easily work is to create a Web.config file in the root of your Views directory that contains the following.

We need to do more testing on our side to make sure there’s no pathological case in doing this, but so far in my personal testing, it seems to work.

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <remove verb="*" path="*.aspx"/>
      <add path="*.aspx" verb="*"
        type="System.Web.HttpNotFoundHandler"/>
    </httpHandlers>
  </system.web>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <handlers>
      <remove name="PageHandlerFactory-ISAPI-2.0"/>
      <remove name="PageHandlerFactory-ISAPI-1.1"/>
      <remove name="PageHandlerFactory-Integrated"/>
      <add name="BlockViewHandler" path="*.aspx" verb="*"
        type="System.Web.HttpNotFoundHandler"/>
    </handlers>
  </system.webServer>
</configuration>

Let me know if you run into problems with this.

Technorati Tags: ASP.NET MVC,Tips,ASP.NET

If you know me, you know I go through great pains to write automated unit tests for my code. Some might even call me anal about it. Those people know me too well.

For example, in the active branch of Subtext, we have 882 unit tests, of which I estimate I wrote around 800. Yep, if you’re browsing the Subtext unit test code and something smells bad, chances are I probably dealt it.

Unfortunately, by most definitions of Unit Test, most of these tests are really integration tests. Partly because I was testing legacy code and partly because I was blocked by the framework, not every method under test could be easily tested in isolation.

Whether you’re a TDD fan or not, I think most of us can agree that unit testing your own code, whether manually or automated, is a necessary practice.

I still think it’s worthwhile to take that one step further and automate your unit tests whenever possible and where it makes most sense (for large values of make sense).

When writing an automated unit test, the key is to try and isolate the unit under test (typically a method) by both controlling the external dependencies for the method and being able to capture any side-effects of the method.
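A minimal sketch of what I mean (the types here are invented for illustration): the external dependency hides behind an interface, and the test substitutes a fake that records the side-effect so the test can inspect it.

```csharp
using System.Collections.Generic;

// The external dependency, abstracted behind an interface the test
// can control.
public interface INotifier
{
    void Notify(string message);
}

public class OrderProcessor
{
    private readonly INotifier notifier;

    public OrderProcessor(INotifier notifier)
    {
        this.notifier = notifier;
    }

    // The unit under test: its only side-effect goes through INotifier.
    public void Process(string orderId)
    {
        // ...real processing would happen here...
        notifier.Notify("Processed " + orderId);
    }
}

// A hand-rolled fake that captures side-effects for the test to verify.
public class FakeNotifier : INotifier
{
    public List<string> Messages = new List<string>();

    public void Notify(string message)
    {
        Messages.Add(message);
    }
}
```

The test constructs OrderProcessor with a FakeNotifier, calls Process, and asserts against the recorded Messages, with no external system in sight.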

Sometimes though, the external code your code calls into is written in such a way that it makes your own code challenging to test. Often, you have to resort to building all sorts of Bridge or Adapter classes to abstract away the thing you’re calling. Sometimes you are plain stuck.

What “external code” might exhibit this characteristic of making your own code hard to test? I don’t have anything in particular in mind but…oh…off the top of my head, if you made me pick one totally spontaneously, I might mention by way of example one little piece of code called the .NET Framework.

For the most part, I’ve had very few problems with the Base Class Libraries or other parts of the Framework. Most of my testing woes came when writing code against ASP.NET. The ASP.NET MVC framework hopes to help address some of that.

I’ve been in a lot of internal discussions recently talking with various people and teams about testable code. In order to contribute more value to these discussions, I am trying to gather specific cases and scenarios where testing your code is really painful.

What I am not looking for is feedback such as

“It’s hard to write unit tests when writing a Foo application/control/part.”


“Class Queezle should really make method Bloozle public.”

Perhaps Bloozle should be public, but I am not interested in theoretical pains. If the inaccessibility of Bloozle caused a real problem in a real unit test, that’s what I want to hear.

What I am looking for is specifics! Concrete scenarios that are blocked or extremely painful. Including the actual unit test is even better! For example…

When writing ASP.NET code, I want to have helper methods that accept an HttpContext instance and get some values from its various properties and then perform a calculation. Because HttpContext is sealed and is tightly coupled to the ASP.NET stack, I can’t mock it out or replace it with a test specific subclass. This means I always have to write some sort of bridge or adapter that wraps the context which gets tedious.
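The kind of bridge I end up writing looks roughly like this (IHttpContext, FakeHttpContext, and UrlHelper are hypothetical names I’m making up for the sketch, not framework types). In the real app, an adapter class would wrap the sealed System.Web.HttpContext and forward RequestPath to context.Request.Path; that adapter is omitted here so the sketch stands alone.

```csharp
// A hand-rolled abstraction over the parts of HttpContext the helper
// actually uses.
public interface IHttpContext
{
    string RequestPath { get; }
}

// The helper now depends on the interface, so a test can pass a fake
// instead of spinning up the whole ASP.NET pipeline.
public static class UrlHelper
{
    public static bool IsAdminRequest(IHttpContext context)
    {
        return context.RequestPath.StartsWith("/admin");
    }
}

// A test-specific fake; no web server required.
public class FakeHttpContext : IHttpContext
{
    private readonly string path;

    public FakeHttpContext(string path)
    {
        this.path = path;
    }

    public string RequestPath
    {
        get { return path; }
    }
}
```

It works, but writing one of these for every helper is exactly the tedium the quote above complains about.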

Only, you can’t use that one, because I already wrote it. Ideally, I’d love to hear feedback from across the board, not just ASP.NET. Got issues with WPF, WCF, Sharepoint Web Parts, etc… tell us about it. Please post them in my comments or on your blog and link to this post. Your input is very valuable and could help shape the future of the Framework, or at least help me to sound like I am clued into customer needs next time I talk to someone internal about this. ;)

Technorati Tags: TDD,Unit Testing


At the risk of embarrassing myself and eliciting more groans, I am going to share a joke I made up for my Code Camp talk. I was inspired to come up with some humor based on Jeff’s Sesame Street Presentation Rule post. I fully understand his post was addressing something deeper than simply telling a gratuitous joke in the presentation.

The context is that I just finished explaining the various dependencies between the Model, View, and Controller within the MVC pattern.

UPDATE: Updating this joke as the feedback comes in. The original is at the bottom.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The View has a few more drinks than the others and turns to the Model and says,

“Can you give me a ride home tonight? I know I always burden you with this, but I can never depend on that guy.”

Ba dum dum.

I am refining it a bit and posting it here in the hopes that I can improve upon it with your feedback. :)

Here’s the original as a point of comparison.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The Model has a few more drinks than the others and turns to the Controller and says,

“Can you give me a ride home tonight? I know I always burden you with this and never ask him, but I can never depend on that guy.”

Thanks for the feedback!

Technorati Tags: Humor,ASP.NET MVC

The ASP.NET and Silverlight teams are hiring! Brad Abrams (who happens to be my “grand-boss,” as in my boss’s boss) posted a developers-wanted ad on his blog:

  • Are you a JavaScript guru who has a passion to make Ajax easier for the masses?
  • Are you the resident expert in ASP.NET and consistently think about how data driven applications could be easier to build?
  • Are you a TDD or patterns wonk that sees how ASP.NET MVC is a game-changer?
  • Are you excited about the potential of Silverlight for business applications, and particularly the potential for the ASP.NET+Silverlight combination where we have the great .NET programming model on the server and client?

Some of you out there have provided great insight into how ASP.NET MVC could be better (and then some of you just point out its warts).  Either way, here’s a chance to have an even bigger influence on its design and code. I promise I’m easy to work with.

These are senior developer positions…

My team has several openings for senior, experienced developers with a passion for platform\framework technology. If you have 5+ years of software development industry experience, know how to work effectively in teams, have great software design abilities, and are an excellent developer, then we just might have the spot for you!

Oh, and if you do get hired and join my feature team, I like my coffee with milk and sugar. Thank you very much. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Microsoft,Jobs


The Seattle Code Camp (which, despite the misleading photo, isn’t a camping trip) is now over and I have nothing but good things to say about it. I didn’t get a chance to see a lot of talks but did enjoy the talk by Jim Newkirk and Brad Wilson. I’m a fan of their approach to providing extensibility, and this session provided all the impetus I needed to really give it a try rather than simply talking about trying it. :)

As for my own talk, I had the great pleasure of showing up late to my talk. To this moment, I still don’t know why my alarm didn’t go off. All indications are that it was set properly.

My wife woke me up at 9:00 AM asking, “Don’t you have a talk to give today?” I sure did, at 9:15 AM. I drove like a madman from Mercer Island (sorry to all the people I cut off) to Redmond and ended up being around 10 minutes late I think.

Fortunately the attendees were patiently waiting and despite my frazzled approach, I think the presentation went fairly well. Perhaps Chris Tavares can tell me in private how it really went. ;) Thanks to everyone who attended and refrained from heckling me. I appreciate the restraint and thoughtfulness.

By the way, Chris is part of the P&P group, but he’s also a member of the ASP.NET MVC feature team. I think his role is a P&P liaison or something like that. Mainly he’s there to give great advice and help us out. So definitely give his blog a look for some great software wisdom.

As this was my first Code Camp, I am definitely looking forward to the next one.

Technorati Tags: Code Camp,ASP.NET MVC


Well today I turn old. 33 years old to be exact. A day after Hanselman turns really old. It seems I’m always following in that guy’s footsteps doesn’t it?

Oh look, Hanselman gets a blue badge so Phil has to go and get a blue badge. Hanselman has a birthday and Phil has to go and have a birthday on the next day. What a pathetic follower!

What’s interesting is that Rob Conery’s birthday was just a few days ago and we’re all recent hires and involved in MVC in some way. Ok, maybe that isn’t all that interesting after all as it is statistically insignificant, but I thought I’d mention it anyways.

Clustering is a natural occurrence in random sequences. In fact, when looking at a sequence in which there is no clustering, you’d pretty much know it wasn’t random. That’s often what happens when a human tries to make a sequence look random. The person will tend to spread things evenly out. But randomness typically forms clusters. Not only that, but people have a natural tendency to find patterns in sequences and phenomena that don’t exist. But I digress.

Code Camp Seattle

If you’re in the Seattle area this weekend, I’ll be speaking at the Seattle Code Camp. There are several speakers covering various aspects of ASP.NET MVC so I thought I would try not to overlap and maybe cover a bit of an introduction, a few techniques, and some Q & A.

In other words, I’m woefully unprepared because I’ve been so slammed trying to get a plan finalized for MVC that we can talk about. But I’m enlisting Rob’s help to put together some hopefully worthwhile demos. All I really need to do is get a designer involved to make something shiny. ;)

In any case, it’s 2 AM, I need to retire for the evening. I leave you with this photo of my son after he got ahold of a pen while my wife wasn’t looking. It was a brief moment, but that’s all he needed to change his look.

Cody Pen


In a recent post, Frans Bouma laments the lack of computer science or reference to research papers by agile pundits in various mailing lists (I bet this really applies to nearly all mailing lists, not just the ones he mentions).

What surprised me to no end was the total lack of any reference/debate about computer science research, papers etc. … in the form of CS research. Almost all the debates focused on tools and their direct techniques, not the computer science behind them. In general asking ’Why’ wasn’t answered with: “research has shown that…” but with replies which were pointing at techniques, tools and patterns, not the reasoning behind these tools, techniques and patterns.

As a fan of the scientific method, I understand the difference between a hypothesis and a theory/law. Experience and anecdotal evidence do not a theory make; as anyone who’s participated in a memory experiment learns, memory itself is easily manipulated. At the same time though, if a hypothesis works for you every time you’ve tried it, you start to have confidence that the hypothesis holds weight.

So while the lack of research was a slight itch sitting there in the back of my mind, I’ve never been overly concerned because I’ve always felt that the efficacy of TDD would hold weight when put to the test (get it?). It is simply a young hypothesis and it was only a matter of time before experimentation would add evidence to bolster the many claims I’ve made on my blog.

I’ve been having a lot of interesting discussions around TDD internally here lately and wanted to brush up on the key points when I happened upon a paper published in IEEE Transactions on Software Engineering entitled On the Effectiveness of Test-first Approach to Programming.

The researchers put together an experiment in which an experiment group and a control group implemented a set of features in an incremental fashion. The experiment group wrote tests first, the control group applied a more conventional approach of writing the tests after the fact. As a result, they found…

We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed.

The interesting result here is that quality is more a factor of the number of tests you write than of whether you write them first. TDD just happens to be a technique in which the natural inclination (the path of least resistance) is to write more tests rather than fewer in the same time span. The lesson here is that even if you don’t buy TDD, you should still be writing automated unit tests for your code.

Obviously such a controlled experiment on undergraduate students leaves one wondering if the conclusions drawn can really be applied to the workplace. The researchers do address this question of validity…

The external validity of the results could be limited since the subjects were students. Runeson [21] compared freshmen, graduate, and professional developers and concluded that similar improvement trends persisted among the three groups. Replicated experiments by Porter and Votta [22] and Höst et al. [23] suggest that students may provide an adequate model of the professional population.

Other evidence they refer to even suggests that in the case of advanced techniques, the benefit professional developers gain outweighs that of students, which could further bolster the evidence they present.

My favorite part of the paper is the section in which they offer their explanations for why they believe that Test-First programming might offer productivity benefits. I won’t dock them for using the word synergistic.

We believe that the observed productivity advantage of Test-First subjects is due to a number of synergistic effects:

  • Better task understanding. Writing a test before implementing the underlying functionality requires the programmer to express the functionality unambiguously.
  • Better task focus. Test-First advances one test case at a time. A single test case has a limited scope. Thus, the programmer is engaged in a decomposition process in which larger pieces of functionality are broken down to smaller, more manageable chunks. While developing the functionality for a single test, the cognitive load of the programmer is lower.
  • Faster learning. Less productive and coarser decomposition strategies are quickly abandoned in favor of more productive, finer ones.
  • Lower rework effort. Since the scope of a single test is limited, when the test fails, rework is easier. When rework immediately follows a short burst of testing and implementation activity, the problem context is still fresh in the programmer’s mind. With a high number of focused tests, when a test fails, the root cause is more easily pinpointed. In addition, more frequent regression testing shortens the feedback cycle. When new functionality interferes with old functionality, this situation is revealed faster. Small problems are detected before they become serious and costly.

Test-First also tends to increase the variation in productivity. This effect is attributed to the relative difficulty of the technique, which is supported by the subjects’ responses to the post-questionnaire and by the observation that higher skill subjects were able to achieve more significant productivity benefits.

So while I don’t expect that those who are resistant to or disparaging of TDD will suddenly change their minds, I am encouraged by this result as it jibes with my own experience. The authors do cite several other studies (future reading material, woohoo!) that for the most part support the benefits of TDD.

So while I’m personally convinced of the benefits of TDD and feel the evidence thus far supports that, I do agree that the evidence is not yet overwhelming. More research is required.

I prefer to take a provisional approach to theories, ready to change my mind if the evidence supports it. Though in this case, I find TDD a rather fun and enjoyable method for writing code. There would have to be a massive amount of solid evidence that TDD is a colossal waste of time for me to drop it.

Technorati Tags: TDD,Unit Tests,Research


After joining Microsoft and drinking from the firehose a bit, I’m happy to report that I am still alive and well and now residing in the Seattle area along with my family. In meeting with various groups, I’ve been very excited by how much various teams here are embracing Test Driven Development (and its close cousin, TAD aka Test After Development). We’ve had great discussions in which we really looked at a design from a TDD perspective and discussed ways to make it more testable. Teams are also starting to really apply TDD in their development process as a team effort, and not just sporadic individuals.

TDD doesn’t work well in a vacuum. I mean, it can work to adopt TDD as an individual developer, but the effectiveness of TDD is lessened if you don’t consider the organizational changes that should occur in tandem when adopting TDD.

One obvious example is the fact that TDD works much better when everyone on your team applies it. If only one developer applies TDD, that developer loses the regression-test benefit of a unit test suite when making changes that might affect code written by a coworker who doesn’t have code coverage via unit tests.

Another example that was less obvious to me until now was how the role of a dedicated QA (Quality Assurance) team changes subtly when part of a team that does TDD. This wasn’t obvious to me because I’ve rarely been on a team with such talented, technical, and dedicated QA team members.

QA teams at Microsoft invest heavily in test automation from the UI level down to the code level. Typically, the QA team goes through an exploratory testing phase in which they try to break the app and suss out bugs. This is pretty common across the board.

The exploratory testing provides data for the QA team to use in determining what automation tests are necessary and provide the most bang for the buck. Engineering is all about balancing constraints so we can’t write every possible automated test. We want to prioritize them. So if exploratory testing reveals a bug, that will be a high priority area for test automation to help prevent regressions.

However, this appears to have some overlap with TDD. Normally, when someone reports a bug to me, the test-driven developer, I will write a unit test (or tests) that should pass but fails because of that bug. I then fix the bug and ensure the test now passes. This prevents regressions.
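To illustrate with a made-up bug report (TitleHelper and its test are hypothetical, not real Subtext code): say someone reports that post titles with spaces produce broken URLs. The test comes first and fails while the bug exists; the fix makes it pass and guards against regressions from then on.

```csharp
using System;

// Step one: a unit test that pins the reported bug down. (Shown as a
// plain assertion rather than an NUnit [Test] method so the sketch
// stands alone.)
public static class TitleHelperTests
{
    public static void GetSafeTitle_LowercasesAndReplacesSpaces()
    {
        string actual = TitleHelper.GetSafeTitle("My First Post");
        if (actual != "my-first-post")
            throw new Exception("Expected my-first-post but got " + actual);
    }
}

// Step two: the fix that makes the test pass.
public static class TitleHelper
{
    public static string GetSafeTitle(string title)
    {
        // Trim, lowercase, and replace spaces with dashes to make a
        // URL-friendly slug.
        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}
```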

In order to reduce duplication of efforts, we’re looking at ways of reducing this overlap and adjusting the QA role slightly in light of TDD to maximize the effectiveness of QA working in tandem with TDD.

In talking with our QA team, here’s what we came up with that seems to be a good guideline for QA in a TDD environment:

  • QA is responsible for exploratory testing along with non-unit tests (system testing, UI testing, integration testing, etc…). TDD often focuses on the task at hand so it may be easy for a developer to miss obscure test cases.
  • Upon finding bugs in exploratory testing, assuming the test case doesn’t require external systems (aka it isn’t a system or integration test), the developer would be responsible for test automation via writing unit tests that expose the bug. Once the test is in place, the developer fixes the bug.
  • In the case where the test requires integrating with external resources and is thus outside the domain of a unit test, the QA team is responsible for test automation.
  • QA takes the unit tests and runs them on all platforms.
  • QA ensures API integrity by testing for API changes.
  • QA might also review unit tests which could provide ideas for where to focus exploratory testing.
  • Much of this is flexible and can be negotiated between Dev and QA as give and take based on project needs.

Again, it is a mistake to assume that TDD should be done in a vacuum or that it negates the need for QA. TDD is only one small part of the testing story, and if you don’t have testers, shame on you. ;) One anecdote a QA tester shared with me was a case where the developer had nice test coverage of exception cases. In those cases, the code threw a nice informative exception. Unfortunately, a component higher up the stack swallowed the exception, resulting in a very unintuitive error message for the end user. This might rightly be considered an integration test, but the point is that relying solely on the unit tests allowed an error to go unnoticed.

TDD creates a very integrated and cooperative relationship between developers and QA staff. It puts the two on the same team, rather than fostering the usual adversarial relationship.

Technorati Tags: TDD,Microsoft


File this in my learn something new every day bucket. I received an email from Steve Maine after he read a blog post in which I discuss the anonymous object as dictionary trick that Eilon came up with.

He mentioned that there is an object initializer syntax for collections and dictionaries.

var foo = new Dictionary<string, string>()
{
  { "key1", "value1" },
  { "key2", "value2" }
};

That’s pretty nice!

Here is a post by Mads Torgersen about collections and collection initializers.
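Since the initializer just compiles down to calls to Add, the same syntax works for other collections too, not just dictionaries. A couple of quick examples:

```csharp
using System.Collections.Generic;

// Collection initializers translate to Add calls, so any type that
// implements IEnumerable and exposes a suitable Add method qualifies.
var primes = new List<int> { 2, 3, 5, 7 };

// The dictionary form is just the two-argument overload of Add.
var ages = new Dictionary<string, int>
{
    { "Cody", 0 },
    { "Phil", 33 }
};
```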

Naturally someone will mention that the Ruby syntax is even cleaner (I know, because I was about to say that).

However, suppose C# introduced such syntax:

var foo = {"key1" => "value1", 2 => 3};

What should be the default type for the key and values? Should this always produce Dictionary<object, object>? Or should it attempt type inference for the best match? Or should there be a funky new syntax for specifying types?

var foo = <string, object> {"key1" => "value1", "2" => 3};

My vote would be for type inference. If the inferred type is not the one you want, then you have to resort to the full declaration.

Then again, the initializer syntax that does exist is much better than the old way of doing it, so I’m happy with it for now. Working in a statically typed language, I don’t expect all the idioms of dynamic languages to translate over in as terse a form.

Check out this use of lambda expressions by Alex Henderson to create a hash that is very similar in style to a Ruby hash. Thanks to Sergio for pointing that out in the comments.
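The gist of the lambda trick, as a sketch of my own (HashHelper is my name for it, not Alex’s actual API): each lambda’s parameter name does double duty as the dictionary key.

```csharp
using System;
using System.Collections.Generic;

public static class HashHelper
{
    // Hash(eyes => "blue") yields { "eyes": "blue" }: the compiler
    // preserves the lambda's parameter name, which we recover via
    // reflection on the compiled delegate.
    public static Dictionary<string, object> Hash(
        params Func<object, object>[] entries)
    {
        var items = new Dictionary<string, object>();
        foreach (var entry in entries)
        {
            string key = entry.Method.GetParameters()[0].Name;
            items[key] = entry(null); // the lambda ignores its argument
        }
        return items;
    }
}
```

Usage looks like `var hash = HashHelper.Hash(eyes => "blue", age => 32);`, which is about as close to a Ruby hash literal as C# 3.0 gets.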

Technorati Tags: C#,Object Initializers,Collection Initializers



No, the title of this post isn’t suggesting that L.A. is about to be demolished to make way for a hyperspatial express route. It’s pretty much one big freeway already, isn’t it? ;)

Seriously though, I am going to miss this wonderful city after spending the last fourteen years here. A lot of people look at L.A. as a place they could never live. I thought that myself coming from Alaska, but boy was I wrong.

I think the weather alone is reason enough to move to L.A. The only place with better weather might be San Diego. But along with great weather during the day, L.A. has a great nightlife.

Being an outdoorsy type, I’ve always been fortunate that a good camping trip is just a short drive away.


Not to mention great skiing and snowboarding.


I even took part in my first protest march in Los Angeles, something that pretty much never happened in Alaska.


One of the things I’ll miss the most is the great soccer game I’ve been a part of for 10 years there. This particular regular pick-up game has been going on for around 15 to 20 years in one form or another.


Today the movers came and moved all of our stuff out and we left the keys behind in our now empty house. Tomorrow morning, we will be on a plane for Seattle.

The one thing that has made this move a lot easier for us is that Microsoft took care of just about everything. Let me tell you, if you’re thinking about joining Microsoft, we might not have free food like some other companies, but we do have a great healthcare plan, and the relocation benefits (at least for me) are astounding. They made this transition far less of a nightmare than it could’ve been, though it has still been tough with all three of us catching colds around the same time.

Stay classy L.A., I will miss you. Seattle, watch your back. We’ll be there tomorrow.

Technorati Tags: Moving,Seattle,L.A.


Six months ago, and six days after the birth of my son, Subkismet was born, which I introduced as the cure for comment spam. The point of the project is to be a useful class library containing multiple spam-fighting classes that can be easily integrated into a blog platform or any website that allows users to comment.

One of my goals with the project was to make it safe to enable trackbacks/pingbacks again.

I’ve been very happy that others have joined in the Subkismet efforts. Mads Kristensen, well known for BlogEngine.NET, contributed some code.

Keyvan Nayyeri has also been hard at work putting the finishing touches on an implementation of a client for the Google Safe Browsing API service.

Keyvan just announced the release of Subkismet 1.0. Here is a list of what is included in this release. I’ve bolded the items that are new since the first release of Subkismet.

For the current version (1.0) we’ve included following tools in Subkismet core library:

  • Akismet service client: Akismet is an online web service that helps to block spam. This service client implementation for .NET is done by Phil and lets you send your comments to the service to check them against a huge database of spammers.
  • Invisible CAPTCHA control: A clever CAPTCHA control that hides from real humans but appears to spammers. This control is done by Phil and you can read more about it here.
  • CAPTCHA control: An excellent implementation of traditional image CAPTCHA control for ASP.NET applications.
  • Honeypot CAPTCHA: Another clever CAPTCHA control that hides form fields from spam bots to stop them. This control is done by Phil and you can read more about it here.
  • Trackback HTTP Module: An ASP.NET Http Module that blocks trackback spam. This module is done by Mads Kristensen and he has explained his method in a blog post and this Word document.
  • Google Safe Browsing API service client: This is done by me and you saw its Beta 2 version last week. Google Safe Browsing helps you check URLs to see if they’re phishing or malware links.

Since it is always fun for us developers to write code with the latest technologies, we’re planning to have the next release target .NET 3.5.

Part of the reason for that is that over the holidays, I wrote a little something something just for fun as a way of getting up to speed with C# 3.0. It was a lot of fun and I think my secret project will make a nice addition to Subkismet. I’ll write about it when it is ready for action.

Unfortunately, this move has made it difficult to fully complete the implementation and test it out on real data.

Subkismet is hosted on CodePlex. Many thanks go out to Mads and others for their contributions and to Keyvan for all the implementation work as well as release work he did. And for the record, Keyvan mentioned that I ordered him to prepare the release. While I do have dictatorial tendencies, I think this is too harsh a description. I’d like to say I requested that he prepare the release. ;)

Technorati Tags: Subkismet, Spam

personal

You’ve been forewarned, this is yet another end-of-year slightly self-inflating retrospective blog post (complete with the cheesy meta-blogging introduction).

But this year for me was quite significant and full of big changes. On a personal level, this was the year the agent of my little world domination plan was hatched.

Newborn Cody

He was tiny then, but six months later, he’s gotten quite big. He’s remained in the 95th percentile in height for a baby his age.

Cody in a shark

As much fun as it has been with Cody, he’s never taken his devious eye off the prize in keeping with the master plan.


He is not going to be a kid one wants to mess with. I kid you not, he laughs out loud with glee when I whack him with a pillow or pound on his belly with my fists. That’s his idea of fun. As I said originally, he’s a li’l thug.

In terms of my professional life, it’s been quite a whirlwind year. All in the same year, I:

Throughout this whole ordeal, my wonderful wife has been most gracious and supportive. Consider the fact that when she was pregnant with Cody, I was working as a co-owner at a startup, often not taking a paycheck so we could pay our employees. That has to be stressful.

Then after one job change, I tell her about this great opportunity and ask her if it is ok to move to Redmond. Her response was immediate: “Let’s go!” What a woman!

Unfortunately, once we made that decision, in a wonderful display of synchronicity, the Los Angeles housing market started a downward spiral. We had zero offers for the first two months of trying to sell our house. It was so bad, in fact, that we decided we would try to rent the house out. On the day we had an open house for renters, an offer came just in the nick of time!

So while we close out another year, we’re also closing out the Los Angeles era for my family. We really love this town. For one thing, we have some great friends who live here. We’re trying to recruit them up to Seattle, but they keep mumbling something about the rain and cold as if it were a bad thing.

Something else we’ll miss is the great diverse food here. Many regard the Korean food in L.A. as the best in the world. And the Japanese market and restaurants on Sawtelle have been a staple for us.

I’ll particularly miss my pickup soccer crew as well as the league of ex-pros. Lots to love about L.A., but at the same time, we’re quite excited about our new adventures and opportunities up in the Seattle area. For example, the opportunity to breathe clean air!

To all who have read this far, I hope 2007 was a good year for you and that 2008 proves to be even better. Happy New Year!

code

Recently, Maxfield Pool from CodeSqueeze sent me an email about a new monthly feature he calls Developer Faceoff, in which he pits two developers side-by-side for a showdown.

It’s an obvious attempt to gain readers via an appeal to the vanity of the featured bloggers, who have no choice but to link to it. Brilliant! :)

Seriously, I think it’s a fun and creative idea, but I have no credibility in saying so because I’m obviously biased, being featured alongside my longtime nemesis and friend, Scott Hanselman. So check it out for yourself.

Some of my answers were truncated due to the format so I thought it’d be fun to elaborate on a couple of questions.

5. Are software developers - engineers or artists?

Don’t take this as a copout, but a little of both. I see it more as craftsmanship. Engineering relies on a lot of science. Much of it is demonstrably empirical and constrained by the laws of physics. Software is constrained less by physics than by the limits of the mind.

Software in many respects is as much a social activity as it is an engineering discipline. Working well as a team is essential, as is understanding your users and how they get their work done so your software helps, rather than hinders.

Much of software is based on faith and anecdotal evidence. For example, do we have scientific evidence that TDD improves the design of code? Do we have empirical evidence that it doesn’t? Research is scant, in part because it’s extremely challenging to set up valid experiments. Much of software research focuses on retrospective research, the sort that sociologists do.

So again, back to the question. Crafting a sorting algorithm is engineering. Building a line of business app that delights all the users is an art.

8. What is the biggest mistake you made along the way?

My first year as a software developer, I deployed a reset password feature to a large music community site. A week or so after we deployed, the child of a VP at the client (or maybe it was an investor, I can’t remember) complained that she couldn’t get into her account and hadn’t received her new password.

I was “sure” I had tested it, but it turned out I hadn’t done a very good job of it. There was a bug in my code and a bunch of users were waiting around for a regenerated password that would never arrive.

Needless to say, my boss wasn’t very happy about it, and for a good while I trod lightly, worried I’d lose my job if I made another mistake. In retrospect, it was a good thing to get such a big production mistake out of the way early, because I was extremely careful afterwards, always triple-checking my work.

Tags: Software Development