asp.net, code

Eilon Lipton recently wrote a bit about context objects in ASP.NET MVC and in an “Oh by the way” moment, tossed out the fact that we changed the IHttpContext interface to the HttpContextBase abstract base class (ABC for short).

Not long after, this spurred debate among the Twitterati. Why did you choose an Abstract Base Class in this case? The full detailed answer would probably break my keyboard in length, so I thought I would try to address it in a series of posts.

In the end, I hope to convince the critics that the real point of contention is about maintaining backwards compatibility, not about choosing an abstract base class in this one instance.

Our Constraints

All engineering problems are about optimizing for constraints. As I’ve written before, there is no perfect design, partly because we’re all optimizing for different constraints. The constraints you have in your job are probably different than the constraints that I have in my job.

For better or worse, these are the constraints my team is dealing with in the long run. You may disagree with these constraints, so be it. We can have that discussion later. I only ask that for the time being, you evaluate this discussion in light of these constraints. In logical terms, these are the premises on which my argument rests.

  • Avoid Breaking Changes at all costs
  • Allow for future changes

Specifically I mean breaking changes in our public API once we RTM. We can make breaking changes while we’re in the CTP/Beta phase.

You Can’t Change An Interface

The first problem we run into is that you cannot change an interface.

Now some might state, “Of course you can change an interface. Watch me! Changing an interface only means some clients will break, but I still changed it.”

The misunderstanding here is that after you ship an assembly with an interface, any changes to that interface result in a new interface. Eric Lippert points this out in this old Joel On Software forum thread:

The key thing to understand regarding “changing interfaces” is that an interface is a _type_.  A type is logically bound to an assembly, and an assembly can have a strong name.

This means that if you correctly version and strong-name your assemblies, there is no “you can’t change this interface” problem.  An interface updated in a new version of an assembly is a _different_ interface from the old one.

(This is of course yet another good reason to get in the habit of strong-naming assemblies.)

Thus trying to make even one tiny change to an interface violates our first constraint. It is a breaking change. You can, however, add a new virtual method to an abstract base class without breaking existing clients of the class. Hey, it’s not pretty, but it works.
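Here’s a quick sketch of what that looks like (the class and member names are made up for illustration; the risks of doing this are covered later in this series):

public abstract class SomeContextBase
{
  //Shipped in v1; existing subclasses already override this.
  public abstract void ExistingMethod();

  //Added in v2 as a virtual method with a default body, so
  //subclasses compiled against v1 keep compiling and loading.
  public virtual void NewMethod()
  {
    throw new NotImplementedException();
  }
}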

Why Not Use An Interface And an Abstract Base Class?

Why not have a corresponding interface for every abstract base class? This assumes that the purpose of the ABC is simply to provide the default implementation of an interface. This isn’t always the case. Sometimes we may want to use an ABC in the same way we use an interface (all methods are abstract…revised versions of the class may add virtual methods which throw a NotImplementedException).

The reason that having a corresponding interface doesn’t necessarily buy us anything in terms of versioning is that you can’t expose the interface. Let me explain with a totally contrived example.

Suppose you have an abstract base class we’ll randomly call HttpContextBase. Let’s also suppose that HttpContextBase implements an IHttpContext interface. Now we want to expose an instance of HttpContextBase via a property of another class, say RequestContext. The question is, what is the type of that property?

Is it…

public IHttpContext HttpContext {get; set;}

? Or is it…

public HttpContextBase HttpContext {get; set;}

If you choose the first option, then we’re back to square one with the versioning issue. If you choose the second option, we don’t gain much by having the interface.

What Is This Versioning Issue You Speak Of?

The versioning issue I speak of relates to clients of the property. Suppose we wish to add a new method or property to IHttpContext. We’ve effectively created a new interface and now all clients need to recompile. Not only that, but any components you might be using that refer to IHttpContext need to be recompiled. This can get ugly.

You could decide to add the new method to the ABC and not change the interface. What this means is that clients of the class need to perform an interface check every time they want to call the new method.

public void SomeMethod(IHttpContext context)
{
  //The new method only exists on the ABC, so check for it first.
  HttpContextBase contextAbs = context as HttpContextBase;
  if(contextAbs != null)
  {
    contextAbs.NewMethod();
  }

  context.Response.Write("Score!");
}

In the second case with the ABC, you can add the method as a virtual method that throws NotImplementedException. You don’t get compile-time checking with this approach when implementing the ABC, but hey, them’s the breaks. Remember, no perfect design.

Adding this method doesn’t break older clients. Newer clients who might need to call this method can recompile and now call this new method if they wish. This is where we get the versioning benefits.

So Why Not Keep Interfaces Small?

It’s not being small that makes an interface resilient to change. What you really want is an interface that is small and cohesive, with very little reason to change. This is probably the best strategy with interfaces and versioning, but even this can run into problems. I’ll address this in more detail in an upcoming post, but for now will provide just one brief argument.

Many times, you want to divide a wide API surface into a group of distinct smaller interfaces. The problem arises when a method needs functionality of several of those interfaces. Now, a change in any one of those interfaces would break the client. In fact, all you’ve really done is spread out one large interface with many reasons to change into many interfaces each with few reasons to change. Overall, it adds up to the same thing in terms of risk of change.
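Here’s a contrived sketch of that situation; both of these small interfaces are hypothetical:

public interface IHttpRequest
{
  string RawUrl { get; }
}

public interface IHttpResponse
{
  void Write(string s);
}

public class RequestLogger
{
  //This client depends on *both* small interfaces, so a breaking
  //change to either one still forces it to recompile. The exposure
  //to change hasn’t shrunk; it’s just been spread out.
  public void LogRequest(IHttpRequest request, IHttpResponse response)
  {
    response.Write("Logged: " + request.RawUrl);
  }
}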

Alternative Approaches

These issues are one of the trade-offs of using statically typed languages. One reason you don’t hear much about this in the Ruby community, for example, is there really aren’t interfaces in Ruby, though some have proposed approaches to provide something similar. Dynamic typing is really great for resilience to versioning.

One thing I’d love to hear more feedback on is why, in .NET land, we are so tied to interfaces. If the general rule of thumb is to keep interfaces small (I’ve even heard some suggest interfaces should only have one method), why aren’t we using delegates more instead of interfaces? That would provide even looser coupling than interfaces.
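As a sketch of what I mean (all the type names here are invented), compare a one-method interface with a delegate-based alternative:

//One-method interface version: callers must write a class.
public interface ICacheInvalidator
{
  void Invalidate(string key);
}

public class CacheManager
{
  //Delegate version: any method (or lambda) with a matching
  //signature will do. There is no type to implement at all.
  public void RemoveStale(string key, Action<string> invalidate)
  {
    invalidate(key);
  }
}

//Usage: new CacheManager().RemoveStale("home", key => Console.WriteLine(key));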

The proposed dynamic keyword and duck-typing features in future versions of C# might provide more resilience. As with dynamically typed languages such as Ruby, the trade-off in these cases is that you forego compile time checking for run-time checking. Personally, I think the evidence is mounting that this may be a worthwhile tradeoff in many cases.

For More On This

The Framework Design Guidelines highlights the issues I covered here well in chapter 4 (starting on page 11). You can read chapter 4 from here. In particular, I found this quote quite interesting as it is based on the experience from other Framework developers.

Over the course of the three versions of the .NET Framework, I have talked about this guideline with quite a few developers on our team. Many of them, including those who initially disagreed with the guideline, have said that they regret having shipped some API as an interface. I have not heard of even one case in which somebody regretted that they shipped a class.

Again, these guidelines are specific to Framework development (for statically typed languages), and not to other types of software development.

What’s Next?


If I’ve done my job well, you by now agree with the conclusions I put forth in this post, given the constraints I laid out. Unless of course there is something I missed, which I would love to hear about.

My gut feeling is that most disagreements will focus on the premise, the constraint, of avoiding breaking changes at all costs. This is where you might find me in some agreement. After all, before I joined Microsoft, I wrote a blog post asking, Is Backward Compatibility Holding Microsoft Back? Now that I am on the inside, I realize the answer requires more nuance than a simple yes or no answer. So I will touch on this topic in an upcoming post.

Other topics I hope to cover:

  • On backwards compatibility and breaking changes.
  • Different criteria for choosing interfaces and abstract base classes.
  • Facts and Fallacies regarding small interfaces.
  • Whatever else crosses my mind.

My last word on this is to keep the feedback coming. It may well turn out that based on experience, HttpContextBase should be an interface while HttpRequest should remain an abstract base class. Who knows?! Frameworks are best extracted from real applications, not simply from guidelines. The guidelines are simply that, a guide based on past experiences. So keep building applications on top of ASP.NET MVC and let us know what needs improvement (and also what you like about it).

Technorati Tags: ASP.NET,Design,Framework

code

This is part 2 in an ongoing series in which I talk about various design and versioning issues as they relate to Abstract Base Classes (ABC), Interfaces, and Framework design. In part 1 I discussed some ways in which ABCs are more resilient to versioning than interfaces. I haven’t covered the full story yet and will address some great points raised in the comments.

In this part, I want to point out some cases in which Abstract Base Classes fail at versioning. In my last post, I mentioned you could simply add new methods to an Abstract Base Class without breaking clients. That’s true, it’s possible, but I didn’t emphasize that it doesn’t hold in all cases and can be risky. I was saving that for another post (aka this one).

I had been thinking about this particular scenario a while ago, but it was solidified in a conversation with a coworker today (thanks Mike!). Let’s look at the scenario. Suppose there is an abstract base class in a framework named FrameworkContextBase. The framework also provides a concrete implementation.

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
}

Somewhere else in another class in the framework, there is a method that takes in an instance of the base class and calls the method on it for whatever reason.

public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
}

With me so far? Good. Now imagine that you, as a consumer of the Framework, write a concrete implementation of FrameworkContextBase. In the next release of the framework, a method is added to FrameworkContextBase like so…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }
}

And the Accept method is updated like so…

public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
  arg.MethodTwo();
}

Seems innocuous enough. You might even be lulled into the false sense that all is well in the world and decide to go ahead and upgrade the version of the Framework hosting your application without recompiling. Unfortunately, somewhere in your application, you pass your old implementation of the ABC to the new Accept method. Uh oh! Runtime exception!

The fix sounds easy in theory: when adding a new method to the ABC, the framework developer needs to make sure it has a reasonable default implementation. In my contrived example, the default implementation throws an exception. This seems easy enough to fix. But how can you be sure the implementation is reasonable for all possible implementations of your ABC? You can’t.

This is often why you see guidelines for .NET which suggest making all methods non-virtual unless you absolutely need to. The idea is that the Framework should provide checks before and after to make sure certain invariants are not broken when calling a virtual method since we have no idea what that method will do.
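That guideline usually shows up as the template method pattern: a non-virtual entry point that enforces invariants around a protected virtual core. A rough sketch (the names here are mine, not framework APIs):

public abstract class GuardedContextBase
{
  //Non-virtual entry point: the framework owns the contract.
  public void Process(string input)
  {
    if (input == null)
      throw new ArgumentNullException("input");

    //We have no idea what an override will do here...
    ProcessCore(input);

    //...so re-verify any invariant the framework relies on
    //after the virtual call returns.
  }

  protected virtual void ProcessCore(string input)
  {
  }
}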

As you might guess, I tend to take the approach of buyer beware. Rather than putting the weight on the Framework to make sure that virtual methods don’t do anything weird, I’d rather put the weight on the developer overriding the virtual method. At least that’s the approach we’re taking with ASP.NET MVC.

Another possible fix is to also add an associated Supports{Method} property when you add a method to an ABC. All code that calls that new method would have to check the property. For example…

public abstract class FrameworkContextBase
{
  public abstract void MethodOne();
  public virtual void MethodTwo()
  {
    throw new NotImplementedException();
  }
  public virtual bool SupportsMethodTwo {get{return false;}}
}

//Some other class in the same framework
public void Accept(FrameworkContextBase arg)
{
  arg.MethodOne();
  if(arg.SupportsMethodTwo)
  {
    arg.MethodTwo();
  }
}

But it may not be clear to you, the framework developer, what to do when the instance doesn’t support MethodTwo. There isn’t always a clear or straightforward fallback.

This post seems to contradict my last post a bit, but I don’t see it that way. As I stated all along, there is no perfect design; we are simply trying to optimize for constraints. I should add that versioning is a hard problem. I am not fully convinced we have made all the right optimizations (so to speak), hence I am writing this series.

Coming up: More on versioning interfaces with real code examples and tradeoffs. More on why breaking changes suck. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Interfaces,Abstract Base Classes,Framework

asp.net, code, asp.net mvc

As you can guess by the way my blogging frequency has declined, I’ve been quite busy here at Microsoft preparing for the next release of ASP.NET MVC.

It’s not just the specs and design meetings that keep me busy. It’s preparing for several talks, various spec reviews, building hands-on labs, demo and app building, etc…all while I’m still learning the ropes and dealing with selling a house in L.A. and buying a house up here. There’s a lot that goes into being a PM I naively didn’t expect, on top of just how much work goes into simply moving.

Not that I’m complaining. It’s still a lot of fun. ScottGu posted an entry on his blog about the fun we’re having in preparing for the upcoming ASP.NET MVC Mix Preview.

Here’s a screenshot that shows a tooling feature I’m particularly excited about.

Unit Testing Frameworks

I’ve written about the challenges Microsoft faces with bundling Open Source software in the past and what I thought they should do about it…

What I would have liked to have seen is for Team System to provide extensibility points which make it extremely easy to swap out MS Test for another testing framework. MS Test isn’t the money maker for Microsoft, it’s the whole integrated suite that brings in the moolah, so being able to replace it doesn’t hurt the bottom line.

What we have here is similar in spirit to what I hoped for. AFAIK it is not going to be integrated to the level that Visual Studio Team Test is integrated, but it also won’t require VSTS. This is a feature of the ASP.NET MVC project template.

I’m also excited about some of the community feedback we were able to incorporate such as removing the ControllerActionAttribute among other things. A hat or two might be eaten over that one ;).

In any case, there are still some areas I’m not yet happy with (there always will be, won’t there?), so we are not done by any measure. I’ll reserve talking about that until after Mix when you have the code in your hands and can follow along.

Technorati Tags: ASP.NET,aspnetmvc

asp.net, code, asp.net mvc

UPDATE: I improved this based on some feedback in my comments.

With ASP.NET MVC it is possible for someone to try and navigate directly to a .aspx view. In general, this only leaves them with an ugly error message as Views typically need ViewData in order to work.

However, one approach that I think will easily work is to create a Web.config file in the root of your Views directory that contains the following.

We need to do more testing on our side to make sure there’s no pathological case in doing this, but so far in my personal testing, it seems to work.

<?xml version="1.0"?>
<configuration>
  <system.web>
    <httpHandlers>
      <remove verb="*" path="*.aspx"/>
      <add path="*.aspx" verb="*" 
          type="System.Web.HttpNotFoundHandler"/>
    </httpHandlers>
  </system.web>

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <handlers>
      <remove name="PageHandlerFactory-ISAPI-2.0"/>
      <remove name="PageHandlerFactory-ISAPI-1.1"/>
      <remove name="PageHandlerFactory-Integrated"/>
      <add name="BlockViewHandler" path="*.aspx" verb="*" 
        preCondition="integratedMode" 
        type="System.Web.HttpNotFoundHandler"/>
    </handlers>
  </system.webServer>
</configuration>

Let me know if you run into problems with this.

Technorati Tags: ASP.NET MVC,Tips,ASP.NET

asp.net, code

If you know me, you know I go through great pains to write automated unit tests for my code. Some might even call me anal about it. Those people know me too well.

For example, in the active branch of Subtext, we have 882 unit tests, of which I estimate I wrote around 800 of those. Yep, if you’re browsing the Subtext unit test code and something smells bad, chances are I probably dealt it.

Unfortunately, by most definitions of Unit Test, most of these tests are really integration tests. Partly because I was testing legacy code and partly because I was blocked by the framework, not every method under test could be easily tested in isolation.

Whether you’re a TDD fan or not, I think most of us can agree that testing your own code is a necessary practice, whether manual or automated.

I still think it’s worthwhile to take that one step further and automate your unit tests whenever possible and where it makes most sense (for large values of make sense).

When writing an automated unit test, the key is to try and isolate the unit under test (typically a method) by both controlling the external dependencies for the method and being able to capture any side-effects of the method.

Sometimes, though, external code that your code calls into is written in such a way that it makes your own code challenging to test. Often, you have to resort to building all sorts of Bridge or Adapter classes to abstract away the thing you’re calling. Sometimes you are plain stuck.

What “external code” might exhibit this characteristic of making your own code hard to test? I don’t have anything in particular in mind but…oh…off the top of my head, if you made me pick one totally spontaneously, I might mention by way of example one little piece of code called the .NET Framework.

For the most part, I’ve had very few problems with the Base Class Libraries or other parts of the Framework. Most of my testing woes came when writing code against ASP.NET. The ASP.NET MVC framework hopes to help address some of that.
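To make the Bridge/Adapter point above concrete, here’s a rough sketch of the kind of wrapper I mean; the interface name and members are made up for illustration:

public interface IHttpContextAdapter
{
  string UserAgent { get; }
}

//Production code takes IHttpContextAdapter instead of the sealed
//HttpContext, so tests can pass in a fake implementation.
public class HttpContextAdapter : IHttpContextAdapter
{
  private readonly HttpContext context;

  public HttpContextAdapter(HttpContext context)
  {
    this.context = context;
  }

  public string UserAgent
  {
    get { return context.Request.UserAgent; }
  }
}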

I’ve been in a lot of internal discussions recently talking with various people and teams about testable code. In order to contribute more value to these discussions, I am trying to gather specific cases and scenarios where testing your code is really painful.

What I am not looking for is feedback such as

“It’s hard to write unit tests when writing a Foo application/control/part.”

or

“Class Queezle should really make method Bloozle public.”

Perhaps Bloozle should be public, but I am not interested in theoretical pains. If the inaccessibility of Bloozle caused a real problem in a real unit test, that’s what I want to hear.

What I am looking for is specifics! Concrete scenarios that are blocked or extremely painful. Including the actual unit test is even better! For example…

When writing ASP.NET code, I want to have helper methods that accept an HttpContext instance and get some values from its various properties and then perform a calculation. Because HttpContext is sealed and is tightly coupled to the ASP.NET stack, I can’t mock it out or replace it with a test specific subclass. This means I always have to write some sort of bridge or adapter that wraps the context which gets tedious.

Only, you can’t use that one, because I already wrote it. Ideally, I’d love to hear feedback from across the board, not just ASP.NET. Got issues with WPF, WCF, Sharepoint Web Parts, etc… tell us about it. Please post them in my comments or on your blog and link to this post. Your input is very valuable and could help shape the future of the Framework, or at least help me to sound like I am clued into customer needs next time I talk to someone internal about this. ;)

Technorati Tags: TDD,Unit Testing

code

At the risk of embarrassing myself and eliciting more groans, I am going to share a joke I made up for my Code Camp talk. I was inspired to come up with some humor based on Jeff’s Sesame Street Presentation Rule post. I fully understand his post was addressing something deeper than simply telling a gratuitous joke in the presentation.

The context is that I just finished explaining the various dependencies between the Model, View, and Controller within the MVC pattern.

UPDATE: Updating this joke as the feedback comes in. The original is at the bottom.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The View has a few more drinks than the others and turns to the Model and says,

“Can you give me a ride home tonight? I know I always burden you with this, but I can never depend on that guy.”

Ba dum dum.

I am refining it a bit and posting it here in the hopes that I can improve upon it with your feedback. :)

Here’s the original as a point of comparison.

So a Model, a View, and a Controller walk into a bar and sit down to order some drinks from the bartender. The Model has a few more drinks than the others and turns to the Controller and says,

“Can you give me a ride home tonight? I know I always burden you with this and never ask him, but I can never depend on that guy.”

Thanks for the feedback!

Technorati Tags: Humor,ASP.NET MVC

asp.net, asp.net mvc

The ASP.NET and Silverlight team are hiring! Brad Abrams (who happens to be my “grand-boss” as in my boss’s boss) posted a developers wanted ad on his blog:

  • Are you a JavaScript guru who has a passion to make Ajax easier for the masses?
  • Are you the resident expert in ASP.NET and consistently think about how data driven applications could be easier to build?
  • Are you a TDD or patterns wonk that sees how ASP.NET MVC is a game-changer?
  • Are you excited about the potential of Silverlight for business applications, and particularly the potential for the ASP.NET+Silverlight combination where we have the great .NET programming model on the server and client?

Some of you out there have provided great insight into how ASP.NET MVC could be better (and then some of you just point out its warts).  Either way, here’s a chance to have an even bigger influence on its design and code. I promise I’m easy to work with.

These are senior developer positions…

My team has several openings for senior, experienced developers with a passion for platform\framework technology. If you have 5+ years of software development industry experience, know how to work effectively in teams, have great software design abilities and are an excellent developer, then we just might have the spot for you!

Oh, and if you do get hired and join my feature team, I like my coffee with milk and sugar. Thank you very much. ;)

Technorati Tags: ASP.NET,ASP.NET MVC,Microsoft,Jobs

code

The Seattle Code Camp (which despite the misleading photo, isn’t a camping trip) is now over and I have nothing but good things to say about it. I didn’t get a chance to see a lot of talks but did enjoy the xUnit.net talk by Jim Newkirk and Brad Wilson. I’m a fan of their approach to providing extensibility and this session provided all the impetus I needed to really give xUnit.net a try rather than simply talking about trying it. :)

As for my own talk, I had the great pleasure of showing up late to my talk. To this moment, I still don’t know why my alarm didn’t go off. All indications are that it was set properly.

My wife woke me up at 9:00 AM asking, “Don’t you have a talk to give today?” I sure did, at 9:15 AM. I drove like a madman from Mercer Island (sorry to all the people I cut off) to Redmond and ended up being around 10 minutes late I think.

Fortunately the attendees were patiently waiting and despite my frazzled approach, I think the presentation went fairly well. Perhaps Chris Tavares can tell me in private how it really went. ;) Thanks to everyone who attended and refrained from heckling me. I appreciate the restraint and thoughtfulness.

By the way, Chris is part of the P&P group, but he’s also a member of the ASP.NET MVC feature team. I think his role is a P&P liaison or something like that. Mainly he’s there to give great advice and help us out. So definitely give his blog a look for some great software wisdom.

As this was my first Code Camp, I am definitely looking forward to the next one.

Technorati Tags: Code Camp,ASP.NET MVC

personal

Well today I turn old. 33 years old to be exact. A day after Hanselman turns really old. It seems I’m always following in that guy’s footsteps, doesn’t it?

Oh look, Hanselman gets a blue badge so Phil has to go and get a blue badge. Hanselman has a birthday and Phil has to go and have a birthday on the next day. What a pathetic follower!

What’s interesting is that Rob Conery’s birthday was just a few days ago and we’re all recent hires and involved in MVC in some way. Ok, maybe that isn’t all that interesting after all as it is statistically insignificant, but I thought I’d mention it anyways.

Clustering is a natural occurrence in random sequences. In fact, when looking at a sequence in which there is no clustering, you’d pretty much know it wasn’t random. That’s often what happens when a human tries to make a sequence look random. The person will tend to spread things evenly out. But randomness typically forms clusters. Not only that, but people have a natural tendency to find patterns in sequences and phenomena that don’t exist. But I digress.

Code Camp Seattle

If you’re in the Seattle area this weekend, I’ll be speaking at the Seattle Code Camp. There are several speakers covering various aspects of ASP.NET MVC so I thought I would try not to overlap and maybe cover a bit of an introduction, a few techniques, and some Q & A.

In other words, I’m woefully unprepared because I’ve been so slammed trying to get a plan finalized for MVC that we can talk about. But I’m enlisting Rob’s help to put together some hopefully worthwhile demos. All I really need to do is get a designer involved to make something shiny. ;)

In any case, it’s 2 AM, I need to retire for the evening. I leave you with this photo of my son after he got ahold of a pen while my wife wasn’t looking. It was a brief moment, but that’s all he needed to change his look.

Cody Pen Moustache

code, tdd

In a recent post, Frans Bouma laments the lack of computer science or reference to research papers by agile pundits in various mailing lists (I bet this really applies to nearly all mailing lists, not just the ones he mentions).

What surprised me to no end was the total lack of any reference/debate about computer science research, papers etc. … in the form of CS research. Almost all the debates focused on tools and their direct techniques, not the computer science behind them. In general asking ’Why’ wasn’t answered with: “research has shown that…” but with replies which were pointing at techniques, tools and patterns, not the reasoning behind these tools, techniques and patterns.

As a fan of the scientific method, I understand the difference between a hypothesis and a theory/law. Experience and anecdotal evidence do not a theory make, as anyone who’s participated in a memory experiment will tell you: memory itself is easily manipulated. At the same time though, if a hypothesis works for you every time you’ve tried it, you start to have confidence that the hypothesis holds weight.

So while the lack of research was a slight itch sitting there in the back of my mind, I’ve never been overly concerned because I’ve always felt that the efficacy of TDD would hold weight when put to the test (get it?). It is simply a young hypothesis and it was only a matter of time before experimentation would add evidence to bolster the many claims I’ve made on my blog.

I’ve been having a lot of interesting discussions around TDD internally here lately and wanted to brush up on the key points when I happened upon a paper published in IEEE Transactions on Software Engineering entitled On the Effectiveness of Test-first Approach to Programming.

The researchers put together an experiment in which an experiment group and a control group implemented a set of features in an incremental fashion. The experiment group wrote tests first; the control group applied a more conventional approach of writing the tests after the fact. As a result, they found…

We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed.

The interesting result here is that quality is more a factor of the number of tests you write, and not whether you write them first. TDD just happens to be a technique in which the natural inclination (path of least resistance) is to write more tests than less in the same time span. The lesson here is even if you don’t buy TDD, you should still be writing automated unit tests for your code.

Obviously such a controlled experiment on undergraduate students leaves one wondering if the conclusions drawn can really be applied to the workplace. The researchers do address this question of validity…

The external validity of the results could be limited since the subjects were students. Runeson [21] compared freshmen, graduate, and professional developers and concluded that similar improvement trends persisted among the three groups. Replicated experiments by Porter and Votta [22] and Höst et al. [23] suggest that students may provide an adequate model of the professional population.

Other evidence they refer to even suggests that in the case of advance techniques, the benefit that professional developers gain outweighs that of students, which could bolster the evidence they present.

My favorite part of the paper is the section in which they offer their explanations for why they believe that Test-First programming might offer productivity benefits. I won’t dock them for using the word synergistic.

We believe that the observed productivity advantage of Test-First subjects is due to a number of synergistic effects:

  • Better task understanding. Writing a test before implementing the underlying functionality requires the programmer to express the functionality unambiguously.
  • Better task focus. Test-First advances one test case at a time. A single test case has a limited scope. Thus, the programmer is engaged in a decomposition process in which larger pieces of functionality are broken down to smaller, more manageable chunks. While developing the functionality for a single test, the cognitive load of the programmer is lower.
  • Faster learning. Less productive and coarser decomposition strategies are quickly abandoned in favor of more productive, finer ones.
  • Lower rework effort. Since the scope of a single test is limited, when the test fails, rework is easier. When rework immediately follows a short burst of testing and implementation activity, the problem context is still fresh in the programmer’s mind. With a high number of focused tests, when a test fails, the root cause is more easily pinpointed. In addition, more frequent regression testing shortens the feedback cycle. When new functionality interferes with old functionality, this situation is revealed faster. Small problems are detected before they become serious and costly.

Test-First also tends to increase the variation in productivity. This effect is attributed to the relative difficulty of the technique, which is supported by the subjects’ responses to the post-questionnaire and by the observation that higher skill subjects were able to achieve more significant productivity benefits.

So while I don’t expect that those who are resistant or disparaging of TDD will suddenly change their minds on TDD, I am encouraged by this result as it jibes with my own experience. The authors do cite several other studies (future reading material, woohoo!) that for the most part support the benefits of TDD.

So while I’m personally convinced of the benefits of TDD and feel the evidence thus far supports that, I do agree that the evidence is not yet overwhelming. More research is required.

I prefer to take a provisional approach to theories, ready to change my mind if the evidence supports it. Though in this case, I find TDD a rather pleasant and enjoyable method for writing code. There would have to be a massive amount of solid evidence that TDD is a colossal waste of time for me to drop it.

Technorati Tags: TDD,Unit Tests,Research


After joining Microsoft and drinking from the firehose a bit, I’m happy to report that I am still alive and well and now residing in the Seattle area along with my family. In meeting with various groups, I’ve been very excited by how much various teams here are embracing Test Driven Development (and its close cousin, TAD aka Test After Development). We’ve had great discussions in which we really looked at a design from a TDD perspective and discussed ways to make it more testable. Teams are also starting to really apply TDD in their development process as a team effort, and not just sporadic individuals.

TDD doesn’t work well in a vacuum. I mean, it can work to adopt TDD as an individual developer, but the effectiveness of TDD is lessened if you don’t consider the organizational changes that should occur in tandem when adopting TDD.

One obvious example is the fact that TDD works much better when everyone on your team applies it. If only one developer applies TDD, that developer loses the regression-testing benefit of a unit test suite when making changes that might affect code written by a coworker who doesn’t have unit test coverage.

Another example that was less obvious to me until now was how the role of a dedicated QA (Quality Assurance) team changes subtly when part of a team that does TDD. This wasn’t obvious to me because I’ve rarely been on a team with such talented, technical, and dedicated QA team members.

QA teams at Microsoft invest heavily in test automation from the UI level down to the code level. Typically, the QA team goes through an exploratory testing phase in which they try to break the app and suss out bugs. This is pretty common across the board.

The exploratory testing provides data for the QA team to use in determining what automation tests are necessary and provide the most bang for the buck. Engineering is all about balancing constraints so we can’t write every possible automated test. We want to prioritize them. So if exploratory testing reveals a bug, that will be a high priority area for test automation to help prevent regressions.

However, this appears to have some overlap with TDD. Normally, when someone reports a bug to me, the test driven developer, I will write a unit test (or tests) that should pass but fails because of that bug. I then fix the bug and ensure my test (or tests) now pass because the bug is fixed. This prevents regressions.
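For example, a regression test written in response to a bug report might look something like this (SlugGenerator and the bug itself are hypothetical):

[TestMethod]
public void GeneratedSlugCollapsesConsecutiveSpaces()
{
  //Bug report: two spaces in a title produced a double hyphen.
  //This test fails until the bug is fixed, then guards against
  //the bug quietly coming back in a later change.
  string slug = SlugGenerator.Generate("Hello  World");
  Assert.AreEqual("hello-world", slug);
}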

In order to reduce duplication of efforts, we’re looking at ways of reducing this overlap and adjusting the QA role slightly in light of TDD to maximize the effectiveness of QA working in tandem with TDD.

In talking with our QA team, here’s what we came up with that seems to be a good guideline for QA in a TDD environment:

  • QA is responsible for exploratory testing along with non-unit tests (system testing, UI testing, integration testing, etc…). TDD often focuses on the task at hand so it may be easy for a developer to miss obscure test cases.
  • Upon finding bugs in exploratory testing, assuming the test case doesn’t require external systems (aka it isn’t a system or integration test), the developer would be responsible for test automation via writing unit tests that expose the bug. Once the test is in place, the developer fixes the bug.
  • In the case where the test requires integrating with external resources and is thus outside the domain of a unit test, the QA team is responsible for test automation.
  • QA takes the unit tests and run them on all platforms.
  • QA ensures API integrity by testing for API changes.
  • QA might also review unit tests which could provide ideas for where to focus exploratory testing.
  • Much of this is flexible and can be negotiated between Dev and QA as give and take based on project needs.

Again, it is a mistake to assume that TDD should be done in a vacuum or that it negates the need for QA. TDD is only one small part of the testing story and if you don’t have testers, shame on you ;) . One anecdote a QA tester shared with me was a case where the developer had nice test coverage of exception cases. In these cases, the code threw a nice informative exception. Unfortunately a component higher up the stack swallowed the exception, resulting in a very unintuitive error message for the end user. This might rightly be considered an integration test, but the point is that relying solely on the unit tests caused an error to go unnoticed.

TDD creates a very integrated and cooperative relationship between developers and QA staff. It puts the two on the same team, rather than simply focusing on the adversarial relationship.

Technorati Tags: TDD,Microsoft


File this in my learn something new every day bucket. I received an email from Steve Maine after he read a blog post in which I discuss the anonymous object as dictionary trick that Eilon came up with.

He mentioned that there is an object initializer syntax for collections and dictionaries.

var foo = new Dictionary<string, string>()
{
  { "key1", "value1" },
  { "key2", "value2" }
};

That’s pretty nice!

Here is a post by Mads Torgersen about collections and collection initializers.

Naturally someone will mention that the Ruby syntax is even cleaner (I know, because I was about to say that).

However, suppose C# introduced such syntax:

var foo = {"key1" => "value1", 2 => 3};

What should be the default type for the key and values? Should this always produce Dictionary<object, object>? Or should it attempt type inference for the best match? Or should there be a funky new syntax for specifying types?

var foo = <string, object> {"key1" => "value1", "2" => 3};

My vote would be for type inference. If the inferred type is not the one you want, then you have to resort to the full declaration.

Then again, the initializer syntax that does exist is much better than the old way of doing it, so I’m happy with it for now. In working with a statically typed language, I don’t expect that all idioms for dynamic languages will translate over in as terse a form.
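For comparison, here’s the old way of writing the dictionary from the first example:

//Pre-C# 3.0: declare, then call Add once per entry.
Dictionary<string, string> foo = new Dictionary<string, string>();
foo.Add("key1", "value1");
foo.Add("key2", "value2");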

Check out this use of lambda expressions by Alex Henderson to create a hash that is very similar in style to what ruby hashes look like. Thanks Sergio for pointing that out in the comments.
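The gist of the trick, roughly sketched below (my own simplification, not Alex’s exact code), is to use each lambda’s parameter name as the key and its return value as the value:

public static Dictionary<string, object> Hash(
  params Func<string, object>[] args)
{
  Dictionary<string, object> items = new Dictionary<string, object>();
  foreach (Func<string, object> func in args)
  {
    //The lambda's parameter name becomes the key; invoking
    //the delegate produces the value.
    string name = func.Method.GetParameters()[0].Name;
    items[name] = func(null);
  }
  return items;
}

//Usage: Dictionary<string, object> hash = Hash(key1 => "value1", key2 => 3);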

Technorati Tags: C#,Object Initializers,Collection Initializers

personal


No, the title of this post isn’t suggesting that L.A. is about to be demolished to make way for a hyperspatial express route. It’s pretty much already one big freeway, isn’t it? ;)

Seriously though, I am going to miss this wonderful city after spending the last fourteen years here. A lot of people look at L.A. as a place they could never live. I thought that myself coming from Alaska, but boy was I wrong.

I think the weather alone is enough to move to L.A. Perhaps the only place with better weather might be San Diego. But along with great weather during the day, L.A. has a great nightlife.

Being an outdoorsy type, I’ve always been fortunate that a good camping trip is just a short drive away.


Not to mention great skiing and snowboarding.


I even took part in my first protest march in Los Angeles, something that pretty much never happened in Alaska.


One of the things I’ll miss the most is the great soccer game I’ve been a part of for 10 years there. This particular regular pick-up game has been going on for around 15 to 20 years in one form or another.


Today the movers came and moved all of our stuff out and we left the keys behind in our now empty house. Tomorrow morning, we will be on a plane for Seattle.

The one thing that has made this move a lot easier for us has been that Microsoft took care of just about everything. Let me tell you, if you’re thinking about joining Microsoft, we might not have free food like some other companies, but we do have a great healthcare plan and the relocation benefits (at least for me) are astounding. They have made this transition a lot less of a nightmare than it could’ve been. Though it has still been tough with all three of us catching colds around the same time.

Stay classy L.A., I will miss you. Seattle, watch your back. We’ll be there tomorrow.

Technorati Tags: Moving,Seattle,L.A.

code

Six months ago, six days after the birth of my son, Subkismet was born, which I introduced as the cure for comment spam. The point of the project was to be a useful class library containing multiple spam-fighting classes that could be easily integrated into a blog platform or any website that allows users to comment.

One of my goals with the project was to make it safe to enable trackbacks/pingbacks again.

I’ve been very happy that others have joined in the Subkismet efforts. Mads Kristensen, well known for BlogEngine.NET, contributed some code.

Keyvan Nayyeri has also been hard at work putting the finishing touches on an implementation of a client for the Google Safe Browsing API service.

Keyvan just announced the release of Subkismet 1.0. Here is a list of what is included in this release. I’ve bolded the items that are new since the first release of Subkismet.

For the current version (1.0) we’ve included following tools in Subkismet core library:

  • Akismet service client: Akismet is an online web service that helps to block spam. This service client implementation for .NET is done by Phil and lets you send your comments to the service to check them against a huge database of spammers.
  • Invisible CAPTCHA control: A clever CAPTCHA control that hides from real humans but appears to spammers. This control is done by Phil and you can read more about it here.
  • CAPTCHA control: An excellent implementation of traditional image CAPTCHA control for ASP.NET applications.
  • Honeypot CAPTCHA: Another clever CAPTCHA control that hides form fields from spam bots to stop them. This control is done by Phil and you can read more about it here.
  • Trackback HTTP Module: An ASP.NET Http Module that blocks trackback spam. This module is done by Mads Kristensen and he has explained his method in a blog post and this Word document.
  • Google Safe Browsing API service client: This is done by me and you saw its Beta 2 version last week. Google Safe Browsing helps you check URLs to see if they’re phishing or malware links.

Since it is always fun for us developers to write code with the latest technologies, we’re planning to have the next release target .NET 3.5.

Part of the reason for that is that over the holidays, I wrote a little something something just for fun as a way of getting up to speed with C# 3.0. It was a lot of fun and I think my secret project will make a nice addition to Subkismet. I’ll write about it when it is ready for action.

Unfortunately, this move has made it difficult to fully complete the implementation and test it out on real data.

Subkismet is hosted on CodePlex. Many thanks go out to Mads and others for their contributions and to Keyvan for all the implementation work as well as release work he did. And for the record, Keyvan mentioned that I ordered him to prepare the release. While I do have dictatorial tendencies, I think this is too harsh a description. I’d like to say I requested that he prepare the release. ;)

Technorati Tags: Subkismet,Spam

personal

You’ve been forewarned, this is yet another end-of-year slightly self-inflating retrospective blog post (complete with the cheesy meta-blogging introduction).

But this year for me was quite significant and full of big changes. On a personal level, this was the year the agent of my little world domination plan was hatched.

Newborn Cody With Glasses

He was tiny then, but six months later, he’s gotten quite big. He’s remained in the 95th percentile in height for a baby his age.

Cody in a shark costume

As much fun as it has been with Cody, he’s never taken his devious eye off the prize in keeping with the master plan.

cody-plotting

He is not going to be a kid one wants to mess with. I kid you not, he laughs out loud with glee when I whack him with a pillow or pound on his belly with my fists. That’s his idea of fun. As I said originally, he’s a li’l thug.

In terms of my professional life, it’s been quite a whirlwind year. All in the same year, I:

  • became a dad,
  • left the startup I co-owned,
  • joined Microsoft as a PM on ASP.NET MVC,
  • sold our house in L.A., and
  • moved my family up to the Seattle area.

Throughout this whole ordeal, my wonderful wife has been most gracious and supportive. Consider the fact that when she was pregnant with Cody, I was working as a co-owner at a startup, often not taking a paycheck so we could pay our employees. That has to be stressful.

Then after one job change, I tell her about this great opportunity and ask her if it is ok to move to Redmond. Her response was immediate: Let’s go! What a woman!

Unfortunately once we made that decision, in a wonderful display of synchronicity, the Los Angeles housing market started a downward spiral. We had zero offers for the first two months of trying to sell our house. It was so bad, in fact, that we decided we would try and rent the house. On the day we had an open house for renters, an offer came just in the nick of time!

So while we close out another year, we’re also closing out the Los Angeles era for my family. We really love this town. For one thing, we have some great friends who live here. We’re trying to recruit them up to Seattle, but they keep mumbling something about the rain and cold as if it were a bad thing.

Something else we’ll miss is the great diverse food here. Many regard the Korean food in L.A. as the best in the world. And the Japanese market and restaurants on Sawtelle have been a staple for us.

I’ll particularly miss my pickup soccer crew as well as the league of ex-pros. Lots to love about L.A., but at the same time, we’re quite excited about our new adventures and opportunities up in the Seattle area. For example, the opportunity to breathe clean air!

To all who have read this far, I hope 2007 was a good year for you and that 2008 proves to be even better. Happy New Year!

code

Recently, Maxfield Pool from CodeSqueeze sent me an email about a new monthly feature he calls Developer Faceoff in which he pits two developers side-by-side for a showdown.

It’s an obvious attempt to gain readers via an appeal to the vanity of the featured bloggers, who have no choice but to link to it. Brilliant!  :)

Seriously, I think it’s a fun and creative idea, but I have no credibility in saying so because I’m obviously biased being featured alongside my longtime nemesis and friend, Scott Hanselman. So check it out for yourself.

Some of my answers were truncated due to the format so I thought it’d be fun to elaborate on a couple of questions.

5. Are software developers - engineers or artists?

Don’t take this as a copout, but a little of both. I see it more as craftsmanship. Engineering relies on a lot of science. Much of it is demonstrably empirical and constrained by the laws of physics. Software is constrained less by physics than by the limits of the mind.

Software in many respects is as much a social activity as it is an engineering discipline. Working well as a team is essential, as is understanding your users and how they get their work done so your software helps, rather than hinders.

Much of software is based on faith and anecdotal evidence. For example, do we have scientific evidence that TDD improves the design of code? Do we have empirical evidence that it doesn’t? Research is scant, in part because it’s extremely challenging to set up valid experiments. Much of software research focuses on retrospective research, the sort that sociologists do.

So again, back to the question. Crafting a sorting algorithm is engineering. Building a line of business app that delights all the users is an art.

8. What is the biggest mistake you made along the way?

My first year as a software developer, I deployed a reset-password feature to a large music community site. A week or so after we deployed, the child of a VP of the client (or maybe it was an investor, I can’t remember) complained she couldn’t get into her account and hadn’t received her new password.

I was “sure” I had tested it, but it turned out I hadn’t done a very good job of it. There was a bug in my code and a bunch of users were waiting around for a regenerated password that would never arrive.

Needless to say, my boss wasn’t very happy about it, and for a good while I trod lightly, worried I’d lose my job if I made another mistake. In retrospect, it was a good thing to get such a big production mistake out of the way early because I was extremely careful afterwards, always triple-checking my work.

Tags: Software Development

code, asp.net mvc

A common pattern when submitting a form in ASP.NET MVC is to post the form data to an action which performs some operation and then redirects to another action afterwards. The only problem is, the form data is not repopulated automatically after a redirect. Let’s look at remedying that, shall we?

When submitting form data, the ASP.NET MVC Toolkit includes a helper extension method that makes it easy to go the other direction, populating an object from the request. Check out the following simplified controller action adapted from an example in ScottGu’s post on handling form data.

[ControllerAction]
public void Create()
{
  Article article = new Article();
  article.UpdateFrom(Request.Form);
  
  repository.Persist(article);
 
  RedirectToAction("List");
}

UpdateFrom() is an extension method on the type object that will attempt to populate each property of the object with a corresponding value from the posted form.
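I won’t reproduce the toolkit’s actual implementation here, but conceptually UpdateFrom amounts to something like this reflection-based sketch:

//Conceptual sketch only; the real toolkit method may differ.
public static class UpdateFromSketch
{
  public static void UpdateFrom(this object target, NameValueCollection form)
  {
    foreach (PropertyDescriptor prop in TypeDescriptor.GetProperties(target))
    {
      string value = form[prop.Name];
      if (value != null && !prop.IsReadOnly)
      {
        //Convert the posted string to the property's actual type.
        prop.SetValue(target, prop.Converter.ConvertFromString(value));
      }
    }
  }
}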

The example above doesn’t address what should happen if the data posted for the article is incomplete or incorrect.

In that situation, I may want to redirect back to the action that renders the form for creating a new article. But now I’m responsible for repopulating the form fields because the posted values are lost after a redirect.

One option is to use the TempData dictionary, which is intended for these scenarios in which you need to persist data for the next request, but not afterwards.

[ControllerAction]
public void Create()
{
  Article article = new Article();
  article.UpdateFrom(Request.Form);
  
  //Pretend this was extensive validation
  if(article.IsValid)
  {
    repository.Persist(article);
    RedirectToAction("List");
  }
  else
  {
    TempData["message"] = "Please fill in all fields!";
    TempData["title"] = article.Title;
    TempData["body"] = article.Body;
    //... and so on and so on...

    //Takes us to a form for creating an article.
     RedirectToAction("New");
  }
}

That’ll work nicely for object with a small number of properties, but when you have a lot of properties to store in TempData, it gets a bit cumbersome. We have an extension method for populating an object from a form, why not do something similar for populating TempData from an object?

I put together a series of helper extension methods that help simplify the above case. Using these extension methods, you can reduce the above code to…

[ControllerAction]
public void Create()
{
  Article article = new Article();
  article.UpdateFrom(Request.Form);
  
  //Pretend this was extensive validation
  if(article.IsValid)
  {
    repository.Persist(article);
    RedirectToAction("List");
  }
  else
  {
    TempData["Message"] = "Please supply all fields.";
    TempData.PopulateFrom(article);

    //Takes us to a form for creating an article.
     RedirectToAction("New");
  }
}

PopulateFrom is an extension method on TempDataDictionary that populates the dictionary collection using whatever you pass into it. I wrote some overloads so you can populate it directly from an object, the form collection, or from a dictionary. I will show the code for the extension methods at the end.

This gets us part of the way, but we still need to populate the form field values in the view. Here is the relevant code snippet from the New view.

<% using(Html.Form("Article", "Create")) { %>
    
<span class="error">
    <%= ViewContext.TempData.SafeGet("Message") %>
</span>

<label for="title">Title: </label> 
<%= Html.TextBox("title"
  , Html.Encode(ViewContext.TempData.SafeGet("title"))) %>

<label for="body">Body: </label>
<%= Html.TextArea("body"
  , Html.Encode(ViewContext.TempData.SafeGet("body"))) %>


<%= Html.SubmitButton() %>
<% } %>

When retrieving a value from the TempData dictionary, if the key you specify doesn’t exist, it throws an exception. So I added a SafeGet extension method to make it cleaner to extract values from TempData.

Note: We plan on making TempData a direct property of ViewPage in the future, so you don’t have to go through ViewContext.

Here is the code for my temp data extensions…

public static class TempDataExtensions
{
  public static void PopulateFrom(this TempDataDictionary tempData, object o)
  {
    foreach (PropertyValue property in o.GetProperties())
    {
      tempData[property.Name] = property.Value;
    }
  }

  public static void PopulateFrom(this TempDataDictionary tempData
    , NameValueCollection nameValueCollection)
  {
    foreach (string key in nameValueCollection.Keys)
      tempData[key] = nameValueCollection[key];
  }

  public static void PopulateFrom(this TempDataDictionary tempData
    , IDictionary<string, object> dictionary)
  {
    foreach (string key in dictionary.Keys)
      tempData[key] = dictionary[key];
  }

  public static string SafeGet(this TempDataDictionary tempData, string key)
  { 
    object value;
    if (!tempData.TryGetValue(key, out value))
      return string.Empty;
    return value.ToString();
  }
}

These methods rely on some object extension methods I wrote based on the work that Eilon Lipton did with using C# 3.0 Anonymous Types as Dictionaries.

public static class ObjectHelpers
{
  public static IDictionary<string, object> ToDictionary(this object o)
  {
    Dictionary<string, object> properties = new Dictionary<string, object>();

    foreach (PropertyValue property in o.GetProperties())
    {
      properties.Add(property.Name, property.Value);
    }
    return properties;
  }


  internal static IEnumerable<PropertyValue> GetProperties(this object o)
  {
    if (o != null)
    {
      PropertyDescriptorCollection props = TypeDescriptor.GetProperties(o);
      foreach (PropertyDescriptor prop in props)
      {
        object val = prop.GetValue(o);
        if (val != null)
        {
          yield return new PropertyValue { Name = prop.Name, Value = val };
        }
      }
    }
  }
}

internal sealed class PropertyValue
{
  public string Name { get; set; }
  public object Value { get; set; }
}

The ToDictionary method makes it easy to convert an anonymous typed object to a dictionary.
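For example, here's a throwaway snippet (the values are made up):

//Convert an anonymous type to a dictionary
IDictionary<string, object> values =
  new { controller = "product", action = "Index" }.ToDictionary();

//values["controller"] is now "product" and values["action"] is "Index"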

Using these extension methods should help reduce some of the chore of handling form submissions with the CTP version of ASP.NET MVC. In future versions of the framework, I hope we can make some of these common scenarios more streamlined.

And before I forget, I have a simple solution for download that includes full source code for the extension methods I wrote as well as a trivially simple application that demonstrates using this code.

Tags: ASP.NET MVC , TempData , Form Handling

asp.net, code, asp.net mvc comments suggest edit

The ASP.NET Routing engine used by ASP.NET MVC plays a very important role. Routes map incoming URL requests to a Controller and Action. They are also used to construct a URL to a Controller/Action. In this way, they provide a two-way mapping between URLs and controller actions.

When building routes, it's useful to write unit tests for them to ensure you've set up the mappings you intend.

ScottGu touched a bit on unit testing routes in part 2 of his series on MVC in which he covers URL Routing. In this post, we’ll go into a little more depth with testing routes.

To keep it interesting, let me show you what the final unit test looks like for testing a route. That way, if you don’t care about the details, you can skip all this discussion and just download the code.

[TestMethod]
public void RouteHasDefaultActionWhenUrlWithoutAction()
{
  RouteCollection routes = new RouteCollection();
  GlobalApplication.RegisterRoutes(routes);

  TestHelper.AssertRoute(routes, "~/product"
    , new { controller = "product", action = "Index" });
}

The first part of the test simply populates a collection with routes from your Global application class defined in Global.asax.cs. The second part calls a helper method that attempts to map the specified virtual URL to a route and compares the resulting route data to a dictionary of name/value pairs. The dictionary is passed in as an anonymous type using a technique that my coworker Eilon Lipton wrote about on his blog.

Let’s take a quick look at the RegisterRoutes method so you can see which routes I am testing.

public static void RegisterRoutes(RouteCollection routes) {
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute("blog-route", "blog/{year}/{month}/{day}",
        new { controller = "Blog", action = "Index", id = "" },
        new { year = @"\d{4}", month = @"\d{2}", day = @"\d{2}" }
    );

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" }
    );
}

Looks like your standard set of routes, no? I threw one in there that looks like one you might use with a blog engine.

Next, I’ll show you how I would write a test the long way using MoQ.

[TestMethod]
public void CanMapNormalControllerActionRoute() {
    RouteCollection routes = new RouteCollection();
    GlobalApplication.RegisterRoutes(routes);

    var httpContextMock = new Mock<HttpContextBase>();
    httpContextMock.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
        .Returns("~/product/list");

    RouteData routeData = routes.GetRouteData(httpContextMock.Object);
    Assert.IsNotNull(routeData, "Should have found the route");
    Assert.AreEqual("product", routeData.Values["Controller"]
        , "Expected a different controller");
    Assert.AreEqual("list", routeData.Values["action"]
        , "Expected a different action");
}

Ok, that isn’t all that bad. It may seem like a lot of code, but it’s pretty straightforward, assuming you understand the general pattern for using a mock framework.

However, we can shorten a lot of this code by using an extension method I wrote in a previous post. I actually wrote an overload that makes it easier to mock a request for a specific URL.
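I won't rehash that post here, but the helper amounts to something like the following sketch (the name FakeHttpContext and the exact shape are illustrative, not necessarily the code from that post):

public static HttpContextBase FakeHttpContext(string url)
{
    //Mock just enough of the request for routing to match a URL
    var httpContextMock = new Mock<HttpContextBase>();
    httpContextMock.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
        .Returns(url);
    return httpContextMock.Object;
}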

That gets us further, but we can do so much more. Here is the code I wrote for my AssertRoute method.

public static void AssertRoute(RouteCollection routes, string url, 
    object expectations) 
{
    var httpContextMock = new Mock<HttpContextBase>();
    httpContextMock.Setup(c => c.Request.AppRelativeCurrentExecutionFilePath)
        .Returns(url);

    RouteData routeData = routes.GetRouteData(httpContextMock.Object);
    Assert.IsNotNull(routeData, "Should have found the route");

    foreach (PropertyValue property in GetProperties(expectations)) {
        Assert.IsTrue(string.Equals(property.Value.ToString(), 
            routeData.Values[property.Name].ToString(), 
            StringComparison.OrdinalIgnoreCase)
            , string.Format("Expected '{0}', not '{1}' for '{2}'.", 
            property.Value, routeData.Values[property.Name], property.Name));
    }
}

This code makes use of the GetProperties method I lifted from Eilon’s blog post, Using C# 3.0 Anonymous types as Dictionaries.

The expectations passed to this method are the name/value pairs you expect to see in the RouteData.Values dictionary.
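For example, a test of the blog route defined earlier might look like this (the date values are arbitrary):

[TestMethod]
public void RouteMapsDatedBlogUrlToBlogController()
{
  RouteCollection routes = new RouteCollection();
  GlobalApplication.RegisterRoutes(routes);

  TestHelper.AssertRoute(routes, "~/blog/2008/07/27"
    , new { controller = "Blog", action = "Index" });
}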

I hope you find this useful. The code (along with other unit test examples) is in a solution ready for download.

UPDATE: I updated the blog post and solution (7/27 1:38 PM PST) to account for all the changes to routing made since I wrote this post originally.

UPDATE AGAIN: Updated the post to use MoQ 3.0 and cleaned up the code a slight bit.

UPDATE: 03/11/2012 - Fixed the broken download link. Rewrote the tests to use xUnit.NET.

Tags: ASP.NET MVC , TDD , Routing

code, asp.net mvc comments suggest edit

Yesterday, Wally McClure interviewed me for the ASP.NET Podcast Show. We chatted for around half an hour on my background, Microsoft, and ASP.NET MVC.

It was a fun chat, but I have to warn you, I was very sleep deprived (a constant condition lately), so at points I ramble a bit, second-guess myself (I was right the first time, model-view-thing-editor!), and even contradict myself.

I’m not normally that harebrained. I promise. Ok, maybe just a little. With all those caveats in place, give it a listen.

This is why writing a blog is so much easier than being interviewed. In a blog entry, none of my self-corrections are visible.

Tags: ASP.NET MVC , Podcast

asp.net, code, asp.net mvc comments suggest edit

I have a set of little demo apps I've been working on that I want to release to the public. I need to clean them up a bit (you'd be surprised how much I swear in comments) and make sure they work with the CTP. Hopefully I will publish them on my blog over the next few weeks.

In the meanwhile, there’s some great stuff being posted by the community I want to call out. All these great posts are making my life easier.

  • Routing Revisited
    • Sean Lynch talks about some interesting route scenarios. Currently the Route object doesn’t support all the scenarios he is attempting. This is good feedback and we’re already looking into it. He mentioned wanting Subtext-style URLs. You better believe I’m going to bring this up. ;) He also brings up a good point clarifying which Page templates in the Add New Item dialog to select when working with master pages. I’m sorry that dialog is crazy busy.
  • Using script.aculo.us with ASP.NET MVC
    • Chad Myers does some fancy schmancy AJAX stuff with ASP.NET MVC and the ever so flashy Script.aculo.us framework. What! No JQuery?! ;) Hasn’t anyone told Chad that Ajax is just a fad? All the interactivity you’ll ever need is in the <blink /> tag.
  • ASP.NET MVC Framework - Create your own IRouteHandler
    • Fredrik Normén didn't like the fact that IControllerFactory.GetController takes in the type of the controller (something we're definitely looking at) because it made it more difficult to use Spring.NET. So he went and implemented his own IControllerFactory and his own IRouteHandler. This is a great demonstration of how to swap out a couple of nodes in the “snake diagram” with your own implementation. While it's a validation of our extensibility story that he was able to accomplish this scenario, the fact that he needed to do all this also highlights areas for improvement.
  • MvcContrib Open Source Project Call for Participation
    • Jeffrey Palermo writes about a brand spanking new OSS project to build useful tools and libraries for MVC. Gotta give them credit for starting an OSS project on a CTP technology the day before it launched.

It is really great to see people building demos and applications on top of ASP.NET MVC. Learning about the rough areas that you run into doing real-world tasks is immensely valuable feedback.

Forums!

We have an official ASP.NET MVC forum now for discussing…surprise surprise…ASP.NET MVC. If you have questions about ASP.NET MVC, I encourage you to ask them there for the benefit of others. Feel free to comment on my blog if you don't get a satisfactory answer in a reasonable amount of time.

Even better, jump in and help answer questions!

Tags: ASP.NET MVC