code, asp.net mvc

Yesterday, Wally McClure interviewed me for the ASP.NET Podcast Show. We chatted for around half an hour on my background, Microsoft, and ASP.NET MVC.

It was a fun chat, but I have to warn you, I was very sleep deprived (a constant condition lately), so at points I tend to ramble a bit, second-guess myself (I was right the first time, model-view-thing-editor!), and even contradict myself.

I’m not normally that harebrained. I promise. Ok, maybe just a little. With all those caveats in place, give it a listen.

This is why writing a blog is so much easier than being interviewed. All the times I correct myself are not visible in a blog entry.

Tags: ASP.NET MVC, Podcast

asp.net, code, asp.net mvc

I have a set of little demos apps I’ve been working on that I want to release to the public. I need to clean them up a bit (you’d be surprised how much I swear in comments) and make sure they work with the CTP. Hopefully I will publish them on my blog over the next few weeks.

In the meanwhile, there’s some great stuff being posted by the community I want to call out. All these great posts are making my life easier.

  • Routing Revisited
    • Sean Lynch talks about some interesting route scenarios. Currently the Route object doesn’t support all the scenarios he is attempting. This is good feedback and we’re already looking into it. He mentioned wanting Subtext-style URLs. You better believe I’m going to bring this up. ;) He also brings up a good point clarifying which Page templates in the Add New Item dialog to select when working with master pages. I’m sorry that dialog is crazy busy.
  • Using script.aculo.us with ASP.NET MVC
    • Chad Myers does some fancy schmancy AJAX stuff with ASP.NET MVC and the ever so flashy Script.aculo.us framework. What! No JQuery?! ;) Hasn’t anyone told Chad that Ajax is just a fad? All the interactivity you’ll ever need is in the <blink /> tag.
  • ASP.NET MVC Framework - Create your own IRouteHandler
    • Fredrik Normén didn’t like the fact that IControllerFactory.GetController takes in the type of the controller (something we’re definitely looking at) because it made it more difficult to use Spring.NET. So he went and implemented his own IControllerFactory and his own IRouteHandler. This is a great demonstration of how to swap out a couple of nodes in the “snake diagram” with your own implementation. While it’s a validation of our extensibility story that he was able to accomplish this scenario, the fact that he needed to do all this also highlights areas for improvement.
  • MvcContrib Open Source Project Call for Participation
    • Jeffrey Palermo writes about a brand spanking new OSS project to build useful tools and libraries for MVC. Gotta give them credit for starting an OSS project on a CTP technology the day before it launched.

It is really great to see people building demos and applications on top of ASP.NET MVC. Learning about the rough areas that you run into doing real-world tasks is immensely valuable feedback.

Forums!

We have an official ASP.NET MVC forum now for discussing…surprise surprise…ASP.NET MVC. If you have questions about ASP.NET MVC, I encourage you to ask them there for the benefit of others. Feel free to comment on my blog if you don’t get a satisfactory answer in a reasonable amount of time.

Even better, jump in and help answer questions!

Tags: ASP.NET MVC

altdotnet design-patterns

Love them or hate them, the ALT.NET mailing list is a source of interesting debate, commentary and insight. I can’t help myself but to participate. Debate is good. Stifling debate is bad. Period. End of debate. (see!? That was bad!)

The ALT.NET community is still young, and as such it is going through a period of identity formation. What are its shared values? What does it mean to be an ALT.NET-er? It’s not exactly clear yet, but it is starting to form.

One thing I would caution this community about is how it defines its shared principles. For example, in one thread one individual mentioned debating me and then in the same message proposed the idea of Composition over Inheritance as a shared principle.

In response, someone posted this:

You can throw the book at those people–literally. Favoring composition over inheritance is straight out of the Gang of Four book. Don’t like design patterns? Fine. No problem. I have a couple of Don Box COM+ books that say the exact same thing.

Here was my response, which I also wanted to put in a blog post since it represents pretty well what I think.

I think ALT.NET should focus more on the principles of thinking for yourself and a desire to improve.

Favoring composition over inheritance is straight out of the Gang of Four book.

So is the Singleton pattern. So is the Template Method pattern.

Sorry, Appeal to Authority doesn’t work for me. Look, I’m not against composition over inheritance in many cases. Perhaps most cases. What I am against is saying that it applies in all cases and that if you don’t do it, you’re not ALT.NET.

I’m against the blind application of these pithy catch phrases. Blindly applying a “best practice” is just as irresponsible as never applying a “best practice”. There is no perfect design. There is no one true way. There is no one size fits all.

Why favor composition over inheritance? What trade-offs are you making when you do so? Developers should think through these things when they make these choices. And when a developer does think through the issue, but makes a choice that differs from what you think, you should applaud that. At least the developer thought through the decision.

I don’t care that a developer doesn’t favor composition over inheritance in a specific case. I only care that the developer thought it through, had a reason for the decision, and wants to improve.

The goal is not to bend developers to the will of some specific patterns, but to get them to think about their work and what they are doing. For example, one advantage with inheritance is that it is easier to use than composition. However, that ease of use comes at the cost that it is harder to reuse because the subclass is tied to the parent class.

One advantage of composition is that it is more flexible because behavior can be swapped at runtime. One disadvantage of composition is that the behavior of the system may be harder to understand just by looking at the source. These are all factors one should think about when applying composition over inheritance.
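To make that trade-off concrete, here is a minimal C# sketch. The Report types are invented purely for illustration: the inheritance version fixes the behavior at compile time, while the composition version accepts it as a collaborator that can be swapped at runtime.

```csharp
using System;

// Inheritance: the "body" behavior is fixed at compile time by the
// subclass; changing it means defining a new subclass.
public abstract class ReportBase
{
    public string Run()
    {
        return "header\n" + Body();
    }

    protected abstract string Body();
}

public class SalesReport : ReportBase
{
    protected override string Body() { return "sales data"; }
}

// Composition: the same behavior is supplied as a collaborator, so it
// can be swapped at runtime -- at the cost of one more object to
// follow when reading the code.
public interface IReportBody
{
    string Body();
}

public class SalesBody : IReportBody
{
    public string Body() { return "sales data"; }
}

public class Report
{
    private readonly IReportBody body;

    public Report(IReportBody body)
    {
        this.body = body;
    }

    public string Run()
    {
        return "header\n" + body.Body();
    }
}
```

Both produce the same output here; the difference is only in how easily the behavior can be replaced later.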

So while I agree that you should favor composition over inheritance, inheritance is still necessary. After all, “the set of components is never quite rich enough in practice.”

That quote is from Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides in Design Patterns. But don’t believe it just because they said it. After all, I would hate to be guilty of an Appeal to Authority. ;)

asp.net, code, asp.net mvc

Eilon Lipton, the lead developer on the ASP.NET MVC project shares some of his thoughts on the design philosophy guiding the shaping of the framework.

There have been many posts describing what the framework is and how to perform tasks, which are really useful. I think a more reflective post like this is a breath of fresh air and a nice insight into how the team makes decisions.

Eilon also covers some of the lessons we’ve already learned in building the CTP, and some of the changes we have planned for the next CTP.

At the end he lists some interesting facts. Here are my two favorites:

  • For the released CTP, the unit test code coverage numbers were about 93%, far more than any other major feature area. This does not include the code coverage by our QA guys, which I’m sure would bring the number up to at least 99%.
  • We had about 250 unit tests for MVC, and the ratio of unit test code to product code is about 1.9 to 1 (in terms of the size of the code files, not lines of code; I’m too lazy to do the latter!).

It has been a real joy to work with this team. For example, I’ve never worked with such experienced and smart QA people before. I always read that the QA team should be involved in design meetings, but Microsoft is the first place I worked where we actually did that in a real manner.

Obviously the developers are top notch as you can see from Eilon’s blog in general. Hopefully the PM of this team can keep up. ;)

Tags: ASP.NET MVC

asp.net, code, asp.net mvc

UPDATE: ASP.NET MVC now works with Visual Web Developer Express SP1

Some developers who downloaded the ASP.NET Extensions CTP specifically for ASP.NET MVC and then opened up Visual Web Developer like it was Christmas morning instead got a lump of coal.

We currently only include Web Application Projects for ASP.NET MVC, which Visual Web Developer does not support. I was planning to write up a post on this, but Scott Koon beat me to it.

And thank goodness! I’m busy enough as it is already. It’s all part of my master plan to have members of the community doing my job for me so we can finally take that trip to Tahiti and sip margaritas while I tell my bosses I’m “telecommuting”. ;) Seriously though, I appreciate it.

If you’re using Visual Web Developer Express, read Scott’s blog post on how to get MVC working for you.

I personally would like to get ASP.NET MVC into more hands, not fewer, so I hope we have a better story for this in a future release.

Tags: ASP.NET MVC, Visual Web Developer Express, VWD

asp.net, code, asp.net mvc, tdd

UPDATE: Completely ignore the contents of this post. All of this is outdated. Test-specific subclasses are no longer necessary with ASP.NET MVC since our April CodePlex refresh.

Just a brief note on writing unit tests for controller actions. When your action has a call to RedirectToAction or RenderView (yeah, pretty much every action) be aware that these methods have dependencies on various context objects.

If you attempt to mock these objects, you sometimes also have to mock their dependencies and their dependencies’ dependencies and so on, depending on what you are trying to test. This is why I wrote my post on Test Specific Subclasses. It provides an easier way to test some of these cases.

Some of these challenges are the nature of mocking and some of them are due to protected methods that we realize we should probably make public.

In this post, I want to demonstrate a couple of unit test techniques for testing controller actions for the CTP release of the ASP.NET MVC Framework. Remember, this is a CTP, so all of this may change in the future. I will be compiling these testing patterns into a longer document on unit testing patterns for ASP.NET MVC.

Controller with RedirectToAction

Here is the really simple controller we’ll test:

public class HomeController : Controller
{
  [ControllerAction]
  public void Index()
  {
    RenderView("Index");
  }

  [ControllerAction]
  public void About()
  {
    RedirectToAction("Index");
  }
}

We will test the About action.

Test Specific Subclass Approach

For the most part, when testing an action that calls RedirectToAction, you just want to no-op that method call. But if you want to verify that the action being redirected to is the correct one, here’s one way to test it using a test-specific subclass.

[Test]
public void VerifyAboutRedirectsToCorrectActionUsingTestSpecificSubclass()
{
  HomeControllerTester controller = new HomeControllerTester();
  controller.About();
  Assert.AreEqual("Index", controller.RedirectedAction
    , "Should have redirected to 'Index'.");
}

internal class HomeControllerTester : HomeController
{
  public string RedirectedAction { get; private set; }

  protected override void RedirectToAction(object values)
  {
    this.RedirectedAction = (string)values.GetType()
      .GetProperty("Action").GetValue(values, null);
  }
}

In this test I inherited from the controller I am testing, following the Test Specific Subclass pattern. (Note: this pattern leaves a bad taste in some TDDers’ mouths. I am aware of that. I still like it. But I already know some of you don’t.)

One thing that is really ugly is that I had to resort to reflection to get the action we are redirecting to. This testing scenario will be fixed in the next release. I’m just showing you how it is done now.

Mock Framework Approach

In this test, I will use RhinoMocks to test the same thing as above.

[Test]
public void VerifyAboutRedirectsToCorrectActionUsingMockViewFactory()
{
  RouteTable.Routes.Add(new Route
  {
    Url = "[controller]/[action]",
    RouteHandler = typeof(MvcRouteHandler)
  });

  HomeController controller = new HomeController();
    
  MockRepository mocks = new MockRepository();
  IHttpContext httpContextMock = mocks.DynamicMock<IHttpContext>();
  IHttpRequest requestMock = mocks.DynamicMock<IHttpRequest>();
  IHttpResponse responseMock = mocks.DynamicMock<IHttpResponse>();
  SetupResult.For(httpContextMock.Request).Return(requestMock);
  SetupResult.For(httpContextMock.Response).Return(responseMock);
  SetupResult.For(requestMock.ApplicationPath).Return("/");
  responseMock.Redirect("/Home/Index");

  RouteData routeData = new RouteData();
  routeData.Values.Add("Action", "About");
  routeData.Values.Add("Controller", "Home");
  ControllerContext contextMock = new 
    ControllerContext(httpContextMock, routeData, controller);
  mocks.ReplayAll();

  controller.ControllerContext = contextMock;
  controller.About();

  mocks.VerifyAll();
}

The mock test actually tests the final URL that we would be redirecting to. You can verify this test is actually testing what I say it will by changing the line with “/Home/Index” to something like “/Home/Index2” and see that the test does fail.

Controller With RenderView

Using the same controller class above, let’s write a test to make sure the correct view is rendered.

Using Test Specific Subclass

[Test]
public void VerifyIndexSelectsCorrectViewUsingTestSpecificSubclass()
{
  HomeControllerTester controller = new HomeControllerTester();
  controller.Index();
  Assert.AreEqual("Index", controller.SelectedViewName
    , "Should have selected 'Index'.");
}

internal class HomeControllerTester : HomeController
{
  public string SelectedViewName { get; private set; }
    
  protected override void RenderView(string viewName
    , string masterName, object viewData)
  {
    this.SelectedViewName = viewName;   
  }
}

Using a Mock Framework

UPDATE: Sorry, but the following test doesn’t work in the CTP. I had compiled it against an interim build and not the CTP version. Apologies. For this scenario, you pretty much have to use the subclass approach. We will make this better in the next CTP.

[Test]
public void VerifyIndexSelectsCorrectViewUsingMockViewFactory()
{
  MockRepository mocks = new MockRepository();
  IViewFactory mockViewFactory = mocks.DynamicMock<IViewFactory>();
  IView mockView = mocks.DynamicMock<IView>();
  IHttpContext httpContextMock = mocks.DynamicMock<IHttpContext>();

  HomeController controller = new HomeController();
  RouteData routeData = new RouteData();

  ControllerContext contextMock = new ControllerContext(httpContextMock
    , routeData, controller);

  Expect.Call(mockViewFactory.CreateView(contextMock, "Index"
    , string.Empty, controller.ViewData)).Return(mockView);
  Expect.Call(delegate { mockView.RenderView(null); }).IgnoreArguments();
    
  mocks.ReplayAll();

  controller.ControllerContext = contextMock;
  controller.ViewFactory = mockViewFactory;
  controller.Index();

  mocks.VerifyAll();
}

Please note that while the Rhino Mocks examples look like a lot of code, on a real project I would build up a custom set of Extension methods to effectively create a DSL (Domain Specific Language) for testing my controllers.

I’ve already started on this a bit. Hopefully together, we can build up a really nice library to make testing controllers much more fluid.
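As a small taste of what such a fluent helper might look like, here is a hypothetical extension method. ShouldEqual is an invented name for illustration; it is not part of ASP.NET MVC, Rhino Mocks, or any test framework mentioned here.

```csharp
using System;

// A tiny fluent assertion helper built as an extension method, the
// kind of building block a controller-testing DSL could grow from.
public static class ControllerTestExtensions
{
    public static void ShouldEqual(this string actual, string expected)
    {
        if (!string.Equals(actual, expected))
        {
            // Fail loudly with a descriptive message, like an Assert would.
            throw new InvalidOperationException(
                string.Format("Expected '{0}' but was '{1}'.", expected, actual));
        }
    }
}
```

With a helper like this in scope, the subclass test earlier could end with the more readable `controller.RedirectedAction.ShouldEqual("Index");` instead of a raw Assert call.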

In the meanwhile, we will also evaluate the sticking points when it comes to writing tests and do our part to reduce the friction for TDD scenarios.

 

Tags: ASP.NET MVC, TDD

asp.net, code, asp.net mvc

UPDATE: Much of this post is out of date with the latest versions of MVC. We long since removed the ControllerAction attribute.

Note: If you hate reading and just want the code, it is at the bottom.

Eons ago, I was a youngster living in Spain watching my Saturday morning cartoons when my dad walked in bearing freshly made taquitos and a small cup of green stuff. The taquitos looked delicious, but I was appalled at the green stuff.

Was this some kind of joke? My dad just wanted me to taste it, but I refused because I absolutely knew it would suck just by looking at it. The green stuff, of course, was guacamole, which I love by the truckload now.

[Photo: guacamole]

With all the code samples and blog posts published about the ASP.NET MVC Framework, there’s been some debate about the big picture stuff.

  • Should I be looking to migrate to this? (Depends)
  • Will this replace Web Forms? (NO!)
  • Is this feature even necessary? (I sure think so!)

Interestingly enough, the most passionate debate I’ve seen is not around these big picture questions, but is centered around very specific detailed design decisions. After all, we are software developers, and if there’s one thing software developers love to do, it’s debate the design and architecture of code.

I’m no exception to this rule and admit I rather enjoy it, sometimes to the point of absurdity and pointlessness. At the end of the day, however, the framework developer has to make a decision and move forward so he or she can go home. These choices will never make everyone happy because there is no such thing as a perfect design that satisfies everyone.

That doesn’t mean we give up trying though!

Hopefully the quality of feedback the framework designer receives pushes that designer to reevaluate assumptions and either reinforces the decision, or provides insight for an even better decision.

One design decision in particular that seems to have drawn a somewhat disproportionate amount of attention is the decision to require a [ControllerAction] attribute on public methods within a controller class that are meant to be web-visible actions. There was much discussion in various mailing lists and in some blog posts before the CTP was released.

This post is not going to rehash or address these concerns. If I think it would help, perhaps I will write a follow-up post explaining some of the reasoning behind this decision. That would give those who feel strongly about it a chance to make a well-reasoned, point-by-point refutation should they wish.

I worry about focusing too much on this one issue. I don’t want it to become such a hang up that it disproportionately dominates discussion and feedback at the expense of gathering valuable feedback on other areas of the framework. There is much more to the framework than this one issue.

At the same time, I do understand this issue is about more than a single attribute. It’s about applying a design philosophy centered around convention over configuration, and where it works well and where it doesn’t.

Like guacamole, I hope that critics of this particular issue don’t judge it by sight alone and give it a real honest try. See if it makes as big a difference as you think. Maybe it does. Maybe it doesn’t. At least you’ve tasted it. Feedback based on trying it out for a while is more valuable and potent than feedback based just on seeing sample code.

Please understand that I’m not dismissing feedback based on what you have seen. It certainly is valuable and much of it has been incorporated and discussed in our design meetings. Some of it has led to changes. All I am saying is that as valuable as that feedback is, feedback based on usage is even more valuable.

The Convention Controller

However, if you still hate it, I have a little workaround for you :). I’ve written a custom controller base class that allows for a more “convention over configuration” approach to building your controllers named ConventionController.

By inheriting from this controller instead of the vanilla Controller class, you no longer need to add the [ControllerAction] attribute to every public method. You also don’t need to call RenderView if you name your view the same name as the action. Of course this means you cannot use strongly typed views and must use the ViewData property bag to pass data to the view.

So instead of writing your controller like this:

public class HomeController : Controller
{
  [ControllerAction]
  public void Index()
  {
    //Your action logic
    RenderView("Index");
  }
}

Using my class, you could write it like this:

public class HomeController : ConventionController
{
  public void Index()
  {
    //Your action logic
  }
}

The key point in posting this code is to demonstrate how easy it is to override the behavior we bake in with something more to your liking. When you look at the code, you will see it wasn’t rocket science to do what I did. Extending the framework is quite easy.

We may not provide the exact out of box experience that you want, but we do try and give you the tools to control your own destiny with this framework and provide you with the power of choice.
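To give a flavor of the technique without reproducing the actual ConventionController (which you should get from the download below), here is a standalone, purely illustrative sketch of convention-based dispatch via reflection. The ConventionDispatcher and SampleController names are invented for this example and are not part of the framework or the sample code.

```csharp
using System.Reflection;

// The core reflection trick behind convention-based controllers:
// treat any public parameterless method as an action, no attribute needed.
public static class ConventionDispatcher
{
    public static bool TryInvokeAction(object controller, string actionName)
    {
        // Case-insensitive lookup so "index" in a URL finds Index().
        MethodInfo method = controller.GetType().GetMethod(
            actionName,
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);

        if (method == null || method.GetParameters().Length != 0)
            return false; // no matching public parameterless method

        method.Invoke(controller, null);
        return true;
    }
}

// A stand-in "controller" used to demonstrate the dispatcher.
public class SampleController
{
    public bool IndexCalled;

    public void Index() { IndexCalled = true; }
}
```

The real ConventionController does more (it also renders the view matching the action name), but the dispatch idea is the same.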

I will look into adding this to the MVC Contrib project started by some community members. In the meanwhile, if you like this approach or style of building controllers, you can either add the ConventionController.cs class to your own project or compile it into a separate assembly and drop it into your own projects.

To get to this class, download the example code.

Tags: ASP.NET MVC, Convention over Configuration, ControllerAction

asp.net, code, asp.net mvc, tdd

One of the guiding principles in the design of the new ASP.NET MVC Framework is enabling TDD (Test Driven Development) when building a web application. If you want to follow along, this post makes use of ASP.NET MVC CodePlex Preview 4 which you’ll need to install from CodePlex. I’ll try and keep this post up to date with the latest releases, but it may take me time to get around to it.

This post will provide a somewhat detailed walk-through of building a web application in a Test Driven manner while also demonstrating one way to integrate a Dependency Injection (DI) Framework into an ASP.NET MVC app. At the very end, you can download the code.

I chose StructureMap 2.0 for the DI framework simply because I’m familiar with it and it requires very little code and configuration. If you’re interested in an example using Spring.NET, check out Fredrik Normen’s post. I’ll try and post code examples in the future using Castle Windsor and ObjectBuilder.

Start Me Up! with apologies to The Rolling Stones

Once the CTP is released and you have it installed, open Visual Studio 2008 and select the File | New Project menu item. In the dialog, select the ASP.NET MVC Web Application project template.

[Screenshot: New Project dialog]

At this point, you should see the following unit test project selection dialog.

[Screenshot: unit test project selection dialog]

In a default installation, only the Visual Studio Unit Test project option is available. But MbUnit, xUnit.NET and others have installers available to get their test frameworks in this dialog.

As you might guess, I’ll start off building the canonical blog demo. I am going to start without using a database. We can always add that in later.

The first thing I want to do is add a few classes to the main project. I won’t add any implementation yet, I just want something to compile against. I’m going to add the following:

  • BlogController.cs to the Controllers directory
  • IPostRepository.cs to the Models directory
  • Post.cs to the Models directory
  • BlogControllerTests.cs to the MvcApplicationTest project

After I’m done, my project tree should look like this.

[Screenshot: project tree]

At this point, I want to implement just enough code so we can write a test. First, I define my repository interface.

using System;
using System.Collections.Generic;

namespace MvcApplication.Models
{
  public interface IPostRepository
  {
    void Create(Post post);

    IList<Post> ListRecentPosts(int retrievalCount);
  }
}

Not much of a blog post repository, but it’ll do for this demo. When you’re ready to write the next great blog engine, you can revisit this and add more methods.

Also, I’m going to leave the Post class empty for the time being. We can implement that later. Let’s implement the blog controller next.

using System;
using System.Web.Mvc;

namespace MvcApplication.Controllers 
{
  public class BlogController : Controller 
  {
    public ActionResult Recent() 
    {
      throw new NotImplementedException("Wait! Gotta write a test first!");
    }
  }
}

Ok, we better stop here. We’ve gone far enough without writing a unit test. After all, I’m supposed to be demonstrating TDD. Let’s write a test.

Let’s Get Test Started, In Here. with apologies to the Black Eyed Peas

Starting with the simplest test possible, I’ll make sure that the Recent action does not specify a view name because I want the default behavior to apply (this snippet assumes you’ve imported all the necessary namespaces).

[TestClass]
public class BlogControllerTests 
{
  [TestMethod]
  public void RecentActionUsesConventionToChooseView()
  {
    //Arrange
    var controller = new BlogController();

    //Act
    var result = controller.Recent() as ViewResult;

    //Assert
    Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
  }
}

When I run this test, the test fails.

[Screenshot: failing test]

This is what we expect; after all, we haven’t yet implemented the Recent method. This is the RED part of the RED, GREEN, REFACTOR rhythm of TDD.

Let’s go and implement that method.

public ActionResult Recent() 
{
  //Note we haven’t yet created a view
  return View();
}

Notice that at this point, we’re focusing on the behavior of our app first rather than focusing on the UI first. This is a stylistic difference between ASP.NET MVC and ASP.NET WebForms. Neither one is necessarily better than the other. Just a difference in approach and style.

Now when I run the unit test, it passes.

[Screenshot: passing test]

Ok, so that’s the GREEN part of the TDD lifecycle and a very very simple demo of TDD. Let’s move to the REFACTOR stage and start applying Dependency Injection.

It’s Refactor Time! with apologies to the reader for stretching this theme too far

In order to obtain the recent blog posts, I want to provide my blog controller with a “service” instance it can request those posts from. 

At this point, I’m not sure how I’m going to store my blog posts. Will I use SQL? XML? Stone Tablet?

Dunno. Don’t care… yet.

We can delay that decision till the last responsible moment. For now, I’ll create a repository abstraction to represent how I will store and retrieve blog posts in the form of an IPostRepository interface. We’ll update the blog controller to accept an instance of this interface in its constructor.

This is the dependency part of Dependency Injection. My controller now has a dependency on IPostRepository. The injection part refers to the mechanism you use to pass that dependency to the dependent class as opposed to having the class create that instance directly and thus binding the class to a specific implementation of that interface.

Here’s the change to my BlogController class.

public class BlogController : Controller 
{
  IPostRepository repository;

  public BlogController(IPostRepository repository) 
  {
    this.repository = repository;
  }

  public ActionResult Recent() 
  {
    //Note we haven’t yet created a view
    return View();
  }
}

Great. Notice I haven’t changed Recent yet. I need to write another test first. This will make sure that we pass the proper data to the view.

Note: If you’re following along, you’ll notice that the first test we wrote won’t compile. Comment it out for now. We can fix it later.

I’m going to use a mock framework, so before I write this test, I need to reference Moq.dll in my test project, downloaded from the Moq downloads page.

Note: I’ve included this assembly in the example project at the end of this post.

Here’s the new test.

[TestMethod]
public void BlogControllerPassesCorrectViewData() 
{
  //Arrange
  var posts = new List<Post>();
  posts.Add(new Post());
  posts.Add(new Post());

  var repository = new Mock<IPostRepository>();
  repository.Expect(r => r.ListRecentPosts(It.IsAny<int>())).Returns(posts);

  //Act
  BlogController controller = new BlogController(repository.Object);
  var result = controller.Recent() as ViewResult;

  //Assert
  var model = result.ViewData.Model as IList<Post>;
  Assert.AreEqual(2, model.Count);
}

What this test does is dynamically stub out an implementation of the IPostRepository interface. We tell the stub to return our list of two posts no matter what argument is passed to ListRecentPosts, and then pass that stub to our controller.

Note: We haven’t yet needed to implement this interface. We don’t need to yet. We’re interested in isolating our test to only test the logic in the action method, so we fake out the interface for the time being.

At this point, the test fails as we expect. We need to refactor Recent to do the right thing now.

public ActionResult Recent() 
{
  IList<Post> posts = repository.ListRecentPosts(10); //Doh! Magic Number!
  return View(posts);
}

Now when I run my test, it passes!

Inject That Dependency

But we’re not done yet. When I load up a browser and try to navigate to this controller action (on my machine, http://localhost:64701/blog/recent/), I get the following error page.

[Screenshot: “No parameterless constructor defined for this object” error page]

Well of course it errors out! By default, ASP.NET MVC requires that controllers have a public parameter-less constructor so that it can create an instance of the controller. But our constructor requires an instance of IPostRepository. We need someone, anyone, to pass such an instance to our controller.

StructureMap (or DI framework of your choice) to the rescue!

Note: Make sure to download and reference the StructureMap.dll assembly if you’re following along. I’ve included the assembly in the source code at the end of this post.

The first step I’m going to do is create a StructureMap.config file and add it to the root of my application. Here are the contents of the file.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <Assembly Name="MvcApplication" />
  <Assembly Name="System.Web.Mvc
              , Version=1.0.0.0
              , Culture=neutral
              , PublicKeyToken=31bf3856ad364e35" />

  <PluginFamily Type="System.Web.Mvc.IController" 
                Assembly="System.Web.Mvc
                  , Version=1.0.0.0
                  , Culture=neutral
                  , PublicKeyToken=31bf3856ad364e35">
    <Plugin Type="MvcApplication.Controllers.BlogController" 
            ConcreteKey="blog" 
            Assembly="MvcApplication" />
  </PluginFamily>

  <PluginFamily Type="MvcApplication.Models.IPostRepository" 
                Assembly="MvcApplication" 
                DefaultKey="InMemory">
    <Plugin Assembly="MvcApplication" 
            Type="MvcApplication.Models.InMemoryPostRepository" 
            ConcreteKey="InMemory" />
  </PluginFamily>

</StructureMap>

I don’t want to get bogged down in describing this file in too much detail. If you want a deeper understanding, check out the StructureMap documentation.

The bare minimum you need to know is that each PluginFamily node describes an interface type and a key for that type. A Plugin node describes a concrete type that will be used when an instance of the family type needs to be created by the framework.

For example, in the second PluginFamily node, the interface type is IPostRepository, which we defined. The concrete type is InMemoryPostRepository. So anytime we use StructureMap to construct an instance of a type that has a dependency on IPostRepository, StructureMap will pass in an instance of InMemoryPostRepository.
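Conceptually, the PluginFamily/Plugin mapping behaves like the toy resolver sketched below. This is an invented illustration of the idea only; in the real application StructureMap’s ObjectFactory does this work for you, driven by the XML configuration.

```csharp
using System;
using System.Collections.Generic;

// A toy interface-to-concrete-type registry, mimicking what the
// PluginFamily/Plugin configuration above tells StructureMap to do.
public class ToyResolver
{
    private readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>();

    public void Register<TPlugin, TConcrete>() where TConcrete : TPlugin, new()
    {
        map[typeof(TPlugin)] = typeof(TConcrete);
    }

    public TPlugin GetInstance<TPlugin>()
    {
        // Look up the registered concrete type and construct it.
        return (TPlugin)Activator.CreateInstance(map[typeof(TPlugin)]);
    }
}

// Tiny sample types to demonstrate resolution (invented for this sketch).
public interface IGreeter
{
    string Greet();
}

public class HelloGreeter : IGreeter
{
    public string Greet() { return "hello"; }
}
```

Registering HelloGreeter for IGreeter and then resolving IGreeter hands back a HelloGreeter, just as StructureMap hands back an InMemoryPostRepository for IPostRepository.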

Well, if that’s true, then we’d better create that class. Normally, I would use a SqlPostRepository, but for the purposes of this demo, we’ll store blog posts in memory using a static collection. We can always implement the SQL version later.

Note: This is where I would normally write tests for InMemoryPostRepository but this post is already long enough, right? Don’t worry, I included unit tests in the downloadable code sample.

public class InMemoryPostRepository : IPostRepository
{
  //simulates database storage
  private static IList<Post> posts = new List<Post>();

  public void Create(Post post)
  {
    posts.Add(post);
  }

  public IList<Post> ListRecentPosts(int retrievalCount)
  {
    if (retrievalCount < 0)
      throw new ArgumentOutOfRangeException("retrievalCount"
          , "Let’s be positive, ok?");

    IList<Post> recent = new List<Post>();
    int recentIndex = posts.Count - 1;
    for (int i = 0; i < retrievalCount; i++)
    {
      if (recentIndex < 0)
        break;
      recent.Add(posts[recentIndex--]);
    }
    return recent;
  }

  public static void Clear()
  {
    posts.Clear();
  }
}
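As a rough sketch of the kind of test included in the download (the Post class isn’t shown in this post, so its Title property here is assumed for illustration), ListRecentPosts hands back posts in most-recent-first order:

```csharp
using System.Collections.Generic;
using MvcApplication.Models;

public static class InMemoryPostRepositoryDemo
{
    public static void Main()
    {
        InMemoryPostRepository.Clear(); // reset the static "database"
        var repository = new InMemoryPostRepository();

        repository.Create(new Post { Title = "first" });  // Title is assumed
        repository.Create(new Post { Title = "second" });
        repository.Create(new Post { Title = "third" });

        // ListRecentPosts walks backwards from the end of the list,
        // so the newest post comes back first.
        IList<Post> recent = repository.ListRecentPosts(2);
        // recent[0].Title is "third"; recent[1].Title is "second"
    }
}
```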

Quick, We Need A Factory

We’re almost done. We now need to hook StructureMap up to ASP.NET MVC by writing our own controller factory. The controller factory is responsible for creating controller instances, and we can replace the built-in logic with our own.

public class StructureMapControllerFactory : DefaultControllerFactory
{
  protected override 
    IController CreateController(RequestContext requestContext, string controllerName) 
  {
    try
    {
      string key = controllerName.ToLowerInvariant();
      return ObjectFactory.GetNamedInstance<IController>(key);
    }
    catch (StructureMapException)
    {
      //Use the default logic.
      return base.CreateController(requestContext, controllerName);
    }
  }
}

Finally, we wire it all up together by adding the following method call within the Application_Start method in Global.asax.cs.

protected void Application_Start() {
  ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());

  RegisterRoutes(RouteTable.Routes);
}

And we’re done! Now that we have hooked up the dependency injection framework into our application, we can revisit our site in the browser (after compiling) and we get…

View Not Found error message

Excellent! Despite the Yellow Screen of Death here, this is a good sign. We know our dependency is getting injected because this is a different error message than we were getting before. This one in particular is informing us that we haven’t created a view for this action. So we need to create a view.

Sorry! Out of scope. Not in the spec.

I leave it as an exercise for the reader to create a view for the page, or you can look at the silly simple one included in the source download.

Although this example was a ridiculously simple application, the same principles apply when building a larger app. Just take the techniques here and rinse, recycle, repeat your way to TDD nirvana.

To see the result of all this, download the source code.

code, tdd comments suggest edit

Sometimes when writing unit tests, you run into the case where you want to override the behavior of a specific method.

Here’s a totally contrived example I just pulled from my head to demonstrate this idea. Any similarity to specific real world scenarios is coincidental ;). Suppose we have this class we want to test.

public class MyController
{
  public void MyAction()
  {
      RenderView("it matches?");
  }

  public virtual void RenderView(string s)
  {
      throw new NotImplementedException("To ensure this method is overridden.");
  }
}

What we have here is a class with a public method MyAction that calls another virtual method, RenderView. We want to test the MyAction method and make sure it calls RenderView properly. But we don’t want the implementation of RenderView to execute because it will throw an exception. Perhaps we plan to implement that later.

Using A Partial Mock

There are two easy ways to test it. One is to create a partial mock using a Mocking framework such as Rhino Mocks.

[TestMethod]
public void DoMethodCallsRenderViewProperly()
{
  MockRepository mocks = new MockRepository();
  MyController fooMock = mocks.PartialMock<MyController>();
  Expect.Call(delegate { fooMock.RenderView("it matches?"); });
  mocks.ReplayAll();

  fooMock.MyAction();
  mocks.VerifyAll();
}

If you’re not familiar with mocking frameworks, what we’ve done here is dynamically create a proxy for our MyController class and override the behavior of the RenderView method by setting an expectation. Basically, the expectation is that the method is called with the string “it matches?”. At the end, we verify that all of our expectations were met.

This is a pretty neat way of testing abstract and non-sealed classes. If a method does something that would break the test, and you don’t want to deal with that, or you don’t care if that method even runs, you can use this technique.

However, there are two problems you might run into with this approach. First, VerifyAll doesn’t allow you to specify a message. That is a minor concern, but it’d be nice to supply an assert message there.

Secondly, and more importantly, what if RenderView is protected and not public? You won’t be able to use a partial mock (at least not using Rhino Mocks).

Using a Test-Specific Subclass

One approach is to use a test-specific subclass. I’ve used this test pattern many times before, but didn’t know there was a name for it till my colleague, Chris Tavares of the P&P group (no blog as far as I can tell), told me about the book xUnit Test Patterns: Refactoring Test Code.

In the book, the author categorizes various useful test techniques into groups of test patterns. The Test-Specific Subclass pattern addresses the situation described in this post. So looking at the above code, but assuming that RenderView is protected, we can still test it by doing the following.

[TestMethod]
public void DoMethodCallsRenderViewProperly()
{
  FooTestDouble fooDouble = new FooTestDouble();
  fooDouble.MyAction();
  Assert.AreEqual("it matches?", fooDouble.ReceivedArgument, "Did render the right view.");
}

private class FooTestDouble : MyController
{
  public string ReceivedArgument { get; private set; }

  protected override void RenderView(string s)
  {
      this.ReceivedArgument = s;
  }
}

All we did was write a class specific to this test called FooTestDouble. In that class we override the protected RenderView method and set a property with the passed in argument. Then in our test, we can simply check that the argument matches our expectation (and we get a human friendly assert message to boot).

Is this a valid test pattern?

Interestingly enough, I have shown this technique to some developers who told me it made them feel dirty (I’m not naming names). They didn’t feel this was a valid way to write a unit test. One complaint is that one shouldn’t have to inherit from a class in order to test that class.

So far, no one has given me a concrete reason behind this feeling that it is wrong. One complaint I’ve heard is that we are not testing the class, we are testing a derived class.

Sure, we’re technically testing a subclass and not the class itself, but we are in control of the subclass. We know that the behavior of the subclass is exactly the same except for what we chose to override.

Not only that, the same argument could be applied to using a partial mock. After all, what is the mocking example (which many feel is more appropriate) doing but implicitly generating a class that inherits from the class being tested, whereas this pattern explicitly inherits from it?

My own feeling on this is: I want to choose the technique that involves less code and is more understandable for any given situation. In some cases, using a mock framework does this. For example, in the first case, when RenderView is public, I like having my test fully self-contained. But in the second case, where RenderView is protected, I think the test-specific subclass is perfectly valid. The test-specific subclass is also great for those who are not familiar with a mocking framework.

While some guidelines around TDD and unit testing are designed to produce better tests (for example, the red-green approach and trying not to touch external resources such as the database) and better design, I don’t like to subscribe to arbitrary rules that only make writing tests harder and don’t seem to provide any measurable benefit based on some vague feeling of dirtiness.

Tags: TDD , Unit Testing , Test Specific Subclass

code, tdd comments suggest edit

Note that, in the same vein as Pelé, Ronaldinho, and Ronaldo, Joel has reached that Brazilian soccer-player level of stardom in the geek community and can pretty much go by just his first name. Admit it, you knew who I was referring to in the title. Admit it!

Please indulge me in a brief moment of hubris.

I was reading part 1 of Joel Spolsky’s talk at Yale that he gave on November 28 and came upon this quote on code provability…

The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program!

Hey, that sounds familiar! By coincidence, I wrote the following on November 16th, 12 days prior to Joel’s talk…

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker…

Sure, it isn’t exactly an original thought, but the timing seemed too coincidental. Could it really be? Could my post have inspired one small point in a talk given by someone of his stature?

For a moment, I allowed myself to dream like a true fanboy that Mr. Joel really did read my blog. Until I read this (cue sound of pots and pans crashing down)…

The geeks want to solve the problem automatically, using software. They propose things like unit tests, test driven development, automated testing, dynamic logic and other ways to “prove” that a program is bug-free.

Crestfallen, I realized Mr. Joel indeed did not read my post, in which I claimed…

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

To be fair, I wouldn’t call Joel a major TDD proponent, but I hoped he would understand that TDD is about improving the design of code and also providing confidence to make changes to the system. It’s no proof of correctness, but can be a proof of incorrectness when a test fails.

Oh well, one can dream.

Tags: TDD , Joel Spolsky , Code Provability

comments suggest edit

Oren Eini, aka Ayende, writes about his dissatisfaction with Microsoft reproducing the efforts of the OSS community. His post was sparked by the following thread in the ALT.NET mailing list:

Brad: If you’re simply angry because we had the audacity to make our own object factory with DI, then I can’t help you; the fact that P&P did ObjectBuilder does not invalidate any other object factory and/or DI container.

Ayende: No, it doesn’t. But it is a waste of time and effort.

Brad: In all seriousness: why should you care if I waste my time?

Ayende’s response is:

  • I care because it means that people are going to get a product that is a duplication of work already done elsewhere, usually with less maturity and flexibility.
  • I care because people are choosing Microsoft blindly, and that puts MS in a position of considerable responsibility.
  • I care because I see this as continued rejection of the community efforts and hard work.
  • I care because it, frankly, shows contempt to anything except what is coming from Microsoft.
  • I care because it so often ends up causing me grief.
  • I care because it is doing disservice to the community.

As a newly minted employee of Microsoft, it may seem like I am incapable of having a balanced opinion on this, but I am also an OSS developer and was so before I joined, so hopefully I am not so totally unbalanced ;).

I think his sentiment comes from certain specific efforts by Microsoft that, how can I put this delicately, sucked in comparison to the existing open source alternatives.

Two specific implementations come to mind, MS Test and SandCastle.

However, as much as I tend to enjoy and agree with much of what Ayende says in his blog, I have to disagree with him on this point that duplication of effort is the problem.

After all, open source projects are just as guilty of this duplication. Why do we need BlogEngine.NET when there is already Subtext? And why do we need Subtext when there is already DasBlog? Why do we need MbUnit when there is NUnit? For that matter, why do we need Monorail when there is Ruby on Rails or RhinoMocks when there is NMock?

I think Ayende is well suited to answer that question. When he created RhinoMocks, there was already an open source mocking framework out there, NMock. But NMock perhaps didn’t meet Ayende’s needs. Or perhaps he thought he could do better. In any case, he went out and duplicated the efforts of NMock, but in many (though maybe not all) ways, made it better. I personally love using RhinoMocks.

The thing is, there is no way for NMock or RhinoMocks to meet all the needs of every possible constituency for a mocking framework. Technical superiority isn’t always the deciding factor. Sometimes political realities come into play. For example, whether we like it or not, some companies won’t use open source software. In an environment like that, neither NMock nor RhinoMocks will make any headway, leaving the door open for yet another mocking framework to make a dent.

Projects that seem to duplicate efforts never make perfect copies. They each have a slightly different set of requirements they seek to address. In an evolutionary sense, each duplicate contains mutations. And like evolution, survival of the fittest ensues. Except this isn’t a global winner-takes-all, zero-sum game.

What works in one area might not survive in another area. Like the real world, niches form and that which can find a niche in which it is strong will survive in that niche.

I’m reminded of this when I read that the Opera Mini browser beats Apple Safari, Netscape, and Mozilla combined in Ukraine. Another reminder is how Google built yet another social platform that is really big in Brazil.

So again, Duplication Is Not The Problem. Competition is healthy. If anything, the problem, to stick with the evolution analogy, is that Microsoft’s sheer might gives its creations quite the head start, allowing them to survive where the same product would have died had it been released by a smaller company. We’ve seen this before when Microsoft let IE 6 rot on the vine, and it risks doing the same with IE 7. The fact of the matter is, Microsoft has a lot of influence.

So can we really fault Microsoft for duplicating efforts? Or only for doing a half-assed job of it sometimes? As I wrote before when I asked the question Should Microsoft Really Bundle Open Source Software?, I’d like to see some balance that both recognizes business realities that push Microsoft to duplicating community efforts, but at the same time support the community.

After all, Microsoft can’t let what is out there completely dictate its product strategy, but it also can’t ignore the open source ecosystem which is a boon to the .NET Framework.

Disclaimer: I shouldn’t need to say it, but to be clear, these are my opinions and not necessarily those of my employer.

Tags: Microsoft , Open Source , OSS

comments suggest edit

Despite an international team of committers to Subtext and the fact that MySpace China uses a customized version of Subtext for its blog, I am ashamed to say that Subtext’s support for internationalization has been quite weak.


True, I did once write that The Only Universal Language in Software is English, but I didn’t mean that English is the only language that matters, especially on the web.

One area that we need to improve is in dealing with international URLs. For example, if I’m a user in Korea, I should be able to write a post with a Korean domain and a Korean title and thus have a friendly URL like so:

http://하쿹.com/blog/안녕하십니까.aspx

(As an aside, roughly speaking, 하쿹 would be pronounced hah-kut. About as close as I can get to haacked which is pronounced like hackt.)

If you’re a kind soul, you will forgive us for punting on this issue for so long. After all, RFC 2396, which defines the syntax for Uniform Resource Identifiers (URI) only allows for a subset of ASCII (about 60 characters).

But then again, I’ve been hiding behind this RFC as an excuse for a while fully knowing there are workarounds. I have just been too busy to fix this.

There are actually two issues here: the hostname (aka domain name), which is quite restrictive and cannot be URL encoded, AFAIK; and the rest of the URL, which can be encoded.

The domain name issue is resolved by the diminutively named Punycode (described in RFC 3492). Punycode is a protocol for converting Unicode strings into the more limited set of ASCII characters for network host names.

For example, http://你好.com/ translates to http://xn--6qq79v.com/ in Punycode.

Fortunately, this issue is pretty easy to fix. Since the browser is responsible for converting the Unicode domain name in the URL to Punycode, all we need to do in Subtext is allow users to set up a hostname that contains Unicode, and we can then convert that to Punycode using something like the Punycode / IDN library for .NET 2.0. For this blog post, I used the web-based phlyLabs IDNA converter for converting Unicode to Punycode.
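If memory serves, the .NET Framework also ships an IdnMapping class in System.Globalization that performs this same conversion (check availability on your target framework version). A sketch of the idea:

```csharp
using System;
using System.Globalization;

public static class PunycodeDemo
{
    public static void Main()
    {
        // Converts a Unicode host name to its ASCII-compatible (Punycode) form.
        var idn = new IdnMapping();
        Console.WriteLine(idn.GetAscii("你好.com"));         // xn--6qq79v.com

        // ...and back again.
        Console.WriteLine(idn.GetUnicode("xn--6qq79v.com")); // 你好.com
    }
}
```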

The second issue is the rest of the URL. When you enter the title of a post in Subtext, we convert it to a human- and URL-friendly ASCII “slug”. For example, if you enter the title “I like lamp” for a blog post, Subtext creates the friendly URL ending with “i_like_lamp.aspx”.

We haven’t totally ignored international URLs. For international western languages, we have code that effectively replaces accented characters with a close ASCII equivalent. A couple of examples (there are more in our unit tests) are:

Åñçhòr çùè becomes Anchor_cue

Héllò wörld becomes Hello_world

Unfortunately for my Korean brethren, something like 안녕하십니까 becomes (empty string). Well that totally sucks!

The thing is, the simple solution in this case is to just allow the Unicode Korean word as the slug. Browsers will apply the correct URL encoding to the URL. Thus https://haacked.com/안녕하십니까/ would become a request for https://haacked.com/%EC%95%88%EB%85%95%ED%95%98%EC%8B%AD%EB%8B%88%EA%B9%8C/ and everything works just fine as far as I can tell. Please note that Firefox 2.0 actually replaces the Unicode string in the address bar with the encoded string, while IE7 displays the Unicode as-is but makes the request using the encoded URL (as confirmed by Fiddler).
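You can reproduce the encoding the browsers apply with Uri.EscapeDataString, which percent-encodes the UTF-8 bytes of the string (a quick sketch):

```csharp
using System;

public static class SlugEncodingDemo
{
    public static void Main()
    {
        // Percent-encodes the UTF-8 bytes of the Korean slug,
        // just like the browser does when requesting the URL.
        string encoded = Uri.EscapeDataString("안녕하십니까");
        Console.WriteLine(encoded);
        // %EC%95%88%EB%85%95%ED%95%98%EC%8B%AD%EB%8B%88%EA%B9%8C
    }
}
```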

For western languages in which we can do a decent enough conversion to ASCII, the benefit there is the URL remains somewhat readable and “friendlier” than a long URL encoded string. But for non-western scripts, we have no choice but to deal with these ugly URL encoded strings (at least in Firefox).

The interesting thing is, when researching how sites in China handle internationalized URLs, I discovered that in the same way we did, they simply punt on the issue. For example, http://baidu.com/, the most popular search engine in China last I checked, has English URLs.

Tags: URL , Localization , Punycode

code, tdd comments suggest edit

My friend (and former boss and business partner) Micah found this gem of a quote from Donald Knuth addressing code proofs.

Beware of bugs in the above code; I have only proved it correct, not tried it.

Micah writes more on the topic and reminds me of why I enjoyed working with him so much. He’s always been quite thoughtful in his approach to problems. And I’m not just saying that because he agrees with me. ;)

On another note, several commenters pointed out that one thing I didn’t mention before, but should have, is that verifying the quality of code is only one small aspect of unit testing and Test Driven Development.

The more important factor is that TDD is a design process. Employing TDD is one (not the only one, but I think it is a good one) approach for improving the design of code and especially the usability of your code. By usability, I mean from another developer’s perspective.

If I have to create twenty different objects in order to call a method on your class, your class is probably not very usable to other developers. TDD is one approach that forces you to find that out sooner, rather than later.

A code proof won’t necessarily find that “flaw” because it is not a flaw in logic.

Tags: TDD , Code Provability

code, tdd comments suggest edit

I’m currently doing some app building with ASP.NET MVC in which I try to cover a bunch of different scenarios. One scenario in particular I wanted to cover is approaching an application using a Test Driven Development approach. I especially wanted to cover using various Dependency Injection frameworks, to make sure everything plays nice.

Since I’ve already seen demos with Castle Windsor and Spring.NET, I wanted to give StructureMap a try. Here is the problem I’ve run into.

Say I have a class like so:

public class HomeController : IController
{
  MembershipProvider membership;
  public HomeController(MembershipProvider provider)
  {
    this.membership = provider;
  }
}

As you can see, this class has a dependency on the abstract MembershipProvider class, which is passed in via a constructor argument. In my unit tests, I can use RhinoMocks to dynamically create a mock that inherits from MembershipProvider and pass that mock to the controller class. It’s nice for testing.
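For the curious, that test setup looks roughly like this (a sketch using Rhino Mocks; HomeController is the class shown above):

```csharp
using System.Web.Security; // MembershipProvider
using Rhino.Mocks;

public static class HomeControllerTestSketch
{
    public static void Run()
    {
        // Dynamically generate a subclass of the abstract MembershipProvider.
        MockRepository mocks = new MockRepository();
        MembershipProvider provider = mocks.DynamicMock<MembershipProvider>();
        mocks.ReplayAll();

        // Inject the mock through the constructor.
        HomeController controller = new HomeController(provider);
        // ...exercise the controller, then call mocks.VerifyAll()
        // if you set up expectations.
    }
}
```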

But eventually, I need to use this class in a real app, and I would like a DI container to create the controller for me. Here is my StructureMap.config file with some details left out.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication" />
  </PluginFamily>
</StructureMap>

If I add an empty constructor to HomeController, this code allows me to create an instance of HomeController like so.

HomeController c = 
  ObjectFactory.GetNamedInstance<IController>("HomeController")
  as HomeController;

But when I remove the empty constructor, StructureMap cannot create an instance of HomeController. I would need to tell StructureMap (via StructureMap.config) how to construct an instance of MembershipProvider to pass into the constructor for HomeController.

Normally, I would just specify a type to instantiate as another PluginFamily entry. But what I really want to happen in this case is for StructureMap to call a method or delegate and use the value returned as the constructor argument.

In other words, I pretty much want something like this:

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication">
      <Instance>
        <Property Name="provider">
          <![CDATA[
            return Membership.Provider;
          ]]>
        </Property>
      </Instance>
    </Plugin>
  </PluginFamily>
</StructureMap>

The made up syntax I am using here is stating that when StructureMap is creating an instance of HomeController, execute the code in the CDATA section to get the instance to pass in as the constructor argument named provider.

Does anyone know if something like this is possible with any of the Dependency Injection frameworks out there? Whether via code or configuration?

Tags: TDD , Dependency Injection , IoC , StructureMap

code, tdd comments suggest edit

Frans Bouma wrote an interesting response to my last post, Writing Testable Code Is About Managing Complexity entitled Correctness Provability should be the goal, not Testability.

He states in his post:

When focusing on testability, one can fall into the trap of believing that the tests prove that your code is correct.

God, I hope not. Perhaps someone in theory could fall into that trap, but that person might also fall for the modestly priced bridge I have to sell them in the Bay Area. This seems like a straw man fallacy to me.

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

Instead, what you often hear testing proponents talk about is confidence. For example, in my post Unit Tests cost More To Write I make the following point (emphasis added):

They reduce the true cost of software development by promoting cleaner code with less bugs. They reduce the TCO by documenting how code works and by serving as regression tests, giving maintainers more confidence to make changes to the system.

Frans goes on to say (emphasis mine)…

Proving code to be correct isn’t easy, but it should be your main focus when writing solid software. Your first step should be to prove that your algorithms are correct. If an algorithm fails to be correct, you can save yourself the trouble typing the executable form of it (the code representing the algorithm) as it will never result in solid correct software.

Before I address this, let me tell you a short story from my past. I promise it’ll be brief.

When I was a young, bright-eyed, bushy-tailed math major in college, I took a fantastic class called Differential Equations that covered equations which describe continuous phenomena in one or more dimensions.

During the section on partial differential equations, we wracked our brains going through crazy mental gymnastics in order to find an explicit formula that solved a set of equations with multiple independent variables. With these techniques, it seemed like we could solve anything. Until of course, near the end of the semester when the cruel joke was finally revealed.

The sets of equations we solved were heavily contrived examples. As difficult as they were to solve, it turns out that only the most trivial sets of differential equations can be solved by an explicit formula. All that mental gymnastics up to that point was essentially just mental masturbation. Real-world phenomena are hardly ever described by sets of equations that line up so nicely.

Instead, mathematicians use techniques like Numerical Analysis (the Monte Carlo Method is one classic example) to attempt to find approximate solutions with reasonable error bounds.

Disillusioned, I never ended up taking Numerical Analysis (the next class in the series), choosing to try my hand at studying stochastic processes as well as number theory at that point.

The point of this story is that trying to prove the correctness of computer programs is a lot like trying to solve a set of partial differential equations. It works great on small trivial programs, but is incredibly hard and costly on anything resembling a real world software system.

Not only that, what exactly are you trying to prove?

In mathematics, a mathematician will take a set of axioms, a postulate, and then spend years converting caffeine into a long beard (whether you are male or female) and little scribbles on paper (which mathematicians call equations) that hopefully result in a proof that the postulate is true. At that point, the postulate becomes a theorem.

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker, because the only unambiguous complete spec for what an application does is code. Even then, you have to ask, how do you prove that the specification does what the customers want?

This is why proving code should not be your main focus, unless, maybe, you write code for the Space Shuttle.

Like differential equations, it’s too costly to explicitly prove code in all but the most trivial cases. If you are an algorithms developer writing the next sort algorithm, perhaps it is worth your time to prove your code because that cost is amortized over the life of such a small reusable unit of code. You have to look at your situation and see if the cost is worth it.

For large real world data driven applications, proving code correctness is just not reasonable because it calls for an extremely costly specification process, whereas tests are very easy to specify and cheap to write and maintain.

This is somewhat more obvious with an example. Suppose I asked you to write a program that could break a CAPTCHA. Writing the program is very time consuming and difficult. But first, before you write the program, what if I asked you to write some tests for the program you will write. That’s trivially easy, isn’t it? You just feed in some CAPTCHA images and then check that the program spits out the correct value. How do you know your tests are correct? You apply the red-green-refactor cycle along with the principle of triangulation. ;)

As we see, testing is easy. Now, how would you prove the program’s correctness? Is that as easy as testing it?

As I said before, testing doesn’t give you a proof of correctness, but like the approaches of numerical analysis, it can give you an approximate proof with reasonable error bounds, aka, a confidence factor. The more tests, the smaller the error bounds and the better your confidence. This is a way better use of your time than trying to prove everything you write.

Technorati Tags: TDD , Math , Code Correctness

code, tdd comments suggest edit

When discussing the upcoming ASP.NET MVC framework, one of the key benefits I like to tout is how this framework will improve testability of your web applications.

The response I often get is the same question I get when I mention patterns such as Dependency Injection, IoC, etc…

Why would I want to do XYZ just to improve testability?

I think to myself in response

Just to improve testability? Isn’t that enough of a reason!

That’s how excited I am about test driven development. Testing seems enough of a reason for me!

Of course, when I’m done un-bunching my knickers, I realize that despite all the benefits of unit-testable code, the real benefit of testable code is how it helps handle software development’s biggest problem since time immemorial: managing complexity.

There are two ways that testable code helps manage complexity.

1. It directly helps manage complexity, assuming that you not only write testable code but also write the unit tests to go along with it. With decent code coverage, you now have a nice suite of regression tests, which helps manage complexity by alerting you to potential bugs introduced during maintenance of a large project long before they become a problem in production.

2. It indirectly helps manage complexity, because writing testable code forces you to employ the principle of separation of concerns.

Separating concerns within an application is an excellent tool for managing complexity when writing code. And writing code is complex!

The MVC pattern, for example, separates an application into three main components: the Model, the View, and the Controller. Not only does it separate these three components, it outlines the loosely coupled relationships and communication between these components.

Key Benefits of Separating Concerns

This separation combined with loose coupling allows a developer to manage complexity because it allows the developer to focus on one aspect of the problem at a time.

Martin Fowler writes about this benefit in his paper, Separating User Interface Code (pdf):

A clear separation lets you concentrate on each aspect of the problem separately—and one complicated thing at a time is enough. It also lets different people work on the separate pieces, which is useful when people want to hone more specialized skills.

The ability to divide work into parallel tracks is a great benefit of this approach. In a well separated application, if Alice needs time to implement the controller or business logic, she can quickly stub out the model so that Bob can work on the view without being blocked by Alice. Meanwhile, Alice continues developing the business layer without the added stress that Bob is waiting on her.
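
Here is a minimal sketch of that Alice-and-Bob scenario (my own illustration, with made-up names and types, in Python rather than C# for brevity). Because the view depends only on an abstract model, a quick stub unblocks Bob immediately:

```python
# Illustrative only: the view depends on an abstraction, so a stub model
# lets view development proceed before the real model exists.

from abc import ABC, abstractmethod

class ProductModel(ABC):
    """The contract Alice and Bob agree on up front."""
    @abstractmethod
    def product_names(self):
        ...

class StubProductModel(ProductModel):
    """Alice's five-minute stub so Bob isn't blocked on the real model."""
    def product_names(self):
        return ["Widget", "Gadget"]

def render_product_list(model):
    """Bob's view code, developed and tested against the stub."""
    items = "".join(f"<li>{name}</li>" for name in model.product_names())
    return f"<ul>{items}</ul>"

print(render_product_list(StubProductModel()))
# → <ul><li>Widget</li><li>Gadget</li></ul>
```

When Alice finishes the real model, it slots in where the stub was, and Bob’s view code doesn’t change at all. That’s the loose coupling paying for itself.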

Bring it home

The MVC example above talks about separation of concerns on a large architectural scale. But the same benefits apply on a much smaller scale outside of the MVC context. And all of these benefits can be yours as a side-effect of writing testable code.

So to summarize, when you write testable code, whether it is via Test Driven Development (TDD) or Test After Development, you get the following side effects.

  1. A nice suite of regression tests.
  2. Well separated code that helps manage complexity.
  3. Well separated code that helps enable concurrent development.

Compare that list of side effects with the list of side effects of the latest pharmaceutical wonder drug for curing restless legs or whatever. What’s not to like!?

Technorati Tags: TDD, Separation of Concerns, MVC, Loose Coupling, Design

comments suggest edit

In his book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (title long enough for you?), James Surowiecki argues that decisions made by a crowd are generally better than those made by any single individual in the group.

Crowd

Seems like a lot of theoretical hogwash until you see this thesis put to action in the real world via a prediction market. A prediction market (also called a decision market) is, as its name implies, a market created specifically to predict the likelihood of a specific outcome. They are most successful when participants have something invested in their decisions. Money often does the trick, but is not necessary.

The Hollywood Stock Exchange, a prediction market focused on the film industry, demonstrated the potential accuracy of such markets when it went 32 for 39 in predicting 2006’s Oscar nominees (in the major categories, the only ones people really care about). They didn’t do so badly in 2007 either.

It’s no surprise then that there is a lot of research going on in predictive markets. Preliminary results show they seem to work very well when properly structured.
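
A toy simulation (mine, and very much *not* a real market) illustrates the statistical engine underneath: when estimates are independent, their individual errors tend to cancel in the aggregate.

```python
# Toy illustration of the wisdom-of-crowds effect: averaging many
# independent noisy estimates cancels out individual error.
# All numbers here are made up for the demonstration.

import random

random.seed(42)  # deterministic for the example
TRUE_VALUE = 100.0  # the quantity the "market" is trying to estimate

# 1,000 independent participants, each off by random noise.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)
crowd_error = abs(sum(guesses) / len(guesses) - TRUE_VALUE)

# The aggregate error comes out far smaller than the typical individual error.
print(round(avg_individual_error, 1), round(crowd_error, 1))
```

Note the load-bearing word: *independent*. If the participants influence each other, the errors stop canceling, which is exactly the failure mode of groupthink discussed below.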

Contrast the success of prediction markets with another group decision-making process. A recent comment from a reader got me thinking on this subject, as he described one of its symptoms.

If you try to satisfy all parties, you’ll end up with mediocre product that does not satisfy everybody.

An even stronger way to state this is that trying to satisfy everyone leaves almost everyone unsatisfied.

This is one typical result of a decision making process known as Groupthink. Another colorful term used to describe it is decision by committee. These terms do not, by any means, have a positive connotation.

Groupthink

There seems to be a paradox here between the power of markets and the ineptitude of groupthink. Why do these two somewhat similar means of decision making vary so widely in accuracy?

Perhaps counter-intuitively, it has a lot to do with the coercive effects of seeking consensus. Groupthink is often hamstrung by the drive toward consensus, whereas participants in an effective market are largely independent of one another.

Before I continue, let me head off the inevitable comment about how I am trying to promote anarchy (and save your fingers some typing) by pointing out that this is not an indictment of all consensus-driven decision making. In many cases, consensus is absolutely required. It doesn’t help if a decision is optimal but nobody is willing to participate in the result. Consensus is especially essential in one particular form of decision making, negotiations, which is well covered by the book Getting to Yes: Negotiating Agreement Without Giving In.

The problem lies in applying negotiation to decisions that should not be made by consensus. At least, not without first gathering facts in a dispassionate, non-consensus-based manner, so that negotiations can proceed from the data gathered.

So what makes groupthink particularly inadequate? Social psychologist Irving Janis identified the following eight symptoms of groupthink that hamstring good decision making:

  1. Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
  2. Collective rationalization – Members discount warnings and do not reconsider their assumptions.
  3. Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
  4. Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.
  5. Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
  6. Self-censorship – Doubts and deviations from the perceived group consensus are not expressed.
  7. Illusion of unanimity – The majority view and judgments are assumed to be unanimous.
  8. Self-appointed ‘mindguards’ – Members protect the group and the leader from information that is problematic or contradictory to the group’s cohesiveness, view, and/or decisions.

One symptom not listed here, though it probably fits in with #6, is the fear of causing offense.

One common theme in the list is that many of these symptoms result from seeking some form of consensus within the group. The net effect is that all of these symptoms provide incentives not to deviate from the group.

Prediction markets counter these symptoms through independence and proper incentives. Because the participants in a market are not directly working together, there is no way for peer pressure to work its way into the decision-making process. There is no fear of offending others in such a market. And putting something valuable (such as money) on the line, or having a huge stake in the correct outcome (such as predicting a disaster), has an eye-opening habit of making people truthful and helps ward off self-censorship.

Another necessary factor in the success of markets, as pointed out in an insightful comment to this post, is diversity. Markets and committees that lack diversity of viewpoint fail to take advantage of all available information in a meaningful way and succumb to a form of tunnel vision. This phenomenon is very evident in a leader who surrounds him or herself with sycophants and thus makes decisions based on what he or she wants to hear rather than on the facts.

Prediction markets are not perfect and are not suitable for all decisions. In fact, they are probably not suitable for most decisions as the cost to set up a market isn’t justified for everyday decisions like whether you should get the bottle of beer or the pitcher of sangria.

However, understanding what makes predictive markets work and what the symptoms of groupthink look like can be of great benefit the next time you are making a decision within a group and start to see groupthink emerging.

Technorati Tags: Decision Making,Consensus,Predictive Markets,Groupthink

comments suggest edit

Last week I was busy in Las Vegas at the DevConnections/OpenForce conferences. Unlike that pithy but over-used catch-phrase, what happens at a conference in Vegas should definitely not stay in Vegas; it should be blogged (well, only the parts of the sessions that won’t get anyone in trouble).

It was my first time at DevConnections and I was not disappointed. This was also the first-ever OpenForce conference, put on by the DotNetNuke corporation, and from all appearances it was quite a success.

Carl Franklin and Rob Conery; Rob Conery and Rick Strahl

The first night, as I passed the hotel bar, I noticed Carl Franklin, well known for .NET Rocks, a podcast on .NET technologies. Since I had appeared on the show, it was nice to finally meet the guy in person.

Also met up with the Flyin’ Hawaiians, Rob Conery (from Kauai) and Rick Strahl (from Maui).

This is a big part of the value and fun in such conferences. The back-channel conversations on various topics that provide new insights you might not have otherwise received.

The next day, I attended the ASP.NET MVC talk given by Scott Hanselman and Eilon Lipton. The talk was well attended for a technology for which no CTP bits are even available yet, and quite a few people stuck around to ask questions. I also attended their next talk, part 3 of a 3-part series in which Scott Guthrie gave the first two parts.

Scott and Eilon are like the Dean Martin and Jerry Lewis of conference talks (I won’t say which is which). They play off each other quite well in giving a humorous, but informative, talk.

The best part for me was watching Eilon squirm as a star-struck attendee asked to have her picture taken with him after having done the same with Scott Hanselman. I think we expect this sort of geek-worship with Scott, but Eilon seemed genuinely uncomfortable. She was quite excited to get a pic with the guy who wrote **the UpdatePanel**!

An admirer gazes at Scott; an admirer gazes at Eilon

One experience that was particularly fun: I got to go around the exhibitor floor, camera-man in tow, interviewing attendees about their impressions of the conference. Normally such work goes to charismatic and well-spoken guys like Carl Franklin and Scott Hanselman, but both were too busy at the time, so Scott pointed the cameraman to me.

I try to remain open to new experiences, even ones that take me out of my comfort zone (I have stage fright). I walked around with a microphone interviewing people and saw that the attendees really love this conference and felt they got a lot out of it. At least the ones willing to talk about it on camera. ;)

When asked what their favorite talk was, a couple attendees mentioned the MVC talk, which was good to hear.


While I was on the floor, they held a drawing and gave away a Harley. The winner happened to be a Harley-loving motorcycle rider, so it worked out pretty well.

On Wednesday and Thursday, I participated in two panels at the OpenForce conference on the topic of open source. They taped these panels, so hopefully the videos will be up soon. I’ll write more about what we discussed later. Right now I need to get some sleep; after leaving Vegas I flew out the next day to Redmond, and it is very late.

Technorati Tags: DevConnections,OpenForce

asp.net, asp.net mvc comments suggest edit

While at DevConnections/OpenForce, I had some great conversations with various people on the topic of ASP.NET MVC. While many expressed their excitement about the framework and asked when they could see the bits (soon, I promise), several had mixed feelings about it. I relish these conversations because they highlight the areas where we need to put in more work and help me become a better communicator about the framework.

One thing I’ve noticed is that most of my conversations focused too much on the MVC part of the equation. Dino Esposito (whom I met very briefly) wrote an insightful post pointing out that it isn’t the MVC part of the framework that is most compelling:

So what’s IMHO the main aspect of the MVC framework? It uses a REST-like approach to ASP.NET Web development. It implements each request to the Web server as an HTTP call to something that can be logically described as a “remote service endpoint”. The target URL contains all that is needed to identify the controller that will process the request up to generating the response–whatever response format you need. I see more REST than MVC in this model. And, more importantly, REST is a more appropriate pattern to describe what pages created with the MVC framework actually do.
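
To illustrate the dispatch style Dino is describing, here is a rough, framework-agnostic sketch (in Python; the route pattern and names are illustrative, not the actual ASP.NET MVC routing API). The URL alone names the controller, the action, and the parameters:

```python
# Illustrative sketch of URL-as-service-endpoint dispatch: the path itself
# identifies the controller and action that will process the request.
# This is NOT the real ASP.NET MVC routing API, just the idea behind it.

def parse_route(path: str, pattern: str = "{controller}/{action}/{id}"):
    """Map a request path onto the pattern's placeholders, segment by segment."""
    names = [segment.strip("{}") for segment in pattern.split("/")]
    values = path.strip("/").split("/")
    return dict(zip(names, values))

print(parse_route("/products/detail/5"))
# → {'controller': 'products', 'action': 'detail', 'id': '5'}
```

Contrast that with classic Web Forms, where a URL points at a physical .aspx page; here the URL is a logical description of what should handle the request, which is what gives the model its REST-like feel.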

In describing the framework, I’ve tended to focus on the MVC part of it and the benefits in separation of concerns and testability. However, others have pointed out that by keeping the UI thin, a good developer could do all these things without MVC. So what’s the benefit of the MVC framework?

I agree, yet I still think MVC provides even greater support for Test Driven Development than we’ve had before, both in substance and in style, so even in that regard there’s a benefit. I need to elaborate on this point, but I’ll save that for another time.

But MVC is not the only benefit of the MVC framework. I think the REST-like nature is a big selling point. Naturally, the next question is, well why should I care about that?

Fair question. Many developers won’t care and perhaps shouldn’t. In those cases, this framework might not be a good fit. Some developers do care and desire a more REST-like approach. In this case, I think the MVC framework will be a good fit.

This is not a satisfying answer, I know. In a future post, I hope to answer the question better: in which situations should developers care about REST, and in which should they not? For now, I really should get some sleep. Over and out.

Technorati Tags: ASP.NET MVC,REST