asp.net, code, asp.net mvc, tdd

One of the guiding principles in the design of the new ASP.NET MVC Framework is enabling TDD (Test Driven Development) when building a web application. If you want to follow along, this post makes use of ASP.NET MVC CodePlex Preview 4, which you’ll need to install from CodePlex. I’ll try to keep this post up to date with the latest releases, but it may take me a while to get around to it.

This post will provide a somewhat detailed walk-through of building a web application in a Test Driven manner while also demonstrating one way to integrate a Dependency Injection (DI) Framework into an ASP.NET MVC app. At the very end, you can download the code.

I chose StructureMap 2.0 for the DI framework simply because I’m familiar with it and it requires very little code and configuration. If you’re interested in an example using Spring.NET, check out Fredrik Normen’s post. I’ll try to post code examples in the future using Castle Windsor and ObjectBuilder.

Start Me Up! with apologies to The Rolling Stones

Once you have the preview installed, open Visual Studio 2008 and select the File | New Project menu item. In the dialog, select the ASP.NET MVC Web Application project template.

[Screenshot: the New Project dialog]

At this point, you should see the following unit test project selection dialog.

[Screenshot: the unit test project selection dialog]

In a default installation, only the Visual Studio Unit Test project option is available, but MbUnit, xUnit.net, and others have installers available that add their test frameworks to this dialog.

As you might guess, I’ll start off building the canonical blog demo. I am going to start without using a database. We can always add that in later.

The first thing I want to do is add a few classes to the main project. I won’t add any implementation yet, I just want something to compile against. I’m going to add the following:

  • BlogController.cs to the Controllers directory
  • IPostRepository.cs to the Models directory
  • Post.cs to the Models directory
  • BlogControllerTests.cs to the MvcApplicationTest project

After I’m done, my project tree should look like this.

[Screenshot: the project tree]

At this point, I want to implement just enough code so we can write a test. First, I define my repository interface.

using System;
using System.Collections.Generic;

namespace MvcApplication.Models
{
  public interface IPostRepository
  {
    void Create(Post post);

    IList<Post> ListRecentPosts(int retrievalCount);
  }
}

Not much of a blog post repository, but it’ll do for this demo. When you’re ready to write the next great blog engine, you can revisit this and add more methods.

Also, I’m going to leave the Post class empty for the time being; we can flesh it out later. Since we need something to compile against, here’s the placeholder:
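namespace MvcApplication.Models
{
  public class Post
  {
    //Intentionally empty for now. We'll add properties when we need them.
  }
}

With that in place, let’s implement the blog controller next.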

using System;
using System.Web.Mvc;

namespace MvcApplication.Controllers 
{
  public class BlogController : Controller 
  {
    public ActionResult Recent() 
    {
      throw new NotImplementedException("Wait! Gotta write a test first!");
    }
  }
}

OK, we’d better stop here. We’ve gone far enough without writing a unit test; after all, I’m supposed to be demonstrating TDD. Let’s write a test.

Let’s Get Test Started, In Here. with apologies to the Black Eyed Peas

Starting with the simplest test possible, I’ll make sure that the Recent action does not specify a view name because I want the default behavior to apply (this snippet assumes you’ve imported all the necessary namespaces).

[TestClass]
public class BlogControllerTests 
{
  [TestMethod]
  public void RecentActionUsesConventionToChooseView()
  {
    //Arrange
    var controller = new BlogController();

    //Act
    var result = controller.Recent() as ViewResult;

    //Assert
    Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
  }
}

When I run this test, the test fails.

[Screenshot: the failing test in the test results window]

This is what we expect; after all, we haven’t yet implemented the Recent method. This is the RED part of the RED, GREEN, REFACTOR rhythm of TDD.

Let’s go and implement that method.

public ActionResult Recent() 
{
  //Note we haven’t yet created a view
  return View();
}

Notice that at this point, we’re focusing on the behavior of our app first rather than focusing on the UI first. This is a stylistic difference between ASP.NET MVC and ASP.NET WebForms. Neither one is necessarily better than the other. Just a difference in approach and style.

Now when I run the unit test, it passes.

[Screenshot: the passing test in the test results window]

OK, so that’s the GREEN part of the TDD lifecycle and a very, very simple demo of TDD. Let’s move to the REFACTOR stage and start applying Dependency Injection.

It’s Refactor Time! with apologies to the reader for stretching this theme too far

In order to obtain the recent blog posts, I want to provide my blog controller with a “service” instance it can request those posts from. 

At this point, I’m not sure how I’m going to store my blog posts. Will I use SQL? XML? Stone Tablet?

Dunno. Don’t care… yet.

We can delay that decision till the last responsible moment. For now, the IPostRepository interface we defined earlier serves as the repository abstraction, representing how blog posts are stored and retrieved without committing to a storage mechanism. We’ll update the blog controller to accept an instance of this interface in its constructor.

This is the dependency part of Dependency Injection. My controller now has a dependency on IPostRepository. The injection part refers to the mechanism you use to pass that dependency to the dependent class as opposed to having the class create that instance directly and thus binding the class to a specific implementation of that interface.

Here’s the change to my BlogController class.

public class BlogController : Controller 
{
  IPostRepository repository;

  public BlogController(IPostRepository repository) 
  {
    this.repository = repository;
  }

  public ActionResult Recent() 
  {
    //Note we haven’t yet created a view
    return View();
  }
}

Great. Notice I haven’t changed Recent yet. I need to write another test first. This will make sure that we pass the proper data to the view.

Note: If you’re following along, you’ll notice that the first test we wrote won’t compile. Comment it out for now. We can fix it later.

I’m going to use a mock framework, so before I write this test, I need to reference Moq.dll in my test project, downloaded from the Moq downloads page.

Note: I’ve included this assembly in the example project at the end of this post.

Here’s the new test.

[TestMethod]
public void BlogControllerPassesCorrectViewData() 
{
  //Arrange
  var posts = new List<Post>();
  posts.Add(new Post());
  posts.Add(new Post());

  var repository = new Mock<IPostRepository>();
  repository.Expect(r => r.ListRecentPosts(It.IsAny<int>())).Returns(posts);

  //Act
  BlogController controller = new BlogController(repository.Object);
  var result = controller.Recent() as ViewResult;

  //Assert
  var model = result.ViewData.Model as IList<Post>;
  Assert.AreEqual(2, model.Count);
}

What this test is doing is dynamically stubbing out an implementation of the IPostRepository interface. We then tell it that no matter what argument is passed to ListRecentPosts, it should return our list of two posts. We can then pass that stub to our controller.

Note: We haven’t yet needed to implement this interface. We don’t need to yet. We’re interested in isolating our test to only test the logic in the action method, so we fake out the interface for the time being.

At this point, the test fails as we expect. We need to refactor Recent to do the right thing now.

public ActionResult Recent() 
{
  IList<Post> posts = repository.ListRecentPosts(10); //Doh! Magic Number!
  return View(posts);
}

Now when I run my test, it passes!
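This is also a good time to revive the first test we commented out earlier. One way to fix it, in the same style as the test above, is to hand the controller a mocked repository (a sketch):

[TestMethod]
public void RecentActionUsesConventionToChooseView()
{
  //Arrange
  var repository = new Mock<IPostRepository>();
  repository.Expect(r => r.ListRecentPosts(It.IsAny<int>()))
    .Returns(new List<Post>());
  var controller = new BlogController(repository.Object);

  //Act
  var result = controller.Recent() as ViewResult;

  //Assert
  Assert.IsTrue(String.IsNullOrEmpty(result.ViewName));
}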

Inject That Dependency

But we’re not done yet. When I load up a browser and try to navigate to this controller action (on my machine, http://localhost:64701/blog/recent/), I get the following error page.

[Screenshot: “No parameterless constructor defined for this object” error page]

Well, of course it errors out! By default, ASP.NET MVC requires that controllers have a public parameterless constructor so that it can create an instance of the controller. But our constructor requires an instance of IPostRepository. We need someone, anyone, to pass such an instance to our controller.

StructureMap (or DI framework of your choice) to the rescue!

Note: Make sure to download and reference the StructureMap.dll assembly if you’re following along. I’ve included the assembly in the source code at the end of this post.

The first step I’m going to do is create a StructureMap.config file and add it to the root of my application. Here are the contents of the file.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <Assembly Name="MvcApplication" />
  <Assembly Name="System.Web.Mvc
              , Version=1.0.0.0
              , Culture=neutral
              , PublicKeyToken=31bf3856ad364e35" />

  <PluginFamily Type="System.Web.Mvc.IController" 
                Assembly="System.Web.Mvc
                  , Version=1.0.0.0
                  , Culture=neutral
                  , PublicKeyToken=31bf3856ad364e35">
    <Plugin Type="MvcApplication.Controllers.BlogController" 
            ConcreteKey="blog" 
            Assembly="MvcApplication" />
  </PluginFamily>

  <PluginFamily Type="MvcApplication.Models.IPostRepository" 
                Assembly="MvcApplication" 
                DefaultKey="InMemory">
    <Plugin Assembly="MvcApplication" 
            Type="MvcApplication.Models.InMemoryPostRepository" 
            ConcreteKey="InMemory" />
  </PluginFamily>

</StructureMap>

I don’t want to get bogged down in describing this file in too much detail. If you want a deeper understanding, check out the StructureMap documentation.

The bare minimum you need to know is that each PluginFamily node describes an interface type and a key for that type. A Plugin node describes a concrete type that will be used when an instance of the family type needs to be created by the framework.

For example, in the second PluginFamily node, the interface type is IPostRepository, which we defined. The concrete type is InMemoryPostRepository. So anytime we use StructureMap to construct an instance of a type that has a dependency on IPostRepository, StructureMap will pass in an instance of InMemoryPostRepository.
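To make that concrete, here is roughly what a lookup against this configuration looks like in code, using StructureMap 2.0’s ObjectFactory (the same call our controller factory will make shortly):

//Looks up the plugin keyed "blog" and satisfies its constructor
//dependencies, so BlogController receives an InMemoryPostRepository.
IController controller = ObjectFactory.GetNamedInstance<IController>("blog");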

Well, if that’s true, we’d better create that class. Normally, I would use a SqlPostRepository, but for purposes of this demo, we’ll store blog posts in memory using a static collection. We can always implement the SQL version later.

Note: This is where I would normally write tests for InMemoryPostRepository but this post is already long enough, right? Don’t worry, I included unit tests in the downloadable code sample.

public class InMemoryPostRepository : IPostRepository
{
  //simulates database storage
  private static IList<Post> posts = new List<Post>();

  public void Create(Post post)
  {
    posts.Add(post);
  }

  public IList<Post> ListRecentPosts(int retrievalCount)
  {
    if (retrievalCount < 0)
      throw new ArgumentOutOfRangeException("retrievalCount"
          , "Let’s be positive, ok?");

    IList<Post> recent = new List<Post>();
    int recentIndex = posts.Count - 1;
    for (int i = 0; i < retrievalCount; i++)
    {
      if (recentIndex < 0)
        break;
      recent.Add(posts[recentIndex--]);
    }
    return recent;
  }

  public static void Clear()
  {
    posts.Clear();
  }
}

Quick, We Need A Factory

We’re almost done. We now need to hook StructureMap up to ASP.NET MVC by writing a custom controller factory. The controller factory is responsible for creating controller instances, and we can replace the built-in logic with our own by deriving from DefaultControllerFactory.

public class StructureMapControllerFactory : DefaultControllerFactory
{
  protected override IController CreateController(
    RequestContext requestContext, string controllerName)
  {
    try
    {
      string key = controllerName.ToLowerInvariant();
      return ObjectFactory.GetNamedInstance<IController>(key);
    }
    catch (StructureMapException)
    {
      //Use the default logic.
      return base.CreateController(requestContext, controllerName);
    }
  }
}

Finally, we wire it all up together by adding the following method call within the Application_Start method in Global.asax.cs.

protected void Application_Start() {
  ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());

  RegisterRoutes(RouteTable.Routes);
}

And we’re done! Now that we have hooked up the dependency injection framework into our application, we can revisit our site in the browser (after compiling) and we get…

[Screenshot: the “view not found” error message]

Excellent! Despite the Yellow Screen of Death here, this is a good sign. We know our dependency is getting injected because this is a different error message than we were getting before. This one in particular is informing us that we haven’t created a view for this action. So we need to create a view.

Sorry! Out of scope. Not in the spec.

I leave it as an exercise for the reader to create a view for the page, or you can look at the silly simple one included in the source download.

Although this example was a ridiculously simple application, the principle applies in building a larger app. Just take the techniques here and rinse, recycle, repeat your way to TDD nirvana.

To see the result of all this, download the source code.

code, tdd

Sometimes when writing unit tests, you run into the case where you want to override the behavior of a specific method.

Here’s a totally contrived example I just pulled from my head to demonstrate this idea. Any similarity to specific real world scenarios is coincidental ;). Suppose we have this class we want to test.

public class MyController
{
  public void MyAction()
  {
    RenderView("it matches?");
  }

  public virtual void RenderView(string s)
  {
    throw new NotImplementedException("To ensure this method is overridden.");
  }
}

What we have here is a class with a public method MyAction that calls another virtual method, RenderView. We want to test the MyAction method and make sure it calls RenderView properly. But we don’t want the implementation of RenderView to execute because it will throw an exception. Perhaps we plan to implement that later.

Using A Partial Mock

There are two easy ways to test it. One is to create a partial mock using a mocking framework such as Rhino Mocks.

[TestMethod]
public void DoMethodCallsRenderViewProperly()
{
  MockRepository mocks = new MockRepository();
  MyController fooMock = mocks.PartialMock<MyController>();
  Expect.Call(delegate { fooMock.RenderView("it matches?"); });
  mocks.ReplayAll();

  fooMock.MyAction();
  mocks.VerifyAll();
}

If you’re not familiar with mock frameworks, what we’ve done here is dynamically create a proxy for our MyController class and override the behavior of the RenderView method by setting an expectation. Basically, the expectation is that the method is called with the string “it matches?”. At the end, we verify that all of our expectations were met.

This is a pretty neat way of testing abstract and non-sealed classes. If a method does something that would break the test, and you don’t want to deal with that, or you don’t care if that method even runs, you can use this technique.

However, there are two problems you might run into with this approach. First, VerifyAll doesn’t allow you to specify a message. That is a minor concern, but it’d be nice to supply an assert message there.

Secondly, and more importantly, what if RenderView is protected and not public? You won’t be able to use a partial mock (at least not using Rhino Mocks).

Using a Test-Specific Subclass

One approach is to use a test-specific subclass. I’ve used this test pattern many times before, but didn’t know there was a name for it till my colleague, Chris Tavares of the P&P group (no blog as far as I can tell), told me about the book xUnit Test Patterns: Refactoring Test Code.

In the book, the author categorizes various useful test techniques into groups of test patterns. The Test-Specific Subclass pattern addresses the situation described in this post. So looking at the above code, but assuming that RenderView is protected, we can still test it by doing the following.

[TestMethod]
public void DoMethodCallsRenderViewProperly()
{
  FooTestDouble fooDouble = new FooTestDouble();
  fooDouble.MyAction();
  Assert.AreEqual("it matches?", fooDouble.ReceivedArgument, "Did render the right view.");
}

private class FooTestDouble : MyController
{
  public string ReceivedArgument { get; private set; }

  protected override void RenderView(string s)
  {
    this.ReceivedArgument = s;
  }
}

All we did was write a class specific to this test called FooTestDouble. In that class we override the protected RenderView method and set a property with the passed in argument. Then in our test, we can simply check that the argument matches our expectation (and we get a human friendly assert message to boot).

Is this a valid test pattern?

Interestingly enough, I have shown this technique to some developers who told me it made them feel dirty (I’m not naming names). They didn’t feel this was a valid way to write a unit test. One complaint is that one shouldn’t have to inherit from a class in order to test that class.

So far, none of these complaints have come with empirical reasons behind the feeling that it is wrong. One complaint I’ve heard is that we are not testing the class itself; we are testing a derived class.

Sure, we’re technically testing a subclass and not the class itself, but we are in control of the subclass. We know that the behavior of the subclass is exactly the same except for what we chose to override.

Not only that, the same argument could be applied to using a partial mock. After all, what is the mocking example (which many feel is more appropriate) doing but implicitly generating a class that inherits from the class being tested, whereas this pattern explicitly inherits from it?

My own feeling on this is: I want to choose the technique that involves less code and is more understandable for any given situation. In some cases, using a mock framework does this. For example, in the first case, when RenderView is public, I like having my test fully self-contained. But in the second case, where RenderView is protected, I think the test-specific subclass is perfectly valid. The test-specific subclass is also great for those who are not familiar with a mock framework.

While some guidelines around TDD and unit testing are designed to produce better tests and better design (for example, the Red-Green approach and trying not to touch external resources such as the database), I don’t like to subscribe to arbitrary rules that only make writing tests harder and don’t seem to provide any measurable benefit beyond soothing some vague feeling of dirtiness.

Tags: TDD, Unit Testing, Test Specific Subclass

code, tdd

[Image: Brazil soccer jersey]

Note that, in the same vein as Pele, Ronaldinho, and Ronaldo, Joel has reached that Brazilian soccer player level of stardom in the geek community and can pretty much go by just his first name. Admit it, you knew who I was referring to in the title. Admit it!

Please indulge me in a brief moment of hubris.

I was reading part 1 of Joel Spolsky’s talk at Yale that he gave on November 28 and came upon this quote on code provability…

The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program!

Hey, that sounds familiar! By coincidence, I wrote the following on November 16th, 12 days prior to Joel’s talk…

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker…

Sure, it isn’t exactly an original thought, but the timing seemed too coincidental. Could it really be? Could my post have inspired one small point in a talk given by someone of his stature?

For a moment, I allowed myself to dream like a true fanboy that Mr. Joel really did read my blog. Until I read this (cue sound of pots and pans crashing down)…

The geeks want to solve the problem automatically, using software. They propose things like unit tests, test driven development, automated testing, dynamic logic and other ways to “prove” that a program is bug-free.

Crestfallen, I realized Mr. Joel indeed did not read my post, in which I claimed…

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

To be fair, I wouldn’t call Joel a major TDD proponent, but I hoped he would understand that TDD is about improving the design of code and also providing confidence to make changes to the system. It’s no proof of correctness, but can be a proof of incorrectness when a test fails.

Oh well, one can dream.

Tags: TDD, Joel Spolsky, Code Provability


Oren Eini, aka Ayende, writes about his dissatisfaction with Microsoft reproducing the efforts of the OSS community. His post was sparked by the following thread in the ALT.NET mailing list:

Brad: If you’re simply angry because we had the audacity to make our own object factory with DI, then I can’t help you; the fact that P&P did ObjectBuilder does not invalidate any other object factory and/or DI container.

Ayende: No, it doesn’t. But it is a waste of time and effort.

Brad: In all seriousness: why should you care if I waste my time?

Ayende’s response is:

  • I care because it means that people are going to get a product that is a duplication of work already done elsewhere, usually with less maturity and flexibility.
  • I care because people are choosing Microsoft blindly, and that puts MS in a position of considerable responsibility.
  • I care because I see this as continued rejection of the community efforts and hard work.
  • I care because it, frankly, shows contempt to anything except what is coming from Microsoft.
  • I care because it so often ends up causing me grief.
  • I care because it is doing disservice to the community.

As a newly minted employee of Microsoft, I may seem incapable of having a balanced opinion on this, but I am also an OSS developer, and was one before I joined, so hopefully I am not so totally unbalanced ;).

I think his sentiment comes from certain specific efforts by Microsoft that, how can I put this delicately, sucked in comparison to the existing open source alternatives.

Two specific implementations come to mind: MSTest and Sandcastle.

However, as much as I tend to enjoy and agree with much of what Ayende says in his blog, I have to disagree with him on the point that duplication of effort is the problem.

After all, open source projects are just as guilty of this duplication. Why do we need BlogEngine.NET when there is already Subtext? And why do we need Subtext when there is already DasBlog? Why do we need MbUnit when there is NUnit? For that matter, why do we need Monorail when there is Ruby on Rails or RhinoMocks when there is NMock?

I think Ayende is well suited to answer that question. When he created RhinoMocks, there was already an open source mocking framework out there, NMock. But NMock perhaps didn’t meet Ayende’s need. Or perhaps he thought he could do better. In any case, he went out and duplicated the efforts of NMock, but in many (but maybe not all) ways, made it better. I personally love using RhinoMocks.

The thing is, there is no way for NMock or RhinoMocks to meet the needs of all the possible constituencies for a mocking framework. Technical superiority isn’t always the deciding factor; sometimes political realities come into play. For example, whether we like it or not, some companies won’t use open source software. In an environment like that, neither NMock nor RhinoMocks will make any headway, leaving the door open for yet another mocking framework to make a dent.

Projects that seem to duplicate efforts never make perfect copies. They each have a slightly different set of requirements they seek to address. In an evolutionary sense, each duplicate contains mutations, and like evolution, survival of the fittest ensues. Except this isn’t a global, winner-takes-all, zero-sum game.

What works in one area might not survive in another area. Like the real world, niches form and that which can find a niche in which it is strong will survive in that niche.

I’m reminded of this when I read that the Opera Mini browser beats Apple Safari, Netscape, and Mozilla combined in Ukraine. Another reminder is how Google built yet another social platform that is really big in Brazil.

So again, Duplication Is Not The Problem. Competition is healthy. If anything, the problem, to stick with the evolution analogy, is that Microsoft’s sheer might gives its creations quite the head start, allowing them to survive where the same product would have died had it been released by a smaller company. We’ve seen this before when Microsoft let IE 6 rot on the vine, and it risks doing the same with IE 7. Fact of the matter is, Microsoft has a lot of influence.

So can we really fault Microsoft for duplicating efforts? Or only for doing a half-assed job of it sometimes? As I wrote before when I asked the question Should Microsoft Really Bundle Open Source Software?, I’d like to see some balance that both recognizes the business realities that push Microsoft to duplicate community efforts and, at the same time, supports the community.

After all, Microsoft can’t let what is out there completely dictate its product strategy, but it also can’t ignore the open source ecosystem which is a boon to the .NET Framework.

Disclaimer: I shouldn’t need to say it, but to be clear, these are my opinions and not necessarily those of my employer.

Tags: Microsoft, Open Source, OSS


Despite an international team of committers to Subtext and the fact that MySpace China uses a customized version of Subtext for its blog, I am ashamed to say that Subtext’s support for internationalization has been quite weak.

[Image: world map]

True, I did once write that The Only Universal Language in Software is English, but I didn’t mean that English is the only language that matters, especially on the web.

One area that we need to improve is in dealing with international URLs. For example, if I’m a user in Korea, I should be able to write a post with a Korean domain and a Korean title and thus have a friendly URL like so:

http://하쿹.com/blog/안녕하십니까.aspx

(As an aside, roughly speaking, 하쿹 would be pronounced hah-kut. About as close as I can get to haacked which is pronounced like hackt.)

If you’re a kind soul, you will forgive us for punting on this issue for so long. After all, RFC 2396, which defines the syntax for Uniform Resource Identifiers (URIs), only allows for a subset of ASCII (about 60 characters).

But then again, I’ve been hiding behind this RFC as an excuse for a while fully knowing there are workarounds. I have just been too busy to fix this.

There are actually two issues here: the hostname (aka domain name), which is quite restrictive and cannot be URL encoded, AFAIK, and the rest of the URL, which can be encoded.

The domain name issue is resolved by the diminutively named Punycode (described in RFC 3492). Punycode is a protocol for converting Unicode strings into the more limited set of ASCII characters for network host names.

For example, http://你好.com/ translates to http://xn--6qq79v.com/ in Punycode.

Fortunately, this issue is pretty easy to fix. Since the browser is responsible for converting the Unicode domain name in the URL to Punycode, all we need to do in Subtext is allow users to set up a hostname that contains Unicode, and we can then convert that to Punycode using something like the Punycode / IDN library for .NET 2.0. For this blog post, I used the web-based phlyLabs IDNA Converter to convert Unicode to Punycode.
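As an aside, newer versions of the .NET Framework include this conversion in the box via the System.Globalization.IdnMapping class; here’s a minimal sketch, assuming that class is available in your framework version:

using System.Globalization;

//Each Unicode label in the domain is converted to its Punycode form.
IdnMapping idn = new IdnMapping();
string ascii = idn.GetAscii("你好.com"); //yields "xn--6qq79v.com"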

The second issue is the rest of the URL. When you enter a title for a blog post in Subtext, we convert it to a human- and URL-friendly ASCII “slug”. For example, if you enter the title “I like lamp” for a blog post, Subtext creates the friendly URL ending with “i_like_lamp.aspx”.

We haven’t totally ignored international URLs. For international western languages, we have code that effectively replaces accented characters with a close ASCII equivalent. A couple of examples (there are more in our unit tests) are:

Åñçhòr çùè becomes Anchor_cue

Héllò wörld becomes Hello_world
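This is not Subtext’s actual implementation, but a rough sketch of the general technique: decompose each character into a base character plus combining marks, drop the marks, and keep whatever ASCII survives.

using System.Globalization;
using System.Text;

public static class SlugHelper
{
  public static string ToAsciiSlug(string title)
  {
    //FormD splits accented characters into a base character plus
    //combining marks (e.g. "é" becomes "e" followed by an accent).
    string decomposed = title.Normalize(NormalizationForm.FormD);
    StringBuilder slug = new StringBuilder();
    foreach (char c in decomposed)
    {
      UnicodeCategory category = CharUnicodeInfo.GetUnicodeCategory(c);
      if (category == UnicodeCategory.NonSpacingMark)
        continue; //drop the accent mark
      if (c == ' ')
        slug.Append('_');
      else if (c < 128 && char.IsLetterOrDigit(c))
        slug.Append(c);
    }
    return slug.ToString();
  }
}

Note that Hangul decomposes into Jamo characters that are still non-ASCII, so every character gets dropped, which leads straight to the problem below.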

Unfortunately for my Korean brethren, something like 안녕하십니까 becomes (empty string). Well that totally sucks!

The thing is, the simple solution in this case is to just allow the Unicode Korean word as the slug. Browsers will apply the correct URL encoding to the URL. Thus https://haacked.com/안녕하십니까/ would become a request for https://haacked.com/%EC%95%88%EB%85%95%ED%95%98%EC%8B%AD%EB%8B%88%EA%B9%8C/ and everything works just fine as far as I can tell. Please note that Firefox 2.0 actually replaces the Unicode string in the address bar with the encoded string while IE7 displays the Unicode as-is, but makes the request using the encoded URL (as confirmed by Fiddler).

For western languages in which we can do a decent enough conversion to ASCII, the benefit there is the URL remains somewhat readable and “friendlier” than a long URL encoded string. But for non-western scripts, we have no choice but to deal with these ugly URL encoded strings (at least in Firefox).

The interesting thing is, when researching how sites in China handle internationalized URLs, I discovered that in the same way we did, they simply punt on the issue. For example, http://baidu.com/, the most popular search engine in China last I checked, has English URLs.

Tags: URL, Localization, Punycode

code, tdd

My friend (and former boss and business partner) Micah found this gem of a quote from Donald Knuth addressing code proofs.

Beware of bugs in the above code; I have only proved it correct, not tried it.

Micah writes more on the topic and reminds me of why I enjoyed working with him so much. He’s always been quite thoughtful in his approach to problems. And I’m not just saying that because he agrees with me. ;)

On another note, several commenters pointed out that one thing I didn’t mention before, but should have, is that verifying the quality of code is only one small aspect of unit testing and Test Driven Development.

The more important factor is that TDD is a design process. Employing TDD is one (not the only one, but I think it is a good one) approach for improving the design of code and especially the usability of your code. By usability, I mean from another developer’s perspective.

If I have to create twenty different objects in order to call a method on your class, your class is probably not very usable to other developers. TDD is one approach that forces you to find that out sooner, rather than later.

A code proof won’t necessarily find that “flaw” because it is not a flaw in logic.

Tags: TDD , Code Provability

code, tdd

I’m currently doing some app building with ASP.NET MVC in which I try to cover a bunch of different scenarios. One scenario in particular I wanted to cover is approaching an application using a Test Driven Development approach. I especially wanted to cover using various Dependency Injection frameworks, to make sure everything plays nice.

Since I’ve already seen demos with Castle Windsor and Spring.NET, I wanted to give StructureMap a try. Here is the problem I’ve run into.

Say I have a class like so:

public class HomeController : IController
{
  MembershipProvider membership;

  public HomeController(MembershipProvider provider)
  {
    this.membership = provider;
  }

  //Execute method and other details left out for brevity.
}

As you can see, this class has a dependency on the abstract MembershipProvider class, which is passed to it via a constructor argument. In my unit tests, I can use Rhino Mocks to dynamically create a mock that inherits from MembershipProvider and pass that mock to this controller class. It’s nice for testing.
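For example, the test setup might look something like this sketch (the GetUser expectation shown is purely hypothetical):

MockRepository mocks = new MockRepository();
MembershipProvider provider = mocks.DynamicMock<MembershipProvider>();
//Stub out whatever results the test needs from the provider.
SetupResult.For(provider.GetUser("haacked", false)).Return(null);
mocks.ReplayAll();

HomeController controller = new HomeController(provider);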

But eventually, I need to use this class in a real app, and I would like a DI container to create the controller for me. Here is my StructureMap.config file, with some details left out.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication" />
  </PluginFamily>
</StructureMap>

If I add an empty constructor to HomeController, this code allows me to create an instance of HomeController like so.

HomeController c = 
  ObjectFactory.GetNamedInstance<IController>("HomeController")
  as HomeController;

But when I remove the empty constructor, StructureMap cannot create an instance of HomeController. I would need to tell StructureMap (via StructureMap.config) how to construct an instance of MembershipProvider to pass into the constructor for HomeController.

Normally, I would just specify a type to instantiate as another PluginFamily entry. But what I really want to happen in this case is for StructureMap to call a method or delegate and use the value returned as the constructor argument.

In other words, I pretty much want something like this:

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication">
      <Instance>
        <Property Name="provider">
          <![CDATA[
            return Membership.Provider;
          ]]>
        </Property>
      </Instance>
    </Plugin>
  </PluginFamily>
</StructureMap>

The made up syntax I am using here is stating that when StructureMap is creating an instance of HomeController, execute the code in the CDATA section to get the instance to pass in as the constructor argument named provider.

Does anyone know if something like this is possible with any of the Dependency Injection frameworks out there, whether via code or configuration?
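For what it’s worth, this is exactly the shape of the lambda-based registration APIs that later DI container releases grew. As a rough sketch, StructureMap’s later fluent API (2.5-era syntax, not the 2.0 release used above) expresses it along these lines:

ObjectFactory.Initialize(x =>
{
  //When a MembershipProvider is needed, invoke this delegate
  //rather than constructing one from configuration.
  x.ForRequestedType<MembershipProvider>()
    .TheDefault.Is.ConstructedBy(() => Membership.Provider);
});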

Tags: TDD, Dependency Injection, IoC, StructureMap

code, tdd

Frans Bouma wrote an interesting response to my last post, Writing Testable Code Is About Managing Complexity, entitled Correctness Provability should be the goal, not Testability.

He states in his post:

When focusing on testability, one can fall into the trap of believing that the tests prove that your code is correct.

God, I hope not. Perhaps someone could fall into that trap in theory, but a person could also fall into the trap of buying a modestly priced bridge I have to sell them in the Bay Area. This seems like a straw man fallacy to me.

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

Instead, what you often hear testing proponents talk about is confidence. For example, in my post Unit Tests cost More To Write I make the following point (emphasis added):

They reduce the true cost of software development by promoting cleaner code with less bugs. They reduce the TCO by documenting how code works and by serving as regression tests, giving maintainers more confidence to make changes to the system.

Frans goes on to say (emphasis mine)…

Proving code to be correct isn’t easy, but it should be your main focus when writing solid software. Your first step should be to prove that your algorithms are correct. If an algorithm fails to be correct, you can save yourself the trouble typing the executable form of it (the code representing the algorithm) as it will never result in solid correct software.

Before I address this, let me tell you a short story from my past. I promise it’ll be brief.

When I was a young, bright-eyed, bushy-tailed math major in college, I took a fantastic class called Differential Equations that covered equations describing continuous phenomena in one or more dimensions.

During the section on partial differential equations, we wracked our brains going through crazy mental gymnastics in order to find an explicit formula that solved a set of equations with multiple independent variables. With these techniques, it seemed like we could solve anything. Until of course, near the end of the semester when the cruel joke was finally revealed.

The sets of equations we solved were heavily contrived examples. As difficult as they were to solve, it turns out that only the most trivial sets of differential equations can be solved by an explicit formula. All the mental gymnastics we were doing up until that point was essentially just mental masturbation. Real world phenomena are hardly ever described by sets of equations that line up so nicely.

Instead, mathematicians use techniques like Numerical Analysis (the Monte Carlo Method is one classic example) to attempt to find approximate solutions with reasonable error bounds.

Disillusioned, I never ended up taking Numerical Analysis (the next class in the series), choosing to try my hand at studying stochastic processes as well as number theory at that point.

The point of this story is that trying to prove the correctness of computer programs is a lot like trying to solve a set of partial differential equations. It works great on small trivial programs, but is incredibly hard and costly on anything resembling a real world software system.

Not only that, what exactly are you trying to prove?

In mathematics, a mathematician will take a set of axioms, a postulate, and then spend years converting caffeine into a long beard (whether you are male or female) and little scribbles on paper (which mathematicians call equations) that hopefully result in a proof that the postulate is true. At that point, the postulate becomes a theorem.

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker, because the only unambiguous complete spec for what an application does is code. Even then, you have to ask, how do you prove that the specification does what the customers want?

This is why proving code should not be your main focus, unless, maybe, you write code for the Space Shuttle.

Like differential equations, it’s too costly to explicitly prove code in all but the most trivial cases. If you are an algorithms developer writing the next sort algorithm, perhaps it is worth your time to prove your code because that cost is amortized over the life of such a small reusable unit of code. You have to look at your situation and see if the cost is worth it.

For large real world data driven applications, proving code correctness is just not reasonable because it calls for an extremely costly specification process, whereas tests are very easy to specify and cheap to write and maintain.

This is somewhat more obvious with an example. Suppose I asked you to write a program that could break a CAPTCHA. Writing the program is very time consuming and difficult. But first, before you write the program, what if I asked you to write some tests for the program you will write. That’s trivially easy, isn’t it? You just feed in some CAPTCHA images and then check that the program spits out the correct value. How do you know your tests are correct? You apply the red-green-refactor cycle along with the principle of triangulation. ;)
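To make the asymmetry concrete, a test for the yet-unwritten program is only a few lines. Every name and value here (CaptchaSolver, Solve, the expected text) is hypothetical:

[TestMethod]
public void SolverDecodesKnownCaptchaImage()
{
  //The solver may take months to write; this test took a minute.
  var solver = new CaptchaSolver();
  string text = solver.Solve("captcha-sample-1.png");
  Assert.AreEqual("X7F2Q", text);
}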

As we see, writing the tests is easy. So how would you go about proving the program’s correctness? Would that be as easy as testing it?

As I said before, testing doesn’t give you a proof of correctness, but like the approaches of numerical analysis, it can give you an approximate proof with reasonable error bounds, aka, a confidence factor. The more tests, the smaller the error bounds and the better your confidence. This is a way better use of your time than trying to prove everything you write.

Technorati Tags: TDD, Math, Code Correctness

code, tdd

When discussing the upcoming ASP.NET MVC framework, one of the key benefits I like to tout is how this framework will improve testability of your web applications.

The response I often get is the same question I get when I mention patterns such as Dependency Injection, IoC, etc…

Why would I want to do XYZ just to improve testability?

I think to myself in response

Just to improve testability? Isn’t that enough of a reason!

That’s how excited I am about test driven development. Testing seems enough of a reason for me!

Of course, when I’m done un-bunching my knickers, I realize that despite all the benefits of unit-testable code, the real benefit of testable code is how it helps handle software development’s biggest problem since time immemorial: managing complexity.

There are two ways that testable code helps manage complexity.

1. It directly helps manage complexity, assuming that you not only write testable code but also write the unit tests to go along with it. With decent code coverage, you now have a nice suite of regression tests, which helps manage complexity by alerting you to potential bugs introduced during code maintenance in a large project long before they become a problem in production.

2. It indirectly helps manage complexity because writing testable code forces you to employ the principle of separation of concerns.

Separating concerns within an application is an excellent tool for managing complexity when writing code. And writing code is complex!

The MVC pattern, for example, separates an application into three main components: the Model, the View, and the Controller. Not only does it separate these three components, it outlines the loosely coupled relationships and communication between these components.

Key Benefits of Separating Concerns

This separation combined with loose coupling allows a developer to manage complexity because it allows the developer to focus on one aspect of the problem at a time.

Martin Fowler writes about this benefit in his paper, Separating User Interface Code (pdf):

A clear separation lets you concentrate on each aspect of the problem separately—and one complicated thing at a time is enough. It also lets different people work on the separate pieces, which is useful when people want to hone more specialized skills.
The ability to divide work into parallel tracks is a great benefit of this approach. In a well separated application, if Alice needs time to implement the controller or business logic, she can quickly stub out the model so that Bob can work on the view without being blocked by Alice. Meanwhile, Alice continues developing the business layer without the added stress that Bob is waiting on her.
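Here’s a small sketch of what that looks like in practice; all of the names (IProductService, StubProductService, Product) are hypothetical:

//Alice owns the real implementation. Until it's ready,
//Bob builds the view against this hand-rolled stub.
public interface IProductService
{
  IList<Product> GetFeaturedProducts();
}

public class StubProductService : IProductService
{
  public IList<Product> GetFeaturedProducts()
  {
    return new List<Product>
    {
      new Product { Name = "Sample product 1" },
      new Product { Name = "Sample product 2" }
    };
  }
}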

Bring it home

The MVC example above talks about separation of concerns on a large architectural scale, but the same benefits apply on a much smaller scale outside of the MVC context. And all of these benefits can be yours as a side effect of writing testable code.

So to summarize, when you write testable code, whether it is via Test Driven Development (TDD) or Test After Development, you get the following side effects.

  1. A nice suite of regression tests.
  2. Well separated code that helps manage complexity.
  3. Well separated code that helps enable concurrent development.

Compare that list of side effects with the list of side effects of the latest pharmaceutical wonder drug for curing restless legs or whatever. What’s not to like!?

Technorati Tags: TDD, Separation of Concerns, MVC, Loose Coupling, Design


In his book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (title long enough for you?), James Surowiecki argues that decisions made by a crowd are generally better than those made by any single individual in the group.

[Image: crowd]

Seems like a lot of theoretical hogwash until you see this thesis put to action in the real world via a prediction market. A prediction market (also called a decision market) is, as its name implies, a market created specifically to predict the likelihood of a specific outcome. They are most successful when participants have something invested in their decisions. Money often does the trick, but is not necessary.

The Hollywood Stock Exchange, a prediction market focused on the film industry, demonstrated the potential accuracy of such markets when it went 32 for 39 in predicting 2006’s Oscar nominees (in the major categories, which are the only ones people care about). They didn’t do so badly in 2007 either.

It’s no surprise then that there is a lot of research going on in predictive markets. Preliminary results show they seem to work very well when properly structured.

Contrast the success of prediction markets to another decision making process that involves a group. A recent comment by a reader got me thinking on this subject as he described one of the symptoms of this process.

If you try to satisfy all parties, you’ll end up with mediocre product that does not satisfy everybody.

An even stronger way to state this is that trying to satisfy everyone leaves almost everyone unsatisfied.

This is one typical result of a decision making process known as Groupthink. Another colorful term used to describe it is decision by committee. These terms do not, by any means, have a positive connotation.

[Image: groupthink]

There seems to be a paradox here between the power of markets and the ineptitude of groupthink. Why do these two somewhat similar means of decision making have such a wide variance in accuracy?

Perhaps counter-intuitively, it has a lot to do with the coercive effects of seeking consensus in decision making. Groupthink is often hamstrung by the search for consensus, whereas participants in an effective market are largely independent of one another.

Before I continue, let me head off that insightful comment on how I am trying to promote anarchy, and save your fingers the pain of some typing, by pointing out that this is not an indictment of all consensus-driven decision making. In many cases, consensus is absolutely required. It doesn’t help if a decision is optimal but nobody is willing to participate in the result of the decision. This sort of decision making is especially essential in one particular form of decision making, negotiations, which is well covered by the book Getting to Yes: Negotiating Agreement Without Giving In.

The problem lies in applying negotiation to make decisions that should not be made by consensus. At least not without applying non-consensus based fact gathering first, so that negotiations can occur using data gathered in a dispassionate manner.

So what makes groupthink particularly inadequate? Social psychologist Irving Janis identified the following eight symptoms of groupthink that hamstring good decision making:

  1. Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
  2. Collective rationalization – Members discount warnings and do not reconsider their assumptions.
  3. Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
  4. Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.
  5. Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
  6. Self-censorship – Doubts and deviations from the perceived group consensus are not expressed.
  7. Illusion of unanimity – The majority view and judgments are assumed to be unanimous.
  8. Self-appointed ‘mindguards’ – Members protect the group and the leader from information that is problematic or contradictory to the group’s cohesiveness, view, and/or decisions.

One symptom not listed here, but which probably fits in with #6, is the fear of causing offense.

One theme in common in the list is that many of these symptoms are the result of seeking some form of consensus with others within the group. The net effect is that all of these symptoms provide incentives not to deviate from the group.

Prediction markets counter these symptoms through independence and proper incentive. Because the participants in a market are not directly working together, there is no way for peer pressure to burrow itself into the decision making process. There is no fear of offending others in such a market. Putting something valuable (such as money) on the line, or having a huge investment in the correct outcome, such as predicting disaster, has an eye-opening habit of making people truthful and helps to avoid self-censorship.

Another necessary factor in the success of markets, as pointed out in an insightful comment to this post, is diversity. Markets and committees that lack diversity of viewpoint fail to take advantage of all available information in a meaningful way and succumb to a form of tunnel vision. This type of phenomenon is very evident in a leader who surrounds him/herself with sycophants and thus makes decisions based on what they want to hear rather than fact.

Prediction markets are not perfect and are not suitable for all decisions. In fact, they are probably not suitable for most decisions as the cost to set up a market isn’t justified for everyday decisions like whether you should get the bottle of beer or the pitcher of sangria.

However, understanding what makes prediction markets work and what the symptoms of groupthink look like can be of great benefit the next time you are making a decision within a group and start to see groupthink emerging.

Technorati Tags: Decision Making, Consensus, Predictive Markets, Groupthink


Seen on Twitter today:

CodingHorror: Guitar Hero III for the PC/Mac has truly insane system requirements http://tinyurl.com/2c42fm and USB guitar = dongle :(

LazyCoder: @codinghorror So for only 1 billion times the computing power required to put a man on the moon, you too can fake guitar playing.

It is a good thing we put all that computing power to good use.

Technorati Tags: Humor


Last week I was busy in Las Vegas at the DevConnections/OpenForce conferences. Unlike that pithy but over-used catch-phrase, what happens at a conference in Vegas should definitely not stay in Vegas but should be blogged (well, only the things during sessions that won’t get anyone in trouble).

It was my first time at DevConnections and I was not disappointed. This was also a first ever OpenForce conference put on by the DotNetNuke corporation and from all appearances, it was quite a success.

[Photos: Carl Franklin and Rob Conery; Rob Conery and Rick Strahl]

The first night, as I passed by the hotel bar, I noticed Carl Franklin, well known for .NET Rocks, a podcast on .NET technologies. As I had an appearance on the show, it was nice to finally meet the guy in person.

Also met up with the Flyin’ Hawaiians, Rob Conery (from Kauai) and Rick Strahl (from Maui).

This is a big part of the value and fun in such conferences. The back-channel conversations on various topics that provide new insights you might not have otherwise received.

The next day, I attended the ASP.NET MVC talk given by Scott Hanselman and Eilon Lipton. The talk was well attended for a technology for which no CTP bits are available yet, and quite a few people stuck around to ask questions. I also attended their next talk, part 3 of a 3-part series in which Scott Guthrie gave the first two parts.

Scott and Eilon are like the Dean Martin and Jerry Lewis of conference talks (I won’t say which is which). They play off each other quite well in giving a humorous, but informative, talk.

The best part for me was watching Eilon squirm as a star-struck attendee asked to have her picture taken with him after having done the same with Scott Hanselman. I think we expect this sort of geek-worship with Scott, but Eilon seemed genuinely uncomfortable. She was quite excited to get a pic with the guy who wrote the UpdatePanel!

[Photos: an admirer poses with Scott; an admirer poses with Eilon]

One experience that was particularly fun for me was getting to go around the exhibitor floor, camera-man in tow, to interview attendees about their impressions of the conference. Normally such work goes to charismatic and well-spoken guys like Carl Franklin and Scott Hanselman, but both were too busy at the time, and Scott pointed the cameraman to me.

I try to remain open to new experiences, even ones that take me out of my comfort zone (I have stage fright). I walked around with a microphone interviewing people and saw that the attendees really love this conference and felt they got a lot out of it. At least the ones willing to talk about it on camera. ;)

When asked what their favorite talk was, a couple attendees mentioned the MVC talk, which was good to hear.


While on the floor, they had a drawing in which they gave out a Harley. The winner happened to be a Harley-loving motorcycle rider, so it worked out pretty well.

On Wednesday and Thursday, I participated on two panels for the OpenForce conference on the topic of Open Source. They taped these panels so hopefully the videos will be up soon. I’ll write more about what we discussed later. I need to get some sleep as after leaving Vegas, I flew out the next day to Redmond and it is very late.

Technorati Tags: DevConnections, OpenForce

asp.net, asp.net mvc

While at DevConnections/OpenForce, I had some great conversations with various people on the topic of ASP.NET MVC. While many expressed their excitement about the framework and asked when they could see the bits (soon, I promise), there were several who had mixed feelings about it. I relish these conversations because it helps highlight the areas in which we need to put more work in and helps me become a better communicator about it.

One thing I’ve noticed is that most of my conversations focused too much on the MVC part of the equation. Dino Esposito (whom I met very briefly) wrote an insightful post pointing out that it isn’t the MVC part of the framework that is most compelling:

So what’s IMHO the main aspect of the MVC framework? It uses a REST-like approach to ASP.NET Web development. It implements each request to the Web server as an HTTP call to something that can be logically described as a “remote service endpoint”. The target URL contains all that is needed to identify the controller that will process the request up to generating the response–whatever response format you need. I see more REST than MVC in this model. And, more importantly, REST is a more appropriate pattern to describe what pages created with the MVC framework actually do.

In describing the framework, I’ve tended to focus on the MVC part of it and the benefits in separation of concerns and testability. However, others have pointed out that by keeping the UI thin, a good developer could do all these things without MVC. So what’s the benefit of the MVC framework?

I agree, yet I still think that MVC provides even greater support for Test Driven Development than before, both in substance and in style, so even in that regard there’s a benefit. I need to elaborate on this point, but I’ll save that for another time.

But MVC is not the only benefit of the MVC framework. I think the REST-like nature is a big selling point. Naturally, the next question is, well why should I care about that?

Fair question. Many developers won’t care and perhaps shouldn’t. In those cases, this framework might not be a good fit. Some developers do care and desire a more REST-like approach. In this case, I think the MVC framework will be a good fit.

This is not a satisfying answer, I know. In a future post, I hope to answer that question better. In what situations should developers care about REST and in which situations, should they not? For now, I really should get some sleep. Over and out.

Technorati Tags: ASP.NET MVC,REST

asp.net, code, asp.net mvc, tdd comments edit

UPDATE: This content is a bit outdated as these interfaces have changed in ASP.NET MVC since the writing of this post.

One task that I relish as a PM on the ASP.NET MVC project is to build code samples and sample applications to put the platform through its paces and try to suss out any problems with the design or usability of the API.

Since testability is a key goal of this framework, I’ve been trying to apply a Test Driven Development (TDD) approach as I build out the sample applications. This has led to some fun discoveries in terms of using new language features of C# to improve my tests.

For example, the MVC framework will include interfaces for the ASP.NET intrinsics. So to mock up the HTTP context using Rhino Mocks, you might do the following.

MockRepository mocks = new MockRepository();

//Create a dynamic mock for the context and each HTTP intrinsic.
IHttpContext context = mocks.DynamicMock<IHttpContext>();
IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();
IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();

//Wire the context properties to return the mocked intrinsics.
SetupResult.For(context.Request).Return(request);
SetupResult.For(context.Response).Return(response);
SetupResult.For(context.Server).Return(server);
SetupResult.For(context.Session).Return(session);

mocks.ReplayAll();
//Ready to use the mocks now

Kind of a mouthful, no?

Then it occurred to me. I should use C# 3.0 Extension Methods to create a mini DSL (to abuse the term) for building HTTP mock objects. First, I wrote a simple proof of concept class with extension methods.

public static class MvcMockHelpers
{
  public static IHttpContext 
    DynamicIHttpContext(this MockRepository mocks)
  {
    IHttpContext context = mocks.DynamicMock<IHttpContext>();
    IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
    IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
    IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();
    IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();

    SetupResult.For(context.Request).Return(request);
    SetupResult.For(context.Response).Return(response);
    SetupResult.For(context.Session).Return(session);
    SetupResult.For(context.Server).Return(server);

    //Put just the context into replay mode; the intrinsic mocks stay
    //in record mode so helpers can still set up results on them.
    mocks.Replay(context);
    return context;
  }

  public static void SetFakeHttpMethod(
    this IHttpRequest request, string httpMethod)
  { 
    SetupResult.For(request.HttpMethod).Return(httpMethod);
  }
}

And then I rewrote the setup part for the test (the rest of the test is omitted for brevity).

MockRepository mocks = new MockRepository();
IHttpContext context = mocks.DynamicIHttpContext();
context.Request.SetFakeHttpMethod("GET");
mocks.ReplayAll();

That’s much cleaner, isn’t it?

Please note that I call the Replay method on the IHttpContext mock. That means you won’t be able to set up any more expectations on the context itself. But in most cases, you won’t need to.

This is just a proof-of-concept, but I could potentially add a bunch of SetFakeXYZ extension methods on the various intrinsics to make setting up expectations and results much easier. I chose the pattern of using the SetFake prefix to help differentiate these test helper methods.
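
To give a flavor of what those might look like, here are a couple of hypothetical helpers in the same vein. I’m assuming the preview’s IHttpRequest exposes IsAuthenticated and QueryString members that mirror HttpRequest; adjust to whatever the interfaces actually expose.

//Hypothetical SetFake helpers, assuming IHttpRequest mirrors
//HttpRequest's IsAuthenticated and QueryString properties.
//Requires a using for System.Collections.Specialized.
public static void SetFakeIsAuthenticated(
  this IHttpRequest request, bool isAuthenticated)
{
  SetupResult.For(request.IsAuthenticated).Return(isAuthenticated);
}

public static void SetFakeQueryString(
  this IHttpRequest request, NameValueCollection queryString)
{
  SetupResult.For(request.QueryString).Return(queryString);
}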

Note that this technique isn’t specific to ASP.NET MVC. As you start to build apps with C# 3.0, you can build extension methods for commonly used mocks to make it easier to write unit tests with mocked objects. That takes a lot of the drudgery out of setting up a mocked object.

Oh, and if you’re lamenting the fact that you’re writing ASP.NET 2.0 apps that don’t have interfaces for the HTTP intrinsics, you should read my post on IHttpContext and Duck Typing in which I provide such interfaces.

Happy testing to you!

I have a follow-up post on testing routes. That post’s project includes a slightly more full-featured version of the MvcMockHelpers class.

Tags: ASP.NET MVC , TDD, Rhino Mocks

comments edit

Recently I gave out a few free copies of a book I co-wrote, but they ran out quickly. This is the same book that Jeff Atwood (a co-author) told everyone, Do Not Buy This Book.

Well, if you didn’t get a copy, there is another opportunity to get a free copy. DotNetSlackers is running a contest and will reward the top 3 contributors to their forums (ooh, that could get…interesting. First Post! or I Agree! Count it up.) with great prizes.

Unfortunately, our book (the ASP.NET 2.0 Anthology) is part of the second prize package, which includes ANTS Profiler Pro. Our book is also the third prize.

Why is that unfortunate?

Because if you want a free copy of our book, you have to try real hard not to do so well that you win first prize, which is an Xbox 360 Elite along with Telerik RadControls. Once you got your hands on Halo 3 or Gears of War, you’d have no time to read our book! And you wouldn’t want that, would you?

In any case, for more contest details, check out the contest page.

Tags: Contest , Book

code comments edit

One thing I’ve found with various open source projects is that many of them contain very useful code nuggets that could be generally useful to developers writing different kinds of apps. Unfortunately, in many cases, these nuggets are hidden. If you’ve ever found yourself thinking, Man, I wonder how that one open source app does XYZ because I could use that in this app, then you know what I mean.

One goal I have with Subtext is to try and expose code that I think would be useful to others. It’s part of the reason I started the Subkismet project.

Another library in Subtext you might find useful is our SQL script execution library, encapsulated in the Subtext.Scripting.dll assembly.

A loooong time ago, Jon Galloway wrote a post entitled Handling GO Separators in SQL Scripts - the easy way that tackled the subject of executing SQL Scripts that contain GO separators using SQL Server Management Objects (SMO). SMO handles GO separators, but it doesn’t (AFAIK) handle SQL template parameters.

So rather than go the easy way, we went the hard way and wrote our own library for parsing and executing SQL scripts that contain GO separators (much harder than it sounds) and template parameters. Here’s a code sample that demonstrates the usage.

string script = @"SELECT * FROM <table1, nvarchar(256), Products>
GO
SELECT * FROM <table2, nvarchar(256), Users>";

SqlScriptRunner runner = new SqlScriptRunner(script);
runner.TemplateParameters["table1"] = "Post";
runner.TemplateParameters["table2"] = "Comment";

using(SqlConnection conn = new SqlConnection(connectionString))
{
  conn.Open();
  using(SqlTransaction transaction = conn.BeginTransaction())
  {
    runner.Execute(transaction);
    transaction.Commit();
  }
}            

The above code uses the SqlScriptRunner class to parse the script into its constituent scripts (you can access them via a ScriptCollection property) and then sets the value of two template parameters before executing all of the constituent scripts within a transaction.

Currently, the class has only one Execute method, which takes a SqlTransaction instance. This is slightly cumbersome, and it would be nice to have a version that didn’t need all this setup, but this was all we needed for Subtext.
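
If you wanted a version with less ceremony, a convenience overload along these lines would do the trick. This is just a sketch, not something in Subtext.Scripting; it simply wraps the existing Execute(SqlTransaction) method in the same connection-and-transaction boilerplate shown above.

//Hypothetical convenience overload (not in Subtext.Scripting):
//opens the connection and wraps execution in a transaction.
public void Execute(string connectionString)
{
  using(SqlConnection conn = new SqlConnection(connectionString))
  {
    conn.Open();
    using(SqlTransaction transaction = conn.BeginTransaction())
    {
      Execute(transaction);
      transaction.Commit();
    }
  }
}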

When I started writing this post, I thought about adding some overloads like that to make this class even easier to use, but instead, I will provide a copy of the assembly, point people to our Subversion repository, and hope that someone out there will find this useful and have enough incentive to submit improvements!

Also, be sure to check out our unit tests for this class to understand what I mean when I said it was harder than it looks. As a hint, think about dealing with GO statements in comments and quoted strings. Also, GO doesn’t have to be the only thing on the line; certain elements can come before or after a GO statement on the same line.
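
To give a taste of the tricky cases, here’s a hypothetical script exercising a few of them. I’m assuming the ScriptCollection property exposes a Count and that the parser treats a GO with a trailing comment as a separator; double-check both against the source.

//Assumption: a GO inside a quoted string or a comment should NOT
//split the script, while a GO with a trailing comment should.
string tricky = @"SELECT 'GO' FROM [Post] -- the quoted GO is just data
/* a GO inside a block comment is not a separator either */
GO -- this GO IS a separator, despite the trailing comment
SELECT * FROM [Comment]";

SqlScriptRunner runner = new SqlScriptRunner(tricky);
Console.WriteLine(runner.ScriptCollection.Count); //should print 2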

In case you missed the link, DOWNLOAD IT HERE.

Tags: Subtext , SQL , Sql Script Runner

comments edit

A while back, Jon Galloway asked the question, Can Operating Systems tell if they’re running in a Virtual Machine? What a silly question! When was the last time an Operating System questioned its own existence? Is that going to be in the next version of Windows - Windows Vista Into Its Own Soul? Or perhaps Apple will come out with Mac OS Existentialist?

Perhaps a more interesting question is whether you can tell that a web server is running in a virtual machine. Last weekend I migrated my blog onto a virtual server running on a pretty sweet host machine, and so far, my site seems to have gained an extra spring in its step.

Given that it’s hosted on the same server as that bandwidth hog, CodingHorror.com, I’m very pleased with the performance thus far. At least until he gets Dugged again.

In Jeff’s post, he mentioned that CrystalTech hooked us up with a beefy 64-bit dedicated server. Several commenters noted that there was no 64-bit offering on their site. The reason is that they hadn’t received much demand for 64-bit servers until we came along with our bullying tactics.

Through a contact over there, I wanted to see if we could work out a hosting deal. Jeff was adamant that we get a 64-bit server, which they didn’t have at the time, but could certainly order. I pretty much didn’t want them to go through all that trouble and was ready to move on, but they insisted it was not a big deal.

They lied…er…understated their case. Rather than simply build a single 64-bit server, they took this as an opportunity to build out a whole new product offering of 64-bit dedicated servers.

So what started off as me trying to scam some discounted or free hosting ended up spurring these guys to start a new product offering. Nice!

I’m now hosting this site and SubtextProject.com on this box, but our CCNET server is still in a virtual machine hosted generously by Eric Kemp of Subsonic fame.

I used to be skeptical of hosting my site in a virtual machine, as I felt like if I hosted closer to the metal, I could wring out extra performance. But let’s get real here, I’m not taxing this machine in any way.

I’m sold on the benefits and convenience of virtualization.

Tags: CrystalTech , Hosting , Virtual Machine

comments edit

From Monday night to Thursday afternoon next week, I will be in Las Vegas attending both DevConnections/ASPConnections and the DotNetNuke OpenForce conference. After that, I will be up in Redmond for the next week.

I wrote before that I would be speaking on a couple of panels at the OpenForce conference, talking about open source and .NET.

If you’re interested, the panels will be:

Wednesday, Nov 7 - 8:00 AM - 9:15 AM Lagoon L

DOS101: Panel Discussion: Open Source on the Microsoft Technology Stack
Scott Guthrie, Phil Haack, Rob Conery and Shaun Walker

Thursday, Nov 8 - 9:30 AM - 10:45 AM Lagoon F

DOS102: Panel Discussion: .NET Open Source Architectural Models
Joe Brinkman, Phil Haack, Jay Flowers, Jon Galloway and Rob Conery

It’s unfortunate that the first panel is at 8:00 AM because I never really sleep much at all when I’m in Vegas. I pretty much have too much fun (unless I’m losing bad). So if you run into me, I apologize in advance for the fool I make of myself. ;)

Another talk you’ll not want to miss is the MVC talk given by Scott Hanselman and Eilon Lipton.

Tuesday, Nov 6 - 2:30 PM - 3:30 PM Room 2

AMS304: Introduction to the new ASP.NET Model View Controller (MVC) Framework

Scott Hanselman, Eilon Lipton

I’ll definitely be at this talk. These two are giving a couple of talks together, which you can read about in Scott’s post. Come by and say hi.

Tags: Microsoft , DevConnections , AspConnections , OpenForce , Conference

comments edit

Pop quiz for you C# developers out there. Will the following code compile?

//In Foo.dll
public class Kitty
{
  protected internal virtual void MakeSomeNoise()
  {
    Console.WriteLine("I'm in ur serverz fixing things...");
  }
}

//In Bar.dll
public class Lion : Kitty
{
  protected override void MakeSomeNoise()
  {
    Console.WriteLine("LOL!");
  }
}

If you had asked me that yesterday, I would have said hell no. You can’t override an internal method in another assembly.

Of course, I would have been WRONG!

Well the truth of the matter is, I was wrong. This came up in an internal discussion in which I was unfairly complaining that certain methods I needed to override were internal. In fact, they were protected internal. Doesn’t that mean that the method is both protected and internal?

Had I simply tried to override them, I would have learned that my assumption was wrong. For the record…

protected internal means protected OR internal

It’s very clear when you think of the keywords as the union of accessibility rather than the intersection. Thus protected internal means the method is accessible by anything that can access the protected method UNION with anything that can access the internal method.
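
Here’s a quick sketch to make the union concrete, building on the Kitty class from the quiz (the caller class names are mine):

//In Foo.dll: a non-derived caller in the SAME assembly has access
//via the internal half of protected internal.
public class SameAssemblyCaller
{
  public void Poke()
  {
    new Kitty().MakeSomeNoise(); //compiles: internal access applies
  }
}

//In Bar.dll: a derived class in ANOTHER assembly has access
//via the protected half.
public class Cub : Kitty
{
  public void Poke()
  {
    MakeSomeNoise(); //compiles: protected access applies
  }
}

//In Bar.dll: a non-derived class in another assembly satisfies
//neither half, so the following would NOT compile:
//new Kitty().MakeSomeNoise();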

As the old saying goes, when you assume, you make an ass out of u and me. I never understood this saying because when I assume, I only make an ass of me. I really think the word should simply be assme. As in…

Never assme something won’t work without at least trying it.

UPDATE: Eilon sent me an email to point out that…

BTW the CLR does have the notion of ProtectedANDInternal, but C# has no syntax to specify it. If you look at the CLR’s System.Reflection.MethodAttributes enum you’ll see both FamANDAssem as well as FamORAssem (“Family” is the CLR’s term for C#’s protected and “Assem” is C#’s internal).
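
You can verify this in the metadata with a bit of reflection. Here’s a quick sketch using the Kitty class from the quiz:

using System;
using System.Reflection;

public class Program
{
  public static void Main()
  {
    MethodInfo method = typeof(Kitty).GetMethod("MakeSomeNoise",
      BindingFlags.Instance | BindingFlags.NonPublic);

    //Mask off everything but the access bits.
    MethodAttributes access =
      method.Attributes & MethodAttributes.MemberAccessMask;

    //C#'s protected internal compiles down to FamORAssem.
    Console.WriteLine(access == MethodAttributes.FamORAssem); //True
  }
}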

If you don’t know Eilon, he’s a freaking sharp developer I get to work with on the MVC project and was the one who kindly disabused me of my ignorance on this subject. He keeps a blog at http://weblogs.asp.net/leftslipper/.

Apparently he’s the one who had the clever idea of using a C# 3.0 anonymous type as a dictionary, which many of you saw in ScottGu’s ALT.NET Conference talk. Very cool.

subtext comments edit

An undisclosed source informed me that MySpace China is using a modified version of Subtext for its blogging engine.

I had to check it out for myself to confirm it and it is true! Check out my first MySpace China blog post. How do I know for a fact that this is running on Subtext? I just viewed source and saw this little bit of javascript…

var subtextBlogInfo = new blogInfo('/', '/1304049400/');

So if anyone is wondering if Subtext can scale, it sure can. MySpace China gets around 100 million page views, approximately a million of which go to the blog.

My source tells me the MySpace China developers found some bugs with Subtext that they had to fix, bugs that were only exposed when they put a huge load on it. Although they are under no requirement to do so under our license, I hope they will contribute those fixes as patches back to Subtext.

So to all you Chinese users of Subtext (via MySpace China), 你好 to you.

Technorati Tags: Subtext , MySpace China