code, tdd

If you’ve worked with unit test frameworks like NUnit or MbUnit for a while, you are probably all too familiar with the set of assertion methods that come built into these frameworks. For example:

Assert.AreEqual(expected, actual);
Assert.Between(actual, left, right);
Assert.Greater(value1, value2);
Assert.IsAssignableFrom(expectedType, actualType);
// and so on...

While the list of methods on the Assert class is impressive, it leaves much to be desired. For example, I needed to assert that a string value was a member of an array. Here’s the test I wrote.

[Test]
public void CanFindRole()
{
  string[] roles = Roles.GetRolesForUser("pikachu");
  bool found = false;
  foreach (string role in roles)
  {
    if (role == "Pokemon")
      found = true;
  }
  Assert.IsTrue(found, "Expected the 'Pokemon' role.");
}

OK, so that’s not all that terrible (and yes, I could write my own array-contains method, but bear with me). Still, if only there were a better way to do this.
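For what it’s worth, that hand-rolled loop boils down to a one-line helper over plain .NET (the class and method names below are mine, purely for illustration; no test framework involved):

```csharp
using System;

public static class ArrayTestHelper
{
    // True if the array contains the given value (exact match).
    public static bool Contains(string[] array, string value)
    {
        return Array.IndexOf(array, value) >= 0;
    }
}
```

But as the rest of this post shows, a good assertion library makes even this little helper unnecessary.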

Well, I obviously wouldn’t be writing about this if there weren’t. It turns out that MbUnit has a rich collection of specialized assertion classes that handle the grunt work of writing unit tests. These classes aren’t as well known as the straightforward Assert class.

As an example, here is the previous test rewritten using the CollectionAssert class.

[Test]
public void CanFindRole()
{
  string[] roles = Roles.GetRolesForUser("pikachu");
  CollectionAssert.Contains(roles, "Pokemon");
}

How much cleaner is that? CollectionAssert has many useful assertion methods. Here’s a small sampling.

CollectionAssert.DoesNotContain(collection, actual);
CollectionAssert.IsSubsetOf(subset, superset);

Here is a list of some of the other useful specialized assert classes.

  • CompilerAssert - Allows you to compile source code
  • ArrayAssert - Methods to compare two arrays
  • ControlAssert - Tons of methods for comparing Windows controls
  • DataAssert - Methods for comparing data sets and the like
  • FileAssert - Compare files and assert existence
  • GenericAssert - Compare generic collections
  • ReflectionAssert - Lots of methods for using reflection to compare types, etc…
  • SecurityAssert - Assert security properties such as whether the user is authenticated
  • StringAssert - String specific assertions
  • SerialAssert - Assertions for serialization
  • WebAssert - Assertions for Web Controls
  • XmlAssert - XML assertions

Unfortunately, the MbUnit wiki is sparse on documentation for these classes (volunteers are always welcome to flesh out the docs!). But the methods are well named, and with IntelliSense it is quite easy to figure out what each method of these classes does.

Using these specialized assertion classes can dramatically cut down the amount of boilerplate test code you write to test your methods.

Keep in mind that if you need the option to port your tests to NUnit in the future (not sure why you’d want to once you have a taste of MbUnit), you are better off sticking with the Assert class, as it has parity with the NUnit implementation. These specialized assertion classes are specific to MbUnit (and one good reason to choose MbUnit for your unit testing needs).


Not too long ago I mentioned that a power surge bricked the Subtext Build Server. What followed was a comedy of errors on my part in trying to get this sucker back to life. Let my sleep deprived misadventures be a cautionary tale for you.

My first assumption was that the hard drive had failed, so I ordered a new one.

Lesson #1: If you think your hard drive has failed, it might not be a bad idea to actually test it if you can. Don’t just order a new one!

I had my main desktop machine available to test the drive, but due to my sheer and immense laziness, I didn’t pop the drive in there as a secondary drive to test it out. I just ordered the new drive and moved on to other tasks.

Days later, the drive arrived and I popped it in and started to install Ubuntu on the machine. When I got to the disk partitioning step, I noticed that it found a disk, so I went ahead and formatted the drive and installed Ubuntu. Sweet! But when I rebooted, the server could not find the drive. Huh?

The Scream - Edvard Munch

Lesson #2: When installing an Operating System on a machine, make sure to unplug any external USB or Firewire drives.

Yep, I formatted my external hard drive and installed Ubuntu on that. The Ubuntu installation process recognized my firewire drive and offered that as an available drive to partition and install. Ouch!

At this point, I realized that the machine was not detecting my brand new hard drive, though I could hear the drive spin up when I powered on the machine. I figured that quite possibly it was a problem with the SATA cable. So I ordered a new one.

Lesson #3: In the spirit of lesson 1, why not just temporarily pull a SATA cable from your other machine, if you have one?

I thought the SATA cables were all inaccessible and would be a pain to pull, but didn’t bother to check. It was in fact easy to grab one. In my defense, I figured having extra SATA cables on hand wouldn’t be a bad idea anyway, and they are cheap.

So I plugged the SATA cable that I knew to be good into the box, and still it wouldn’t recognize the hard drive. At this point it seemed pretty clear to me that the drive controller on the motherboard was fried. Any suggestions on how to fix this are welcome, if it is even possible.

In any case, after a good night of sleep, I started doing the right thing. I plugged the old drive into my desktop and sure enough, I was able to copy all its files onto my main machine.

I installed VMware Server and the build server is now up and running on my main desktop for the time being. Woohoo!

As a side note, I tried to use this VMDK (VMWare) to VHD (Virtual PC) Converter (registration required) so I wouldn’t have to install VMWare Server on my machine, but it didn’t seem to work. Has anyone had good luck converting a VMWare hard disk into a Virtual PC hard disk?

Long story short, do not under any circumstances let me anywhere near your hardware. At least the build server is back up and working fine. I’m exhausted. Good night.

0 comments suggest edit

My friend Scott Hanselman is on a mission to raise $50,000 and then some for the American Diabetes Association to help fund the search for a cure.

Team Hanselman Fight Diabetes

If you don’t know Scott, you should definitely subscribe to his blog.

His blog has a wealth of information on software development, diabetes, .NET, and other topics.

He’s given much to the community over the years through his blog, podcast, Open Source projects, etc….

Interesting little tidbit: When I first started blogging, I set a goal for myself. Setting a goal was my way of taking blogging more seriously. I surveyed the blog landscape and said to myself, “Hmmm, this Hanselman dude has a pretty popular blog. I want my blog to be as well known as that guy’s someday,” not realizing at the time just how big a target I had chosen. His blog was an inspiration for mine, and many others I’m sure.

I’m far from reaching that goal, but along the way I have become friends with Scott (as well as a friendly rival with our competing open source projects, though we cross-contribute more than we compete). This past Mix07 I finally met the guy in person and he’s a laugh a minute.

Give now and have your contribution matched 7 times! Scott and his coworker Brian Hewitt have started a 48 hour blog matching challenge. From Wednesday May 9th at Noon PST through Friday, May 11 at Noon PST, contributions will be matched by seven bloggers, myself included.

Here’s the donation page.


I am a total noob when it comes to working with Linux. The only experience I have with Unix is in college when I used to pipe the manual of trn via the write command to unsuspecting classmates. If I remember correctly, this is how you do it.

man trn | write username

This would fill the user’s screen with a bunch of text. Always good for a laugh.

Normally, the write command informs the receiver who originated the message. I remember there was some way to hide who I was when sending the message, but I’ve since forgotten that trick. Good times!

As usual, I digress.

I recently decided to try out Ubuntu to see what all the fuss was about. My notes here apply to Virtual PC 2007.

Downloading Ubuntu and Setting up the VPC

To start, download the iso image from the Ubuntu download site. I downloaded the 7.04 version first since I assumed the bigger the version number, the better, right? We’ll see that this isn’t always the case.

For completeness, I also installed 6.06.

When creating a new Virtual PC, make sure to bump up the RAM to at least 256 MB. Also make sure there is enough disk space. I tried to skimp and had a problem with the install. If in doubt, use the default value for disk space.

Installing Ubuntu

After creating a new Virtual PC machine, select the CD menu and then Capture ISO Image and browse for the iso image you downloaded.

Virtual PC Capture ISO Image

When the Ubuntu menu comes up, make sure to select Start Ubuntu in Safe Graphics Mode. I’ll explain why later.

Ubuntu startup

At this point, Ubuntu boots up and if you’re a total noob like me, you might think “Wow! That was a fast install!”.

It turns out that this is Ubuntu running off the CD. I must’ve been tired at the time because this confounded me for a good while, as every time I rebooted, I lost all the progress I had made. ;) The next step is to actually perform the install.

The Mouse Capture Issue

If you’re running Ubuntu 7.04, you might run into an issue where you can’t use the mouse in the VPC. This is due to a bug in some Linux distros where it cannot find PS/2 mice, which is the type that VPC emulates.

This post has a workaround for dealing with this issue by using the keyboard until these distros are fixed. Heck, this might be a great feature of 7.04 since it forces you to go commando and learn the keyboard shortcuts.

Ubuntu 6.06 does not suffer from this problem, so it may be a better starting point if you and the mouse have a great rapport.

The Real Install

At this point, you are ready to start the install. From the top level System menu, select Administration | Install.

Starting the install

The installation process asks you a few simple questions and doesn’t take too long.

The Bit Depth Issue

Earlier I mentioned making sure to start Ubuntu in Safe Graphics Mode. The reason for this is that the default bit depth property for Ubuntu is 24, which Virtual PC does not support. If you fail to heed this advice, you’ll see something like this. Kind of looks like that All Your Base Are Belong to Us video.

Ubuntu without the proper bit depth

Fortunately, I found the fix in this post on Phil Scott’s blog (Phils rule!):

Once I was in there, I found the configuration file for the graphics card in /etc/X11. So type in cd /etc/X11, although I certainly hope even the most hardened of MS-centric people can figure that out :). Once in there I opened up xorg.conf using pico (so type in pico xorg.conf - isn’t this fun?). Browse down to the screen section. Oops, looks like the DefaultDepth property is 24, which Virtual PC doesn’t support. I changed this to 16 and hit CTRL-X to exit (saving when prompted of course). Typed in reboot and awaaaaaaay we go.

When I ran through these steps, I found that I had to use the sudo command (runs the command as a super user) first. For example:

sudo pico xorg.conf

Your results may vary.
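For reference, the setting in question lives in the “Screen” section of xorg.conf. A minimal sketch of what you’re looking for (the surrounding contents of your file will differ):

```
Section "Screen"
    Identifier   "Default Screen"
    DefaultDepth 16    # was 24, which Virtual PC cannot display
EndSection
```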

Virtual Machine Additions for Linux

At this point, you’ll probably want to install the Virtual Machine Additions. Unfortunately, the additions only work for Windows and OS/2 guest operating systems.

However, you can go to the Connect website and download Virtual Machine Additions for Linux. It took me a while to find the actual download link because various blog posts only mentioned the Connect site and not the actual location.

Ubuntu isn’t listed in the list of supported distributions. I’ll let you know if it works for Ubuntu.

Now What?

So now I have Ubuntu running in a virtual machine. It comes with Open Office, Firefox, etc… preinstalled. My next step is to install Mono and MonoDevelop and start tinkering around. Any suggestions on what else I should check out?

UPDATE: Perhaps I should use VMWare 6 instead since it supports multi-monitor in a virtual machine. That’s hot!

code, tdd

Although I am a big fan of Rhino Mocks, I typically favor state-based over interaction-based unit testing, though I am not totally against interaction-based testing.

I often use Rhino Mocks to dynamically create Dummy objects and Fake objects rather than true Mocks, based on this definition given by Martin Fowler.

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ’sent’, or maybe only how many messages it ’sent’.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

Fortunately Rhino Mocks is well suited to this purpose. For example, you can dynamically add a PropertyBehavior to a mock, which generates a backing member for a property. If that doesn’t make sense, let’s let the code do the talking.

Here we have a very simple interface. In the real world, imagine there are a lot of properties.

public interface IAnimal
{
  int Legs { get; set; }
}

Next, we have a simple class we want to test that interacts with IAnimal instances. This is a contrived example.

public class SomeClass
{
  private IAnimal animal;

  public SomeClass(IAnimal animal)
  {
    this.animal = animal;
  }

  public void SetLegs(int count)
  {
    this.animal.Legs = count;
  }
}

Finally, let’s write our unit test.

[Test]
public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  //Creates an IAnimal stub
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  //Makes the Legs property actually work, creating a fake.
  SetupResult.For(animalMock.Legs).PropertyBehavior();
  mocks.ReplayAll();
  animalMock.Legs = 0;
  Assert.AreEqual(0, animalMock.Legs);
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
}

Keep in mind here that I did not need to stub out a test class that implements IAnimal. Instead, I let Rhino Mocks dynamically create one for me. The call to PropertyBehavior() modifies the mock so that the Legs property exhibits property behavior. Behind the scenes, it’s generating something like this:

public int Legs
{
  get { return this.legs; }
  set { this.legs = value; }
}
int legs;

At this point, you might wonder what the point of all this is. Why not just create a test class that implements the IAnimal interface? It isn’t that many more lines of code.

Now we get to the meat of this post. Suppose the interface was more realistic and looked like this:

public interface IAnimal
{
  int Legs { get; set; }
  int Eyes { get; set; }
  string Name { get; set; }
  string Species { get; set; }
  //... and so on
}

Now you have a lot of work to do to implement this interface just for a unit test. At this point, some readers might be squirming in their seats ready to jump out and say, “Aha! That’s what ReSharper|CodeSmith|Etc… can do for you!”

Fair enough. And in fact, the code to add the PropertyBehavior to each property of the IAnimal mock starts to get a bit cumbersome in this situation too. Let’s look at what that would look like.


Still a lot less code to maintain than implementing each of the properties of the interface, but not very pretty. So I wrote up a quick utility method for adding the PropertyBehavior to every property of a mock.

/// <summary>
/// Sets all public read/write properties to have a 
/// property behavior when using Rhino Mocks.
/// </summary>
/// <param name="mock">The mock to modify.</param>
public static void SetPropertyBehaviorOnAllProperties(object mock)
{
  PropertyInfo[] properties = mock.GetType().GetProperties();
  foreach (PropertyInfo property in properties)
  {
    if (property.CanRead && property.CanWrite)
    {
      property.GetValue(mock, null);
      LastCall.On(mock).PropertyBehavior();
    }
  }
}

Using this method, this approach now has a lot of advantages over explicitly implementing the interface. Here’s an example of the test now, with a test of another property.

[Test]
public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  //Creates an IAnimal stub
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  SetPropertyBehaviorOnAllProperties(animalMock);
  mocks.ReplayAll();
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
  animalMock.Eyes = 2;
  Assert.AreEqual(2, animalMock.Eyes);
}

Be warned, I didn’t test this with indexed properties. It only applies to public read/write properties.

Hopefully I can convince Ayende to include something like this in a future version of Rhino Mocks.


Thought I’d post a few pics from mix with some notes. Click on any for a larger view.

Phil, Jeff, and Jon

This first one is of the three amigos, not to mention coauthors. That is me on the left sporting a Subtext shirt, Jeff Atwood in the middle, complete with CodingHorror sticker, and Jon Galloway on the right.

Scott Hanselman and Rob Conery

That’s Scott Hanselman (who runs that other .NET open source blog engine) on the left and Rob Conery (of Subsonic fame) on the right. The joke here is that Scott is standing on some stairs because Rob Conery is a giant.

ScottGu and Miguel

Sometimes, the best parts of conferences are outside of the sessions. A few of us were sitting around having drinks when we spotted Scott Guthrie walking by. Not content to just let him be on his merry way, as that would be the polite thing to do, we called him over, and he proceeded to regale us with stories and walked us through some of the namespaces and such of Silverlight.

ScottGu, as he is known, is a total class act and I was happy to finally meet him in person.


Here I am regaling Sam Ramji with an obviously hilarious joke. The picture is not, you know, staged whatsoever. No, not at all.

Sam is Director of Platform Technology Strategy and runs the Open Source Software Lab at Microsoft. Didn’t know there was someone in charge of Open Source at Microsoft? Neither did I until meeting him. A few of us had his ear during a dinner. Hopefully we’ll see some interesting things come out of it.

Tantek and

Tantek Çelik was walking by and noticed my XKCD t-shirt which sports a unix joke and had to take a picture. Not many people got the joke.


John Lam prepares for the Dynamic Language Runtime session with Jim Hugunin. This was one of my favorite sessions. One of the demonstrations was an application that allowed them to evaluate script dynamically using Silverlight. The neat part was they could switch languages, for example from Ruby to Python, and still evaluate properties of objects they had declared in the previous language. Hot!

I got a chance to hang out with John more at Pure and really enjoyed his perspective on Microsoft and child rearing.

Jeff, Phil, and Miguel

Jeff, Jon, and I intently watch as Miguel de Icaza gives us a demo of Mono. The rotating cube desktop is pretty sweet.


Jeff cannot conceal his Man Crush on Miguel.


Scott Stanfield, CEO of Vertigo, on the right playing Guitar Hero, the addict. Tonight he and I cleaned up at Spanish 21, winning over $300 each. This surprised the guy at the craps table who informed us that Spanish 21 is a terrible game in terms of odds. But aren’t they all?


Just a couple of notes while I have a break during the conference. I’ll try to find some time to write about my impressions of the technologies when I’ve had time to reflect.

In the meanwhile, allow me to tell a story about the Italia soccer jersey I wore on Sunday. It was a gift from a friend and I figured it fit the theme of staying at the Venetian. Get it? Italy!?

On Sunday, when Jon arrived in L.A. from SD, we went to brunch with my wife before leaving for Las Vegas. We decided to go to a nice French brunch place, La Dijonaise. Already some of you must see the conflict brewing.

Here I am, walking into a French restaurant wearing an Italian soccer jersey. The guy at the door took one look at me and told me, in a deeply French accent, “No no no. You cannot come in here.”

Eric Kemp, Miguel De Icaza, Jon Galloway, John Osborn,

I figured he was joking, but it took me a moment to realize why this guy I had never met was joking with me, as he pointed to my shirt. Silly me.


Yesterday, while hanging out in the so-called “BlogZone”, Tim Heuer pulled me aside for a short audio interview on the topic of Subtext and Open Source, two things I love to talk about, and good luck getting me to shut up once you get me started. ;)

This was a surprise for me, as the last time I was interviewed was by a reporter for my college paper after my soccer team used the school paper to dry windows for a fund-raising car wash. I told the reporter that the paper was good for drying windows because it doesn’t leave streaks. I was merely relaying what someone told me when they went to grab the papers, but my teammates all congratulated me for sticking it to the paper. Funny how that works out sometimes.

Back to the present, I cringed while listening to the interview as I learned I’m much less eloquent than I hoped I would be in such a situation. Apparently I suffer from the “You Know” disease that Atwood suffers from. This is simply due to my nervousness at being interviewed, along with the fact that we were in a very noisy room surrounded by a lot of distractions (yes, this is me making excuses).

Not only that, there’s a point in the interview where I seem to lose focus and stammer. That’s because Scott Hanselman was calling me and I wasn’t sure whether to stop and give him directions to the BlogZone or continue. As you can hear, I continue and he found it just fine.

Unfortunately, there’s a lot more I would’ve liked to have said. Upon being asked whether the community has chipped in on Subtext, I started off with the example of recent commits related to the build server and mentioned a couple of people. I was just getting warmed up and didn’t get a chance to mention many others who have contributed. I apologize, but the interview probably would’ve gone on for hours if I had the proper time to express my appreciation to the Subtext developers and community.

The lesson learned for me is to slow down, take a deep breath, and don’t be afraid to take a moment to collect my thoughts. Don’t be afraid of dead air when speaking publicly.

In any case, Tim, I enjoyed being interviewed. I personally think you have a talent for it and would have done a much better job than the painful interview we were subjected to during the keynote. Seriously, they should’ve had you up there asking Ray and Scott questions.

In case you didn’t know, Tim contributed what is probably the most popular skin to Subtext, Origami.


Well, Jon and I arrived safely, driving into Vegas around 4 PM last evening. Upon arriving, we met up with Miguel De Icaza, the founder of the Mono project, and headed over to the Mashup Lounge, where we ran into John Osborn, a senior editor with O’Reilly.

Being the small world that it is, John was a reviewer for the Windows Developer Power Tools book and happened to review the section I wrote on Tortoise CVS/SVN.

We were joined by Eric Kemp, one of the members of the Subsonic team, and a fun conversation on Open Source, Mono, politics, etc… ensued.

Later on in the evening we headed over to the BlogZone, a suite in the Venetian towers with a couple of Xboxes, food, and drinks. We were later joined by Jeff Atwood, Scott Hanselman, Clemens Vasters, and Steve Maine, and a deadly game of Guitar Hero ensued.

Keynote is about to start, will write more later.


If you’ve read my blog at all, you know I’m a big proponent of Continuous Integration (CI). For the Subtext project, we use CruiseControl.NET. I’ve written about our build process in the past.

Given the usefulness of having a build server, you can understand my frustration and sadness when our build server recently took a dive. I bought a replacement hard drive, but it was the wrong kind (a rookie mistake on my part, accidentally getting an IDE drive rather than SATA).

Members of the Subtext team such as Simo, myself, and Scott Dorman have put in countless hours perfecting the build server. If only we had had CI Factory in our toolbelt before we started.

CI Factory is just that, a factory for creating CruiseControl.NET scripts. Scott Hanselman calls it a Continuous Integration accelerator. It bundles just about everything you need for a complete CI setup such as CCNET, NUnit or MbUnit, NCover, etc…

In the latest dnrTV episode, Jay Flowers, the creator of CI Factory, joins hosts Scott Hanselman and Carl Franklin to create a Continuous Integration setup using CI Factory in around an hour.

The project they chose to use as a demonstration is none other than Subtext! Given the number of hours we’ve put into setting up the Subtext build server, this is quite an ambitious undertaking, especially while being recorded.

Can you imagine having to write code while two guys provide color commentary? I’d probably wilt under that pressure, but Jay handles it with aplomb.

The video runs a bit long, but is worth watching if you plan to set up CI for your own project. The amount of XML configuration with CI Factory might seem daunting at first, but trust me when I say that it’s much worse for CCNET by itself. CI Factory reduces the amount of configuration by a lot, and Jay is constantly making it easier and easier to set up.

As an aside, Jay Flowers scores big points with me for also being a member of the MbUnit team, my favorite unit testing framework. Kudos to Jay, Scott, and Carl for a great show.


Charles Petzold makes the following lament in response to Jeff Atwood’s review of two WPF books, one of them being Petzold’s.

I’ve been mulling over Coding Horror’s analysis of two WPF books, not really thrilled about it, of course. The gist of it is that modern programming books should have color, bullet points, boxes, color, snippets, pictures, color, scannability, and color.

Does that remind you of anything?

Apparently the battle for the future of written communication is over. Prose is dead. PowerPoint has won.

With all due respect to Mr. Petzold, and he certainly deserves much respect, I think the comparison to PowerPoint is unfair and really misses the point.

Since when is technical writing prose?

Well it often does meet one of the definitions of prose.

1. the ordinary form of spoken or written language, without metrical structure, as distinguished from poetry or verse.
2. matter-of-fact, commonplace, or dull expression, quality, discourse, etc.

Using that definition, I fail to see how the death of dull and commonplace expression signals a loss for the future of written communication. If anything, it’s a step in the right direction.

Technical writing is supposed to teach and help readers learn and retain information. Having visual aids not only helps cement the information in your mind, but also aids in finding that information when you need to look it up again.

Long passages of unbroken prose are great for getting lost in mental imagery when reading a novel, but it sucks for recall. Prose is alive and well in its proper place. Save the lengthy prose for the next great work of fiction, but cater to how the brain works when writing something meant to be absorbed, learned, and remembered.

Head First Design Patterns cover

I think the Head First series really gets it when it comes to how the mind works and learns. From the introduction to Head First Design Patterns:

Your brain craves novelty. It’s always searching, scanning, waiting for something unusual. It was built that way, and it helps you stay alive.

Today, you’re less likely to be a tiger snack. But your brain’s still looking. You just never know.

So what does your brain do with all the routine, ordinary, normal things you encounter? Everything it can to stop them from interfering with the brain’s real job—recording things that matter. It doesn’t bother saving the boring things; they never make it past the “this is obviously not important” filter.

In a subsequent section, the book describes the Head First learning principles, a couple of which I quote below. I highly recommend reading this entire intro the next time you are in the bookstore.

Make it visual. Images are far more memorable than words alone, and make learning much more effective (up to 89% improvement in recall and transfer studies). It also makes things more understandable. Put the words within or near the graphics they relate to, rather than on the bottom or on another page, and learners will be up to twice as likely to solve problems related to the content.

Use a conversational and personalized style. In recent studies, students performed up to 40% better on post-learning tests if the content spoke directly to the reader, using a first-person conversational style rather than taking a formal tone.

What we see here is that study after study shows that appropriate use of images and graphics improves recall. Not only that, but a casual tone, like that found in a blog, also helps recall.

Unfortunately, Petzold draws an unfair analogy between Adam Nathan’s WPF book and PowerPoint. We’ve all heard that PowerPoint is evil, but the evil is in how users misuse PowerPoint, not PowerPoint itself. PowerPoint certainly makes it easy to go to the extreme with noisy graphics resulting in garish crowded presentations.

It’s this proliferation of PowerPoint presentations that favor graphics to the detriment of the content that leads to the disdain towards PowerPoint. But it is also possible to create sublime presentations with PowerPoint with just the right amount of graphics.

Even Tufte would acknowledge that getting rid of graphics and bullet points completely is also extreme in the opposite direction and works against the real goal, to convey information in a manner that the audience can understand and retain it.

Drawing a comparison between Nathan’s book and PowerPoint suggests that Nathan’s book is all fluff and flash. But based on reading sample chapters, that is hardly the case. As Jeff wrote, the graphics, colors, and bullets are all used judiciously and appropriately. This isn’t a case of Las Vegas trying to pretend it is Florence. There’s real substance here.

code, tech, blogging

Several people have asked me recently about the nice code syntax highlighting on my blog. For example:

public string Test()
{
  //Look at the pretty colors
  return "Yay!";
}

A long time ago, I wrote about using the Manoli code formatter for converting code to HTML.

But these days, I use Omar Shahine’s Insert Code for Windows Live Writer plugin for, you guessed it, Windows Live Writer. This plugin just happens to use the Manoli code formatter to perform the syntax highlighting.


I recommend downloading and referencing the CSS stylesheet from the Manoli site and making sure to uncheck the Embed StyleSheet option in the plugin.

The dropshadow around the code is some CSS I found on the net.


UPDATE: This functionality is now rolled into the latest version of MbUnit.

A long time ago Patrick Cauldwell wrote up a technique for managing external files within unit tests by embedding them as resources and unpacking the resources during the unit test. This is a powerful technique for making unit tests self contained.

If you look in our unit tests for Subtext, I took this approach to heart, writing several different methods in our UnitTestHelper class for extracting embedded resources.
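The unpacking itself boils down to looking up the resource via Assembly.GetManifestResourceStream and writing the result to disk. Here is a rough sketch of that plumbing; the ResourceExtractor type and its method names are my own illustration, not the actual UnitTestHelper API:

```csharp
using System;
using System.IO;

public static class ResourceExtractor
{
    // Looks up an embedded resource in the assembly that contains the
    // given type. Returns null if no resource by that name exists.
    public static Stream GetResourceStream(Type typeInAssembly, string resourceName)
    {
        return typeInAssembly.Assembly.GetManifestResourceStream(resourceName);
    }

    // Unpacks a stream to a file on disk, e.g. during test setup.
    public static void ExtractToFile(Stream resource, string destination)
    {
        using (FileStream file = File.Create(destination))
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = resource.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read);
            }
        }
    }
}
```

Note that the resource name must match the full manifest name (typically default namespace plus folder path plus file name), which is a common source of null streams.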

Last night, I had the idea to make the code cleaner and even easier to use by implementing a custom test decorator attribute for my favorite unit testing framework, MbUnit.

Usage Examples

The following code snippets demonstrate the usage of the attribute within a unit test. These samples assume an embedded resource already exists in the same assembly in which the unit test is defined.

This first test demonstrates how to extract the resource to a specific file. You can specify a full destination path, or a path relative to the current directory.

[Test]
[ExtractResource("Embedded.Resource.Name.txt", "TestResource.txt")]
public void CanExtractResourceToFile()
{
  Assert.IsTrue(File.Exists("TestResource.txt"), "The resource was not extracted.");
}

The next example demonstrates how to extract the resource to a stream rather than a file.

[Test]
[ExtractResource("Embedded.Resource.Name.txt")]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using(StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

As demonstrated in the previous example, you can access the stream via the static ExtractResourceAttribute.Stream property. This is only set if you don’t specify a destination.

In case you’re wondering, the stream is stored in a static member marked with the [ThreadStatic] attribute. That way, if you are taking advantage of MbUnit’s ability to repeat a test multiple times using multiple threads, you should be OK.

What if the resource is embedded in another assembly other than the one you are testing?

Not to worry. You can specify a type (any type) defined in the assembly that contains the embedded resource like so:

[Test]
[ExtractResource("Embedded.Resource.Name.txt"
  , "TestResource.txt"
  , ResourceCleanup.DeleteAfterTest
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResource()
{
  Assert.IsTrue(File.Exists("TestResource.txt"), "The resource was not extracted.");
}
[Test]
[ExtractResource("Embedded.Resource.Name.txt"
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using (StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

This attribute should go a long way to making unit tests that use external files cleaner. It also demonstrates how easy it is to extend MbUnit.

A big Thank You goes to Jay Flowers for his help with this code. And before I forget, you can download the code for this custom test decorator here.

Please note that I left in my unit tests for the attribute which will fail unless you change the embedded resource name to match an embedded resource in your own assembly.

0 comments suggest edit

Take a good look at this picture.


That there is pretty much my Shuttle machine today, metaphorically speaking of course.

We had a brief power outage today which appears to have fried just my hard drive, if I’m lucky. This machine was hosting our build server within a VMWare virtual machine.

Fortunately, my main machine was not affected by the outage because it is connected to a battery backup.

The real loss is all the time it will take me to get the build server up and running again. Not to mention we were planning an imminent release and rely on our build server to automatically prepare a release. I hate manual work.

0 comments suggest edit

Before I begin, I should clarify what I mean by using a database as an API integration point.

In another life in a distant galaxy far far away, I worked on a project in which we needed to integrate a partner’s system with our system. The method of integration required that when a particular event occurred, they would write some data to a particular table in our database, which would then fire a trigger to perform whatever actions were necessary on our side (vague enough for ya?).

In this case, the data model and the related stored procedures made up the API used by the partner to integrate into our system.

So what’s the problem?

I always felt this was ugly in a few ways; I’m sure you’ll think of more.

  1. First, we have to make our database directly accessible to a third party, exposing ourselves to all the security risk that entails.
  2. We’re not really free to make schema changes as we have no abstraction layer between the database and any clients to the system.
  3. How exactly do you define a contract in SQL? With Web Services, you have XSD. With code, you have interfaces.

Personally, I’d like to have some sort of abstraction layer for my integration points so that I am free to change the underlying implementation.

Why am I bringing this up?

A little while ago, I was having a chat with a member of the Subtext team, telling him about the custom MembershipProvider we’re implementing for Subtext 2.0 to fit in with our data model. His initial reaction was that developer-users are going to grumble that we’re not using the “Standard” Membership Provider.

The “Standard”?

I question this notion of “The Standard Membership Provider”. Which provider is the standard? Is it the ActiveDirectoryMembershipProvider?

It is in anticipation of developer grumblings that I write this post to plead my case and perhaps rail against the wind.

The point of the Provider Model

You see, it seems that the whole point of the Provider Model is lost if you require a specific data model. The whole point of the provider model is to provide an abstraction to the underlying physical data store.

For example, Rob Howard, one of the authors of the Provider Pattern wrote this in the second part of his introduction to the Provider Pattern (emphasis mine).

A point brought up in the previous article discussed the conundrum the ASP.NET team faced while building the Personalization system used for ASP.NET 2.0. The problem was choosing the right data model: standard SQL tables versus a schema approach. Someone pointed out that the provider pattern doesn’t solve this, which is 100% correct. What it does allow is the flexibility to choose which data model makes the most sense for your organization. An important note about the pattern: it doesn’t solve how you store your data, but it does abstract that decision out of your programming interface.

What Rob and Microsoft realized is that no one data model fits all. Many applications will already have a data model for storing users and roles.

The idea is that if you write code and controls against the provider API, the underlying data model doesn’t matter. This is emphasized by the goals of the provider model according to the MSDN introduction…

The ASP.NET 2.0 provider model was designed with the following goals in mind:

  • To make ASP.NET state storage both flexible and extensible
  • To insulate application-level code and code in the ASP.NET run-time from the physical storage media where state is stored, and to isolate the changes required to use alternative media types to a single well-defined layer with minimal surface area
  • To make writing custom providers as simple as possible by providing a robust and well-documented set of base classes from which developers can derive provider classes of their own

It is expected that developers who wish to pair ASP.NET 2.0 with data sources for which off-the-shelf providers are not available can, with a reasonable amount of effort, write custom providers to do the job.

Of course, Microsoft made it easy for all of us developers by shipping a full featured SqlMembershipProvider complete with database schema and stored procedures. When building a new implementation from scratch, it makes a lot of sense to use this implementation. If your needs fit within the implementation, then that is a lot of work that you don’t have to do.

Unfortunately, many developers took it to be the gospel truth and the standard for how the data model should be implemented. It is really only one possible database implementation of a Membership Provider.
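To make the abstraction concrete, here is a minimal sketch of the provider pattern’s shape, stripped of any ASP.NET details. These types are my own illustration, not the actual Membership API: calling code depends only on the abstract base class, so the backing store can be swapped without touching that code.

```csharp
using System.Collections.Generic;

// The abstraction: application code and controls are written
// against this base class, never against a concrete data store.
public abstract class UserStoreProvider
{
    public abstract bool ValidateUser(string username, string password);
}

// One possible backing store: an in-memory dictionary standing in
// for an application's own existing user tables. A SQL-backed or
// Active Directory-backed provider would subclass the same base.
public class InMemoryUserStoreProvider : UserStoreProvider
{
    private readonly Dictionary<string, string> users =
        new Dictionary<string, string>();

    public void AddUser(string username, string password)
    {
        users[username] = password;
    }

    public override bool ValidateUser(string username, string password)
    {
        string stored;
        return users.TryGetValue(username, out stored) && stored == password;
    }
}
```

Any code that accepts a UserStoreProvider works unchanged whichever implementation is plugged in; that is the entire point Rob Howard makes above.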

An Example Gone Wrong

There is one particular open source application that I recall that already had a fantastic user and roles implementation at the time that the Membership Provder Model was released. Their existing implementation was in all respects, a superset of the features of the Membership Provider.

Naturally there was a lot of pressure to implement the Membership Provider API, so they chose to simply implement the SqlMembershipProvider’s tables side by side with their own user tables.

Stepping through the code in a debugger one day, I watched in disbelief when upon logging in as a user, the code started copying all users from the SqlMembershipProvider’s stock aspnet_* tables to the application’s internal user tables and vice versa. They were essentially keeping two separate user databases in synch on every login.

In my view, this was the wrong approach to take. It would’ve been much better to simply implement a custom MembershipProvider class that read from and wrote to their existing user database tables.

For the features of their existing users and roles implementation that the Membership Provider did not support, they could have been exposed via their existing API.

Yes, I’m armchair quarterbacking at this point as there may have been some extenuating circumstances I am not aware of. But I can’t imagine doing a full multi-table synch on every login being a good choice, especially for a large database of users. I’m not aware of the status of this implementation detail at this point in time.

The Big But

Someone somewhere is reading this thinking I’m being a bit overly dogmatic. They might be thinking

But, but I have three apps in my organization which communicate with each other via the database just fine. This is a workable solution for our scenario, thank you very much. You’re full of it.

I totally agree on all three counts.

For a set of internal applications within an organization, it may well make sense to integrate at the database layer, since all communications between apps occurs within the security boundary of your internal network and you have full control over the implementation details for all of the applications.

So while I still think even these apps could benefit from a well defined API or Web Service layer as the point of integration, I don’t think you should never consider the database as a potential integration point.

But when you’re considering integration for external applications outside of your control, especially applications that haven’t even been written yet, I think the database is a really poor choice and should be avoided.

Microsoft recognized this with the Provider Model, which is why controls written for the MembershipProvider are not supposed to assume anything about the underlying data store. For example, they don’t make direct queries against the “standard” Membership tables.

Instead, when you need to integrate with a membership database, use the API.

Hopefully future users and developers of Subtext will also recognize this when we unveil the Membership features in Subtext 2.0 and keep the grumbling to a minimum. Either that or point out how full of it I am and convince me to change my mind.

See also: Where the Provider Model Falls Short.

code 0 comments suggest edit

I don’t think it’s too much of a stretch to say that the hardest part of coding is not writing code, but reading it. As Eric Lippert points out, Reading code is hard.

First off, I agree with you that there are very few people who can read code who cannot write code themselves. It’s not like written or spoken natural languages, where understanding what someone else says does not require understanding why they said it that way.

Hmmm, now why did Eric say that in that particular way?

This in part is why reinventing the wheel is so common (apart from the need to prove you can build a better wheel). It’s easier to write new code than try and understand and use existing code.

It is crucial to try and make your code as easy to read as possible. Strive to be the Dr. Seuss of writing code. Making your code easy to read makes it easier to use.

The basics of readable code include the usual advice of following code conventions, formatting code properly, and choosing good names for methods and variables, among other things. This is all included within Code Complete which should be your software development bible.

Aside from all that, a key tactic to improve code readability and usability is to make your code’s intentions crystal clear.

Oftentimes it’s paying attention to the little things that can really help your code along this path. Let’s look at a few examples.

out vs ref

A while ago I encountered some code that looked something like this contrived example:

int y = 7;
bool success = TrySomething(someParam, ref y);

Ignore the terrible names and focus on the parameters. At a glance, what is your initial expectation of this code regarding its parameter?

When I encountered this code, I assumed that the y parameter value passed in to this method is important somehow and that the method probably changes the value.

I then took a look at the method (keep in mind this is all extremely simplified from the actual code).

public bool TrySomething(object something, ref int y)
{
  try
  {
    y = resultOfCalculation(something);
  }
  catch(Exception)
  {
    return false;
  }
  return true;
}

Now this annoyed me. Sure, this method is perfectly valid and will compile. But notice that the value of y is never used. It is immediately assigned to something else.

The intention of this method is not clear. Its intent is not to ever use the value of y, but merely to set it. But since the method uses the ref keyword, you are required to initialize the argument before you pass it. You can’t do this:

int y;
bool success = TrySomething(someParam, ref y);

In this case, using the out keyword expresses the intentions much better.

public bool TrySomething(object something, out int y)
{
  try
  {
    y = resultOfCalculation(something);
  }
  catch(Exception)
  {
    y = 0;
    return false;
  }
  return true;
}

It’s a really teeny tiny thing, something you might accuse me of being nitpicky even bringing it up, but anything you can do so that the reader of the code doesn’t have to interrupt her train of thought to figure out the meaning of the code will make your code more readable and the API more usable.
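The BCL itself follows this convention; int.TryParse, for instance, takes an out parameter precisely because it never reads the incoming value:

```csharp
using System;

public class TryParseDemo
{
    public static void Main()
    {
        int result; // no need to initialize: out means TryParse only writes to it
        if (int.TryParse("42", out result))
        {
            Console.WriteLine(result); // prints 42
        }
    }
}
```

Had TryParse taken ref instead, every caller would be forced to assign a throwaway value to result first, obscuring the fact that the value going in is meaningless.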

Boolean Arguments vs Enums

Brad Abrams touched upon this one a while ago. Let’s look at an example.

BlogPost p = CreatePost(post, true, false);

What exactly is this code doing? Well it’s obvious it creates a blog post. But what does that true indicate? Hard to say. I’d better pause, look up the method, and then move on. What a pain!

BlogPost p = CreatePost(post
  , PostStatus.Published, CommentStatus.CommentsDisabled);

In the second case, the intention of the code is much clearer, and there is no interruption for the reader to figure out the meaning of the true or false as in the first method.
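For the second snippet to compile, the enums might be declared along these lines (the member names beyond those used above are guesses on my part):

```csharp
// Hypothetical enums backing the CreatePost example above; the
// exact members are assumptions, but the shape is what matters.
public enum PostStatus
{
    Draft,
    Published
}

public enum CommentStatus
{
    CommentsEnabled,
    CommentsDisabled
}
```

Even defining two-member enums like these purely to replace boolean parameters is a worthwhile trade: the call site becomes self-documenting.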

Assigning a Value You Don’t Use

Another common example I’ve seen is where the result of a method is assigned to the value of a variable, but the variable is never used. I think this often happens because some developers falsely believe that if a method returns a value, that value has to be assigned to something.

Let’s look at an example that uses the TrySomething method I wrote earlier.

int y;
bool success = TrySomething(something, out y);
/*success is never used again.*/

Fortunately, Resharper makes this sort of thing stick out like a sore thumb. The problem here is that as a code reader, I’m left wondering if you meant to use the variable and forgot, or if this is an unnecessary declaration. Do this instead.

int y;
TrySomething(something, out y);

Again, these are very small things, but they make a big difference. Don’t worry about coming across as anal (you will) because the payout is worth it in the end.

What are some examples that you can think of to make code more readable and usable?

UPDATE: Lesson learned. If you oversimplify your code examples, your main point is lost. Especially on the topic of code readability. Touche! I’ve updated the sample code to better illustrate my point. The comments may be out of synch with what you read here as a result.

UPDATE AGAIN: I found another great blog post about writing concise code that adds a lot to this discussion. It is part of the Fail Fast and Return Early school of thought. Short, concise and readable code - invert your logic and stop nesting already!

0 comments suggest edit

According to FeedBurner, many of my readers are from London, so I thought you might enjoy this little tale.

Tonight, I met someone extremely famous, or so I was told. When I got home, I looked him up, and sure enough, he is huge in Europe. According to Wikipedia, “he has sold more albums in the UK than any other British solo artist in history”.

Have any of you heard of Robbie Williams?


My wife knew who he was immediately. Must be the fact that she’s a British citizen (she has dual Japanese citizenship as well). She played one of his songs from an Alice 97.3 compilation we have. I rather liked it.

It turns out that he runs (owns?) a soccer team in Los Angeles. We had a friendly scrimmage set up with them at UCLA. I fully expected we’d be playing on the intramural fields where everyone else plays, but instead we played on the immaculate UCLA Football team’s practice field.

This seems to be a trend I’m noticing among British music stars. They move to Los Angeles and start up soccer teams to manage. They also seem to have the means to absorb some of the best talent in Los Angeles in doing so.

As I’ve written before, Steve Jones of the Sex Pistols runs a team in my league. I have heard that Rod Stewart has a team in Los Angeles as well. I suppose if the day comes when I can’t run on the pitch, and if I had that sort of money, I could see running a soccer club (sorry, Football Club) as a fantastic hobby.


Not to be outdone, my team now has its own celebrity member. Santiago Cabrera from the TV show Heroes is now a member of our team.

Fortunately, he is a very talented soccer player, scoring a bicycle kick against us in our scrimmage tonight (he plays on the other team as well). Now if we could just get some former pros to join us to help solidify our midfield. Zidane, I’m looking at you buddy!

0 comments suggest edit

Tim Heuer has been on a tear lately, submitting some great new skins to the Subtext Skin Showcase.

The Showcase is the part of the site in which we display user submitted skins and allow others to download the skins. The other part of the site displays the default skins in Subtext.

Blue Terrafirma, Dirtylicious, Informatif

It appears that Tim has been porting some of the nicer designs from the Open Designs website, a site devoted to open source web design.

Tim happens to also be the creator of Origami (which you can see in use on Rob Conery’s Blog), which many consider to be the nicest skin in Subtext.

If you are a Subtext user, try out some of these skins. They may find their way into future releases of Subtext.