comments edit

With the announcement of the 1.9.5 release of Subtext, I thought I should talk about the new tagging and tag cloud feature. You can see it in action in the sidebar of my site.

To implement tagging, we followed the model I wrote about before. Tags do not replace categories in Subtext. Instead, we adopted an approach using microformats.

We see categories as a structural element and navigational aid, whereas we see tags as meta-data. For example, in the future, we might consider implementing sub-categories like WordPress does.

The other reason not to implement tags as categories is that most people create way more tags than categories and blog clients are not well suited to deal with a huge number of categories.

To create a tag, simply use the rel-tag microformat. For example, use the following markup…

<a href="http://example.com/tag/ASP.NET" rel="tag">ASP.NET</a>

…to tag a post with ASP.NET.

Please note that according to the microformat, the last section of the URL defines the tag, not the text within the anchor element. For example, the following markup…

<a href="http://example.com/tag/Subtext" rel="tag">Blog</a>

…tags the post with Subtext and not Blog.

Also note that the URL does not have to point to any particular site. It can point anywhere. We just take the last portion of the URL, per the microformat.
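For the curious, extracting the tag from such a link is straightforward. The sketch below is purely illustrative (a hypothetical helper, not Subtext's actual implementation): it takes the last path segment of the href and URL-decodes it, per the rel-tag microformat.

```csharp
using System;

public static class RelTagParser
{
    //Illustrative helper, not Subtext's actual code.
    //Per rel-tag, the tag is the last path segment of the href,
    //URL-decoded, regardless of the anchor text.
    public static string GetTag(string href)
    {
        string trimmed = href.TrimEnd('/');
        int lastSlash = trimmed.LastIndexOf('/');
        string lastSegment = (lastSlash >= 0)
            ? trimmed.Substring(lastSlash + 1)
            : trimmed;
        return Uri.UnescapeDataString(lastSegment);
    }
}
```

So a link ending in /tag/ASP.NET yields the tag "ASP.NET", and a link ending in /tag/Subtext is tagged "Subtext" even if the anchor text reads "Blog".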

comments edit

It is with great pleasure and relief that I announce the release of Subtext 1.9.5. Between you and me, I’m just happy to get this release out before the kid is born.

As with most point releases, this is primarily a bug fix release, but we found time to introduce a few nice new features - most notably support for tagging and Identicons.

New Features

  • Content Tagging and Tag Cloud - for more details, refer to this post
  • Identicon Support - Uses the Identicon Handler project on CodePlex.
  • MyBrand FeedBurner Support - Updated our FeedBurner implementation to support custom FeedBurner URLs
  • Upgrade to Lightbox 2.0
    • If you referenced the default lightbox skin in your custom skin, please reference this post by Simone to understand how to update the skin.
  • Author CSS Class - The CSS class of “author” is added to comments left by the owner of a blog (must be logged in when leaving comment for this to work). This allows custom skin authors to highlight comments by authors.
  • Credits Page - In the Admin section, we give credit where credit is due, displaying a list of the open source products we make use of in building Subtext.
  • Implemented ASP.NET AJAX - We replaced the MagicAjax panel with the ASP.NET AJAX libraries. Keep in mind that this requires some new Web.config configuration sections, so be careful when merging your Web.config changes.

Bug Fixes

Clean Installation

A clean install of Subtext is relatively painless. Essentially, you copy all the files to your web server, create a database if you don’t already have one, update Web.config to point to your database, and you’re all set. For more details, read the Clean Installation Instructions.


Upgrading from a previous 1.9 version is relatively straightforward. For the safest approach, follow the upgrade instructions here.

I’ve written a command line tool for upgrading Subtext, but it isn’t production ready yet. Our goal is to make the upgrade process as seamless as possible in future versions. If you’d like to help with that, we’d love to have your contributions!


As always, you can download the latest release here. The install file contains just the files you need to deploy to your webserver. The source file contains full source code.


As always, many thanks go out to the many Subtext contributors and community members who helped make this latest release possible. Subtext keeps on getting better because of the community involvement. Just take a look at the improvements to our Continuous Integration and build server as an example.

If you’d like to contribute, we’re always looking for help. Great positions are open!

What’s Next?

I’m wrapping up a project for a client in which I was able to implement multiple authors per blog. Hopefully, this means that Subtext 2.0 won’t take as long to release as 1.9.5 did.

We are constantly improving our development process and refactoring the code. The big push for 2.0 is to get the Plugin Model and custom MembershipProvider rock solid and to refactor and clean up the code heavily.

comments edit

Today is my last day of work at VelocIT, a company I helped start as employee #1 and had (and still have) high hopes for.

No, I’m not being fired for blogging too much or embezzling funds. No, there wasn’t a big falling out with partners in the company throwing books at each other and screaming expletives. Unfortunately, nothing dramatic and tabloid-worthy like that happened at all.

I simply lost interest in being a consultant, and I blame Subtext. Micah Dylan, the CEO and Founder of VelocIT and my good friend, and I often talked about the idea that there are two general types of developers (I’m sure there are many more).

  1. Developers who are easily bored and love to learn about new businesses and business models. Staying on one project forever would cause these devs to go insane. They love the excitement of jumping from client to client and project to project.
  2. Developers who love to craft and hone a single code-base through multiple versions. These devs are fine sticking with a project for a long time and get enjoyment in watching the application take form over the years.

For a long time, I’ve been more firmly in camp #1 with tendencies towards #2. But over the past couple of years working on Subtext, I’ve never gotten bored with working on the same code and realized I have been in Camp #2 for a good while now.

Sure, I do get excited about learning new technologies all the time, but now it is in the context of how they will help me make Subtext a better blog engine.

Not only that, I found that what I most love about the Subtext project is not just the craft of developing an application over multiple versions, but the joy in building a community around that project.

Maybe this is because with Subtext, my “clients” are other developers. I understand developers better than I do other clients because their pain is often my pain. I just don’t have the same pains that a Director of Marketing does (well actually I kind of do with Subtext, but I don’t have any budget to address those pains so I ignore the pain).

My heart just hasn’t been in consulting for a good while now, but I couldn’t leave while we were struggling along at the brink of going out of business. So I pushed on, helped land a big client, and now it looks like VelocIT is close to having more projects on its hands than employees! So if you love consulting and software development, send Jon Galloway your resume.

I will still be involved with VelocIT in a limited capacity. Discussions are still underway, but I hope to remain on as a Board Member and shareholder. The team we’ve assembled at VelocIT is among the best and brightest I have ever worked with. I love working with them and working from home. I will certainly miss all of that.

So where am I going next?

I’ll be taking a position with Koders Inc. as the Product Manager of the website, an Open Source code search engine. I think this will be a good fit for me due to my passion for open source software.

My goal is to help developers become more productive via search-driven development and the services that naturally extend from that.

Naturally, the best way to do that is to provide relevant search results. But beyond that, I believe that building an active community around the site via tools, widgets, and APIs that developers can use in their own projects will also be very important in making the site a useful resource. Koders is for coders and developers.

I’ll be relying on your feedback regarding the site’s usability and how well it helps you to be more productive to help me do my job. In other words, I’m going to take my lazy butt and try and ride your coattails in order to do my job well. Is that genius or what? ;)

One thing I really like about the site so far is the project browser. Check out the browser for the MbUnit project. Wouldn’t it be nice to integrate that into your project homepage, your CruiseControl.NET build, or even replace the CodePlex code browser with that? (hint hint, CodePlex).

In any case, wish me luck. This is probably the most difficult job change ever for me since it’s not just a job that I’m leaving, and not just a job that I’m joining.

One funny part of this I won’t tell you yet. But you’ll laugh when you hear the name we chose for our son, which we chose before all this happened.

comments edit

If you downloaded Subtext last night and tried to edit keywords in the admin section, you might have run into a syntax error. I fixed the download, so if the file you downloaded is named

(notice the all caps “INSTALL”) you have nothing to worry about.

If you downloaded

Then you might want to replace the EditKeywords.aspx file in the Admin folder with this one.

My apologies. I thought I had tested every page in the admin before releasing, but it was late and I must have missed this one. I never use that page day to day so my dogfooding attempts surely missed it.

code, tdd comments edit

If you’ve worked with unit test frameworks like NUnit or MbUnit for a while, you are probably all too familiar with the set of assertion methods that come built into these frameworks. For example:

Assert.AreEqual(expected, actual);
Assert.Between(actual, left, right);
Assert.Greater(value1, value2);
Assert.IsAssignableFrom(expectedType, actualType);
// and so on...

While the list of methods on the Assert class is impressive, it leaves much to be desired. For example, I needed to assert that a string value was a member of an array. Here’s the test I wrote.

public void CanFindRole()
{
  string[] roles = Roles.GetRolesForUser("pikachu");
  bool found = false;
  foreach (string role in roles)
  {
    if (role == "Pokemon")
    {
      found = true;
      break;
    }
  }
  Assert.IsTrue(found);
}

Ok, so that’s not all that terrible (and yes, I could write my own array contains method, but bear with me). But still, if only there was a better way to do this.

Well I obviously wouldn’t be writing about this if there wasn’t. It turns out that MbUnit has a rich collection of specialized assertion classes that help handle the grunt work of writing unit tests. These classes aren’t as well known as the straightforward Assert class.

As an example, here is the previous test rewritten using the CollectionAssert class.

public void CanFindRole()
{
  string[] roles = Roles.GetRolesForUser("pikachu");
  CollectionAssert.Contains(roles, "Pokemon");
}

How much cleaner is that? CollectionAssert has many useful assertion methods. Here’s a small sampling.

CollectionAssert.DoesNotContain(collection, actual);
CollectionAssert.IsSubsetOf(subset, superset);

Here is a list of some of the other useful specialized assert classes.

  • CompilerAssert - Allows you to compile source code
  • ArrayAssert - Methods to compare two arrays
  • ControlAssert - Tons of methods for comparing Windows controls
  • DataAssert - Methods for comparing data sets and the like
  • FileAssert - Compare files and assert existence
  • GenericAssert - Compare generic collections
  • ReflectionAssert - Lots of methods for using reflection to compare types, etc…
  • SecurityAssert - Assert security properties such as whether the user is authenticated
  • StringAssert - String specific assertions
  • SerialAssert - Assertions for serialization
  • WebAssert - Assertions for Web Controls
  • XmlAssert - XML assertions

Unfortunately, the MbUnit wiki is sparse on documentation for these classes (volunteers are always welcome to flesh out the docs!). But the methods are very well named and using Intellisense, it is quite easy to figure out what each method of these classes does.

Using these specialized assertion classes can dramatically cut down the amount of boilerplate test code you write to test your methods.

Keep in mind that if you need the option to port your tests to NUnit in the future (not sure why you’d want to once you have a taste of MbUnit), you are better off sticking with the Assert class, as it has parity with the NUnit implementation. These specialized assertion classes are specific to MbUnit (and one good reason to choose MbUnit for your unit testing needs).

comments edit

Not too long ago I mentioned that a power surge bricked the Subtext Build Server. What followed was a comedy of errors on my part in trying to get this sucker back to life. Let my sleep deprived misadventures be a cautionary tale for you.

My first assumption was that the hard drive failed, so I ordered a new Hard Drive.

Lesson #1: If you think your hard drive has failed, it might not be a bad idea to actually test it if you can. Don’t just order a new one!

I have my main desktop machine I could have used to test the drive, but due to my sheer and immense laziness, I didn’t just pop the drive in there as a secondary drive to test it out. I just ordered the drive and moved on to other tasks.

Days later, the drive arrived and I popped it in and started to install Ubuntu on the machine. As I got to the disk partitioning part, I noticed that it found a disk, so I went ahead and formatted the drive and installed Ubuntu. Sweet! But when I rebooted, the server could not find the drive. Huh?

The Scream - Edvard Munch

Lesson #2: When installing an operating system on a machine, make sure to unplug any external USB or FireWire drives.

Yep, I formatted my external hard drive and installed Ubuntu on that. The Ubuntu installation process recognized my firewire drive and offered that as an available drive to partition and install. Ouch!

At this point, I realized that the machine was not detecting my brand new hard drive, though I could hear the hard drive spin up when I powered on the machine. I figure that quite possibly it’s a problem with the SATA cable. So I order a new one.

Lesson #3: In the spirit of lesson 1, why not just temporarily borrow a SATA cable from another machine, if you have one?

I thought the SATA cables were all inaccessible and would be a pain to pull, but didn’t bother to check. It was in fact easy to grab one. In my defense, I figured having extra SATA cables on hand wouldn’t be a bad idea anyway, and they are cheap.

So I plugged the SATA cable that I knew to be good into the box, and still it wouldn’t recognize the hard drive. At this point it seems pretty clear to me that the drive controller on the motherboard is fried. Any suggestions on how to fix this are welcome, if it is even possible.

In any case, after a good night of sleep, I started doing the right thing. I plugged the old drive into my desktop and, sure enough, I could copy all its files onto my main machine.

I installed VMWare server and the build server is now up and running on my main desktop for the time being. Woohoo!

As a side note, I tried to use this VMDK (VMWare) to VHD (Virtual PC) Converter (registration required) so I wouldn’t have to install VMWare Server on my machine, but it didn’t seem to work. Has anyone had good luck converting a VMWare hard disk into a Virtual PC hard disk?

Long story short, do not under any circumstances let me anywhere near your hardware. At least the build server is back up and working fine. I’m exhausted. Good night.

comments edit

My friend Scott Hanselman is on a mission to raise $50,000 and then some for the American Diabetes Association to help fund the search for a cure.

Team Hanselman Fight Diabetes

If you don’t know Scott, you should definitely subscribe to his blog.

His blog has a wealth of information on software development, diabetes, .NET, and other topics.

He’s given much to the community over the years through his blog, podcast, Open Source projects, etc….

Interesting little tidbit: When I first started blogging, I set a goal for myself. Setting a goal was my way of taking blogging more seriously. I surveyed the blog landscape and said to myself, “Hmmm, this Hanselman dude has a pretty popular blog. I want my blog to be as well known as that guy’s someday,” not realizing at the time just how big a target I had chosen. His blog was an inspiration for mine, and many others I’m sure.

I’m far from reaching that goal, but along the way I have become friends with Scott (as well as a friendly rival with our competing open source projects, though we cross-contribute more than we compete). This past Mix07 I finally met the guy in person and he’s a laugh a minute.

Give now and have your contribution matched 7 times! Scott and his coworker Brian Hewitt have started a 48 hour blog matching challenge. From Wednesday May 9th at Noon PST through Friday, May 11 at Noon PST, contributions will be matched by seven bloggers, myself included.

Here’s the donation page.

comments edit

I am a total noob when it comes to working with Linux. The only experience I have with Unix is in college when I used to pipe the manual of trn via the write command to unsuspecting classmates. If I remember correctly, this is how you do it.

man trn | write username

This would fill the user’s screen with a bunch of text. Always good for a laugh.

Normally, the write command informs the receiver who originated the message. I remember there was some way to hide who I was when sending the message, but I have since forgotten that trick. Good times!

As usual, I digress.

I recently decided to try out Ubuntu to see what all the fuss was about. My notes here apply to Virtual PC 2007.

Downloading Ubuntu and Setting up the VPC

To start, download the iso image from the Ubuntu download site. I downloaded the 7.04 version first since I assumed the bigger the version number, the better, right? We’ll see that this isn’t always the case.

For completeness, I also installed 6.06.

When creating a new Virtual PC, make sure to bump up the RAM to at least 256 MB. Also make sure there is enough disk space; I tried to skimp and had a problem with the install. If in doubt, use the default value for disk space.

Installing Ubuntu

After creating a new Virtual PC machine, select the CD menu and then Capture ISO Image and browse for the iso image you downloaded.

Virtual PC Capture ISO Image

When the Ubuntu menu comes up, make sure to select Start Ubuntu in Safe Graphics Mode. I’ll explain why later.

Ubuntu startup

At this point, Ubuntu boots up and if you’re a total noob like me, you might think “Wow! That was a fast install!”.

It turns out that this is Ubuntu running off the CD. I must’ve been tired at the time because this confounded me for a good while, as every time I rebooted, I lost all the progress I was making. ;) The next step is to really perform the install.

The Mouse Capture Issue

If you’re running Ubuntu 7.04, you might run into an issue where you can’t use the mouse in the VPC. This is due to a bug in some Linux distros where it cannot find PS/2 mice, which is the type that VPC emulates.

This post has a workaround for dealing with this issue by using the keyboard until these distros are fixed. Heck, this might be a great feature of 7.04 since it forces you to go commando and learn the keyboard shortcuts.

Ubuntu 6.06 does not suffer from this problem, so it may be a better starting point if you and the mouse have a great rapport.

The Real Install

At this point, you are ready to start the install. From the top level System menu, select Administration | Install.

Starting the install

The installation process asks you a few simple questions and doesn’t take too long.

The Bit Depth Issue

Earlier I mentioned making sure to start Ubuntu in Safe Graphics Mode. The reason for this is that the default bit depth property for Ubuntu is 24, which Virtual PC does not support. If you fail to heed this advice, you’ll see something like this. Kind of looks like that All Your Base Are Belong to Us video.

Ubuntu without the proper bit depth

Fortunately, I found the fix in this post on Phil Scott’s blog (Phils rule!). I bolded the essential elements.

Once I was in there, I found the configuration file for the graphics card in /etc/X11. So type in cd /etc/X11, although I certainly hope even the most hardened of MS-centric people can figure that out :). Once in there I opened up xorg.conf using pico (so type in pico xorg.conf - isn’t this fun?). Browse down to the screen section. Oops, looks like the DefaultDepth property is 24, which Virtual PC doesn’t support. I changed this to 16 and hit CTRL-X to exit (saving when prompted of course). Typed in reboot and awaaaaaaay we go.

When I ran through these steps, I found that I had to use the sudo command (runs the command as a super user) first. For example:

sudo pico xorg.conf

Your results may vary.

Virtual Machine Additions for Linux

At this point, you’ll probably want to install the Virtual Machine Additions. Unfortunately, the additions only work for Windows and OS/2 guest operating systems.

However, you can go to the Connect website and download Virtual Machine Additions for Linux. It took me a while to find the actual download link because various blog posts only mentioned the Connect site and not the actual location.

Ubuntu isn’t listed in the list of supported distributions. I’ll let you know if it works for Ubuntu.

Now What?

So now I have Ubuntu running in a virtual machine. It comes with OpenOffice, Firefox, etc. preinstalled. My next step is to install Mono and MonoDevelop and start tinkering around. Any suggestions on what else I should check out?

UPDATE: Perhaps I should use VMWare 6 instead since it supports multi-monitor in a virtual machine. That’s hot!

code, tdd comments edit

Although I am a big fan of Rhino Mocks, I typically favor State-Based over Interaction-Based unit testing, though I am not totally against Interaction Based testing.

I often use Rhino Mocks to dynamically create Dummy objects and Fake objects rather than true Mocks, based on this definition given by Martin Fowler.

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ’sent’, or maybe only how many messages it ’sent’.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.
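To make Fowler’s distinctions concrete, here’s a hand-rolled example in plain C# with no mocking framework involved. The IMessageGateway interface and its implementation are hypothetical, invented purely for illustration: a stub that provides a canned (no-op) answer but records how many messages it “sent”, much like Fowler’s email gateway example.

```csharp
//Hypothetical interface, invented for illustration.
public interface IMessageGateway
{
    void Send(string message);
}

//A stub: gives a canned (no-op) answer to Send, but records
//information about calls - here, how many messages it "sent".
public class MessageGatewayStub : IMessageGateway
{
    public int SentCount;

    public void Send(string message)
    {
        SentCount++;
    }
}
```

A test would pass the stub to the class under test, exercise it, and then assert on SentCount.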

Fortunately Rhino Mocks is well suited to this purpose. For example, you can dynamically add a PropertyBehavior to a mock, which generates a backing member for a property. If that doesn’t make sense, let’s let the code do the talking.

Here we have a very simple interface. In the real world, imagine there are a lot of properties.

public interface IAnimal
{
  int Legs { get; set; }
}

Next, we have a simple class we want to test that interacts with IAnimal instances. This is a contrived example.

public class SomeClass
{
  private IAnimal animal;

  public SomeClass(IAnimal animal)
  {
    this.animal = animal;
  }

  public void SetLegs(int count)
  {
    this.animal.Legs = count;
  }
}

Finally, let’s write our unit test.

public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  //Creates an IAnimal stub
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  //Makes the Legs property actually work, creating a fake.
  SetupResult.For(animalMock.Legs).PropertyBehavior();
  mocks.ReplayAll();
  animalMock.Legs = 0;
  Assert.AreEqual(0, animalMock.Legs);
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
}

Keep in mind here that I did not need to stub out a test class that inherits from IAnimal. Instead, I let Rhino Mocks dynamically create one for me. The PropertyBehavior call modifies the mock so that the Legs property exhibits property behavior. Behind the scenes, it’s generating something like this:

public int Legs
{
  get { return this.legs; }
  set { this.legs = value; }
}
int legs;

At this point, you might wonder what the point of this is. Why not just create a test class that implements the IAnimal interface? It isn’t that many more lines of code.

Now we get to the meat of this post. Suppose the interface was more realistic and looked like this:

public interface IAnimal
{
  int Legs { get; set; }
  int Eyes { get; set; }
  string Name { get; set; }
  string Species { get; set; }
  //... and so on
}

Now you have a lot of work to do to implement this interface just for a unit test. At this point, some readers might be squirming in their seats ready to jump out and say, “Aha! That’s what ReSharper|CodeSmith|Etc… can do for you!”

Fair enough. And in fact, the code to add the PropertyBehavior to each property of the IAnimal mock starts to get a bit cumbersome in this situation too. Let’s look at what that would look like.

SetupResult.For(animalMock.Legs).PropertyBehavior();
SetupResult.For(animalMock.Eyes).PropertyBehavior();
SetupResult.For(animalMock.Name).PropertyBehavior();
SetupResult.For(animalMock.Species).PropertyBehavior();

Still a lot less code to maintain than implementing each of the properties of the interface. But not very pretty. So I wrote up a quick utility method for adding the PropertyBehavior to every property of a mock.

/// <summary>
/// Sets all public read/write properties to have a 
/// property behavior when using Rhino Mocks.
/// </summary>
/// <param name="mock">The mock to modify.</param>
public static void SetPropertyBehaviorOnAllProperties(object mock)
{
  PropertyInfo[] properties = mock.GetType().GetProperties();
  foreach (PropertyInfo property in properties)
  {
    if (property.CanRead && property.CanWrite)
    {
      //Calling the getter records a call on the mock...
      property.GetValue(mock, null);
      //...and this attaches the property behavior to that call.
      LastCall.On(mock).PropertyBehavior();
    }
  }
}
Using this method, this approach now has a lot of advantages over explicitly implementing the interface. Here’s the test again, now also exercising another property.

public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  //Creates an IAnimal stub
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  SetPropertyBehaviorOnAllProperties(animalMock);
  mocks.ReplayAll();
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
  animalMock.Eyes = 2;
  Assert.AreEqual(2, animalMock.Eyes);
}

Be warned, I didn’t test this with indexed properties. It only applies to public read/write properties.

Hopefully I can convince Ayende to include something like this in a future version of Rhino Mocks.

comments edit

Thought I’d post a few pics from mix with some notes. Click on any for a larger view.

Phil, Jeff, and Jon

This first one is of the three amigos, not to mention coauthors. That is me on the left sporting a Subtext shirt, Jeff Atwood in the middle, complete with CodingHorror sticker, and Jon Galloway on the right.

Scott Hanselman and Rob

That’s Scott Hanselman (who runs that other .NET open source blog engine) on the left and Rob Conery (of Subsonic fame) on the right. The joke here is that Scott is standing on some stairs because Rob Conery is a giant.

ScottGu and Miguel

Sometimes, the best parts of conferences are outside of the sessions. A few of us were sitting around having drinks when we spotted Scott Guthrie walking by. Not content to just let him be on his merry way, as that would be the polite thing to do, we called him over and he proceeded to regale us with stories and walked us through some of the namespaces and such of Silverlight.

ScottGu, as he is known, is a total class act and I was happy to finally meet him in person.


Here I am regaling Sam Ramji with an obviously hilarious joke. The picture is not, you know, staged whatsoever. No, not at all.

Sam is Director of Platform Technology Strategy and runs the Open Source Software Lab at Microsoft. Didn’t know there was someone in charge of Open Source at Microsoft? Neither did I until meeting him. A few of us had his ear during a dinner. Hopefully we’ll see some interesting things come out of it.

Tantek and me

Tantek Çelik was walking by and noticed my XKCD t-shirt, which sports a Unix joke, and had to take a picture. Not many people got the joke.


John Lam prepares for the Dynamic Language Runtime session with Jim Hugunin. This was one of my favorite sessions. One of the demonstrations was an application that allowed them to evaluate script dynamically using Silverlight. The neat part was they could switch languages, for example from Ruby to Python, and still evaluate properties of objects they had declared in the previous language. Hot!

I got a chance to hang out with John more at Pure and really enjoyed his perspective on Microsoft and child rearing.

Jeff, Phil, and Miguel

Jeff, Jon, and I intently watch as Miguel de Icaza gives us a demo of Mono. The rotating cube desktop is pretty sweet.


Jeff cannot conceal his Man Crush on Miguel.


Scott Stanfield, CEO of Vertigo, on the right playing Guitar Hero, the addict. Tonight he and I cleaned up at Spanish 21, winning over $300 each. This surprised the guy at the craps table who informed us that Spanish 21 is a terrible game in terms of odds. But aren’t they all?

comments edit

Just a couple of notes while I have a break during the conference. I’ll try to find some time to write about my impressions of the technologies when I’ve had time to reflect.

In the meanwhile, allow me to tell a story about the Italia soccer jersey I wore on Sunday. It was a gift from a friend and I figured it fit the theme of staying at the Venetian. Get it? Italy!?

On Sunday, when Jon arrived in L.A. from SD, we went to brunch with my wife before leaving for Las Vegas. We decided to go to a nice French brunch place, La Dijonaise. Already some of you must see the conflict brewing.

Here I am, walking into a French restaurant wearing an Italian soccer jersey. The guy at the door took one look at me and told me, in a deeply French accent, “No no no. You cannot come in here.”

Eric Kemp, Miguel De Icaza, Jon Galloway, John Osborn

I figured he was joking, but it took me a moment to realize why this guy I had never met was joking with me, as he pointed to my shirt. Silly me.

comments edit

Yesterday, while hanging out in the so-called “BlogZone”, Tim Heuer pulled me aside for a short audio interview on the topic of Subtext and Open Source, two things I love to talk about, and good luck getting me to shut up once you get me started. ;)

This was a surprise for me, as the last time I was interviewed was by a reporter for my college paper after my soccer team used the school paper to dry windows for a fundraising car wash. I told the reporter that the paper was good for drying windows because it doesn’t leave streaks. I was merely relaying what someone told me when they went to grab the papers, but my teammates all congratulated me for sticking it to the paper. Funny how that works out sometimes.

Back to the present: I cringed while listening to the interview, as I learned I’m much less eloquent than I hoped I would be in such a situation. Apparently I suffer from the “You Know” disease that Atwood suffers from. This is simply due to my nervousness at being interviewed, along with the fact that we were in a very noisy room surrounded by a lot of distractions (yes, this is me making excuses).

Not only that, there’s a point in the interview where I seem to lose focus and stammer. That’s because Scott Hanselman was calling me and I wasn’t sure whether to stop and give him directions to the BlogZone or continue. As you can hear, I continue and he found it just fine.

Unfortunately, there’s a lot more I would’ve liked to have said. Upon being asked about whether the community has chipped into Subtext, I started off with the example of recent commits related to the build server and mentioned a couple of people. I was just getting warmed up and didn’t get a chance to mention many others who have contributed. I apologize, but the interview probably would’ve gone on for hours if I had the proper time to express my appreciation to the Subtext developers and community.

The lesson learned for me is to slow down, take a deep breath, and don’t be afraid to take a moment to collect my thoughts. Don’t be afraid of dead air when speaking publicly.

In any case, Tim, I enjoyed being interviewed. I personally think you have a talent for it and would have done a much better job than the painful interview we were subjected to during the keynote. Seriously, they should’ve had you up there asking Ray and Scott questions.

In case you didn’t know, Tim contributed what is probably the most popular skin to Subtext, Origami.

comments edit

Well, Jon and I arrived safely, driving into Vegas around 4 PM yesterday evening. Upon arriving, we met up with Miguel de Icaza, the founder of the Mono project, and headed over to the Mashup Lounge, where we ran into John Osborne, a senior editor with O’Reilly.

Being the small world that it is, John was a reviewer for the Windows Developer Power Tools book and happened to review the section I wrote on Tortoise CVS/SVN.

We were joined by Eric Kemp, one of the members of the SubSonic team, and a fun conversation on Open Source, Mono, politics, etc. ensued.

Later on in the evening we headed over to the BlogZone, a suite in the Venetian towers with a couple of Xboxes, food, and drinks. We were later joined by Jeff Atwood, Scott Hanselman, Clemens Vasters, and Steve Maine, and a deadly game of Guitar Hero ensued.

Keynote is about to start, will write more later.

comments edit

If you’ve read my blog at all, you know I’m a big proponent of Continuous Integration (CI). For the Subtext project, we use CruiseControl.NET. I’ve written about our build process in the past.

Given the usefulness of having a build server, you can understand my frustration and sadness when our build server recently took a dive. I bought a replacement hard drive, but it was the wrong kind (a rookie mistake on my part, accidentally getting an IDE drive rather than SATA).

Members of the Subtext team such as Simo, myself, and Scott Dorman have put countless hours into perfecting the build server. If only we had CI Factory in our toolbelt before we started.

CI Factory is just that: a factory for creating CruiseControl.NET scripts. Scott Hanselman calls it a Continuous Integration accelerator. It bundles just about everything you need for a complete CI setup, such as CCNET, NUnit or MbUnit, NCover, etc.

In the latest dnrTV episode, Jay Flowers, the creator of CI Factory, joins hosts Scott Hanselman and Carl Franklin to create a Continuous Integration setup using CI Factory in around an hour.

The project they chose to use as a demonstration is none other than Subtext! Given the number of hours we’ve put into setting up the Subtext build server, this is quite an ambitious undertaking, especially while being recorded.

Can you imagine having to write code while two guys provide color commentary? I’d probably wilt under that pressure, but Jay handles it with aplomb.

The video runs a bit long, but is worth watching if you plan to set up CI for your own project. The amount of XML configuration with CI Factory might seem daunting at first, but trust me when I say that it’s much worse for CCNET by itself. CI Factory reduces the amount of configuration by a lot, and Jay is constantly making it easier and easier to set up.
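
For a sense of what CI Factory saves you from writing, here is a minimal hand-rolled ccnet.config project block of the kind it generates and manages for you. The repository URL and paths below are placeholders for illustration, not Subtext’s actual settings:

```xml
<cruisecontrol>
  <project name="Subtext">
    <!-- Poll source control every 60 seconds for new commits. -->
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/subtext/trunk</trunkUrl>
      <workingDirectory>C:\Builds\Subtext</workingDirectory>
    </sourcecontrol>
    <tasks>
      <!-- Compile the solution; in a full setup, test, coverage,
           and packaging tasks would follow here. -->
      <msbuild>
        <projectFile>Subtext.sln</projectFile>
        <targets>Build</targets>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>
```

Multiply that by unit tests, coverage, reporting, and packaging, and the appeal of having a factory generate and maintain it becomes obvious.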

As an aside, Jay Flowers scores big points with me for also being a member of the MbUnit team, my favorite unit testing framework. Kudos to Jay, Scott, and Carl for a great show.

comments edit

Charles Petzold makes the following lament in response to Jeff Atwood’s review of two WPF books, one of them being Petzold’s.

I’ve been mulling over Coding Horror’s analysis of two WPF books, not really thrilled about it, of course. The gist of it is that modern programming books should have color, bullet points, boxes, color, snippets, pictures, color, scannability, and color.

Does that remind you of anything?

Apparently the battle for the future of written communication is over. Prose is dead. PowerPoint has won.

With all due respect to Mr. Petzold, and he certainly deserves much respect, I think the comparison to PowerPoint is unfair and really misses the point.

Since when is technical writing prose?

Well, it often does meet one of the definitions of prose:

1. the ordinary form of spoken or written language, without metrical structure, as distinguished from poetry or verse.
2. matter-of-fact, commonplace, or dull expression, quality, discourse, etc.

Using that definition, I fail to see how the death of dull and commonplace expression signals a loss for the future of written communication. If anything, it’s a step in the right direction.

Technical writing is supposed to teach and help readers learn and retain information. Having visual aids not only helps cement the information in your mind, but also aids in finding that information when you need to look it up again.

Long passages of unbroken prose are great for getting lost in mental imagery when reading a novel, but they suck for recall. Prose is alive and well in its proper place. Save the lengthy prose for the next great work of fiction, but cater to how the brain works when writing something meant to be absorbed, learned, and remembered.

I think the Head First series really gets it when it comes to how the mind works and learns. From the introduction to Head First Design Patterns:

Your brain craves novelty. It’s always searching, scanning, waiting for something unusual. It was built that way, and it helps you stay alive.

Today, you’re less likely to be a tiger snack. But your brain’s still looking. You just never know.

So what does your brain do with all the routine, ordinary, normal things you encounter? Everything it can to stop them from interfering with the brain’s real job—recording things that matter. It doesn’t bother saving the boring things; they never make it past the “this is obviously not important” filter.

In a subsequent section, the book describes the Head First learning principles, a couple of which I quote below. I highly recommend reading this entire intro the next time you are in the bookstore.

Make it visual. Images are far more memorable than words alone, and make learning much more effective (up to 89% improvement in recall and transfer studies). It also makes things more understandable. Put the words within or near the graphics they relate to, rather than on the bottom or on another page, and learners will be up to twice as likely to solve problems related to the content.

Use a conversational and personalized style. In recent studies, students performed up to 40% better on post-learning tests if the content spoke directly to the reader, using a first-person conversational style rather than taking a formal tone.

What we see here is that study after study shows that appropriate use of images and graphics improves recall. Not only that, but a casual tone, like that found in a blog, also helps recall.

Unfortunately, Petzold draws an unfair analogy between Adam Nathan’s WPF book and PowerPoint. We’ve all heard that PowerPoint is evil, but the evil is in how users misuse PowerPoint, not PowerPoint itself. PowerPoint certainly makes it easy to go to the extreme with noisy graphics resulting in garish crowded presentations.

It’s this proliferation of PowerPoint presentations that favor graphics to the detriment of the content that leads to the disdain towards PowerPoint. But it is also possible to create sublime presentations with PowerPoint with just the right amount of graphics.

Even Tufte would acknowledge that getting rid of graphics and bullet points completely is also extreme in the opposite direction and works against the real goal, to convey information in a manner that the audience can understand and retain it.

Drawing a comparison between Nathan’s book and PowerPoint suggests that Nathan’s book is all fluff and flash. But based on reading sample chapters, that is hardly the case. As Jeff wrote, the graphics, colors, and bullets are all used judiciously and appropriately. This isn’t a case of Las Vegas trying to pretend it is Florence. There’s real substance here.

code, tech, blogging comments edit

Several people have asked me recently about the nice code syntax highlighting on my blog. For example:

public string Test()
{
  //Look at the pretty colors
  return "Yay!";
}

A long time ago, I wrote about using the Manoli code formatter for converting code to HTML.

But these days, I use Omar Shahine’s Insert Code for Windows Live Writer plugin for, you guessed it, Windows Live Writer. This plugin just happens to use the Manoli code to perform syntax highlighting.


I recommend downloading and referencing the CSS stylesheet from the Manoli site and making sure to uncheck the Embed StyleSheet option in the plugin.
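
To see why the external stylesheet matters: the plugin emits markup along these lines, where the CSS class names (csharpcode, kwrd, str, rem) are the hooks the Manoli stylesheet targets. If you embed the stylesheet instead, every post carries its own copy of those styles:

```html
<pre class="csharpcode">
<span class="kwrd">public</span> <span class="kwrd">string</span> Test()
{
    <span class="rem">//Look at the pretty colors</span>
    <span class="kwrd">return</span> <span class="str">"Yay!"</span>;
}
</pre>
```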

The dropshadow around the code is some CSS I found on the net.

comments edit

UPDATE: This functionality is now rolled into the latest version of MbUnit.

A long time ago, Patrick Cauldwell wrote up a technique for managing external files within unit tests by embedding them as resources and unpacking the resources during the unit test. This is a powerful technique for making unit tests self-contained.

If you look in our unit tests for Subtext, I took this approach to heart, writing several different methods in our UnitTestHelper class for extracting embedded resources.
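
The heart of that helper is only a few lines around Assembly.GetManifestResourceStream. A rough sketch of the idea follows; the class, method, and resource names are illustrative, not the actual UnitTestHelper code:

```csharp
using System;
using System.IO;
using System.Reflection;

public static class ResourceExtractionSketch
{
    // Unpacks an embedded resource to a file so a test can use it.
    public static void ExtractToFile(string resourceName, string destination)
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        using (Stream stream = assembly.GetManifestResourceStream(resourceName))
        {
            if (stream == null)
                throw new InvalidOperationException("No such resource: " + resourceName);

            using (FileStream file = File.Create(destination))
            {
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                    file.Write(buffer, 0, bytesRead);
            }
        }
    }
}
```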

Last night, I had the idea to make the code cleaner and even easier to use by implementing a custom test decorator attribute for my favorite unit testing framework, MbUnit.

Usage Examples

The following code snippets demonstrate the usage of the attribute within a unit test. These code samples assume an embedded resource already exists in the same assembly in which the unit test itself is defined.

This first test demonstrates how to extract the resource to a specific file. You can specify a full destination path, or a path relative to the current directory.

[ExtractResource("Embedded.Resource.Name.txt", "TestResource.txt")]
public void CanExtractResourceToFile()
{
  // By this point, the resource has been extracted to TestResource.txt.
}

The next demonstrates how to extract the resource to a stream rather than a file.

[ExtractResource("Embedded.Resource.Name.txt")]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using(StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

As demonstrated in the previous example, you can access the stream via the static ExtractResourceAttribute.Stream property. This is only set if you don’t specify a destination.

In case you’re wondering, the stream is stored in a static member marked with the [ThreadStatic] attribute. That way, if you are taking advantage of MbUnit’s ability to repeat a test multiple times using multiple threads, you should be OK.
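
For the curious, the pattern inside the attribute looks roughly like this (simplified sketch; the real attribute has more going on):

```csharp
using System;
using System.IO;

public class ExtractResourceAttributeSketch : Attribute
{
    // [ThreadStatic] gives each thread its own copy of this field, so tests
    // repeated concurrently on multiple threads don't share a stream.
    [ThreadStatic]
    private static Stream stream;

    public static Stream Stream
    {
        get { return stream; }
    }
}
```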

What if the resource is embedded in an assembly other than the one you are testing?

Not to worry. You can specify a type (any type) defined in the assembly that contains the embedded resource like so:

[ExtractResource("Embedded.Resource.Name.txt"
  , "TestResource.txt"
  , ResourceCleanup.DeleteAfterTest
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResource()
{
  // TestResource.txt is extracted before the test and deleted afterward.
}

[ExtractResource("Embedded.Resource.Name.txt"
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using (StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

This attribute should go a long way toward making unit tests that use external files cleaner. It also demonstrates how easy it is to extend MbUnit.

A big thank you goes to Jay Flowers for his help with this code. And before I forget, you can download the code for this custom test decorator here.

Please note that I left in my unit tests for the attribute which will fail unless you change the embedded resource name to match an embedded resource in your own assembly.

comments edit

Take a good look at this picture.


That there is pretty much my Shuttle machine today, metaphorically speaking of course.

We had a brief power outage today which appears to have fried just my hard drive, if I’m lucky. This machine was hosting our build server within a VMWare virtual machine.

Fortunately, my main machine was not affected by the outage because it is connected to a UPS.

The real loss is all the time it will take me to get the build server up and running again. Not to mention we were planning an imminent release and rely on our build server to automatically prepare a release. I hate manual work.

comments edit

Before I begin, I should clarify what I mean by using a database as an API integration point.

In another life in a distant galaxy far far away, I worked on a project in which we needed to integrate a partner’s system with our system. The method of integration required that when a particular event occurred, they would write some data to a particular table in our database, which would then fire a trigger to perform whatever actions were necessary on our side (vague enough for ya?).

In this case, the data model and the related stored procedures made up the API used by the partner to integrate into our system.

So what’s the problem?

I always felt this was ugly in a few ways; I’m sure you’ll think of more.

  1. First, we have to make our database directly accessible to a third party, exposing ourselves to all the security risk that entails.
  2. We’re not really free to make schema changes as we have no abstraction layer between the database and any clients to the system.
  3. How exactly do you define a contract in SQL? With Web Services, you have XSD. With code, you have interfaces.

Personally, I’d like to have some sort of abstraction layer for my integration points so that I am free to change the underlying implementation.
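
To make the abstraction-layer point concrete, here is a hypothetical sketch (every name here is invented for illustration) of what a contract looks like when it lives in code rather than in a table schema:

```csharp
using System;

// The partner programs against this contract (or a web service exposing it),
// never against our tables. We stay free to change the schema underneath.
public interface IPartnerEventService
{
    // Records an event from the partner's system. Whether this lands in a
    // table, fires a trigger, or drops onto a queue is our implementation detail.
    void RecordEvent(string partnerId, string eventType, DateTime occurredAtUtc);
}
```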

Why am I bringing this up?

A little while ago, I was having a chat with a member of the Subtext team, telling him about the custom MembershipProvider we’re implementing for Subtext 2.0 to fit in with our data model. His initial reaction was that developer-users are going to grumble that we’re not using the “Standard” Membership Provider.

The “Standard”?

I question this notion of “The Standard Membership Provider”. Which provider is the standard? Is it the ActiveDirectoryMembershipProvider?

It is in anticipation of developer grumblings that I write this post to plead my case and perhaps rail against the wind.

The point of the Provider Model

You see, the whole point of the Provider Model is lost if you require a specific data model. The provider model exists precisely to provide an abstraction over the underlying physical data store.

For example, Rob Howard, one of the authors of the Provider Pattern wrote this in the second part of his introduction to the Provider Pattern (emphasis mine).

A point brought up in the previous article discussed the conundrum the ASP.NET team faced while building the Personalization system used for ASP.NET 2.0. The problem was choosing the right data model: standard SQL tables versus a schema approach. Someone pointed out that the provider pattern doesn’t solve this, which is 100% correct. What it does allow is the flexibility to choose which data model makes the most sense for your organization. An important note about the pattern: it doesn’t solve how you store your data, but it does abstract that decision out of your programming interface.

What Rob and Microsoft realized is that no one data model fits all. Many applications will already have a data model for storing users and roles.

The idea is that if you write code and controls against the provider API, the underlying data model doesn’t matter. This is emphasized by the goals of the provider model according to the MSDN introduction…

The ASP.NET 2.0 provider model was designed with the following goals in mind:

  • To make ASP.NET state storage both flexible and extensible
  • To insulate application-level code and code in the ASP.NET run-time from the physical storage media where state is stored, and to isolate the changes required to use alternative media types to a single well-defined layer with minimal surface area
  • To make writing custom providers as simple as possible by providing a robust and well-documented set of base classes from which developers can derive provider classes of their own

It is expected that developers who wish to pair ASP.NET 2.0 with data sources for which off-the-shelf providers are not available can, with a reasonable amount of effort, write custom providers to do the job.

Of course, Microsoft made it easy for all of us developers by shipping a full featured SqlMembershipProvider complete with database schema and stored procedures. When building a new implementation from scratch, it makes a lot of sense to use this implementation. If your needs fit within the implementation, then that is a lot of work that you don’t have to do.

Unfortunately, many developers took it as gospel truth and a standard for how the data model should be implemented. It is really only one possible database implementation of a Membership Provider.

An Example Gone Wrong

There is one particular open source application that I recall that already had a fantastic user and roles implementation at the time the Membership Provider Model was released. Their existing implementation was, in all respects, a superset of the features of the Membership Provider.

Naturally there was a lot of pressure to implement the Membership Provider API, so they chose to simply implement the SqlMembershipProvider’s tables side by side with their own user tables.

Stepping through the code in a debugger one day, I watched in disbelief when, upon logging in as a user, the code started copying all users from the SqlMembershipProvider’s stock aspnet_* tables to the application’s internal user tables and vice versa. They were essentially keeping two separate user databases in sync on every login.

In my view, this was the wrong approach to take. It would’ve been much better to simply implement a custom MembershipProvider class that read from and wrote to their existing user database tables.
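
Such a provider only has to map the Membership API onto the tables that already exist. Below is a heavily abbreviated sketch, with the data-access helpers invented for illustration; a real MembershipProvider subclass must override many more members:

```csharp
using System.Web.Security;

public class ExistingSchemaMembershipProvider : MembershipProvider
{
    // Validate against the application's own user tables rather than
    // the stock aspnet_* tables.
    public override bool ValidateUser(string username, string password)
    {
        // GetPasswordHash and ComputeHash are hypothetical helpers over
        // the existing schema.
        string storedHash = GetPasswordHash(username);
        return storedHash != null && storedHash == ComputeHash(password);
    }

    // ...the remaining members (GetUser, CreateUser, and so on)
    // map onto the existing tables in the same fashion.
}
```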

The features of their existing users and roles implementation that the Membership Provider did not support could still have been exposed via their existing API.

Yes, I’m armchair quarterbacking at this point, as there may have been some extenuating circumstances I am not aware of. But I can’t imagine a full multi-table sync on every login being a good choice, especially for a large database of users. I’m not aware of the status of this implementation detail at this point in time.

The Big But

Someone somewhere is reading this thinking I’m being a bit overly dogmatic. They might be thinking

But, but I have three apps in my organization which communicate with each other via the database just fine. This is a workable solution for our scenario, thank you very much. You’re full of it.

I totally agree on all three counts.

For a set of internal applications within an organization, it may well make sense to integrate at the database layer, since all communications between apps occurs within the security boundary of your internal network and you have full control over the implementation details for all of the applications.

So while I still think even these apps could benefit from a well-defined API or Web Service layer as the point of integration, I don’t think you should never consider the database as a potential integration point.

But when you’re considering integration for external applications outside of your control, especially applications that haven’t even been written yet, I think the database is a really poor choice and should be avoided.

Microsoft recognized this with the Provider Model, which is why controls written for the MembershipProvider are not supposed to assume anything about the underlying data store. For example, they don’t make direct queries against the “standard” Membership tables.

Instead, when you need to integrate with a membership database, use the API.

Hopefully future users and developers of Subtext will also recognize this when we unveil the Membership features in Subtext 2.0 and keep the grumbling to a minimum. Either that or point out how full of it I am and convince me to change my mind.

See also: Where the Provider Model Falls Short.