
My friend Scott Hanselman is on a mission to raise $50,000 and then some for the American Diabetes Association to help fund the search for a cure.

[Image: Team Hanselman Fight Diabetes]

If you don’t know Scott, you should definitely subscribe to his blog.

His blog has a wealth of information on software development, diabetes, .NET, and other topics.

He’s given much to the community over the years through his blog, podcast, Open Source projects, and more.

Interesting little tidbit: when I first started blogging, I set a goal for myself. Setting a goal was my way of taking blogging more seriously. I surveyed the blog landscape and said to myself, “Hmmm, this Hanselman dude has a pretty popular blog. I want my blog to be as well known as that guy’s someday,” not realizing at the time just how big a target I had chosen. His blog was an inspiration for mine, and for many others I’m sure.

I’m far from reaching that goal, but along the way I have become friends with Scott (as well as a friendly rival with our competing open source projects, though we cross-contribute more than we compete). This past Mix07 I finally met the guy in person and he’s a laugh a minute.

Give now and have your contribution matched 7 times! Scott and his coworker Brian Hewitt have started a 48-hour blog matching challenge. From Wednesday, May 9 at noon PST through Friday, May 11 at noon PST, contributions will be matched by seven bloggers, myself included.

Here’s the donation page.


I am a total noob when it comes to working with Linux. The only experience I have with Unix is from college, when I used to pipe the manual for trn to unsuspecting classmates via the write command. If I remember correctly, this is how you do it.

man trn | write username

This would fill the user’s screen with a bunch of text. Always good for a laugh.

Normally, the write command informs the receiver who originated the message. I remember there was some way to hide who I was when sending the message, but I have since forgotten that trick. Good times!

As usual, I digress.

I recently decided to try out Ubuntu to see what all the fuss was about. My notes here apply to Virtual PC 2007.

Downloading Ubuntu and Setting up the VPC

To start, download the ISO image from the Ubuntu download site. I downloaded version 7.04 first since I assumed the bigger the version number, the better, right? As it turns out, that isn’t always the case.

For completeness, I also installed 6.06.

When creating a new Virtual PC, make sure to bump up the RAM to at least 256 MB. Also make sure there is enough disk space. I tried to skimp and had a problem with the install. If in doubt, use the default value for disk space.

Installing Ubuntu

After creating a new Virtual PC machine, select the CD menu, then Capture ISO Image, and browse for the ISO image you downloaded.

[Image: Virtual PC Capture ISO Image menu]

When the Ubuntu menu comes up, make sure to select Start Ubuntu in Safe Graphics Mode. I’ll explain why later.

[Image: Ubuntu startup screen]

At this point, Ubuntu boots up and if you’re a total noob like me, you might think “Wow! That was a fast install!”.

It turns out that this is Ubuntu running off the CD. I must’ve been tired at the time because this confounded me for a good while, as every time I rebooted, I lost all the progress I was making. ;) The next step is to really perform the install.

The Mouse Capture Issue

If you’re running Ubuntu 7.04, you might run into an issue where you can’t use the mouse in the VPC. This is due to a bug in some Linux distros where it cannot find PS/2 mice, which is the type that VPC emulates.

This post has a workaround for dealing with this issue by using the keyboard until these distros are fixed. Heck, this might be a great feature of 7.04 since it forces you to go commando and learn the keyboard shortcuts.

Ubuntu 6.06 does not suffer from this problem, so it may be a better starting point if you and the mouse have a great rapport.

The Real Install

At this point, you are ready to start the install. From the top level System menu, select Administration | Install.

[Image: Starting the install]

The installation process asks you a few simple questions and doesn’t take too long.

The Bit Depth Issue

Earlier I mentioned making sure to start Ubuntu in Safe Graphics Mode. The reason for this is that the default bit depth property for Ubuntu is 24, which Virtual PC does not support. If you fail to heed this advice, you’ll see something like this. Kind of looks like that All Your Base Are Belong to Us video.

[Image: Ubuntu without the proper display settings]

Fortunately, I found the fix in this post on Phil Scott’s blog (Phils rule!). The essential steps are in the excerpt below.

Once I was in there, I found the configuration file for the graphics card in /etc/X11. So type in cd /etc/X11, although I certainly hope even the most hardened of MS-centric people can figure that out :). Once in there I opened up xorg.conf using pico (so type in pico xorg.conf - isn’t this fun?). Browse down to the screen section. Oops, looks like the DefaultDepth property is 24, which VirtualPC doesn’t support. I changed this to 16 and hit CTRL-X to exit (saving when prompted of course). Typed in reboot and awaaaaaaay we go.

When I ran through these steps, I found that I had to use the sudo command (runs the command as a super user) first. For example:

sudo pico xorg.conf

Your results may vary. Speaking of sudo, have you seen my t-shirt from XKCD.com?
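To recap the whole fix in one place, here’s roughly the sequence of commands I ended up with. Treat it as a sketch; the exact contents of xorg.conf vary from release to release.

cd /etc/X11
sudo pico xorg.conf

# In the Section "Screen" block, find this line:
#   DefaultDepth    24
# and change it to:
#   DefaultDepth    16
# Save with CTRL-X (confirming when prompted), then reboot:
sudo reboot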

Virtual Machine Additions for Linux

At this point, you’ll probably want to install the Virtual Machine Additions. Unfortunately, the additions only work for Windows and OS/2 guest operating systems.

However, you can go to the Connect website and download Virtual Machine Additions for Linux. It took me a while to find the actual download link because various blog posts only mentioned the Connect site and not the actual location.

Ubuntu isn’t listed in the list of supported distributions. I’ll let you know if it works for Ubuntu.

Now What?

So now I have Ubuntu running in a virtual machine. It comes with Open Office, Firefox, etc… preinstalled. My next step is to install VMWare and MonoDevelop and start tinkering around. Any suggestions on what else I should check out?

UPDATE: Perhaps I should use VMWare 6 instead since it supports multi-monitor in a virtual machine. That’s hot!


Although I am a big fan of Rhino Mocks, I typically favor State-Based over Interaction-Based unit testing, though I am not totally against Interaction-Based testing.

I often use Rhino Mocks to dynamically create Dummy objects and Fake objects rather than true Mocks, based on this definition given by Martin Fowler.

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ’sent’, or maybe only how many messages it ’sent’.
  • Mocks are what we are talking about here: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

Fortunately Rhino Mocks is well suited to this purpose. For example, you can dynamically add a PropertyBehavior to a mock, which generates a backing member for a property. If that doesn’t make sense, let’s let the code do the talking.

Here we have a very simple interface. In the real world, imagine there are a lot of properties.

public interface IAnimal
{
  int Legs { get; set; }
}

Next, we have a simple class we want to test that interacts with IAnimal instances. This is a contrived example.

public class SomeClass
{
  private IAnimal animal;

  public SomeClass(IAnimal animal)
  {
    this.animal = animal;
  }

  public void SetLegs(int count)
  {
    this.animal.Legs = count;
  }
}

Finally, let’s write our unit test.

[Test]
public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  
  //Creates an IAnimal stub    
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  
  //Makes the Legs property actually work, creating a fake.
  SetupResult.For(animalMock.Legs).PropertyBehavior();
  mocks.ReplayAll();
    
  animalMock.Legs = 0;
  Assert.AreEqual(0, animalMock.Legs);
    
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
}

Keep in mind here that I did not need to stub out a test class that inherits from IAnimal. Instead, I let RhinoMocks dynamically create one for me. The SetupResult.For line modifies the mock so that the Legs property exhibits property behavior. Behind the scenes, it’s generating something like this:

public int Legs
{
  get {return this.legs;}
  set {this.legs = value;}
}
int legs;

At this point, you might wonder what the point of all this is. Why not just create a test class that implements the IAnimal interface? It isn’t that many more lines of code.

Now we get to the meat of this post. Suppose the interface was more realistic and looked like this:

public interface IAnimal
{
  int Legs { get; set; }
  int Eyes { get; set; }
  string Name { get; set; }
  string Species { get; set; }
  //... and so on
}

Now you have a lot of work to do to implement this interface just for a unit test. At this point, some readers might be squirming in their seats ready to jump out and say, “Aha! That’s what ReSharper|CodeSmith|Etc… can do for you!”

Fair enough. And in fact, the code to add the PropertyBehavior to each property of the IAnimal mock starts to get a bit cumbersome in this situation too. Let’s look at what that would look like.

SetupResult.For(animalMock.Legs).PropertyBehavior();
SetupResult.For(animalMock.Eyes).PropertyBehavior();
SetupResult.For(animalMock.Name).PropertyBehavior();
SetupResult.For(animalMock.Species).PropertyBehavior();

Still a lot less code to maintain than implementing each of the properties of the interface. But not very pretty. So I wrote up a quick utility method for adding the PropertyBehavior to every property of a mock.

/// <summary>
/// Sets all public read/write properties to have a 
/// property behavior when using Rhino Mocks.
/// </summary>
/// <param name="mock"></param>
public static void SetPropertyBehaviorOnAllProperties(object mock)
{
  PropertyInfo[] properties = mock.GetType().GetProperties();
  foreach (PropertyInfo property in properties)
  {
    if (property.CanRead && property.CanWrite)
    {
      //Reading the property while the mock is still in record mode
      //registers it as the last call...
      property.GetValue(mock, null);
      //...which lets us attach the PropertyBehavior to it.
      LastCall.On(mock).PropertyBehavior();
    }
  }
}

Using this method, this approach now has a lot of advantages over explicitly implementing the interface. Here’s an example of the test now, with another property exercised as well.

[Test]
public void DemoLegsProperty()
{
  MockRepository mocks = new MockRepository();
  
  //Creates an IAnimal stub    
  IAnimal animalMock = (IAnimal)mocks.DynamicMock(typeof(IAnimal));
  UnitTestHelper.SetPropertyBehaviorOnAllProperties(animalMock);
  mocks.ReplayAll();
    
  SomeClass instance = new SomeClass(animalMock);
  instance.SetLegs(10);
  Assert.AreEqual(10, animalMock.Legs);
  animalMock.Eyes = 2;
  Assert.AreEqual(2, animalMock.Eyes);
}

Be warned, I didn’t test this with indexed properties. It only applies to public read/write properties.
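If indexed properties ever become an issue, one simple guard (which I haven’t tested) would be to skip any property that takes index parameters. The method would be identical except for one extra check:

public static void SetPropertyBehaviorOnAllProperties(object mock)
{
  PropertyInfo[] properties = mock.GetType().GetProperties();
  foreach (PropertyInfo property in properties)
  {
    //Skip indexed properties in addition to read-only and write-only ones.
    if (property.CanRead && property.CanWrite
      && property.GetIndexParameters().Length == 0)
    {
      property.GetValue(mock, null);
      LastCall.On(mock).PropertyBehavior();
    }
  }
}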

Hopefully I can convince Ayende to include something like this in a future version of Rhino Mocks.


Thought I’d post a few pics from mix with some notes. Click on any for a larger view.

[Image: Phil, Jeff, and Jon]

This first one is of the three amigos, not to mention coauthors. That is me on the left sporting a Subtext shirt, Jeff Atwood in the middle, complete with CodingHorror sticker, and Jon Galloway on the right.

[Image: Scott Hanselman and Rob Conery]

That’s Scott Hanselman (who runs that other .NET open source blog engine) on the left and Rob Conery (of Subsonic fame) on the right. The joke here is that Scott is standing on some stairs because Rob Conery is a giant.

[Image: ScottGu, Miguel, and me]

Sometimes, the best parts of conferences are outside of the sessions. A few of us were sitting around having drinks when we spotted Scott Guthrie walking by. Not content to just let him be on his merry way (as that would have been the polite thing to do), we called him over and he proceeded to regale us with stories and walk us through some of the namespaces and such of Silverlight.

ScottGu, as he is known, is a total class act and I was happy to finally meet him in person.

[Image: Sam and Phil]

Here I am regaling Sam Ramji with an obviously hilarious joke. The picture is not, you know, staged whatsoever. No, not at all.

Sam is Director of Platform Technology Strategy and runs the Open Source Software Lab at Microsoft. Didn’t know there was someone in charge of Open Source at Microsoft? Neither did I until meeting him. A few of us had his ear during a dinner. Hopefully we’ll see some interesting things come out of it.

[Image: Tantek and Phil]

Tantek Çelik was walking by and noticed my XKCD t-shirt which sports a unix joke and had to take a picture. Not many people got the joke.

[Image: John Lam]

John Lam prepares for the Dynamic Language Runtime session with Jim Hugunin. This was one of my favorite sessions. One of the demonstrations was an application that allowed them to evaluate script dynamically using Silverlight. The neat part was they could switch languages, for example from Ruby to Python, and still evaluate properties of objects they had declared in the previous language. Hot!

I got a chance to hang out with John more at Pure and really enjoyed his perspective on Microsoft and child rearing.

[Image: Jeff, Phil, Miguel, and Jon]

Jeff, Jon, and I intently watch as Miguel de Icaza gives us a demo of Mono. The rotating cube desktop is pretty sweet.

[Image: Miguel and Jeff]

Jeff cannot conceal his Man Crush on Miguel.

[Image: Scott Stanfield]

Scott Stanfield, CEO of Vertigo, on the right playing Guitar Hero, the addict. Tonight he and I cleaned up at Spanish 21, winning over $300 each. This surprised the guy at the craps table who informed us that Spanish 21 is a terrible game in terms of odds. But aren’t they all?


Just a couple of notes while I have a break during the conference. I’ll try to find some time to write about my impressions of the technologies when I’ve had time to reflect.

In the meantime, allow me to tell a story about the Italia soccer jersey I wore on Sunday. It was a gift from a friend and I figured it fit the theme of staying at the Venetian. Get it? Italy!?

On Sunday, when Jon arrived in L.A. from SD, we went to brunch with my wife before leaving for Las Vegas. We decided to go to a nice French brunch place, La Dijonaise. Already some of you must see the conflict brewing.

Here I am, walking into a French restaurant wearing an Italian soccer jersey. The guy at the door took one look at me and told me, in a deeply French accent, “No no no. You cannot come in here.”

[Image: Eric Kemp, Miguel de Icaza, Jon Galloway, John Osborn, and me]

I figured he was joking, but it took me a moment to realize why this guy I had never met was joking with me, as he pointed to my shirt. Silly me.


Yesterday, while hanging out in the so-called “BlogZone,” Tim Heuer pulled me aside for a short audio interview on the topic of Subtext and Open Source, two things I love to talk about (and good luck getting me to shut up once you get me started). ;)

This was a surprise for me, as the last time I was interviewed was by a reporter for my college paper after my soccer team used the school paper to dry windows for a fundraising car wash. I told the reporter that the paper was good for drying windows because it doesn’t leave streaks. I was merely relaying what someone told me when they went to grab the papers, but my teammates all congratulated me for sticking it to the paper. Funny how that works out sometimes.

Back to the present: I cringed while listening to the interview, as I learned I’m much less eloquent than I hoped I would be in such a situation. Apparently I suffer from the “You Know” disease that Atwood suffers from. This is simply due to my nervousness at being interviewed, along with the fact that we were in a very noisy room surrounded by a lot of distractions (yes, this is me making excuses).

Not only that, there’s a point in the interview where I seem to lose focus and stammer. That’s because Scott Hanselman was calling me and I wasn’t sure whether to stop and give him directions to the BlogZone or continue. As you can hear, I continue and he found it just fine.

Unfortunately, there’s a lot more I would’ve liked to have said. Upon being asked whether the community has chipped in to Subtext, I started off with the example of recent commits related to the build server and mentioned a couple of people. I was just getting warmed up and didn’t get a chance to mention many others who have contributed. I apologize, but the interview probably would’ve gone on for hours if I had the proper time to express my appreciation to the Subtext developers and community.

The lesson learned for me is to slow down, take a deep breath, and don’t be afraid to take a moment to collect my thoughts. Don’t be afraid of dead air when speaking publicly.

In any case, Tim, I enjoyed being interviewed. I personally think you have a talent for it and would have done a much better job than the painful interview we were subjected to during the keynote. Seriously, they should’ve had you up there asking Ray and Scott questions.

In case you didn’t know, Tim contributed what is probably the most popular skin to Subtext, Origami.


Well, Jon and I arrived safely, driving into Vegas around 4 PM yesterday evening. Upon arriving, we met up with Miguel de Icaza, the founder of the Mono project, and headed over to the Mashup Lounge, where we ran into John Osborn, a senior editor with O’Reilly.

Being the small world that it is, John was a reviewer for the Windows Developer Power Tools book and happened to review the section I wrote on Tortoise CVS/SVN.

We were joined by Eric Kemp, one of the members of the Subsonic team, and a fun conversation on Open Source, Mono, politics, etc. ensued.

Later on in the evening we headed over to the BlogZone, a suite in the Venetian towers with a couple of Xboxes, food, and drinks. We were later joined by Jeff Atwood, Scott Hanselman, Clemens Vasters, and Steve Maine, and a deadly game of Guitar Hero ensued.

Keynote is about to start, will write more later.


If you’ve read my blog at all, you know I’m a big proponent of Continuous Integration (CI). For the Subtext project, we use CruiseControl.NET. I’ve written about our build process in the past.

Given the usefulness of having a build server, you can understand my frustration and sadness when our build server recently took a dive. I bought a replacement hard drive, but it was the wrong kind (a rookie mistake on my part, accidentally getting an IDE drive rather than SATA).

Members of the Subtext team, such as Simo, myself, and Scott Dorman, have put countless hours into perfecting the build server. If only we’d had CI Factory in our toolbelt before we started.

CI Factory is just that, a factory for creating CruiseControl.NET scripts. Scott Hanselman calls it a Continuous Integration accelerator. It bundles just about everything you need for a complete CI setup such as CCNET, NUnit or MbUnit, NCover, etc…

In the latest dnrTV episode, Jay Flowers, the creator of CI Factory, joins hosts Scott Hanselman and Carl Franklin to create a Continuous Integration setup using CI Factory in around an hour.

The project they chose to use as a demonstration is none other than Subtext! Given the number of hours we’ve put into setting up the Subtext build server, this is quite an ambitious undertaking, especially while being recorded.

Can you imagine having to write code while two guys provide color commentary? I’d probably wilt under that pressure, but Jay handles it with aplomb.

The video runs a bit long, but is worth watching if you plan to set up CI for your own project. The amount of XML configuration with CI Factory might seem daunting at first, but trust me when I say that it’s much worse for CCNET by itself. CI Factory reduces the amount of configuration by a lot, and Jay is constantly making it easier and easier to set up.

As an aside, Jay Flowers scores big points with me for also being a member of the MbUnit team, my favorite unit testing framework. Kudos to Jay, Scott, and Carl for a great show.


Charles Petzold makes the following lament in response to Jeff Atwood’s review of two WPF books, one being Petzold’s.

I’ve been mulling over Coding Horror’s analysis of two WPF books, not really thrilled about it, of course. The gist of it is that modern programming books should have color, bullet points, boxes, color, snippets, pictures, color, scannability, and color.

Does that remind you of anything?

Apparently the battle for the future of written communication is over. Prose is dead. PowerPoint has won.

With all due respect to Mr. Petzold, and he certainly deserves much respect, I think the comparison to PowerPoint is unfair and really misses the point.

Since when is technical writing prose?

Well, it often does meet one of the definitions of prose:

  1. the ordinary form of spoken or written language, without metrical structure, as distinguished from poetry or verse.
  2. matter-of-fact, commonplace, or dull expression, quality, discourse, etc.

Using that definition, I fail to see how the death of dull and commonplace expression signals a loss for the future of written communication. If anything, it’s a step in the right direction.

Technical writing is supposed to teach and help readers learn and retain information. Having visual aids not only helps cement the information in your mind, but also aids in finding that information when you need to look it up again.

Long passages of unbroken prose are great for getting lost in mental imagery when reading a novel, but it sucks for recall. Prose is alive and well in its proper place. Save the lengthy prose for the next great work of fiction, but cater to how the brain works when writing something meant to be absorbed, learned, and remembered.

[Image: Head First Design Patterns cover]

I think the Head First series really gets it when it comes to how the mind works and learns. From the introduction to Head First Design Patterns:

Your brain craves novelty. It’s always searching, scanning, waiting for something unusual. It was built that way, and it helps you stay alive.

Today, you’re less likely to be a tiger snack. But your brain’s still looking. You just never know.

So what does your brain do with all the routine, ordinary, normal things you encounter? Everything it can to stop them from interfering with the brain’s real job—recording things that matter. It doesn’t bother saving the boring things; they never make it past the “this is obviously not important” filter.

In a subsequent section, the book describes the Head First learning principles, a couple of which I quote below. I highly recommend reading this entire intro the next time you are in the bookstore.

Make it visual. Images are far more memorable than words alone, and make learning much more effective (up to 89% improvement in recall and transfer studies). It also makes things more understandable. Put the words within or near the graphics they relate to, rather than on the bottom or on another page, and learners will be up to twice as likely to solve problems related to the content.

Use a conversational and personalized style. In recent studies, students performed up to 40% better on post-learning tests if the content spoke directly to the reader, using a first-person conversational style rather than taking a formal tone.

What we see here is that study after study shows that appropriate use of images and graphics improves recall. Not only that, but a casual tone, like that found in a blog, also helps recall.

Unfortunately, Petzold draws an unfair analogy between Adam Nathan’s WPF book and PowerPoint. We’ve all heard that PowerPoint is evil, but the evil is in how users misuse PowerPoint, not PowerPoint itself. PowerPoint certainly makes it easy to go to the extreme with noisy graphics resulting in garish crowded presentations.

It’s this proliferation of PowerPoint presentations that favor graphics to the detriment of the content that leads to the disdain towards PowerPoint. But it is also possible to create sublime presentations with PowerPoint with just the right amount of graphics.

Even Tufte would acknowledge that getting rid of graphics and bullet points completely is also extreme in the opposite direction and works against the real goal, to convey information in a manner that the audience can understand and retain it.

Drawing a comparison between Nathan’s book and PowerPoint suggests that Nathan’s book is all fluff and flash. But based on reading sample chapters, that is hardly the case. As Jeff wrote, the graphics, colors, and bullets are all used judiciously and appropriately. This isn’t a case of Las Vegas trying to pretend it is Florence. There’s real substance here.


Several people have asked me recently about the nice code syntax highlighting in my blog. For example:

public string Test()
{
  //Look at the pretty colors
  return "Yay!";
}

A long time ago, I wrote about using http://www.manoli.net/csharpformat/ for converting code to HTML.

But these days, I use Omar Shahine’s Insert Code for Windows Live Writer plugin for, you guessed it, Windows Live Writer. This plugin just happens to use the Manoli code to perform syntax highlighting.

[Image: Plugin screenshot]

I recommend downloading and referencing the CSS stylesheet from the Manoli site and making sure to uncheck the Embed StyleSheet option in the plugin.

The drop shadow around the code is some CSS I found on the net.


UPDATE: This functionality is now rolled into the latest version of MbUnit.

A long time ago Patrick Cauldwell wrote up a technique for managing external files within unit tests by embedding them as resources and unpacking the resources during the unit test. This is a powerful technique for making unit tests self contained.

If you look in our unit tests for Subtext, I took this approach to heart, writing several different methods in our UnitTestHelper class for extracting embedded resources.
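The core of those helper methods is nothing exotic. Stripped down to its essence (the method name here is made up for illustration and isn’t the actual UnitTestHelper signature), extracting an embedded resource to a file looks something like this:

public static void ExtractResourceToFile(string resourceName, string destination)
{
  //Grab the resource stream from the assembly containing this helper.
  Assembly assembly = Assembly.GetExecutingAssembly();
  using (Stream stream = assembly.GetManifestResourceStream(resourceName))
  {
    if (stream == null)
      throw new InvalidOperationException("Resource not found: " + resourceName);

    using (FileStream fileStream = File.Create(destination))
    {
      byte[] buffer = new byte[8192];
      int bytesRead;
      while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
      {
        fileStream.Write(buffer, 0, bytesRead);
      }
    }
  }
}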

Last night, I had the idea to make the code cleaner and even easier to use by implementing a custom test decorator attribute for my favorite unit testing framework, MbUnit.

Usage Examples

The following code snippets demonstrate the usage of the attribute within a unit test. These code samples assume an embedded resource already exists in the same assembly that the unit test itself is defined in.

This first test demonstrates how to extract the resource to a specific file. You can specify a full destination path, or a path relative to the current directory.

[Test]
[ExtractResource("Embedded.Resource.Name.txt", "TestResource.txt")]
public void CanExtractResourceToFile()
{
  Assert.IsTrue(File.Exists("TestResource.txt"));
}

The next one demonstrates how to extract the resource to a stream rather than a file.

[Test]
[ExtractResource("Embedded.Resource.Name.txt")]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using(StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

As demonstrated in the previous example, you can access the stream via the static ExtractResourceAttribute.Stream property. This is only set if you don’t specify a destination.

In case you’re wondering, the stream is stored in a static member marked with the [ThreadStatic] attribute. That way, if you are taking advantage of MbUnit’s ability to repeat a test multiple times using multiple threads, you should be OK.

What if the resource is embedded in another assembly other than the one you are testing?

Not to worry. You can specify a type (any type) defined in the assembly that contains the embedded resource like so:

[Test]
[ExtractResource("Embedded.Resource.txt"
  , "TestResource.txt"
  , ResourceCleanup.DeleteAfterTest
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResource()
{
  Assert.IsTrue(File.Exists("TestResource.txt"));
}

[Test]
[ExtractResource("Embedded.Resource.txt"
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using (StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

This attribute should go a long way to making unit tests that use external files cleaner. It also demonstrates how easy it is to extend MbUnit.

A big Thank You goes to Jay Flowers for his help with this code. And before I forget, you can download the code for this custom test decorator here.

Please note that I left in my unit tests for the attribute which will fail unless you change the embedded resource name to match an embedded resource in your own assembly.


Take a good look at this picture.

[Image: A brick]

That there is pretty much my Shuttle machine today, metaphorically speaking of course.

We had a brief power outage today which appears to have fried just my hard drive, if I’m lucky. This machine was hosting our build server within a VMWare virtual machine.

Fortunately, my main machine was not affected by the outage because it is connected to a UPS.

The real loss is all the time it will take me to get the build server up and running again. Not to mention we were planning an imminent release and rely on our build server to automatically prepare a release. I hate manual work.


Before I begin, I should clarify what I mean by using a database as an API integration point.

In another life in a distant galaxy far far away, I worked on a project in which we needed to integrate a partner’s system with our system. The method of integration required that when a particular event occurred, they would write some data to a particular table in our database, which would then fire a trigger to perform whatever actions were necessary on our side (vague enough for ya?).

In this case, the data model and the related stored procedures made up the API used by the partner to integrate into our system.

So what’s the problem?

I always felt this was ugly in a few ways; I’m sure you’ll think of more.

  1. First, we have to make our database directly accessible to a third party, exposing ourselves to all the security risk that entails.
  2. We’re not really free to make schema changes as we have no abstraction layer between the database and any clients to the system.
  3. How exactly do you define a contract in SQL? With Web Services, you have XSD. With code, you have interfaces.

Personally, I’d like to have some sort of abstraction layer for my integration points so that I am free to change the underlying implementation.

Why am I bringing this up?

A little while ago, I was having a chat with a member of the Subtext team, telling him about the custom MembershipProvider we’re implementing for Subtext 2.0 to fit in with our data model. His initial reaction was that developer-users are going to grumble that we’re not using the “Standard” Membership Provider.

The “Standard”?

I question this notion of “The Standard Membership Provider.” Which provider is the standard? Is it the ActiveDirectoryMembershipProvider?

It is in anticipation of developer grumblings that I write this post to plead my case and perhaps rail against the wind.

The point of the Provider Model

You see, it seems that the whole point of the Provider Model is lost if you require a specific data model. The whole point of the provider model is to provide an abstraction to the underlying physical data store.

For example, Rob Howard, one of the authors of the Provider Pattern wrote this in the second part of his introduction to the Provider Pattern (emphasis mine).

A point brought up in the previous article discussed the conundrum the ASP.NET team faced while building the Personalization system used for ASP.NET 2.0. The problem was choosing the right data model: standard SQL tables versus a schema approach. Someone pointed out that the provider pattern doesn’t solve this, which is 100% correct. What it does allow is the flexibility to choose which data model makes the most sense for your organization. An important note about the pattern: it doesn’t solve how you store your data, but it does abstract that decision out of your programming interface.

What Rob and Microsoft realized is that no one data model fits all. Many applications will already have a data model for storing users and roles.

The idea is that if you write code and controls against the provider API, the underlying data model doesn’t matter. This is emphasized by the goals of the provider model according to the MSDN introduction…

The ASP.NET 2.0 provider model was designed with the following goals in mind:

  • To make ASP.NET state storage both flexible and extensible
  • To insulate application-level code and code in the ASP.NET run-time from the physical storage media where state is stored, and to isolate the changes required to use alternative media types to a single well-defined layer with minimal surface area
  • To make writing custom providers as simple as possible by providing a robust and well-documented set of base classes from which developers can derive provider classes of their own

It is expected that developers who wish to pair ASP.NET 2.0 with data sources for which off-the-shelf providers are not available can, with a reasonable amount of effort, write custom providers to do the job.

Of course, Microsoft made it easy for all of us developers by shipping a full featured SqlMembershipProvider complete with database schema and stored procedures. When building a new implementation from scratch, it makes a lot of sense to use this implementation. If your needs fit within the implementation, then that is a lot of work that you don’t have to do.

Unfortunately, many developers took it to be the gospel truth and the standard for how the data model should be implemented. This is really only one possible database implementation of a Membership Provider.

An Example Gone Wrong

There is one particular open source application I recall that already had a fantastic user and roles implementation at the time the Membership Provider Model was released. Their existing implementation was, in all respects, a superset of the features of the Membership Provider.

Naturally there was a lot of pressure to implement the Membership Provider API, so they chose to simply implement the SqlMembershipProvider’s tables side by side with their own user tables.

Stepping through the code in a debugger one day, I watched in disbelief when upon logging in as a user, the code started copying all users from the SqlMembershipProvider’s stock aspnet_* tables to the application’s internal user tables and vice versa. They were essentially keeping two separate user databases in synch on every login.

In my view, this was the wrong approach to take. It would’ve been much better to simply implement a custom MembershipProvider class that read from and wrote to their existing user database tables.

The features of their existing users and roles implementation that the Membership Provider did not support could have been exposed via their existing API.

Yes, I’m armchair quarterbacking at this point as there may have been some extenuating circumstances I am not aware of. But I can’t imagine doing a full multi-table synch on every login being a good choice, especially for a large database of users. I’m not aware of the status of this implementation detail at this point in time.

The Big But

Someone somewhere is reading this thinking I’m being a bit overly dogmatic. They might be thinking

But, but I have three apps in my organization which communicate with each other via the database just fine. This is a workable solution for our scenario, thank you very much. You’re full of it.

I totally agree on all three counts.

For a set of internal applications within an organization, it may well make sense to integrate at the database layer, since all communications between apps occurs within the security boundary of your internal network and you have full control over the implementation details for all of the applications.

So while I still think even these apps could benefit from a well defined API or Web Service layer as the point of integration, I don’t think you should never consider the database as a potential integration point.

But when you’re considering integration for external applications outside of your control, especially applications that haven’t even been written yet, I think the database is a really poor choice and should be avoided.

Microsoft recognized this with the Provider Model, which is why controls written for the MembershipProvider are not supposed to assume anything about the underlying data store. For example, they don’t make direct queries against the “standard” Membership tables.

Instead, when you need to integrate with a membership database, use the API.
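To make that concrete, here’s a trivial sketch of what I mean (the role name is arbitrary). Nothing in it assumes a particular set of tables, so it works no matter which MembershipProvider and RoleProvider are configured.

public bool IsAdministrator(string username, string password)
{
  //Goes through the configured providers rather than querying tables directly.
  if (!Membership.ValidateUser(username, password))
    return false;

  return Roles.IsUserInRole(username, "Administrators");
}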

Hopefully future users and developers of Subtext will also recognize this when we unveil the Membership features in Subtext 2.0 and keep the grumbling to a minimum. Either that or point out how full of it I am and convince me to change my mind.

See also: Where the Provider Model Falls Short.


I don’t think it’s too much of a stretch to say that the hardest part of coding is not writing code, but reading it. As Eric Lippert points out, Reading code is hard.

First off, I agree with you that there are very few people who can read code who cannot write code themselves. It’s not like written or spoken natural languages, where understanding what someone else says does not require understanding why they said it that way.

[Image: screenshot of code] Hmmm, now why did Eric say that in that particular way?

This is in part why reinventing the wheel is so common (apart from the need to prove you can build a better wheel). It’s easier to write new code than to try to understand and use existing code.

It is crucial to try and make your code as easy to read as possible. Strive to be the Dr. Seuss of writing code. Making your code easy to read makes it easier to use.

The basics of readable code include the usual advice of following code conventions, formatting code properly, and choosing good names for methods and variables, among other things. This is all included within Code Complete which should be your software development bible.

Aside from all that, a key tactic for improving code readability and usability is to make your code’s intentions crystal clear.

Oftentimes it’s paying attention to the little things that can really help your code along this path. Let’s look at a few examples.

out vs ref

A while ago I encountered some code that looked something like this contrived example:

int y = 7;
//...
bool success = TrySomething(someParam, ref y);

Ignore the terrible names and focus on the parameters. At a glance, what is your initial expectation of this code regarding its parameters?

When I encountered this code, I assumed that the y parameter value passed in to this method is important somehow and that the method probably changes the value.

I then took a look at the method (keep in mind this is all extremely simplified from the actual code).

public bool TrySomething(object something, ref int y)
{
  try
  {
    y = resultOfCalculation(something);
  }
  catch(SomeException)
  {
    return false;
  }
  return true;
}

Now this annoyed me. Sure, this method is perfectly valid and will compile. But notice that the value of y is never used. It is immediately assigned to something else.

The intention of this method is not clear. Its intent is not to ever use the value of y, but merely to set it. But since the method uses the ref keyword, you are required to set the value of the parameter before you call it. You can’t do this:

int y;
bool success = TrySomething(someParam, ref y);

In this case, using the out keyword expresses the intentions much better.

public bool TrySomething(object something, out int y)
{
  try
  {
    y = resultOfCalculation(something);
  }
  catch(SomeException)
  {
    return false;
  }
  return true;
}

It’s a really teeny tiny thing, something you might accuse me of being nitpicky even bringing it up, but anything you can do so that the reader of the code doesn’t have to interrupt her train of thought to figure out the meaning of the code will make your code more readable and the API more usable.

Boolean Arguments vs Enums

Brad Abrams touched upon this one a while ago. Let’s look at an example.

BlogPost p = CreatePost(post, true, false);

What exactly is this code doing? Well, it’s obvious it creates a blog post. But what does that true indicate? Hard to say. I’d better pause, look up the method, and then move on. What a pain!

BlogPost p = CreatePost(post
  , PostStatus.Published, CommentStatus.CommentsDisabled);

In the second case, the intent of the code is much clearer, and there is no interruption for the reader to figure out the context of the true or false as in the first method.
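For completeness, the enums that the second signature implies might look something like the following. These aren’t from any real API; they’re just what the example assumes.

public enum PostStatus
{
  Draft,
  Published
}

public enum CommentStatus
{
  CommentsEnabled,
  CommentsDisabled
}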

Assigning a Value You Don’t Use

Another common example I’ve seen is where the result of a method is assigned to the value of a variable, but the variable is never used. I think this often happens because some developers falsely believe that if a method returns a value, that value has to be assigned to something.

Let’s look at an example that uses the TrySomething method I wrote earlier.

int y;
bool success = TrySomething(something, out y);
/*success is never used again.*/

Fortunately, Resharper makes this sort of thing stick out like a sore thumb. The problem here is that as a code reader, I’m left wondering if you meant to use the variable and forgot, or if this is an unnecessary declaration. Do this instead.

int y;
TrySomething(something, out y);

Again, these are very small things, but they make a big difference. Don’t worry about coming across as anal (you will) because the payout is worth it in the end.

What are some examples that you can think of to make code more readable and usable?

UPDATE: Lesson learned. If you oversimplify your code examples, your main point is lost. Especially on the topic of code readability. Touche! I’ve updated the sample code to better illustrate my point. The comments may be out of synch with what you read here as a result.

UPDATE AGAIN: I found another great blog post about writing concise code that adds a lot to this discussion. It is part of the Fail Fast and Return Early school of thought. Short, concise and readable code - invert your logic and stop nesting already!
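The gist of that advice, in a contrived sketch of my own (the BlogPost members here are made up): check for the failure cases first and return early, rather than nesting the happy path ever deeper.

//Nested version: the interesting work is buried three levels deep.
public void PublishPostNested(BlogPost post)
{
  if (post != null)
  {
    if (post.Author != null)
    {
      if (!post.IsPublished)
      {
        post.Publish();
      }
    }
  }
}

//Early-return version: fail fast, then do the real work at the top level.
public void PublishPost(BlogPost post)
{
  if (post == null)
    throw new ArgumentNullException("post");
  if (post.Author == null)
    throw new ArgumentException("The post must have an author.", "post");
  if (post.IsPublished)
    return;

  post.Publish();
}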


According to FeedBurner, many of my readers are from London, so I thought you might enjoy this little tale.

Tonight, I met someone extremely famous, or so I was told. When I got home, I looked him up, and sure enough, he is huge in Europe. According to Wikipedia, “he has sold more albums in the UK than any other British solo artist in history”.

Have any of you heard of Robbie Williams?

[Image: Robbie Williams]

My wife knew who he was immediately. Must be the fact that she’s a British citizen (she has dual Japanese citizenship as well). She played one of his songs from an Alice 97.3 compilation we have. I rather liked it.

It turns out that he runs (owns?) a soccer team in Los Angeles. We had a friendly scrimmage set up with them at UCLA. I fully expected we’d be playing on the intramural fields where everyone else plays, but instead we played on the immaculate UCLA Football team’s practice field.

This seems to be a trend I’m noticing among British music stars. They move to Los Angeles and start up soccer teams to manage. They also seem to have the means to absorb some of the best talent in Los Angeles in doing so.

As I’ve written before, Steve Jones of the Sex Pistols runs a team in my league. I have heard that Rod Stewart has a team in Los Angeles as well. I suppose if the day comes when I can’t run on the pitch, and if I had that sort of money, I could see running a soccer club (sorry, Football Club) as a fantastic hobby.

[Image: Santiago Cabrera]

Not to be outdone, my team now has its own celebrity member. Santiago Cabrera from the TV show Heroes is now a member of our team.

Fortunately, he is a very talented soccer player, scoring a bicycle kick against us in our scrimmage tonight (he plays on the other team as well). Now if we could just get some former pros to join us to help solidify our midfield. Zidane, I’m looking at you, buddy!


Tim Heuer has been on a tear lately submitting some great new skins to the Subtext Skin Showcase, which is part of SubtextSkins.com.

The Showcase is the part of the site in which we display user submitted skins and allow others to download the skins. The other part of the site displays the default skins in Subtext.

[Image: Glossy Blue, Terrafirma, Dirtylicious, and Informatif skins]

It appears that Tim has been porting some of the nicer designs from the Open Designs website, a website devoted to open source web design.

Tim happens to also be the creator of Origami (which you can see in use on Rob Conery’s Blog), which many consider to be the nicest skin in Subtext.

If you are a Subtext user, try out some of these skins. They may find their way into future releases of Subtext.


Simone Chiaretta, a member of the Subtext team (not to mention many other projects), just released a Vista Gadget which allows you to monitor a CruiseControl.NET build within your sidebar.

It looks spiffier than the system tray applet that comes with CCNET.

Here’s a screenshot of it docked.

[Image: CCNET Gadget docked]

And undocked.

[Image: CCNET Gadget undocked]

From the screenshots you can see the status of the projects he is monitoring. The good news is that the 1.9 build has been fixed since he took these screenshots.

Pretty nifty!


I received a strange delinquency notice for a parking ticket. At first glance, it seemed normal enough. Yep, there’s my license plate number. Yep, the make of the car is correct. But look at this, the color of the car is wrong.

That’s strange since it’s not one of those cases where they indicated midnight blue when the car is black. No, they indicated red and my car is blue.

And one other minor detail was a bit off. The parking ticket was for Fillmore street in San Francisco and I live in Los Angeles.

Huh?!

I called the SF parking department and the nice woman on the phone looked into it and told me that the parking attendant made several errors in the citation and I can disregard the notice.

Several errors? I’ll say.

Like hallucinating a car that couldn’t possibly be in San Francisco at the time? Or perhaps there just happens to be a red car of the same make as mine with the same license plate number, just with a “B” where mine has an “8”.


It wasn’t till 1987 that I experienced my first (and worst) case of technolust ever. The object that inspired such raw feelings of lust, of course, was the Commodore Amiga.

As a lowly Commodore 128 owner, which was really just a glorified Commodore 64 in a beige case, I bought every issue of the Commodore magazines of the day.

[Image: Amiga 500]

These magazines started showing off these lush advertisements of the Commodore Amiga, boasting of its 4096 colors and 4-channel stereo sound.

I had to have it.

Looking back, I am shocked at how much my lust for the Amiga held sway over me. I purchased a copy of every Amiga magazine on the newstand, talked about it incessantly to anyone who would listen, and had vivid dreams of the Amiga’s amazing graphics capabilities.

And when I finally got my hands on it, it was every bit as good as I had hoped.

For many Amiga users at the time, the Amiga was true to its name (Spanish for female friend) in that it was the closest thing to a girlfriend we had. Give me a break, I was only twelve at the time.

Like having a girlfriend, I spent countless hours with the computer, not to mention countless dollars on peripherals and upgrades. I remember hustling for tips at the local commissary in order to upgrade the beast from 512K to 1MB of ram (cost: $99).

The reason I bring this up is I came across a recent article on the Wired website entitled Top 10 Most Influential Amiga Games, which filled me with a rush of nostalgia.

I only had the pleasure to play two of the games listed, Defender of the Crown, in which catapulting castles was pure fun, and SpeedBall 2, which probably was responsible for the pile of broken joysticks I accumulated.

[Images: Defender of the Crown catapult scene; Speedball 2 screenshot]

Personally though, I thought Lords of the Rising Sun (also made by Cinemaware) was even better than Defender of the Crown.

[Images: Lords of the Rising Sun screenshots, including one with a ninja]

The game sequence in which you could snipe advancing besiegers using a first-person bow and arrow with a little red laser dot was exhilarating (sadly, I could not find a screenshot).

[Image: Speedball 1 screenshot]

I also liked Speedball 1 (shown here) slightly better than 2 because the side scrolling in 2 always threw me off.

I still have my Amiga 500 gathering dust in a storage cabinet in the garage. I’ve been meaning to unpack it and see if it still works, but my home is small and there’s really no room to set it up. I figure there must be a better way to try out my old games.

Amiga Emulation!

Digging around, I discovered there’s an active project to create an Amiga emulator for *nix called UAE. There’s a Windows port called, not surprisingly, WinUAE (click for full size).

[Image: WinUAE screenshot]

Unfortunately, these projects cannot distribute the Amiga ROM nor its operating system due to copyright issues. However they do provide instructions on how to transfer the ROM and operating system over to your PC on their FAQ.

Amiga Forever

An even easier approach is to simply purchase Amiga Forever for around forty bucks. This is an ISO image that contains a preconfigured WinUAE with the original ROM and operating system files. Amiga Forever is sold by Cloanto who currently own certain intellectual property rights to the Amiga.

Amiga Forever comes with several games for the Amiga as well that vary with the edition purchased. The site also has a games section in which they list places to download more games.

For example, the Cinemaware site has disk images for pretty much all of their games available for free, including Lords of the Rising Sun.

Play Defender of the Crown Immediately

All this talk of Amiga emulation sounds like fun and everything, but seriously, do I need yet another time sink? If you’re jonesing for some Amiga gaming now and don’t want to be bothered with emulation, head over to the Cinemaware website and satiate your Amiga gaming kick by playing the Flash version of Defender of the Crown. Now about that time sink…

Though I owned a couple computers prior to the Amiga, the Amiga is truly the computer that fueled my fire for computing.