
A few people mentioned that they had the following compiler error when trying to compile HttpSimulator:

HttpSimulator.cs(722,38): error CS0122: ’System.Web.Configuration.IConfigMapPath’ is inaccessible due to its protection level

Well you’re not alone. Our build server is also having this same problem. Now before you curse me for releasing something that doesn’t even compile, I’d like to point out that it works on my machine.

Fortunately, we have our expert build problem solver, Simone Chiaretta, to look into it.

After a bit of snooping, he discovered that the reason that it builds on my machine is that I’m running Windows Vista with IIS 7.

The System.Web assembly on Vista is slightly newer than the one on Windows 2003/XP.

  • VISTA: v2.0.50727.312
  • Windows 2003/XP: v2.0.50727.210

So there you have it. Now you finally have a good reason to upgrade to Vista. HttpSimulator to the rescue! (Sorry, I know that punchline is getting old).

I’ll see if I can create a workaround for those of you (such as our build server) not running on Vista.

asp.net, tdd

Testing code written for the web is challenging. Especially code that makes use of the ASP.NET intrinsic objects such as the HttpRequest object. My goal is to make testing such code easier.


A while ago, I wrote some code to simulate the HttpContext in order to make writing such unit tests easier. My goal wasn’t to replace web testing frameworks such as Selenium, Watin, or AspUnit. Instead, I’m a fan of the Pareto principle, and I hoped to help people easily reach the 80 in the 80/20 rule before reaching for one of these tools to cover the last mile.

I’ve spent some time since then refactoring the code and improving the API. I also implemented some features that were lacking such as being able to call MapPath and setting and getting Session and Application variables.

To that end, I introduce the HttpSimulator class. To best demonstrate how to use it, I will present some unit test code.

The following code simulates a simple GET request for the web root with the physical location c:\inetpub. The actual path passed into the simulator doesn’t matter. It’s all simulated. This tests that you can set a session variable and then retrieve it.

[Test]
public void CanGetSetSession()
{
  using (new HttpSimulator("/", @"c:\inetpub\").SimulateRequest())
  {
    HttpContext.Current.Session["Test"] = "Success";
    Assert.AreEqual("Success", HttpContext.Current.Session["Test"]);
  }
}

The following test method demonstrates two different methods for simulating a form post. The second using block shows off the fluent interface.

[Test]
public void CanSimulateFormPost()
{
  using (HttpSimulator simulator = new HttpSimulator())
  {
    NameValueCollection form = new NameValueCollection();
    form.Add("Test1", "Value1");
    form.Add("Test2", "Value2");
    simulator.SimulateRequest(new Uri("http://localhost/Test.aspx"), form);

    Assert.AreEqual("Value1", HttpContext.Current.Request.Form["Test1"]);
    Assert.AreEqual("Value2", HttpContext.Current.Request.Form["Test2"]);
  }

  using (HttpSimulator simulator = new HttpSimulator())
  {
    simulator.SetFormVariable("Test1", "Value1")
      .SetFormVariable("Test2", "Value2")
      .SimulateRequest(new Uri("http://localhost/Test.aspx"));

    Assert.AreEqual("Value1", HttpContext.Current.Request.Form["Test1"]);
    Assert.AreEqual("Value2", HttpContext.Current.Request.Form["Test2"]);
  }
}

The SimulateRequest method is always called last once you’ve set your form or query string variables and whatnot. For read and write values such as session, you can set them after the call. If you download the code, you can see other usage examples in the unit tests.

One area where I’ve had a lot of success with this class is in unit testing custom HttpHandlers. I’ve also used it to test custom control rendering code and helper methods for ASP.NET.

This code can be found in the Subtext.TestLibrary project in our Subversion repository. This project contains code I’ve found useful within my unit tests such as a test SMTP server and a test Web Server using WebServer.WebDev.

To make it easy for you to start using the HttpSimulator, I’ve packaged the relevant files in a zip file including the unit tests.

I must make one confession. I originally tried to do all this by using the public APIs. Unfortunately, so many classes are internal or sealed that I had to get my hands dirty and resort to using reflection. Doing so freed me up to finally get certain features working that I could not before.

And now, for some preemptive answers to expected criticism.

​1. You shouldn’t access the HttpContext anyways. You should abstract away the HttpContext by creating your own IContext and using IoC and Dependency Injection.

You’re absolutely right. Next criticism.

​2. This isn’t “unit testing”, this is “integration testing”.

Very astute observation. Well said. Next?

​3. You’re not taking our criticisms seriously!

Au contraire! I take such criticisms very seriously. Even if you write a bunch of code to abstract away the web from your code throwing all sorts of injections and inversions at it, you still have to test your abstraction. HttpSimulator to the rescue!

Likewise, whether this is unit testing or integration testing is splitting semantic hairs. Before TDD came along, unit testing meant testing a unit of code. It usually meant walking through the code line by line and executing a single function. If you want to call these integration tests, fine. HttpSimulator to the rescue!

Not to mention that in the real world, you sometimes don’t get to write code from scratch using sound TDD principles. A lot of time you inherit legacy code and the best you can do is try to write tests after-the-fact as you go before you refactor the code. Again, HttpSimulator to the rescue!

Here is the link to download the source files in case you missed it the first time.


What would you do if you found out that a project you were working on was going to be used in an unethical or illegal manner?

This is the sort of question that K. Scott Allen asks via a hypothetical scenario he proposes. At least I hope it’s hypothetical. Scott, is there something you want to confess? ;)

While his situation is hypothetical, I think enough time has passed for me to tell you about a real situation I had the pleasure of dealing with at VelocIT. Some of the minor details have been changed to protect the guilty.

This story starts a couple of years ago in the primordial days of VelocIT. Like many bright-eyed start-ups, spirits were high, but cash flow was low. We were hurting for more clients.

So it seemed providence smiled on us when I received an email from a former coworker who was employed by a gaming company in Las Vegas.


Because of my background in online and mobile gaming, he was interested in hiring me, via VelocIT, to help build out the back-end server infrastructure for their upcoming multi-player mobile gaming platform.

His company flies Micah Dylan (our CEO) and me out to Las Vegas where we have some meetings to go over requirements during the day. In the evening, we head out for a few (ok many) drinks and dancing at a club. He eyes a woman he goes gaga for, but won’t approach because she’s “out of my league”.

I tell him that “league” is a frame of mind and nobody is out of his league unless he believes they are. I proceed to play wingman and approach her, strike up a conversation, then conveniently introduce her to “my friend” who happened to have conveniently just returned with our drinks.

It’s client engagement management in true Vegas fashion.

At this point, I’m feeling pretty good about my sales skills and feel I did a pretty bang up job in sealing the deal. We talk a week later and this guy is dating the girl from the bar! Sure enough, he wants to work with us, but there’s this small eensy weensy tiny little problem. His budget is a fraction of our estimate.

But he has a solution!

He wants to pay us $20,000 US to have a group of Eastern Europeans do the work. To clarify, I call him back and ask,

So you want to pay us to manage the offshore team? We can do that.

He clarifies. He’ll pay us $40K and we’ll turn around and hire some Eastern Europeans through him for $20K.

Uh. What happens to the other $20K?

Oh, I pocket it for being an intermediary.


Micah and I start brainstorming scenarios in which this could possibly be legal. We debate and debate looking for ways that this situation might be kosher. Surely there must be some way to arrange this so it is legal. Perhaps we are misunderstanding him. Nothing legitimate comes to mind. We’re in real need of a client so we have a lot of motivation to see this in some sort of positive light.

Being too close to the situation, I call my friend (and our company lawyer) Walter to provide an objective outside opinion. After I explain the situation, he points out that this is a classic example of a kickback and is in no way legal or ethical. Not even close. No amount of convoluted reasoning will take the stink off of this crap. We would be helping him steal from his own company.

While we could really use the business, Micah and I conclude that we don’t want to start our company off on the wrong foot with an illegal or unethical dealing. In fact, even if it was legal, we wanted to run our company with a higher standard than just legal. Our business is a reflection of our values and we want it to be held to the highest of ethical standards. Sure, we struggled this time. But we’d never need to struggle again by following one simple ethical rule:

If it doesn’t pass the smell test, we pass.

VelocIT has stayed true to that direction ever since. I think it is a great way to run a business.


One praiseworthy aspect of ASP.NET 2.0 is its much improved XHTML compliance. However, there is one particular implementation detail related to this that causes some web designs to break and could have been implemented in a better manner.

The detail is how ASP.NET 2.0 will wrap a DIV tag around hidden input fields. My complaint isn’t that Microsoft added this DIV wrapper, because it is needed for strict compliance. My complaint is that there is no CSS class or id on the DIV to make it easy to exclude CSS styling on it.

For example, here is a snippet from the output of a simple page.

<form name="form1" method="post" action="Default.aspx" id="form1">
<div>
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="Omitted" />
</div>

Hello World

</form>

It would have been nice if the author of this code could have simply added something like:

<div class="aspnet-generated">
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="Omitted" />
</div>

It is quite common for web designers to apply a specific style to all DIVs on a page, for example, adding a padding of 5px.

<style type="text/css">
  div {padding: 5px;}
</style>

Unfortunately, this leaves a gap where the ASP.NET generated DIV is located.

In a comment made on his blog, Scott Guthrie makes this remark on this topic:

You could modify your CSS to exclude the <div> we create by default immediately underneath the form tag.

In general I’d probably recommend having as broad a CSS rule as the one you have above - since it will effect lots of content on the page. Can you instead have it apply to a CSS class only?

Yes, you could modify the CSS to exclude the first child DIV of the FORM tag by using a child selector and a first-child pseudo class like so:

<style type="text/css">
  div {padding: 5px;}
  form>div:first-child {padding: 0; margin: 0;}
</style>

Unfortunately, IE 6 supports neither child selectors nor the first-child pseudo-class. Since IE 6 is still quite widely used, this is not a viable solution.

Regarding Scott’s second question, this isn’t always reasonable because many web designs apply certain styles to most DIVs on a page and then exclude the few that shouldn’t have that style. In that situation, it takes more work to give every DIV a CSS class so you can apply the style to just that class. It is simpler to use an exclusionary approach in these cases: apply the style to all DIVs and exclude the ones that need to be excluded.

Unfortunately, because of the way this DIV wrapper was implemented and, because of CSS non-compliance in IE 6, it’s not possible to exclude this DIV using CSS alone. It requires changing the markup.

Fortunately, there’s an easy solution, though it does require changing your markup just a bit. Just wrap your content in a DIV with a specific ID.

<form id="form1" runat="server">
  <div id="main">
      Hello World
  </div>
</form>
And then style it like so.

<style type="text/css">
  div {padding: 0; margin: 0;} /* generated div */
  #main div {padding: 5px;} /* all other divs */
</style>

This is a lot easier (and higher performing) than trying to muck around with the output via the HttpResponse.Filter.

So while the solution is easy, it still bothers me that it is necessary. One main reason why is that I often get CSS designs handed to me and I have to go through and make sure to make this change appropriately. I’d rather just be able to plop a one line CSS change into every stylesheet like so:

div.aspnet-generated {padding: 0; margin: 0;}

On another note, one other interesting side-effect of this change in ASP.NET 2.0 is that many implementations for moving viewstate to the bottom of the page end up breaking XHTML compliance because they only move the input tags and not the entire DIV to the bottom.


With Father’s Day fast approaching (June 17 this year), and now that I have joined the hallowed ranks of fathers, I thought I’d have a little fun writing about something I posted on my blog two years ago, but has recently popped up on my radar.

As a fan of dark humor, I thought this was rather clever and funny at the time.

If you take a closer look at the comments, you’ll notice that there are no comments from 2005. They all start in 2007. It turns out that at that time, I didn’t have much traffic.

But recently, this post somehow found its way onto StumbleUpon as well as a couple of other humor sites, which drove a huge amount of traffic to my blog. The following graph shows the number of visits per day.

Google Analytics

As you can see, I enjoyed a brief spike in traffic, but things returned pretty much to normal soon afterwards.

What’s interesting about this spike in traffic is that it follows the traffic pattern that you hear about when a site is featured prominently on Digg. In other words, what would you guess happened to the average time on my site?

Google Analytics: average time on site

You got it. There was a corresponding dip. Well that makes sense, there isn’t much to read so most users looked at it and moved on. Was there a corresponding drop in the number of pages per visit?

![pages per visit](https://haacked.com/images/haacked_com/WindowsLiveWriter/MusingsonFathersDayHumor_D88B/pages-per-visit.png)

A slight dip, but not by much. Most visitors to my site view only one to two pages at most per visit.

So there you go, if you want to drive traffic to your blog two years from now, post something really really funny. But keep in mind that this traffic is not qualified traffic in that it won’t likely convert to new readers who actually enjoy the whole of your blog.

In general, I think it’s best to focus on building the type of traffic you want. But if you have something truly funny to post, do share.

Technorati tags: Humor, Fathers Day, Google Analytics

code, tdd

Most of the time when I’m testing my code, I only test it using the en-US culture since, …well…, I speak English and I live in the U.S. Isn’t the U.S. the only country that matters anyway? ;)

Fortunately, there are Subtext team members living in other countries ready to smack such nonsensical thoughts from my head and keep me honest about Localization and Internationalization issues.

Simone, who is an Italian living in New Zealand, pointed out that a particular unit test that works on my machine always fails on his machine. Here’s the test.

[RowTest]
[Row("4/12/2006", "04/12/2006 00:00:00 AM")]
[Row("20070123T120102", "01/23/2007 12:01:02 PM")]
[Row("12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
[Row("Wed, 12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
public void CanParseUnknownFormatUTC(string received, string expected)
{
  DateTime parsedDate = DateTimeHelper.ParseUnknownFormatUTC(received);
  Assert.AreEqual(expected, parsedDate.ToString("MM/dd/yyyy HH:mm:ss tt"));
}

The method being tested simply takes in a date string in an unknown format and performs a few heuristics in order to parse the date.

The way I test this method is very U.S. centric. I call ToString() and then match it to the expected string defined in the Row attributes (I can’t use actual DateTime values in the attributes).

So for the very first row, I expect that date to match 04/12/2006 00:00:00 AM. But when Simo runs the test over there in New Zealand, he gets 12/04/2006 00:00:00 a.m.

Makes you wonder how anyone over there can keep an appointment with the month and date all backwards like that. ;)
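The fragility here is the string comparison itself: ToString uses the current culture’s date pattern. A minimal standalone sketch of the culture-proof alternative (parse the expected value with an explicit culture and compare DateTime values; the second Parse call stands in for the real DateTimeHelper.ParseUnknownFormatUTC):

```csharp
using System;
using System.Globalization;

class CultureSafeComparison
{
    static void Main()
    {
        // Parse the expected value with an explicit culture instead of
        // comparing culture-formatted strings.
        DateTime expected = DateTime.ParseExact(
            "04/12/2006 06:59:33",
            "MM/dd/yyyy HH:mm:ss",
            CultureInfo.InvariantCulture);

        // Stand-in for the parser's output; the real test calls
        // DateTimeHelper.ParseUnknownFormatUTC.
        DateTime actual = DateTime.Parse(
            "12 Apr 2006 06:59:33", CultureInfo.InvariantCulture);

        // DateTime values compare equal regardless of the machine's culture.
        Console.WriteLine(expected == actual); // True
    }
}
```

That fixes this particular test, but as you’ll see below, there’s an even nicer way to keep the culture coverage honest.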

Testing In Another Culture

At this point, I start thinking of convincing my wife to take a vacation in New Zealand so I can test this method properly. Hmmm… that’s probably not going to fly, with the newborn and all.

Another option is to go into my regional settings and change my locale to test temporarily, but that sort of defeats the purpose of automated tests once I change it back. What to do?

MbUnit to the rescue!

Once again, I discover a feature I hadn’t known about in MbUnit that solves this problem (Jeff and Jon, feel free to snicker).

Looking at the MbUnit TestDecorators page, I noticed there is a [MultipleCultureAttribute] decorator! Hmmm, I bet that could end up being useful.

Unfortunately, at the time, this decorator was not documented (I’ve since documented it), so I looked up the code on Koders real quick to see the documentation and saw that I simply need to pass in a comma delimited string of cultures. This allows me to run a single test multiple times, once for each culture listed.

Here is the updated test with my code correction.

[RowTest]
[MultipleCulture("en-US,en-NZ,it-IT")]
[Row("4/12/2006", "04/12/2006 00:00:00 AM")]
[Row("20070123T120102", "01/23/2007 12:01:02 PM")]
[Row("12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
[Row("Wed, 12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
public void CanParseUnknownFormatUTC(string received, string expected)
{
  DateTime parsedDate = DateTimeHelper.ParseUnknownFormatUTC(received);
  Assert.AreEqual(DateTime.ParseExact(expected
    , "MM/dd/yyyy HH:mm:ss tt"
    , new CultureInfo("en-US")), parsedDate);
}

One cool note about how decorators like this work in MbUnit is the way it composes with the RowTest’s Row attributes. For example, in the above test, the test method will get called once per culture per Row for a grand total of 12 times.

So now my friends in faraway places will have the pleasure of unit tests that pass in their respective locales and I can feel like a better citizen of the world.

0 comments suggest edit


How often do you see code like this to create a file path?

public string GetFullPath(string fileName)
{
  string folder = ConfigurationManager.AppSettings["somefolder"];
  return folder + fileName;
}

Code like this drives me crazy because it is so prone to error. For example, when you set the folder setting, you have to remember to make sure it ends with a slash. Having too many things to remember makes this setup fragile.
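To make the failure mode concrete, here’s a minimal sketch (the folder value is made up for illustration):

```csharp
using System;
using System.IO;

class PathConcatDemo
{
    static void Main()
    {
        // Hypothetical setting value with no trailing slash.
        string folder = "/data/uploads";

        // Naive concatenation silently produces the wrong path.
        string bad = folder + "file.txt";
        Console.WriteLine(bad); // /data/uploadsfile.txt

        // Path.Combine inserts the separator only when it's missing.
        string good = Path.Combine(folder, "file.txt");
        Console.WriteLine(good); // /data/uploads/file.txt on Unix-style systems
    }
}
```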

Sure, you could write some code to ensure that the folder has a trailing slash, but I’d rather let someone else write that code. For example, Microsoft.

The .NET Framework is huge, so it is understandable to miss out on some of the useful utility classes in there that will make your life as a developer easier. Here’s the same method using one of them:

public string GetFullPath(string filename)
{
  string folder = ConfigurationManager.AppSettings["somefolder"];
  return System.IO.Path.Combine(folder, filename);
}

The Path class is certainly well known and probably well used, but is still one of those classes that developers seem to never use to its full potential. For example, how often do you see this?

//make sure folder path ends with slash
string folder = GetFolderPath() + @"\";

Well that’s nice for Windows machines, but our world is changing and someday, you may want your code to run on Linux or, god forbid, a Mac! Instead, you could use this and be safe.

string folder = GetFolderPath() + Path.DirectorySeparatorChar;

That’ll make sure the slash leans in the correct direction based on the platform. Oh, and the next time I see code to parse a file name from a path, I’m going to slap the developer upside the head and mention this method:

string fileName = Path.GetFileName(fullPath);
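A few sibling helpers on Path cover most of the hand-rolled string parsing I see in the wild (the path below is illustrative):

```csharp
using System;
using System.IO;

class PathHelpersDemo
{
    static void Main()
    {
        string fullPath = "/var/logs/app.log"; // illustrative path

        Console.WriteLine(Path.GetFileName(fullPath));                 // app.log
        Console.WriteLine(Path.GetFileNameWithoutExtension(fullPath)); // app
        Console.WriteLine(Path.GetExtension(fullPath));                // .log
        Console.WriteLine(Path.ChangeExtension(fullPath, ".bak"));     // /var/logs/app.bak
    }
}
```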

System.Web.VirtualPathUtility

Not knowing and using this class is forgivable because it didn’t exist until .NET 2.0. But now that you are reading this, you have no excuse. One great usage is for converting tilde paths to absolute paths.

Note: The tilde (~) character is called the root operator in the context of ASP.NET virtual URLs. A little trivia for you.

For example, if you are running an app in a virtual application named “MyApp”, the following:

string path = VirtualPathUtility.ToAbsolute("~/Controls/Test.ascx");

Sets path to /MyApp/Controls/Test.ascx. No need to write your own ResolveUrl method.

Some other useful methods (there are many more than these listed)…

  • AppendTrailingSlash – Appends a / to the end of the path if one isn’t there already.
  • Combine – Analogous to Path.Combine, but for virtual paths.
  • MakeRelative – Useful for getting the relative path from one directory to another (was it dot dot slash dot dot slash? Or just dot dot slash?)


System.Web.HttpUtility

This class has a wealth of methods for URL/HTML encoding and decoding. A small sampling…

  • HtmlEncode – Converts a string to an HTML-encoded string.
  • HtmlDecode – Decodes an HTML-encoded string.
  • UrlEncode – Converts a string to a URL-encoded string.
  • UrlDecode – Decodes a URL-encoded string.

One particular method that is pretty neat in this class is HtmlAttributeEncode. This method is HtmlEncode’s lazy cousin. It does the minimal work to safely encode a string for an HTML attribute. For example, given this string:

<p>&</p>

HtmlEncode produces: &lt;p&gt;&amp;&lt;/p&gt;

whereas HtmlAttributeEncode produces: &lt;p>&amp;&lt;/p>

In other words, it only encodes left angle brackets, not the right ones.
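A quick side-by-side check of the two methods (HttpUtility here is the standard System.Web class):

```csharp
using System;
using System.Web;

class EncodeDemo
{
    static void Main()
    {
        string input = "<p>&</p>";

        // Full encoding: <, >, and & all become entities.
        Console.WriteLine(HttpUtility.HtmlEncode(input));          // &lt;p&gt;&amp;&lt;/p&gt;

        // Minimal encoding for attribute values: > is left alone.
        Console.WriteLine(HttpUtility.HtmlAttributeEncode(input)); // &lt;p>&amp;&lt;/p>
    }
}
```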


System.Environment

This class contains a wealth of information about the current environment in which your code is executing. You can get access to the MachineName, the CommandLine, etc…

However, the one property I would like to get developers to use is a simple one:

//Instead of this
string s = "Blah\r\n";
//do this
string s = "Blah" + Environment.NewLine;

Again, this falls under the case that your code might actually run on a different operating system someday. Might as well acquire good habits now.
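Environment.NewLine also composes nicely with string.Join when you’re building multi-line text:

```csharp
using System;

class NewLineDemo
{
    static void Main()
    {
        string[] lines = { "first", "second", "third" };

        // Join with the platform's line terminator instead of hard-coding "\r\n".
        string text = string.Join(Environment.NewLine, lines);

        // Environment.NewLine is "\r\n" on Windows and "\n" on Unix-like systems.
        Console.WriteLine(text);
    }
}
```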

What Classes Am I Missing?

No matter how hard I try, there is no way I could make a complete list. In .NET 3.0, I’d probably add the new TimeZoneInfo class. What classes do you find extremely useful that are not so well known? Or worse, what classes have functionality that you see developers reinventing, rather than using the existing class?


In my last post I mentioned that Subkismet is ready to put a thumping on comment SPAM for your web applications. Unfortunately I didn’t have much in the way of demo code.

Today, I have rectified that situation with a new site: http://subkismet.com/. Currently, this is just a one-page site with demonstrations of the three main spam fighting measures, along with source code.

I am really glad that I created this demo site because I realized my first release of Subkismet was incomplete and didn’t work. However, like Google, I cowardly hid behind the BETA moniker as an excuse. But no longer, everything is now working and the proof is in the demo.

If you download the latest source code, you’ll see that I’ve included the source code for http://subkismet.com/ as a separate web application project.

As we add new spam fighting kung fu to the library, we’ll keep the demo site updated as a proof that the code actually works.


Update: I’ve created a new NuGet Package for Subkismet (Package Id is “subkismet”) which will make it much easier to include this in your own projects.

It’s been a short break from blogging, but I’m ready to get back to writing about Cody, I mean code!

My philosophy towards Open Source Software is that the more sharing that goes on between projects, the better off for everyone. As my friend Micah likes to say, A rising tide lifts all boats.

Towards that end, I’ve tried to structure Subtext as much as possible into distinct reusable libraries. The danger in that, of course, is the specter of premature generalization.

I haven’t always been successful at avoiding premature generalization, which has led me to focus on consolidating code into fewer assemblies rather than more. My focus now is to let actual reuse guide when code gets pulled into its own library.

However, there is some useful reusable code I’ve written that is already in use by many others in the wild. This is code included in Subtext as part of its defense system against comment spam. For example:

I contributed the Akismet code to the DasBlog team who I am sure have made adjustments specific to their blog engine. The challenge I face now is how do I get any improvements they may have made back into my own implementation?

To answer that, I created the Subkismet project. It’s more than just an Akismet client for .NET; it’s a library of spam-squashing code meant to be useful to developers building web applications that require user input, such as blogs, forums, etc…

So far it has the three main features I mentioned, but these alone go a long way to beating comment spam. In the future, I hope to incorporate even more tricks for beating comment spam as part of this library.

Hopefully I can convince DasBlog (and others such as BlogEngine.NET and ScribeSonic) to switch to Subkismet for their comment spam fighting and help me craft a great API useful to many. This falls in line with my goal to have Subtext be an incubator for useful open source library code that other projects will want to take advantage of.

What’s With The Name?

I thought I should just use a nonsensical word that’s a play off of Subtext and Akismet. Besides, the domain name was available (not yet pointing anywhere).


I’ve decided to host Subkismet on CodePlex, but with grave trepidations. Not too long ago, they had a major server issue and lost the source code for the .NET Identicon Handler project I started with Jeff Atwood and Jon Galloway.

Fortunately I had the source code on my machine so I was not terribly affected, but this is a serious blow to my confidence in their service. However, I do believe that CodePlex is great for small open source projects (though not yet convinced for large ones like Subtext) and I like their issue voting and wiki.

I’ll give them one more chance to impress me. Besides, this allows me to really try out their Subversion bridge when they release it.

Release Schedule

I’ve currently prepared a BETA release in order to get people using it and to provide feedback. It should be stable code as I pulled it from Subtext and cleaned it up a bit so it could be reused by others.

However, my next step is to refactor Subtext to reference this library and see if any API usability issues come up. If you implement it yourself, please let me know if you have any suggestions for improvements.

Once I complete the refactoring and convince others to use it and provide feedback, I’ll create a 1.0 release.

Please try out the latest release and give me feedback!


A while back I mentioned the beginning of phase 1 of my total world domination plans. This morning at 3:55 AM, phase 1 is officially complete with the birth of our son, Cody Yokoyama Haack, all seven pounds and fourteen ounces of him.

First, the little dictator is ready to rule our household. Later, the world! (click to see larger)


Actually, our affectionate nickname for Cody is “Little Thug” because of the skull cap they gave him and for the way he likes to mad dog us. Here’s a short little video to demonstrate.

Of course, he may have good reason to look upset at us, given these incriminating photos.

074 Cody
077 Cody

We brought Cody home the same day that he was born. Cody’s momma is doing just fine. From start to finish, the whole process took around 9 hours. It was fast, but furious. Fortunately, there were no complications and my wife was able to deliver without the aid of any drugs, which was her goal. Let’s just say I am in awe of her because I was trying to get pain killers and I was just holding her hand.

It was a long night, but an amazing experience. The little guy is a total trooper. When they pricked his heel to draw blood for various tests, he grumbled a bit, but didn’t end up wailing as I expected. Of course, thugs don’t wail.

There are more photos on flickr.

Technorati tags: Cody


One thing that never gets old is when someone visits me and asks to check some email on my computer.

I always smile and gracefully hand over the keyboard and watch as nothing but gibberish pours onto the screen. This totally freaked out Jeff Atwood (ok, freak may be too strong a word, but allow me some dramatic license) as he watched in disbelief as I demonstrated my ability to tap on all the wrong keys, but see the right words show up on the screen.

It’s my dirty little secret—I type in Dvorak.


What keeps it interesting is that I type on a physical QWERTY keyboard, but use the Dvorak keyboard layout by switching my Input Language setting within the Regional and Language Control Panel applet. This explains why it looks like I tap the wrong keys if you watch me type.


I switched to Dvorak over five years ago as one of several desperate measures I took to attempt to reduce the pain of coding. As I wrote recently, your fingers travel roughly 16 miles in an average eight-hour workday.

At the time, I believed the prevailing idea that the QWERTY layout was specifically designed to reduce typing speed because typewriters used to jam if people typed too quickly. As the Freakonomics blog points out, there’s a continuing dispute over whether this is urban legend or in fact true.

The theory behind Dvorak is that the keys are supposed to be arranged in such a way that letters that occur with higher frequency in the English language are on the home row and under stronger fingers. For example, the letter e is under the left middle finger.

The goal is that your fingers would travel less during the course of typing, ideally reducing occurrences of repetitive stress injury, while also increasing typing speed and comfort.

Does it succeed? Hard to say. Personally, I think there’s a law of unintended consequences at work here. If you can type faster with this layout, and you still work 8 hours a day, doesn’t that mean that your fingers might end up traveling just as much?

At the very least it does mean your fingers pound on more keys during the day. So if your keyboard doesn’t have a light touch, it could end up being more painful. I use the GoldTouch keyboard which I find to have a light touch, but not too light. In the end, what probably helped more than switching to Dvorak was that I started taking more breaks to stretch. Typing less is a sure way to reduce the stress of typing.

While learning Dvorak, I had to give in to it totally, which meant my productivity dived for a short while. Fortunately, it was a slow time at work, and it only took me a couple of weeks to get up to a decent speed.

Since typing is all about muscle memory, one thing I experimented with was trying to type in QWERTY on Macs, and Dvorak on Windows. I wondered if it would be possible for me to associate QWERTY with the Mac and retain my ability to type in QWERTY when on a Mac.

That didn’t work.

Well, it kinda worked. I can still touch type QWERTY, but at about 60% of my former speed.


As I mentioned before, I am the Product Manager for the Koders.com website. I am responsible for the search engine, the source code index, the forums, the blog and the Content Management System.

My counterpart at Koders, Ben McDonald, is responsible for our client editions of the search engine which include the Enterprise Edition and the recently announced Pro Edition, which makes him one very busy fella.

He just recently blogged about a private beta we have going on for Pro Edition. The Pro Edition allows you to index and search code on your desktop. As far as I know, the initial beta only searches the file system, but future versions might index source control repositories just like the Enterprise Edition.

If you’re interested in trying it out and providing feedback, go ahead and sign up here.

The interesting part about this product for me is the tech:

Oh yeah, in case any of you are wondering we ended up with the following responses to the initial requirements laid out before us:

  • 6.2 Mb installer
  • SQLite embedded database
  • Cassini Personal Web Server from Microsoft
  • To make sure developers have something to search immediately after installation, we’ve bundled the indexed source code of our implementation of an Amazon A9 OpenSearch client, broken down into two projects, the business layer and the web UI layer

I believe that’s a heavily customized version of the Cassini web server. The product works similarly to how Google Desktop works in that you search via the browser. This allows you to let other developers search code on your machine, should you so choose.

So what makes the Pro Edition different from just using a normal Desktop search? I’ll let Ben answer that in more detail. But I’m betting he’ll talk about how we provide some degree of semantic analysis of the code, allowing you to search specifically for a method or class for example.


Microsoft recently released Windows Live Writer Beta 2, the long awaited next version of their blog editing tool. Although there are a few quirks with WLW, I find the user interface and usability to be really nice. They make great use of the right sidebar panel.

In their latest release, they’ve introduced a few more extensibility points including a Manifest, which allows you to have a branded weblog panel. More than just for cosmetic reasons, this will help those who manage more than one blog see in an instant which blog they are editing.

It looks like WLW is positioning to be the rich client interface into your blog, a direction I like.

Barry Dorrans just posted a manifest on his blog for Subtext based on the one Tim Heuer deployed to his own blog.

You can download the manifest from Barry’s blog. He also committed it to our Subversion repository, so it will be included in the next version of Subtext.

Subtext remains committed to providing a great experience when using Windows Live Writer with a Subtext blog. We were quick to support Really Simple Discovery (RSD) and the newMediaObject method of the MetaWeblog API. We’ll work hard on providing first class support for adding and deleting categories.

I have an open question for the WLW team. Is there a community officer I should be in communication with to get a heads up on future features that might require changes to Subtext in order to provide first class support? I am wondering if this information is available somewhere and maybe I just missed it. I would love to provide advance feedback and that sort of thing if you are interested. Consider it an open offer. ;)

Now if we could just get WLW to support search and replace in their HTML editor, I’d be much happier.


I don’t know about you, but every company I’ve ever worked at had a Fort Knox like system in place for deploying code to the production server. Typically, deployment looks something like this (some with more steps, some with less):

  1. Grab the labeled (tagged) code from the version control system.
  2. Ensure that the application compiles.
  3. Another developer other than the author must review the code on some level and sign off on it.
  4. Automated unit tests must pass.
  5. If they exist, the automated system and integration tests must pass.
  6. The QA team tests the application and approves it.
  7. The deployment engineer (typically a developer or QA person) very carefully deploys the application attempting to avoid any downtime.

Interestingly enough, many of these companies didn’t have the same procedures for other documents and systems used to run the business. For example, one could in theory login to their CMS system and change the home page of the site to contain every expletive in the book just for fun and it would show up immediately.

There are a lot of people who want to make it so that the business user can write code by connecting legos. The typical examples include dynamic rules engines and their ilk. Yeah, let’s let Joe the finance guy tweak the rules on the rules engine on the fly by drawing lines and connecting boxes.

The problem with approaches like this is that it ignores the fact that the effect of these changes is no different than writing code, but often with far fewer checks on quality before the changes get deployed to where they can do damage.

These systems often are lacking:

  • Version Control
  • Backup and Restore procedures
  • Quality Assurance testing
  • Formal Deployment procedures

A recent report (via Reddit) illustrates this point with a list of news stories on how errors in spreadsheets have cost businesses millions of dollars. A couple of telling snippets (emphasis mine). This one on the lack of version control and auditing:

From http://www.namibian.com.na/2005/October/national/05E0F49179.html: The Agricultural Bank of Namibia (Agribank) is teetering on the edge of bankruptcy. “There is no system of control on which the auditors can rely nor were there satisfactory auditing procedures that could be performed to obtain reasonable assurance that the provision for doubtful debts is adequate and valid,” note the auditors. Auditors found that its loan amount to the now defunct !Uri !Khubis abattoir changed from N$59,5 million on one spreadsheet to N$50,4 million on another, while the total arrears was decreased from a whopping N$9,8 million to only N$710 000.

And this one on the lack of training and Quality Assurance.

Only a matter of time before the spreadsheets hit the fan

  • Telegraph (UK), 30 June 2005: In his paper “The importance and criticality of spreadsheets in the City of London” presented to Eusprig 2005, Grenville Croll of Frontline Systems (UK) Ltd. reported on a survey of 23 professionals in the £13Bn financial services sector. The interviewees said that spreadsheets were pervasive, and many were key and critical. There is almost no spreadsheet software quality assurance and people who create or modify spreadsheets are almost entirely self-taught. Two each disclosed a recent instance where material spreadsheet error had led to adverse effects involving many tens of millions of pounds.

The solution is not to make programming more like the way business users work now. The solution is to apply the lessons learned from software development into other business processes.

In the same way that companies rely on heavily trained developers and rigid deployment procedures in place for code, companies should make sure their business people are just as heavily trained in the software they use on a day to day basis. After all, million dollar decisions are based on the content of these systems daily.

For example, spreadsheets should be version controlled. Changes to rules within a rules engine should have to pass some automated tests and manual QA before being deployed. All of these should be peer reviewed.


Ok, this will be my last post on Twitter for the time being. My last two posts on the subject pointed out flaws with it, so I thought I’d follow up with something positive.

A lot of people just don’t get Twitter, dismissing it as hype. I was firmly in that camp until I tried it, and now am a total Twit (Twitter addict). This morning as I stepped into the shower, I was wondering why Twitter has such a hold. Jeff Atwood calls it the combination of blogging and IM. But I had this nagging feeling that I’ve used something like Twitter before. Then it hit me.

Twitter is no different from a chat room, but with better usability.

Searching the web, I found I’m not the first to compare Twitter to chat or IRC. But lets look at what problems with IRC and Chat that Twitter solves.

  • The Firewall Issue
  • The Channel Overload Issue
  • The Signal to Noise Ratio and Trolling
  • The conversation persistence problem

The Firewall Issue

Unlike IRC and many chat rooms back in the day, Twitter runs over port 80. Thus, it is less likely to be blocked by corporate and personal firewalls. The target here is ubiquity, and getting through the firewall is an important factor.

Channel Overload

I remember when I first started using IRC and then various chat rooms, I ran into the question of which, of the thousands and thousands of channels, should I join? In this case, too many choices causes a headache.

Twitter solves this problem by giving you one choice. Channel You. Public timeline aside, you have full control of who gets to see your tweets and whose tweets you wish to see. Twitter is a completely customized chat room.

Signal to Noise Ratio and Trolling

The complete customization I just mentioned also helps solve the trolling problem. If someone is being a nuisance, remove them from your friends list. You can also allow only your friends to see your tweets, if you wish.

The Conversation Persistence Problem

I remember jumping into a chat room in the middle of a conversation and wondering, what the hell are they talking about? The fact that Twitter keeps an ongoing archive makes it easy to back up and get caught up to where everyone else is in the conversation.

Now I know that over time, IRC and other Chat clients solved many of these same problems in one form or another. Twitter has solved them all in a compelling manner. It has the immediacy of IM with the public facing aspects of a blog, and the social interaction of a chat room.


Jamie Cansdale recently wrote about some legal troubles he has with Microsoft. We were in the middle of an email correspondence on an unrelated topic when he told me about the new chapter in this long saga.

Jamie posted the entire email history and the three (so far) letters received from Microsoft’s legal team. Rather than jump to any conclusions, let’s dig into this a bit.

The Claim

First, let’s examine the claim. In the first letter from OLSWANG, the legal team representing Microsoft, the portion of the EULA for the Visual Studio Express suite of products that Jamie is allegedly in violation of is the following:

…you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways… You may not work around any technical limitations in the software.

The letter continues with…

Your product enables users of Express to access Visual Studio functionality that has been de-activated in Express and to add new features of your own design to the product, thereby circumventing the measures put in place to prevent these scenarios.

What Technical Limitation?

The interesting thing about all this is that nowhere in all the emails does Microsoft specify which “technical limitation” Jamie is supposedly working around. Exactly what functionality has been “de-activated”?

So I decided to take a look around to see what I could find. The best I could find is this feature comparison chart.

In the row with the heading Extensibility, it says this about the Express products:

Use 3rd party controls and content. No Macros, Add-ins or Packages

So 3rd party controls and content are enabled, but Macros and Add-ins or packages are not enabled in this product.

When I pointed this out to Jamie, he noted that this is not true. If the Express editions could not support Add-ins, how could Microsoft release a Reporting Add-in for Microsoft Visual Web Developer 2005 Express or the Popfly for Visual Studio Express Users?

I imagine that Microsoft is probably not bound by their own EULA and would be allowed to work around technical limitations in their own product to create these Add-Ins. But another potential interpretation is that creating these add-ins is possible and that there is no technical limitation in the Express products.

The problem here is how you define a technical limitation. It’s obvious that the Express product did not remove support for add-ins in the compiled code. In fact, it seems it didn’t remove add-in support at all; it just didn’t provide a convenient manner for registering add-ins. Is an omission the same thing as a technical limitation?

Jamie sent me some code samples to demonstrate that he is in fact only using public well documented APIs to get TestDriven.NET to work to show up in the Express menus. He’s not decompiling the code, using any crazy hacks or workarounds. It’s very simple straightforward code.

The only thing he does which might be interpreted as questionable is to write a specific registry setting so that the TestDriven.NET menu options show up within Visual Studio Express.

So it seems that supporting Add-Ins does not require any decompilation. All it requires is adding a specific registry entry. Does that violate the EULA? Well whether I think so or not doesn’t really matter. I’m not a lawyer and I’m pretty sure Microsoft’s lawyers would have no problem convincing a judge that this is the case.

I would hope that we should have a higher standard for technical limitation than something so obvious as a registry setting. If rooting around the registry can be considered decompilation and violate EULAs, we’ve got issues.

The Kicker

Also, if that is the case, then you have to wonder about this section in Microsoft’s letter to Jamie, which I glossed over until I noticed Leon Bambrick mention it:

Thank you for not registering your project extender during installation and turning off your hacks by default. It appears that by setting a registry key your hacks can still be enabled. When do you plan to remove the Visual Studio express hacks, including your addin activator, from your product.

This is interesting on a couple levels.

First, if the lack of a registry entry is sufficient to count as a “technical limitation” and “de-activation” of a feature in Visual Studio Express, why doesn’t that standard also apply to TestDriven.NET? Having removed the registry setting that lets TD.NET work in Express, hasn’t Jamie complied?

Second, take a look at this snippet from TestDriven.NET’s EULA

Except as expressly permitted in this Agreement, Licensee shall not, and shall not permit others to: …

(ii) reverse engineer, decompile, disassemble or otherwise reduce the Software to source code form;

…

(v) use the Software in any manner not expressly authorised by this Agreement.

It seems that by Microsoft’s own logic of what counts as a license violation, Microsoft itself has committed such a violation by reverse engineering TestDriven.NET to enable a feature that was purposefully disabled via a registry hack.

The Heart Of The Matter

All this legal posturing and gamesmanship aside, let’s get to the heart of the matter. So it may well be that Microsoft is in its legal right (I’m no lawyer, so I don’t know for sure, but stick with me here). Hooray for you Microsoft. Being in the right is nice, but knowing when to exercise that right is a true sign of wisdom. Is this the time to exercise that right?

You’ve recently given yourself one black eye in the developer community. Are you prepared to give yourself yet another and continue to erode your reputation?

The justification you give is that products like this, which enable disabled features in Visual Studio Express (a dubious claim), will hurt sales of the full featured Visual Studio.NET. Really?! If I were you, I’d worry more about the loss in sales represented by the potential exodus of developers leaving due to your heavy handed tactics and missteps.


With all this talk of rockstar programmers, I like Ron Evans’ take when he says, “I Would Rather Be A Jazz Programmer”.

Here are some differences, as I see them:


Rock stars:

  • One big hit song, then disappear
  • Embarrass themselves as they age
  • Claim they wrote the song
  • Keep trying to get back that sound they used to have
  • Get back together with the old band after unsuccessful solo careers
  • Want to marry a model and have a movie cameo
  • Won’t play without a contract and advance payment

Jazz musicians:

  • One big hit, and they become an influence
  • Get cooler with age
  • Claim the song is just a cool arrangement of a standard
  • Keep trying to produce a new sound
  • Record with a variety of musicians over time
  • Want to become a professor at the Berklee College of Music
  • Jam on the street corner just because they feel like it

While I like some rock bands such as U2 and I like some Jazz, my favorite music falls under the umbrella of Electronica including, but not limited to, BreakBeat, Trance, House, Trip-Hop, Electro, Jungle and Downtempo.

It occurred to me that I would rather be a DJ programmer than any of these.


Here are some reasons why:

  • Never suffer from the Not Invented Here syndrome.
  • Are masters of re-use, to a fault.
  • The best do produce new music from scratch when they see a specific need that isn’t being addressed by already existing music.
  • Are great at mash-ups and integrating parts of multiple songs to create a new and more interesting song, aka a remix.
  • The best get paid a lot for a couple of hours of easy work. How hard is it to spin some vinyl, CDs, or audio files from a laptop as some do now?

At the very least, I would like to be paid like a DJ. Not the guy at your local dive, I’m talking about the big names who get paid $50K for three hours of work. What kind of music reflects your coding style?


I don’t know about you, but I find it a pain to call stored procedures from code. Either I end up writing way too much code to specify each SqlParameter explicitly, or I use a tool like Microsoft’s Data Access Application Block’s SqlHelper class to pass in the parameter values, which requires me to remember the correct parameter order (it actually supports both methods of calling a stored procedure). What a pain!

What I need is a strongly typed stored procedure. Something that’ll tell me which parameters to pass and will break at compile time if the parameters change in some way.

Subsonic can help with that. In general, Subsonic is most productive when combining its code generation with its dynamic query engine and Active Record. But sometimes you’re stuck with stored procedures and want to make the best of it. Subsonic, via the sonic.exe command line tool, can generate strongly typed stored procedure wrappers, saving you from writing a lot of boilerplate code.

I recently finished updating Subtext to call all its stored procedures using Subsonic generated code. This post will walk you through setting up a toolbar button in Visual Studio.NET 2005 to do this, using Subtext as the example. This pretty much follows the example that Rob set in this post.

First, I made sure to put the latest and greatest sonic.exe and SubSonic.dll in a known location. In Subtext, this is the dependencies folder, which on my machine is located:


The next step is to create a new External Tool button by selecting External Tools…from the Tools Menu.


This will bring up the following dialog.

External Tools

I filled in the fields like so:

  • Title: Subtext Subsonic SPs
  • Command: D:\Projects\Subtext\trunk\SubtextSolution\Dependencies\sonic.exe
  • Arguments: generatesps /config “$(SolutionDir)Subtext.Web” /out “$(SolutionDir)Subtext.Framework\Data\Generated”
  • Initial Directory: $(SolutionDir)

This tells Sonic.exe to find the Subsonic configuration within the Subtext.Web folder, but generate the stored procedure wrappers in a subfolder of the Subtext.Framework project.

With that in place, I then created a new Toolbar by selecting Customize from the Tools menu which brings up the following dialog.


Click on the New… button to create a new toolbar.


I called mine Subsonic. This adds a new empty toolbar to VS.NET. Now all I need to do is add my Subtext Stored Procedures button to it. Just click on the Commands tab.


Unfortunately, the External Tools command is not named in this dialog. However, since I know the first command is the one I want (it’s the same order as it is listed in the Tools Menu), I drag External Command 1 to my new Subsonic toolbar.

Subtext SPs

So now when I make a change to a stored procedure, or add/delete a stored procedure, I can just click on that button to regenerate the code that calls my stored procedures.


In a recent post, I compared the expressiveness of the Ruby style of writing code to the current C# style of writing code. I then went on and demonstrated one approach to achieving something close to Ruby’s expressiveness using Extension Methods in C# 3.0.

The discussion focused on how well each code sample expresses the intent of the author. Let’s look at the comparison.

Ruby:

20.minutes.ago

Classic C#:

DateTime.Now.Subtract(TimeSpan.FromMinutes(20));

C# 3.0 using Extension Methods:

20.Minutes().Ago();
It seems obvious to me that the C# 3.0 example is more expressive than the classic C# approach, but not everyone agrees. Several people have said something to the effect of:

Yeah, that’s great for those who speak English.

Another person mentioned that the Ruby style of code panders to English speakers. Really?! Really?!

Yet somehow, the classic C# example doesn’t pander to English speakers? In the Ruby example, I count 2 words in English: Minutes and Ago. In the classic C# example, I count 8 words in English: Date, Time, Now, Subtract, Time, Span, From, Minutes (decomposing the class names into their constituent words via Pascal Casing rules).

Not to mention that all of these code samples flow left-to-right, unlike languages such as Hebrew and Arabic which flow right to left.

Seems to me that if anything, the classic C# example panders just as much if not more to the English speaking world than the Ruby example.

One explanation given for this statement is the following:

DateTime.Now.Subtract(TimeSpan.FromMinutes(20)); follows a common convention across languages, a hierarchical OOP syntax that makes sense regardless of your native tongue

I don’t get it. How is 20.minutes.ago not hierarchical and object oriented, when we wouldn’t even take a second look at DateTime.Now.Day or 20.ToString(), both of which are currently in C# and familiar to developers?
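To ground that claim, here is a minimal Ruby sketch of how something like 20.minutes.ago can be built by reopening the Integer class. Rails’ ActiveSupport provides the real implementation; the method bodies below are simplified stand-ins of my own, not the library’s actual code.

```ruby
# Reopen Integer to add duration-style methods: the same trick
# that C# 3.0 extension methods approximate.
class Integer
  def minutes
    self * 60          # treat the result as a duration in seconds
  end

  def ago
    Time.now - self    # a Time this many seconds in the past
  end
end

p 20.minutes           # => 1200
twenty_minutes_ago = 20.minutes.ago
```

In other words, 20.minutes.ago is just two ordinary method calls dispatched on ordinary objects, every bit as hierarchical and object oriented as DateTime.Now.Day.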

The key goal in object oriented software is to develop abstractions and work within the domain of those abstractions. That’s the foundation of OO. Working with a Product object and a Customer object, rather than a large set of procedural methods, is what makes it possible to understand a large system.

Let’s look at a typical object oriented code sample found in an OO tutorial:

Customer customer = Load<Customer>(id);
Order order = customer.GetLastOrder();
ShippingProvider shipper = Shipping.Create();

I know I know! This code panders to English! Look at the way it’s written! GetLastOrder()? Shouldn’t that be ConseguirOrdenPasada()?

Keep in mind that this all stems from a discussion about Ruby, a language written by Yukihiro Matsumoto, a Japanese computer scientist.

Now why would a Japanese programmer write a programming language that “panders to English?”

Maybe because the only language in software that is universal is English. It’s just not possible to write a programming language that would be universally expressive in any human language. What might work for a Spanish speaker might be confusing to a Swahili speaker. Not to mention the difficulty in writing a programming language that would read left to right and right to left (Palindrome# anyone?).

Yet we must find common ground for a programming language, so choose a human language we must. For historical reasons, English is that de-facto language. It’s the reason why all the major programming languages have English keywords and English words in their class libraries. It’s why you use the Color class in C# and not the Colour or 색깔 class.

Now I’m not some America-centrist who says this is the way it should be. I’m just saying this is the way it is. Feel free to create a programming language with all its major keywords in another language and see how widely it is adopted. It’s a fact of life. If you’re going to write software, you better learn some degree of English.

In conclusion, yes, 20.minutes.ago does pander to English, but only because all major programming languages pander to English. C# is no exception. In fact, pandering to English is our goal when trying to write readable software.


Are your unit tests a little flat lately? Have they lost their shine and seem a bit directionless? Maybe it’s time to jazz ’em up a bit with the latest release of MbUnit.

Andrew Stopford posted a list of bug fixes, improvements, and new features. The new feature I’m selfishly excited about is the new Attribute that can Extract an Embedded Resource. Finally, I have a patch submitted to MbUnit! :)

MbUnit has changed the way I write unit tests. Here’s a list of a few of my posts on MbUnit.

Now go and robustify your application.