code, tdd comments suggest edit

Most of the time when I’m testing my code, I only test it using the en-US culture since, …well…, I speak English and I live in the U.S. Isn’t the U.S. the only country that matters anyway? ;)

Fortunately, there are Subtext team members living in other countries ready to smack such nonsensical thoughts from my head and keep me honest about Localization and Internationalization issues.

Simone, who is an Italian living in New Zealand, pointed out that a particular unit test that works on my machine always fails on his machine. Here’s the test.

[RowTest]
[Row("4/12/2006", "04/12/2006 00:00:00 AM")]
[Row("20070123T120102", "01/23/2007 12:01:02 PM")]
[Row("12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
[Row("Wed, 12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
public void CanParseUnknownFormatUTC(string received, string expected)
{
  DateTime expectedDate = DateTimeHelper.ParseUnknownFormatUTC(received);
  Assert.AreEqual(expected, expectedDate.ToString("MM/dd/yyyy HH:mm:ss tt"));
}

The method being tested simply takes in a date string in an unknown format and performs a few heuristics in order to parse the date.
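
For the curious, the heuristics look roughly like this. This is a simplified sketch, not the actual DateTimeHelper implementation, and the exact format strings are my guess (it needs System.Globalization):

public static DateTime ParseUnknownFormatUTC(string dateString)
{
  // Try a few exact formats first (e.g. the ISO 8601 basic form used in feeds).
  string[] exactFormats = { "yyyyMMdd'T'HHmmss", "yyyyMMdd" };
  DateTime result;
  if (DateTime.TryParseExact(dateString, exactFormats,
    CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal, out result))
  {
    return result;
  }

  // Fall back to a general parse, which handles RFC 1123 style dates like
  // "Wed, 12 Apr 2006 06:59:33 GMT" as well as plain dates like "4/12/2006".
  return DateTime.Parse(dateString, CultureInfo.InvariantCulture,
    DateTimeStyles.AdjustToUniversal);
}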

The way I test this method is very U.S. centric. I call ToString() and then match it to the expected string defined in the Row attributes (I can’t use actual DateTime values in the attributes).

So for the very first row, I expect that date to match 04/12/2006 00:00:00 AM. But when Simo runs the test over there in New Zealand, he gets 12/04/2006 00:00:00 a.m.

Makes you wonder how anyone over there can keep an appointment with the month and date all backwards like that. ;)

Testing In Another Culture

At this point, I start thinking of convincing my wife to take a vacation in New Zealand so I can test this method properly. Hmmm… that’s probably not going to fly, with the newborn and all.

Another option is to go into my regional settings and change my locale to test temporarily, but that sort of defeats the purpose of automated tests once I change it back. What to do?

MbUnit to the rescue!

Once again, I discover a feature I hadn’t known about in MbUnit that solves this problem (Jeff and Jon, feel free to snicker).

Looking at the MbUnit TestDecorators page, I noticed there is a [MultipleCultureAttribute] decorator! Hmmm, I bet that could end up being useful.

Unfortunately, at the time, this decorator was not documented (I’ve since documented it), so I looked up the code on Koders real quick to see how it works and saw that I simply need to pass in a comma-delimited string of cultures. This allows me to run a single test multiple times, once for each culture listed.

Here is the updated test with my code correction.

[RowTest]
[Row("4/12/2006", "04/12/2006 00:00:00 AM")]
[Row("20070123T120102", "01/23/2007 12:01:02 PM")]
[Row("12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
[Row("Wed, 12 Apr 2006 06:59:33 GMT", "04/12/2006 06:59:33 AM")]
[MultipleCulture("en-US,en-NZ,it-IT")]
public void CanParseUnknownFormatUTC(string received, string expected)
{
  DateTime expectedDate = DateTimeHelper.ParseUnknownFormatUTC(received);
  Assert.AreEqual(DateTime.ParseExact(expected
    , "MM/dd/yyyy HH:mm:ss tt"
    , new CultureInfo("en-US")), expectedDate);
}

One cool note about decorators like this in MbUnit is the way they compose with the RowTest’s Row attributes. For example, in the above test, the test method will get called once per culture per Row, for a grand total of 12 times.

So now my friends in faraway places will have the pleasure of unit tests that pass in their respective locales and I can feel like a better citizen of the world.

comments suggest edit

System.IO.Path

How often do you see code like this to create a file path?

public string GetFullPath(string fileName)
{
  string folder = ConfigurationManager.AppSettings["somefolder"];
  return folder + fileName;
}

Code like this drives me crazy because it is so prone to error. For example, when you set the folder setting, you have to remember to make sure it ends with a slash. Having too many things to remember makes this setup fragile.

Sure, you could write some code to ensure that the folder has an ending slash, but I’d rather let someone else write that code. For example, Microsoft.

The .NET Framework is huge, so it’s understandable to miss out on some of the useful utility classes in there that make your life as a developer easier. Path.Combine is one of them:

public string GetFullPath(string filename)
{
  string folder = ConfigurationManager.AppSettings["somefolder"];
  return System.IO.Path.Combine(folder, filename);
}

The Path class is certainly well known and probably well used, but is still one of those classes that developers seem to never use to its full potential. For example, how often do you see this?

//make sure folder path ends with slash
string folder = GetFolderPath() + @"\";

Well that’s nice for Windows machines, but our world is changing and someday, you may want your code to run on Linux or, god forbid, a Mac! Instead, you could use this and be safe.

string folder = GetFolderPath() + Path.DirectorySeparatorChar;

That’ll make sure the slash leans in the correct direction based on the platform. Oh, and the next time I see code to parse a file name from a path, I’m going to slap the developer upside the head and mention this method:

string fileName = Path.GetFileName(fullPath);

System.Web.VirtualPathUtility

Not knowing and using this class is forgivable because it didn’t exist until .NET 2.0. But now that you are reading this, you have no excuse. One great usage is for converting tilde paths to absolute paths.

Note: The tilde (~) character is called the root operator in the context of ASP.NET virtual URLs. A little trivia for you.

For example, if you are running an app in a virtual application named “MyApp”, the following:

string path = VirtualPathUtility.ToAbsolute("~/Controls/Test.ascx");

Sets path to /MyApp/Controls/Test.ascx. No need to write your own ResolveUrl method.

Some other useful methods (there are many more than these listed)…

  • AppendTrailingSlash: Appends a / to the end of the path if none exists already.
  • Combine: Analogous to Path.Combine, but for URLs.
  • MakeRelative: Useful for getting the relative path from one directory to another (was it dot dot slash dot dot slash? Or just dot dot slash?)
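
To make those concrete, here is roughly what each returns for an application rooted at /MyApp (results from memory, so treat them as approximate):

string withSlash = VirtualPathUtility.AppendTrailingSlash("~/Controls");   // "~/Controls/"
string combined = VirtualPathUtility.Combine("~/Controls/", "Test.ascx");  // "~/Controls/Test.ascx"
string relative = VirtualPathUtility.MakeRelative("~/Controls/Test.ascx", "~/Images/logo.gif");
// relative ends up being something like "../Images/logo.gif"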

System.Web.HttpUtility

This class has a wealth of methods for URL/HTML encoding and decoding. A small sampling…

  • HtmlEncode: Converts a string to an HTML encoded string.
  • HtmlDecode: Decodes an HTML encoded string.
  • UrlEncode: Converts a string to a URL encoded string.
  • UrlDecode: Decodes a URL encoded string.

One particular method that is pretty neat in this class is HtmlAttributeEncode. This method is HtmlEncode’s lazy cousin. It does the minimal work to safely encode a string for HTML. For example, given this string:

<p>&</p>

HtmlEncode produces: &lt;p&gt;&amp;&lt;/p&gt;

whereas HtmlAttributeEncode produces: &lt;p>&amp;&lt;/p>

In other words, it only encodes left angle brackets, not the right ones.
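
Here’s a quick way to see the difference side by side (the encoded output is shown in the comments):

string html = "<p>&</p>";
Console.WriteLine(HttpUtility.HtmlEncode(html));          // &lt;p&gt;&amp;&lt;/p&gt;
Console.WriteLine(HttpUtility.HtmlAttributeEncode(html)); // &lt;p>&amp;&lt;/p>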

System.Environment

This class contains a wealth of information about the current environment in which your code is executing. You can get access to the MachineName, the CommandLine, etc…
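
A few of the members I find myself reaching for:

Console.WriteLine(Environment.MachineName);    // name of the current machine
Console.WriteLine(Environment.CommandLine);    // command line used to launch the process
Console.WriteLine(Environment.OSVersion);      // operating system and version
Console.WriteLine(Environment.ProcessorCount); // number of processors (new in .NET 2.0)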

However, the one property I would like to get developers to use is a simple one:

//Instead of this
string s = "Blah\r\n";
//do this
string s = "Blah" + Environment.NewLine;

Again, this falls under the case that your code might actually run on a different operating system someday. Might as well acquire good habits now.

What Classes Am I Missing?

No matter how hard I try, there is no way that I could make a complete list. What classes do you find extremely useful that are not so well known? Or worse, what classes have functionality that you see developers recreating from scratch, rather than using the existing class? In .NET 3.0, I’d probably add the new TimeZoneInfo class to the list.
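
Here’s a rough sketch of how I expect TimeZoneInfo to be used, based on the preview documentation, so treat the exact names as provisional:

// Convert a UTC timestamp to New Zealand time so Simone's dates look right.
TimeZoneInfo nz = TimeZoneInfo.FindSystemTimeZoneById("New Zealand Standard Time");
DateTime nzTime = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, nz);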

comments suggest edit

In my last post I mentioned that Subkismet is ready to put a thumping on comment SPAM for your web applications. Unfortunately I didn’t have much in the way of demo code.

Today, I have rectified that situation with a new site: http://subkismet.com/. Currently, this is just a one-page site with demonstrations of the three main spam fighting measures, along with source code.

I am really glad that I created this demo site because I realized my first release of Subkismet was incomplete and didn’t work. However, like Google, I cowardly hid behind the BETA moniker as an excuse. But no longer, everything is now working and the proof is in the demo.

If you download the latest source code, you’ll see that I’ve included the source code for http://subkismet.com/ as a separate web application project.

As we add new spam fighting kung fu to the library, we’ll keep the demo site updated as a proof that the code actually works.

comments suggest edit

Update: I’ve created a new NuGet Package for Subkismet (Package Id is “subkismet”) which will make it much easier to include this in your own projects.

It’s been a short break from blogging, but I’m ready to get back to writing about Cody, I mean code!

My philosophy towards Open Source Software is that the more sharing that goes on between projects, the better off for everyone. As my friend Micah likes to say, A rising tide lifts all boats.

Towards that end, I’ve tried to structure Subtext as much as possible into distinct reusable libraries. The danger in that, of course, is the specter of premature generalization.

I haven’t always been successful at avoiding premature generalization, which has led me to focus on consolidating code into fewer assemblies rather than more. My focus now is to let actual reuse guide when code gets pulled into its own library.

However, there is some useful reusable code I’ve written that is already in use by many others in the wild. This is code included in Subtext as part of its defense system against comment spam. For example:

I contributed the Akismet code to the DasBlog team who I am sure have made adjustments specific to their blog engine. The challenge I face now is how do I get any improvements they may have made back into my own implementation?

To answer that, I created the Subkismet project. It’s more than just an Akismet client for .NET, it’s a library of SPAM squashing code meant to be useful to developers who are building web applications that require user input such as Blogs, Forums, etc…

So far it has the three main features I mentioned, but these alone go a long way to beating comment SPAM. In the future, I hope to incorporate even more tricks for beating comment spam as part of this library.

Hopefully I can convince DasBlog (and others such as BlogEngine.NET and ScribeSonic) to switch to Subkismet for their comment spam fighting support and help me craft a great API useful to many. This falls in line with my goal to have Subtext be an incubator for useful open source library code that other projects will want to take advantage of.

What’s With The Name?

I thought I should just use a nonsensical word that’s a play off of Subtext and Akismet. Besides, the domain name was available (not yet pointing anywhere).

Hosting

I’ve decided to host Subkismet on CodePlex, but with grave trepidations. Not too long ago, they had a major server issue and lost the source code for the .NET Identicon Handler project I started with Jeff Atwood and Jon Galloway.

Fortunately I had the source code on my machine so I was not terribly affected, but this is a serious blow to my confidence in their service. However, I do believe that CodePlex is great for small open source projects (though not yet convinced for large ones like Subtext) and I like their issue voting and wiki.

I’ll give them one more chance to impress me. Besides, this allows me to really try out their Subversion bridge when they release it.

Release Schedule

I’ve currently prepared a BETA release in order to get people using it and to provide feedback. It should be stable code as I pulled it from Subtext and cleaned it up a bit so it could be reused by others.

However, my next step is to refactor Subtext to reference this library and see if any API usability issues come up. If you implement it yourself, please let me know if you have any suggestions for improvements.

Once I complete the refactoring and convince others to use it and provide feedback, I’ll create a 1.0 release.

Please try out the latest release and give me feedback!

comments suggest edit

A while back I mentioned the beginning of phase 1 of my total world domination plans. This morning at 3:55 AM, phase 1 is officially complete with the birth of our son, Cody Yokoyama Haack, all seven pounds and fourteen ounces of him.

First, the little dictator is ready to rule our household. Later, the world!


Actually, our affectionate nickname for Cody is “Little Thug” because of the skull cap they gave him and for the way he likes to mad dog us. Here’s a short little video to demonstrate.

Of course, he may have good reason to look upset at us, given these incriminating photos.


We brought Cody home the same day that he was born. Cody’s momma is doing just fine. From start to finish, the whole process took around 9 hours. It was fast, but furious. Fortunately, there were no complications and my wife was able to deliver without the aid of any drugs, which was her goal. Let’s just say I am in awe of her, because I was the one asking for pain killers and I was just holding her hand.

It was a long night, but an amazing experience. The little guy is a total trooper. When they pricked his heel to draw blood for various tests, he grumbled a bit, but didn’t end up wailing as I expected. Of course, thugs don’t wail.

There are more photos on flickr.


comments suggest edit

One thing that never gets old is when someone visits me and asks to check some email on my computer.

I always smile and gracefully hand over the keyboard and watch as nothing but gibberish pours onto the screen. This totally freaked out Jeff Atwood (ok, freak may be too strong a word, but allow me some dramatic license) as he watched in disbelief as I demonstrated my ability to tap on all the wrong keys, but see the right words show up on the screen.

It’s my dirty little secret—I type in Dvorak.


What keeps it interesting is that I type on a physical QWERTY keyboard, but use the Dvorak keyboard layout by switching my Input Language setting within the Regional and Language Control Panel applet. This explains why it looks like I tap the wrong keys if you watch me type.


I switched to Dvorak over five years ago as one of several desperate measures I took to attempt to reduce the pain of coding. As I wrote recently, your fingers travel roughly 16 miles in an average eight-hour workday.

At the time, I believed the prevailing idea that the QWERTY layout was specifically designed to reduce typing speed because typewriters used to jam if people typed too quickly. As the Freakonomics blog points out, there’s a continuing dispute over whether this is urban legend or in fact true.

The theory behind Dvorak is that the keys are supposed to be arranged in such a way that letters that occur with higher frequency in the English language are on the home row and under stronger fingers. For example, the letter e is under the left middle finger.

The goal is that your fingers would travel less during the course of typing, ideally reducing occurrences of repetitive stress injury, while also increasing typing speed and comfort.

Does it succeed? Hard to say. Personally, I think there’s a law of unintended consequences at work here. If you can type faster with this layout, and you still work 8 hours a day, doesn’t that mean that your fingers might end up traveling just as much?

At the very least it does mean your fingers pound on more keys during the day. So if your keyboard doesn’t have a light touch, it could end up being more painful. I use the GoldTouch keyboard which I find to have a light touch, but not too light. In the end, what probably helped more than switching to Dvorak was that I started taking more breaks to stretch. Typing less is a sure way to reduce the stress of typing.

While learning Dvorak, I had to totally give in to it, which meant my productivity took a dive for a short while. Fortunately, it was a slow time at work and it only took me a couple of weeks to get up to a decent speed.

Since typing is all about muscle memory, one thing I experimented with was trying to type in QWERTY on Macs, and Dvorak on Windows. I wondered if it would be possible for me to associate QWERTY with the Mac and retain my ability to type in QWERTY when on a Mac.

That didn’t work.

Well, it kinda worked. I can still touch type QWERTY, but at about 60% of my former speed.

comments suggest edit

As I mentioned before, I am the Product Manager for the Koders.com website. I am responsible for the search engine, the source code index, the forums, the blog and the Content Management System.

My counterpart at Koders, Ben McDonald, is responsible for our client editions of the search engine, which include the Enterprise Edition and the recently announced Pro Edition, which makes him one very busy fella.

He just recently blogged about a private beta we have going on for Pro Edition. The Pro Edition allows you to index and search code on your desktop. As far as I know, the initial beta only searches the file system, but future versions might index source control repositories just like the Enterprise Edition.

If you’re interested in trying it out and providing feedback, go ahead and sign up here.

The interesting part about this product for me is the tech:

Oh yeah, in case any of you are wondering we ended up with the following responses to the initial requirements laid out before us:

  • 6.2 Mb installer
  • SQLite embedded database
  • Cassini Personal Web Server from Microsoft
  • To make sure developers have something to search immediately after installation, we’ve bundled the indexed source code of our implementation of an Amazon A9 OpenSearch client, broken down into two projects, the business layer and the web UI layer

I believe that’s a heavily customized version of the Cassini web server. The product works similarly to how Google Desktop works in that you search via the browser. This allows you to let other developers search code on your machine, should you so choose.

So what makes the Pro Edition different from just using a normal Desktop search? I’ll let Ben answer that in more detail. But I’m betting he’ll talk about how we provide some degree of semantic analysis of the code, allowing you to search specifically for a method or class for example.

comments suggest edit

Microsoft recently released Windows Live Writer Beta 2, the long awaited next version of their blog editing tool. Although there are a few quirks with WLW, I find the user interface and usability to be really nice. They make great use of the right sidebar panel.

In their latest release, they’ve introduced a few more extensibility points including a Manifest, which allows you to have a branded weblog panel. More than just for cosmetic reasons, this will help those who manage more than one blog see in an instant which blog they are editing.

It looks like WLW is positioning to be the rich client interface into your blog, a direction I like.

Barry Dorrans just posted a manifest on his blog for Subtext based on the one Tim Heuer deployed to his own blog.

You can download the manifest from Barry’s blog. He also committed it to our Subversion repository, so it will be included in the next version of Subtext.

Subtext remains committed to providing a great experience when using Windows Live Writer with a Subtext blog. We were quick to support Really Simple Discovery (RSD) and the newMediaObject method of the MetaWeblog API. We’ll work hard on providing first class support for adding and deleting categories.

I have an open question for the WLW team. Is there a community officer I should be in communication with to get a heads up on future features that might require changes to Subtext in order to provide first class support? I am wondering if this information is available somewhere and maybe I just missed it. I would love to provide advance feedback and that sort of thing if you are interested. Consider it an open offer. ;)

Now if we could just get WLW to support search and replace in their HTML editor, I’d be much happier.

comments suggest edit

I don’t know about you, but every company I’ve ever worked at had a Fort Knox-like system in place for deploying code to the production server. Typically, deployment looks something like this (some with more steps, some with fewer):

  1. Grab the labeled (tagged) code from the version control system.
  2. Obviously, ensure that the application compiles.
  3. Another developer other than the author must review the code on some level and sign off on it.
  4. Automated unit tests must pass.
  5. If they exist, the automated system and integration tests must pass.
  6. The QA team tests the application and approves it.
  7. The deployment engineer (typically a developer or QA person) very carefully deploys the application attempting to avoid any downtime.

Interestingly enough, many of these companies didn’t have the same procedures for other documents and systems used to run the business. For example, one could in theory log in to their CMS and change the home page of the site to contain every expletive in the book just for fun, and it would show up immediately.

There are a lot of people who want to make it so that the business user can write code by connecting Legos. The typical examples include dynamic rules engines and their ilk. Yeah, let’s let Joe the finance guy tweak the rules on the rules engine on the fly by drawing lines and connecting boxes.

The problem with approaches like this is that they ignore the fact that the effect of these changes is no different than writing code, but often with far fewer checks on quality before it gets deployed to where it can do damage.

These systems often are lacking:

  • Version Control
  • Backup and Restore procedures
  • Quality Assurance testing
  • Formal Deployment procedures

A recent report (via Reddit) illustrates this point with a list of news stories on how errors in spreadsheets have cost businesses millions of dollars. A couple of telling snippets (emphasis mine). This one on the lack of version control and auditing:

http://www.namibian.com.na/2005/October/national/05E0F49179.html
The Agricultural Bank of Namibia (Agribank) is teetering on the edge of bankruptcy. “There is no system of control on which the auditors can rely nor were there satisfactory auditing procedures that could be performed to obtain reasonable assurance that the provision for doubtful debts is adequate and valid,” note the auditors. Auditors found that its loan amount to the now defunct !Uri !Khubis abattoir changed from N$59,5 million on one spreadsheet to N$50,4 million on another, while the total arrears was decreased from a whopping N$9,8 million to only N$710 000.

And this one on the lack of training and Quality Assurance.

Only a matter of time before the spreadsheets hit the fan

  • Telegraph (UK), 30 June 2005: In his paper “The importance and criticality of spreadsheets in the City of London” presented to Eusprig 2005, Grenville Croll of Frontline Systems (UK) Ltd. reported on a survey of 23 professionals in the £13Bn financial services sector. The interviewees said that spreadsheets were pervasive, and many were key and critical. There is almost no spreadsheet software quality assurance and people who create or modify spreadsheets are almost entirely self-taught. Two each disclosed a recent instance where material spreadsheet error had led to adverse effects involving many tens of millions of pounds.

The solution is not to make programming more like the way business users work now. The solution is to apply the lessons learned from software development into other business processes.

In the same way that companies rely on heavily trained developers and rigid deployment procedures in place for code, companies should make sure their business people are just as heavily trained in the software they use on a day to day basis. After all, million dollar decisions are based on the content of these systems daily.

For example, spreadsheets should be version controlled. Changes to rules within a rules engine should have to pass some automated tests and manual QA before being deployed. All of these should be peer reviewed.

tech comments suggest edit

Ok, this will be my last post on Twitter for the time being. My last two posts on the subject pointed out flaws with it, so I thought I’d follow up with something positive.

A lot of people just don’t get Twitter, dismissing it as hype. I was firmly in that camp until I tried it, and now am a total Twit (Twitter addict). This morning as I stepped into the shower, I was wondering why Twitter has such a hold. Jeff Atwood calls it the combination of blogging and IM. But I had this nagging feeling that I’ve used something like Twitter before. Then it hit me.

Twitter is no different from a chat room, but with better usability.

Searching the web, I found I’m not the first to compare Twitter to chat or IRC. But let’s look at the problems with IRC and chat that Twitter solves.

  • The Firewall Issue
  • The Channel Overload Issue
  • The Signal to Noise Ratio and Trolling
  • The conversation persistence problem

The Firewall Issue

Unlike IRC and many chat rooms back in the day, Twitter runs over port 80. Thus, it is less likely to be blocked by corporate and personal firewalls. The target here is ubiquity, and getting through the firewall is an important factor.

Channel Overload

I remember when I first started using IRC and then various chat rooms, I ran into the question of which, of the thousands and thousands of channels, I should join. In this case, too many choices cause a headache.

Twitter solves this problem by giving you one choice. Channel You. Public timeline aside, you have full control of who gets to see your tweets and whose tweets you wish to see. Twitter is a completely customized chat room.

Signal to Noise Ratio and Trolling

The complete customization I just mentioned also helps solve the trolling problem. If someone is being a nuisance, remove them from your friends list. You can allow only your friends to see your tweets if you wish.

The Conversation Persistence Problem

I remember jumping into a chat room in the middle of a conversation and wondering, what the hell are they talking about? The fact that Twitter keeps an ongoing archive makes it easy to back up and get caught up to where everyone else is in the conversation.

Now I know that over time, IRC and other Chat clients solved many of these same problems in one form or another. Twitter has solved them all in a compelling manner. It has the immediacy of IM with the public facing aspects of a blog, and the social interaction of a chat room.

comments suggest edit

Jamie Cansdale recently wrote about some legal troubles he has with Microsoft. We were in the middle of an email correspondence on an unrelated topic when he told me about the new chapter in this long saga.

Jamie posted the entire email history and the three (so far) letters received from Microsoft’s legal team. Rather than jump to any conclusions, let’s dig into this a bit.

The Claim

First, let’s examine the claim. In the first letter from OLSWANG, the legal team representing Microsoft, the portion of the EULA for the Visual Studio Express suite of products that Jamie is allegedly in violation of is the following:

…you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways… You may not work around any technical limitations in the software.

The letter continues with…

Your product enables users of Express to access Visual Studio functionality that has been de-activated in Express and to add new features of your own design to the product, thereby circumventing the measures put in place to prevent these scenarios.

What Technical Limitation?

The interesting thing about all this is that nowhere in all the emails is it specified which “technical limitation” Jamie is supposedly working around. Exactly what functionality has been “de-activated”?

So I decided to take a look around to see what I could find. The best I could find is this feature comparison chart.

In the row with the heading Extensibility, it says this about the Express Products.

Use 3rd party controls and content. No Macros, Add-ins or Packages

So 3rd party controls and content are enabled, but Macros and Add-ins or packages are not enabled in this product.

When I pointed this out to Jamie, he countered that this is not true. If the Express editions could not support Add-Ins, how did Microsoft release a Reporting Add-in for Microsoft Visual Web Developer 2005 Express or the Popfly for Visual Studio Express Users?

I imagine that Microsoft is probably not bound by their own EULA and would be allowed to work around technical limitations in their own product to create these Add-Ins. But another potential interpretation is that creating these add-ins is possible and that there is no technical limitation in the Express products.

The problem here is how you define a technical limitation. It’s obvious that the Express product did not remove support for add-ins in the compiled code. In fact, it seems it didn’t remove add-in support at all; it just didn’t provide a convenient manner for registering add-ins. Is an omission the same thing as a technical limitation?

Jamie sent me some code samples to demonstrate that he is in fact only using public, well documented APIs to get TestDriven.NET to show up in the Express menus. He’s not decompiling the code, or using any crazy hacks or workarounds. It’s very simple, straightforward code.

The only thing he does which might be interpreted as questionable is to write a specific registry setting so that the TestDriven.NET menu options show up within Visual Studio Express.

So it seems that supporting Add-Ins does not require any decompilation. All it requires is adding a specific registry entry. Does that violate the EULA? Well whether I think so or not doesn’t really matter. I’m not a lawyer and I’m pretty sure Microsoft’s lawyers would have no problem convincing a judge that this is the case.

I would hope that we should have a higher standard for technical limitation than something so obvious as a registry setting. If rooting around the registry can be considered decompilation and violate EULAs, we’ve got issues.

The Kicker

Also, if that is the case, then you have to wonder about this section in Microsoft’s letter to Jamie, which I glossed over until I noticed Leon Bambrick mention it:

Thank you for not registering your project extender during installation and turning off your hacks by default. It appears that by setting a registry key your hacks can still be enabled. When do you plan to remove the Visual Studio express hacks, including your addin activator, from your product.

This is interesting on a couple levels.

First, if the lack of a registry entry is sufficient to count as a “technical limitation” and “de-activation” of a feature in Visual Studio Express, why doesn’t that standard also apply to TestDriven.NET? Having removed the registry setting that lets TD.NET work in Express, hasn’t Jamie complied?

Second, take a look at this snippet from TestDriven.NET’s EULA

Except as expressly permitted in this Agreement, Licensee shall not, and shall not permit others to: …

(ii) reverse engineer, decompile, disassemble or otherwise reduce the Software to source code form;

…

(v) use the Software in any manner not expressly authorised by this Agreement.

It seems that by Microsoft’s own logic of what counts as a license violation, Microsoft itself has committed such a violation by reverse engineering TestDriven.NET to enable a feature that was purposefully disabled via a registry hack.

The Heart Of The Matter

All this legal posturing and gamesmanship aside, let’s get to the heart of the matter. So it may well be that Microsoft is in its legal right (I’m no lawyer, so I don’t know for sure, but stick with me here). Hooray for you Microsoft. Being in the right is nice, but knowing when to exercise that right is a true sign of wisdom. Is this the time to exercise that right?

You’ve recently given yourself one black eye in the developer community. Are you prepared to give yourself yet another and continue to erode your reputation?

The justification you give is that products like this, which enable disabled features in Visual Studio Express (a dubious claim), will hurt sales of the full featured Visual Studio.NET. Really?! If I were you, I’d worry more about the loss in sales represented by the potential exodus of developers leaving due to your heavy handed tactics and missteps.

comments suggest edit

With all this talk of rockstar programmers, I like Ron Evans’ take when he says, “I Would Rather Be A Jazz Programmer”.

Here are some differences, as I see them:

Rockstar

  • One big hit song, then disappears
  • Embarrass themselves as they age
  • Claims they wrote the song
  • Keeps trying to get back that sound they used to have
  • Gets back together with the old band after unsuccessful solo careers
  • Wants to marry a model and have a movie cameo
  • Won’t play without a contract and advance payment

Jazzer

  • One big hit, and they become an influence
  • Get cooler with age
  • Claims the song is just a cool arrangement of a standard
  • Keeps trying to produce a new sound
  • Records with a variety of musicians over time
  • Wants to become a professor at the Berklee College of Music
  • Jams on the street corner just because they feel like it

While I like some rock bands such as U2 and I like some Jazz, my favorite music falls under the umbrella of Electronica including, but not limited to, BreakBeat, Trance, House, Trip-Hop, Electro, Jungle and Downtempo.

It occurred to me that I would rather be a DJ programmer than any of these.


Here are some reasons why:

  • Never suffer from the Not Invented Here syndrome.
  • Are masters of re-use, to a fault.
  • The best do produce new music from scratch when they see a specific need that isn’t being addressed by already existing music.
  • Are great at mash-ups and integrating parts of multiple songs to create a new and more interesting song, aka a remix.
  • The best get paid a lot for a couple of hours of easy work. How hard is it to spin some vinyl, CDs, or audio files from a laptop as some do now?

At the very least, I would like to be paid like a DJ. Not the guy at your local dive, I’m talking about the big names who get paid $50K for three hours of work. What kind of music reflects your coding style?

comments suggest edit

I don’t know about you, but I find it a pain to call stored procedures from code. Either I end up writing way too much code to specify each SqlParameter explicitly, or I use a tool like Microsoft’s Data Access Application Block’s SqlHelper class to pass in the parameter values, which requires me to remember the correct parameter order (it actually supports both methods of calling a stored procedure). What a pain!
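
To illustrate, here is roughly what the two options look like for a hypothetical blog_GetEntries stored procedure (the procedure and parameter names are made up for this example, I’m assuming a connection string and parameter values are already in scope, and the SqlHelper overload is the params-style one from the Data Access Application Block):

// Option 1: spell out every parameter by hand. Verbose, but explicit.
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("blog_GetEntries", connection))
{
  command.CommandType = CommandType.StoredProcedure;
  command.Parameters.Add("@BlogId", SqlDbType.Int).Value = blogId;
  command.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
  connection.Open();
  // ... execute the command and map the results ...
}

// Option 2: SqlHelper with positional values. Shorter, but get the order
// wrong and you will not find out until runtime.
DataSet entries = SqlHelper.ExecuteDataset(connectionString,
  "blog_GetEntries", blogId, pageIndex);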

What I need is a strongly typed stored procedure. Something that’ll tell me which parameters to pass and will break at compile time if the parameters change in some way.

Subsonic can help with that. In general, Subsonic is most productive when combining its code generation with its dynamic query engine and Active Record. But sometimes you’re stuck with stored procedures and want to make the best of it. Subsonic, via the sonic.exe command line tool, can generate strongly typed stored procedure wrappers, saving you from writing a lot of boilerplate code.
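
I won’t reproduce the generated output exactly here, so take this as an approximation of its shape rather than the literal Subsonic API, but the wrapper for that same hypothetical procedure ends up looking something like this:

// Approximate shape of a generated wrapper (names are illustrative).
using (IDataReader reader = SPs.BlogGetEntries(blogId, pageIndex).GetReader())
{
  while (reader.Read())
  {
    // ... map rows to objects ...
  }
}
// If the stored procedure gains, loses, or retypes a parameter, regenerating
// the wrappers turns the mismatch into a compile error instead of a runtime bug.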

I recently finished updating Subtext to call all its stored procedures using Subsonic generated code. This post will walk you through setting up a toolbar button in Visual Studio.NET 2005 to do this, using Subtext as the example. This pretty much follows the example that Rob set in this post.

First, I made sure to put the latest and greatest sonic.exe and SubSonic.dll in a known location. In Subtext, this is the Dependencies folder, which on my machine is located at:

d:\projects\Subtext\trunk\SubtextSolution\Dependencies\

The next step is to create a new External Tool button by selecting External Tools… from the Tools menu.

External Tools...

This will bring up the following dialog.

External Tools Dialog

I filled in the fields like so:

  • Title: Subtext Subsonic SPs
  • Command: D:\Projects\Subtext\trunk\SubtextSolution\Dependencies\sonic.exe
  • Arguments: generatesps /config “$(SolutionDir)Subtext.Web” /out “$(SolutionDir)Subtext.Framework\Data\Generated”
  • Initial Directory: $(SolutionDir)

This tells Sonic.exe to find the Subsonic configuration within the Subtext.Web folder, but generate the stored procedure wrappers in a subfolder of the Subtext.Framework project.

With that in place, I then created a new Toolbar by selecting Customize from the Tools menu which brings up the following dialog.

Customize Dialog

Click on the New… button to create a new toolbar.

New Toolbar

I called mine Subsonic. This adds a new empty toolbar to VS.NET. Now all I need to do is add my Subtext Stored Procedures button to it. Just click on the Commands tab.

Customize Commands

Unfortunately, the External Tools commands are not named in this dialog. However, since I know the first command is the one I want (they appear in the same order as they are listed in the Tools menu), I drag External Command 1 to my new Subsonic toolbar.

Subtext SPs button

So now when I make a change to a stored procedure, or add/delete a stored procedure, I can just click on that button to regenerate the code that calls my stored procedures.

comments suggest edit

In a recent post, I compared the expressiveness of the Ruby style of writing code to the current C# style of writing code. I then went on and demonstrated one approach to achieving something close to Ruby’s expressiveness using Extension Methods in C# 3.0.

The discussion focused on how well each code sample expresses the intent of the author. Let’s look at the comparison:

Ruby:

20.minutes.ago

C#:

DateTime.Now.Subtract(TimeSpan.FromMinutes(20));

C# 3.0 using Extension Methods:

20.Minutes().Ago();

It seems obvious to me that the C# 3.0 example is more expressive than the classic C# approach, but not everyone agrees. Several people have said something to the effect of:

Yeah, that’s great for those who speak English.

Another person mentioned that the Ruby style of code panders to English speakers. Really?! Really?!

Yet somehow, the classic C# example doesn’t pander to English speakers? In the Ruby example, I count 2 words in English: Minutes and Ago. In the classic C# example, I count 8 words in English: Date, Time, Now, Subtract, Time, Span, From, Minutes (decomposing the class names into their constituent words via Pascal Casing rules).

Not to mention that all of these code samples flow left-to-right, unlike languages such as Hebrew and Arabic which flow right to left.

Seems to me that if anything, the classic C# example panders just as much if not more to the English speaking world than the Ruby example.

One explanation given for this statement is the following:

DateTime.Now.Subtract(TimeSpan.FromMinutes(20)); follows a common convention across languages, a hierarchical OOP syntax that makes sense regardless of your native tongue

I don’t get it. How is 20.minutes.ago not hierarchical and object oriented, yet we wouldn’t even take a second look at DateTime.Now.Day or 20.ToString(), both of which are currently in C# and familiar to developers?

The key goal in object oriented software is to develop abstractions and work within the domain of those abstractions. That’s the foundation of OO. Working with a Product object and a Customer object, rather than a large set of procedural methods, is what makes it possible to understand a large system.

Let’s look at a typical object oriented code sample found in an OO tutorial:

Customer customer = Load<Customer>(id);
Order order = customer.GetLastOrder();
ShippingProvider shipper = Shipping.Create();
shipper.Ship(order);

I know I know! This code panders to English! Look at the way it’s written! GetLastOrder()? Shouldn’t that be ConseguirOrdenPasada()?

Keep in mind that this all stems from a discussion about Ruby, a language written by Yukihiro Matsumoto, a Japanese computer scientist.

Now why would a Japanese programmer write a programming language that “panders to English?”

Maybe because the only language in software that is universal is English. It’s just not possible to write a programming language that would be universally expressive in any human language. What might work for a Spanish speaker might be confusing to a Swahili speaker. Not to mention the difficulty in writing a programming language that would read left to right and right to left (Palindrome# anyone?).

Yet we must find common ground for a programming language, so choosing a human language we must. For historical reasons, English is that de-facto language. It’s the reason why all the major programming languages have English keywords and English words for its class libraries. It’s why you use the Color class in C# and not the Colour or 색깔 class.

Now I’m not some America-centrist who says this is the way it should be. I’m just saying this is the way it is. Feel free to create a programming language with all its major keywords in another language and see how widely it is adopted. It’s a fact of life. If you’re going to write software, you better learn some degree of English.

In conclusion, yes, 20.minutes.ago does pander to English, but only because all major programming languages pander to English. C# is no exception. In fact, pandering to English is our goal when trying to write readable software.

comments suggest edit

Are your unit tests a little flat lately? Have they lost their shine and seem a bit directionless? Maybe it’s time to jazz ’em up a bit with the latest release of MbUnit.

Andrew Stopford posted a list of bug fixes, improvements, and new features. The new feature I’m selfishly excited about is the new Attribute that can Extract an Embedded Resource. Finally, I have a patch submitted to MbUnit! :)

MbUnit has changed the way I write unit tests. Here’s a list of a few of my posts on MbUnit.

Now go and robustify your application.

comments suggest edit

UPDATE: Looks like Ian Cooper had posted pretty much the same code in the comments to Scott’s blog post. I hadn’t noticed it. He didn’t have a chance to compile it, so consider this post a validation of your example Ian! :)

Scott Hanselman recently wrote a post about how Ruby has tits or is the tits or something like that. I agree with much of it. Ruby is in many respects a nice language to use if you think in Ruby.

One of the comparisons of the syntactic sugar Scott showed was this:

Java:

new Date(new Date().getTime() - 20 * 60 * 1000);

Ruby:

20.minutes.ago

That is indeed nice. But I was on the phone with Rob Conery talking about this when it occurred to me that we’ll be able to do this with C# 3.0 extension methods. That link there is a blog post by Scott Guthrie talking about this feature.

Not having any time to install Orcas and try it out, I asked Rob Conery to be my code monkey and try this out. So we fired up GoToMeeting and started pair programming. Here is what we came up with:

public static class Extenders
{
  public static DateTime Ago(this TimeSpan val)
  {
    return DateTime.Now.Subtract(val);
  }

  public static TimeSpan Minutes(this int val)
  {
    return new TimeSpan(0, val, 0);
  }
}

Now we can write a simple console program to test this out.

class Program
{
  static void Main(string[] args)
  {
    Console.WriteLine(20.Minutes().Ago());
    Console.ReadLine();
  }
}

And it worked!

So that’s very close to the Ruby syntax and not too shabby. It would be even cleaner if we could create extension properties, but our first attempt didn’t seem to work and we ran out of time (Rob actually thinks eating lunch is important).

I found out from ScottGu that Extension Properties aren’t part of the language yet, but are being considered as a possibility in the future.

So now add this to the comparison:

C# 3.0

20.Minutes().Ago();

Just one of the many cool new language features coming soon.

comments suggest edit

There’s been a lot written about whether or not Microsoft is doing enough to support Open Source Projects on its platform. In the past, Microsoft’s report card in this area was not one to take home to mom.

Lately though, there’s been a lot of improvement, with initiatives like CodePlex as well as the many projects that Microsoft has opened up and moved over there. Many have expressed that there’s more that Microsoft can do and I for one believe that Microsoft is starting to listen.

If not Microsoft, at least Sam Ramji of Port 25 is. He’s effectively the Director of Open Source at Microsoft, though his official title is Director of Platform Technology Strategy.

Several members of the .NET open source community have been bouncing ideas around with Sam looking for ways for Microsoft to support these communities. I think we’ll see some big things come out of that, but it won’t happen overnight.

Meanwhile, as we wait for Microsoft to hammer out the details for potentially larger initiatives (with the help of the community), how can we as a community start supporting open source projects ourselves? How about an Open Source Incubator?

Like a good agile developer, the first iteration of the idea will start very small as a means to test the waters. Will developers participate? Will companies support this? Who knows? Let’s find out!

What’s In It For Me?

So far, Microsoft, via Sam, has agreed to support this effort with some MSDN licenses, and MaximumASP has agreed to offer hosting (details being hashed out as we speak).

At the moment, this is a relatively informal idea, but if it catches on, we hope that more companies will want to support it (cheap publicity!) and we’ll have a successful model of not only how Microsoft can support the community, but how the community can support itself.

What about existing Open Source projects in need of licenses?

Good question! At the moment, this is a relatively informal experiment. If it works out, we’ll probably want to support both existing and new projects. An incubator doesn’t have to be just for new projects, does it?

If that answer doesn’t work for you, try reading the comments of Rob’s post. Maybe you can smooth talk Sam into giving your worthy project a license.

comments suggest edit

I’ve been invited to participate on a couple of panels at the upcoming DotNetNuke OpenForce ‘07 conference, November 5-8 in Las Vegas.

  • .NET Open Source Panel with Scott Guthrie
  • .NET Open Source Architectures Panel

I’m pretty excited to be on the same panel as ScottGu himself, the man who never sleeps. Both of the panels I am on are focused on Open Source on the .NET platform, something I love to talk about. Well that and Subtext.

So what is the DotNetNuke OpenForce conference?

This is a DotNetNuke conference (so DotNetNuke sessions are emphasized), but with a bigger focus than just DotNetNuke. Keeping true to its roots, they want to help expand the visibility of other open source projects on the .NET platform.

Towards that goal, DotNetNuke created the OpenForce concept which will provide a conference venue for some of the largest .NET open source projects. This is a way for open source projects to band together, showcase their technology, and exchange ideas and support.

This conference will be co-located with the ASP.NET Connections conference, so I’ll be able to attend sessions at both. So if you’re going to be at either conference, leave me a comment!

And I’ll do my best to keep the “You Knows” to a minimum. If you’re there, feel free to keep a scorecard.