personal comments edit

No, this was definitely not a good sight for me. With my team pressing the attack, an unfortunate turnover left our defense out of position to help as the opposing team quickly capitalized.

Ezra and Me (Click on any pic for a larger view).

Squinting toward the sun, I saw 6’ 3” Ezra Hendrickson of the Columbus Crew barreling towards me, the last man back on defense. Fortunately his midfielder’s cross was a bit too long, allowing me to pluck it out of Ezra’s path.

With Ezra backtracking towards me attempting to regain the attack, I looked up for a midfielder to distribute the ball to, finding Preki, former U.S. National team player, ready to receive the pass and orchestrate the counterattack.

Jerseys

Might sound like a soccer lover’s wet dream, but this is just a small snippet of what I’ve been up to this past week. I attended the Los Angeles Galaxy’s Adult Fantasy Soccer Camp.

Yeah, the name is a bit snicker-worthy (Adult Fantasy Camp, heh-heh heh-heh?), but such camps are common for other sports such as baseball and basketball. It’s an opportunity for soccer lovers to get a taste of the pro treatment.

For me, it was four straight days of heaven. Two practices a day with former and current Galaxy coaches. Expert guidance from fitness trainers from the company that Jürgen Klinsmann selected to work with the German national team. Guest appearances from several soccer celebrities (though three of them happen to be in my soccer league).

The Lounge

The camp was held at the Home Depot Center, the home of the Los Angeles Galaxy. We used the same facilities they use, including their lockers, their training fields, and their lounge (seen here at right).

When I first arrived, two kits were neatly displayed in my locker: an away jersey and a practice jersey. You can see my jersey in the picture below right.

Haack, #8

Each day started with a morning session that commenced with a warmup led by an instructor from Athletes’ Performance. I particularly appreciated their philosophy against static stretching before exercise, as it can decrease performance. Instead, athletes get a stretch by working through the natural range of motion in various warm-up exercises. Static stretching is great for after workouts.

Afterwards, we ran through various drills and small-sided games led by the expert coaches. The afternoon sessions focused more on actual playing and tactics, with the occasional shooting and defensive drill thrown in.

For the most part, the other participants were a diverse group, but the soccer community is a small world in Los Angeles. Four other participants were current or former teammates of mine.

Max, Me, and Donny

Schrizz, Joaquin, and Mateo

The grand finale of the camp was a full-field game, officiated by professional MLS referees, between two teams chosen by the coaches. On the penultimate day, I noticed the two coaches hunched over their notes, furiously picking teams and making trades between the two of them. For the coaches, this was a matter of pride to see who could pick and coach the better team.

The morning of match day, they revealed the lineup, but did not reveal the allocations (marquee players to be announced) until lunchtime. The other team received a current pro and a former college player. We received a former college player; our second allocation wasn’t able to make it.

Fortunately, our coach was quite shrewd and happened to run into Preki, who was there to interview for the Chivas coaching job. After his interview, Preki was happy to come out and play one half with us.

The coach gave me a decision to make: would I stay in attacking midfield, or relinquish the spot to Preki? That, my friends, is a no-brainer. With a player of that caliber around, I moved back to central defense, ready to get my Beckenbauer (an actual compliment from a teammate after the game).

Although I felt we were the superior team, we ended up tying the match 3 to 3, a disappointing result considering how much we possessed the ball. Our last shot of the game nicked the post, almost giving us a game winner at the last second.

My friend Donny filmed much of the camp and will be putting together a video for the MLS. Since I was his carpool partner, it’ll have a lot of me in it. ;) If you’re a soccer lover in or near Los Angeles, I totally recommend this camp next year.

comments edit

Malawi Villagers

Clean drinking water piped to my house is something I take for granted (yes, even in Los Angeles). When I stop to think about it a bit, I can’t imagine how tough it must be to not have clean drinking water.

For people in four remote areas of Malawi, this is a day to day reality. Fortunately, there are people trying to do something about it. Wells for Zoë is a charity dedicated to bringing clean water to these areas.

Jamie Cansdale, famous for TestDriven.NET (a tool near and dear to my heart), is doing his part to contribute to this good cause by starting a .NET software charity auction. Several generous vendors have donated licenses to great software you can bid on. If you have the means, consider making a bid. If not, perhaps you can make a small direct donation.

comments edit

Today my team had a friendly preseason game against Hollywood United. They fielded a few players from their 40-and-over team, but most of the players were from their main team.

While Alexi and Frank were not there today, Hollywood did field the Mean Machine, Vinnie Jones. He is a former professional player from England who was part of the Crazy Gang team that won the FA Cup in 1988. He’s notorious for dirty play, but is now better known for his acting, such as the role of the Juggernaut in X-Men 3.

Grabbing the wrong soccer ball

The Juggernaut

Despite being known for his rough tactics, it still takes talent to play at the level he played at, and it showed today. He put in a sweet free kick to the upper corner. And yes, he looks even meaner in person than he does on film. He’s a tough-looking Mofo.

Another well-known player was Steve Jones, former guitarist for the punk band the Sex Pistols, who also hosts Jonesy’s Jukebox. Jimmy Jean-Louis is also on the team. He plays the guy who can erase memories on the new NBC show, Heroes. They’re not kidding when they call themselves Hollywood United.

At the end of regulation, we were tied 4 to 4. Since it was a friendly and we had paid for the field for 10 more minutes, we played a short overtime period in which they scored two more goals, making the final 6 to 4. We should’ve faked injuries and left while we were tied. Wink

comments edit

Solar Flare

I know what you’re probably thinking. Did Phil forget to take his meds today? Let me explain.

Yesterday I thought I would try my hand at upgrading to Vista. But first, being the conscientious geek that I am, I tried to mirror my current drive to a brand new drive.

I plopped the new drive carefully (as carefully as one can plop anything) into the machine, and the system wouldn’t boot. So I retreated, pulling the old drive out and putting everything back the way it was.

Or so I thought.

The machine still wouldn’t boot. I ran the entire suite of Dell diagnostics tests on it. I also ran CHKDSK /F on it. Nothing. I ran out and bought a new SATA cable. Nothing. As far as any software test could tell, the drive was fine. It was being recognized by the BIOS, it just wouldn’t boot.

I plopped my new drive (again carefully) into the primary spot and was able to perform a clean install on it. So my machine can boot on my new drive with Vista, but not my old drive with Windows XP. How strange! At least I can access my old drive from Vista in order to copy important files over to the new drive.

As an aside, why do we say we perform installs? It’s not like anyone is watching, nor would they really want to. Who is being entertained?

Today, I was still dealing with the aftermath of this drive failure when I chatted up Micah on Skype. He remarked that everyone in our company seems to be having computer issues today. Jon’s having problems with his sound card, Pat misplaced his laptop, and Micah’s USB flash-drive failed. He says,

Must be something in the air. Maybe there was a huge solar flare or something.

Then it struck me.

Light Bulb

I remembered reading that in fact there is a huge solar flare headed towards Earth. Well there you go, say no more. That explains it. A solar flare hosed my drive. Right now I am trying to mirror my new drive over my old drive to see if it’ll boot up with Vista. If it does, then it would seem to me that somehow Windows got corrupted on my old drive. Perhaps a solar particle flipped the AllowBootAndJustWork bit to 0. Just my luck.

In any case, it’s a good thing I have a couple of backup machines to blog from.

comments edit

I don’t suffer from classic OCD (Obsessive Compulsive Disorder), but I do sometimes have OCD tendencies. Just ask my poor wife when we’re having dinner while my mind is still trying to resolve a thorny programming problem. Earth to Phil. Are you on this planet?

Lately, the object of my OCD-like tendencies is getting the Subtext unit test code coverage up to 40%. At the time of this writing (and after much work), it is at 38%. Why 40%? No reason. Just a nice round number that we’ve never yet hit. Remember, OCD isn’t necessarily a rational affliction.

If code coverage is my disease, TestDriven.NET with NCoverExplorer integration is my drug, and Jamie Cansdale is my dealer. He graciously gave me a pro license as a donation to the Subtext project.

So here’s the anatomy of a code coverage addiction. First, I simply right-click on our unit test project, UnitTests.Subtext, from within Visual Studio .NET 2005 (this also works with older versions of VS.NET). I select the Test With menu option and click Coverage, as in the screenshot below.

Test With Coverage in VS.NET 2005

After running all of the unit tests, NCoverExplorer opens up with a coverage report.

NCoverExplorer coverage results in Left Pane

I can drill down to take a look at code coverage all the way down to the method level. In this case, let’s take a look at the Subtext.Akismet assembly. Expanding that node, I can drill down to the Subtext.Akismet namespace, then to the HttpClient class. Hey! The PostClient method only has 91% code coverage! I’ve gotta do something about that!

NCover Drill Down

When I click on the method, NCoverExplorer shows me the code in the right pane along with which lines of code were covered. The lines in red were not executed by my unit test. Click on the below image for a detailed look.

NCoverExplorer Code Coverage Pane

As you can see, there are a couple of exception cases I need to test. It turns out that one of these exception cases never happens, which is why I cannot get that line covered. This may be better served using a Debug.Assert statement than throwing an exception.
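
For illustration, here’s the kind of change I have in mind. The variable and the condition are invented for this example; it’s not lifted from the actual Subtext.Akismet code:

// Before: throwing for a case that, as far as the tests can tell,
// never actually happens, which leaves a permanently red line in the
// coverage report.
if (response == null)
    throw new InvalidOperationException("The response was null.");

// After: document the invariant with Debug.Assert (System.Diagnostics)
// instead, so there is no unreachable throw left to cover.
Debug.Assert(response != null, "The response should never be null here.");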

If you haven’t played around with TestDriven.NET and NCoverExplorer, give it a twirl. But be careful, this is powerful stuff and you may spend the next several hours writing all sorts of code to get that last line of code tested.

Here are a few posts I’ve written that you may find useful to eke out every last line of code coverage.

Now get out there and test!

code, tdd comments edit

Here we are already looking ahead to learn about the language features of C# 3.0 and I am still finding new ways to make my code better with good “old fashioned” C# 2.0.

Like many people, I tend to fall into certain habits of writing code. For example, today I was writing a unit test to test the source of a particular event. I wanted to make sure that the event was raised and that the event arguments were set properly. Here’s the test I started off with (some details changed for brevity), which reflects how I would have done this in the old days.

private bool eventRaised = false;

[Test]
public void SettingValueRaisesEvent()
{
    Parameter param = new Parameter("num", "int", "1");
    param.ValueChanged += OnValueChanged;
    param.Value = "42"; //should fire event.

    Assert.IsTrue(eventRaised, "Event was not raised");
}

void OnValueChanged(object sender, ValueChangedEventArgs e)
{
    Assert.AreEqual("42", e.NewValue);
    Assert.AreEqual("1", e.OldValue);
    Assert.AreEqual("num", e.ParameterName);
    eventRaised = true;
}

A couple of things rub me the wrong way with this code.

First, I do not like relying on the member variable eventRaised because another test could inadvertently set that value, unless I make sure to reset it in the SetUp method. So now I need a SetUp method.

Second, I don’t like the fact that this test requires this separate event handler method, OnValueChanged. Ideally, I would prefer that the unit test be as self-contained as possible.

Then it hits me. Of course! I should use an anonymous delegate to handle that event. Here is the revised version.

[Test]
public void SettingValueRaisesEvent()
{
    bool eventRaised = false;
    Parameter param = new Parameter("num", "int", "1");
    param.ValueChanged += 
        delegate(object sender, ValueChangedEventArgs e)
        {
            Assert.AreEqual("42", e.NewValue);
            Assert.AreEqual("1", e.OldValue);
            Assert.AreEqual("num", e.ParameterName);
            eventRaised = true;
        };
    param.Value = "42"; //should fire event.

    Assert.IsTrue(eventRaised, "Event was not raised");
}

Now my unit test is completely self-contained in a single method. Lovely!

In general, I try not to use anonymous delegates all over the place, especially delegates with a lot of code. I think they can become confusing and hard to read. But this is a situation in which I think using an anonymous delegate is particularly elegant.

Contrast this approach to the approach using Rhino Mocks I wrote about a while ago. In that scenario, I was testing that a subscriber to an event handles it properly. In this case, I am testing the event source.

Technorati Tags: Tips, TDD, C#, Rhino Mocks

comments edit

Steve Harman just announced the release of Subtext 1.9.3. This is primarily a bug fix release, though there are a couple of small improvements.

You can download the latest bits here.

Many thanks to Steve and the rest of the Subtext crew for all the hard work in getting this release together. Ever since I wrote that Subtext Job Posting blog post, we’ve had a lot more active contributors lending a hand, which has been a big help. Your participation is very much appreciated!

With 1.9.3 out of the dock, all focus is now on getting Subtext 2.0 ready for deployment.

So far, progress on Subtext 2.0 has been going better than I expected. We have an early implementation of our plugin framework working, though we still have a lot of improvements and polishing to do on it. The Membership Provider is also working, though we have a few refactorings we’re considering to the data model.

code, tdd comments edit

A Scanning Test - From http://www.sxc.hu/photo/517386

Last night a unit test saved my life (with apologies). Ok, maybe not my life, but the act of writing some unit tests did save me the embarrassment of an obscure bug which was sure to hit when I least expected it. It is cases like this that made me into such a big fan of writing automated unit tests.

Not too long ago I wrote a C# Akismet API for Subtext. In writing the code, I followed design principles focused on making the API as testable as possible. For example, I applied Inversion of Control (IOC) by having the AkismetClient constructor take in an HttpClient instance as an argument. The HttpClient instance is responsible for making the actual HTTP request.

This allowed me to use Rhino Mocks to replace the HttpClient with a mock, enabling me to build unit tests that ensure the Akismet API does the right thing without having to make any actual web requests.
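
To give a feel for the shape of that design, here’s a simplified sketch. The member names and signatures are illustrative only; the real Subtext.Akismet classes have more members and different signatures:

using System;

public class HttpClient
{
    // Virtual so a test can substitute a fake (or a Rhino Mocks dynamic
    // mock) that never touches the network.
    public virtual string PostRequest(Uri url, string formData)
    {
        // The real implementation performs the actual HTTP POST and
        // returns the response body; omitted in this sketch.
        throw new NotImplementedException();
    }
}

public class AkismetClient
{
    private HttpClient httpClient;

    // Inversion of Control: the collaborator is handed in rather than
    // created internally, so a test controls the HTTP layer.
    public AkismetClient(HttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    // CheckCommentForSpam, SubmitSpam, etc. would use httpClient to
    // call the Akismet service; details omitted in this sketch.
}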

Of course this approach only delays the inevitable. I still want to have an automated test for the HttpClient class.

So last night, I took a step back and revisited this excellent post by Scott Hanselman in which he shows how to use Cassini in your unit tests. I decided to update his pioneering approach to use the latest incarnation of Cassini, WebServer.WebDev. I also decided to refactor what he did into a reusable TestWebServer class in order to make the barrier to entry in using it as low as possible.

Setup

WebServer.WebDev is the built-in web server (formerly known as Cassini) used by Visual Studio .NET 2005. The main functionality of the server is located in WebDev.WebHost.dll. You can find this assembly in the GAC. On my machine, it is located in the following directory:

c:\WINDOWS\assembly\GAC_32\WebDev.WebHost\8.0.0.0__b03f5f7f11d50a3a\

Note that the .NET framework installs an explorer extension for the GAC so you won’t see this directory using Windows Explorer. I navigated to the directory using the command prompt.

Setting up the test web server is a two step process once you’ve located the WebDev.WebHost assembly.

  1. Copy WebDev.WebHost.dll into your unit test project and add a reference to it.
  2. Add the TestWebServer.cs file into your unit test project.

Note: To make this really reusable, you could drop this class into a separate Unit Test Helper assembly that you reference in your unit test projects. If you do go that route, be sure to heed the “//NOTE” I left for you in the ExtractResources method.

TestWebServer Usage

The following shows a couple of ways you can use this test web server in your own unit tests. If you have a single test in a fixture that needs to use the server, you can do something like this:

using (TestWebServer webServer = new TestWebServer())
{
    webServer.Start();
    webServer.ExtractResource("ResourcePath.SomePage.aspx"
      , "SomePage.aspx");
    string response = webServer.RequestPage("SomePage.aspx");
    Assert.AreEqual("Done", response);
}

If you have a set of tests that need to use the web server, I suggest using the [TestFixtureSetUp] and [TestFixtureTearDown] attributes to start the web server just once for all the tests.

private TestWebServer webServer = new TestWebServer();
private Uri webServerUrl;

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    this.webServerUrl = this.webServer.Start();
}

[TestFixtureTearDown]
public void TestFixtureTearDown()
{
    if(this.webServer != null)
        this.webServer.Stop();
}

I added several helper methods to the TestWebServer class based on what Scott did.

ExtractResource takes in the full path to an embedded resource and extracts it into the test webroot. In the first code example above, I extracted an embedded resource into a file named SomePage.aspx. Be sure to call this method after the webserver is started.

RequestPage has two overloads. One which makes a simple GET request to the test web server, and the other which makes a POST request.

Discussion

In the past, I have gone to great lengths to avoid using a web server to unit test my code, as that takes us more into the realm of integration testing. A while ago I wrote a post on how to simulate the HttpContext for unit tests without using a web server. This approach has been improved upon in the Subtext unit test codebase and has served me well.

But even that approach can only go so far. As I pointed out in my post on a Testing Mail Server, it’s a good thing to abstract out these extensibility points using an interface or a provider. But at some point, you have to test the concrete implementation. I can’t keep delegating functionality endlessly to another abstraction. Somebody has to make a real HTTP request.

So consider this approach the method of last resort.

Download

The test webserver class can be downloaded here from my company’s tools site.

comments edit

PDC 2007 Developer Powered

According to Brad Abrams, it looks like we’re going to have us another PDC in 2007. For those of you not in the know, PDC stands for Professional Developers Conference. These are conferences in which Microsoft shows off upcoming technologies that us developers will be using.

It’s not till October 2, so there’s still time to scrounge up the money to go, assuming I don’t lose it all at the blackjack tables at Mix 07 (I should have stayed at the craps table; Steve Maine can back me up on that).

I probably know a back door or two I can sneak through if it comes to that. I’m looking forward to having all you illustrious developers in my neighborhood (Westside!). It’s too early for a roll call, but be sure that I hear about it when you decide to attend.

I do know where to get the best Korean food.

comments edit

Subtext has a pretty sweet Continuous Integration setup using CruiseControl.NET running inside a virtual server inside a real computer in my home office. The man responsible for this setup is Simone Chiaretta, who just unveiled his new English language blog, CodeClimber.

His former blog was in Italian, which made it difficult reading for me, as Babelfish totally mangled the translations. When Simone is not writing about broken ribs from kite surfing or bagging the latest peak, he just might get around to writing about the CCNET setup he created for Subtext. Wink

comments edit

Green Light from http://www.sxc.hu/photo/669003

Google Code Search is truly the search engine for the uber geek, and potentially a great source of sublime code and sublime comments. K. Scott Allen, aka Mr. OdeToCode, posted a few choice samples of prose he found while searching through code (Scott, exactly what were you searching for?).

One comment he quotes strikes me as a particularly good point to remember about using locks in multithreaded programming.

Locks are analogous to green traffic lights: If you have a green light, that does not prevent the idiot coming the other way from plowing into you sideways; it merely guarantees to you that the idiot does not also have a green light at the same time. (from File.pm).

The point the comment makes is that a lock does not prevent another thread from going ahead and accessing and changing a member.  Locks only work when every thread is “obeying the rules”.
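
Here’s a contrived C# sketch of that point (the class and field are made up for the example). The lock in the first method guarantees nothing about the second one, because the second method never asks for the green light:

public static class Counter
{
    private static readonly object padlock = new object();
    private static int total;

    // Obeys the rules: every write to total happens inside the lock.
    public static void PoliteIncrement()
    {
        lock (padlock)
        {
            total = total + 1;
        }
    }

    // The idiot running the red light: writes total directly, so this
    // can interleave with PoliteIncrement no matter how carefully that
    // method locks.
    public static void RudeIncrement()
    {
        total = total + 1;
    }
}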

Unfortunately, unlike an intersection, there is no light to tell you whether or not the coast is clear. When you are about to write code to access a static member, you don’t necessarily know whether or not that member might require having a lock on it. To really know, you’d have to scan every access to that member.

This is why concurrency experts such as Joe Duffy recommend following a locking model.  A locking model can be as simple or as complex as needed by your situation.

In any case, have fun with Google Code Search. You might find yourself reviewing the code of a Star Trek text-based sex adventure.

comments edit

Just thought I would highlight something I mentioned in my last post because I thought it was particularly funny. I wrote about the joys of using Google Code Search to search through source code for interesting comments. Definitely a geeky pastime.

However that geekiness is overshadowed by something interesting I found. In this case, it’s not the comments that are interesting, but the actual code itself.

It appears to be a lonely geek’s fantasy written in code: a text-based adventure about sexcapades with Dr. Beverly Crusher on the Star Trek Enterprise. Here’s a tame snippet to give you an idea.

dancewithdesc ( actor ) = { "You and [Dr. Beverly Crusher] dance a quiet, soft, close dance around her quarters. It is really quite chaste, but you find yourself intoxicated by her scent and stimulated by your grasp and hers, both of which stray far lower on each others’ backs than is supposedly called for with this step.";

From http://www.ex-astris-scientia.org/gallery/wost/beverly2.jpg

You have to read it to believe it (warning, trashy novel style adult content).

comments edit

UPDATE: Looks like the DNS issue is starting to get resolved. The fix may not have fully propagated yet.

If your Akismet Spam Filtering is currently broken, it is probably due to a DNS issue going around. I reported it to the akismet mailing list and found that people all over the world are having the same issue. It is not just a Comcast issue.

The temporary fix is to add the following entry into your hosts file:

72.21.44.242 rest.akismet.com

Hopefully the Akismet team will fix this problem shortly.

comments edit

My favorite unit testing framework just released a new version. Andrew Stopford has the announcement here and you can download the release from the MbUnit site.

I met Andrew at Mix 06 early this year and he’s a class act and great project lead. I’ve been following MbUnit’s progress on and off and am really happy with the team’s responsiveness to my submitted issues.

My one tiny contribution to the project was to purchase the mbunit.com domain for them. Perhaps a little bribe to get my feature requests looked at promptly. ;)

If you are wondering why I prefer MbUnit over NUnit, check out these posts:

comments edit

Ok, I could use some really expert help here. I really like using the built-in WebServer.WebDev web server that is a part of Visual Studio. For one thing, it makes getting a new developer working on Subtext (or any project) that much faster: just get the latest code and hit CTRL+F5 to see the site in your browser. No pesky IIS setup.

Today though, I ran into my first real problem with this approach. When running the latest Subtext code from our trunk, I am getting a SecurityException during a call to Server.Transfer.
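
For context, the failing call is nothing exotic. Stripped down, it is essentially this (the page name is hypothetical, not the actual Subtext code path):

protected void Page_Load(object sender, EventArgs e)
{
    // Hands the rest of the request over to another page in the same
    // application. Under WebServer.WebDev, this call is where the
    // SecurityException shows up.
    Server.Transfer("~/Error.aspx");
}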

Stepping through the code in the debugger, the page I transfer to executes just fine without throwing an exception.

Based on the stack trace, the exception occurs when the content is being flushed to the client. The cause is a security demand for unmanaged code during a call to the IHttpResponseElement.Send method of the HttpResponseUnmanagedBufferElement class.

What I don’t understand is why this particular class is handling my request instead of the HttpResponseBufferElement class. This code seems to work fine when I use IIS, so I think it’s a problem with WebServer.WebDev. Anybody know anyone who understands these internals well enough to enlighten me? I’d be eternally grateful.

I posted this question on the MSDN forums as well.

comments edit

From http://www.v-brazil.com/culture/sports/football/player/ronaldinho-kid.jpg

Recently while picking up a few items at Target, I decided to buy a cheapo soccer ball. Now those who know me know I’m a bit of a fanatic about playing soccer, willingly paying good money for a quality ball.

But this ball is not for playing outdoors. I keep it in my office so I can dribble it during breaks, deftly avoiding obstacles on my way to the bathroom, practicing moves during phone calls and long compilations.

It’s a minor thing, but I am already noticing improvement when playing for real, just through the benefits of visualization and practice. I wouldn’t recommend this for every sport. Images of Craig Andera with a hockey stick breaking furniture in his office come to mind.

As software developers, we tend to hold the idea of innate talent in very high regard. How often do you hear software pundits saying, “Either you got it, or you don’t”?

However, according to a recent Scientific American article, The Expert Mind, this may not be as much the case as we think.

At this point, many skeptics will finally lose patience. Surely, they will say, it takes more to get to Carnegie Hall than practice, practice, practice. Yet this belief in the importance of innate talent, strongest perhaps among the experts themselves and their trainers, is strangely lacking in hard evidence to substantiate it.

The article delves into studies of chessmasters who, when briefly shown a random chessboard, cannot recall the positions of its pieces any better than non-chessmasters, but who have significantly stronger recall when those pieces represent configurations that could arise from actual game play.

The article concludes that chessmasters build structures in their brains to recognize patterns in chess, and that to become an expert in chess takes around ten years.

The one thing that all expertise theorists agree on is that it takes enormous effort to build these structures in the mind. Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field. Even child prodigies, such as Gauss in mathematics, Mozart in music and Bobby Fischer in chess, must have made an equivalent effort, perhaps by starting earlier and working harder than others.

It turns out that the quality of effortful study is a big factor in moving from novice to expert. So not everyone will become an expert in 10 years, only those who continue to push themselves, examine their weaknesses and strengths, and study accordingly.

I figured I could move past my plateau as a soccer player by creating ways to practice better and more often, hence the soccer ball in my office.

I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren’t making me better as a developer. I pretty much read books on autopilot these days.

It’s not till I actually spend time to think about the implications and applications for concepts in the books, explain these concepts to others, and write code to test my understanding out, that I really feel growth in my craft.

Of course, that leaves me with the question of whether some people are innately more curious or better at studying and finding ways to improve themselves, but that’s a question for the researchers to work on.

If you haven’t already, I recommend reading the article, because my summary does not do it justice.

comments edit

Silver Bullet: From http://www.tejasthumpcycles.com/Parts/LeversGripsctrls/Silver_Bullet/Silver_Bullet_Shift_Brake.jpg

In his essay No Silver Bullet: Essence and Accidents of Software Engineering, Fred Brooks makes the following postulate:

There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.

This “law” was recently invoked by Joel Spolsky in his post Lego Blocks, which prompted an interesting rebuttal by Wesner Moise.

That assertion turns out to be pure nonsense, amply disproven by numerous advances in IDEs, languages, frameworks, componentization over the past few decades. Our expectations of software and our ability have risen. A year of work takes a month or a month of work takes a day.

Whether you agree with Wesner’s position or not comes down to how you define “a single development.” It could be argued that the order-of-magnitude improvement we have now is the cumulative result of multiple improvements.

Regardless, perhaps a more lasting way to rephrase this assertion is to state that no single technology, development, or management technique will by itself produce an order-of-magnitude improvement in meeting current business needs.

In other words, sure we can produce an order-of-magnitude more productivity now than we could before, but changing business climates and consumer needs have also increased by an order-of-magnitude. Just compare a modern game like Oblivion to an older game like Ultima I.

Oblivion Screenshot

Ultima Screenshot

In a way, this is Parkinson’s Law at work:

work expands so as to fill the time available for its completion.

I’ll restate it to apply to software engineering:

Business needs and feature requirements increase to fill in the productivity gains due to silver bullets.

What do you think, is that sufficiently original to call it Haack’s Law? Wink

In any case, I think Joel’s original point still stands: building software to meet current needs will always be hard. When you think about it, the dream of building software with Lego-like blocks has been realized, but only for those who need to write software that meets the needs of users in the 1960s. For modern needs, it remains challenging.

comments edit

One of the hidden gems in ASP.NET 2.0 is the new expression syntax. For example, to display the value of a setting in the AppSettings section of your web.config, you can do this:

<asp:Label Text="<%$ AppSettings:AnotherSetting %>"
    ID="setting" 
    runat="server" />

Notice that the value of the Text property of the Label control is set to an expression that is similar to the DataBinding syntax (<%#), but instead of a pound sign (#) it uses a dollar sign ($).

Expressions are distinguished by the expression prefix. In the above example, the prefix is AppSettings. The following is a short list of built-in expression prefixes you can use. I am not sure if there are more:

  • Resources
  • ConnectionStrings
  • AppSettings
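
The other built-in prefixes work the same way. For example, the ConnectionStrings prefix lets you wire up a data source control without hard-coding the connection string (Northwind here is just a placeholder name from web.config):

<asp:SqlDataSource ID="products" runat="server"
    ConnectionString="<%$ ConnectionStrings:Northwind %>"
    SelectCommand="SELECT ProductName FROM Products" />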

But like most things in ASP.NET, this system is extensible, allowing you to easily build your own custom expressions. In this blog post, I’ll walk you through building a query string expression builder. This will allow you to display a query string value like so:

<asp:Label Text="<%$ QueryString:SomeParamName %>"
    ID="setting" 
    runat="server" />

The first step is to create a class that inherits from System.Web.Compilation.ExpressionBuilder. Be sure not to confuse this with System.Web.Configuration.ExpressionBuilder.

using System.Web.Compilation;

[ExpressionPrefix("QueryString")]
public class QueryStringExpressionBuilder : ExpressionBuilder
{
  //Implementation goes here...
}

ExpressionBuilder is an abstract class with a single abstract method to implement. This method returns an instance of CodeExpression which is part of the System.CodeDom namespace. For those not familiar with CodeDom, it’s short for Code Document Object Model. It is an API for automatic code generation. The CodeExpression class is an abstract representation of code that gets executed each time your custom expression is evaluated.

You’ll probably use something similar to the following implementation 99% of the time though (sorry for the ugly formatting, but this pretty much mimics the implementation in the MSDN documentation).

public override CodeExpression GetCodeExpression(
    BoundPropertyEntry entry
    , object parsedData
    , ExpressionBuilderContext context)
{
  Type type = entry.DeclaringType;
  PropertyDescriptor descriptor = 
    TypeDescriptor.GetProperties(type)
      [entry.PropertyInfo.Name];
  CodeExpression[] expressionArray = 
    new CodeExpression[3];
  expressionArray[0] = new 
    CodePrimitiveExpression(entry.Expression.Trim());
  expressionArray[1] = new 
    CodeTypeOfExpression(type);
  expressionArray[2] = new 
    CodePrimitiveExpression(entry.Name);

  return new CodeCastExpression(descriptor.PropertyType
    , new CodeMethodInvokeExpression(
        new CodeTypeReferenceExpression(GetType())
        , "GetEvalData"
       , expressionArray));
}

So what exactly is happening in this method? It is effectively generating code. In particular, it generates a call to a static method named GetEvalData which needs to be defined in this class. The return value of this method is then cast to the type returned by descriptor.PropertyType, which is why you see the CodeCastExpression wrapping the other code expressions.

The arguments passed to GetEvalData are represented by the CodeExpression array, expressionArray. The first argument is the expression to evaluate (this is the part after the prefix). The second argument is the target type. This is the type of the class in which the expression is being evaluated; in our case, this would be the type System.Web.UI.WebControls.Label, as we are using this expression within a Label control. The final argument is entry.Name. This is the name of the property being set using the expression; in our case, this would be the Text property of the Label.
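
To make that concrete, for the Label example above the generated call ends up roughly equivalent to the following (a hand-written approximation, not the literal compiler output):

setting.Text = (string)QueryStringExpressionBuilder.GetEvalData(
    "SomeParamName", typeof(System.Web.UI.WebControls.Label), "Text");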

You could really build any sort of code tree within this method, but as I said before, most of the time you will follow a pattern similar to this. In fact, I would probably put this method in some sort of abstract base class and then make sure to define the static GetEvalData method in the inheriting class.

Note, if you choose to move this method into an abstract base class as I described, you can’t make GetEvalData an abstract method in that class, because the generated code calls a static method and static methods can’t be abstract (a sketch of this arrangement appears after the GetEvalData listing below).

You could consider changing the above method to build a call to an instance method, but then the generated code would have to create the instance every time your expression is evaluated. It would not have access to an instance of the expression builder automatically. The choice is yours.

Here is the GetEvalData method we need to add to QueryStringExpressionBuilder.

public static object GetEvalData(string expression
    , Type target, string entry)
{
    if (HttpContext.Current == null 
      || HttpContext.Current.Request == null)
        return string.Empty;

    return HttpContext.Current
      .Request.QueryString[expression];
}

With the code for the builder completed, you simply need to add an entry within the compilation section under the system.web section of web.config like so:

<system.web>
  <compilation debug="true">
    <expressionBuilders>
      <add expressionPrefix="QueryString" 
        type="NS.QueryStringExpressionBuilder, AssemblyName"/>
    </expressionBuilders>
  </compilation>
</system.web>

This maps your custom expression class to the expression via its prefix.

In the MSDN examples, they tell you to drop your expression class file into the App_Code directory. This works when you are using the Website Project model. Fortunately, you can also use custom expressions with Web Application Projects. Simply compile your builder into an assembly and make sure to specify the AssemblyName as part of the type attribute when declaring your expression builder.

If you are using the WebSite project model and the App_Code directory, you should leave off the AssemblyName portion of the type.

comments edit

No, this is not a bait and switch post where I try to recruit you to work on Subtext.  A while ago I mentioned that I was participating in The Hidden Network.  So far, I really like it, though I think there is still room for improvement.

If you visit my site (since many of you are reading this in an RSS aggregator), you might have noticed a Jobs link at top.  The link will take you to a full listing of jobs.

The neat thing about this job listings page is that it is hosted by The Hidden Network. I simply added a CNAME record to redirect jobs.haacked.com to The Hidden Network. They made it extremely easy for me to add a jobs section to my website.

Being bored, I figured I’d take a look through the list to see what kind of job opportunities are available. Frankly, I am a little bit disappointed. Many of the jobs sound like yaaaawners. Perhaps more employers should read my guide, The Art Of The Job Post. (If that came across as arrogant, whoops.)

The Hidden Network is still pretty young, but over time I’d like to see a lot more jobs listed.  That would make using Geolocation to list jobs that are local to the reader more useful. I also think it’d be neat if I could annotate job postings.

There were a few that did catch my attention…

Sr. Admin, Programmer at Chuck E. Cheese

I don’t know if the job itself sounds interesting, but hey! It’s Chuck E. Cheese! Where a kid can be a kid! I’d ask for a signing bonus that includes free pizza and the passcode to play video games onsite for free.

Net, SQL, ASP Developer at Y! Music

I have a buddy who works at Yahoo! in Santa Monica and loves it. In a bit of personal trivia, I actually worked on the original Launch.com website, which was later bought by Yahoo! (many iterations later). I interviewed with Yahoo! in Santa Monica, but chose to go to SkillJam instead.

.NET Software Engineer at IGN

If you’re into gaming, this could be a lot of fun.

The Motley Fool has several jobs listed. Not sure what they’d be like to work for, but at least you’d get good investment advice while on the job.

There may well be others in there worth mentioning; I wasn’t so bored that I would read the details of every one. The good thing about these listings is they appear to be real jobs, and not phishing expeditions by headhunters.

This may be a longshot to even ask, but if you end up actually applying for a job because you saw it on my blog, would you let me know? 

If you are an employer, consider posting a job.

comments edit

Participating in the comments section of particularly interesting blog posts is a lot of fun and helps build community.  But one of the annoyances in doing so is that there’s really no good way to keep track of comments.  Unlike new posts in someone’s RSS Feed, most aggregators won’t tell you when there is a new comment.

Sure, there is coComment, but since I like to post comments using my RSS Aggregator via the CommentAPI, coComment isn’t such a help there.

But help is on the way.  Dare Obasanjo recently announced the beta release of Jubilee, the code name for RSS Bandit 1.5.  One of the more interesting features (and my favorite) included in this release is comment watching.

When reading an interesting post, you can right click on the post and select the Watch Comments option.  The following screenshot demonstrates.

Watch Comments

In a stroke of pure vanity, I will select a blog post in Andrew Stopford’s blog that makes a reference to me and click Watch Comments.

Now if I wait long enough, someone will eventually leave a comment on that post.  Of course, why leave it to chance? I went ahead and left a comment via the browser (sorry Andrew).

When RSS Bandit updates, it shows me that someone left a comment in my Developers category by turning that category green.

New Comment!

Expanding that node, I can dig down to the post and read the new comment.

New Comment!

Of course, this only works for blogs that support wfw:commentRss.  Unfortunately, one of my favorite blogs, CodingHorror, which happens to always have lively conversation in the comments section, doesn’t support it.  Jeff, it’s time to move to Subtext!

Kudos go out to Dare and Torsten!  Unfortunately, I’ve been overcommitted and have not been able to contribute to RSS Bandit lately.