

I know what you’re probably thinking. Did Phil forget to take his meds today? Let me explain.

Yesterday I thought I would try my hand at upgrading to Vista. But first, being the conscientious geek that I am, I tried to mirror my current drive to a brand new drive.

I plopped the new drive carefully (as carefully as one can plop anything) into the machine, and the system wouldn’t boot. So I retreated, pulling the old drive out and putting everything back the way it was.

Or so I thought.

The machine still wouldn’t boot. I ran the entire suite of Dell diagnostics tests on it. I also ran CHKDSK /F on it. Nothing. I ran out and bought a new SATA cable. Nothing. As far as any software test could tell, the drive was fine. It was being recognized by the BIOS, it just wouldn’t boot.

I plopped my new drive (again carefully) into the primary spot and was able to perform a clean install on it. So my machine can boot on my new drive with Vista, but not my old drive with Windows XP. How strange! At least I can access my old drive from Vista in order to copy important files over to the new drive.

As an aside, why do we say we perform installs? It’s not like anyone is watching, nor would they really want to. Who is being entertained?

Today, I was still dealing with the aftermath of this drive failure when I chatted up Micah on Skype. He remarked that everyone in our company seems to be having computer issues today. Jon’s having problems with his sound card, Pat misplaced his laptop, and Micah’s USB flash-drive failed. He says,

Must be something in the air. Maybe there was a huge solar flare or something.

Then it struck me.


I remembered reading that in fact there is a huge solar flare headed towards Earth. Well there you go, say no more. That explains it. A solar flare hosed my drive. Right now I am trying to mirror my new drive over my old drive to see if it’ll boot up with Vista. If it does, then it would seem to me that somehow Windows got corrupted on my old drive. Perhaps a solar particle flipped the AllowBootAndJustWork bit to 0. Just my luck.

In any case, it’s a good thing I have a couple of backup machines to blog from.


I don’t suffer from classic OCD (Obsessive Compulsive Disorder), but I do sometimes have OCD tendencies. Just ask my poor wife when we’re having dinner while my mind is still trying to resolve a thorny programming problem. Earth to Phil. Are you on this planet?

Lately, the object of my OCD-like tendencies is getting the Subtext unit test code coverage up to 40%. At the time of this writing (and after much work), it is at 38%. Why 40%? No reason. Just a nice round number that we’ve never yet hit. Remember, OCD isn’t necessarily a rational affliction.

If code coverage is my disease, TestDriven.NET with NCoverExplorer integration is my drug, and Jamie Cansdale is my dealer. He graciously gave me a pro license as a donation to the Subtext project.

So here’s the anatomy of a code coverage addiction. First, I simply right-click on our unit test project, UnitTests.Subtext, from within Visual Studio .NET 2005 (this also works with older versions of VS.NET). I select the Test With menu option and click Coverage, as in the screenshot below.

Test With Coverage in VS.NET

After running all of the unit tests, NCoverExplorer opens up with a coverage report.

NCoverExplorer coverage results

I can drill down to take a look at code coverage all the way down to the method level. In this case, let’s take a look at the Subtext.Akismet assembly. Expanding that node, I can drill down to the Subtext.Akismet namespace, then to the HttpClient class. Hey! The PostClient method only has 91% code coverage! I’ve gotta do something about that!

NCover Drill

When I click on the method, NCoverExplorer shows me the code in the right pane along with which lines of code were covered. The lines in red were not executed by my unit test. Click on the below image for a detailed look.

NCoverExplorer Code Coverage

As you can see, there are a couple of exception cases I need to test. It turns out that one of these exception cases never happens, which is why I cannot get that line covered. This may be better served using a Debug.Assert statement than throwing an exception.

If you haven’t played around with TestDriven.NET and NCoverExplorer, give it a twirl. But be careful, this is powerful stuff and you may spend the next several hours writing all sorts of code to get that last line of code tested.

Here are a few posts I’ve written that you may find useful to eke out every last line of code coverage.

Now get out there and test!


Here we are already looking ahead to learn about the language features of C# 3.0 and I am still finding new ways to make my code better with good “old fashioned” C# 2.0.

Like many people, I tend to fall into certain habits of writing code. For example, today I was writing a unit test to test the source of a particular event. I wanted to make sure that the event is raised and that the event arguments were set properly. Here’s the test I started off with (some details changed for brevity) which reflects how I would do this in the old days.

private bool eventRaised = false;

[Test]
public void SettingValueRaisesEvent()
{
    Parameter param = new Parameter("num", "int", "1");
    param.ValueChanged += OnValueChanged;
    param.Value = "42"; //should fire event.

    Assert.IsTrue(eventRaised, "Event was not raised");
}

void OnValueChanged(object sender, ValueChangedEventArgs e)
{
    Assert.AreEqual("42", e.NewValue);
    Assert.AreEqual("1", e.OldValue);
    Assert.AreEqual("num", e.ParameterName);
    eventRaised = true;
}
A couple of things rub me the wrong way with this code.

First, I do not like relying on the member variable eventRaised because another test could inadvertently set that value, unless I make sure to reset it in the SetUp method. So now I need a SetUp method.

Second, I don’t like the fact that this test requires this separate event handler method, OnValueChanged. Ideally, I would prefer that the unit test be self contained as much as possible.

Then it hits me. Of course! I should use an anonymous delegate to handle that event. Here is the revised version.

[Test]
public void SettingValueRaisesEvent()
{
    bool eventRaised = false;
    Parameter param = new Parameter("num", "int", "1");
    param.ValueChanged += 
        delegate(object sender, ValueChangedEventArgs e)
        {
            Assert.AreEqual("42", e.NewValue);
            Assert.AreEqual("1", e.OldValue);
            Assert.AreEqual("num", e.ParameterName);
            eventRaised = true;
        };
    param.Value = "42"; //should fire event.

    Assert.IsTrue(eventRaised, "Event was not raised");
}

Now my unit test is completely self-contained in a single method. Lovely!

In general, I try not to use anonymous delegates all over the place, especially delegates with a lot of code. I think they can become confusing and hard to read. But this is a situation in which I think using an anonymous delegate is particularly elegant.

Contrast this approach to the approach using Rhino Mocks I wrote about a while ago. In that scenario, I was testing that a subscriber to an event handles it properly. In this case, I am testing the event source.

Technorati Tags: Tips, TDD, C#, Rhino Mocks


Steve Harman just announced the release of Subtext 1.9.3. This is primarily a bug fix release, though there are a couple of small improvements.

You can download the latest bits here.

Many thanks to Steve and the rest of the Subtext crew for all the hard work in getting this release together. Ever since I wrote that Subtext Job Posting blog post, we’ve had a lot more active contributors lending a hand, which has been a big help. Your participation is very much appreciated!

With 1.9.3 out of the dock, all focus is now on getting Subtext 2.0 ready for deployment.

So far, progress on Subtext 2.0 has been going better than I expected. We have an early implementation of our plugin framework working, though we still have a lot of improvements and polishing to do on it. The Membership Provider is also working, though we have a few refactorings we’re considering to the data model.



Last night a unit test saved my life (with apologies). Ok, maybe not my life, but the act of writing some unit tests did save me the embarrassment of an obscure bug which was sure to hit when I least expected it. It is cases like this that made me into such a big fan of writing automated unit tests.

Not too long ago I wrote a C# Akismet API for Subtext. In writing the code, I followed design principles focused on making the API as testable as possible. For example, I applied Inversion of Control (IOC) by having the AkismetClient constructor take in an HttpClient instance as an argument. The HttpClient instance is responsible for making the actual HTTP request.

This allowed me to use Rhino Mocks to replace the HttpClient with a mock enabling me to build unit tests that ensured that the Akismet API was doing the right thing without having to make any actual web requests.
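To make the shape of that seam concrete, here is a minimal sketch of the constructor injection described above. The interface and member names here are illustrative, not the actual Subtext.Akismet API:

```csharp
// Sketch of constructor injection (Inversion of Control).
// Names are illustrative; the real Subtext.Akismet types differ.
public interface IHttpClient
{
    // Performs the actual HTTP POST and returns the response body.
    string PostRequest(string url, string formData);
}

public class AkismetClient
{
    private readonly IHttpClient httpClient;

    // The HTTP dependency is passed in, so a test can substitute
    // a mock and no real web request is ever made.
    public AkismetClient(IHttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    public bool CheckCommentForSpam(string commentText)
    {
        string response = httpClient.PostRequest(
            "http://rest.akismet.com/1.1/comment-check",
            "comment_content=" + commentText);
        return response == "true";
    }
}
```

In a unit test, Rhino Mocks (or a hand-rolled fake) stands in for IHttpClient, and the assertions run against whatever canned response the mock returns.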

Of course this approach only delays the inevitable. I still want to have an automated test for the HttpClient class.

So last night, I took a step back and revisited this excellent post by Scott Hanselman in which he shows how to use Cassini in your unit tests. I decided to update his pioneering approach to use the latest incarnation of Cassini, WebServer.WebDev. I also decided to refactor what he did into a reusable TestWebServer class in order to make the barrier to entry in using it as low as possible.


WebServer.WebDev is the built-in web server (formerly known as Cassini) used by Visual Studio .NET 2005. The main functionality of the server is located in WebDev.WebHost.dll. You can find this assembly in the GAC. On my machine, it is located in the following directory:


Note that the .NET Framework installs a shell extension for the GAC, so you won’t see this directory structure using Windows Explorer. I navigated to the directory using the command prompt.

Setting up the test web server is a two step process once you’ve located the WebDev.WebHost assembly.

  1. Copy WebDev.WebHost.dll into your unit test project and add a reference to it.
  2. Add the TestWebServer.cs file into your unit test project.

Note: To make this really reusable, you could drop this class into a separate Unit Test Helper assembly that you reference in your unit test projects. If you do go that route, be sure to heed the “//NOTE” I left for you in the ExtractResources method.

TestWebServer Usage

The following shows a couple of ways you can use this test web server in your own unit tests. If you have a single test in a fixture that needs to use the server, you can do something like this:

using (TestWebServer webServer = new TestWebServer())
{
    webServer.Start();
    //first argument is the embedded resource's full name (elided here)
    webServer.ExtractResource("...", "SomePage.aspx");
    string response = webServer.RequestPage("SomePage.aspx");
    Assert.AreEqual("Done", response);
}

If you have a set of tests that need to use the web server, I suggest using the [TestFixtureSetUp] and [TestFixtureTearDown] attributes to start the web server just once for all the tests.

private TestWebServer webServer = new TestWebServer();
private Uri webServerUrl;

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    this.webServerUrl = this.webServer.Start();
}

[TestFixtureTearDown]
public void TestFixtureTearDown()
{
    if (this.webServer != null)
        this.webServer.Stop(); //assumes the server exposes a Stop method
}

I added several helper methods to the TestWebServer class based on what Scott did.

ExtractResource takes in the full path to an embedded resource and extracts it into the test webroot. In the first code example above, I extracted an embedded resource into a file named SomePage.aspx. Be sure to call this method after the webserver is started.

RequestPage has two overloads. One which makes a simple GET request to the test web server, and the other which makes a POST request.
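Pulling the prose above together, the helper surface looks roughly like this, sketched as an interface for brevity. Member names follow the descriptions in this post, but the exact signatures are my best guess, not a verbatim copy of the class:

```csharp
// Sketch of the TestWebServer surface described in this post.
// Signatures are inferred from the prose, not copied from the source.
public interface ITestWebServer : IDisposable
{
    // Spins up WebDev.WebHost and returns the server's root URL.
    Uri Start();

    // Extracts an embedded resource into the test webroot.
    // Call this only after the server has started.
    void ExtractResource(string resourceName, string fileName);

    // Makes a simple GET request and returns the response body.
    string RequestPage(string pageName);

    // Overload that POSTs the given form data to the page.
    string RequestPage(string pageName, string postData);
}
```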


In the past, I have gone to great lengths to avoid using a web server to unit test my code, as that takes us more into the realm of integration testing. A while ago I wrote a post on how to simulate the HttpContext for unit tests without using a web server. This approach has been improved upon in the Subtext unit test codebase and has served me well.

But even that approach can only go so far. As I pointed out in my post on a Testing Mail Server, it’s a good thing to abstract out these extensibility points using an interface or a provider. But at some point, you have to test the concrete implementation. I can’t keep delegating functionality endlessly to another abstraction. Somebody has to make a real HTTP request.

So consider this approach the method of last resort.


The test webserver class can be downloaded here from my company’s tools site.


According to Brad Abrams, it looks like we’re going to have us another PDC in 2007. For those of you not in the know, PDC stands for Professional Developers Conference. These are conferences in which Microsoft shows off upcoming technologies that us developers will be using.

It’s not till October 2, so there’s still time to scrounge up the money to go, assuming I don’t lose it all at the blackjack tables at Mix 07 (I should have stayed at the craps table, Steve Maine can back me up on that).

I probably know a back door or two I can sneak through if it comes to that. I’m looking forward to having all you illustrious developers in my neighborhood (Westside!). It’s too early for a roll call, but be sure that I hear about it when you decide to attend.

I do know where to get the best Korean food.


Subtext has a pretty sweet Continuous Integration setup using CruiseControl.NET running inside a virtual server inside a real computer in my home office. The man responsible for this setup is Simone Chiaretta, who just unveiled his new English language blog, CodeClimber.

His former blog was in Italian, which made it difficult reading for me, as Babelfish totally mangled translations. When Simone is not writing about broken ribs from kite surfing or bagging the latest peak, he just might get around to writing about the CCNET setup he created for Subtext. Wink


Google Code Search is truly the search engine for the uber geek, and potentially a great source of sublime code and sublime comments. K. Scott Allen, aka Mr. OdeToCode, posted a few choice samples of prose he found while searching through code (Scott, exactly what were you searching for?).

One comment he quotes strikes me as a particularly good point to remember about using locks in multithreaded programming.

Locks are analogous to green traffic lights: If you have a green light, that does not prevent the idiot coming the other way from plowing into you sideways; it merely guarantees to you that the idiot does not also have a green light at the same time. (from File.pm).

The point the comment makes is that a lock does not prevent another thread from going ahead and accessing and changing a member.  Locks only work when every thread is “obeying the rules”.

Unfortunately, unlike an intersection, there is no light to tell you whether or not the coast is clear. When you are about to write code to access a static member, you don’t necessarily know whether or not that member might require having a lock on it. To really know, you’d have to scan every access to that member.

This is why concurrency experts such as Joe Duffy recommend following a locking model.  A locking model can be as simple or as complex as needed by your situation.
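In C# terms, “obeying the rules” means every access to the shared member goes through the same designated lock object, which is a convention the compiler cannot enforce for you. A small sketch of the idea:

```csharp
public class Counter
{
    // One designated lock object guards 'count'. The lock only works
    // if *every* reader and writer acquires it; nothing stops a rogue
    // method from touching 'count' directly and running the red light.
    private static readonly object countLock = new object();
    private static int count = 0;

    public static void Increment()
    {
        lock (countLock)
        {
            count++; // the read-modify-write is safe only inside the lock
        }
    }

    public static int GetCount()
    {
        lock (countLock) // readers follow the same rule
        {
            return count;
        }
    }

    // The "idiot running the red light": compiles fine, obeys no lock,
    // and silently breaks the locking model above.
    public static void UnsafeIncrement()
    {
        count++;
    }
}
```

A locking model is simply the written-down answer to “which lock guards which member?”, so that no one writes UnsafeIncrement by accident.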

In any case, have fun with Google Code Search. You might find yourself reviewing the code of a Star Trek text-based sex adventure.


Just thought I would highlight something I mentioned in my last post because I thought it was particularly funny. I wrote about the joys of using Google Code Search to search through source code for interesting comments. Definitely a geeky pastime.

However that geekiness is overshadowed by something interesting I found. In this case, it’s not the comments that are interesting, but the actual code itself.

It appears to be a lonely geek’s fantasy written in code: a text-based adventure about sexcapades with Dr. Beverly Crusher on the Star Trek Enterprise. Here’s a tame snippet to give you an idea.

dancewithdesc ( actor ) = { "You and [Dr. Beverly Crusher] dance a quiet, soft, close dance around her quarters. It is really quite chaste, but you find yourself intoxicated by her scent and stimulated by your grasp and hers, both of which stray far lower on each others’ backs than is supposedly called for with this step.";


You have to read it to believe it (warning, trashy novel style adult content).


UPDATE: Looks like the DNS issue is starting to get resolved. The fix may not have fully propagated yet.

If your Akismet Spam Filtering is currently broken, it is probably due to a DNS issue going around. I reported it to the akismet mailing list and found that people all over the world are having the same issue. It is not just a Comcast issue.

The temporary fix is to add the following entry into your hosts file: rest.akismet.com

Hopefully the Akismet team will fix this problem shortly.


My favorite unit testing framework just released a new version. Andrew Stopford has the announcement here and you can download the release from the MbUnit site.

I met Andrew at Mix 06 early this year and he’s a class act and great project lead. I’ve been following MbUnit’s progress on and off and am really happy with the team’s responsiveness to my submitted issues.

My one tiny contribution to the project was to purchase the mbunit.com domain for them. Perhaps a little bribe to get my feature requests looked at promptly. ;)

If you are wondering why I prefer MbUnit over NUnit, check out these posts:


Ok, I could use some really expert help here. I really like using the built-in WebServer.WebDev web server that is a part of Visual Studio.

For one thing, it makes getting a new developer working on Subtext (or any project) that much faster. Just get the latest code and hit CTRL+F5 to see the site in your browser. No pesky IIS setup.

Today though, I ran into my first real problem with this approach. When running the latest Subtext code from our trunk, I am getting a SecurityException during a call to Server.Transfer.

Stepping through the code in the debugger, the page I transfer to executes just fine without throwing an exception.

Based on the stack trace, the exception occurs when the content is being flushed to the client. A security demand for Unmanaged Code is the cause of this during a call to the IHttpResponseElement.Send method of the HttpResponseUnmanagedBufferElement class.

What I don’t understand is why this particular class is handling my request instead of the HttpResponseBufferElement class. This code seems to work fine when I use IIS, so I think it’s a problem with WebServer.WebDev. Does anybody know anyone who understands these internals well enough to enlighten me? I’d be eternally grateful.

I posted this question on the MSDN forums as well.


Recently while picking up a few items at Target, I decided to buy a cheapo soccer ball. Now those who know me know I’m a bit of a fanatic about playing soccer, willingly paying good money for a quality ball.

But this ball is not for playing outdoors. I keep it in my office so I can dribble it during breaks, deftly avoiding obstacles on my way to the bathroom, practicing moves during phone calls and long compilations.

It’s a minor thing, but I am already noticing improvement when playing for real, just through the benefits of visualization and practice. I wouldn’t recommend this for every sport. Images of Craig Andera with a hockey stick breaking furniture in his office come to mind.

As software developers, we tend to hold the idea of innate talent in very high regard. How often do you hear software pundits saying, “Either you got it, or you don’t”?

However, according to a recent Scientific American article, The Expert Mind, this may not be as much the case as we think.

At this point, many skeptics will finally lose patience. Surely, they will say, it takes more to get to Carnegie Hall than practice, practice, practice. Yet this belief in the importance of innate talent, strongest perhaps among the experts themselves and their trainers, is strangely lacking in hard evidence to substantiate it.

The article delves into studies of chessmasters who, when briefly shown a random chessboard, cannot recall the positions of its pieces any better than non-chessmasters, but who have significantly stronger recall when those pieces represent configurations that could arise from actual game play.

The article concludes that chessmasters build structures in their brains to recognize patterns in chess, and that to become an expert in chess takes around ten years.

The one thing that all expertise theorists agree on is that it takes enormous effort to build these structures in the mind. Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field. Even child prodigies, such as Gauss in mathematics, Mozart in music and Bobby Fischer in chess, must have made an equivalent effort, perhaps by starting earlier and working harder than others.

It turns out that the quality of effortful study is a big factor in moving from novice to expert. So not everyone will become an expert in 10 years, only those who continue to push themselves, examine their weaknesses and strengths, and study accordingly.

I figured I could move past my plateau as a soccer player by creating ways to practice better and more often, hence the soccer ball in my office.

I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren’t making me better as a developer. I pretty much read books on autopilot these days.

It’s not till I actually spend time to think about the implications and applications for concepts in the books, explain these concepts to others, and write code to test my understanding out, that I really feel growth in my craft.

Of course, that leaves me with the question of whether some people are innately more curious or better at studying and finding ways to improve themselves, but that’s a question for the researchers to work on.

If you haven’t already, I recommend reading the article, because my summary does not do it justice.


In his essay No Silver Bullet: Essence and Accidents of Software Engineering, Fred Brooks makes the following postulate:

There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.

This “law” was recently invoked by Joel Spolsky in his post Lego Blocks, which prompted an interesting rebuttal by Wesner Moise.

That assertion turns out to be pure nonsense, amply disproven by numerous advances in IDEs, languages, frameworks, componentization over the past few decades. Our expectations of software and our ability have risen. A year of work takes a month or a month of work takes a day.

Whether you agree with Wesner’s position or not comes down to how you define a single development.  It could be argued that the order of magnitude improvement we have now is a cumulative result of multiple improvements.

Regardless, perhaps a more lasting way to rephrase this assertion is to state that no single technology, development, or management technique will by itself produce an order-of-magnitude improvement in meeting current business needs.

In other words, sure we can produce an order-of-magnitude more productivity now than we could before, but changing business climates and consumer needs have also increased by an order-of-magnitude. Just compare a modern game like Oblivion to an older game like Ultima I.

Screenshot Ultima

In a way, this is Parkinson’s Law at work:

work expands so as to fill the time available for its completion.

I’ll restate it to apply to software engineering:

Business needs and feature requirements increase to fill in the productivity gains due to silver bullets.

What do you think, is that sufficiently original to call it Haack’s Law? Wink

In any case, I think Joel’s original point still stands. Building software to meet current needs will always be hard. When you think about it, the dream of building software with lego-like blocks has been realized, but only for those who need to write software that meets the needs of users in the 1960s. For modern needs, it remains challenging.


One of the hidden gems in ASP.NET 2.0 is the new expression syntax. For example, to display the value of a setting in the AppSettings section of your web.config, you can do this:

<asp:Label Text="<%$ AppSettings:AnotherSetting %>"
    runat="server" />

Notice that the value of the Text property of the Label control is set to an expression that is similar to the DataBinding syntax (<%#), but instead of a pound sign (#) it uses a dollar sign ($).

Expressions are distinguished by the expression prefix. In the above example, the prefix is AppSettings. The following is a short list of built-in expression prefixes you can use. I am not sure if there are more:

  • Resources
  • ConnectionStrings
  • AppSettings
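For the AppSettings example above to resolve, web.config needs a matching entry under appSettings. The key must match the text after the prefix; the value here is just an example:

```xml
<configuration>
  <appSettings>
    <add key="AnotherSetting" value="Hello from web.config!" />
  </appSettings>
</configuration>
```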

But like most things in ASP.NET, this system is extensible, allowing you to easily build your own custom expressions. In this blog post, I’ll walk you through building a query string expression builder. This will allow you to display a query string value like so:

<asp:Label Text="<%$ QueryString:SomeParamName %>"
    runat="server" />

The first step is to create a class that inherits from System.Web.Compilation.ExpressionBuilder. Be sure not to confuse this with System.Web.Configuration.ExpressionBuilder.

using System.Web.Compilation;

public class QueryStringExpressionBuilder : ExpressionBuilder
{
    //Implementation goes here...
}

ExpressionBuilder is an abstract class with a single abstract method to implement. This method returns an instance of CodeExpression which is part of the System.CodeDom namespace. For those not familiar with CodeDom, it’s short for Code Document Object Model. It is an API for automatic code generation. The CodeExpression class is an abstract representation of code that gets executed each time your custom expression is evaluated.

You’ll probably use something similar to the following implementation 99% of the time though (sorry for the ugly formatting, but this pretty much mimics the implementation in the MSDN documentation).

public override CodeExpression GetCodeExpression(
    BoundPropertyEntry entry
    , object parsedData
    , ExpressionBuilderContext context)
{
  Type type = entry.DeclaringType;
  PropertyDescriptor descriptor = 
    TypeDescriptor.GetProperties(type)[entry.PropertyInfo.Name];
  CodeExpression[] expressionArray = 
    new CodeExpression[3];
  expressionArray[0] = new 
    CodePrimitiveExpression(entry.Expression.Trim());
  expressionArray[1] = new 
    CodeTypeOfExpression(type);
  expressionArray[2] = new 
    CodePrimitiveExpression(entry.Name);

  return new CodeCastExpression(descriptor.PropertyType
    , new CodeMethodInvokeExpression(
        new CodeTypeReferenceExpression(GetType())
        , "GetEvalData"
       , expressionArray));
}

So what exactly is happening in this method? It is effectively generating code. In particular, it generates a call to a static method named GetEvalData which needs to be defined in this class. The return value of this method is then cast to the type returned by descriptor.PropertyType, which is why you see the CodeCastExpression wrapping the other code expressions.

The arguments passed to GetEvalData are represented by the CodeExpression array, expressionArray. The first argument is the expression to evaluate (this is the part after the prefix). The second argument is the target type. This is the type of the class in which the expression is being evaluated. In our case, this would be the type System.Web.UI.WebControls.Label, as we are using this expression within a Label control. The final argument is the entry. This is the name of the property being set using the expression. In our case, this would be the Text property of the Label.

You could really build any sort of code tree within this method, but as I said before, most of the time, you will follow a similar pattern as this. In fact, I would probably put this method in some sort of abstract base class and then make sure to define the static GetEvalData method in your inheriting class.

Note, if you choose to move this method into an abstract base class as I described, you can’t make GetEvalData an abstract method in that class because we generated a call to a static method.

You could consider changing the above method to build a call to an instance method, but then the generated code would have to create an instance every time your expression is evaluated. It would not have access to an instance of the expression builder automatically. The choice is yours.
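If you do extract a base class as suggested, the sketch below shows the idea. The class names are mine, not from any framework. Because GetEvalData is static it cannot be declared abstract; the base class simply generates a call against GetType(), which resolves to the concrete subclass, so a subclass that forgets to define the static method fails when the page compiles:

```csharp
// Sketch only: the CodeDom construction is the same as in the
// GetCodeExpression method shown earlier in this post.
public abstract class StaticEvalExpressionBuilder : ExpressionBuilder
{
    public override CodeExpression GetCodeExpression(
        BoundPropertyEntry entry,
        object parsedData,
        ExpressionBuilderContext context)
    {
        // ...build the CodeCastExpression/CodeMethodInvokeExpression
        // exactly as before; GetType() here returns the subclass type,
        // so "GetEvalData" binds to the subclass's static method...
        throw new NotImplementedException("elided for brevity");
    }
}

public class QueryStringExpressionBuilder : StaticEvalExpressionBuilder
{
    // Not an override: each subclass supplies its own static method.
    public static object GetEvalData(string expression,
        Type target, string entry)
    {
        // subclass-specific evaluation goes here
        return null;
    }
}
```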

Here is the GetEvalData method we need to add to QueryStringExpressionBuilder.

public static object GetEvalData(string expression
    , Type target, string entry)
{
    if (HttpContext.Current == null 
      || HttpContext.Current.Request == null)
    {
        return string.Empty;
    }

    return HttpContext.Current
        .Request.QueryString[expression];
}

With the code for the builder completed, you simply need to add an entry within the compilation section under the system.web section of web.config like so:

<compilation debug="true">
    <expressionBuilders>
      <add expressionPrefix="QueryString" 
        type="NS.QueryStringExpressionBuilder, AssemblyName"/>
    </expressionBuilders>
</compilation>

This maps your custom expression class to the expression via its prefix.

In the MSDN examples, they tell you to drop your expression class file into the App_Code directory. This works when you are using the Website Project model. Fortunately, you can also use custom expressions with Web Application Projects. Simply compile your builder into an assembly and make sure to specify the AssemblyName as part of the type attribute when declaring your expression builder.

If you are using the WebSite project model and the App_Code directory, you should leave off the AssemblyName portion of the type.


No, this is not a bait and switch post where I try to recruit you to work on Subtext.  A while ago I mentioned that I was participating in The Hidden Network.  So far, I really like it, though I think there is still room for improvement.

If you visit my site (since many of you are reading this in an RSS aggregator), you might have noticed a Jobs link at top.  The link will take you to a full listing of jobs.

The neat thing about this job listings page is that it is hosted by The Hidden Network. I simply added a CNAME record to point jobs.haacked.com at The Hidden Network. They made it extremely easy for me to add a jobs section to my website.

Being bored, I figured I’d take a look through the list to see what kind of job opportunities are available. Frankly, I am a little bit disappointed. Many of the jobs sound like yaaaawners. Perhaps more employers should read my guide, The Art Of The Job Post. (If that came across as arrogant, whoops.)

The Hidden Network is still pretty young, but over time I’d like to see a lot more jobs listed.  That would make using Geolocation to list jobs that are local to the reader more useful. I also think it’d be neat if I could annotate job postings.

There were a few that did catch my attention…

Sr. Admin, Programmer at Chuck E. Cheese I don’t know if the job itself sounds interesting, but hey! It’s Chuck E. Cheese!  Where a kid can be a kid! I’d ask for a signing bonus that includes free pizza and the passcode to play video games onsite for free.

Net, SQL, ASP Developer at Y! Music I have a buddy who works at Yahoo! in Santa Monica and loves it. In a bit of personal trivia, I actually worked on the original Launch.com website, which was later bought by Yahoo! (many iterations later).  I interviewed with Yahoo! in Santa Monica, but chose to go to SkillJam instead.

.NET Software Engineer at IGN If you’re into gaming, this could be a lot of fun.

The Motley Fool has several jobs listed.  Not sure what they’d be like to work for, but at least you’d get good investment advice while on the job.

There may well be others in there worth mentioning. I wasn’t so bored that I read the details of every one.  The good thing about these listings is they appear to be real jobs, and not fishing expeditions by head hunters.

This may be a longshot to even ask, but if you end up actually applying for a job because you saw it on my blog, would you let me know? 

If you are an employer, consider posting a job.

0 comments suggest edit

Participating in the comments section of particularly interesting blog posts is a lot of fun and helps build community.  But one of the annoyances in doing so is that there’s really no good way to keep track of comments.  Unlike new posts in someone’s RSS Feed, most aggregators won’t tell you when there is a new comment.

Sure, there is coComment, but since I like to post comments using my RSS Aggregator via the CommentAPI, coComment isn’t such a help there.

But help is on the way.  Dare Obasanjo recently announced the beta release of Jubilee, the code name for RSS Bandit 1.5.  One of the more interesting features (and my favorite) included in this release is comment watching.

When reading an interesting post, you can right click on the post and select the Watch Comments option.  The following screenshot demonstrates.


In a stroke of pure vanity, I will select a blog post in Andrew Stopford’s blog that makes a reference to me and click Watch Comments.

Now if I wait long enough, someone will eventually leave a comment on that post.  Of course, why leave it to chance? I went ahead and left a comment via the browser (sorry Andrew).

When RSS Bandit updates, it shows me that someone left a comment in my Developers category by turning that category green.


Expanding that node, I can dig down to the post and read the new comment.


Of course, this only works for blogs whose feeds support wfw:commentRss.  Unfortunately, one of my favorite blogs, CodingHorror, which happens to always have lively conversation in the comments section, doesn’t support it.  Jeff, it’s time to move to Subtext!
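For reference, a feed that supports comment watching includes a wfw:commentRss element in each item, pointing at a feed of that post’s comments. The URLs below are purely illustrative:

```xml
<rss version="2.0" xmlns:wfw="http://wellformedweb.org/CommentAPI/">
  <channel>
    <item>
      <title>An interesting post</title>
      <link>http://example.com/posts/123.aspx</link>
      <!-- Feed of comments for this item; aggregators poll this -->
      <wfw:commentRss>http://example.com/comments/rss/123.aspx</wfw:commentRss>
    </item>
  </channel>
</rss>
```

An aggregator like RSS Bandit can then poll the comment feed for each watched post and flag new entries.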

Kudos go out to Dare and Torsten!  Unfortunately, I’ve been overcommitted and have not been able to contribute to RSS Bandit lately.

0 comments suggest edit

Now this is a stroke of genius.  If you want people to consider making their .NET applications work on Mono, give them a tool that informs them ahead of time how much trouble (or how easy) it will be to migrate to Mono.

That is exactly what Jonathan Pobst did with the Mono Migration Analyzer (found via Miguel de Icaza).  This tool analyzes compiled assemblies and generates a report identifying issues that might prevent your application from running on Mono.  This report serves as a guide to porting your application to Mono.

Having Subtext run on Mono is a really distant goal for us, but a tool like this could advance the timetable on such a feature, in theory.

I tried to run the analyzer on every assembly in the bin directory of Subtext, but the analyzer threw an exception, doh!  That’s my “Gift Finger” at work (I could not find where to submit error reports, so I sent an email to Mr. Pobst. I hope he doesn’t mind).


I then re-ran the analyzer selecting only the Subtext.* assemblies.

Subtext Moma

As you can see, we call 12 methods that are still missing in Mono, 23 methods that are not yet implemented, and 13 that are on their to-do list.  Clicking on View Detail Report provides a nice report on which methods are problematic.

In a really smart move, Moma also makes it quite easy to submit results to the Mono team.


This is a great way to help them plan ahead and prioritize their efforts.  Just for fun, I ran Moma against the BlogML 2.0 assembly and it passed with flying colors.   Moma Blogml


asp.net, code 0 comments suggest edit

One of the benefits of writing an ASP.NET book is that it forces me to spend a lot of time spelunking deep in the bowels of ASP.NET uncovering all sorts of little gems I never noticed the first time around.

Many of these little morsels should end up in the book, but I thought I would blog about a few of them as I go along. 

This is all part of the weird situation I find myself in while writing this book. I thought I would just sit down and all the words would flow. Instead, no matter how motivated I am, every time I sit down to write I spend two hours procrastinating for every one hour of writing.  What gives!?

In any case, one of the gems I discovered is the ClientScriptManager.RegisterExpandoAttribute method.  This method allows you to add custom properties to a control.  These properties are not rendered in the HTML as attributes, but are simply attached to the control in the DOM via JavaScript.

This is nice for control authors who want to make a custom control client scriptable, but still maintain XHTML compliance, since XHTML doesn’t allow arbitrary attributes on tags.

The following is a really simple example.  I present here a custom control that inherits from Label.

public class ExpandoControl : Label
{
    //Code to be filled in.
}

The AddAttributesToRender method is the appropriate place to call RegisterExpandoAttribute.

protected override void AddAttributesToRender(HtmlTextWriter writer)
{
    base.AddAttributesToRender(writer);
    Page.ClientScript.RegisterExpandoAttribute(ClientID
        , "contenteditable", "true");
}

Now we can access the contenteditable property of this control via client script, assuming the control is declared with an ID of “expando”. The following JavaScript demonstrates.

var expando = document.getElementById('expando');
alert('Content editable: ' + expando.contenteditable);

This is a good approach to take when developing a client-side API for your custom controls.