comments edit

Update: I’ve created a new NuGet Package for Identicon Handler (Package Id is “IdenticonHandler”) which will make it much easier to include this in your own projects.

A while ago, Jeff Atwood blogged about Identicons for .NET. An Identicon is an anonymized visual glyph that can represent an IP address. I likened it to a Graphical Digital Fingerprint.


The original concept and Java implementation was created by Don Park.

Afterwards, Jeff and Jon Galloway became excited by the idea and ported Don’s code to C# and .NET 2.0 and released it on his website.

This weekend, we’ve spent some time working out a few kinks and performance improvements and are proud to release version 1.1 on CodePlex.

Why CodePlex?

We chose CodePlex for this project because the codebase for this is extremely small, so the patch issue I mentioned in my critique, A Comparison of TFS vs Subversion for Open Source Projects, is not quite as large an issue.

We don’t expect this project to grow very large and have a huge number of releases. This code does one thing, and hopefully, does it well.

So in that respect, CodePlex seems like a great host for this type of small project. It is really easy to get other developers up and running if need be.

Having said that, I probably wouldn’t host a large project here yet based on the critique I mentioned.

code comments edit

After reading Scott Hanselman’s post on Managed Snobism, which covers the snobbery some have against managed languages because they don’t “perform” well, I had to post the following rant in his comments:

What is it that makes huge populations of developers think they’re working on a Ferrari when their app is really just a Pinto?

“I’m writing a web app that pulls data from a database and puts it on a web page. I never use ‘foreach’ because I heard it’s slower than explicitly iterating a for loop.”

In my time as a developer, I’ve run into too many instances of this kind of Micro Optimization, also known as Premature Optimization.

Premature optimization tends to lead “clever” developers to shoot themselves in the foot (metaphorically speaking, of course). Let’s look at one common example I’ve run into from time to time—double check locking for singletons.

Double Check Locking Refresher

As a refresher, here is an example of the double check pattern.

public sealed class MyClass
{
  private static object _synchBlock = new object();
  private static volatile MyClass _singletonInstance;
  //Makes sure only this class can create an instance.
  private MyClass() {}
  //Singleton property.
  public static MyClass Singleton
  {
    get
    {
      if(_singletonInstance == null)
      {
        lock(_synchBlock)
        {
          // Need to check again, in case another cheeky thread
          // slipped in there while we were acquiring the lock.
          if(_singletonInstance == null)
          {
            _singletonInstance = new MyClass();
          }
        }
      }
      return _singletonInstance;
    }
  }
}
The premise behind this approach is that all this extra ugly code will wring out better performance by lazy loading the singleton. If it is never accessed, it never needs to be instantiated. Of course this raises the question, Why define a Singleton if it’s quite likely it’ll never get used?

The Singleton property checks the static singleton member for null. If it is null, it attempts to acquire a lock before checking if it’s null again. Why the second null check? Well, in the time our current thread took to acquire the lock, another thread could have snuck in and initialized the singleton.

Note that we use the volatile keyword for the _singletonInstance static member. Why? Long story made short, this has to do with how different memory models can reorder reads and writes. For the current CLR you can ignore the volatile keyword in this case. But if you run your code on Mono or some other future platform, you may need it, so there’s no harm in leaving it in.

Criticisms or If this is fast, how much faster is triple check locking?

Jeffrey Richter, in his book CLR via C#, criticizes this approach (starting on page 639) as “not that interesting.” (Yes, he can be scathing!)

The double-check locking technique is less efficient than the class constructor technique because you need to construct your own lock object (in the class constructor) and write all of the additional locking code yourself.

The cost of initializing the singleton instance would have to be significantly more than the cost of instantiating the object used to synchronize access to it (not to mention all the conditional checks when accessing the singleton) to be worth it.

A Better Approach? The No Look Pass of Singletons

So what’s the better approach? Use a static initializer in what I call the No Check No Locking Technique.

public sealed class MyClass
{
  private static MyClass _singletonInstance = new MyClass();

  //Makes sure only this class can create an instance.
  private MyClass() {}

  //Singleton property.
  public static MyClass Singleton
  {
    get { return _singletonInstance; }
  }
}

The CLR guarantees that the code in a static constructor (implicit or explicit) is only called once. You get all that thread safety for free! No need to write your own error prone locking code in this case and no need to dig through Memory Model implications. It just works, unlike your Pinto, sorry, “Ferrari”.

See, sometimes you can have your cake and eat it too. This code, which is simpler and easier to understand, happens to perform better and requires one less object instantiation. How do you like them apples?

It turns out that this approach is also recommended for Java, as it was discovered that the double check locking approach wasn’t guaranteed to work.
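One subtlety worth knowing (my addition, not from the original post): without an explicit static constructor, the C# compiler marks the type beforefieldinit, which permits the runtime to run the field initializer earlier than first use. If you care about precisely lazy initialization, an empty static constructor pins it down:

```csharp
public sealed class MyClass
{
    private static readonly MyClass _singletonInstance = new MyClass();

    // An explicit static constructor (even an empty one) removes the
    // beforefieldinit flag, so the runtime defers running the field
    // initializer until the type is first accessed.
    static MyClass() {}

    private MyClass() {}

    public static MyClass Singleton
    {
        get { return _singletonInstance; }
    }
}
```

In practice the difference rarely matters, since “sometime before first use” is almost always good enough.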

What!? You’re Still Using Singletons?!

Now that I’ve gone through all this trouble to show you the proper way to create a Singleton, I leave you with this thought. Should a well designed system use Singletons in the first place, or is it just a stupid idea? That’s a topic for another time.

Please note that double check locking doesn’t only apply to Singletons. It just happens to be the place where it is most often seen in the wild.

comments edit

It’s comments like this that remind me why I enjoy blogging.

Holy shit!

I found this post whilst searching for POST timeout and thought it was a long shot for my problem. Well, it worked for me!

Thank you so much!!!

Not to mention that it serves to validate my previous point about Search Driven Development. It worked for this guy.

The comment is here in its original context.

comments edit

With all the advances in software development in the past few years, I would have to point to Google and Google Groups as the two tools that provide the biggest productivity enhancements for me as a software developer. This fact is probably nothing new to any of you.

Search as a development tool is a phenomenon some are starting to refer to as Search Driven Development (not to be confused with Test Driven Development).

Let’s face it, at the rate that new technology is being churned out these days, and given the huge size of many of these frameworks we use, it is impossible to learn everything up front. At some point, we have to stop RTFM’ing, put the documentation down, and start coding. And when we run into trouble, we thank our lucky stars that Google is there to save the day.

Wouldn’t it be great to have some of that search power integrated into your IDE? It turns out that Koders have done just that. They provide two free IDE plugins, one for Eclipse and one for Visual Studio .NET, available on their website in the downloads section.

When you go to the site, there’s a little animation demonstrating the plugin. Click the View Again button if you missed it.

Here’s a screenshot I took of SmartSearch™ in action. After typing out the method name, a moment later, the result shows up.

Screenshot of Koders SmartSearch™ in action

The SmartSearch feature is a bit Clippy-like at times and sometimes exhibits a bit of lag, making it less useful than it could be. You may just want to turn it off and use the plugin’s search box directly.

Though there is room for improvement, I think SmartSearch™ is a really interesting application of context-based search and could be quite useful as a double check while writing code. Oh hey, there are already 100 implementations of this method. Let’s see how mine stacks up. Avert my eyes from the GPL licensed code!


Under the hood, these plugins make use of the Koders search engine. This engine directly indexes source control repositories and allows users to quickly search and browse through Open Source code. It includes a nice interface and provides all the information necessary (such as the license) so you can make an informed decision on whether to use the code or not. You can also choose to filter by language and license.

Given my interest in Open Source software, I had heard of Koders, but didn’t know about their plugins till today, when I had lunch with Darren Rush, the CEO of Koders. Little did I know, until Darren contacted me via my blog, that Koders is based in Los Angeles! Darren turned me on to the term Search Driven Development.

Finally! A Los Angeles based company doing something really interesting in the Open Source space that isn’t part of “The Industry”. Very Cool!

As an aside, during our conversation, we wondered why L.A. doesn’t have anywhere close to the tech industry that the Bay Area does. We seem to have all the elements here, but not the community. I tend to think it’s because this area is dominated by the film industry. He pointed out that geography, combined with the horrible traffic, creates pockets of communities. Probably a bit of both.

But I digress.

In any case, lest you think I’m shilling (Yeah, he bought lunch, but I can’t be bought that cheaply!), the other player in this field that I’ve heard about (apart from the obvious 800 lb gorilla) is Krugle. While their site has a nice color scheme and look and feel, I found Koders easier to use because of its similarity in layout to Google (did I mention Koders is L.A. based?).

I think sticking to the Google Search look (search box in the middle) is a smart move for any search site. As soon as I see such a site, I know what to do and where to type. Krugle has a beta plugin for Eclipse, but doesn’t seem to have anything for Visual Studio .NET yet.

code comments edit

Some people think the ViewState is the spawn of the devil. Not one to be afraid of being in bed with the devil, I feel a tad bit less negative towards it, as it can be very useful.

Still, it has its share of disadvantages. It sure can get bloated. Not only that, but disabling ViewState can wreak havoc with the functionality of many controls.

This is why ASP.NET 2.0 introduces the control state. The basic idea is that there is some state that should be considered the data for the control, while other state is necessary for the control to function. For example, the contents of a GridView. The control doesn’t absolutely need this data persisted across postbacks to function properly. You could choose to reload it from the database, Cache, or Session.

In contrast is the state of the selected node in a TreeView. This is state that is necessary for the control to function properly across postbacks.

Unlike the ViewState, the control state isn’t implemented as a property bag. You have to do a little bit of extra work to make use of it. Namely, there are two methods you have to implement in your custom control.

  • LoadControlState – Restores the control state from a previous page request. ASP.NET calls this method passing in the control state as an object to this method.
  • SaveControlState – Saves any changes to control state since the last post back. You need to return the state of the control as the return value of this method. ASP.NET will store it.

Your custom control must also register the fact that it needs the control state by calling Page.RegisterRequiresControlState.

A Demonstration That Makes This All Clear As Mud

I’ve put together a simple control to demonstrate the control state. Now before I go any further, I must warn you not to copy and paste this implementation. This implementation is designed to clarify how the control state works. I will present another implementation that describes a safer approach, which you can feel free to copy and paste. You’ll see what I mean.

public class ControlStateDemo : WebControl
{
  public int ViewPostCount
  {
    get { return (int)(ViewState["ViewProp"] ?? 0); }
    set { ViewState["ViewProp"] = value; }
  }

  public int ControlPostCount
  {
    get { return controlPostCount; }
    set { controlPostCount = value; }
  }
  private int controlPostCount;

  protected override void OnInit(EventArgs e)
  {
    //Let the page know this control needs the control state.
    Page.RegisterRequiresControlState(this);
    base.OnInit(e);
  }

  protected override void OnLoad(EventArgs e)
  {
    ViewPostCount++;
    ControlPostCount++;
    base.OnLoad(e);
  }

  protected override void Render(HtmlTextWriter writer)
  {
    writer.Write("<p>ViewState: " + this.ViewPostCount + "</p>");
    writer.Write("<p>ControlState: " + this.ControlPostCount + "</p>");
  }

  protected override void LoadControlState(object savedState)
  {
    int state = (int)(savedState ?? 0);
    this.controlPostCount = state;
  }

  protected override object SaveControlState()
  {
    return controlPostCount;
  }
}
This control has two properties. One backed by the ViewState and the other backed by a private member variable. Notice that we register this control with the Page in the OnInit method.

In the OnLoad method, we increment each property. For demonstration purposes, we need these properties to change on each postback, and this is as good a method as any.

In the Render method, we simply output the values of the two properties. So far so good, eh?

Now we get to the LoadControlState method. This method is called by ASP.NET early in the control lifecycle (after OnInit but before LoadViewState) in order to provide your control with the saved control state from the previous request.

In this case, we can cast this value to an int and set the control’s state (the value of controlPostCount) to this value.

The SaveControlState method provides ASP.NET the data to store in the control state as the return value. In this example, we return the value of controlPostCount. This is how we knew we could cast the value to an int in LoadControlState.

Now if I drop this control onto a page with a Button control, let’s see what happens after a few postbacks.

Image showing the ViewState and Control State counters after a few postbacks

As expected, both values increment, as they are persisted across postbacks. But what happens if we disable ViewState on the page and click the button a few more times?

Image showing the ViewState counter with a value of 1 and the Control State counter continuing to increment

As you can see, we retain the control state, while the ViewState is disabled.

But What About Inherited Controls?

I am so glad you asked! In this example, I inherited from WebControl, but what if I had inherited from TreeView, or some other control that makes use of the control state? My implementation of LoadControlState and SaveControlState pretty much obliterates the control state for the base class.

The class I wrote here is intentionally simple to show you no real magic is going on. Let’s demonstrate the proper way to save and load the control state by creating a class that inherits from this control.

public class SubControlStateDemo : ControlStateDemo
{
  public int AnotherCount
  {
    get { return this.anotherCount; }
    set { this.anotherCount = value; }
  }

  private int anotherCount;

  protected override void OnLoad(EventArgs e)
  {
    AnotherCount++;
    base.OnLoad(e);
  }

  protected override void Render(HtmlTextWriter writer)
  {
    base.Render(writer);
    writer.Write("<p>AnotherCount: " + this.AnotherCount + "</p>");
  }

  protected override object SaveControlState()
  {
    //grab the state for the base control.
    object baseState = base.SaveControlState();

    //create an array to hold the base control’s state
    //and this control’s state.
    object thisState = new object[] {baseState, this.anotherCount};
    return thisState;
  }

  protected override void LoadControlState(object savedState)
  {
    object[] stateLastRequest = (object[]) savedState;

    //Grab the state for the base class and give it to it.
    object baseState = stateLastRequest[0];
    base.LoadControlState(baseState);

    //Now load this control’s state.
    this.anotherCount = (int) stateLastRequest[1];
  }
}

In this control, we inherit from the ControlStateDemo control I wrote earlier and add a new property called AnotherCount. The main thing to focus on here is our new implementation of SaveControlState and LoadControlState. We now take great pains to make sure that the base control gets the value it is expecting.

In SaveControlState, the first thing we do is grab the control state from the base control by calling base.SaveControlState. As you recall, this holds the value for the private member controlPostCount.

Since we want to add our own private member, anotherCount to the control state, we create an array to store both values and then return this array to the caller.

Within the LoadControlState method, we know we’re going to be passed in an object array and that the first element of the array is the control state for our base class. So in that method, we grab the first element and pass it to the method call base.LoadControlState, thus giving the base class what it expects to receive for its control state.

We then grab the second element, which is our control state, and set anotherCount to this value.

Let’s look at a screenshot of the result in action. Looks like everything is humming along nicely.

Screen showing our new property also being saved and restored

I would recommend using this approach anytime you implement control state in a custom control because you never know when you might override the control state for a base class.
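As an aside, the framework’s own controls typically package base-plus-derived state with the System.Web.UI.Pair class rather than an object array. Here’s an equivalent sketch of the two overrides above using Pair (same behavior, just a different container):

```csharp
protected override object SaveControlState()
{
    // First holds the base class’s state, Second holds ours.
    return new Pair(base.SaveControlState(), this.anotherCount);
}

protected override void LoadControlState(object savedState)
{
    Pair state = (Pair)savedState;
    base.LoadControlState(state.First);
    this.anotherCount = (int)state.Second;
}
```

Either way works; Pair just makes the intent a little more obvious to anyone familiar with the framework source.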

comments edit

When you see the following in your CSS

  margin-top: 10px;
  margin-right: 20px;
  margin-bottom: 10px;
  margin-left: 20px;

It makes sense to convert it to this.

  margin: 10px 20px;

It’s cleaner and takes up less space.

There are a lot of ways you can optimize your CSS in this way. I’m not talking about compression, but optimization.
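A couple of other shorthand properties collapse the same way. For example (my own examples, any colors and fonts are placeholders):

```css
/* Longhand */
border-width: 1px;
border-style: solid;
border-color: #999;

/* Shorthand equivalent */
border: 1px solid #999;

/* The font shorthand collapses weight, size, line-height, and family */
font: bold 12px/1.5 Georgia, serif;
```

The background shorthand offers similar savings for color, image, repeat, and position.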

Today, The Daily Blog Tips site linked to a website called CleanCSS that can perform many of these optimizations for you. For example, feed it the above CSS and it will make that conversion. Very nice!

comments edit

I used to think the choice of using dashes vs. underscores to separate words in a URL was simply a matter of personal preference. Nothing more than a religious choice.

Personally, I preferred underscores because I felt dashes intruded upon the words while underscores stayed at the bottom, out of the way. So much so that I had originally made underscores the default URL scheme in Subtext for friendly URLs and was using that scheme myself.

It wasn’t till recently that I learned this debate has already been resolved. Years ago.

I wouldn’t say resolved, really. It’s just that there appears to be a really good reason to choose dashes over underscores. Apparently, Google sees the dash as a word separator, while the underscore is treated as part of the word. Something to do with being able to search for C++ style constant names SUCH_AS_THIS in the title of a post.

The question is, does this still apply today? Does it even matter?

To be on the safe side, I’m falling in line for now. Or rather, in dash. What are your thoughts?

comments edit

In January, I wrote that according to the Chinese Zodiac, this is the Year of the Golden Pig. According to folklore, this is a special event that occurs once every 600 years and brings great fortune to babies born during the year.

As an aside, not many realized that the Chinese New Year didn’t start until February 18 this year. So your January baby doesn’t make the cut. But don’t worry, read on.

Many historians and others have discounted the Year of the Golden Pig as mere legend. No historical records from 600 years ago point to the significance of this year.

That hasn’t stopped the baby boom from soaring on in places like China and Korea. Historical fact will not stop the masses from having their golden piglet.

However, my friend Walter had an interesting observation. All of these extra children that will be born as part of this boom are competing for the same scarce resources. As they grow up, they’ll all be competing in entrance exams for limited spots in the various prestigious universities. And when they graduate, they’ll be competing for a limited set of jobs.

So will the Year of the Golden Pig actually be the Year of the Starving Pig?

It will be interesting to see what happens in these countries.

comments edit

You’ve spent hours setting up your blog on your favorite blog platform just right. Good for you! So how do you maintain your blog so that it remains at the top of its game?

It turns out, there are a large number of free web utilities useful for improving your blog’s effectiveness outside of your blog engine.

Image of hammer, ruler, and other tools. CC
licensed. Tools 4 Argentina - Some Rights Reserved

Every time I come across one of these useful utilities, I bookmark it to my Blog Utilities folder. This folder is my blogger utility belt, full of tools to meet every need when composing blog posts or optimizing my site for bandwidth and speed.

I’ve chosen to focus on web utilities as they are quick and easy to use — no installation required. This is not a comprehensive list by far, as I am sure there are many others out there. Let me know what I missed in the comments.


The first three tools below are all website speed testers, but each offers something different, so I’ve listed them all.

  1. Web Page Analyzer
    • This tool is fairly comprehensive and may be the only one you really need for website speed analysis. Includes stats on every file and object downloaded and provides approximate download times for different connection rates.
  2. OctaGate Site Timer
    • I didn’t find this one to be as accurate as the first one because it attempted to download images referenced in my CSS files that were commented out. However, it provides a nicer graphical output that marks when the request was started, when it connected, and the time when the first and last bytes were received. It also highlights 404 errors in red, which is handy for finding missing files or bad URLs.
  3. HttpZip Compression Checker
    • Use this to check whether files from your website are being served with HTTP Compression on or off. Thanks to Jeff Atwood for pointing me to this one (among others).
  4. Dynamic Drive Online Image Optimizer
    • If you’re hardcore about your image compression, you should check out Ken Silverman’s Utility Page. But if you’re like me and just want a quick and easy web-based utility for compressing images, this is your site. It can convert gif, jpg, and png files up to 300kb. It will also do conversions to other image types and display multiple results at various color levels and compression rates so you can pick the best one for your needs.
  5. Javascript Minimizer
    • This is an extremely simple tool. Paste in your javascript, click the button, and reduce the size of your scripts.
  6. CSS Minimizer
    • Just like the Javascript minimizer, but for Cascading Style Sheets.

Statistics and Search Engine Optimization

Get a handle on your web traffic with these sites.

  1. Website Grader
    • Gives your website a score in an attempt to measure its effectiveness. Shows your PageRank, meta info, domain info, Technorati stats, etc. It generates a really neat report card for your blog.
  2. Google Webmaster Central - An absolute essential tool for those who care about users finding their site via Google. Especially pay attention to the Webmaster tools which include Sitemap support.
  3. Google Analytics - A free and full-featured analytics package for your blog or website. Add some javascript to your page template and you’re in information overload land, but done up with nice charts and graphs.
  4. 103bees Search Traffic Analysis - Unlike other stats packages, this one is focused purely on natural search engine traffic analytics. What are users searching for when they land on your site? This is a nice complement to Google Analytics. And it’s free! One caveat is that the script can be slow sometimes, which can play havoc with CSS-based designs.
  5. Technorati - It’s so obvious, I almost forgot to list it. Register, claim your blog, and find out who is linking to you. You can add a little script to your blog that displays how many other posts link to yours.
  6. - The beauty of this site is that you can easily compare your website’s reach with several other websites on a single graph, thus starting a huge pissing contest.

Spicing Up Your Posts With Images

  1. Wikipedia Public Domain Image Resources
    • Images can bring a blog post to life. But rather than worrying about receiving a cease and desist letter for misusing copyrighted material, why not use images that are part of the Public Domain? This page is chock full of links to resources for free images.
  2. PicFindr - Despite its “Oh so Web 2.0” name (must everything end in a consonant plus “r” these days? At least it doesn’t have BETA anywhere), this tool is really great. It will search a set of free photo sites, such as Stock.xchng, for free photographs.
  3. Flickr Creative Commons
    • Still haven’t found that picture that just hits the point you’re trying to make? Try the Flickr Creative Commons search engine. Remember, these photos are not public domain. You do need to abide by the license. But for the most part, the licenses are pretty lenient for you to reuse the photos in your own blog.
  4. Open Clip Art Library
    • Maybe you want your image to be iconic rather than photographic. Check out this free Public Domain clip art library to find an icon for every occasion.
  5. WP Clipart
    • Another Public Domain clip art library, though the quality tends to be less than the Open Clip Art Library.

Create and Improve Your Content

  1. Cliche Finder - Try to avoid using too many tired old cliches by running your post through this web based utility.
    • This is a fantastic site for basic hallway usability testing. Just submit your URL and real people will post comments with criticisms and praise for your site. The more specific you are about what you want testers to focus on, the better quality the feedback. Try it out.


  1. FeedBurner - This one gets special mention because it fits in so many categories. It’ll help optimize your bandwidth by serving your RSS feeds for you. Also, it includes a basic free stats package as well as a premium stats package that can replace Google Analytics. FeedBurner can also provide features your blogging platform might not, such as subscribing to RSS Feeds via email.

Special Mention

As I mentioned before, this post is focusing on web utilities. However, these two utilities are so essential, I just had to break my own rule and list them.

  1. Firebug Firefox Add-on - Ok, this breaks my rule as it isn’t technically a website, but it is a FireFox browser plugin so it might as well be a website, right? Well in any case, this tool is too important not to mention. It has it all. It can be used to time your websites download speeds, view the underlying HTTP information, measure the size of each file. Add to that a great Javascript debugger and CSS and DOM explorer. This is a must have tool.
  2. Windows Live Writer
    • I broke my rule again. This tool won’t help you write better content, but it’ll help you have fun doing it. Also, all the plugins available make it easy to add a little extra oomph to your blog posts by including Flickr images, formatted code, etc…

Again, I’m sure I missed someone’s favorite tool here, so please let me know what I missed in the comments. And if you do, let me know which tool you’d remove from this list in order to add yours. I’ll try following up at a later time with an improved list.

Technorati Tags: Tips, Blogging, Utilities

code, tdd comments edit

Leon Bambrick (aka SecretGeek) has started a series on Agile methodologies and Test Driven Development (TDD) in which he brings up his own various hidden objections to TDD in order to see if his prejudices can be overcome.

One of the questions he asks is an age-old argument against TDD: Who Tests the Tests? Leon sees potential for a stack overflow since, given that the tests are code, and that according to TDD, code should be tested, shouldn’t there be tests for the tests?

The short answer is that the code tests the tests, and the tests test the code.
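To make that concrete, here’s a sketch of the red-green cycle (NUnit-style; the TextHelper class and its Slugify method are made up for illustration). Watching a brand-new test fail before you write the implementation is the code testing the test; once it passes, the test is testing the code:

```csharp
[Test]
public void SlugifyConvertsSpacesToDashes()
{
    // Run this before implementing Slugify and watch it fail (red).
    // If it somehow passes against a do-nothing implementation,
    // the test itself is broken -- the code just tested the test.
    Assert.AreEqual("hello-world", TextHelper.Slugify("Hello World"));
}
```

No third artifact is needed to arbitrate between the two, which is the point of the analogy that follows.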


Testing Atomic Clocks

Let me start with an analogy. Suppose you are travelling with an atomic clock. How would you know that the clock is calibrated correctly?

One way is to ask your neighbor with an atomic clock (because everyone carries one around) and compare the two. If they both report the same time, then you have a high degree of confidence they are both correct.

If they are different, then you know one or the other is wrong.

So in this situation, if the only question you are asking is, “Is my clock giving the correct time?”, then do you really need a third clock to test the second clock and a fourth clock to test the third? Not at all. Stack Overflow avoided!

Principle of Triangulation

This really follows from the principle of triangulation. Why do sailors without electronic navigation systems bring three sextants with them on board a ship?

With one sextant, you could rely on the manufacturer’s testing and assume its measurements are correct, but wear and tear over time (not unlike the wear and tear a codebase suffers over time) might make the measurements slightly off.

If you take measurements with two sextants, then you have enough information to decide whether both are measuring accurately or one is off. However, in this situation, there’s no way to know exactly which measurement is correct.

So we take a third sextant out. The two sextants that take measurements most closely together are most likely correct. Accurate enough to cross the Atlantic.

aspnet configuration comments edit

Are you tired of seeing your configuration settings as an endless list of key value pairs?

<add key="key0" value="value0" />
<add key="key1" value="value1" /> 
<add key="key2" value="value2" />

Would you rather see something more like this?

<BlogSettings
  someOtherSetting="value" />

Join the club. Not only is the first approach prone to typos (AppSettings["tire"] or AppSettings["tier"] anyone?), but too many of these things all bunched together can cause your eyes to glaze over. It is a lot easier to manage when settings are grouped in logical bunches.

A while back Craig Andera solved this problem with the Last Configuration Section Handler he’d ever need. This basically made it easy to specify a custom strongly typed class to represent a logical group of settings using XML Serialization. It led to a much cleaner configuration file.

But that was then and this is now. With ASP.NET 2.0, there’s an even easier way which I didn’t know about until Jeff Atwood recently turned me on to it.

So here is a quick run through in three easy steps.

Step 1 - Define your Custom Configuration Class

In this case, we’ll define a class to hold settings for a blog engine. We just need to define our class, inherit from System.Configuration.ConfigurationSection, and add a property per setting we wish to store.

using System;
using System.Configuration;

public class BlogSettings : ConfigurationSection
{
  private static BlogSettings settings 
    = ConfigurationManager.GetSection("BlogSettings") as BlogSettings;

  public static BlogSettings Settings
  {
    get { return settings; }
  }

  [ConfigurationProperty("frontPagePostCount"
    , DefaultValue = 20
    , IsRequired = false)]
  [IntegerValidator(MinValue = 1
    , MaxValue = 100)]
  public int FrontPagePostCount
  {
    get { return (int)this["frontPagePostCount"]; }
    set { this["frontPagePostCount"] = value; }
  }

  [ConfigurationProperty("title"
    , IsRequired = true)]
  [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\"
    , MinLength = 1
    , MaxLength = 256)]
  public string Title
  {
    get { return (string)this["title"]; }
    set { this["title"] = value; }
  }
}
Notice that you use an indexed property to store and retrieve each property value.

I also added a static property named Settings for convenience.

Step 2 - Add your new configuration section to web.config (or app.config).

<configuration>
  <configSections>
    <section name="BlogSettings"
      type="Fully.Qualified.TypeName.BlogSettings, AssemblyName" />
  </configSections>

  <BlogSettings
    frontPagePostCount="10"
    title="You've Been Haacked" />
</configuration>

Step 3 - Enjoy your new custom configuration section

string title = BlogSettings.Settings.Title;
Response.Write(title); //it works!!!

What I covered is just a very brief overview to give you a taste of what's available in the configuration API. I wrote more about configuration in the book I'm co-writing with Jeff Atwood, Jon Galloway, and K. Scott Allen.

If you want to get a more comprehensive overview and the nitty gritty, I recommend reading Unraveling the Mysteries of .NET 2.0 Configuration by Jon Rista.

comments edit

Juan Catalan must be feeling "Pretty good. Pretty, pretty, pretty, pretty good." His poor luck seemed to exceed Larry Davidian proportions when he was accused of murder. But his luck took a turn for the better after more than five months in jail.

Catalan claimed to be at a Dodgers game with his daughter when the murder occurred. His defense attorney scoured TV footage of crowd shots from the game but could not find Juan. After the attorney learned that the show Curb Your Enthusiasm, starring Larry David, who co-created Seinfeld, had taken footage at the ballpark that day (I think I remember this episode!), HBO allowed him to search through their footage, and he found a time-stamped shot of Catalan in the outtakes.

HBO allowed Melnik to look through the footage, and he found a shot of Catalan with his 6-year-old daughter and two friends. The footage was time coded, confirming that Catalan was at the ballpark shortly before the time of the slaying 20 miles away in the San Fernando Valley.

“There he was in the outtakes,” said Gary S. Casselman, the attorney handling Catalan’s lawsuit. “He’s glad it’s over. It’s terrible to be in jail, and he thought he would never see his daughters again.”

I read this in the Los Angeles Times yesterday morning and laughed at his good fortune to be saved by a television show.

Catalan was not a fan of “Curb Your Enthusiasm” before his time in jail. “He is now,” Casselman said.

I bet he is. The full Los Angeles Times article is here.

comments edit

UPDATE: As an aside, it would probably be more accurate to say the FizzBuzz question is a Requirement. So where you read the term Spec, you can replace it with Requirement. Either way, the same thing applies. The only thing not ambiguous is the code. As they say, the code is the spec.

One last point, then I'm done with this topic of FizzBuzz and spec writing. In a recent post, I mentioned, tongue firmly in cheek, that the FizzBuzz "spec" has certain flaws. Now I admit I'm taking this out of context a bit to make a point. FizzBuzz is a simple interview question, not a spec, possibly intended to elicit this type of analysis from the candidate. Even so, I think there's a good lesson to learn here.

My point was that all specs are merely rough approximations of the actual requirement. Specs are ambiguous, but software is not. Software doesn’t generally deal well with ambiguity. Change a random bit in memory and all hell breaks loose.

However, some of that was lost due to the extremely nitpicky point I made about the spec. So here’s another, still nitpicky, but a bit less so.

Every so-called "correct" program written in the comments of Jeff's blog had the following output:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
…
But doesn't the following output meet the letter of the spec (difference in bold)?

1
2
Fizz
4
**5 Buzz**
Fizz
7
8
Fizz
**10 Buzz**
…
My point being, the spec is explicit about replacing numbers divisible by three with “Fizz”, but it doesn’t say to replace numbers divisible by five.
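To make the two readings concrete, here's a quick C# sketch. The method names, and how 15 behaves under the strict reading, are my own assumptions, not anything the original puzzle pins down.

```csharp
using System;

public class FizzBuzzNitpick
{
    // The conventional reading: multiples of three AND five both
    // replace the number entirely.
    public static string Canonical(int n)
    {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return n.ToString();
    }

    // The letter-of-the-spec reading: only multiples of three are
    // explicitly *replaced*. For multiples of five the spec says to
    // print "Buzz", but never says to drop the number.
    public static string LetterOfSpec(int n)
    {
        if (n % 3 == 0)
            return n % 5 == 0 ? "FizzBuzz" : "Fizz"; // 15 is arguably ambiguous too
        if (n % 5 == 0)
            return n + " Buzz"; // the number survives
        return n.ToString();
    }
}
```

Both methods agree on 1 through 4, then diverge at 5: Canonical(5) returns "Buzz" while LetterOfSpec(5) returns "5 Buzz". That divergence is exactly the sort of thing a downstream system would choke on.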

Yes, I agree. Developers should not act like total logicians and nitpick every detail. Human language is inexact, and we have to deal with that fact of life. Unfortunately, software doesn't have the same resiliency toward ambiguity. If this output were meant to be fed into another software system, the ambiguity could cause bad data, software crashes, who knows what calamity!

You might say I’m splitting hairs here. Of course I am because the compiler is going to split hairs. The Web Service I’m trying to call is going to split hairs. The HTML browser is going to try and not split hairs, but is going to ultimately fail. Software is all about splitting hairs.

Instead, we need to move beyond the spec and ask questions before writing code, during writing code, and after writing code. Do not be afraid to talk to the customer or customer representative. That’s all I was trying to say.

Thanks to Rob Conery who was trying to make this point in my comments, but it was lost on everybody. ;)

comments edit

UPDATE: You can now subscribe to my feed via email. This is a service offering from Feedburner. Sweet!

I've decided to make the jump and switch to FeedBurner after the glowing recommendation from Jeff Atwood. However, being the paranoid sort, I decided to go ahead and pay for the MyBrand PRO feature.

This allows me to keep control over my feed by serving it from my own domain. Not that I don't trust FeedBurner, but if I ever want to take my feed back in-house, it'll be a lot easier for me to change a DNS setting than to wait for them to perform a 301 redirect back to me.

If you’re reading this in your aggregator, my guess is that everything is working fine. If not, please do let me know if you encounter any problems. Flame on!

comments edit

I know, I know, you’d like to see the FizzBuzz discussion die a quick death, but trust me, this is an interesting point, or at least mildly amusing.

Sorry to revive the dead horse, but a comment on my blog brought up a very good point. In fact, I'm kicking myself for not noticing it myself, having been a math major who loves pointing out this type of minutiae.

In the original Fizz Buzz test, the functional spec asks the programmer to print the numbers from 1 to 100.

But as a commenter points out:

Why can't spec writers write? Unless you mean integers, there are an infinite number of real numbers 'from 1 to 100'.

Exactly! There are infinitely many real numbers between 1 and 100, so the specification is technically not precise enough. Writing a program to spec exactly would… well, be impossible.

This is exactly why I said the following in another comment:

I still need to gather requirements! What platform must this FizzBuzz program support? Any performance requirements? Does the output need to be available over the web?…

Unfortunately, I missed the most important question I should have asked.

I assume you mean all integers from 1 to 100 inclusive, is that correct?

I know what you’re thinking. In cases like this, developers should be able to intuit what the client means. If a developer asks Do you mean Integers or Real Numbers?, that developer is being a smart ass.

But my point is still valid. If a client says, I want a CRM system, you may know exactly what a CRM system is, but it may be totally different from what they think a CRM system is.

This really highlights the difficulty of writing good requirements and a good spec. You don’t know the background of the person you’re handing off the document to.

What makes perfect sense in your mind might mean something different to the reader.

Perhaps it's situations like this that led 37signals to advocate getting rid of functional specs altogether.

Whether you go that extreme or not is not so important as keeping the lines of communication open with your client. Never accept a requirement and functional spec at face value. Specs are always a poor approximation of what the client really wants. All specs are broken to one degree or another (though that doesn’t mean they are all useless). Ask for clarification. Keep the dialog going.

This is also one reason why Big Design Up Front (BDUF) can really nail you in the butt. These subtle things are missed all the time, even by thousands of software developers reading blogs. Having an iterative process, where you're not on the hook for requirements gathered months ago and now gathering dust, helps mitigate the risk of incomplete and inaccurate requirements.

comments edit

Update: I have an even better startlet for stopping and starting services in my comments.

If you’re running Vista, run, don’t walk, and go download and install Start++ (thanks to Omar Shahine for turning me on to this). Make it the first thing you do. Many thanks to Brandon Paddock who developed this nice little tool. He describes the tool in this post.

I have a message for Start++ from the Start menu. “You complete me!”.

Ok, terribly corny jokes aside, it's the little things that save me lots of time in the long run. For example, starting and stopping SQL Server is kind of annoying for me on Vista. Here's my typical workflow:

  1. Hit Windows Key, type in cmd
  2. type net stop mssql
  3. Doh! System error 1060 occurred. Right, I need to be an administrator.
  4. Grab the trackball
  5. Click on the Start menu
  6. Right click Command Prompt
  7. Click Run as administrator.
  8. Now type net stop mssql.

Is your hand hurting by now? Because mine is.

Of course, I’m an idiot. Or, I was an idiot. Now, I’ve mapped the Start++ keywords startsql and stopsql to automatically run the commands I need with elevated privileges.



Notice you can check the Run elevated checkbox for any command. Yes, I get the UAC prompt (Yes, I still have that sucker on), but that’s not such a big deal to me. Now my workflow is reduced to:

  1. Hit Windows Key, type in stopsql
  2. Hit the Left Arrow Key and Enter when the UAC prompt comes up.


For your convenience, I’ve exported the startsql and stopsql “startlets” and put them on my company’s tools site here. I figure this one alone saves me a few seconds every half hour.

If you are using a named instance of SQL Server, you will need to change the argument in the Arguments column like so:

/C "net start mssql$NameOfInstance"

I can think of a few hundred or so more startlets to add. Happy shortcutting!

Technorati Tags: Tips, Vista, Start++, Productivity

comments edit

Thought I should take advantage of my latest bout of insomnia and do a slight redesign of my website. My goal was to clean it up a bit so it looks less crowded and cluttered.

I also removed the Flickr images because the script loaded slowly and dragged the rest of the page down with it. I didn't think it added a whole lot anyway. I may look into creating a server-side Flickr control later.

Here’s a screenshot, in case you’re reading this in an aggregator.

Let me know what you think of the design. My next step will be to focus more on usability. If there’s anything that annoys you about my site (here’s where Jeff will chime in) do let me know.

comments edit

In Jeff Atwood's infamous FizzBuzz post, he quotes Dan Kegel, who mentions:

Less trivially, I’ve interviewed many candidates who can’t use recursion to solve a real problem.

A programmer who doesn't know how to use recursion isn't necessarily hopeless, assuming the programmer is handy with the Stack data structure. Any recursive algorithm can be rewritten as a non-recursive algorithm by using a Stack.

As an aside, I would expect any developer who knew how to use a stack in this way would probably have no problem with recursion.

After all, what is a recursive method really doing under the hood but implicitly making use of the call stack?

I'll demonstrate a method that removes recursion by explicitly using an instance of the Stack class, and I'll do so using a common task any ASP.NET developer might find familiar. I should point out that I'm not recommending you do this with methods that use recursion; I'm merely pointing out that you can.

In ASP.NET, a web page is itself a control (i.e., the Page class inherits from Control) that contains other controls, and those controls can in turn contain other controls, creating a tree structure of controls.

So how do you find a control with a specific ID that could be nested at any level of the control hierarchy?

Well the recursive version is pretty straightforward and similar to other methods I’ve written before.

public Control FindControlRecursively(Control root, string id)
{
  Control current = root;

  if (current.ID == id)
    return current;

  foreach (Control control in current.Controls)
  {
    Control found = FindControlRecursively(control, id);
    if (found != null)
      return found;
  }
  return null;
}
The recursion occurs when we call FindControlRecursively within this method. Essentially, what happens when we call that method (and this is a simplification) is that our current execution point is pushed onto the call stack and the runtime starts executing the code for the inner method call. When that method finally returns, we pop our place off the stack and continue executing.

Rather than try to explain, let me just show you the non-recursive version of this method using a Stack.

public Control FindControlSansRecursion(Control root, string id)
{
  Stack<Control> stack = new Stack<Control>();
  //seed it.
  stack.Push(root);

  while (stack.Count > 0)
  {
    Control current = stack.Pop();
    if (current.ID == id)
      return current;

    foreach (Control control in current.Controls)
      stack.Push(control);
  }
  //didn't find it.
  return null;
}
One thing to keep in mind is that both of these implementations assume we won't run into a circular reference, in which a child control contains one of its own ancestors.

For the System.Web.UI.Control class, we're safe in making this assumption. If you try to create a circular reference, a StackOverflowException is thrown. The following code demonstrates this point.

Control control = new Control();
control.Controls.Add(new Control());
// This line will throw a StackOverflowException.
control.Controls[0].Controls.Add(control);
If the hierarchical structure you are using does allow circular references, you'll have to keep track of which nodes you've already seen so that you don't get caught in any infinite loops.
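Here's one way that bookkeeping might look. Since a cyclic control tree can't actually be built in ASP.NET, this sketch uses a hypothetical Node class of my own in place of Control, and a HashSet<T> (available from .NET 3.5 on; a Dictionary would do the job on 2.0) to record the nodes already visited.

```csharp
using System.Collections.Generic;

// A stand-in for Control so the sketch is self-contained; in real
// code you would traverse Control and its Controls collection.
public class Node
{
    public string ID;
    public List<Node> Children = new List<Node>();
}

public static class NodeFinder
{
    public static Node FindNodeCycleSafe(Node root, string id)
    {
        Stack<Node> stack = new Stack<Node>();
        HashSet<Node> visited = new HashSet<Node>();
        stack.Push(root);

        while (stack.Count > 0)
        {
            Node current = stack.Pop();
            if (!visited.Add(current))
                continue; // already seen this node: a circular reference

            if (current.ID == id)
                return current;

            foreach (Node child in current.Children)
                stack.Push(child);
        }
        return null; // not found
    }
}
```

Even if a node ends up being its own descendant, the visited set guarantees each node is examined at most once, so the loop always terminates.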

comments edit

We've been having an internal debate on the Subtext mailing list over the merits of SourceForge vs. Google Code Project Hosting vs. CodePlex. Much of the discussion hinges on the benefits of Subversion for Open Source projects when compared to Team Foundation Server (TFS).

Before I begin, I do not mean for this to devolve into a religious argument. This is merely my critique from the perspective of running an Open Source project. I personally think both are fine products and both probably work equally well in the corporate environment.

TFS Advantages

  • Ease of use. For developers with a background in Visual SourceSafe or SourceGear Vault, the TFS interface will be familiar. Subversion requires more of a learning curve for these developers, though this is mitigated by my suspicion that a large percentage of Open Source developers already use CVS and SVN.
  • Work Item integration is sweet. I’ve been contributing some code to the Subsonic project and I actually love the work item integration in VS.NET. It’s pretty nice to be able to review and close work items while working on the code.
  • Shelving is great. Certainly nothing stops you from doing something like this in Subversion by using conventions, but I like the syntactic workflow sugar this provides.

Subversion Advantages

  • Anonymous access. Users who want to look at the code, view its change history, and update their local copy to the latest version can do so from the convenience of their favorite Subversion client. This is much more cumbersome with TFS.
  • Patch Submission. This goes hand in hand with anonymous access. Users without commit access can have Subversion generate patch files consisting of their changes and submit these files. This makes it really easy for the casual contributor to quickly submit a patch as well as makes it easy for the Open Source development team to apply contributions to the source. This is a huge benefit to the project. Unfortunately with CodePlex, you either give commit access or you don’t. If you don’t, it’s a pain for users to submit patches and a pain for the project team to apply patches. Just ask Rob Conery what happens if you give commit access too freely.
  • Offline Support. Regardless of what Jeff says, offline mode does matter for many applications. For example, sometimes I have to connect to an obnoxious VPN that destroys my general internet connectivity. It’s nice to be able to connect, get latest, disconnect, work, connect, commit changes, disconnect. Try that with TFS.
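For anyone who hasn't seen the patch workflow in action, it looks roughly like this (the repository URL and file names are made up for illustration):

```shell
# Contributor: check out the code anonymously and make a local change.
svn checkout http://svn.example.org/subtext/trunk subtext
cd subtext
#   ...edit some files...

# Generate a unified diff of the uncommitted local changes.
svn diff > my-fix.patch

# Maintainer: apply the submitted patch to a clean working copy
# and commit it.
patch -p0 < my-fix.patch
svn commit -m "Applied my-fix.patch from a contributor"
```

No commit access is required until that last step, which is exactly what makes drive-by contributions so cheap for everyone involved.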

Again, as source control systems, I believe they are both great systems. But for the needs of an open source project, I feel that Subversion has advantages. As far as I understand, TFS was designed as an enterprise source control system. However, the needs of the enterprise are often different from the needs of an Open Source team.

Subversion, itself open source, was used during its own development (when it became stable enough). So it is well suited to open source development.

If Codeplex supported Subversion, I would probably want to move Subtext over in a heartbeat. If you feel the same way I do, please vote for the work item entitled Subversion Support (SVN).

It looks like a lot of people would like to see this as well as it is the top vote getter on the Codeplex work item site.

And before you rail on me, asking why Microsoft would ever consider such a move (isn't CodePlex a showcase for TFS and for Open Source projects built on Microsoft technology?), consider this:

A member of the Codeplex team informed me that Codeplex is the home for any Open Source project - on any and all platforms. In fact, they do now host a few non-Microsoft projects. Of course their dependency on TFS does naturally limit the types of projects that would host there.