FogBugz 6 Recognizes Estimates And Actuals Are Not Bounded Evenly On Both Sides

A while ago I read Steve McConnell’s latest book, Software Estimation: Demystifying the Black Art, which is a fantastic treatise on the “Black Art” of software estimation.

One of the key discoveries the book highlights is just how bad people are at estimation, especially single point estimation.

One of several techniques given in the book focuses on providing three estimation points for every line item.

1. Best Case: If everything goes well, nobody gets sick, the sun shines on your face, how quickly could you get this feature complete?
2. Worst Case: If your dog dies, your significant other leaves you, and your brain turns to mush, what is the absolute longest time it would take to get this done? In other words, there is no way on Earth it would take longer than this time, unless you were shot.
3. Nominal Case: This is your best guess, based on your years of experience with building this type of widget. How long do you really think it will take?

The hope is that when development is complete, you’ll find that the actual time spent is between your best case and worst case. McConnell provides a quiz you can try out to discover that this is harder than it sounds.

Over time, as you reconcile your actual times into your past estimates, you’ll be able to figure out what I call your estimation batting average, a number that represents how accurate your estimates tend to be.

Once you have these three points for a given estimate, you can apply some formulas and your estimation batting average to create a probability distribution of when you might complete the project. Here is a simple example of what that might look like (though in real life there may be more point values).

• 20% 50 developer days
• 50% 70 developer days
• 80% 90 developer days

So the numbers above show that there’s only a 20% chance the project will be complete within 50 developer days and an 80% chance of completion if the development team is given 90 developer days.

This technique showcases the uncertainty involved in creating estimates and focuses on the probability that estimates really represent.

After reading this book, I fired up Excel and built a nice spreadsheet with the formulas in the book and columns for these three estimation points. Now I can simply enter my line items, plug in my best, worst, and nominal cases, and out pops a probability distribution of when the project will be complete.
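For readers curious about the spreadsheet math, here is a minimal C# sketch of one common way to combine three-point estimates, the PERT weighted average. McConnell’s book covers several formulas, so treat the class and method names here as hypothetical illustrations rather than the book’s exact method.

```csharp
using System;

public class EstimateSketch
{
    // PERT-style expected duration: a weighted average that favors the nominal case.
    public static double Expected(double best, double nominal, double worst)
    {
        return (best + 4 * nominal + worst) / 6;
    }

    // Rough standard deviation: one-sixth of the spread between best and worst.
    public static double StandardDeviation(double best, double worst)
    {
        return (worst - best) / 6;
    }

    public static void Main()
    {
        // A feature estimated at best 1 day, nominal 2 days, worst 6 days.
        double e = Expected(1, 2, 6);         // (1 + 8 + 6) / 6 = 2.5
        double sd = StandardDeviation(1, 6);  // 5 / 6 ≈ 0.83
        Console.WriteLine("Expected: {0:F2} days, StdDev: {1:F2} days", e, sd);
    }
}
```

Notice that with a best case of 1 day, nominal of 2, and worst of 6, the expected duration comes out to 2.5 days, above the nominal guess, because the long tail of the worst case drags the average up. That asymmetry is exactly the point of three-point estimation.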

However, as I mentioned before, the crux of this technique relies on that estimation batting average. But when you’re just starting out, you have no idea what that average is, so you have to pull it out of the air (I recommend pulling conservatively).

The reason I bring this all up is that I watched an interesting interview today on the ScobleShow. Robert Scoble interviewed FogCreek founder and well known technology blogger, Joel Spolsky.

Joel let it be known that they are building a new scheduling feature for FogBugz 6 that reflects the reality of software estimation better than typical scheduling software.

One key observation he makes is that estimates tend to miss in one direction: actual time runs over the estimate far more often than it comes in under.

For example, it’s quite common to estimate that a feature will take two days, only to have it take four days, or eight days. But it’s rare that the feature actually ends up taking one day. Obviously it’s impossible for that feature to take 0 days or -4 days.

This makes obvious sense when you think about it.

The amount by which you can finish a feature before an estimated time is constrained, but the amount of time that you can overshoot an estimate is boundless.

Yet much scheduling software completely ignores this fact, hoping that an underestimate on one item will be offset by an overestimate on another. It assumes these over- and under-estimates are balanced, which they clearly are not.

This new feature will attempt to take that into account, as well as your track record for estimates (your batting average, if you will), and provide a probability of completion for various dates.

Sounds like a brilliant idea! If done well, that would be quite hot and allow me to chuck my hackish Excel spreadsheet.

Doubling Down At Mix 07

You’ve been Haacked will be well represented at Mix 07 this year. I thoroughly enjoyed Mix 06 last year and think 07 has the potential to be even better.

Do not worry, I’m leaving the retina scorching orange aloha shirt at home this time.

My only disappointment with Mix 07 so far is it doesn’t have quite the clever rhyme that Mix 06 did. My wife did a cheer for me as I left for last year’s conference.

Mix! Oh-Six! Mix! Oh-Six! Mix! Oh Six!

“Mix! Oh-Seven!” doesn’t roll off the tongue in quite the same manner. No cheers from my wife this year.

There are many people I’m looking forward to hanging out with in attendance this year. I like to think that it was me who convinced Scott Hanselman to attend this year’s Mix conference. Jon Galloway, Jeff Atwood, and Rob Conery will all be there. It may take that many to drag me away from the craps table before I lose my future son’s college tuition.

I also think Steve (Lucky 21) Maine will be there (is that right Steve?). Last year he watched as the Craps dealers rolled their eyes when I only bought $40 worth of chips (it’s all I had) at a $10 table. I then proceeded to turn that into around $300, only to lose the bulk of that at BlackJack later. No more BlackJack after Craps!

I think Adam (Is it 9 or 10?) Kinney will also be in attendance. If you are planning to be there, let me know in the comments!

If you’re a Subtext user, feel free to swing by and tell me how much Subtext is a [godawful mess run by a retarded monkey and is possibly responsible for your pet’s death|wonderful piece of sublime software that enriches your life in a way you never thought a .NET blog engine could] (or any option in between).

If you’re not a Subtext user, tell me how much you’ve heard that Subtext is a [godawful mess run by a retarded monkey and is possibly responsible for your pet’s death|wonderful piece of sublime software that enriches your life in a way you never thought a .NET blog engine could].

And if I have time left over, I may just attend a few sessions. Seriously though, I really enjoyed many of the sessions last year. Funny how much you can still learn while hung-over and sleep deprived. I kid. I kid.


Rhino Mocks 3.0 Released!

Ayende just announced the release of Rhino Mocks 3.0. The downloads are located here. If you aren’t subscribed to Ayende’s blog, I highly recommend it. This guy never sleeps and churns out code like a tornado.

Ever since I discovered mocking frameworks in general, and especially Rhino Mocks, mocking has become an essential part of my unit testing toolkit.

A while ago I wrote a short intro demonstrating how to write unit tests for events defined by an interface. This small example shows the usefulness of something like Rhino Mocks.

If you’re wondering what the difference between mocks, stubs, and fakes is, be sure to read Jeff Atwood’s Taxonomy of Pretend Objects.


Building A Strong Open Source Community Requires Empathy

A recent confrontational thread within the Subtext forums that I shared with Rob Conery got us into a discussion about the challenges of dealing with difficult members of an Open Source community. There are many approaches one can take. Some advocate not engaging disruptive community members. I tend to give everyone the benefit of the doubt at first. Rob often commends me for my patience in dealing with users in the forums. Neither approach is necessarily better than the other. It’s a matter of style.

If there’s one thing I’ve learned about running an Open Source project, it’s that it takes two key qualities.

First, you really need to have a thick skin. You cannot please everybody, and if you’re doing something even remotely interesting, you’re going to piss off some people with the choices you make. But you can’t stop making choices, so be prepared to piss people off. It’s a part of the job. Just be mentally prepared for the attacks, fair or not.

Second, you have to have empathy for your users and developers. Sometimes what feels like an attack is really a misunderstanding based on cultural differences. I know some cultures tend to have a very brusque in-your-face way of discussion. What might be considered rude in one culture, is considered a normal even keeled discussion in another.

At other times there may be an underlying reason for the venting which really has nothing to do with you or your project.

Sure, it’s not really fair to take the brunt of someone’s wrath because of what happens elsewhere, but I find that humor and attempting to focus the discussion to specific objective complaints often helps defuse an argumentative thread.

In this particular case, the user ends up apologizing and writes about the aggravating events at work that led to his frustrations and lashing out in our forums.

Apology accepted, no hard feelings.

What about Toxic members? Sometimes there are members of the community who really are simply toxic trolls. They’re not interested in having any sort of real discussion. How do you deal with them? How do you tell them apart from someone who actually does care about your project, but is so ineloquent about expressing that?

I’ve been fortunate not to have experienced this with Subtext yet, but the excellent post How Open Source Projects Survive Poisonous People offers some great advice for identifying and dealing with poisonous people.

The post is a summary of a video in which Ben Collins-Sussman and Brian Fitzpatrick, members of the Subversion team, discuss how to deal with poisonous people based on their experiences with Subversion.

Their points are specific to their experience running an Open Source project. But many of their points apply to any sort of community, not just Open Source.

Politeness, Respect, Trust, and Humility go a long way toward building a strong community. To that list I would also add Empathy.


UPDATE: Made some corrections to the discussion of ReadOnlyCollection’s interface implementations near the bottom. Thanks to Thomas Freudenberg and Damien Guard for pointing out the discrepancy.

In a recent post I warned against needlessly using double check locking for static members such as a Singleton. By using a static initializer, the creation of your Singleton member is thread safe. However the story does not end there.

One common scenario I often run into is having what is effectively a Singleton collection. For example, suppose you want to expose a collection of all fifty states. This should never change, so you might do something like this.

```
public static class StateHelper
{
    private static readonly IList<State> _states = GetAllStates();

    public static IList<State> States
    {
        get
        {
            return _states;
        }
    }

    private static IList<State> GetAllStates()
    {
        IList<State> states = new List<State>();
        //...
        return states;
    }
}
```

While this code works just fine, there is potential for a subtle bug to be introduced in using this class. Do you see it?

The problem with this code is that any thread could potentially alter this collection like so:

`StateHelper.States.Add(new State("Confusion"));`

This is bad for a couple of reasons. First, we intend that this collection be read-only. Second, since multiple threads can access this collection at the same time, we can run into thread contention issues.

The design of this class does not express the intent that this collection is meant to be read-only. Sure, we used the `readonly` keyword on the private static member, but that only makes the variable reference read-only. The actual collection the reference points to can still be modified.

The solution is to use the generic `ReadOnlyCollection<T>` class. Here is an updated version of the above class.

```
public static class StateHelper
{
    private static readonly ReadOnlyCollection<State> _states = GetAllStates();

    public static IList<State> States
    {
        get
        {
            return _states;
        }
    }

    private static ReadOnlyCollection<State> GetAllStates()
    {
        IList<State> states = new List<State>();
        //...
        return new ReadOnlyCollection<State>(states);
    }
}
```

Now, not only is our intention expressed, but it is enforced.

Notice that in the above example, the static `States` property still returns a reference of type `IList<State>` instead of returning a reference of type `ReadOnlyCollection<State>`.

This is a concrete example of the Decorator Pattern at work. The `ReadOnlyCollection<T>` class is a decorator for the `IList<T>` interface. It implements the `IList<T>` interface and takes in an existing collection as a parameter in its constructor.

In this case, if I had any client code already making use of the `States` property, I would not have to recompile that code.

One drawback to this approach is that the `IList<T>` interface contains `Add` and `Insert` methods. Thus a developer using this code can attempt to add a `State`, which will fail at runtime with a `NotSupportedException`.

If this was a brand new class, I would probably make the return type of the `States` property `ReadOnlyCollection<State>` which explicitly implements the `IList<T>` and `ICollection<T>` interfaces, thus hiding the `Add` and `Insert` methods (unless of course you explicitly cast it to one of those interfaces). That way the intent of being a read-only collection is very clear, as there is no way (in general usage) to even attempt to add another state to the collection.
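To make the runtime behavior concrete, here is a small self-contained sketch (using strings in place of the `State` class above, which isn’t shown here) demonstrating that the decorator accepts the call at compile time but rejects it when executed:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class ReadOnlyDemo
{
    // Returns true if the decorator rejects the Add call at runtime.
    public static bool AddThrows()
    {
        IList<string> states = new List<string> { "Washington", "Oregon" };
        IList<string> readOnly = new ReadOnlyCollection<string>(states);

        try
        {
            // Compiles fine, because IList<T> exposes Add...
            readOnly.Add("Confusion");
            return false;
        }
        catch (NotSupportedException)
        {
            // ...but ReadOnlyCollection<T> throws when you call it.
            return true;
        }
    }

    public static void Main()
    {
        Console.WriteLine("Add threw: {0}", AddThrows());
    }
}
```

This is exactly the trade-off described above: returning `IList<State>` keeps existing client code compiling, at the cost of pushing the read-only check from compile time to runtime.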


GetRecentPost Ignores Categories In Windows Live Writer

It appears to me that Windows Live Writer completely ignores categories returned by the `getRecentPosts` Metaweblog API method.

It took me a long time to realize this because I write all my posts using WLW and it stores the categories for a recent post on the local machine. So as long as I do everything via WLW, I’d never notice.

But a recent bug report alerted me to the problem. I logged into my blog via the web admin interface and changed the categories. I refreshed the recent posts in WLW and opened up a post, and sure enough the categories for the post were not updated.

I was experiencing the same thing in BlogJet, but after making a small tweak in the code, everything now works fine there. Unfortunately WLW is still broken in this respect.

I’ve carefully analyzed the HTTP traffic with Fiddler and cannot figure out why this would happen. Everything looks absolutely correct on Subtext’s end. I must conclude it’s a bug with WLW.

Would someone be so kind as to confirm this with a different blog engine for me? Just run through the repro steps below and let me know whether it works for you. I’d really be grateful.

Just to be clear: Repro Steps

1. Create a post with no categories.
2. Use another tool (such as your blog's web admin) to specify several categories.
3. In Windows Live Writer, refresh the recent posts.
4. Click on the post to edit it.
5. Check whether or not the correct categories are selected in the category drop down.

Thanks Mucho!

Windows Live Writer and Html Entities

I’ve been banging my head against a couple of problems with the interaction between Subtext and Windows Live Writer that I thought I’d post on this here blog in the hopes that someone can help.

I expect that Mr. Hanselman might know the answer, but will only tell me after properly extolling DasBlog’s superiority over Subtext first. Very well.

Here’s the first issue. I’m kind of a fan of typography and go through the extra effort to use proper apostrophes and quotes.

For example. Instead of using ' for a quote, I will use ’. Instead of "quotes", I will use “real quotes”. It’s just how I roll.

For the apostrophe, I use the HTML entity code &#8217;. For quotes I use the opening quotes &#8220; followed by the closing quotes &#8221;.

However, when you enter these things in WLW and post them to your blog, it converts them to the actual characters. Thus when I query my database, I see `“quotes”` instead of `&#8220;quotes&#8221;` as I would expect.

I wish WLW would not screw around with these conversions, but until then, I was thinking about doing a simple conversion on the server back to the original entity encodings.

However, I can’t just call the `HttpUtility.HtmlEncode` method, as that would encode the angle brackets and all. I still want the HTML as HTML; I just want the special characters to remain entity encoded.

Anyone have a clever method for doing this, or will I need to brute force this sucker?
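For what it’s worth, the brute-force version is only a few lines. Here is a sketch (the `EncodeTypography` helper name is my own invention, not part of any library) that entity-encodes just the non-ASCII characters, curly quotes and apostrophes included, while leaving the markup itself alone:

```csharp
using System;
using System.Text;

public class EntityEncoder
{
    // Re-encode only non-ASCII characters (curly quotes, apostrophes, etc.)
    // as numeric entities, leaving angle brackets and tags untouched.
    public static string EncodeTypography(string html)
    {
        var sb = new StringBuilder(html.Length);
        foreach (char c in html)
        {
            if (c > 127)
                sb.AppendFormat("&#{0};", (int)c);
            else
                sb.Append(c);
        }
        return sb.ToString();
    }

    public static void Main()
    {
        // U+201C/U+201D are the curly quotes WLW substitutes in.
        Console.WriteLine(EncodeTypography("<p>\u201Cquotes\u201D</p>"));
    }
}
```

One caveat: this would also re-encode any legitimate non-ASCII content, accented characters for instance, as numeric entities. That’s harmless to the rendered output, but worth knowing about before pointing it at every post.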

Devious Scam In Which The Check They Send You Clears

There’s a really devious scam going around worth mentioning because of one compelling tactic the scammers use.

My dad received a letter the other day “informing” him that he was the lucky winner of some unclaimed prize money. Below is the letter he received.

They sent him a check for $1,940 and told him that all he needs to do to claim the prize money is deposit the check and send back a portion of that money for processing fees and identification purposes.

My dad’s first thought (which I imagine yours is as well) was, Oh! This must be a scam. They expect me to deposit their check and then send them a check from my bank account. After a few days, their check won’t clear and they’ll have my money.

For laughs, my dad decided to call the guy up to see what sort of crazy explanation he would provide. His answer caught my dad off-guard. He told him to wait till the check clears before sending them a check.

Huh? Wait a minute. So they want me to wait till the check clears? Doesn’t that mean the money is fully in my account? What if I never send them a check? I could just keep the money. If this is a scam, how are they making money?

Calling the Better Business Bureau provided the answer. They told my dad under no circumstances should he deposit that check. Yes, the check will clear, but probably because it was written by another victim defrauded by this same scam. Later, when that victim discovers the scam, my dad would be liable for depositing a fraudulent check.

What really makes this scam compelling and likely to sucker a lot of people is the mistaken belief that once a check clears, the money is in the clear. It’s not.

In any case, if you receive such a scam letter, the proper authority to report it to is not the FBI but the Postal Inspectors, the law enforcement arm of the United States Postal Service (and the subject of a really cheesy movie, The Inspectors, starring Louis Gossett Jr.).

I would suggest warning your family members who are prone to such scams. Especially those who consistently fall for those PayPal emails and keep opening up pictures of Anna Kournikova sent via email.

Open Source Programming Language Zeitgeist

When searching for source code in a particular language, what do the words being searched on tell you about that language?

Koders.com publishes an interesting Open Source Zeitgeist which focuses on search trends and patterns within open source code. This is very similar to Google’s Zeitgeist, but grouped by programming language and specific to open source code. This might help us gain some insight into answering the above question.

For example, compare this screenshot of the top Ruby, Java, and C# searches.

It’s hard to draw any firm conclusions based on this sample, but let me offer a few uninformed thoughts, and you can tell me how off-base I am.

Someone suggested that you sort of get a sense of the maturity of a language by the terms being searched. I can kind of see that if I define maturity in this case to mean how well the general developer community within this language understands the features of the particular language.

The idea is that if a language has been around for a long time, there might not be as many searches on basic language features and more searches that appear to be task focused, or at least on esoteric features of the language. I admit, I’m not exactly convinced. Is this true? Let’s take a look.

Take Ruby for example. Even though it’s been around as long as Java, it is only recently (past few years) that it has had a huge surge in popularity. Thus, many of the top search terms seem focused on programming constructs such as proxy, file, socket, and thread. This might reflect the large number of people just learning their way around the language.

Then again, Ruby developers are also searching on terms such as rails, controller, activerecord. These are mature software development concepts.

Java, on the other hand, arguably is more mature and has a much larger community, and its top terms are slightly more esoteric (md5, swing, tree) or just vain. Java developers search for "java" when searching Java code? How many search results does that produce? However, the terms string and file are also near the top. That makes sense: even though Java is mature, there are still lots of new Java developers.

What’s really interesting to me is the inclusion of "Hibernate" as number 10 in the Java results.

Contrast this to C#, where it does not surprise me that dataset is number 2. It’s the workhorse for the RAD developer. It appears that in pure numbers, the DataSet is winning over OR/M and the like. There are no search results for activerecord, NHibernate, SubSonic, OR/M, etc. Whether that is a sad thing or not I leave for a subsequent flame war.

What’s interesting to me is that PHP seems really focused on the domain. Being unfamiliar with PHP, I could totally be wrong, but with search terms like excel, mail, and forum, that’s the impression I get.

Sort of makes sense that an old established widely used scripting language would have its basic features already understood. Though I have no idea why the top search term would be none. Are PHP programmers nihilistic?

In any case, many of you are thinking I’m drawing too many conclusions from too little data. You are absolutely correct. This is mere idle speculation already colored by preconceived notions.

However, I do find it interesting to look at these results and ask, what do they say about these languages and their users?


Why Are Developers So Fat?

Raymond Lewallen doesn’t mean to single anybody out, but in his latest post on the topic of living better, he observes that

...there is a decent percentage of programmers that are obviously overweight. You’ve heard people talk: that fat, glasses wearing, backpack toting guy MUST be a geek! Even if you don’t wear glasses and tote a backpack with a laptop inside, if you’re plain overweight, people assume you have a high probability of being a computer geek!

So what can you do about it? At the MVP summit, I observed many things (unhealthy things) that I believe people can do to curb their diets and become healthier, leaner people.

So based on events at a geek conference, the MVP summit, Raymond assumes that unhealthy diets and lack of exercise could be at fault. Let me propose another theory based on my experience today on the exhibition floor of the SD West conference.

As is common at conferences, several booths were giving out those ubiquitous one-size-fits-all X-Large t-shirts emblazoned with a flaccid attempt at being hip and witty. All of this is lost on me as I receive a shirt I will never wear, as it looks like a dress on me. At least it will make a good rag for cleaning the next spill on my kitchen floor.

Contrast this to when I head over to SourceGear’s booth. I had the great pleasure to meet the founder of SourceGear, Eric Sink, in person. He is the author of one of my favorite blogs, in which he writes insightful posts on running a software company and software development in general.

Interesting random connection to Eric: He and I discovered that we both lived in the same apartment complex in Spain, but at different times.

Back to the story. They are actually giving out shirts that I would consider wearing outside of a conference hall. He asks if I would like one, to which I reply, "Sure!". It’s his next question that takes me aback.

Which size would you like?

Uh...excuse me. What was that?

You see, SourceGear had shirts in all sizes! Not only that, they had an ingenious ploy to get everyone wearing them. They were giving out a Wii and said they would randomly walk around and give people wearing the t-shirt tickets for the Wii raffle. Near the end of the day, it seemed like everyone was wearing their shirt.

Then it occurred to me. Developers are so used to being fit into a single mold that we often don’t know better. We often do the same to our users, forcing them to conform to our software rather than conforming our software to how users really work.

In the keynote, David Platt gave several examples of sites that work and don’t work. Using the Starbucks website as an example, he did a search for one in his area. It didn’t find one within the selected radius, 5 miles, and gave him an empty search result page asking him to search again. Come again?

As Platt points out, when you’re in need of a coffee, would you ask your friend, “Hey, where are all the Starbucks within five miles from here?”, or would you ask, “Hey, where is the nearest Starbucks?” and then decide whether to go based on that information? You do the latter, and so should our software.

But I digress.

Getting back to my epiphany. It occurred to me that maybe developers are fat, because we’re all trying to fit into that X-Large conference t-shirt. Perhaps, if more companies focus on the user like SourceGear, we’ll see thinner developers wearing shirts that actually fit. Just maybe.

If so, remember to thank Eric for giving developers a reason to not get fat.

DotNetKicks Going Open Source

Gavin Joyce, creator of DotNetKicks, has decided to open the source for the site and allow the community to help out in implementing features and bug fixes.

This makes a lot of sense for a community site like DotNetKicks. Not only can the community build the content, but it can contribute to the actual feature set of the site as well!

Gavin has always been a bit progressive with this site, offering half of his AdSense revenue to those who submit stories (you have to configure your AdSense in your profile and DotNetKicks will show your AdSense ID 50% of the time).

I think this is a great idea and wish him much luck. I also look forward to being able to contribute a little bit here and there (donating heavily with code from Subtext of course).

Double Check Locking and Other Premature Optimizations Can Shoot You In The Foot

After reading Scott Hanselman’s post on Managed Snobism which covers the snobbery some have against managed languages because they don’t “perform” well, I had to post the following rant in his comments:

What is it that makes huge populations of developers think they’re working on a Ferrari when their app is really just a Pinto?

I’m writing a web app that pulls data from a database and puts it on a web page. I never use 'foreach' because I heard it’s slower than explicitly iterating a for loop.

In my time as a developer I’ve experienced too many instances of this Micro Optimization, also known as Premature Optimization.

Premature optimization tends to lead “clever” developers to shoot themselves in the foot (metaphorically speaking, of course). Let’s look at one common example I’ve run into from time to time—double check locking for singletons.

Double Check Locking Refresher

As a refresher, here is an example of the double check pattern.

```
public sealed class MyClass
{
    private static object _synchBlock = new object();
    private static volatile MyClass _singletonInstance;

    //Makes sure only this class can create an instance.
    private MyClass() {}

    //Singleton property.
    public static MyClass Singleton
    {
        get
        {
            if(_singletonInstance == null)
            {
                lock(_synchBlock)
                {
                    // Need to check again, in case another cheeky thread
                    // slipped in there while we were acquiring the lock.
                    if(_singletonInstance == null)
                    {
                        _singletonInstance = new MyClass();
                    }
                }
            }
            return _singletonInstance;
        }
    }
}
```

The premise behind this approach is that all this extra ugly code will wring out better performance by lazy loading the singleton. If it is never accessed, it never needs to be instantiated. Of course this begs the question, Why define a Singleton if it’s quite likely it’ll never get used?

The `Singleton` property checks the static singleton member for null. If it is null, it attempts to acquire a lock before checking for null again. Why the second null check? Well, in the time our current thread took to acquire the lock, another thread could have snuck in and initialized the singleton.

Note that we use the `volatile` keyword for the `_singletonInstance` static member. Why? Long story made short, this has to do with how different memory models can reorder reads and writes. For the current CLR you can ignore the volatile keyword in this case. But if you run your code on Mono or some other future platform, you may need it, so it doesn’t hurt to leave it in.

Criticisms or If this is fast, how much faster is triple check locking?

Jeffrey Richter, in his book CLR via C#, criticizes this approach (starting on page 639) as “not that interesting” (yes, he can be scathing!):

The double-check locking technique is less efficient than the class constructor technique because you need to construct your own lock object (in the class constructor) and write all of the additional locking code yourself.

The cost of initializing the singleton instance would have to be significantly more than the cost of instantiating the object used to synchronize access to it (not to mention all the conditional checks when accessing the singleton) to be worth it.

A Better Approach? The No Look Pass of Singletons

So what’s the better approach? Use a static initializer in what I call the No Check No Locking Technique.

```
public sealed class MyClass
{
    private static MyClass _singletonInstance = new MyClass();

    //Makes sure only this class can create an instance.
    private MyClass() {}

    //Singleton property.
    public static MyClass Singleton
    {
        get
        {
            return _singletonInstance;
        }
    }
}
```

The CLR guarantees that the code in a static constructor (implicit or explicit) is only called once. You get all that thread safety for free! No need to write your own error prone locking code in this case and no need to dig through Memory Model implications. It just works, unlike your Pinto, sorry, “Ferrari”.

See, sometimes you can have your cake and eat it too. This code, which is simpler and easier to understand, happens to perform better and requires one less object instantiation. How do you like them apples?

It turns out that this approach is also recommended for Java, as it was discovered that the double check locking approach wasn’t guaranteed to work.

What!? You’re Still Using Singletons?!

Now that I’ve gone through all this trouble to show you the proper way to create a Singleton, I leave you with this thought. Should a well designed system use Singletons in the first place, or is it just a stupid idea? That’s a topic for another time.

Please note that double check locking doesn’t only apply to Singletons. It just happens to be the place where it is most often seen in the wild.

Identicon Handler For .NET On CodePlex

Update: I’ve created a new NuGet Package for Identicon Handler (Package Id is “IdenticonHandler”) which will make it much easier to include this in your own projects.

A while ago, Jeff Atwood blogged about Identicons for .NET. An Identicon is an anonymized visual glyph that can represent an IP address. I likened it to a Graphical Digital Fingerprint.

The original concept and Java implementation was created by Don Park.

Afterwards, Jeff and Jon Galloway became excited by the idea and ported Don’s code to C# and .NET 2.0 and released it on his website.

This weekend, we’ve spent some time working out a few kinks and performance improvements and are proud to release version 1.1 on CodePlex.

Why CodePlex?

We chose CodePlex for this project because the codebase for this is extremely small, so the patch issue I mentioned in my critique, A Comparison of TFS vs Subversion for Open Source Projects, is not quite as large an issue.

We don’t expect this project to grow very large and have a huge number of releases. This code does one thing, and hopefully, does it well.

So in that respect, CodePlex seems like a great host for this type of small project. It is really easy to get other developers up and running if need be.

Having said that, I probably wouldn’t host a large project here yet based on the critique I mentioned.

I Want This Shirt For My Son

The shirt forgets to list a couple spells. Charm Person and Stinking Cloud.

It's Comments Like This That Keep Me Blogging

It’s comments like this that remind me why I enjoy blogging.

Holy shit!

I found this post whilst searching for POST timeout and thought it was a long shot for my problem. Well, it worked for me!

Thank you so much!!!

Not to mention that it serves to validate my previous point about Search Driven Development. It worked for this guy.

The comment is here in its original context.

Increase Productivity With Search Driven Development

With all the advances in software development in the past few years, I would have to point to Google and Google Groups as the two tools that provide the biggest productivity enhancements for me as a software developer. This fact is probably nothing new to any of you.

Search as a development tool is a phenomenon some are starting to refer to as Search Driven Development (not to be confused with Test Driven Development).

Let's face it, at the rate that new technology is being churned out these days, and given the huge size of many of these frameworks we use, it is impossible to learn everything up front. At some point, we have to stop RTFM’ing, put the documentation down, and start coding. And when we run into trouble, we thank our lucky stars that Google is there to save the day.

Wouldn’t it be great to have some of that search power integrated into your IDE? It turns out that Koders.com has done just that. They provide two free IDE plugins, one for Eclipse and one for Visual Studio .NET, on their website in the downloads section.

When you go to the site, there’s a little animation demonstrating the plugin. Click the View Again button if you missed it.

Here’s a screenshot I took of SmartSearch™ in action. After typing out the method name, a moment later, the result shows up.

The Smart Search feature is a bit Clippy like at times and sometimes exhibits a bit of lag, making it less useful than it could be. You may just want to turn it off and choose to use the plugin search box directly.

Though there is room for improvement, I think SmartSearch™ is a really interesting application of context based search and could be quite useful as a double check while writing code. Oh hey, there are already 100 implementations of this method. Let’s see how mine stacks up. Avert my eyes from the GPL licensed code!

Under the hood, these plugins make use of the Koders.com search engine. This engine directly indexes source control repositories and allows users to quickly search and browse through Open Source code. It includes a nice interface and provides all the information necessary (such as the license) so you can make an informed decision on whether to use the code or not. You can also choose to filter by language and license.

Given my interest in Open Source software, I had heard of Koders.com but didn’t know about their plugins till today, when I had lunch with Darren Rush, the CEO of Koders.com. Little did I know until Darren contacted me via my blog, Koders.com is based in Los Angeles! Darren turned me on to the term Search Driven Development.

Finally! A Los Angeles based company doing something really interesting in the Open Source space that isn’t part of “The Industry”. Very Cool!

As an aside, during our conversation, we wondered why L.A. doesn’t have anywhere close to the tech industry that the Bay Area does. We seem to have all the elements here, but not the community. I tend to think it’s because this area is dominated by the film industry. He pointed out that geography, combined with the horrible traffic, creates pockets of communities. Probably a bit of both.

But I digress.

In any case, lest you think I’m shilling (Yeah, he bought lunch, but I can’t be bought that cheaply!), the other player in this field that I’ve heard about (apart from the obvious 800lb gorilla) is Krugle.com. While their site has a nice color scheme and look and feel, I found Koders easier to use because of its similarity in layout to Google (did I mention Koders is L.A. based?).

I think sticking to the Google Search look (searchbox in the middle) is a smart move for any search site. As soon as I see such a site, I know what to do and where to type. Krugle has a beta plugin to Eclipse, but doesn’t seem to have anything for Visual Studio.NET yet.

Gain Control Of Your Control State

Some people think the `ViewState` is the spawn of the devil. Not one to be afraid of being in bed with the devil, I feel a tad bit less negative towards it, as it can be very useful.

Still, it has its share of disadvantages. It sure can get bloated. Not only that, but disabling ViewState can wreak havoc with the functionality of many controls.

This is why ASP.NET 2.0 introduces the control state. The basic idea is that some state should be considered the data for the control, while other state is necessary for the control to function. Take the contents of a GridView, for example. The control doesn’t absolutely need this data persisted across postbacks to function properly. You could choose to reload it from the database, `Cache`, or `Session`.

In contrast, consider the state of the selected node in a `TreeView`. This is state that is necessary for the control to function properly across postbacks.

Unlike the `ViewState`, the control state isn’t implemented as a property bag. You have to do a little bit of extra work to make use of it. Namely, there are two methods you have to implement in your custom control.

• `LoadControlState` – Restores the control state from a previous page request. ASP.NET calls this method passing in the control state as an object to this method.
• `SaveControlState` – Saves any changes to control state since the last post back. You need to return the state of the control as the return value of this method. ASP.NET will store it.

Your custom control must also register the fact that it needs the control state by calling `Page.RegisterRequiresControlState`.

A Demonstration That Makes This All Clear As Mud

I’ve put together a simple control to demonstrate the control state. Now before I go any further, I must warn you not to copy and paste this implementation. This implementation is designed to clarify how the control state works. I will present another implementation that describes a safer approach, which you can feel free to copy and paste. You’ll see what I mean.

```public class ControlStateDemo : WebControl
{
public int ViewPostCount
{
get { return (int)(ViewState["ViewProp"] ?? 0); }
set { ViewState["ViewProp"] = value; }
}

public int ControlPostCount
{
get { return controlPostCount; }
set { controlPostCount = value; }
}

private int controlPostCount;

protected override void OnInit(EventArgs e)
{
//Let the page know this control needs the control state.
Page.RegisterRequiresControlState(this);
base.OnInit(e);
}

protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
ViewPostCount++;
ControlPostCount++;
}

protected override void Render(HtmlTextWriter writer)
{
writer.Write("<p>ViewState: " + this.ViewPostCount + "</p>");
writer.Write("<p>ControlState:" + this.ControlPostCount + "</p>");
base.Render(writer);
}

protected override void LoadControlState(object savedState)
{
int state = (int)(savedState ?? 0);
this.controlPostCount = state;
}

protected override object SaveControlState()
{
return controlPostCount;
}
}```

This control has two properties. One backed by the `ViewState` and the other backed by a private member variable. Notice that we register this control with the Page in the `OnInit` method.

In the `OnLoad` method, we increment each property. For demonstration purposes, we need these properties to change on each postback, and this is as good a method as any.

In the `Render` method, we simply output the values of the two properties. So far so good, eh?

Now we get to the `LoadControlState` method. This method is called by ASP.NET early in the control lifecycle (after `OnInit` but before `LoadViewState`) in order to provide your control with the saved control state from the previous request.

In this case, we can cast this value to an int and set the control’s state (the value of controlPostCount) to this value.

The `SaveControlState` method provides ASP.NET the data to store in the control state as the return value. In this example, we return the value of `controlPostCount`. This is how we knew we could cast the value to an `int` in `LoadControlState`.

Now if I drop this control onto a page with a Button control, let’s see what happens after a few postbacks.

As expected, both values increment, as they are persisted across postbacks. But what happens if we disable ViewState on the page and click the button a few more times?

As you can see, we retain the control state, while the `ViewState` is disabled.

I am so glad you asked! In this example, I inherited from `WebControl`, but what if I inherited from `TreeView`, or some other control that made use of the control state? My implementation of `LoadControlState` and `SaveControlState` pretty much obliterates the control state for the base class.

The class I wrote here is intentionally simple to show you no real magic is going on. Let’s demonstrate the proper way to save and load the control state by creating a class that inherits from this control.

```public class SubControlStateDemo : ControlStateDemo
{
public int AnotherCount
{
get { return this.anotherCount; }
set { this.anotherCount = value; }
}

private int anotherCount;

protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
AnotherCount++;
}

protected override void Render(HtmlTextWriter writer)
{
base.Render(writer);
writer.Write("<p>AnotherCount:" + this.AnotherCount + "</p>");
}

protected override object SaveControlState()
{
//grab the state for the base control.
object baseState = base.SaveControlState();

//create an array to hold the base control’s state
//and this control’s state.
object thisState = new object[] {baseState, this.anotherCount};
return thisState;
}

protected override void LoadControlState(object savedState)
{
object[] stateLastRequest = (object[]) savedState;

//Grab the state for the base class
//and give it to it.
object baseState = stateLastRequest[0];
base.LoadControlState(baseState);

this.anotherCount = (int) stateLastRequest[1];
}
}
```

In this control, we inherit from the `ControlStateDemo` control I wrote earlier and add a new property called `AnotherCount`. The main thing to focus on here is our new implementation of `SaveControlState` and `LoadControlState`. We now take great pains to make sure that the base control gets the value it is expecting.

In `SaveControlState`, the first thing we do is grab the control state from the base control by calling `base.SaveControlState`. As you recall, this holds the value for the private member `controlPostCount`.

Since we want to add our own private member, `anotherCount` to the control state, we create an array to store both values and then return this array to the caller.

Within the `LoadControlState` method, we know we’re going to be passed in an object array and that the first element of the array is the control state for our base class. So in that method, we grab the first element and pass it to the method call `base.LoadControlState`, thus giving the base class what it expects to receive for its control state.

We then grab the second element, which is our control state, and set `anotherCount` to this value.

Let’s look at a screenshot of the result in action. Looks like everything is humming along nicely.

I would recommend using this approach anytime you implement control state in a custom control because you never know when you might override the control state for a base class.


Dashes Vs Underscores In URLs

I used to think the choice of using dashes vs underscores to separate words in an URL was simply a matter of personal preference. Nothing more than a religious choice.

Personally, I preferred underscores because I felt dashes intruded upon the words while underscores stayed at the bottom, out of the way. So much so that I had originally made underscores the default URL scheme for friendly URLs in Subtext and was using that myself.

It wasn’t till recently that I learned this debate has already been resolved. Years ago.

I wouldn’t say resolved, really. Just that there appears to be a really good reason to choose dashes over underscores. Apparently, Google treats the dash as a word separator, while the underscore is perceived to be part of the word. Something to do with being able to search for C++ style constants SUCH_AS_THIS in the title of a post.

The question is, does this still apply today? Does it even matter?

To be on the safe side, I'm falling in line for now. Or rather, in dash. What are your thoughts?


Quick CSS Optimization Tip

When you see the following in your CSS

```div
{
margin-top: 10px;
margin-right: 20px;
margin-bottom: 10px;
margin-left: 20px;
}```

It makes sense to convert it to this.

```div
{
margin: 10px 20px;
}```

It’s cleaner and takes up less space.

There are a lot of ways you can optimize your CSS in this way. I'm not talking about compression, but optimization.

Today, The Daily Blog Tips site linked to a website called CleanCSS that can perform many of these optimizations for you. For example, feed it the above CSS and it will make that conversion. Very nice!


Year of the Starving Pig?

In January, I wrote that according to the Chinese Zodiac, this is the Year of the Golden Pig. According to folklore, this is a special event that occurs once every 600 years and brings great fortune to babies born during the year.

As an aside, not many realized that the Chinese New Year didn’t start until February 18 this year. So your January baby doesn’t make the cut. But don’t worry, read on.

Many historians and others have discounted the Year of the Golden Pig as mere legend. No historical records from 600 years ago point to the significance of this year.

That hasn’t stopped the baby boom from soaring on in places like China and Korea. Historical fact will not stop the masses from having their golden piglet.

However, my friend Walter had an interesting observation. All of these extra children that will be born as part of this boom are competing for the same scarce resources. As they grow up, they’ll all be competing in entrance exams for limited spots in the various prestigious universities. And when they graduate, they’ll be competing for a limited set of jobs.

So will the Year of the Golden Pig actually be the Year of the Starving Pig?

It will be interesting to see what happens in these countries.

Improve Your Blog's Reach With These 20 Essential Web Utilities

You’ve spent hours setting up your blog on your favorite blog platform just right. Good for you! So how do you maintain your blog so that it remains at the top of its game?

It turns out, there are a large number of free web utilities useful for improving your blog’s effectiveness outside of your blog engine.

Tools 4 Argentina - Some Rights Reserved

Every time I come across one of these useful utilities, I bookmark it to my Blog Utilities folder. This folder is my blogger utility belt, full of tools to meet every need when composing blog posts or optimizing my site for bandwidth and speed.

I’ve chosen to focus on web utilities as they are quick and easy to use — no installation required. This is not a comprehensive list by far, as I am sure there are many others out there. Let me know what I missed in the comments.

Optimization

The first three tools in this category are all website speed testers, but each offers something different, so I’ve listed them all.

1. Web Page Analyzer - This tool is fairly comprehensive and may be the only one you really need for website speed analysis. Includes stats on every file and object downloaded and provides approximate download times for different connection rates.
2. OctaGate Site Timer - I didn’t find this one to be as accurate as the first one because it attempted to download images referenced in my CSS files that were commented out. However, it provides a nicer graphical output that marks when the request was started, when it connected, and the time when the first and last bytes were received. It also highlights 404 errors in red, which is handy for finding missing files or bad URLs.
3. HttpZip Compression Checker - Use this to check whether files from your website are being served with HTTP Compression on or off. Thanks to Jeff Atwood for pointing me to this one (among others).
4. Dynamic Drive Online Image Optimizer - If you’re hardcore about your image compression, you should check out Ken Silverman’s Utility Page. But if you’re like me and just want a quick and easy web based utility for compressing images, this is your site. It can convert gif, jpg, and png files up to 300kb. It will also do conversions to other image types and display multiple results at various color levels and compression rates so you can pick the best one for your needs.
5. Javascript Minimizer - This is an extremely simple tool. Paste in your javascript, click the button, and reduce the size of your scripts.
6. CSS Minimizer - Just like the Javascript minimizer, but for Cascading Style Sheets.

Statistics and Search Engine Optimization

Get a handle on your web traffic with these sites.

1. Website Grader - Gives your website a score in an attempt to measure its effectiveness. Shows your PageRank, meta info, domain info, Technorati stats, etc... It generates a really neat report card for your blog.
2. Google Webmaster Central - An absolute essential tool for those who care about users finding their site via Google. Especially pay attention to the Webmaster tools which include Sitemap support.
3. Google Analytics - A free and full featured analytics package for your blog or website. Add some javascript to your page template and you’re in information overload land, but done up with nice charts and graphs.
4. 103bees Search traffic analysis - Unlike other stats packages, this one is focused purely on natural search engine traffic analytics. What are users searching for when they land on your site? This is a nice complement to Google Analytics. And it’s free! One caveat is that the script can be slow sometimes, which can play havoc with CSS based designs.
5. Technorati - It’s so obvious, I almost forgot to list it. Register, claim your blog, and find out who is linking to you. You can add a little script to your blog that displays how many other posts link to yours.
6. Alexa.com - The beauty of this site is that you can easily compare your website’s reach with several other websites on a single graph, thus starting a huge pissing contest.

Spicing Up Your Posts With Images

1. Wikipedia Public Domain Image Resources - Images can bring a blog post to life. But rather than worrying about receiving a cease and desist letter for misusing copyrighted material, why not use images that are part of the Public Domain? This page is chock full of links to resources for free images.
2. PicFindr - Despite its “Oh so Web 2.0” name (must everything end in a consonant plus “r” these days? At least it doesn’t have BETA anywhere), this tool is really great. It will search a set of free photo sites, such as Stock.xchng, for free photographs.
3. Flickr Creative Commons - Still haven’t found that picture that just hits the point you’re trying to make? Try the Flickr Creative Commons search engine. Remember, these photos are not public domain. You do need to abide by the license. But for the most part, the licenses are pretty lenient for you to reuse the photos in your own blog.
4. Open Clip Art Library - Maybe you want your image to be iconic rather than photographic. Check out this free Public Domain clip art library to find an icon for every occasion.
5. WP Clipart - Another Public Domain clip art library, though the quality tends to be less than the Open Clip Art Library.

Writing Well

1. Cliche Finder - Try to avoid using too many tired old cliches by running your post through this web based utility.
2. HallwayTesting.com - This is a fantastic site for basic hallway usability testing. Just submit your URL and real people will post comments with criticisms and praise for your site. The more specific you are about what you want testers to focus on, the better quality the feedback. Try it out.

Syndication

1. FeedBurner - This one gets special mention because it fits in so many categories. It’ll help optimize your bandwidth by serving your RSS feeds for you. Also, it includes a basic free stats package as well as a premium stats package that can replace Google Analytics. FeedBurner can also provide features your blogging platform might not, such as subscribing to RSS Feeds via email.

Special Mention

As I mentioned before, this post is focusing on web utilities. However, these two utilities are so essential, I just had to break my own rule and list them.

1. Firebug Firefox Add-on - Ok, this breaks my rule as it isn’t technically a website, but it is a Firefox browser plugin so it might as well be a website, right? Well, in any case, this tool is too important not to mention. It has it all. It can be used to time your website’s download speeds, view the underlying HTTP information, and measure the size of each file. Add to that a great Javascript debugger and CSS and DOM explorer. This is a must have tool.
2. Windows Live Writer - I broke my rule again. This tool won’t help you write better content, but it’ll help you have fun doing it. Also, all the plugins available make it easy to add a little extra oomph to your blog posts by including Flickr images, formatted code, etc...

Again, I’m sure I missed someone’s favorite tool here, so please let me know what I missed in the comments. And if you do, let me know which tool you’d remove from this list in order to add yours. I’ll try following up at a later time with an improved list.

Technorati Tags: Blogging, Utilities

Who Tests The Tests?

Leon Bambrick (aka SecretGeek) has started a series on Agile methodologies and Test Driven Development (TDD) in which he brings up his own various hidden objections to TDD in order to see if his prejudices can be overcome.

One of the questions he asks is an age old argument against TDD: Who tests the tests? Leon sees potential for a stack overflow: given that the tests are code, and that according to TDD code should be tested, shouldn’t there be tests for the tests?

The short answer is that the code tests the tests, and the tests test the code.

Huh?

Testing Atomic Clocks

Let me start with an analogy. Suppose you are travelling with an atomic clock. How would you know that the clock is calibrated correctly?

One way is to ask your neighbor with an atomic clock (because everyone carries one around) and compare the two. If they both report the same time, then you have a high degree of confidence they are both correct.

If they are different, then you know one or the other is wrong.

So in this situation, if the only question you are asking is, "Is my clock giving the correct time?", then do you really need a third clock to test the second clock and a fourth clock to test the third? Not at all. Stack Overflow avoided!
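The same two-clock check is what happens in TDD. Here is a hypothetical sketch (`Calculator` and its test are invented for illustration):

```public static class Calculator
{
public static int Add(int a, int b) { return a + b; }
}

public static class CalculatorTests
{
//The test checks the code. Running the test against a deliberately
//broken Add, and watching it fail, checks the test. Two clocks,
//each keeping the other honest.
public static void TestAdd()
{
if (Calculator.Add(2, 3) != 5)
throw new System.Exception("Add is broken (or the test is).");
}
}```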

Principle of Triangulation

This really follows from the principle of triangulation. Why do sailors without electronic navigation systems bring three sextants with them on board a ship?

With one sextant, you could rely on the manufacturer’s testing and assume its measurements are correct, but wear and tear over time (not unlike the wear and tear a codebase suffers over time) might make the measurements slightly off.

If you take measurements with two sextants, then you have enough information to decide whether both are measuring accurately or one is not. However, in that situation, you can’t tell which measurement is the correct one.

So we take out a third sextant. The two sextants whose measurements agree most closely are most likely correct. Accurate enough to cross the Atlantic.


Custom Configuration Sections in 3 Easy Steps

Are you tired of seeing your configuration settings as an endless list of key value pairs?

```<add key="key0" value="value0" />
... ```

Would you rather see something more like this?

```<MySetting
fileName="c:\temp"
someOtherSetting="value" />```

Join the club. Not only is the first approach prone to typos (`AppSettings["tire"]` or `AppSettings["tier"]` anyone?), too many of these things all bunched together can cause your eyes to glaze over. It is a lot easier to manage when settings are grouped in logical bunches.

A while back Craig Andera solved this problem with the Last Configuration Section Handler he’d ever need. This basically made it easy to specify a custom strongly typed class to represent a logical group of settings using Xml Serialization. It led to a much cleaner configuration file.

But that was then and this is now. With ASP.NET 2.0, there’s an even easier way which I didn’t know about until Jeff Atwood recently turned me on to it.

So here is a quick run through in three easy steps.

Step 1 - Define your Custom Configuration Class

In this case, we’ll define a class to hold settings for a blog engine. We just need to define our class, inherit from System.Configuration.ConfigurationSection, and add a property per setting we wish to store.

```using System;
using System.Configuration;

public class BlogSettings : ConfigurationSection
{
private static BlogSettings settings
= ConfigurationManager.GetSection("BlogSettings") as BlogSettings;

public static BlogSettings Settings
{
get
{
return settings;
}
}

[ConfigurationProperty("frontPagePostCount"
, DefaultValue = 20
, IsRequired = false)]
[IntegerValidator(MinValue = 1
, MaxValue = 100)]
public int FrontPagePostCount
{
get { return (int)this["frontPagePostCount"]; }
set { this["frontPagePostCount"] = value; }
}

[ConfigurationProperty("title"
, IsRequired=true)]
[StringValidator(InvalidCharacters = " ~!@#$%^&*()[]{}/;'\"|\\"
, MinLength=1
, MaxLength=256)]
public string Title
{
get { return (string)this["title"]; }
set { this["title"] = value; }
}
}
```

Notice that you use an indexed property to store and retrieve each property value.

I also added a static property named Settings for convenience.

Step 2 - Add your new configuration section to web.config (or app.config).

```<configuration>
<configSections>
<section name="BlogSettings" type="Fully.Qualified.TypeName.BlogSettings,
AssemblyName" />
</configSections>
<BlogSettings
frontPagePostCount="10"
title="You’ve Been Haacked" />
</configuration>
```

Step 3 - Enjoy your new custom configuration section

```string title = BlogSettings.Settings.Title;
Response.Write(title); //it works!!!```

What I covered is just a very brief overview to get you a taste of what is available in the Configuration API. I wrote more about configuration in the book I’m cowriting with Jeff Atwood, Jon Galloway, and K. Scott Allen.

If you want to get a more comprehensive overview and the nitty gritty, I recommend reading Unraveling the Mysteries of .NET 2.0 Configuration by Jon Rista.

Technorati Tags: Configuration, ASP.NET

Curb Your Enthusiasm Exonerates Wrongly Accused

Juan Catalan must be feeling “Pretty good. Pretty, pretty, pretty, pretty good.” His poor luck seemed to reach Larry David-ian proportions when he was accused of murder. But his luck took a turn for the better after more than five months in jail.

Catalan claimed to be at a Dodgers game with his daughter when the murder occurred. His defense attorney scoured TV footage of crowd shots from the game but could not find Juan. After learning that the show Curb Your Enthusiasm, starring Larry David who co-created Seinfeld, had taken footage at the ballpark that day (I think I remember this episode!), HBO allowed the attorney to search through their footage and he found a time-stamped shot of Catalan in the outtakes.

HBO allowed Melnik to look through the footage, and he found a shot of Catalan with his 6-year-old daughter and two friends. The footage was time coded, confirming that Catalan was at the ballpark shortly before the time of the slaying 20 miles away in the San Fernando Valley.

“There he was in the outtakes,” said Gary S. Casselman, the attorney handling Catalan’s lawsuit. “He’s glad it’s over. It’s terrible to be in jail, and he thought he would never see his daughters again.”

I read this in the Los Angeles Times yesterday morning and laughed at his good fortune to be saved by a television show.

Catalan was not a fan of “Curb Your Enthusiasm” before his time in jail. “He is now,” Casselman said.

I bet he is. The full Los Angeles Times article is here.


Requirements and Specs Are Always Ambiguous

UPDATE: As an aside, it would probably be more accurate to say the FizzBuzz question is a Requirement. So where you read the term Spec, you can replace it with Requirement. Either way, the same thing applies. The only thing not ambiguous is the code. As they say, the code is the spec.

One last point, then I’m done with this topic of FizzBuzz and spec writing. In a recent post, I mentioned, tongue firmly in cheek, that the FizzBuzz “spec” has certain flaws. Now I admit I’m taking this out of context a bit to make a point. FizzBuzz is a simple interview question, not a spec, possibly intended to elicit this type of analysis from the candidate. Even so, I think there’s a good lesson to learn here.

My point was that all specs are merely rough approximations of the actual requirement. Specs are ambiguous, but software is not. Software doesn’t generally deal well with ambiguity. Change a random bit in memory and all hell breaks loose.

However, some of that was lost due to the extremely nitpicky point I made about the spec. So here’s another, still nitpicky, but a bit less so.

Every so called “correct” program written in the comments of Jeff’s blog had the following output.

```1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz```

But, doesn’t the following output meet the letter of the spec (the differences are on lines 5 and 10)?

```1
2
Fizz
4
5Buzz
Fizz
7
8
Fizz
10Buzz
11
Fizz
13
14
FizzBuzz
```

My point being, the spec is explicit about replacing numbers divisible by three with “Fizz”, but it doesn’t say to replace numbers divisible by five.
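A program that follows that letter-of-the-spec reading might look something like this (a sketch, printing just the first fifteen lines; it reproduces the second output above):

```public static class LiteralFizzBuzz
{
public static void Run()
{
for (int i = 1; i <= 15; i++)
{
if (i % 15 == 0) System.Console.WriteLine("FizzBuzz");
else if (i % 3 == 0) System.Console.WriteLine("Fizz"); //divisible by three: replaced, as the spec says
else if (i % 5 == 0) System.Console.WriteLine(i + "Buzz"); //divisible by five: the spec never said to replace
else System.Console.WriteLine(i);
}
}
}```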

Yes, I agree. Developers should not act like total logicians and nitpick every detail. Human language is inexact, and we have to deal with that fact of life. Unfortunately, software doesn’t have the same resiliency towards ambiguity. If this output were meant to be fed into another software system, this ambiguity would cause bad data, software crashes, who knows what calamity!

You might say I’m splitting hairs here. Of course I am because the compiler is going to split hairs. The Web Service I’m trying to call is going to split hairs. The HTML browser is going to try and not split hairs, but is going to ultimately fail. Software is all about splitting hairs.

Instead, we need to move beyond the spec and ask questions before writing code, during writing code, and after writing code. Do not be afraid to talk to the customer or customer representative. That’s all I was trying to say.

Thanks to Rob Conery who was trying to make this point in my comments, but it was lost on everybody. ;)

Burning My Feeds

UPDATE: You can now subscribe to my feed via email. This is a service offering from Feedburner. Sweet!

I’ve decided to make the jump and switch to Feedburner after the glowing recommendation from Jeff Atwood. However, being the paranoid sort, I decided to go ahead and pay for the MyBrand PRO feature.

This allows me to keep control over my feed by serving it from my own domain http://feeds.haacked.com/haacked/. Not that I don’t trust FeedBurner, but if I ever want to take my feed back in-house, it’ll be a lot easier for me to change a DNS setting than to wait for them to perform a 301 redirect back to me.

If you’re reading this in your aggregator, my guess is that everything is working fine. If not, please do let me know if you encounter any problems. Flame on!

Why Can't Spec Writers Write...Specs?

I know, I know, you’d like to see the FizzBuzz discussion die a quick death, but trust me, this is an interesting point, or at least mildly amusing.

Sorry to revive the dead horse, but a comment on my blog brought up a very good point. In fact, I’m kicking myself for not noticing it myself; having been a math major, I love pointing out this type of minutiae.

In the original Fizz Buzz test, the functional spec asks the programmer to print the numbers from 1 to 100.

But as a commenter points out...

Why can’t spec writers write? Unless you mean integers, there are an infinite number of real numbers ’from 1 to 100’

Exactly! There is an infinite range of numbers between 1 and 100. The specification is technically not clear enough. Writing a program to the spec exactly would, well, be impossible.

This is exactly why I said the following in another comment...

I still need to gather requirements! What platform must this FizzBuzz program support? Any performance requirements? Does the output need to be available over the web?...

Unfortunately, I missed the most important question I should have asked.

I assume you mean all integers from 1 to 100 inclusive, is that correct?

I know what you’re thinking. In cases like this, developers should be able to intuit what the client means. If a developer asks, “Do you mean integers or real numbers?”, that developer is being a smart ass.

But my point is still valid. If a client says, I want a CRM system, you may know exactly what a CRM system is, but it may be totally different from what they think a CRM system is.

This really highlights the difficulty of writing good requirements and a good spec. You don’t know the background of the person you’re handing off the document to.

What makes perfect sense in your mind might mean something different to the reader.

Perhaps it’s situations like this that lead 37Signals to advocate getting rid of functional specs altogether.

Whether you go that extreme or not is not so important as keeping the lines of communication open with your client. Never accept a requirement and functional spec at face value. Specs are always a poor approximation of what the client really wants. All specs are broken to one degree or another (though that doesn’t mean they are all useless). Ask for clarification. Keep the dialog going.

This is also one reason why Big Design Up Front (BDUF) can really bite you in the butt. These subtle things are missed all the time, even by thousands of software developers reading blogs. Having an iterative process, where you’re not on the hook for requirements gathered months ago that are now gathering dust, helps mitigate the risk of incomplete and inaccurate requirements.

Trying Out A New Site Design

Thought I should take advantage of my latest bout of insomnia and do a slight redesign of my website. My goal was to clean it up a bit so it looks less crowded and cluttered.

I also removed the Flickr images because the script was slow and dragged down my site’s load time. I didn’t think it added a whole lot anyways. I may look into creating a server-side Flickr control later.

Here’s a screenshot, in case you’re reading this in an aggregator.

Let me know what you think of the design. My next step will be to focus more on usability. If there’s anything that annoys you about my site (here’s where Jeff will chime in) do let me know.

Start++ Is All That And Then Some

Update: I have an even better startlet for stopping and starting services in my comments.

If you’re running Vista, run, don’t walk, and go download and install Start++ (thanks to Omar Shahine for turning me on to this). Make it the first thing you do. Many thanks to Brandon Paddock who developed this nice little tool. He describes the tool in this post.

I have a message for Start++ from the Start menu. “You complete me!”.

Ok, terribly corny jokes aside, it’s the little things that save me lots of time in the long run. For example, starting and stopping SQL Server is kind of annoying for me on Vista. Here’s my typical workflow.

1. Hit the Windows key, type in cmd
2. Type net stop mssql
3. Doh! System error 1060 occurred. Right, I need to be an administrator.
4. Grab the trackball
5. Click on the Start menu
6. Right-click Command Prompt
7. Click Run as administrator and accept the UAC prompt
8. Now type net stop mssql again.

Is your hand hurting by now? Because mine is.

Of course, I’m an idiot. Or, I was an idiot. Now, I’ve mapped the Start++ keywords startsql and stopsql to automatically run the commands I need with elevated privileges.

Click for larger image.

Notice you can check the Run elevated checkbox for any command. Yes, I get the UAC prompt (Yes, I still have that sucker on), but that’s not such a big deal to me. Now my workflow is reduced to:

1. Hit Windows Key, type in stopsql
2. Hit the Left Arrow Key and Enter when the UAC prompt comes up.

Booya!

For your convenience, I’ve exported the startsql and stopsql “startlets” and put them on my company’s tools site here. I figure this one alone saves me a few seconds every half hour.

If you are using a named instance of SQL Server, you will need to change the argument in the Arguments column like so:

`/C "net start mssql$NameOfInstance"`

I have a few hundred or so startlets I can think of adding. Happy shortcutting!


Replacing Recursion With a Stack

In Jeff Atwood’s infamous FizzBuzz post, he quotes Dan Kegel, who mentions:

Less trivially, I’ve interviewed many candidates who can’t use recursion to solve a real problem.

A programmer who doesn’t know how to use recursion isn’t necessarily such a bad thing, assuming the programmer is handy with the `Stack` data structure. Any recursive algorithm can be replaced with a non-recursive algorithm by using a `Stack`.

As an aside, I would expect any developer who knew how to use a stack in this way would probably have no problem with recursion.

After all, what is a recursive method really doing under the hood but implicitly making use of the call stack?

I’ll demonstrate a method that removes recursion by explicitly using an instance of the `Stack` class, and I’ll do so using a common task that any ASP.NET developer might find familiar. I should point out that I’m not recommending you should or shouldn’t do this with methods that use recursion. I’m merely pointing out that you can.

In ASP.NET, a web page is itself a control (i.e. the `Page` class inherits from `Control`) that contains other controls. And those controls can contain yet other controls, thus creating a tree structure of controls.

So how do you find a control with a specific ID that could be nested at any level of the control hierarchy?

Well the recursive version is pretty straightforward and similar to other methods I've written before.

```
public Control FindControlRecursively(Control root, string id)
{
    Control current = root;

    if (current.ID == id)
        return current;

    foreach (Control control in current.Controls)
    {
        Control found = FindControlRecursively(control, id);
        if (found != null)
            return found;
    }
    return null;
}
```

The recursion occurs when we call `FindControlRecursively` within this method. Essentially, what happens when we call that method (and this is a simplification) is that our current execution point is pushed onto the call stack and the runtime starts executing the code for the inner method call. When that method finally returns, our place is popped off the stack and execution continues from there.

Rather than try to explain further, let me just show you the non-recursive version of this method using a `Stack`.

```
public Control FindControlSansRecursion(Control root, string id)
{
    // Seed the stack with the root control.
    Stack<Control> stack = new Stack<Control>();
    stack.Push(root);

    while (stack.Count > 0)
    {
        Control current = stack.Pop();
        if (current.ID == id)
            return current;

        foreach (Control control in current.Controls)
        {
            stack.Push(control);
        }
    }

    // Didn't find it.
    return null;
}
```

One thing to keep in mind is that both of these implementations assume we won’t run into a circular reference problem, in which a child control contains one of its own ancestors.

For the `System.Web.UI.Control` class, we are safe in making this assumption. If you try to create a circular reference, a `StackOverflowException` is thrown. The following code demonstrates the point.

```
Control control = new Control();
// This line will throw a StackOverflowException.
control.Controls.Add(control);
```
If the hierarchical structure you are using does allow circular references, you’ll have to keep track of which nodes you’ve already seen so that you don’t get caught in an infinite loop.
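If I were sketching that bookkeeping outside of ASP.NET, it might look like this (a Python sketch with a hypothetical `Node` type standing in for `Control`; the visited set is the extra piece that makes the traversal cycle-safe):

```python
class Node:
    """Hypothetical node type standing in for Control in this sketch."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.children = []

def find_node(root, node_id):
    """Iterative search that survives cycles by remembering visited nodes."""
    stack = [root]
    visited = set()
    while stack:
        current = stack.pop()
        if id(current) in visited:
            continue  # already seen this node; skip it to avoid looping forever
        visited.add(id(current))
        if current.node_id == node_id:
            return current
        stack.extend(current.children)
    return None
```

Identity (via `id()`) rather than equality is used for the visited set, since two distinct nodes could plausibly compare equal while still both needing a visit.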

A Comparison of TFS vs Subversion for Open Source Projects

We’ve been having an internal debate on the Subtext mailing list over the merits of SourceForge vs Google Code Project Hosting vs CodePlex. Much of the discussion hinges on the benefits of Subversion for open source projects when compared to Team Foundation Server (TFS).

Before I begin, I do not mean for this to devolve into a religious argument. This is merely my critique from the perspective of running an Open Source project. I personally think both are fine products and both probably work equally well in the corporate environment.

In TFS’s favor:

• Ease of use. For developers with a background in Visual SourceSafe or SourceGear Vault, the interface to TFS will be familiar. Subversion has more of a learning curve for these developers, though this is mitigated by my suspicion that a large percentage of open source developers already use CVS and SVN.
• Work item integration is sweet. I’ve been contributing some code to the SubSonic project and I actually love the work item integration in VS.NET. It’s pretty nice to be able to review and close work items while working on the code.
• Shelving is great. Certainly nothing stops you from doing something like this in Subversion using conventions, but I like the syntactic workflow sugar this provides.

In Subversion’s favor:

• Anonymous access. Users who want to look at the code, view its change history, and update their local copy to the latest version can do so from the convenience of their favorite Subversion client. This is much more cumbersome with TFS.
• Patch submission. This goes hand in hand with anonymous access. Users without commit access can have Subversion generate patch files of their changes and submit them. This makes it really easy for the casual contributor to quickly submit a patch, and easy for the project team to apply contributions to the source. This is a huge benefit to the project. Unfortunately, with CodePlex you either grant commit access or you don’t. If you don’t, it’s a pain for users to submit patches and a pain for the project team to apply them. Just ask Rob Conery what happens if you give commit access too freely.
• Offline support. Regardless of what Jeff says, offline mode does matter for many applications. For example, sometimes I have to connect to an obnoxious VPN that destroys my general internet connectivity. It’s nice to be able to connect, get latest, disconnect, work, connect, commit changes, disconnect. Try that with TFS.

Again, as source control systems, I believe they are both great systems. But for the needs of an open source project, I feel that Subversion has advantages. As far as I understand, TFS was designed as an enterprise source control system. However, the needs of the enterprise are often different from the needs of an Open Source team.

Subversion, itself an open source project, was self-hosted during its own development (once it became stable enough), so it is well suited to open source development.

If Codeplex supported Subversion, I would probably want to move Subtext over in a heartbeat. If you feel the same way I do, please vote for the work item entitled Subversion Support (SVN).

It looks like a lot of people would like to see this as well as it is the top vote getter on the Codeplex work item site.

And before you rail on me, asking why Microsoft would ever consider such a move. Isn’t CodePlex a showcase for TFS and open source projects built on Microsoft technology?

A member of the Codeplex team informed me that Codeplex is the home for any Open Source project - on any and all platforms. In fact, they do now host a few non-Microsoft projects. Of course their dependency on TFS does naturally limit the types of projects that would host there.

How's This For Tech Support?

Micah Dylan (CEO of VelocIT) writes about a tech support call he had with Comcast, the local cable company.

To sum up (though you should go read it), Micah can’t log in to the Comcast website, and the tech support guy tells him to call Microsoft, claiming it’s a problem with the browser!

Comcast, really?! You expect your customer to call Microsoft and singlehandedly convince them to update their browser so it works with your website, rather than following the lead of millions of other websites and making your website work with the browser? Really?