comments edit

Rob Conery just announced that Beta 1 of SubSonic 2.0 is ready for your immediate data access needs. He’s looking for beta testers (open to anyone and everyone) to make sure this release is rock solid.

I may attempt to claim a significant contribution, but do not believe me. I only contributed a teeny-tiny amount of code to this release.

I am using a small bit of Subsonic in a current project (just using it to generate Stored Procedure wrappers since the existing database already has a legacy data model and stored procedures to work with).

While I’m talking about release dates for open source projects, I should mention that Subtext 1.9.5 will be released soon and afterwards we’ll turn our full focus to getting Subtext 2.0 out the door. I’ve made some progress on 2.0 while working on the 1.9 branch, so hopefully it will follow 1.9 shortly.

My cohorts and I finished our first draft of the book we’re working on, so I should hopefully have more time to work on Subtext. That is, till the kid arrives.

comments edit

I think it’s time to start a video collection of amazing talents people acquire when they have too much time on their hands. This one must surely qualify. It’s worth two minutes of your time to check it out.

Dice Stacking Video on YouTube

Found via my doppelganger, the other Phil Haack.

blogging comments edit

Technorati recently released their latest State of The Blogosphere report (renamed to something about the Live Web to avoid confusion with the Dead Web) chock full of statistics and pretty graphs.

This would be interesting, if I were interested in anything other than myself. No, I don’t care about how other blogs are doing. I only care about Me Me ME!

How is MY Blog doing?

To find out I could check on some external sources. For example, Alexa.com shows that my site has experienced steady growth in the past three years (click on the chart to see the actual report page).

Alexa Graph of Haacked.com over 3 years

But lest I let that go to my head, let’s compare my site’s reach with my friend Jeff’s using Alexa’s comparison tool.

Hmmm, it may be high time I contrive my own crowd pleaser FizzBuzz post.

Moving on, let’s see what Technorati has to say.

Haacked.com on Technorati - Rank 6358 (1276 links from 473 blogs)

Wow. 6358 is a big number! That’s good right? Oh, maybe not. But we can see that 473 blogs have provided 1276 links to my blog. I should hit these suckers up for a loan!

Let’s swing over to see what Feedburner says:

Subscribers: 3,339. Site Visitors: 1,334

It’d be nice to have just one score to look at. Let’s swing over to the Website Grader.

Website Grade: 97/100 Page Rank: 6

I could have saved some time by just going here first. Hey Ma! I got an A! Can I leave the cage?

Looking Inward

Well, if there’s one thing I’ve learned about happiness, it’s to look inward for it rather than relying on external validation. That way, you don’t have to let reality intrude on your carefully crafted world view. So let’s look at some internal statistics.

  • Posts - 1322
  • Comments - 2510
  • Spam Comments - 9818 (which is low because I periodically clean out the table)

Hmmm… I wonder what my five most popular posts are, based on Ayende’s formula.

Title Web Views
Video: The Dave Chappelle Show 169,398
PHOTO: When Nerds Protest The RNC 81,353
Year of the Golden Pig 60,316
Response.Redirect vs Server.Transfer 54,076
Using a Regular Expression to Match HTML 37,807

I won’t lie. It depresses me a bit to learn that my three most popular posts have nothing to do with technology. Not only that, the most popular post by a long shot is a skit about a family with an unfortunate last name. It’s a misspelling of a horrible racial epithet, which happens to bring a lot of bad-spelling racists in search of god knows what.

What the Numbers Don’t Say

Well, all these numbers are fine and good, but they can’t measure the enjoyment I get out of blogging. Nor can they measure the satisfaction that some readers (any reader?) get from reading my posts. At least not until someone builds Satisfactorati or Satisfactorl.icio.us.

The numbers may not support my completely self-centered, egocentric view, but when have vanity and a self-inflated ego ever been subdued by so-called “facts”?

So what is the state of your blog?

This post is a refresh to my Blogging Is Pure Vanity post from way back when.

comments edit

Jeff Atwood writes a great summary of Open Source Licenses. As far as I’m concerned, there are really only four software licenses to worry about (open source or otherwise).

  1. Proprietary - The code is mine! You can’t look at it. You can’t reverse engineer it. Mine Mine Mine!
  2. GPL - You can do whatever you want with the code, but if you distribute the code or binaries, you must make your changes open via the GPL license.
  3. New BSD - Use at your own risk. Do whatever the hell you want with the code, just keep the license intact, credit me, and never sue me if the software blows your foot off. The MIT license is a notable alternative to the New BSD and is very very similar.
  4. Public Domain - Do whatever you want with the code. Period. No need to mention me ever again. You can forget I ever existed.

Yes, there are many more licenses, but I think you’ll do just fine if you just stick with these four. (Note, I am not a lawyer, take this advice at your own risk and never ever sue me. Ever.)

Of course, this really is focused on software, what about the content of your blog, or sample code in your blog?

For small code snippets in your blog, I recommend either explicitly releasing the samples to the Public Domain or picking the New BSD license.

UPDATE: I’ve updated this section based on feedback. Creative Commons is a poor choice for source code.

The tricky part in my mind is that there are two potential uses for source code snippets in a blog.

For example, someone may want to quote the code in their own blog post. In that case, I see the code as being content, for which CC might be appropriate. The other use is including the code in an application. Then it really is source code, and CC is not appropriate.

In any case, my source code snippets are released to the Public Domain unless otherwise stated. I ask only that you reference the blog post you got the code from, though even that is not required.

Note, except in the case of releasing content to the Public Domain, if you choose to license your code using an Open Source License or license your content using a Creative Commons license, it does not mean you give up your copyright to the material. You still own the copyright. The license just lets people know that they may make use of your content and what restrictions are in place. That is where the Some Rights Reserved phrase commonly associated with Creative Commons content comes from, as opposed to All Rights Reserved.

Also, keep in mind that you can choose to license code snippets in your blog differently from your blog’s content. Many people do not want to share their blog content, but do want to share code snippets. Just make it clear in your copyright notice.

If you want to know more about software licensing, check out my multi-part series on copyright law and software licensing for developers:

code, tech comments edit

Just something I noticed today. A lot of people (I may even be guilty of this) publish their email addresses on the web using the following format:

name at gmail dot com

Substitute gmail dot com with your favorite email domain.

The problem with this approach is that it is trivially easy to harvest email addresses in this format with Google.

Harvest

First, do a search for the following text (include the quotes):

"* at * dot com"

Now, all you need to do is run a regular expression over the results. For example, using your favorite regular expression tool, search for this:

(\w+)\s+at\s+(\w+)\s+dot\s+com

and replace with this:

$1@$2.com

Now before you blame me for giving the spammers another tool in their arsenal, I would be very surprised if spammers aren’t already doing this. I highly doubt I’m the first to think of it.
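For the curious, the whole search-and-replace can be sketched in a few lines of C# (the class name and the input string are made up for illustration):

```csharp
using System;
using System.Text.RegularExpressions;

public class Harvester
{
    // Rewrites "name at example dot com" into "name@example.com",
    // the same transformation described above.
    public static string Deobfuscate(string text)
    {
        return Regex.Replace(
            text,
            @"(\w+)\s+at\s+(\w+)\s+dot\s+com",
            "$1@$2.com",
            RegexOptions.IgnoreCase);
    }

    public static void Main()
    {
        Console.WriteLine(Deobfuscate("contact: jane at example dot com"));
        // contact: jane@example.com
    }
}
```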

So what is a better way to communicate your email address without making it susceptible to harvesting? You could try mish-mashing your email with HTML entity codes. For example, when viewed in a browser, the following looks exactly the same as name at gmail dot com.

&#110;ame &#97;t gm&#97;il &#100;ot com

The key is to somewhat randomly replace characters with entity codes, so that we all don’t use the exact same sequence. If we all replaced every letter with its corresponding entity code, it would be trivially easy to farm.

But by introducing some randomness, it becomes a lot more difficult to farm these emails. It’s possible, but would take more technical chops and computing power than the technique I just demonstrated.
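A minimal sketch of that randomized encoding in C# (the method name and the fifty-percent replacement rate are my own choices for illustration, not a standard):

```csharp
using System;
using System.Text;

public class EmailObfuscator
{
    static readonly Random random = new Random();

    // Replaces roughly half the characters with their HTML entity
    // codes, chosen at random, so every obfuscated address differs.
    public static string Obfuscate(string email)
    {
        StringBuilder result = new StringBuilder();
        foreach (char c in email)
        {
            if (random.Next(2) == 0)
                result.AppendFormat("&#{0};", (int)c);
            else
                result.Append(c);
        }
        return result.ToString();
    }

    public static void Main()
    {
        // Renders as "name at example dot com" in a browser.
        Console.WriteLine(Obfuscate("name at example dot com"));
    }
}
```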

comments edit

A while ago I read Steve McConnell’s latest book, Software Estimation: Demystifying the Black Art, which is a fantastic treatise on the “Black Art” of software estimation.

One of the key discoveries the book highlights is just how bad people are at estimation, especially single point estimation.

One of several techniques given in the book focuses on providing three estimation points for every line item.

  1. Best Case: If everything goes well, nobody gets sick, the sun shines on your face, how quickly could you get this feature complete?
  2. Worst Case: If your dog dies, your significant other leaves you, and your brain turns to mush, what is the absolute longest time it would take to get this done? In other words, there is no way on Earth it would take longer than this time, unless you were shot.
  3. Nominal Case: This is your best guess, based on your years of experience with building this type of widget. How long do you really think it will take?

The hope is that when development is complete, you’ll find that the actual time spent is between your best case and worst case. McConnell provides a quiz you can try out to discover that this is harder than it sounds.

Over time, as you reconcile your actual times into your past estimates, you’ll be able to figure out what I call your estimation batting average, a number that represents how accurate your estimates tend to be.

Once you have these three points for a given estimate, you can apply some formulas and your estimation batting average to create a probability distribution of when you might complete the project. Here is a simple example of what that might look like (though in real life there may be more point values).

  • 20% 50 developer days
  • 50% 70 developer days
  • 80% 90 developer days

So the numbers above show that there’s only a 20% chance the project will be complete within 50 developer days and an 80% chance of completion if the development team is given 90 developer days.

This technique showcases the uncertainty involved in creating estimates and focuses on the probability that estimates really represent.

After reading this book, I fired up Excel and built a nice spreadsheet with the formulas in the book and columns for these three estimation points. Now I can simply enter my line items, plug in my best, worst, and nominal cases, and out pops a probability distribution of when the project will be complete.
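As a sketch of the kind of formula such a spreadsheet might use, here is the classic PERT three-point calculation (McConnell’s book covers several formulas and refinements; this is the textbook version, not necessarily the exact one he recommends):

```csharp
using System;

public class Estimator
{
    // Classic PERT three-point estimate: the nominal case is weighted
    // most heavily, and the spread between best and worst cases
    // drives the standard deviation.
    public static double ExpectedCase(double best, double nominal, double worst)
    {
        return (best + 4 * nominal + worst) / 6;
    }

    public static double StandardDeviation(double best, double worst)
    {
        return (worst - best) / 6;
    }

    public static void Main()
    {
        double expected = ExpectedCase(50, 70, 90);
        double sigma = StandardDeviation(50, 90);
        Console.WriteLine("Expected: {0} days, sigma: {1:0.0} days", expected, sigma);
        // Expected: 70 days, sigma: 6.7 days
    }
}
```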

However, as I mentioned before, the crux of this technique relies on that estimation batting average. But when you’re just starting out, you have no idea what that average is, so you have to pull it out of the air (I recommend pulling conservatively).

The reason I bring this all up is that I watched an interesting interview today on the ScobleShow. Robert Scoble interviewed FogCreek founder and well known technology blogger, Joel Spolsky.

Joel let it be known that they are building a new scheduling feature for FogBugz 6 that reflects the reality of software estimation better than typical scheduling software.

For example, one key observation he makes is that estimates miss on the long side far more often than on the short side.

For example, it’s quite common to estimate that a feature will take two days, only to have it take four days, or eight days. But it’s rare that the feature actually ends up taking one day. Obviously it’s impossible for that feature to take 0 days or -4 days.

This makes obvious sense when you think about it.

The amount by which you can finish a feature before an estimated time is constrained, but the amount of time that you can overshoot an estimate is boundless.

Yet much software scheduling software completely ignores this fact, hoping that an underestimate on one item will be offset by an overestimate on another. It assumes the over and under estimates balance out, which they clearly do not.

This new feature will attempt to take that into account, as well as your track record for estimates (your batting average, if you will), and provide a probability of completion for various dates.

Sounds like a brilliant idea! If done well, that would be quite hot and allow me to chuck my hackish Excel spreadsheet.

code, tdd, open source, tech comments edit

Ayende just announced the release of Rhino Mocks 3.0. The downloads are located here. If you aren’t subscribed to Ayende’s blog, I highly recommend it. This guy never sleeps and churns out code like a tornado.

Ever since I discovered mocking frameworks in general, and especially Rhino Mocks, mocking has become an essential part of my unit testing toolkit.

A while ago I wrote a short intro demonstrating how to write unit tests for events defined by an interface. This small example shows the usefulness of something like Rhino Mocks.
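To give a flavor of what a mocking framework saves you from writing, here is a hand-rolled fake for a hypothetical interface with an event (the interface and all the names are invented for illustration; Rhino Mocks generates this sort of plumbing for you at runtime):

```csharp
using System;

// Hypothetical interface with an event, for illustration only.
public interface IOrderProcessor
{
    event EventHandler OrderProcessed;
    void Process();
}

// A hand-rolled fake that lets a test raise the event on demand.
public class FakeOrderProcessor : IOrderProcessor
{
    public event EventHandler OrderProcessed;

    public void Process() { /* no-op for testing */ }

    // Test hook: simulate the event firing.
    public void RaiseOrderProcessed()
    {
        if (OrderProcessed != null)
            OrderProcessed(this, EventArgs.Empty);
    }
}

public class Program
{
    public static void Main()
    {
        bool handled = false;
        FakeOrderProcessor fake = new FakeOrderProcessor();
        fake.OrderProcessed += delegate { handled = true; };

        fake.RaiseOrderProcessed();
        Console.WriteLine(handled); // True
    }
}
```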

If you’re wondering what the difference between mocks, stubs, and fakes is, be sure to read Jeff Atwood’s Taxonomy of Pretend Objects.

comments edit

Haacked.com will be well represented at Mix 07 this year. I thoroughly enjoyed Mix 06 last year and think 07 has the potential to be even better.

Do not worry, I’m leaving the retina-scorching orange aloha shirt at home this time.

My only disappointment with Mix 07 so far is that it doesn’t have quite the clever rhyme that Mix 06 did. My wife did a cheer for me as I left for last year’s conference.

Mix! Oh-Six! Mix! Oh-Six! Mix! Oh Six!

“Mix! Oh-Seven!” doesn’t roll off the tongue quite the same way. No cheers from my wife this year.

There are many people I’m looking forward to hanging out with in attendance this year. I like to think that it was me who convinced Scott Hanselman to attend this year’s Mix conference. Jon Galloway, Jeff Atwood, and Rob Conery will all be there. It may take that many to drag me away from the craps table before I lose my future son’s college tuition.

I also think Steve (Lucky 21) Maine will be there (is that right Steve?). Last year he watched as the Craps dealers rolled their eyes when I only bought $40 worth of chips (it’s all I had) at a $10 table. I then proceeded to turn that into around $300, only to lose the bulk of that at BlackJack later. No more BlackJack after Craps!

I think Adam (Is it 9 or 10?) Kinney will also be in attendance. If you are planning to be there, let me know in the comments!

If you’re a Subtext user, feel free to swing by and tell me how much Subtext is a [godawful mess run by a retarded monkey and is possibly responsible for your pet’s death|wonderful piece of sublime software that enriches your life in a way you never thought a .NET blog engine could] (or any option in between).

If you’re not a Subtext user, tell me how much you’ve heard that Subtext is a [godawful mess run by a retarded monkey and is possibly responsible for your pet’s death|wonderful piece of sublime software that enriches your life in a way you never thought a .NET blog engine could] (or any option in between).

And if I have time left over, I may just attend a few sessions. Seriously though, I really enjoyed many of the sessions last year. Funny how much you can still learn while hung-over and sleep deprived. I kid. I kid.

oss, empathy, community comments edit

A recent confrontational thread in the Subtext forums, which I shared with Rob Conery, got us into a discussion about the challenges of dealing with difficult members of an Open Source community. There are many approaches one can take. Some advocate not engaging disruptive community members. I tend to give everyone the benefit of the doubt at first. Rob often commends me for my patience in dealing with users in the forums. Neither approach is necessarily better than the other. It’s a matter of style.

If there’s one thing I’ve learned about running an Open Source project, it’s that it takes two key qualities.

First, you really need to have a thick skin. You cannot please everybody, and if you’re doing something even remotely interesting, you’re going to piss off some people with the choices you make. But you can’t stop making choices, so be prepared to piss people off. It’s a part of the job. Just be mentally prepared for the attacks, fair or not.

Second, you have to have empathy for your users and developers. Sometimes what feels like an attack is really a misunderstanding based on cultural differences. I know some cultures tend to have a very brusque, in-your-face style of discussion. What might be considered rude in one culture is considered a normal, even-keeled discussion in another.

At other times there may be an underlying reason for the venting which really has nothing to do with you or your project.

Sure, it’s not really fair to take the brunt of someone’s wrath because of what happened elsewhere, but I find that humor, and an attempt to focus the discussion on specific, objective complaints, often helps defuse an argumentative thread.

In this particular case, the user ends up apologizing and writes about the aggravating events at work that led to his frustrations and lashing out in our forums.

Apology accepted, no hard feelings.

What about toxic members? Sometimes there are members of the community who really are simply toxic trolls. They’re not interested in having any sort of real discussion. How do you deal with them? How do you tell them apart from someone who actually does care about your project, but is ineloquent in expressing it?

I’ve been fortunate not to have experienced this with Subtext yet, but the excellent post How Open Source Projects Survive Poisonous People offers some great advice for identifying and dealing with poisonous people.

The post is a summary of a video in which Ben Collins-Sussman and Brian Fitzpatrick, members of the Subversion team, discuss how to deal with poisonous people based on their experiences with Subversion.

Their points are specific to their experience running an Open Source project. But many of their points apply to any sort of community, not just Open Source.

Politeness, Respect, Trust, and Humility go a long way toward building a strong community. To that list I would also add Empathy.

comments edit

UPDATE: Made some corrections to the discussion of ReadOnlyCollection’s interface implementations near the bottom. Thanks to Thomas Freudenberg and Damien Guard for pointing out the discrepancy.

In a recent post I warned against needlessly using double check locking for static members such as a Singleton. By using a static initializer, the creation of your Singleton member is thread safe. However the story does not end there.

One common scenario I often run into is having what is effectively a Singleton collection. For example, suppose you want to expose a collection of all fifty states. This should never change, so you might do something like this.

public static class StateHelper
{
  private static readonly IList<State> _states = GetAllStates();

  public static IList<State> States
  {
    get
    {
      return _states;
    }
  }

  private static IList<State> GetAllStates()
  {
    IList<State> states = new List<State>();
    states.Add(new State("Alabama"));
    states.Add(new State("Alaska"));
    //...
    states.Add(new State("Wyoming"));
    return states;
  }
}

While this code works just fine, there is potential for a subtle bug to be introduced in using this class. Do you see it?

The problem with this code is that any thread could potentially alter this collection like so:

StateHelper.States.Add(new State("Confusion"));

This is bad for a couple of reasons. First, we intend that this collection be read-only. Second, since multiple threads can access this collection at the same time, we can run into thread contention issues.

The design of this class does not express the intent that this collection is meant to be read-only. Sure, we used the readonly keyword on the private static member, but that means the variable reference is read only. The actual collection the reference points to can still be modified.

The solution is to use the generic ReadOnlyCollection<T> class. Here is an updated version of the above class.

public static class StateHelper
{
  private static ReadOnlyCollection<State> _states = GetAllStates();

  public static IList<State> States
  {
    get
    {
      return _states;
    }
  }

  private static ReadOnlyCollection<State> GetAllStates()
  {
    IList<State> states = new List<State>();
    states.Add(new State("Alabama"));
    states.Add(new State("Alaska"));
    //...
    states.Add(new State("Wyoming"));
    return new ReadOnlyCollection<State>(states);
  }
}

Now, not only is our intention expressed, but it is enforced.

Notice that in the above example, the static States property still returns a reference of type IList<State> instead of returning a reference of type ReadOnlyCollection<State>.

This is a concrete example of the Decorator Pattern at work. The ReadOnlyCollection<T> is a decorator for the IList<T> interface. It implements the IList<T> interface and takes an existing collection as a parameter in its constructor.

In this case, if I had any client code already making use of the States property, I would not have to recompile that code.

One drawback to this approach is that the IList<T> interface contains Add and Insert methods. Thus a developer using this code can attempt to add a State, which will cause a runtime error.

If this was a brand new class, I would probably make the return type of the States property ReadOnlyCollection<State> which explicitly implements the IList<T> and ICollection<T> interfaces, thus hiding the Add and Insert methods (unless of course you explicitly cast it to one of those interfaces). That way the intent of being a read-only collection is very clear, as there is no way (in general usage) to even attempt to add another state to the collection.
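To see the enforcement in action, here is a small sketch (the helper is hypothetical, and it uses strings in place of the State class) showing that the compiler happily accepts the call, but the decorator throws at runtime:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class ReadOnlyDemo
{
    // Builds a mutable list, then wraps it in the read-only decorator.
    public static IList<string> GetStates()
    {
        List<string> states = new List<string>();
        states.Add("Alabama");
        states.Add("Alaska");
        return new ReadOnlyCollection<string>(states);
    }

    public static void Main()
    {
        IList<string> states = GetStates();
        try
        {
            states.Add("Confusion"); // compiles fine, but...
        }
        catch (NotSupportedException)
        {
            Console.WriteLine("Collection is read-only."); // ...fails at runtime
        }
    }
}
```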

comments edit

I’ve been banging my head against a couple of problems with the interaction between Subtext and Windows Live Writer that I thought I’d post on this here blog in the hopes that someone can help.

I expect that Mr. Hanselman might know the answer, but will only tell me after properly extolling DasBlog’s superiority over Subtext first. Very well.

Here’s the first issue. I’m kind of a fan of typography and go through the extra effort to use proper apostrophes and quotes.

For example: instead of typing the straight apostrophe ', I will use the curly ’. Instead of "dumb quotes", I will use “real quotes”. It’s just how I roll.

For the apostrophe, I use the HTML entity code &#8217;. For quotes I use the opening entity &#8220; followed by the closing entity &#8221;.

However, when you enter these things in WLW and post them to your blog, it converts them to the actual characters. Thus when I query my database, I see “quotes” instead of &#8220;quotes&#8221; as I would expect.

I wish WLW would not screw around with these conversions, but until then, I was thinking about doing a simple conversion on the server back to the original entity encodings.

However, I can’t just call the HttpUtility.HtmlEncode method, as that would encode the angle brackets and all. I still want the HTML to remain HTML; I just want the special characters to remain entity encoded.

Anyone have a clever method for doing this, or will I need to brute force this sucker?
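One brute-force approach would be a character-by-character pass that re-encodes only the typographic characters and leaves the markup alone (a sketch; the set of characters handled here is my own guess at what WLW converts):

```csharp
using System;
using System.Text;

public class TypographyEncoder
{
    // Re-encodes smart quotes and apostrophes back to HTML entities
    // without touching the rest of the HTML.
    public static string EncodeSmartChars(string html)
    {
        StringBuilder result = new StringBuilder(html.Length);
        foreach (char c in html)
        {
            switch (c)
            {
                case '\u2019': result.Append("&#8217;"); break; // right single quote
                case '\u2018': result.Append("&#8216;"); break; // left single quote
                case '\u201C': result.Append("&#8220;"); break; // left double quote
                case '\u201D': result.Append("&#8221;"); break; // right double quote
                default: result.Append(c); break;
            }
        }
        return result.ToString();
    }

    public static void Main()
    {
        Console.WriteLine(EncodeSmartChars("<p>\u201Cquotes\u201D</p>"));
        // <p>&#8220;quotes&#8221;</p>
    }
}
```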

comments edit

It appears to me that Windows Live Writer completely ignores categories returned by the getRecentPosts Metaweblog API method.

It took me a long time to realize this because I write all my posts using WLW and it stores the categories for a recent post on the local machine. So as long as I do everything via WLW, I’d never notice.

But a recent bug report alerted me to the problem. I logged into my blog via the web admin interface and changed the categories. I refreshed the recent posts in WLW and opened up a post, and sure enough the categories for the post were not updated.

I was experiencing the same thing in BlogJet, but after making a small tweak in the code, everything works fine there. Unfortunately, WLW is still broken in this respect.

I’ve carefully analyzed the HTTP traffic with Fiddler and cannot figure out why this would happen. Everything looks absolutely correct on Subtext’s end. I must conclude it’s a bug with WLW.

Would someone be so kind as to confirm this with a different blog engine for me? Just run through the repro steps below and let me know if it really works for you. I’d really be grateful.

Just to be clear: Repro Steps

  1. Create a post with no categories.
  2. Use another tool (such as your blog’s web admin) to specify several categories.
  3. In Windows Live Writer, refresh the recent posts.
  4. Click on the post to edit it.
  5. Check whether or not the correct categories are selected in the category drop down.

Thanks Mucho!

comments edit

There’s a really devious scam going around worth mentioning because of one compelling tactic the scammers use.

My dad received a letter the other day “informing” him that he was the lucky winner of some unclaimed prize money. Below is the letter he received.

Sweepstakes Letter

They sent him a check for $1,940 and told him that all he needs to do to claim the prize money is deposit the check and send back a portion of that money for processing fees and identification purposes.

My dad’s first thought (which I imagine yours is as well) was, Oh! This must be a scam. They expect me to deposit their check and then send them a check from my bank account. After a few days, their check won’t clear and they’ll have my money.

For laughs, my dad decided to call the guy up to see what sort of crazy explanation he would provide. His answer caught my dad off-guard. He told him to wait till the check clears before sending them a check.

Huh? Wait a minute. So they want me to wait till the check clears? Doesn’t that mean the money is fully in my account? What if I never send them a check? I could just keep the money. If this is a scam, how are they making money?

Calling the Better Business Bureau provided the answer. They told my dad under no circumstances should he deposit that check. Yes, the check will clear, but probably because it was written by another victim defrauded by this same scam. Later, when the scam is discovered by that victim, my dad would be liable for depositing a fraudulent check.

What really makes this scam compelling, and likely to sucker a lot of people into falling for it, is the mistaken belief that once a check clears, the money is in the clear. It’s not.

In any case, if you receive such a scam letter, the proper authorities to report it to are not the FBI but the Postal Inspectors, the law enforcement wing of the United States Postal Service (and the subject of a really cheesy movie, The Inspectors, starring Louis Gossett Jr.).

I would suggest warning your family members who are prone to such scams. Especially those who consistently fall for those PayPal emails and keep opening up pictures of Anna Kournikova sent via email.

comments edit

When searching for source code in a particular language, what do the words being searched on tell you about that language?

Koders.com publishes an interesting Open Source Zeitgeist which focuses on search trends and patterns within open source code. This is very similar to Google’s Zeitgeist, but grouped by programming language and specific to open source code. This might help us gain some insight into answering the above question.

For example, compare this screenshot of the top Ruby, Java, and C# searches.

  • Top Java Searches: 1. md5, 2. swing, 3. java
  • Top C# Searches: 1. system, 2. dataset, 3. openforecast
  • Top Ruby Searches: 1. proxy, 2. file, 3. socket
  • Top PHP Searches: 1. none, 2. excel, 3. mail

It’s hard to draw any firm conclusions based on this sample, but let me offer a few uninformed thoughts, and you can tell me how off base I am.

Someone suggested that you get a sense of the maturity of a language from the terms being searched. I can kind of see that, if I define maturity in this case to mean how well the language’s general developer community understands its features.

The idea is that if a language has been around for a long time, there might not be as many searches on basic language features and more searches that appear to be task focused, or at least on esoteric features of the language. I admit, I’m not exactly convinced. Is this true? Let’s take a look.

Take Ruby for example. Even though it’s been around as long as Java, it is only recently (past few years) that it has had a huge surge in popularity. Thus, many of the top search terms seem focused on programming constructs such as proxy, file, socket, and thread. This might reflect the large number of people just learning their way around the language.

Then again, Ruby developers are also searching on terms such as rails, controller, activerecord. These are mature software development concepts.

Whereas with Java, which is arguably more mature and has a much larger community, the top terms are slightly more esoteric (md5, swing, tree) or just vain. Java developers search for “java” when searching Java code? How many search results does that produce? However, also in the top are the terms string and file. That makes sense, since even though Java is mature, there are still lots of new Java developers.

What’s really interesting to me is the inclusion of “Hibernate” as number 10 in the Java results.

Contrast this to C#, where it does not surprise me that dataset is number 2. It’s the workhorse for the RAD developer. It appears that in pure numbers, the DataSet is winning over OR/M and the like. There are no search results for activerecord, NHibernate, SubSonic, OR/M, etc. Whether that is a sad thing or not I leave for a subsequent flame war.

What’s interesting to me is that PHP seems really focused on the domain. Being unfamiliar with PHP, I could totally be wrong, but with search terms like excel, mail, and forum, that’s the impression I get.

Sort of makes sense that an old established widely used scripting language would have its basic features already understood. Though I have no idea why the top search term would be none. Are PHP programmers nihilistic?

In any case, many of you are thinking I’m drawing too many conclusions from too little data. You are absolutely correct. This is mere idle speculation already colored by preconceived notions.

However, I do find it interesting to look at these results and ask, what do they say about these languages and their users?

comments edit

Raymond Lewallen doesn’t mean to single anybody out, but in his latest post on the topic of living better, he observes that

…there is a decent percentage of programmers that are obviously overweight. You’ve heard people talk: that fat, glasses wearing, backpack toting guy MUST be a geek! Even if you don’t wear glasses and tote a backpack with a laptop inside, if you’re plain overweight, people assume you have a high probability of being a computer geek!

So what can you do about it? At the MVP summit, I observed many things (unhealthy things) that I believe people can do to curb their diets and become healthier, leaner people.

So based on events at a geek conference, the MVP summit, Raymond assumes that unhealthy diets and lack of exercise could be at fault. Let me propose another theory based on my experience today on the exhibition floor of the SD West conference.

Burger and
Fries

As is common at conferences, several booths were giving out those ubiquitous one-size-fits-all X-Large t-shirts emblazoned with a flaccid attempt at being hip and witty. All of this is lost on me as I receive a shirt I will never wear, as it looks like a dress on me. At least it will make a good rag for cleaning the next spill on my kitchen floor.

Contrast this to when I head over to SourceGear’s booth. I had the great pleasure to meet the founder of SourceGear, Eric Sink, in person. He is the author of one of my favorite blogs, in which he writes insightful posts on running a software company and software development in general.

Interesting random connection to Eric: He and I discovered that we both lived in the same apartment complex in Spain, but at different times.

Back to the story. They actually are giving out shirts that I would consider wearing outside of a conference hall. He asks if I would like one, to which I reply, “Sure!”. It’s his next question that takes me aback.

Which size would you like?

Uh…excuse me. What was that?

You see, SourceGear had shirts in all sizes! Not only that, they had an ingenious ploy to get everyone wearing them. They were giving out a Wii and said they would randomly walk around and give people wearing the t-shirt tickets for the Wii raffle. Near the end of the day, it seemed like everyone was wearing their shirt.

Then it occurred to me. Developers are so used to being fit into a single mold, we often don’t know better. We often do that to our users, forcing them to conform to our software rather than conforming our software to how users really work.

In the keynote, David Platt gave several examples of sites that work and don’t work. Using the Starbucks website as an example, he did a search for one in his area. It didn’t find one within the selected radius, 5 miles, and gave him an empty search result page asking him to search again. Come again?

As Platt points out, when you’re in need of a coffee, would you ask your friend, “Hey, where are all the Starbucks within five miles from here?”, or would you ask, “Hey, where is the nearest Starbucks?” and then decide whether to go based on that information? You do the latter, and so should our software.

But I digress.

Getting back to my epiphany. It occurred to me that maybe developers are fat because we’re all trying to fit into that X-Large conference t-shirt. Perhaps, if more companies focus on the user like SourceGear does, we’ll see thinner developers wearing shirts that actually fit. Just maybe.

If so, remember to thank Eric for giving developers a reason to not get fat.

comments edit

Gavin Joyce, creator of DotNetKicks, has decided to open the source for the site and allow the community to help out in implementing features and bug fixes.

This makes a lot of sense for a community site like DotNetKicks. Not only can the community build the content, but it can contribute to the actual feature set of the site as well!

Gavin has always been a bit progressive with his site, offering half of his AdSense revenue to those who submit stories (you have to configure your AdSense ID in your profile and DotNetKicks will show your AdSense ID 50% of the time).

I think this is a great idea and wish him much luck. I also look forward to being able to contribute a little bit here and there (donating heavily with code from Subtext of course).

comments edit

Update: I’ve created a new NuGet Package for Identicon Handler (Package Id is “IdenticonHandler”) which will make it much easier to include this in your own projects.

A while ago, Jeff Atwood blogged about Identicons for .NET. An Identicon is an anonymized visual glyph that can represent an IP address. I likened it to a Graphical Digital Fingerprint.

Identicon
samples

The original concept and Java implementation was created by Don Park.

Afterwards, Jeff and Jon Galloway became excited by the idea, ported Don’s code to C# and .NET 2.0, and released it on Jeff’s website.

This weekend, we’ve spent some time working out a few kinks and performance improvements and are proud to release version 1.1 on CodePlex.

Why CodePlex?

We chose CodePlex for this project because the codebase for this is extremely small, so the patch issue I mentioned in my critique, A Comparison of TFS vs Subversion for Open Source Projects, is not quite as large an issue.

We don’t expect this project to grow very large and have a huge number of releases. This code does one thing, and hopefully, does it well.

So in that respect, CodePlex seems like a great host for this type of small project. It is really easy to get other developers up and running if need be.

Having said that, I probably wouldn’t host a large project here yet based on the critique I mentioned.

code comments edit

Lock

After reading Scott Hanselman’s post on Managed Snobism, which covers the snobbery some have against managed languages because they don’t “perform” well, I had to post the following rant in his comments:

What is it that makes huge populations of developers think they’re working on a Ferrari when their app is really just a Pinto? “I’m writing a web app that pulls data from a database and puts it on a web page. I never use ‘foreach’ because I heard it’s slower than explicitly iterating with a for loop.”

In my time as a developer I’ve experienced too many instances of this Micro Optimization, also known as Premature Optimization.

Premature optimization tends to lead “clever” developers to shoot themselves in the foot (metaphorically speaking, of course). Let’s look at one common example I’ve run into from time to time—double check locking for singletons.

Double Check Locking Refresher

As a refresher, here is an example of the double check pattern.

public sealed class MyClass
{
  private static object _synchBlock = new object();
  private static volatile MyClass _singletonInstance;

  //Makes sure only this class can create an instance.
  private MyClass() {}
  
  //Singleton property.
  public static MyClass Singleton
  {
    get
    {
      if(_singletonInstance == null)
      {
        lock(_synchBlock)
        {
          // Need to check again, in case another cheeky thread 
          // slipped in there while we were acquiring the lock.
          if(_singletonInstance == null)
          {
            _singletonInstance = new MyClass();
          }
        }
      }
      return _singletonInstance;
    }
  }
}

The premise behind this approach is that all this extra ugly code will wring out better performance by lazy loading the singleton. If it is never accessed, it never needs to be instantiated. Of course this raises the question, Why define a Singleton if it’s quite likely it’ll never get used?

The Singleton property checks the static singleton member for null. If it is null, it attempts to acquire a lock before checking for null again. Why the second null check? Well, in the time our current thread took to acquire the lock, another thread could have snuck in and initialized the singleton.

Note that we use the volatile keyword for the _singletonInstance static member. Why? Long story short, this has to do with how different memory models can reorder reads and writes. On the current CLR you can get away without the volatile keyword in this case. But if you run your code on Mono or some other future platform, you may need it, so there’s no harm in leaving it there.

Criticisms or If this is fast, how much faster is triple check locking?

Jeffrey Richter, in his book CLR via C#, criticizes this approach (starting on page 639) as “not that interesting” (yes, he can be scathing!):

The double-check locking technique is less efficient than the class constructor technique because you need to construct your own lock object (in the class constructor) and write all of the additional locking code yourself.

The cost of initializing the singleton instance would have to be significantly more than the cost of instantiating the object used to synchronize access to it (not to mention all the conditional checks when accessing the singleton) to be worth it.

A Better Approach? The No Look Pass of Singletons

So what’s the better approach? Use a static initializer in what I call the No Check No Locking Technique.

public sealed class MyClass
{
  private static MyClass _singletonInstance = new MyClass();

  //Makes sure only this class can create an instance.
  private MyClass() {}
  
  //Singleton property.
  public static MyClass Singleton
  {
    get
    {
      return _singletonInstance;
    }
  }
}

The CLR guarantees that the code in a static constructor (implicit or explicit) is only run once. You get all that thread safety for free! No need to write your own error-prone locking code in this case, and no need to dig through memory model implications. It just works, unlike your Pinto, sorry, “Ferrari”.

See, sometimes you can have your cake and eat it too. This code, which is simpler and easier to understand, happens to perform better and requires one less object instantiation. How do you like them apples?

It turns out that this approach is also recommended for Java, as it was discovered that the double check locking approach wasn’t guaranteed to work.
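For the curious, the Java-side equivalent of this trick is usually written as the initialization-on-demand holder idiom, which leans on the same guarantee: the runtime initializes a class exactly once, the first time it is used. Here is a minimal sketch; the class and member names are mine, not from any code mentioned above:

```java
// Sketch of the initialization-on-demand holder idiom in Java.
// Class and field names are illustrative.
public final class Settings {

    // Private constructor: only this class can create an instance.
    private Settings() {}

    // The JVM guarantees Holder's static initializer runs exactly once,
    // and only when getInstance() first touches Holder.INSTANCE --
    // lazy and thread-safe with no explicit locking or volatile.
    private static final class Holder {
        static final Settings INSTANCE = new Settings();
    }

    public static Settings getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every call returns the same instance.
        Settings a = Settings.getInstance();
        Settings b = Settings.getInstance();
        System.out.println(a == b); // prints "true"
    }
}
```

Because the nested Holder class isn’t loaded until getInstance() is first called, you even keep the lazy initialization that the double-check pattern was striving for, without any of its hazards.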

What!? You’re Still Using Singletons?!

Now that I’ve gone through all this trouble to show you the proper way to create a Singleton, I leave you with this thought. Should a well designed system use Singletons in the first place, or is it just a stupid idea? That’s a topic for another time.

Please note that double check locking doesn’t only apply to Singletons. It just happens to be the place where it is most often seen in the wild.

comments edit

It’s comments like this that remind me why I enjoy blogging.

Holy shit!

I found this post whilst searching for POST timeout and thought it was a long shot for my problem. Well, it worked for me!

Thank you so much!!!

Not to mention that it serves to validate my previous point about Search Driven Development. It worked for this guy.

The comment is here in its original context.