comments edit

Well, Jon and I arrived safely, driving into Vegas around 4 PM last evening. Upon arriving, we met up with Miguel de Icaza, the founder of the Mono project, and headed over to the Mashup Lounge, where we ran into John Osborne, a senior editor with O’Reilly.

Being the small world that it is, John was a reviewer for the Windows Developer Power Tools book and happened to review the section I wrote on Tortoise CVS/SVN.

We were joined by Eric Kemp, one of the members of the Subsonic team, and a fun conversation on open source, Mono, politics, and more ensued.

Later on in the evening we headed over to the BlogZone, a suite in the Venetian towers with a couple of Xboxes, food, and drinks. We were later joined by Jeff Atwood, Scott Hanselman, Clemens Vasters, and Steve Maine, and a deadly game of Guitar Hero ensued.

Keynote is about to start, will write more later.

comments edit

If you’ve read my blog at all, you know I’m a big proponent of Continuous Integration (CI). For the Subtext project, we use CruiseControl.NET. I’ve written about our build process in the past.

Given the usefulness of having a build server, you can understand my frustration and sadness when our build server recently took a dive. I bought a replacement hard drive, but it was the wrong kind (a rookie mistake on my part, accidentally getting an IDE drive rather than SATA).

Members of the Subtext team such as Simo, myself, and Scott Dorman have put countless hours into perfecting the build server. If only we had had CI Factory in our toolbelt before we started.

CI Factory is just that: a factory for creating CruiseControl.NET scripts. Scott Hanselman calls it a Continuous Integration accelerator. It bundles just about everything you need for a complete CI setup, such as CCNET, NUnit or MbUnit, NCover, etc.

In the latest dnrTV episode, Jay Flowers, the creator of CI Factory, joins hosts Scott Hanselman and Carl Franklin to create a Continuous Integration setup using CI Factory in around an hour.

The project they chose to use as a demonstration is none other than Subtext! Given the number of hours we’ve spent setting up the Subtext build server, this is quite an ambitious undertaking, especially while being recorded.

Can you imagine having to write code while two guys provide color commentary? I’d probably wilt under that pressure, but Jay handles it with aplomb.

The video runs a bit long, but it is worth watching if you plan to set up CI for your own project. The amount of XML configuration with CI Factory might seem daunting at first, but trust me when I say that it’s much worse with CCNET by itself. CI Factory significantly reduces the amount of configuration required, and Jay is constantly making it easier and easier to set up.
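To give you a sense of what CI Factory generates for you, even a single hand-written CruiseControl.NET project definition looks roughly like the following. This is only a sketch for illustration; the project name, URL, and paths here are made up, not our actual configuration.

```xml
<cruisecontrol>
  <project name="Subtext">
    <!-- Poll source control for modifications (URL and path are hypothetical) -->
    <sourcecontrol type="svn">
      <trunkUrl>http://example.com/svn/subtext/trunk</trunkUrl>
      <workingDirectory>C:\Builds\Subtext</workingDirectory>
    </sourcecontrol>
    <triggers>
      <!-- Check for new commits every 60 seconds -->
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Compile the solution; test runners, coverage, etc. would follow -->
      <msbuild>
        <projectFile>Subtext.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>
```

Now multiply that by test runners, coverage, deployment packaging, and labeling, and you can see why an accelerator that stamps all of this out for you is so welcome.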

As an aside, Jay Flowers scores big points with me for also being a member of the MbUnit team, my favorite unit testing framework. Kudos to Jay, Scott, and Carl for a great show.

comments edit

Charles Petzold makes the following lament in response to Jeff Atwood’s review of two WPF books, one of them being Petzold’s.

I’ve been mulling over Coding Horror’s analysis of two WPF books, not really thrilled about it, of course. The gist of it is that modern programming books should have color, bullet points, boxes, color, snippets, pictures, color, scannability, and color.

Does that remind you of anything?

Apparently the battle for the future of written communication is over. Prose is dead. PowerPoint has won.

With all due respect to Mr. Petzold, and he certainly deserves much respect, I think the comparison to PowerPoint is unfair and really misses the point.

Since when is technical writing prose?

Well, it often does meet one of the definitions of prose.

  1. the ordinary form of spoken or written language, without metrical structure, as distinguished from poetry or verse.
  2. matter-of-fact, commonplace, or dull expression, quality, discourse, etc.

Using that definition, I fail to see how the death of dull and commonplace expression signals a loss for the future of written communication. If anything, it’s a step in the right direction.

Technical writing is supposed to teach and help readers learn and retain information. Having visual aids not only helps cement the information in your mind, but also aids in finding that information when you need to look it up again.

Long passages of unbroken prose are great for getting lost in mental imagery when reading a novel, but they suck for recall. Prose is alive and well in its proper place. Save the lengthy prose for the next great work of fiction, but cater to how the brain works when writing something meant to be absorbed, learned, and remembered.

Head First Design Patterns
I think the Head First series really gets it when it comes to how the mind works and learns. From the introduction to Head First Design Patterns:

Your brain craves novelty. It’s always searching, scanning, waiting for something unusual. It was built that way, and it helps you stay alive.

Today, you’re less likely to be a tiger snack. But your brain’s still looking. You just never know.

So what does your brain do with all the routine, ordinary, normal things you encounter? Everything it can to stop them from interfering with the brain’s real job—recording things that matter. It doesn’t bother saving the boring things; they never make it past the “this is obviously not important” filter.

In a subsequent section, the book describes the Head First learning principles, a couple of which I quote below. I highly recommend reading this entire intro the next time you are in the bookstore.

Make it visual. Images are far more memorable than words alone, and make learning much more effective (up to 89% improvement in recall and transfer studies). It also makes things more understandable. Put the words within or near the graphics they relate to, rather than on the bottom or on another page, and learners will be up to twice as likely to solve problems related to the content.

Use a conversational and personalized style. In recent studies, students performed up to 40% better on post-learning tests if the content spoke directly to the reader, using a first-person conversational style rather than taking a formal tone.

What we see here is that study after study shows that appropriate use of images and graphics improves recall. Not only that, but a casual tone, like that found in a blog, also helps recall.

Unfortunately, Petzold draws an unfair analogy between Adam Nathan’s WPF book and PowerPoint. We’ve all heard that PowerPoint is evil, but the evil is in how users misuse PowerPoint, not in PowerPoint itself. PowerPoint certainly makes it easy to go to extremes, with noisy graphics resulting in garish, crowded presentations.

It’s this proliferation of PowerPoint presentations that favor graphics to the detriment of the content that leads to the disdain towards PowerPoint. But it is also possible to create sublime presentations with PowerPoint with just the right amount of graphics.

Even Tufte would acknowledge that getting rid of graphics and bullet points completely is extreme in the opposite direction and works against the real goal: to convey information in a manner that the audience can understand and retain.

Drawing a comparison between Nathan’s book and PowerPoint suggests that Nathan’s book is all fluff and flash. But based on reading sample chapters, that is hardly the case. As Jeff wrote, the graphics, colors, and bullets are all used judiciously and appropriately. This isn’t a case of Las Vegas trying to pretend it is Florence. There’s real substance here.

code, tech, blogging comments edit

Several people have asked me recently about the nice code syntax highlighting on my blog. For example:

public string Test()
{
  //Look at the pretty colors
  return "Yay!";
}

A long time ago, I wrote about using the Manoli code converter for converting code to HTML.

But these days, I use Omar Shahine’s Insert Code for Windows Live Writer plugin for, you guessed it, Windows Live Writer. This plugin just happens to use the Manoli code to perform syntax highlighting.


I recommend downloading and referencing the CSS stylesheet from the Manoli site and making sure to uncheck the Embed StyleSheet option in the plugin.

The dropshadow around the code is some CSS I found on the net.
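I don’t have the original source handy, but the trick works along these lines. This is only a sketch, and the class names here are mine, not necessarily what my stylesheet actually uses: a gray wrapper sits behind the code container, and negative margins shift the container up and to the left to expose the “shadow” along the bottom and right edges.

```css
/* Hypothetical drop-shadow effect for code blocks */
.code-shadow {
  background-color: #999; /* the gray "shadow" layer */
}

.code-shadow pre {
  background-color: #fff;
  border: 1px solid #000;
  padding: 8px;
  /* shift up-left so the gray wrapper peeks out at bottom-right */
  margin: -4px 4px 4px -4px;
}
```

Wrap the `pre` element emitted by the plugin in a `div class="code-shadow"` and you get the offset-shadow look without any images.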

comments edit

UPDATE: This functionality is now rolled into the latest version of MbUnit.

A long time ago Patrick Cauldwell wrote up a technique for managing external files within unit tests by embedding them as resources and unpacking the resources during the unit test. This is a powerful technique for making unit tests self contained.

If you look in our unit tests for Subtext, I took this approach to heart, writing several different methods in our UnitTestHelper class for extracting embedded resources.

Last night, I had the idea to make the code cleaner and even easier to use by implementing a custom test decorator attribute for my favorite unit testing framework, MbUnit.

Usage Examples

The following code snippets demonstrates the usage of the attribute within a unit test. These code samples assume an embedded resource already exists in the same assembly that the unit test itself is defined in.

This first test demonstrates how to extract the resource to a specific file. You can specify a full destination path, or a path relative to the current directory.

[Test]
[ExtractResource("Embedded.Resource.Name.txt", "TestResource.txt")]
public void CanExtractResourceToFile()
{
  //By the time the test body runs, the resource has been written to disk.
  Assert.IsTrue(File.Exists("TestResource.txt"));
}

The next demonstrates how to extract the resource to a stream rather than a file.

[Test]
[ExtractResource("Embedded.Resource.Name.txt")]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using(StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

As demonstrated in the previous example, you can access the stream via the static ExtractResourceAttribute.Stream property. This is only set if you don’t specify a destination.

In case you’re wondering, the stream is stored in a static member marked with the [ThreadStatic] attribute. That way, if you are taking advantage of MbUnit’s ability to repeat a test multiple times using multiple threads, you should be OK.

What if the resource is embedded in another assembly other than the one you are testing?

Not to worry. You can specify a type (any type) defined in the assembly that contains the embedded resource like so:

[Test]
[ExtractResource("Embedded.Resource.Name.txt"
  , "TestResource.txt"
  , ResourceCleanup.DeleteAfterTest
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResource()
{
  Assert.IsTrue(File.Exists("TestResource.txt"));
}

[Test]
[ExtractResource("Embedded.Resource.Name.txt"
  , typeof(TypeInAssemblyWithResource))]
public void CanExtractResourceToStream()
{
  Stream stream = ExtractResourceAttribute.Stream;
  Assert.IsNotNull(stream, "The Stream is null");
  using (StreamReader reader = new StreamReader(stream))
  {
    Assert.AreEqual("Hello World!", reader.ReadToEnd());
  }
}

This attribute should go a long way toward making unit tests that use external files cleaner. It also demonstrates how easy it is to extend MbUnit.
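If you’re curious what the extraction itself boils down to, independent of the MbUnit plumbing, it’s essentially a wrapper around Assembly.GetManifestResourceStream. Here’s a simplified sketch of that core idea; this is not the actual attribute code, and the class and method names are mine.

```csharp
using System;
using System.IO;
using System.Reflection;

public static class ResourceExtractor
{
    // Copies an embedded resource out of the given assembly into a file on disk.
    public static void ExtractToFile(Assembly assembly, string resourceName, string destination)
    {
        using (Stream stream = assembly.GetManifestResourceStream(resourceName))
        {
            if (stream == null)
                throw new InvalidOperationException("No embedded resource named " + resourceName);

            using (FileStream file = File.Create(destination))
            {
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                    file.Write(buffer, 0, bytesRead);
            }
        }
    }
}
```

The attribute wraps logic like this in MbUnit’s decorator machinery so the extraction happens before the test runs and cleanup happens after.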

A big thank you goes to Jay Flowers for his help with this code. And before I forget, you can download the code for this custom test decorator here.

Please note that I left in my unit tests for the attribute which will fail unless you change the embedded resource name to match an embedded resource in your own assembly.

comments edit

Take a good look at this picture.


That there is pretty much my Shuttle machine today, metaphorically speaking of course.

We had a brief power outage today which appears to have fried just my hard drive, if I’m lucky. This machine was hosting our build server within a VMWare virtual machine.

Fortunately, my main machine was not affected by the outage because it is connected to a UPS (battery backup).

The real loss is all the time it will take me to get the build server up and running again. Not to mention we were planning an imminent release and rely on our build server to automatically prepare a release. I hate manual work.

comments edit

Before I begin, I should clarify what I mean by using a database as an API integration point.

In another life in a distant galaxy far far away, I worked on a project in which we needed to integrate a partner’s system with our system. The method of integration required that when a particular event occurred, they would write some data to a particular table in our database, which would then fire a trigger to perform whatever actions were necessary on our side (vague enough for ya?).

In this case, the data model and the related stored procedures made up the API used by the partner to integrate into our system.

So what’s the problem?

I always felt this was ugly in a few ways; I’m sure you’ll think of more.

  1. First, we have to make our database directly accessible to a third party, exposing ourselves to all the security risk that entails.
  2. We’re not really free to make schema changes as we have no abstraction layer between the database and any clients to the system.
  3. How exactly do you define a contract in SQL? With Web Services, you have XSD. With code, you have interfaces.

Personally, I’d like to have some sort of abstraction layer for my integration points so that I am free to change the underlying implementation.

Why am I bringing this up?

A little while ago, I was having a chat with a member of the Subtext team, telling him about the custom MembershipProvider we’re implementing for Subtext 2.0 to fit in with our data model. His initial reaction was that developer-users are going to grumble that we’re not using the “Standard” Membership Provider.

The “Standard”?

I question this notion of “The Standard Membership Provider.” Which provider is the standard? Is it the ActiveDirectoryMembershipProvider?

It is in anticipation of developer grumblings that I write this post to plead my case and perhaps rail against the wind.

The point of the Provider Model

You see, the whole point of the provider model is lost if you require a specific data model. It exists to provide an abstraction over the underlying physical data store.

For example, Rob Howard, one of the authors of the Provider Pattern wrote this in the second part of his introduction to the Provider Pattern (emphasis mine).

A point brought up in the previous article discussed the conundrum the ASP.NET team faced while building the Personalization system used for ASP.NET 2.0. The problem was choosing the right data model: standard SQL tables versus a schema approach. Someone pointed out that the provider pattern doesn’t solve this, which is 100% correct. What it does allow is the flexibility to choose which data model makes the most sense for your organization. An important note about the pattern: it doesn’t solve how you store your data, but it does abstract that decision out of your programming interface.

What Rob and Microsoft realized is that no one data model fits all. Many applications will already have a data model for storing users and roles.

The idea is that if you write code and controls against the provider API, the underlying data model doesn’t matter. This is emphasized by the goals of the provider model according to the MSDN introduction…

The ASP.NET 2.0 provider model was designed with the following goals in mind:

  • To make ASP.NET state storage both flexible and extensible
  • To insulate application-level code and code in the ASP.NET run-time from the physical storage media where state is stored, and to isolate the changes required to use alternative media types to a single well-defined layer with minimal surface area
  • To make writing custom providers as simple as possible by providing a robust and well-documented set of base classes from which developers can derive provider classes of their own

It is expected that developers who wish to pair ASP.NET 2.0 with data sources for which off-the-shelf providers are not available can, with a reasonable amount of effort, write custom providers to do the job.

Of course, Microsoft made it easy for all of us developers by shipping a full featured SqlMembershipProvider complete with database schema and stored procedures. When building a new implementation from scratch, it makes a lot of sense to use this implementation. If your needs fit within the implementation, then that is a lot of work that you don’t have to do.

Unfortunately, many developers took it to be the gospel truth and the standard for how the data model should be implemented. This is really only one possible database implementation of a Membership Provider.
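To make the abstraction idea concrete, here is a toy sketch of the provider pattern. The names are invented for illustration; this is not the ASP.NET Membership API. Application code is written against the abstract class, and the choice of data store lives entirely in the concrete provider.

```csharp
using System;
using System.Collections.Generic;

// The abstraction that application code programs against.
public abstract class UserStoreProvider
{
    public abstract void AddUser(string username);
    public abstract bool UserExists(string username);
}

// One possible backing store. A provider backed by your own SQL tables
// (or Active Directory, or anything else) would subclass the same way,
// and no application-level code would need to change.
public class InMemoryUserStoreProvider : UserStoreProvider
{
    private readonly HashSet<string> users = new HashSet<string>();

    public override void AddUser(string username)
    {
        users.Add(username);
    }

    public override bool UserExists(string username)
    {
        return users.Contains(username);
    }
}
```

Swap in a different subclass and every control or page written against UserStoreProvider keeps working, which is exactly the flexibility the Membership Provider was designed to offer.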

An Example Gone Wrong

There is one particular open source application that I recall that already had a fantastic user and roles implementation at the time the Membership Provider Model was released. Their existing implementation was, in all respects, a superset of the features of the Membership Provider.

Naturally there was a lot of pressure to implement the Membership Provider API, so they chose to simply implement the SqlMembershipProvider’s tables side by side with their own user tables.

Stepping through the code in a debugger one day, I watched in disbelief as, upon logging in as a user, the code started copying all users from the SqlMembershipProvider’s stock aspnet_* tables to the application’s internal user tables and vice versa. They were essentially keeping two separate user databases in sync on every login.

In my view, this was the wrong approach to take. It would’ve been much better to simply implement a custom MembershipProvider class that read from and wrote to their existing user database tables.

As for the features of their existing users and roles implementation that the Membership Provider did not support, those could have remained exposed via their existing API.

Yes, I’m armchair quarterbacking at this point, as there may have been some extenuating circumstances I am not aware of. But I can’t imagine a full multi-table sync on every login being a good choice, especially for a large database of users. I’m not aware of the status of this implementation detail at this point in time.

The Big But

Someone somewhere is reading this thinking I’m being a bit overly dogmatic. They might be thinking

But, but I have three apps in my organization which communicate with each other via the database just fine. This is a workable solution for our scenario, thank you very much. You’re full of it.

I totally agree on all three counts.

For a set of internal applications within an organization, it may well make sense to integrate at the database layer, since all communications between apps occurs within the security boundary of your internal network and you have full control over the implementation details for all of the applications.

So while I still think even these apps could benefit from a well defined API or Web Service layer as the point of integration, I don’t think you should never consider the database as a potential integration point.

But when you’re considering integration for external applications outside of your control, especially applications that haven’t even been written yet, I think the database is a really poor choice and should be avoided.

Microsoft recognized this with the Provider Model, which is why controls written for the MembershipProvider are not supposed to assume anything about the underlying data store. For example, they don’t make direct queries against the “standard” Membership tables.

Instead, when you need to integrate with a membership database, use the API.

Hopefully future users and developers of Subtext will also recognize this when we unveil the Membership features in Subtext 2.0 and keep the grumbling to a minimum. Either that or point out how full of it I am and convince me to change my mind.

See also: Where the Provider Model Falls Short.

code comments edit

I don’t think it’s too much of a stretch to say that the hardest part of coding is not writing code, but reading it. As Eric Lippert points out, Reading code is hard.

First off, I agree with you that there are very few people who can read code who cannot write code themselves. It’s not like written or spoken natural languages, where understanding what someone else says does not require understanding why they said it that way.

Hmmm, now why did Eric say that in that particular way?

This is in part why reinventing the wheel is so common (apart from the need to prove you can build a better wheel). It’s easier to write new code than to try to understand and use existing code.

It is crucial to try and make your code as easy to read as possible. Strive to be the Dr. Seuss of writing code. Making your code easy to read makes it easier to use.

The basics of readable code include the usual advice of following code conventions, formatting code properly, and choosing good names for methods and variables, among other things. This is all included within Code Complete which should be your software development bible.

Aside from all that, a key tactic to improve code readability and usability is to make your code’s intentions crystal clear.

Oftentimes it’s paying attention to the little things that can really help your code along this path. Let’s look at a few examples.

out vs ref

A while ago I encountered some code that looked something like this contrived example:

int y = 7;
bool success = TrySomething(someParam, ref y);

Ignore the terrible names and focus on the parameters. At a glance, what is your initial expectation of this code regarding its parameters?

When I encountered this code, I assumed that the y parameter value passed into this method is important somehow and that the method probably changes the value.

I then took a look at the method (keep in mind this is all extremely simplified from the actual code).

public bool TrySomething(object something, ref int y)
{
  y = resultOfCalculation(something);
  if (y == 0)
    return false;
  return true;
}

Now this annoyed me. Sure, this method is perfectly valid and will compile. But notice that the value of y is never used. It is immediately assigned to something else.

The intention of this method is not clear. Its intent is not to ever use the value of y, but merely to set it. But since the method uses the ref keyword, you are required to set the value of the parameter before you call it. You can’t do this:

int y;
bool success = TrySomething(someParam, ref y);

In this case, using the out keyword expresses the intentions much better.

public bool TrySomething(object something, out int y)
{
  y = resultOfCalculation(something);
  if (y == 0)
    return false;
  return true;
}

It’s a really teeny tiny thing, something you might accuse me of being nitpicky for even bringing up, but anything you can do so that the reader of the code doesn’t have to interrupt her train of thought to figure out the meaning of the code will make your code more readable and the API more usable.
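To see the contract difference in action, here is a small, self-contained example of the Try pattern using out. The method and its names are mine, purely for illustration: the caller never initializes the variable, and the compiler guarantees the method assigns it on every path.

```csharp
public static class Parser
{
    // The out keyword tells both the compiler and the reader:
    // this method produces length; it never consumes it.
    public static bool TryGetLength(string input, out int length)
    {
        if (input == null)
        {
            length = 0; // out parameters must be assigned on every code path
            return false;
        }
        length = input.Length;
        return true;
    }
}
```

Calling it reads naturally: `int length; bool ok = Parser.TryGetLength("hello", out length);` succeeds with length set to 5, and no pre-assignment is needed.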

Boolean Arguments vs Enums

Brad Abrams touched upon this one a while ago. Let’s look at an example.

BlogPost p = CreatePost(post, true, false);

What exactly is this code doing? Well, it’s obvious it creates a blog post. But what does that true indicate? Hard to say. I’d better pause, look up the method, and then move on. What a pain!

BlogPost p = CreatePost(post
  , PostStatus.Published, CommentStatus.CommentsDisabled);

In the second case, the intentions of the code are much clearer, and there is no interruption for the reader to figure out the meaning of the true or false as in the first method.
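A minimal sketch of what the enum-based signature might look like follows. The enum names, the BlogPost fields, and the factory class are all invented for illustration, not Subtext’s actual API.

```csharp
public enum PostStatus { Draft, Published }
public enum CommentStatus { CommentsEnabled, CommentsDisabled }

public class BlogPost
{
    public bool IsPublished;
    public bool AllowComments;
}

public static class PostFactory
{
    // Each enum argument documents itself at every call site,
    // unlike a bare true/false.
    public static BlogPost CreatePost(BlogPost post, PostStatus status, CommentStatus comments)
    {
        post.IsPublished = (status == PostStatus.Published);
        post.AllowComments = (comments == CommentStatus.CommentsEnabled);
        return post;
    }
}
```

The enums cost a few extra lines to define, but every call site pays that investment back in readability.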

Assigning a Value You Don’t Use

Another common example I’ve seen is where the result of a method is assigned to the value of a variable, but the variable is never used. I think this often happens because some developers falsely believe that if a method returns a value, that value has to be assigned to something.

Let’s look at an example that uses the TrySomething method I wrote earlier.

int y;
bool success = TrySomething(something, out y);
/*success is never used again.*/

Fortunately, ReSharper makes this sort of thing stick out like a sore thumb. The problem here is that as a code reader, I’m left wondering if you meant to use the variable and forgot, or if this is an unnecessary declaration. Do this instead:

int y;
TrySomething(something, out y);

Again, these are very small things, but they make a big difference. Don’t worry about coming across as anal (you will), because the payoff is worth it in the end.

What are some examples that you can think of to make code more readable and usable?

UPDATE: Lesson learned. If you oversimplify your code examples, your main point is lost, especially on the topic of code readability. Touché! I’ve updated the sample code to better illustrate my point. The comments may be out of sync with what you read here as a result.

UPDATE AGAIN: I found another great blog post about writing concise code that adds a lot to this discussion. It is part of the Fail Fast and Return Early school of thought. Short, concise and readable code - invert your logic and stop nesting already!

comments edit

According to FeedBurner, many of my readers are from London, so I thought you might enjoy this little tale.

Tonight, I met someone extremely famous, or so I was told. When I got home, I looked him up, and sure enough, he is huge in Europe. According to Wikipedia, “he has sold more albums in the UK than any other British solo artist in history”.

Have any of you heard of Robbie Williams?


My wife knew who he was immediately. Must be the fact that she’s a British citizen (she has dual Japanese citizenship as well). She played one of his songs from an Alice 97.3 compilation we have. I rather liked it.

It turns out that he runs (owns?) a soccer team in Los Angeles. We had a friendly scrimmage set up with them at UCLA. I fully expected we’d be playing on the intramural fields where everyone else plays, but instead we played on the immaculate UCLA Football team’s practice field.

This seems to be a trend I’m noticing among British music stars. They move to Los Angeles and start up soccer teams to manage. They also seem to have the means to absorb some of the best talent in Los Angeles in doing so.

As I’ve written before, Steve Jones of the Sex Pistols runs a team in my league. I have heard that Rod Stewart has a team in Los Angeles as well. I suppose if the day comes when I can’t run on the pitch, and if I had that sort of money, I could see running a soccer club (sorry, Football Club) as a fantastic hobby.


Not to be outdone, my team now has its own celebrity member. Santiago Cabrera from the TV show Heroes is now a member of our team.

Fortunately, he is a very talented soccer player, scoring a bicycle kick against us in our scrimmage tonight (he plays on the other team as well). Now if we could just get some former pros to join us to help solidify our midfield. Zidane, I’m looking at you, buddy!

comments edit

Tim Heuer has been on a tear lately, submitting some great new skins to the Subtext Skin Showcase, which is part of the Subtext website.

The Showcase is the part of the site in which we display user submitted skins and allow others to download the skins. The other part of the site displays the default skins in Subtext.

Blue Terrafirma, Dirtylicious, Informatif

It appears that Tim has been porting some of the nicer designs from the Open Designs website, a site devoted to open source web design.

Tim happens to also be the creator of Origami (which you can see in use on Rob Conery’s Blog), which many consider to be the nicest skin in Subtext.

If you are a Subtext user, try out some of these skins. They may find their way into future releases of Subtext.

comments edit

Simone Chiaretta, a member of the Subtext team (not to mention many other projects), just released a Vista Gadget which allows you to monitor a CruiseControl.NET build within your sidebar.

It looks spiffier than the system tray applet that comes with CCNET.

Here’s a screenshot of it docked.

CCNET Gadget

And undocked.

CCNET Gadget

From the screenshots you can see the status of the projects he is monitoring. The good news is that the 1.9 build has been fixed since he took these screenshots.

Pretty nifty!

comments edit

I received a strange delinquency notice for a parking ticket. At first glance, it seemed normal enough. Yep, there’s my license plate number. Yep, the make of the car is correct. But look at this, the color of the car is wrong.

That’s strange since it’s not one of those cases where they indicated midnight blue when the car is black. No, they indicated red and my car is blue.

And one other minor detail was a bit off. The parking ticket was for Fillmore street in San Francisco and I live in Los Angeles.


I called the SF parking department and the nice woman on the phone looked into it and told me that the parking attendant made several errors in the citation and I can disregard the notice.

Several errors? I’ll say.

Like hallucinating a car that couldn’t possibly be in San Francisco at the time? Or perhaps there just happens to be a red car of the same make as mine with the same license plate number, just with a “B” where mine has an “8”.

comments edit

It wasn’t till 1987 that I experienced my first (and worst) case of technolust ever. The object that inspired such raw feelings of lust, of course, was the Commodore Amiga.

As a lowly Commodore 128 owner, which was really just a glorified Commodore 64 in a beige case, I bought every issue of the Commodore magazines of the day.

These magazines started showing off these lush advertisements of the Commodore Amiga, boasting of its 4096 colors and 4-channel stereo sound.

I had to have it.

Looking back, I am shocked at how much my lust for the Amiga held sway over me. I purchased a copy of every Amiga magazine on the newsstand, talked about it incessantly to anyone who would listen, and had vivid dreams of the Amiga’s amazing graphics capabilities.

And when I finally got my hands on it, it was every bit as good as I had hoped.

For many Amiga users at the time, the Amiga was true to its name (Spanish for “female friend”) in that it was the closest thing to a girlfriend we had. Give me a break, I was only twelve at the time.

Like having a girlfriend, I spent countless hours with the computer, not to mention countless dollars on peripherals and upgrades. I remember hustling for tips at the local commissary in order to upgrade the beast from 512K to 1MB of RAM (cost: $99).

The reason I bring this up is I came across a recent article on the Wired website entitled Top 10 Most Influential Amiga Games, which filled me with a rush of nostalgia.

I only had the pleasure of playing two of the games listed: Defender of the Crown, in which catapulting castles was pure fun, and Speedball 2, which was probably responsible for the pile of broken joysticks I accumulated.

Defender of the Crown catapult scene
Speedball 2

Personally though, I thought Lords of the Rising Sun (also made by Cinemaware) was even better than Defender of the Crown.

Lords of the Rising Sun screenshot

The game sequence in which you could snipe advancing besiegers using a first-person bow and arrow with a little red laser-point dot was exhilarating (sadly, I could not find a screenshot).

Speedball 1

I also liked Speedball 1 (shown here) slightly better than 2 because the side scrolling in 2 always threw me off.

I still have my Amiga 500 gathering dust in a storage cabinet in the garage. I’ve been meaning to unpack it and see if it still works, but my home is small and there’s really no room to set it up. I figure there must be a better way to try out my old games.

Amiga Emulation!

Digging around, I discovered there’s an active project to create an Amiga emulator for *nix called UAE. There’s a Windows port called, not surprisingly, WinUAE (click for full size).


Unfortunately, these projects cannot distribute the Amiga ROM nor its operating system due to copyright issues. However they do provide instructions on how to transfer the ROM and operating system over to your PC on their FAQ.

Amiga Forever

An even easier approach is to simply purchase Amiga Forever for around forty bucks. This is an ISO image that contains a preconfigured WinUAE with the original ROM and operating system files. Amiga Forever is sold by Cloanto, which currently owns certain intellectual property rights to the Amiga.

Amiga Forever also comes with several Amiga games, which vary with the edition purchased. The site also has a games section listing places to download more games.

For example, the Cinemaware site has disk images for pretty much all of their games available for free, including Lords of the Rising Sun.

Play Defender of the Crown Immediately

All this talk of Amiga emulation sounds like fun and everything, but seriously, do I need yet another time sink? If you’re jonesing for some Amiga gaming now and don’t want to be bothered with emulation, head over to the Cinemaware website and satiate your Amiga gaming kick by playing the Flash version of Defender of the Crown. Now about that time sink…

Though I owned a couple computers prior to the Amiga, the Amiga is truly the computer that fueled my fire for computing.

comments edit

Geeks With Blogs just switched its 1,442 (and counting) blogs, containing 25,921 blog posts and 39,140 comments, over to Subtext. As Jeff Julian reports, it only took them six hours.

Jeff posted a pic of the crew at work to make it happen (click for larger).

GWB'ers burning the midnight oil

Not depicted in the picture are members of the Subtext team who have tried their best to be responsive and helpful to the GWB team during their early planning phases for the move.

Subtext should handle the load just fine considering that they were running on .TEXT prior, and though we’ve made a lot of changes, we haven’t changed the data access code drastically.

Tip of the hat to Scott Watermasysk for building the original .TEXT code in a scalable manner, laying a good foundation for this sort of installation.

Already, the large site may have sussed out a caching bug we’ve been trying to track down for ages, but haven’t been able to reproduce.

Anyways, congratulations to the GWB team for a successful migration.

Technorati tags: Subtext, Geeks With Blogs

comments edit

Maybe this is obvious, but it wasn’t obvious to me. I’m binding some data in a repeater that has the following output based on two numeric columns in my database. It doesn’t matter why or what the data represents. It’s just two pieces of data with some formatting:

42, (123)

Basically these are two measurements. Initially, I would databind this like so:

<%# Eval("First") %>, (<%# Eval("Second") %>)

The problem with this is that if the first field is null, I’m left with this output.

, (123)

Ok, easy enough to fix using a format string:

<%# Eval("First", "{0}, ") %>(<%# Eval("Second") %>)

But now I’ve learned that if the first value is null, the second one should be blank as well. Hmm… I started to do it the ugly way:

<%# Eval("First", "{0}, ") %> <%# Eval("First").GetType() == 
  typeof(DBNull) ? "" : Eval("Second", "({0})")%>

*Sniff* *Sniff*. You smell that too? Yeah, stinky and hard to read. Then it occurred to me to try this:

<%# Eval("First", "{0}, " + Eval("Second", "({0})")) %>

Now that code smells much, much better! I put the second Eval statement inside the format string for the first. Thus, if the first value is null, the whole string is left blank. It’s all or nothing, baby! Exactly what I needed.
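The all-or-nothing behavior of the nested Eval trick is easy to sketch outside ASP.NET. Here’s a hedged Python analogue (the `eval_format` and `render` helpers are my own names, standing in for Eval’s format-string overload, not part of any real API):

```python
def eval_format(value, fmt="{0}"):
    # Mimics Eval's format-string overload: a null (None) value
    # yields an empty string instead of formatted output.
    return "" if value is None else fmt.format(value)

def render(first, second):
    # Nest the second formatted value inside the first's format string,
    # so a null first value blanks the entire output.
    return eval_format(first, "{0}, " + eval_format(second, "({0})"))

print(render(42, 123))    # 42, (123)
print(render(None, 123))  # (empty string)
```

The key design point is the same as in the markup: because the second value is baked into the first value’s format string, the null check on the first value short-circuits everything.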

comments edit

UPDATE: Luke Wroblewski posted a link in my comments to his Best Practices for Form Design PDF. It is 100+ pages chock full of good usability information concerning forms. Thanks Luke!

James Avery writes about the Art of Label Placement in which he links to a few great articles on form design and label placement.

Web Application Form Design by Luke Wroblewski - This article covers the best ways to arrange labels and submission buttons.

Web Application Form Design Expanded by Luke Wroblewski - Another great article from Luke W. expanding on the same topics.

Label Placement in Forms by Matteo Penzo - Matteo takes Luke’s advice but applies eye tracking to evaluate how usable it is.

Eye Tracking

Based on these articles, James concludes that non-bold labels above input fields are best for usability. Interestingly enough, a non-bold label just above the form field happens to be my personal preference as well.

And now, I know why.

Matteo Penzo’s research using Eye Tracking provides some empirical evidence that this arrangement is more usable.

comments edit

Rob Conery is soliciting our feedback for a panel on Open Source that he’ll be participating in at Mix07.

He’s joined by some big names in the world of Open Source Software, including Miguel de Icaza. Hot damn!

I won’t lie, I did want to be a part of the panel when I first heard about it (in part to get a free ticket, but also because I love hearing myself talk about Open Source), but I did not make the cut. Now I see why, and I’m kind of glad I’m not up there risking looking like a fool next to those guys.

Not that Rob is going to look foolish. He’s got a lot of smarts. You’ll do fine, Rob! Trust me.

comments edit

How good are you at thinking on your feet?

Last night I watched the premiere of a new show called Thank God You’re Here. It’s a sketch improv comedy show starring various comedy television and movie stars who have to bluff their way through a scene. They are given costumes and a set, but no script.

The set of Thank God You're Here

The title of the show derives from the fact that the first line of each skit is “Thank god you’re here!”

I love improv comedy, and I thought Newman from Seinfeld was great, as was the dad from Malcolm in the Middle. You can watch the premiere online.

It’s reminiscent of one of my favorite improv shows ever, Drew Carey’s Whose Line Is It Anyway?, but with better sets and costumes. Though it remains to be seen whether they will ever top the funniest Whose Line episode ever, with Richard Simmons.

Whose Line Is It Anyway?

code, sql comments edit

I’m not one to post a lot of quizzes on my blog. Let’s face it, while we may create altruistic reasons for posting quizzes such as:

  1. It’s an interesting problem I thought up
  2. It’s an interesting bug I ran into

we all know the real reasons for posting a quiz.

  1. It serves as blog filler.
  2. It’s a way to show off how smart the blogger is.

With that in mind, let me humbly present my latest SQL Quiz, which is something I ran into at work recently, and will not show off any smarts whatsoever.

The circumstances of this problem have been dramatically changed and simplified to both protect the guilty and save me from a lot of typing.

In this application, we have two tables. One contains a lookup list of various statistics. The second is a larger table of measurements for each of the statistics.

The following screenshot shows the data model.

Statistic and Measurement tables

The following screenshot shows the list of contrived statistics.

Statistic Table

What we see above are the following:

  1. LOC per bug - Lines of code per bug.
  2. Simplicity Index - some magical number that purports to measure simplicity.
  3. Awe Factor - The awe factor for the source code.

For each of these statistics, the larger, the better.

The following is a view of the Measurement table.

Measurement Table

Each measurement has the previous score and current score (this is a denormalized version of the actual tables for the purposes of demonstration).

I needed to write a query that would show each of the stats for a given developer as well as a Trend Factor. The Trend Factor tells you whether or not the statistic is trending positive or negative, where positive is better and negative is worse.

Result of the query

Here is my first cut at the stored procedure. It’s pretty straightforward. In order to make the important part of the query as clear as possible, I used a Common Table Expression to make sure the count of measurements for each statistic can be referenced as if it were a column.

CREATE PROC [dbo].[Statistics_GetForDeveloper](
  @Developer nvarchar(64)
)
AS
WITH MeasurementCount(StatisticId, MeasurementCount) AS
(
  SELECT StatisticId = s.Id
    , MeasurementCount = COUNT(1)
  FROM Statistic s
    LEFT OUTER JOIN Measurement m ON m.StatisticId = s.Id
  GROUP BY s.Id
)
SELECT Statistic = s.Title
  , Developer
  , CurrentScore
  , PreviousScore
  , mc.MeasurementCount
  , TrendFactor = (CurrentScore - PreviousScore) / mc.MeasurementCount
FROM Statistic s
  INNER JOIN MeasurementCount mc ON mc.StatisticId = s.Id
  LEFT OUTER JOIN Measurement m ON m.StatisticID = s.Id
WHERE Developer = @Developer

The relevant part of the query is the TrendFactor calculation. We calculate the TrendFactor by taking the current score, subtracting the previous score, and then dividing the difference by the number of measurements for that particular statistic. This tells us how the statistic is trending.

In this application, I am going to present an up arrow for trend factors larger than 0.1, a down arrow for trend factors less than -0.1, and a flat line for anything in between. A trend factor going upward is always considered a “good thing”.
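The thresholding described above is simple to sketch. Here’s a minimal Python illustration of the trend-factor math and the arrow cutoffs from the post (the function names are my own, not from the actual application):

```python
def trend_factor(current, previous, measurement_count):
    # Same formula as the stored procedure:
    # (CurrentScore - PreviousScore) / MeasurementCount
    return (current - previous) / measurement_count

def trend_arrow(factor):
    # Map the factor to the presentation described above:
    # up arrow above 0.1, down arrow below -0.1, flat line otherwise.
    if factor > 0.1:
        return "up"
    if factor < -0.1:
        return "down"
    return "flat"

print(trend_arrow(trend_factor(12, 9, 10)))  # factor 0.3 -> up
```

Note that dividing by the measurement count dampens the trend for statistics with a long history, so a single good (or bad) score moves a well-established statistic less than a new one.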

The Challenge

This works for now because for each statistic, a larger value is considered better. But we need to add a new statistic, Deaths per LOC, which measures the number of deaths per line of code (gruesome, yes. But whoever said this industry is all roses and rainbows?). For this statistic, an upward trend is a “bad thing”.

Therefore, if the current score is larger than the previous score for this statistic, we would want the TrendFactor to be negative. Not only that, we may want to add more statistics in the future. Some for which larger values are better. And some for which smaller values are better.

So here is the quiz question. You are allowed to make a schema change to the Statistic table and to the stored procedure. What changes would you make to fulfill the requirements?

Bonus points, can you fulfill the requirements without using a CASE statement in the stored procedure?

Here is a SQL script that will set up the tables and the initial stab at the stored procedure for you. The script requires SQL Server 2005 or SQL Server Express 2005.