comments edit

Google Code Search is truly the search engine for the uber geek, and potentially a great source of sublime code and sublime comments.  K. Scott Allen, aka Mr. OdeToCode, posted a few choice samples of prose he found while searching through code (Scott, exactly what were you searching for?).

One comment he quotes strikes me as a particularly good point to remember about using locks in multithreaded programming.

Locks are analogous to green traffic lights: If you have a green light, that does not prevent the idiot coming the other way from plowing into you sideways; it merely guarantees to you that the idiot does not also have a green light at the same time.

The point the comment makes is that a lock does not prevent another thread from going ahead and accessing and changing a member.  Locks only work when every thread is “obeying the rules”.

Unfortunately, unlike an intersection, there is no light to tell you whether or not the coast is clear. When you are about to write code to access a static member, you don’t necessarily know whether or not that member might require having a lock on it. To really know, you’d have to scan every access to that member.

This is why concurrency experts such as Joe Duffy recommend following a locking model.  A locking model can be as simple or as complex as needed by your situation.
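To make the traffic-light analogy concrete, here is a contrived C# sketch (the Account class and its members are hypothetical, not from any real codebase): a lock only protects the threads that agree to take it.

```csharp
public class Account
{
    private readonly object padlock = new object();
    private int balance = 100;

    public int Balance
    {
        get { lock (padlock) { return balance; } }
    }

    // Obeys the rules: every read and write happens under the lock.
    public void Withdraw(int amount)
    {
        lock (padlock)
        {
            if (balance >= amount)
                balance -= amount;
        }
    }

    // Runs the red light: no lock taken, so on another thread this
    // can interleave with Withdraw and corrupt the balance. The lock
    // inside Withdraw does nothing to stop it.
    public void UnsafeDeposit(int amount)
    {
        balance += amount;
    }
}

public class Program
{
    public static void Main()
    {
        Account account = new Account();
        account.Withdraw(30);
        System.Console.WriteLine(account.Balance); // prints 70
    }
}
```

A locking model is simply the agreement that no method like UnsafeDeposit is allowed to exist.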

In any case, have fun with Google Code Search. You might find yourself reviewing the code of a Star Trek text-based sex adventure.

comments edit

Just thought I would highlight something I mentioned in my last post because I thought it was particularly funny. I wrote about the joys of using Google Code Search to search through source code for interesting comments. Definitely a geeky pastime.

However that geekiness is overshadowed by something interesting I found. In this case, it’s not the comments that are interesting, but the actual code itself.

It appears to be a lonely geek’s fantasy written in code. A text-based adventure about sexcapades with Dr. Beverly Crusher on the Star Trek Enterprise. Here’s a tame snippet to give you an idea.

dancewithdesc ( actor ) = { "You and [Dr. Beverly Crusher] dance a quiet, soft, close dance around her quarters. It is really quite chaste, but you find yourself intoxicated by her scent and stimulated by your grasp and hers, both of which stray far lower on each others’ backs than is supposedly called for with this step.";


You have to read it to believe it (warning, trashy novel style adult content).

comments edit

UPDATE: Looks like the DNS issue is starting to get resolved. The fix may not have fully propagated yet.

If your Akismet Spam Filtering is currently broken, it is probably due to a DNS issue going around. I reported it to the akismet mailing list and found that people all over the world are having the same issue. It is not just a Comcast issue.

The temporary fix is to add the following entry into your hosts file:

Hopefully the Akismet team will fix this problem shortly.

comments edit

My favorite unit testing framework just released a new version. Andrew Stopford has the announcement here and you can download the release from the MbUnit site.

I met Andrew at Mix 06 early this year and he’s a class act and a great project lead. I’ve been following MbUnit’s progress on and off and am really happy with the team’s responsiveness to my submitted issues.

My one tiny contribution to the project was to purchase the domain for them. Perhaps a little bribe to get my feature requests looked at promptly. ;)

If you are wondering why I prefer MbUnit over NUnit, check out these posts:

comments edit

Ok, I could use some really expert help here. I really like using the built-in WebServer.WebDev web server that is a part of Visual Studio.

  1. For one thing, it makes getting a new developer working on Subtext (or any project) that much faster. Just get the latest code, and hit CTRL+F5 to see the site in your browser. No pesky IIS set up.

Today though, I ran into my first real problem with this approach. When running the latest Subtext code from our trunk, I am getting a SecurityException during a call to Server.Transfer.

Stepping through the code in the debugger, the page I transfer to executes just fine without throwing an exception.

Based on the stack trace, the exception occurs when the content is being flushed to the client. A security demand for Unmanaged Code is the cause of this during a call to the IHttpResponseElement.Send method of the HttpResponseUnmanagedBufferElement class.

What I don’t understand is why this particular class is handling my request instead of the HttpResponseBufferElement class? This code seems to work fine when I use IIS, so I think it’s a problem with WebServer.WebDev. Anybody know anyone who understands these internals well enough to enlighten me? I’d be eternally grateful.

I posted this question on the MSDN forums as well.

comments edit

Recently, while picking up a few items at Target, I decided to buy a cheapo soccer ball. Now those who know me know I’m a bit of a fanatic about playing soccer, willingly paying good money for a quality ball.

But this ball is not for playing outdoors. I keep it in my office so I can dribble it during breaks, deftly avoiding obstacles on my way to the bathroom, practicing moves during phone calls and long compilations.

It’s a minor thing, but I am already noticing improvement when playing for real, just through the benefits of visualization and practice. I wouldn’t recommend this for every sport. Images of Craig Andera with a hockey stick breaking furniture in his office come to mind.

As software developers, we tend to hold the idea of innate talent in very high regard. How often do you hear software pundits saying, “Either you got it, or you don’t”?

However, according to a recent Scientific American article, The Expert Mind, this may not be as much the case as we think.

At this point, many skeptics will finally lose patience. Surely, they will say, it takes more to get to Carnegie Hall than practice, practice, practice. Yet this belief in the importance of innate talent, strongest perhaps among the experts themselves and their trainers, is strangely lacking in hard evidence to substantiate it.

The article delves into studies of chessmasters who when briefly shown a random chessboard cannot recall the positions of its pieces any better than non-chessmasters, but when those pieces represent possible configurations due to game play, have a significantly stronger recall.

The article concludes that chessmasters build structures in their brains to recognize patterns in chess, and that to become an expert in chess takes around ten years.

The one thing that all expertise theorists agree on is that it takes enormous effort to build these structures in the mind. Simon coined a psychological law of his own, the 10-year rule, which states that it takes approximately a decade of heavy labor to master any field. Even child prodigies, such as Gauss in mathematics, Mozart in music and Bobby Fischer in chess, must have made an equivalent effort, perhaps by starting earlier and working harder than others.

It turns out that the quality of effortful study is a big factor in moving from novice to expert. So not everyone will become an expert in 10 years, only those who continue to push themselves, examine their weaknesses and strengths, and study accordingly.

I figured I could move past my plateau as a soccer player by creating ways to practice better and more often, hence the soccer ball in my office.

I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren’t making me better as a developer. I pretty much read books on autopilot these days.

It’s not until I actually spend time thinking about the implications and applications of the concepts in these books, explaining them to others, and writing code to test my understanding, that I really feel growth in my craft.

Of course, that leaves me with the question of whether some people are innately more curious or better at studying and finding ways to improve themselves, but that’s a question for the researchers to work on.

If you haven’t already, I recommend reading the article, because my summary does not do it justice.

comments edit

In his essay No Silver Bullet: Essence and Accidents of Software Engineering, Fred Brooks makes the following postulate:

There is no single development, in either technology or in management technique, that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity.

This “law” was recently invoked by Joel Spolsky in his post Lego Blocks, which prompted an interesting rebuttal by Wesner Moise.

That assertion turns out to be pure nonsense, amply disproven by numerous advances in IDEs, languages, frameworks, componentization over the past few decades. Our expectations of software and our ability have risen. A year of work takes a month or a month of work takes a day.

Whether you agree with Wesner’s position or not comes down to how you define a single development.  It could be argued that the order of magnitude improvement we have now is a cumulative result of multiple improvements.

Regardless, perhaps a more lasting way to rephrase this assertion is to state that no single technology, development, or management technique will by itself produce an order-of-magnitude improvement in meeting current business needs.

In other words, sure we can produce an order-of-magnitude more productivity now than we could before, but changing business climates and consumer needs have also increased by an order-of-magnitude. Just compare a modern game like Oblivion to an older game like Ultima I.


In a way, this is Parkinson’s Law at work:

work expands so as to fill the time available for its completion.

I’ll restate it to apply to software engineering:

Business needs and feature requirements increase to fill in the productivity gains due to silver bullets.

What do you think, is that sufficiently original to call it Haack’s Law? Wink

In any case, I think Joel’s original point still stands. Building software to meet current needs will always be hard.  When you think about it, the dream of building software with lego-like blocks has been realized, but only for those who need to write software that meets the needs of users in the 1960s.  For modern needs, it remains challenging.

comments edit

One of the hidden gems in ASP.NET 2.0 is the new expression syntax. For example, to display the value of a setting in the AppSettings section of your web.config, you can do this:

<asp:Label Text="<%$ AppSettings:AnotherSetting %>"
    runat="server" />

Notice that the value of the Text property of the Label control is set to an expression that is similar to the DataBinding syntax (<%#), but instead of a pound sign (#) it uses a dollar sign ($).

Expressions are distinguished by the expression prefix. In the above example, the prefix is AppSettings.  The following is a short list of built-in expression prefixes you can use (I am not sure if there are more):

  • Resources
  • ConnectionStrings
  • AppSettings

But like most things in ASP.NET, this system is extensible, allowing you to easily build your own custom expressions. In this blog post, I’ll walk you through building a query string expression builder. This will allow you to display a query string value like so:

<asp:Label Text="<%$ QueryString:SomeParamName %>"
    runat="server" />

The first step is to create a class that inherits from System.Web.Compilation.ExpressionBuilder. Be sure not to confuse this with System.Web.Configuration.ExpressionBuilder.

using System.Web.Compilation;

public class QueryStringExpressionBuilder : ExpressionBuilder
{
    //Implementation goes here...
}

ExpressionBuilder is an abstract class with a single abstract method to implement. This method returns an instance of CodeExpression which is part of the System.CodeDom namespace. For those not familiar with CodeDom, it’s short for Code Document Object Model. It is an API for automatic code generation. The CodeExpression class is an abstract representation of code that gets executed each time your custom expression is evaluated.

You’ll probably use something similar to the following implementation 99% of the time though (sorry for the ugly formatting, but this pretty much mimics the implementation in the MSDN documentation).

public override CodeExpression GetCodeExpression(
    BoundPropertyEntry entry
    , object parsedData
    , ExpressionBuilderContext context)
{
  Type type = entry.DeclaringType;
  PropertyDescriptor descriptor = 
    TypeDescriptor.GetProperties(type)[entry.PropertyInfo.Name];
  CodeExpression[] expressionArray = 
    new CodeExpression[3];
  expressionArray[0] = new 
    CodePrimitiveExpression(entry.Expression.Trim());
  expressionArray[1] = new 
    CodeTypeOfExpression(type);
  expressionArray[2] = new 
    CodePrimitiveExpression(entry.Name);

  return new CodeCastExpression(descriptor.PropertyType
    , new CodeMethodInvokeExpression(
        new CodeTypeReferenceExpression(GetType())
        , "GetEvalData"
        , expressionArray));
}

So what exactly is happening in this method? It is effectively generating code. In particular, it generates a call to a static method named GetEvalData which needs to be defined in this class. The return value of this method is then cast to the type returned by descriptor.PropertyType, which is why you see the CodeCastExpression wrapping the other code expressions.

The arguments passed to GetEvalData are represented by the CodeExpression array, expressionArray. The first argument is the expression to evaluate (this is the part after the prefix). The second argument is the target type. This is the type of the class in which the expression is being evaluated. In our case, this would be the type System.Web.UI.WebControls.Label as we are using this expression within a Label control. The final argument is entry. This is the name of the property being set using the expression. In our case, this would be the Text property of the Label.

You could really build any sort of code tree within this method, but as I said before, most of the time, you will follow a similar pattern as this. In fact, I would probably put this method in some sort of abstract base class and then make sure to define the static GetEvalData method in your inheriting class.

Note, if you choose to move this method into an abstract base class as I described, you can’t make GetEvalData an abstract method in that class because we generated a call to a static method.

You could consider changing the above method to build a call to an instance method, but then the generated code would have to create that instance every time your expression is evaluated. It would not have access to an instance of the expression builder automatically. The choice is yours.

Here is the GetEvalData method we need to add to QueryStringExpressionBuilder.

public static object GetEvalData(string expression
    , Type target, string entry)
{
    if (HttpContext.Current == null 
      || HttpContext.Current.Request == null)
    {
        return string.Empty;
    }

    return HttpContext.Current
      .Request.QueryString[expression];
}

With the code for the builder completed, you simply need to add an entry within the compilation section under the system.web section of web.config like so:

  <compilation debug="true">
    <expressionBuilders>
      <add expressionPrefix="QueryString" 
        type="NS.QueryStringExpressionBuilder, AssemblyName"/>
    </expressionBuilders>
  </compilation>

This maps your custom expression class to the expression via its prefix.

In the MSDN examples, they tell you to drop your expression class file into the App_Code directory. This works when you are using the Website Project model. Fortunately, you can also use custom expressions with Web Application Projects. Simply compile your builder into an assembly and make sure to specify the AssemblyName as part of the type attribute when declaring your expression builder.

If you are using the WebSite project model and the App_Code directory, you should leave off the AssemblyName portion of the type.

comments edit

No, this is not a bait and switch post where I try to recruit you to work on Subtext.  A while ago I mentioned that I was participating in The Hidden Network.  So far, I really like it, though I think there is still room for improvement.

If you visit my site (since many of you are reading this in an RSS aggregator), you might have noticed a Jobs link at top.  The link will take you to a full listing of jobs.

The neat thing about this job listings page is that it is hosted by The Hidden Network. I simply added a CName to redirect to hidden network.  They made it extremely easy for me to add a jobs section to my website.

Being bored, I figured I’d take a look through the list to see what kind of job opportunities are available.  Frankly, I am a little disappointed.  Many of the jobs sound like yaaaawners.  Perhaps more employers should read my guide, The Art Of The Job Post. (If that came across as arrogant, whoops.)

The Hidden Network is still pretty young, but over time I’d like to see a lot more jobs listed.  That would make using Geolocation to list jobs that are local to the reader more useful. I also think it’d be neat if I could annotate job postings.

There were a few that did catch my attention…

Sr. Admin, Programmer at Chuck E. Cheese I don’t know if the job itself sounds interesting, but hey! It’s Chuck E. Cheese!  Where a kid can be a kid! I’d ask for a signing bonus that includes free Pizza and  the passcode to play video games onsite for free.

Net, SQL, ASP Developer at Y! Music I have a buddy who works at Yahoo! in Santa Monica and loves it. In a bit of personal trivia, I actually worked on the original website, which was later bought by Yahoo! (many iterations later).  I interviewed with Yahoo! in Santa Monica, but chose to go to SkillJam instead.

.NET Software Engineer at IGN If you’re into gaming, this could be a lot of fun.

The Motley Fool has several jobs listed.  Not sure what they’d be like to work for, but at least you’d get good investment advice while on the job.

There may well be others in there worth mentioning; I wasn’t so bored that I would read the details of every one.  The good thing about these listings is they appear to be real jobs, and not phishing expeditions by headhunters.

This may be a longshot to even ask, but if you end up actually applying for a job because you saw it on my blog, would you let me know? 

If you are an employer, consider posting a job.

comments edit

Participating in the comments section of particularly interesting blog posts is a lot of fun and helps build community.  But one of the annoyances in doing so is that there’s really no good way to keep track of comments.  Unlike new posts in someone’s RSS Feed, most aggregators won’t tell you when there is a new comment.

Sure, there is coComment, but since I like to post comments using my RSS Aggregator via the CommentAPI, coComment isn’t such a help there.

But help is on the way.  Dare Obasanjo recently announced the beta release of Jubilee, the code name for RSS Bandit 1.5.  One of the more interesting features (and my favorite) included in this release is comment watching.

When reading an interesting post, you can right click on the post and select the Watch Comments option.  The following screenshot demonstrates.


In a stroke of pure vanity, I will select a blog post in Andrew Stopford’s blog that makes a reference to me and click Watch Comments.

Now if I wait long enough, someone will eventually leave a comment on that post.  Of course, why leave it to chance? I went ahead and left a comment via the browser (sorry Andrew).

When RSS Bandit updates, it shows me that someone left a comment in my Developers category by turning that category green.


Expanding that node, I can dig down to the post and read the new comment.


Of course, this only works for blogs that support wfw:commentRss.  Unfortunately, one of my favorite blogs, CodingHorror, which happens to always have lively conversation in the comments section, doesn’t support it.  Jeff, it’s time to move to Subtext!

Kudos go out to Dare and Torsten!  Unfortunately, I’ve been overcommitted and have not been able to contribute to RSS Bandit lately.

comments edit

Now this is a stroke of genius.  If you want people to consider making their .NET applications work on Mono, give them a tool that informs them ahead of time how much trouble (or how easy) it will be to migrate to Mono.

That is exactly what Jonathan Pobst did with the Mono Migration Analyzer (found via Miguel de Icaza).  This tool analyzes compiled assemblies and generates a report identifying issues that might prevent your application from running on Mono.  This report serves as a guide to porting your application to Mono.

Having Subtext run on Mono is a really distant goal for us, but a tool like this could advance the timetable on such a feature, in theory.

I tried to run the analyzer on every assembly in the bin directory of Subtext, but the analyzer threw an exception, doh!  That’s my “Gift Finger” at work (I could not find where to submit error reports so I sent an email to Mr. Pobst. I hope he doesn’t mind).


I then re-ran the analyzer selecting only the Subtext.* assemblies.


As you can see, we call 12 methods that are still missing in Mono, 23 methods that are not yet implemented, and 13 that are on their to-do list.  Clicking on View Detail Report provides a nice report on which methods are problematic.

In a really smart move, Moma also makes it quite easy to submit results to the Mono team.


This is a great way to help them plan ahead and prioritize their efforts.  Just for fun, I ran Moma against the BlogML 2.0 assembly and it passed with flying colors.

Nice!

code comments edit

One of the benefits of writing an ASP.NET book is that it forces me to spend a lot of time spelunking deep in the bowels of ASP.NET uncovering all sorts of little gems I never noticed the first time around.

Many of these little morsels should end up in the book, but I thought I would blog about a few of them as I go along. 

This is all part of the weird situation I find myself in while writing this book. I thought I would just sit down and the words would flow. Instead, no matter how motivated I am, every time I sit down to write I spend two hours procrastinating for every one hour of writing.  What gives!?

In any case, one of the gems I discovered is the ClientScriptManager.RegisterExpandoAttribute method.  This method allows you to add custom properties to a control.  These properties are not rendered in the HTML as attributes, but simply attached to the control in the DOM via javascript.

This is nice for control authors who want to make a custom control client scriptable, but still maintain XHTML compliance, since XHTML compliance doesn’t allow arbitrary attributes for tags.

The following is a really simple example.  I present here a custom control that inherits from Label.

public class ExpandoControl : Label
{
    //Code to be filled in.
}

The AddAttributesToRender method is the appropriate place to call RegisterExpandoAttribute.

protected override 
    void AddAttributesToRender(HtmlTextWriter writer)
{
    base.AddAttributesToRender(writer);
    Page.ClientScript.RegisterExpandoAttribute(ClientID
        , "contenteditable", "true");
}

Now we can access the contenteditable property of this control via client script. The following javascript demonstrates.

var expando = document.getElementById('expando');
alert('Content editable: ' + expando.contenteditable);

This is a good approach for developing a client-side API for your custom controls.

comments edit

UPDATE: In my original example, I created my own delegate for converting objects to strings. Kevin Dente pointed out that there is already a perfectly fine delegate for this purpose, the Converter delegate. I updated my code to use that instead. Thanks Kevin!  Just shows you the size and depth of the Framework libraries.

My recent post on concatenating a delimited string sparked quite a bit of commentary.  The inspiration for that post was some code I had to write for a project.  One constraint that I neglected to mention was that I was restricted to .NET 1.1.  Today, I revisit this topic, but with the power of .NET 2.0 in my pocket.

Let’s make our requirements a bit more interesting, shall we?

In this scenario, I have a new class creatively named SomeClass.  This class has a property, also creatively named SomeDate (how do I come up with these imaginative names?!). 

class SomeClass
{
    public SomeClass(DateTime someDate)
    {
        this.SomeDate = someDate;
    }

    public DateTime SomeDate;
}

Suppose I want to concatenate instances of this class together, but this time I want a pipe delimited list of the number of days between now and the SomeDate value.  For example, given the date 11/23/2006, the string should have a “1” since that date was one day ago.  Yes, this is a contrived example, but it will do.

Now I’ll define a new Join method that can take in a delimiter, an enumeration, and an instance of the Converter delegate.  The Converter delegate has the following signature.

public delegate TOutput Converter<TInput, TOutput>(TInput input);
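As an aside, the framework itself already uses this delegate in a few places; for example, Array.ConvertAll takes a Converter<TInput, TOutput> to project one array into another. A quick self-contained sketch (the sample values are mine):

```csharp
using System;

public class ConverterDemo
{
    // Projects each int to its string form via a Converter delegate,
    // then joins the results with a pipe.
    public static string PipeJoin(int[] numbers)
    {
        string[] strings = Array.ConvertAll<int, string>(
            numbers,
            delegate(int n) { return n.ToString(); });
        return String.Join("|", strings);
    }

    public static void Main()
    {
        Console.WriteLine(PipeJoin(new int[] { 1, 2, 3 })); // prints 1|2|3
    }
}
```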

As an argument to my Join method, I specify that TOutput should be a string, leaving the input to remain generic.

public static string Join<T>(string delimiter
                             , IEnumerable<T> items
                             , Converter<T, string> converter)
{
    StringBuilder builder = new StringBuilder();
    foreach(T item in items)
    {
        builder.Append(converter(item));
        builder.Append(delimiter);
    }

    if (builder.Length > 0)
        builder.Length = builder.Length - delimiter.Length;

    return builder.ToString();
}

Now with this method defined, I can concatenate an array or collection of SomeClass instances like so:

SomeClass[] someClasses = new SomeClass[]
{
  new SomeClass(DateTime.Parse("1/23/2006"))
  , new SomeClass(DateTime.Parse("12/25/2005"))
  , new SomeClass(DateTime.Parse("5/25/2004"))
};

string result = Join<SomeClass>("|", someClasses
  , delegate(SomeClass item)
    {
      TimeSpan ts = DateTime.Now - item.SomeDate;
      return ((int)ts.TotalDays).ToString();
    });

Notice that I make use of an anonymous delegate that examines an instance of SomeClass and calculates the number of days that SomeDate is in the past.  This returns a string that will be concatenated together.

This code produces the following output.


This gives me a nice reusable method to concatenate collections of objects into delimited strings via the Converter generic delegate. This follows a common pattern in .NET 2.0 embodied by such methods as the List.ForEach method which uses the Action generic delegate and the Array.Find method which uses the Predicate generic delegate.
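Those two patterns can be sketched quickly (the sample data here is hypothetical):

```csharp
using System;
using System.Collections.Generic;

public class DelegateDemo
{
    // Predicate<T> via Array.Find: returns the first element
    // for which the predicate returns true.
    public static int FirstEven(int[] numbers)
    {
        return Array.Find(numbers,
            delegate(int n) { return n % 2 == 0; });
    }

    public static void Main()
    {
        List<int> numbers = new List<int>(new int[] { 1, 3, 4, 5 });

        // Action<T> via List<T>.ForEach: performs a side effect
        // on each element in turn.
        numbers.ForEach(delegate(int n) { Console.Write(n + " "); });
        Console.WriteLine();

        Console.WriteLine(FirstEven(numbers.ToArray())); // prints 4
    }
}
```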

comments edit

Update: I also wrote a more generic version using anonymous delegates for .NET 2.0 as a followup to this post.

Here’s one for the tip jar. Every now and then I find myself concatenating a bunch of values together to create a delimited string.  In fact, I find myself in that very position on a current project. In my case, I am looping through a collection of objects concatenating together three separate strings, each for a different property of the object (long story).

Usually when building such a string, I will append the delimiter to the end of the string I am building during each loop.  But after the looping is complete, I have to remember to peel off that last delimiter.  Let’s look at some code, simplified for the sake of this discussion.

The first thing we’ll define is a fake class for demonstration purposes. It only has one property.

internal class Fake
{
    public Fake(string propValue)
    {
        this.SomeProp = propValue;
    }

    public string SomeProp;

    public static Fake[] GetFakes()
    {
        return new Fake[] {new Fake("one")
                , new Fake("two")
                , new Fake("three")};
    }
}
Now let’s look at one way to create a pipe delimited string from this array of Fake instances.

Fake[] fakes = Fake.GetFakes();

string delimited = string.Empty;
foreach(Fake fake in fakes)
    delimited += fake.SomeProp + "|";

delimited = delimited.Substring(0, delimited.Length - 1);

I never liked this approach because it is error prone. Do you see the problem? Yep, I forgot to make sure that delimited wasn’t empty when I called substring. I should correct it like so.

if(delimited.Length > 0)
    delimited = delimited.Substring(0, delimited.Length - 1);

When I write code like this, I almost always add a little disclaimer in the comments, because I know someone down the line is going to call me an idiot for not using the StringBuilder class to concatenate the string. However, if I know that the size of the strings and the number of concatenations will be small, there is no point in using the StringBuilder.  String concatenation will win out. It all depends on the usage pattern.

But for the sake of completeness, let’s look at the StringBuilder version.

Fake[] fakes = Fake.GetFakes();

StringBuilder builder = new StringBuilder();
foreach(Fake fake in fakes)
{
    builder.Append(fake.SomeProp);
    builder.Append("|");
}

string delimited = builder.ToString();
if(delimited.Length > 0)
    delimited = delimited.Substring(0, delimited.Length - 1);

Aesthetically speaking, this code is even uglier because it requires more code. And as I pointed out, depending on the usage pattern, it might not provide a performance benefit. Today, a better approach from a stylistic point of view came to mind. I don’t know why I didn’t think of it earlier.

Fake[] fakes = Fake.GetFakes();

string[] delimited = new string[fakes.Length];
for(int i = 0; i < fakes.Length; i++)
    delimited[i] = fakes[i].SomeProp;

string delimitedText = String.Join("|", delimited);

Since I know in advance how many items I am concatenating together (namely fakes.Length number of items), I can fully allocate a string array in advance, populate it with the property values, and then call the static String.Join method.

From a perf perspective, this is probably somewhere between string concatenation and StringBuilder, depending on the usage pattern. But for the most part, String.Join is quite fast, especially in .NET 2.0 (though my current project is on .NET 1.1.  Boohoo!).

Performance issues aside, this approach just feels cleaner to me.  It gets rid of that extra check to remove the trailing delimiter.  String.Join handles that for me.  To me, this is easier to understand.  What do you think?

comments edit

Here in the good ole U.S. and A, soccer doesn’t yet have the huge following or celebrity status that it does overseas. On one level, this is a good thing, as it means getting tickets for a game the day before is never too big a challenge.  On the downside, the quality of the game is often lacking especially when compared to watching a team like FC Barcelona.

However, that may change in the future, as soccer is one of the largest, if not the largest, youth sports today.  So while soccer players (excuse me, footballers) in the US don’t have the celebrity status of a basketball player, there are plenty of fans interested in knowing what it’s like to be a professional player.

My buddies Donny and Cory (past soccer teammates) figured the same thing, so they came up with an idea for a show that would highlight pro soccer players in the U.S. (and probably beyond at some point).  The show would basically follow a Day In The Life format.  I’ve been hearing Donny go on and on about this show for a while now, and it’s great to see it really happening.

Check out this promo video for the show called Beyond the Pitch.  My only contribution to this was to say I thought it was a brilliant idea. Something I’d definitely watch.  This video is a short cut from a pilot they filmed with Kevin Hartman. It’s used to sell the show and may never actually air as an episode.  Keep that in mind, as the final product will probably be even more polished.

comments edit

K. Scott Allen, famous for his OdeToCode blog, signed on to be the fourth co-author. His expertise and writing ability will help to compensate for our lack of such things.

A little while ago I wrote an email to the subtext-devs mailing list mentioning that I will be cutting back my day-to-day involvement in Subtext until around spring/summer of the new year.  I will still be involved, of course, but I cannot spend as much time writing code as I have been in the past.

However, my Subtext recruiting post was quite successful, and many new developers have joined in to keep Subtext humming along. Not only that, developers who have long been involved with Subtext have picked up the slack with major contributions.  I’m grateful for that, and happy to see things moving toward the best release of Subtext yet sometime in the new year.

So what exactly is keeping me so busy? 

I will be writing a Cook Book style book on ASP.NET 2.0 with my co-authors Jon Galloway and Jeff “CodingHorror” Atwood.  The three of us have long talked about writing a book together, and this opportunity from SitePoint came along at the right time.  We just signed the contract recently and already I am panicking about the various deadlines. Wink

It’s good to get the panic and self-doubts over with early (everyone will hate the book…no, worse, they’ll be indifferent to the book and hate me, spitting on me at Mix 09 when I finally release the book years late for rotting their brains just reading a synopsis) so I can get to the business of writing a fantastic book.

I’ve contributed a couple of sections to a book before (Windows Developer Power Tools), but that was nothing compared to co-authoring and writing a full third of a book.  I will be looking to my capable co-authors to make me look good.

So you may notice the frequency of blogging drop off for a bit, but I plan to pick it up a bit as I write the book, focusing on little pieces that relate to the book.  Wish us luck!

comments edit

In a recent post I talked about how good design attempts to minimize the impact of changes to a system, often through Design Patterns.

When used appropriately, Design Patterns are a great tool for building a great design, but there is an important caveat to keep in mind anytime you apply a pattern. A Design Pattern might minimize the impact of one kind of change at the expense of amplifying another type of change.

What do I mean by this? One common pattern is the Abstract Factory pattern which is often manifested in .NET code via the Provider Model pattern. The Provider Model abstracts access to an underlying resource by providing a fixed API to the resource. This does a bang up job of insulating the consumer of the provider when changing the underlying resource.

The MembershipProvider is one such implementation of the provider model pattern. The consumer of the MembershipProvider API doesn’t need to change if the SqlMembershipProvider is swapped in favor of the ActiveDirectoryMembershipProvider. This is one way that the provider pattern attempts to minimize the impact of changes. It insulates against changes to the underlying data store.

However there is a hidden tradeoff with this pattern. Suppose the API itself changes often. Then, the impact of a single API change is multiplied across every concrete implementation of the provider. In the case of the MembershipProvider, this is pretty much a non-issue because the likelihood of changing the API is very small.

But the same cannot be said of the data access layer for software such as a blog (or similar software). A common approach is to implement a BlogDataProvider to encapsulate all data access so that the blog software can make use of multiple databases. The basic line of thought is that we can simply implement a concrete provider for each database we wish to support. So we might implement a SqlBlogDataProvider, a MySqlBlogDataProvider, a FireBirdBlogDataProvider, and so on.

This sounds great in theory, but it breaks down in practice because, unlike the API to the MembershipProvider, the API for a BlogDataProvider is going to change quite often. Pretty much every new feature needs a backing data store.

Every time we add a new column to a table to support a feature, the impact of that change is multiplied by the number of providers we support. I discussed this in the past in my post entitled Where the Provider Model Falls Short.
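Here's a sketch of the multiplication effect, in Java syntax (the interface and class names are hypothetical, chosen to mirror the BlogDataProvider discussion above):

```java
// Hypothetical provider API for a blog's data access layer.
interface BlogDataProvider {
    String getPostTitle(int id);
    // Every new feature tends to add a method here, and each new method
    // must then be implemented in EVERY concrete provider below.
}

// One implementation per supported database: a single interface change
// is multiplied across all of them.
class SqlBlogDataProvider implements BlogDataProvider {
    public String getPostTitle(int id) { return "from SQL Server: post " + id; }
}

class MySqlBlogDataProvider implements BlogDataProvider {
    public String getPostTitle(int id) { return "from MySQL: post " + id; }
}

class FireBirdBlogDataProvider implements BlogDataProvider {
    public String getPostTitle(int id) { return "from Firebird: post " + id; }
}
```

With three providers, one new method means three new implementations to write, test, and keep in sync; with a churning schema that cost is paid over and over.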

Every Design Pattern comes with inherent tradeoffs that we must be aware of. There is no silver bullet.

The key when looking to apply patterns is not to follow a script blindly. Look at what changes often (in this case, the database schema) and figure out how to minimize the impact of that change. In the above scenario, one option is to simply punt the work of supporting multiple databases to someone else in a more generic fashion.

For example, using something like NHibernate or Subsonic in this situation might mean that a schema change only requires changing one bit of code. Then NHibernate or Subsonic is responsible for making sure that the code works against its list of supported databases.

One might object to these approaches on the grounds that a generic query engine cannot possibly query every database it supports as efficiently as the hand-tuned, database-specific SQL you would write in a database-specific provider. But I think the 80/20 rule applies here: let the dynamic query engine get you 80% of the way, and use a provider just for the areas that need it.

So again, this is not an indictment of the provider model. The provider model is extremely useful when used appropriately; as I mentioned, the MembershipProvider is a great example. But if you really need to support multiple databases AND your database schema is susceptible to a lot of churn, then another pattern may be in order.

comments edit

This one is probably old news to many of you, but I just recently ran across it. Every time I want to add a new control to a new page, I get annoyed because I have to remember that annoying syntax for registering a control.

Let’s see…how does it go again? Do I have to add a TagName attribute? No, that’s for user controls. Hmmm, forget it, I’ll just dynamically add it! Well in the interest of reducing future angst, here are two examples of the syntax, one for a custom control and one for a user control.

<%@ Register TagPrefix="st" Namespace="Subtext.Web.Controls" 
  Assembly="Subtext.Web.Controls" %>
<%@ Register TagName="SomeControl" TagPrefix="st" 
  Src="~/Controls/SomeControl.ascx" %>

The first one registers the tag prefix st with the Subtext.Web.Controls namespace in the Subtext.Web.Controls assembly. The second one registers the tag name SomeControl with the user control SomeControl.ascx.

Add this to the top of your page or user control and you can reference a control from this assembly like so:

<st:HelpToolTip id="blah" runat="server" HelpText="Blah!" />
<st:SomeControl id="foo" runat="server" />


Fortunately, starting with ASP.NET 2.0, we can register a tag prefix within the Web.config file. This basically makes all the controls within that namespace and assembly available to all pages without having to add that ugly Register declaration.

<pages>
  <controls>
    <add tagPrefix="st" namespace="Subtext.Web.Controls"
         assembly="Subtext.Web.Controls" />
    <add tagPrefix="st" tagName="SomeControl"
         src="~/Controls/SomeControl.ascx" />
  </controls>
</pages>

Thanks to the ASP.NET 2.0 MVP Hacks book for this one.

comments edit

We’ve all been there.  Your project stakeholder stands in your doorway with a coffee mug in hand and asks for one more teeny tiny change.

Yeeeaaah. It’d be great if you could just change the display to include the user’s middle name.  That’s pretty easy, right?

No problem!  Let’s see.  I’ll just need to modify the database schema to add the column, update several stored procedures to reflect the schema change, add a new property to the User class, update the data access code to reflect the new property, and finally update the various user controls that render or take in input for this information.

That’s quite a number of changes to the codebase for one measly little change.

The goal of good software design is to minimize the impact of changes in the code.  Many of you might be having the same reaction to this that you would if I just told you the sky is blue.  Well no duh!  Even so, I think this bears repeating again and again, because this principle is violated in subtle ways, which I will discuss in a follow-on post.

This is one reason that duplicate code is considered such an odoriferous code smell.  When a snippet of code is repeated, a change must be made in every location where that snippet appears.

Many Design Patterns focus on minimizing the impact of changes by looking at what varies in a system and encapsulating it.

For example, suppose you develop a class that monitors the power level of your uninterruptible power supply (UPS).  When the power level changes, several UI widgets need to be updated.

A naïve implementation might have the UPS class keep a reference to each widget that needs to be updated and directly call various methods or properties on each widget to update its state.

The downside of this approach becomes apparent when you need to add a new widget or change a widget.  You now need to update the UPS class because of changes to the UI.  The UPS class is not insulated from changes to the UI.

The Observer pattern addresses this issue by reversing the direction of the dependency so that the UPS class (the observed) has no direct knowledge of the UI widgets (the observers).  The widgets all implement a common observer interface, and the UPS class only needs to know about that one interface.  Add a new widget, and the UPS class does not need to be updated.  Now the UPS class is insulated from changes to the UI.
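A minimal sketch of that inverted dependency, in Java syntax (class and method names are invented for illustration; in .NET you would more typically use events and delegates than a hand-rolled interface):

```java
import java.util.ArrayList;
import java.util.List;

// The one interface the UPS knows about.
interface PowerObserver {
    void powerLevelChanged(int percent);
}

class Ups {
    private final List<PowerObserver> observers = new ArrayList<>();

    void addObserver(PowerObserver o) { observers.add(o); }

    // The UPS notifies whoever registered; it has no idea
    // (and does not care) that the observers are UI widgets.
    void setPowerLevel(int percent) {
        for (PowerObserver o : observers) {
            o.powerLevelChanged(percent);
        }
    }
}

// A new widget just implements the interface; the Ups class is untouched.
class BatteryMeterWidget implements PowerObserver {
    int lastSeen = -1;
    public void powerLevelChanged(int percent) { lastSeen = percent; }
}
```

Note the direction of the arrows: the widget depends on the observer interface, and Ups depends only on that same interface, so UI churn never touches Ups.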

Another example of code that is not resilient to change is a class with several methods that contain a similar switch statement.  Going back to the example of the UPS class, suppose the class has several operations it must do every few seconds.  But how it implements each operation is dependent on the current power state.

A naïve implementation might have a switch statement in each method, with a case for each possible power state.  The problem with this approach is that when we need to add a new power state, or change how an existing state behaves, we have to update multiple existing methods.  The State pattern addresses this problem by encapsulating each state’s behavior in its own class.  Each power state gets its own class, and the UPS class simply delegates calls to its current state instance.
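Here's what that delegation looks like, again in Java syntax with invented names (the states and poll intervals are made up for the sketch):

```java
// Each power state encapsulates its own behavior in one place.
interface PowerState {
    String statusMessage();
    int pollIntervalSeconds();
}

class OnlineState implements PowerState {
    public String statusMessage() { return "On mains power"; }
    public int pollIntervalSeconds() { return 30; }
}

class OnBatteryState implements PowerState {
    public String statusMessage() { return "Running on battery"; }
    public int pollIntervalSeconds() { return 5; }
}

// The monitor delegates every operation to its current state:
// no switch statements scattered across its methods.
class UpsMonitor {
    private PowerState state = new OnlineState();
    void setState(PowerState s) { state = s; }
    String status() { return state.statusMessage(); }
    int pollInterval() { return state.pollIntervalSeconds(); }
}
```

Adding a LowBatteryState now means adding one class, instead of adding a case to the switch in every method of the monitor.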

So where is the downside in all this? It seems like these patterns provide a win-win situation.  In these contrived examples they do, but not in every situation.  Used improperly, a pattern that minimizes one kind of change in one scenario can actually amplify another kind of change in a different one.  Stay tuned.