comments edit

I am now a published DevSource article author. :)

Well actually, Bob Reselman (aka Mr. Coding Slave) did nearly all of the writing. I merely provided technical editing and proofing along with clarifying some sections. In return, he graciously gave me a byline.

The article is a beginner's guide to exceptions in .NET, and Bob did a bang-up job, especially given the circumstances. He writes in a very approachable manner.

Hopefully, we can follow-up with a more in-depth version to take beginners to the next level in understanding best practices. One thing I wish we had discussed in the article is the guidelines around re-throwing exceptions. For example, I’ve seen many beginning developers make the following mistake…

public void SomeMethod()
{
    try
    {
        SomeOtherMethod(null);
    }
    catch(ArgumentNullException e)
    {
        //Code here to do something

        throw e; //Bad!
    }
}

The problem with this approach is the code in the catch clause is throwing the exception it caught. The runtime generates a stack trace from the point at which an exception is thrown. In this case, the stack trace will show the stack starting from the line throw e and not the line of code where the exception actually occurred.

If this is confusing, consider that the runtime doesn’t exactly distinguish between these three cases…

Exception exc = new Exception();
throw exc;

throw new Exception();

throw SomeExceptionBuilderFunction();
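A related option, mentioned here for completeness, is to wrap the caught exception in a new, more descriptive exception and pass the original along as the inner exception; the original stack trace is then preserved on the InnerException property. A minimal sketch (the wrapper exception type and message are just for illustration):

public void SomeMethod()
{
    try
    {
        SomeOtherMethod(null);
    }
    catch(ArgumentNullException e)
    {
        //Code here to do something

        //Wrap the original exception; it rides along as InnerException
        //with its stack trace intact.
        throw new InvalidOperationException("SomeOtherMethod failed.", e);
    }
}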

If you really intend to rethrow an exception without wrapping it in another exception, then the proper syntax is to use the throw keyword without specifying an exception. The original exception will propagate up with its stack trace intact.

public void SomeMethod()
{
    try
    {
        SomeOtherMethod(null);
    }
    catch(ArgumentNullException e)
    {
        //Code here to do something

        throw; //Better!
    }
}

However, even this can be improved upon, depending on why we are catching the exception at all. If we are only performing cleanup in the catch clause, it would be better to move that code to a finally block and not catch the exception in the first place.

Also, using the throw syntax as illustrated above can affect the debuggability of a system. Christopher Brumme points this out in his annotation in the book Framework Design Guidelines (highly recommended), where he notes that attaching a debugger works by detecting exceptions that are left unhandled. If there is a series of catch-and-rethrow segments up the stack, the debugger might only attach at the last segment, far away from the point at which the exception actually occurred.

public void SomeMethod()
{
    try
    {
        SomeOtherMethod(null);
    }
    finally
    {
        CleanUp(); //Possibly even better
    }
}

Bob and I hope to follow up with more articles covering exceptions. This is just a start.

comments edit

BoingBoing has this story on a guy who fixes computers in exchange for…er…special favors from female customers. First the plumber, and now the computer guy. This could start a whole new breed of dirty movies.

***Fade in cheesy 70s music with the bump-chi-ca-bumb-waaoow***
***Door rings***
Housewife: (opens door) Who is it?
IT Guy: I’m the IT guy. I’m here to fix your computer. (Follows up with cheesy line I refuse to print)
***Fornication ensues***

It’s a damn good thing I know how to fix computers around here. Now I just need to learn how to do the plumbing.

Yes, this is the first post that I feel I had to edit due to AdSense. The last thing I want to see is the ads that would be associated with this post had I used the words porn and sex.

DOH!

comments edit

Recently, two posts have given me an increased appreciation for what Flickr has accomplished with their clustering feature.

In his post Random Acts of Senseless Tagging, DonXML talks about the weakness of simple tagging schemes in capturing the semantics of a tag. Today, Jeff Atwood follows up on this thought by illustrating how a Google search for a single word likewise returns results that ignore other possible meanings of the word. He presents eBay as a better example, with its “quasi-hierarchical” category results in the sidebar.

Jeff even suggests a better approach using Markov chain probabilities to automatically suggest alternate semantics. For example, you search on “Jaguar” and in the search results you get a suggestion: “Did you mean: Jaguar cat, Jaguar automobile, OS X Jaguar…”

Well, this is exactly what Flickr does when you search for tags. Try searching for the tag “rock”. Flickr returns a set of clusters around the term. In the top cluster (at the time of this writing), there are pictures of music bands and guitarists. The other clusters involve stones, the ocean, and beaches, as one might expect. The last one, interestingly enough, associates “rock” with “hard” and “cafe”.
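To make the idea concrete, here is a naive sketch (purely illustrative; this is not Flickr’s or Google’s actual algorithm) that suggests disambiguating tags by counting which other tags co-occur most often with the search term:

using System;
using System.Collections.Generic;

public class TagSuggester
{
    // Given a set of photos (each represented here as just an array of tags),
    // count how often other tags co-occur with the query tag and return the
    // most frequent ones as "did you mean" style suggestions.
    public static List<string> Suggest(string query, IEnumerable<string[]> photos, int maxSuggestions)
    {
        Dictionary<string, int> coOccurrences = new Dictionary<string, int>();
        foreach (string[] tags in photos)
        {
            if (Array.IndexOf(tags, query) < 0)
                continue; //this photo isn't tagged with the query term

            foreach (string tag in tags)
            {
                if (tag == query)
                    continue;
                int count;
                coOccurrences.TryGetValue(tag, out count);
                coOccurrences[tag] = count + 1;
            }
        }

        //Rank co-occurring tags by frequency, highest first.
        List<KeyValuePair<string, int>> ranked =
            new List<KeyValuePair<string, int>>(coOccurrences);
        ranked.Sort(delegate(KeyValuePair<string, int> a, KeyValuePair<string, int> b)
        {
            return b.Value.CompareTo(a.Value);
        });

        List<string> suggestions = new List<string>();
        for (int i = 0; i < ranked.Count && i < maxSuggestions; i++)
            suggestions.Add(ranked[i].Key);
        return suggestions;
    }
}

Feed it photos tagged {jaguar, cat}, {jaguar, car}, and {jaguar, osx, apple}, ask for suggestions on “jaguar”, and back come cat, car, and osx (ties broken arbitrarily). That raw co-occurrence signal is the sort of thing a real clustering algorithm would start from.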

comments edit

So in exactly one week Akumi and I will be on a plane to Spain where it rains in the plains (please excuse me). We are flying into Madrid in the morning of the 20^th^ where we’ll stay the night. There, we will meet up with Akumi’s brother and his wife who will have flown in from Japan a couple days earlier.

Madrid boasts such great sights as Museo del Prado, Palacio Real and Plaza Mayor, but the highlight for me will be visiting the place I lived for three years as a kid. I look forward to seeing how much it has changed and whether they ever cleaned up all the graffiti we, the local hoodlums, left (hey, this was the time when Breakin’, Electric Boogaloo and Beat Street were out; what was a kid to do?).

I am almost certain the legend of my fútbol prowess in the Terraza Grande de los Apartamentos Torrejón will still be mentioned in hushed tones by the local kids who play there today. Assuming they can contain their laughter.

From Madrid, we are travelling to León for a day and a half, and then Bilbao for a day and a half. Then we are on a late train to the Catalan city of Barcelona for four days. No offense to my amigos Madrileños, but I am really looking forward to beautiful Barcelona. I have only hazy memories of its beauty from the first time I visited. This time I hope to soak it in more.

comments edit

I would like to take this moment to point out that, as far as I know (and the judges are still confirming this), I may have made history with my last blog post as the first geek blogger to mention Breakin’, Electric Boogaloo and Beat Street all in the same blog post with proper IMDB linkage.

For you young whippersnappers who missed out on this fine piece of cinema history, you are in luck. Amazon.com offers up its Breakin’ Collection which packages all three of these movies. Perhaps the only movie missing is Krush Groove (which I admit, I never saw).

This is one chapter in the history of hip-hop and breakdance. Learn your roots, foo!

comments edit

An IM conversation I had with a teammate on a project today…

haacked: Hey, let’s just switch the whole proj to VS.NET 2005 while I’m fixing things.
Teammate: i can’t tell if you’re kidding or not
haacked: kidding.
haacked: sort of.
Teammate: yeah… we should be brave… but maybe a little later
haacked: what? Are you chicken? BWOK! BWOK! BWOK! (emphasis added… editor)
Teammate: i ain’t chicken!
haacked: (Because that’s so effective a motivator)
Teammate: dude, i’ll recompile in .Net 2.0 and deploy to production right now!
haacked: I DARE You! Do it man! Do it!
Teammate: well, i would but i don’t have 2005 installed on this machine… but otherwise, i would totally do it
Teammate: man if i had 2005 installed, you’d be so moded

Notice that after all these years, the power of the BWOK BWOK still has its sway. It’s the real reason that lemmings jump off of cliffs. It isn’t because the other lemmings do it. It’s because one brash lemming challenged the others by calling them chicken and making the BWOK BWOK sound. Perhaps said lemming even flapped his furry little arms in a chicken imitation. That seems to really get people’s…er…feathers ruffled.

Oh, and if you are a current or potential client, I would never simply deploy a new platform mid-project like that. But this other guy might. ;)

personal, code comments edit

Wired News has a very interesting article on History’s worst software flaws.

It makes me think of my own worst software bug, from when I first started off as an ASP developer right out of college. I was working on a large music community website and was told to implement a “Forgot Password” feature. Sounds easy enough. I coded it, ran a quick test, and then deployed it (that alone should ruffle your feathers).

We didn’t quite have a formal deployment process at the time. A few days later, we found out that the code never sent out any emails and never logged who made the requests, leaving us no way to really know how many users were affected.

I believe we found out (my memory is hazy here) through a relative of our client’s president. After reviewing the code, I saw there was no way it could have sent out emails. There was a glaring bug in there, which makes me wonder how it ever passed my test.

In any case, I coded on eggshells for a while after that, fearing I might lose my first coding job.

comments edit

Ingo Rammer writes about the theoretical limit to reducing latency. Since 1994, we’ve reduced latency by 10 times, but increased bandwidth by millions. We can make the pipes fatter, but we can’t make the data any faster than the speed of light unless, as Ingo points out, “you prove Einstein wrong”.

According to Einstein’s special theory of relativity, the speed of light is an absolute barrier. Not only is the speed of light itself limited, but anything that communicates information is also bound by that limit.

The reason for this is that the speed of light is the same in all frames of reference. Suppose I’m in a train heading east from California to New York at half the speed of light and I pull out a flashlight (maybe it is dark in there). I face the flashlight toward the front of the train (say it is 100 meters away) and turn it on. From my frame of reference, the beam of light appears to head east at 186,282.4 miles per second, a speed denoted by the constant c (as in the c in E=mc^2^), and reaches the front of the train in a split second.

Now suppose somebody in Nebraska happens to be sitting outside watching the trains go by and sees me turn on the flashlight. From his perspective, the beam of light travels west to east at exactly the same speed c. Interestingly enough though, during that same split second, the beam of light travels farther before it reaches the front of the train, because the train itself is moving. How is it possible that light, travelling at the same speed, travels two different distances in the same amount of time?

It doesn’t. The elapsed time itself is different from our two perspectives. This is the paradoxical (but experimentally verified) phenomenon called time dilation.
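For the record, the size of the effect comes from the standard time dilation formula (textbook special relativity, nothing specific to this example):

$$\Delta t = \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}}$$

where Δt₀ is the time that elapses on the moving train and Δt is the time the trackside observer measures. At half the speed of light the factor works out to 1/√(1 − 0.25) ≈ 1.15, so the observer in Nebraska measures about 15% more elapsed time than I do.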

So what does this have to do with latency? Time dilation can be expressed as the ratio between the time perceived by an observer approaching the speed of light and the time perceived by an external observer. As the moving observer gets closer to the speed of light, that ratio approaches zero. Sending a ping faster than light would violate causality: in one frame of reference an observer would see the ping sent before it was received (as causality demands), but in another frame of reference an observer would see the ping received before it was sent.

So is it possible to prove Einstein wrong? Perhaps, but not likely. Time dilation has been experimentally verified. There is promise in quantum entanglement, but so far it seems impossible to transmit information that way. There is also a class of theoretical particles, tachyons (none have been observed), that might be faster than light, but they too would run into c as a barrier; for them, the speed of light is an impossibly slow limit they can never slow down to. Good luck trying to rope one of them in to send your ping packet. Those particles would most likely be travelling backwards in time, thus not violating causality. They just exhibit causality in a different direction.

If anything will prove Einstein wrong (and I am skeptical), it will be the discovery that causality itself is not sacrosanct. Perhaps time itself is an illusion.

comments edit

The title of my post is meant to indicate that this post is not technical in nature, but rather just a bit of small talk, chit-chat, idle conversation. You know, the sort of surface level conversation meant to break the ice and pass the time. How is the weather where you are?

The weather where my parents and brother live is rather cold right now. Today’s high was 20° F with a low of 8° F (that’s -6.7° C and -13° C respectively). Tomorrow they will enjoy a brisk 12° F high and 2° F low (which is -11° C and -16° C). Brrrr!

That’s why we’re looking forward to having them thaw out by visiting us in December. Right now we’re enjoying a nice high of 75° F and a low of 56° F (egads! Time to bust out a sweater!).

Meanwhile, we are excitedly looking forward to our trip to Spain coming up. We are flying into Madrid, travelling to León and then Bilbao. Afterwards we’re off to Barcelona for a few days before flying back.

Mi esposa y yo estamos practicando nuestro español para el viaje (my wife and I are practicing our Spanish for the trip). When we need to use the bathroom, we are fully prepared to ask, but hope they point rather than give us directions.

[Listening to: Namistai - Paul van Dyk - Out there and back (CD 2) (8:21)]

comments edit

Fog Creek commissioned a documentary about its summer intern project, named Aardvark, which ended up releasing Copilot, a product to help your mom with her computer woes.

I think it’s intriguing in a voyeuristic sense, in that I like the idea of taking a look at how other companies manage their software projects. But at $19.95 a pop, it’ll have to be picked up by Netflix for me to watch it, which doesn’t seem likely.

It makes me wonder if I should buy it as a show of solidarity to say there is a demand for documentaries and movies that provide a realistic view of software development.

Perhaps not surprisingly, software development doesn’t get much respect or recognition in Hollywood. The depictions that do exist are typically ludicrous (Swordfish, anyone?). Television shows don’t address the subject either.

There are plenty of shows about lawyers and doctors, but what about software developers? You can’t sit there and tell me that open heart surgery is more exciting than completing a refactoring on a method and getting green bars on your unit tests. Ok, maybe it is a tad more exciting, since a life is on the line as opposed to someone’s butt, but the interesting part of shows such as Grey’s Anatomy and Law and Order is the backstories, not the medicine or law being practiced.

Comics like Dilbert give a hint at the comedy potential for a show depicting software developers. So c’mon Hollywood, copy this idea (since that is the modus operandi). I saved you the trouble of having to think of it yourself.

comments edit

I logged into my AdSense account and noticed that Google has started a referral program. Very cool!

If you love to blog, why not make a little extra spending cash doing what you love, eh? It’s nice to have another revenue stream, no matter how small.

comments edit

A while ago I wrote that a client often does not know what he wants until he sees it. I was referring to software development, but this is common across many professional services, plumbing for example.

This week I had the opportunity to be on the client side of things when we noticed our toilet was leaking. I thought I knew what the problem was. It seemed to me that the toilet was leaking from its base. So I told the plumber that and he came in and tightened the base. No water seemed to be coming from the base afterwards so he left.

Later that evening we noticed that the carpet behind the toilet was still wet. So I looked carefully and noticed the flex tube from the stop valve to the tank was dripping water.

The next day the plumber came back, replaced the flex tube, and left. I took a look and noticed it was still leaking. I called him, he returned, and he finally figured out that the ring where the flex tube connects to the tank was the culprit, so he replaced the flush valve and other inner components (I’m no expert in toiletology). It took him three trips to fix the problem with the throne.

1st Lesson: Get to the real root of the problem.
All in all, the toilet was fixed, but the quality of service was poor. The lesson here is that rather than simply assuming the client (in this case me) knows what he wants, the professional should take the time to perform due diligence. He is the professional, after all. What do I know about toilets, except that they’re great for pondering life’s mysteries?

As software developers, we have to take the time to gather proper requirements and ask the right questions. Our clients certainly know their own domains very well, but they don’t necessarily know a lot about software and how exactly software can help them.

2nd Lesson: Double Check Your Work.
Once you do gather requirements and do the job, make sure you succeed in delivering what the client wants. It helps if you define clear acceptance criteria up front. For example, my acceptance criteria were very clear: I want my toilet not to leak. Ideally the plumber, knowing all he knows about plumbing, should have been thorough in making sure that requirement was met.

comments edit

So when should you choose to build a smart client rather than a web application (or in addition to one)? The typical answer I’ve seen is when extreme usability is required. As AJAX techniques mature, I think this will become less of a consideration.

As I thought about it more, it hit me. The same thing Jeff Atwood said about stored procedures, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations,” applies to applications.

Smart Clients should be considered Application assembly language: for use in only the most performance critical situations.

This is why you won’t see the next version of Halo running in a browser (though you might see the first version someday). This is also why you won’t run Photoshop in a browser. Performance is critical in such applications.

There are other important considerations as well, such as security; I wouldn’t run an RSA key ring in a browser. The same goes for apps that constantly run and perform a service on your machine, such as system tray applications, though even that concept seems to be changing.

Anyways, I need to chew on this some more. At this point in time, usability is still a concern. That is why I run a client RSS Aggregator and use w.Bloggar to post to my blog.

Although the deployment story for web-based applications is great, the development environments for AJAX applications pale in comparison to those for writing a rich UI. There’s just something about writing object-oriented compiled code that makes me cringe at writing everything in JavaScript.

comments edit

Rob Howard asks the question, Is “Smart Client” a “Dumb Idea”? Obviously I don’t necessarily think so, as I pointed out in my post Overlooked Problem With Web Based Applications.

However, as I thought about it more, I realized that part of the excitement over web applications is that they are starting to really deliver on the failed promises that Java made…

Write once, run anywhere.

Although closer to the truth is…

Write once, debug CSS and Javascript quirks everywhere.

The missing piece in my mind is that there is no built-in support for managing local storage of your web-based data. As websites get richer and richer, perhaps smart clients aren’t the only way to solve this problem. All that is necessary is to develop an HTML specification for saving user data to the desktop.

Well, we have such a thing now: they’re called “cookies”. But cookies are limited in size and not very useful for document management. The idea I have in mind is to create a specification similar to cookies, but one that allows full structured documents to be stored on the client by the web server. JavaScript running in the browser would have permission to modify these documents (subject to the same restrictions as modifying a cookie).

In this scenario, if you are offline and need to read a document or email, you simply navigate to the URL of the web application. The browser would then load the site from its internal cache. Or better yet, rather than loading the site (which might not be very useful), it loads a list of document “viewers” that the site registers with the browser. You choose a viewer, which is nothing more than a bit of JavaScript capable of listing and viewing locally stored documents.

When you reconnect, your browser sends the document to the site which merges your changes, giving you the option to resolve conflicts. It sounds a lot like smart clients, doesn’t it? The obvious difference is that your application would theoretically run on nearly any machine with a modern browser that supports these new standards and would not require the .NET platform nor a Java virtual machine.

In any case, this is my hand waving half-baked view of where we’re headed with web applications.

code, sql comments edit

Perhaps there is a better term I could have used when I referred to “dynamic SQL” in my last post. In my defense, I did mention using prepared statements.

The key point to keep in mind while reading the last post is that dynamic SQL does not necessarily imply inline SQL. By inline SQL, I mean concatenated SQL statements flung all over the code like a first-year classic ASP developer would write.

Like any good security minded developer, I detest inline SQL (as I define it here). A much better and safer approach is to use prepared parameterized SQL as Jeff Atwood outlines in this post.

So when I refer to dynamic SQL, I am referring to dynamically generated, prepared, parameterized SQL (that’s a mouthful). These are prepared parameterized SQL statements that are generated by a machine and not by hand.
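To make the distinction concrete, here is a small sketch of the parameterized style using plain ADO.NET (the Posts table, Author column, and surrounding class are made up for illustration):

using System;
using System.Data.SqlClient;

public class PostRepository
{
    // Inline SQL would concatenate the value into the statement text:
    //   "SELECT Title FROM Posts WHERE Author = '" + author + "'"
    // which is fragile and wide open to SQL injection. The parameterized
    // version below keeps the statement text fixed, so the server can cache
    // and reuse the execution plan while the value travels separately.
    public static void PrintTitlesBy(string author, string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT Title FROM Posts WHERE Author = @Author", connection))
        {
            command.Parameters.AddWithValue("@Author", author);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}

A dynamic SQL engine simply produces statements of this second form for you, based on your objects and mappings, rather than you typing them by hand.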

As Jeff points out in this post, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.”

Taking that abstraction one step higher, you could also consider SQL itself to be a form of database intermediate language. A dynamic SQL engine generates SQL much like a compiler takes your C# code and generates IL. When that query is executed as a prepared parameterized query, it is “jitted” by the database server into a high-performance database operation.

It seems to me a decent analogy for a Dynamic SQL engine such as those built into LLBLGen Pro, NHibernate, etc…

comments edit

Craig Andera posts a technique for handling exceptions thrown by a web service. He takes the approach of adding a try/catch block to each method.

A while ago I tackled the same problem, but I was unhappy with the idea of wrapping every inner method call with a try/catch block. I figured there had to be a better way. Since SOAP is simply XML being sent over the wire, there had to be a way for me to hook into the pipeline rather than modify my code.

What I came up with is my Exception Injection Technique Using a Custom Soap Extension. This allows you to simply add an additional attribute to each web method as in the sample below and have full control over how exceptions are handled and sent over the wire.

[WebMethod, SerializedExceptionExtension]
public string ThrowNormalException()
{
    throw new ArgumentNullException("MyParameter",
        "Exception thrown for testing purposes");
}

Read about the technique here and feel free to adapt it to your purposes.

comments edit

Now that ASP.NET 2.0 is released, a lot of developers will start to really dig into the provider model design pattern and specification and its various implementations. The provider model is really a blending of several design patterns, but most closely resembles the abstract factory.

Where the provider model really busts out the flashlight and shines is when an application (or subset of an application) has a fairly fixed API, but requires flexibility in the implementation. For example, the Membership Provider has a fixed API for dealing with users and roles, but depending on the configured provider, could be manipulating users in a database, in Active Directory or a 4’x6’ piece of plywood. The user of the provider doesn’t need to know.
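To illustrate the shape of the pattern, here is a stripped-down sketch (an invented GreetingProvider, not any real ASP.NET provider) showing the fixed abstract API and one swappable implementation:

using System.Configuration.Provider;

// The fixed API the application codes against. Deriving from ProviderBase
// picks up the standard Name/Initialize plumbing of the ASP.NET 2.0 model.
public abstract class GreetingProvider : ProviderBase
{
    public abstract string GetGreeting(string userName);
}

// One concrete implementation; others could hit Active Directory,
// an Oracle database, or that 4'x6' piece of plywood.
public class SqlGreetingProvider : GreetingProvider
{
    public override string GetGreeting(string userName)
    {
        // Imagine a parameterized SQL query here.
        return "Hello from SQL Server, " + userName;
    }
}

A configuration section then names which concrete type to load at startup, and the application only ever talks to the abstract GreetingProvider.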

Provider Misuse
However, one common area where I’ve seen providers misused is in an attempt to abstract the underlying database implementation away from the application. In many open source products such as DotNetNuke, an underlying goal is to support multiple database providers. However, the provider model in these applications tends to be a free-for-all data access API. For example, in the .TEXT (and currently Subtext) codebase, there is one provider that is responsible for nearly all data access. It has a huge number of methods which aren’t factored into well-organized, coherent APIs.

The other inherent flaw in many of these approaches is that they violate a key principle of good object-oriented design…

Good design seeks to insulate code from the impact of changes to other code.

Take Subtext as an example. Suppose we want to add a column to a table. Not only do we have to update the base provider to account for the change, we also have to update every single concrete provider that implements it (assuming we had some). Effectively, the provider model in this case amplifies the effect of a schema change. In other words, it makes your proverbial butt look fat.

This is why you see an appalling lack of concrete providers for applications such as DotNetNuke, .TEXT, Subtext, etc. Despite the fact that they all implement the provider model, very few people take (or have) the time to implement a concrete provider.

A better way
For most professional web projects, this is not really an issue since your client probably has little need to switch the database upon which the application is built. However, if you are building a system (such as an open source blogging engine) in which the user may want to plug in nearly any database, this is a much bigger issue.

After a bit of research and using an ORM tool on a project, I’ve stepped away from being a religious stored procedure fanatic and am much more open to looking at object/relational mapping tools and dynamic query engines. ORM tools such as LLBLGen Pro and NHibernate make use of dynamically generated prepared SQL statements. Now before you dismiss this approach, bear with me. Because the statements are prepared, the performance difference between these queries and a stored procedure is marginal. In fact, a dynamic SQL statement can often even outperform a stored proc because it is targeted to the specific case, whereas stored procs tend to support the general case. One example is a dynamic query that selects only the needed columns from a table rather than every column.

Better Insulation
The key design benefit of such tools is that they insulate the main application from the impact of schema changes. Add a column to a table and perhaps you only need to change one class and a mapping file. The other key benefit is that these tools already support multiple databases. Every time the ORM vendor spends time implementing support for a new database system, your application supports that database for free! That’s a lot of upside.

Think about that for a moment. NHibernate currently supports DB2, Firebird, Access, MySQL, PostgreSQL, SQL Server 2000, and SQLite. Think about how much time and effort it would take to implement a provider for each of these databases.

The very fact that you don’t see a plethora of database providers for DNN, .TEXT, etc. is convincing evidence that the provider model falls short of being a great solution for abstracting the database layer from application code. It is great for small, well-defined APIs, but not well suited to a generalized data API where there tends to be a lot of code churn.

To this end, the Subtext developers are investigating the potential for using NHibernate in a future version of Subtext.

Referenced Links and other resources