
The title of my post is meant to indicate that this post is not technical in nature, but rather just a bit of small talk, chit-chat, idle conversation. You know, the sort of surface-level conversation meant to break the ice and pass the time. How is the weather where you are?

The weather where my parents and brother live is rather cold right now. Today’s high was 20° F with a low of 8° F (that’s -6.7° C and -13.3° C respectively). Tomorrow they will enjoy a brisk 12° F high and 2° F low (which is -11.1° C and -16.7° C). Brrrr!

That’s why we’re looking forward to having them thaw out by visiting us in December. Right now we’re enjoying a nice high of 75° F and a low of 56° F (egads! Time to bust out a sweater!).

Meanwhile, we are excitedly looking forward to our upcoming trip to Spain. We are flying into Madrid, travelling to León and then Bilbao. Afterwards we’re off to Barcelona for a few days before flying back.

My wife and I are practicing our Spanish for the trip. When we need to use the bathroom, we are fully prepared to ask, but hope they point rather than give us directions.

[Listening to: Namistai - Paul van Dyk - Out there and back (CD 2) (8:21)]


Fog Creek commissioned a documentary about their summer intern project, named Aardvark, which ended up releasing Copilot, a product to help your mom with her computer woes.

I think it’s intriguing in a voyeuristic sense: I like the idea of taking a look inside how other companies manage their software projects. But at $19.95 a pop, it’ll have to be picked up by Netflix for me to watch it, which doesn’t seem likely.

It makes me wonder if I should buy it as a show of solidarity, to signal that there is a demand for documentaries and movies that provide a realistic view of software development.

Perhaps not surprisingly, software development doesn’t get much respect or recognition in Hollywood. The depictions that do exist are typically ludicrous (Swordfish, anyone?). Television shows don’t address the subject either.

There are plenty of shows about lawyers and doctors, but what about the software developers? You can’t sit there and tell me that open heart surgery is more exciting than completing a refactoring on a method and getting green bars on your unit tests. Ok, maybe it is a tad more exciting, as a life is on the line as opposed to someone’s butt, but the interesting part of shows such as Grey’s Anatomy and Law and Order is the backstories, not the medicine or law being practiced.

Comics like Dilbert give a hint at the comedic potential of a show depicting software developers. So c’mon Hollywood, copy this idea (since that is the modus operandi). I’ve saved you the trouble of having to think of it yourself.


I logged into my AdSense account and noticed that Google has started a referral program. Very cool!

If you love to blog, why not make a little extra spending cash doing what you love, eh? It’s nice to have another revenue stream, no matter how small.


A while ago I wrote that a client often does not know what he wants until he sees it. I was referring to software development, but this is common in many professional services, plumbing for example.

This week I had the opportunity to be on the client side of things when we noticed our toilet was leaking. I thought I knew what the problem was. It seemed to me that the toilet was leaking from its base. So I told the plumber that and he came in and tightened the base. No water seemed to be coming from the base afterwards so he left.

Later that evening we noticed that the carpet behind the toilet was still wet. So I looked carefully and noticed the flex tube from the stop valve to the tank was dripping water.

The next day the plumber came back, replaced the flex tube, and left. I took a look and noticed it was still leaking. I called him, and when he returned he finally figured out that the ring where the flex tube connects to the tank was the culprit, and he replaced the flush valve and other inner components (I’m no expert in toiletology). It took him three trips to fix the problem with the throne.

1st Lesson: Get to the real root of the problem.

All in all, the toilet was fixed, but the quality of service was poor. The lesson here is that rather than simply assuming the client (in this case me) knows what he wants, take the time to perform due diligence. He is the professional. What do I know about toilets except that they’re great for pondering life’s mysteries?

As software developers, we have to take the time to gather proper requirements and ask the right questions. Our clients certainly know their own domains very well, but they don’t necessarily know a lot about software and how exactly software can help them.

2nd Lesson: Double-Check Your Work.

Once you do gather requirements and do the job, make sure you succeed in delivering what the client wants. It helps if you define clear acceptance criteria up front. For example, my acceptance criteria were very clear: I want my toilet not to leak. Ideally the plumber, knowing all he knows about plumbing, should have been thorough in making sure that requirement was met.


So when should you choose to build a smart client rather than a web application (or in addition to one)? The typical answer I’ve seen is when extreme usability is required. As AJAX techniques mature, I think this will become less of a consideration.

As I thought about it more, it hit me. The same thing Jeff Atwood said about stored procedures, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations,” applies to applications.

Smart Clients should be considered Application assembly language: for use in only the most performance critical situations.

This is why you won’t see the next version of Halo running in a browser (though you might see the first version someday). This is also why you won’t run Photoshop in a browser. Performance is critical in such applications.

There are other important considerations as well, such as security. I wouldn’t run an RSA key ring in a browser. There are also apps that constantly run and perform a service on your machine, such as system tray icons, though even that concept seems to be changing.

Anyways, I need to chew on this some more. At this point, usability is still a concern. That is why I run a client-side RSS aggregator and use w.Bloggar to post to my blog.

Although the deployment story for web-based applications is great, the development environments for AJAX applications pale in comparison to those for writing a rich UI. There’s just something about writing object-oriented compiled code that makes me cringe at the thought of writing everything in JavaScript.


Rob Howard asks the question: Is “Smart Client” a “Dumb Idea”? Obviously I don’t necessarily think so, as I pointed out in my post Overlooked Problem With Web Based Applications.

However, as I thought about it more, I realized that part of the excitement over web applications is that they are starting to really deliver on the failed promises that Java made…

Write once, run anywhere.

Although closer to the truth is…

Write once, debug CSS and Javascript quirks everywhere.

The missing piece in my mind is that there is no built-in support for managing local storage of your web-based data. As websites get richer and richer, perhaps smart clients aren’t the only way to solve this problem. All that is necessary is to develop an HTML specification for saving user data to the desktop.

Well, we have such a thing now; they’re called “cookies”. But cookies are limited in size and not very useful for document management. The idea I have in mind is a specification similar to cookies, but one that allows full structured documents to be stored on the client by the web server. JavaScript running in the browser would have permission to modify these documents (subject to the same restrictions as modifying a cookie).

In this scenario, if you are offline and need to read a document or email, you simply navigate to the URL of the web application. The browser would then load the site from its internal cache. Or better yet, rather than loading the site (which might not be very useful offline), it loads a list of document “viewers” that the site registers with the browser. You choose a viewer, which is nothing more than a bit of JavaScript capable of listing and viewing locally stored documents.

When you reconnect, your browser sends the document to the site, which merges your changes, giving you the option to resolve conflicts. It sounds a lot like smart clients, doesn’t it? The obvious difference is that your application would theoretically run on nearly any machine with a modern browser that supports these new standards, and would require neither the .NET platform nor a Java virtual machine.

In any case, this is my hand-waving, half-baked view of where we’re headed with web applications.


Perhaps there is a better term I could have used when I referred to “dynamic SQL” in my last post. In my defense, I did mention using prepared statements.

The key point to keep in mind while reading the last post is that dynamic SQL does not necessarily imply inline SQL. By inline SQL, I mean concatenated SQL statements flung all over the code like the work of a first-year classic ASP developer.

Like any good security-minded developer, I detest inline SQL (as I define it here). A much better and safer approach is to use prepared parameterized SQL, as Jeff Atwood outlines in this post.

So when I refer to Dynamic SQL I am referring to dynamically generated prepared parameterized SQL (that’s a mouthful). These are prepared parameterized SQL statements that are generated by machine and not by hand.
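To make the distinction concrete, here is a minimal sketch in plain ADO.NET (the Entries table, its columns, and the connection string are invented for illustration) contrasting inline SQL with a dynamically generated, prepared, parameterized statement:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class DynamicSqlSketch
    {
        // Prints entry titles for an author.
        static void PrintTitles(string connectionString, string author)
        {
            // Inline SQL (the bad kind) would concatenate the value right in:
            //   "SELECT Title FROM Entries WHERE Author = '" + author + "'"

            // Dynamically generated yet parameterized SQL: the statement text
            // is assembled by code (note: only the columns we need), but the
            // value travels as a parameter.
            string[] columns = { "Title", "DateAdded" };
            string sql = "SELECT " + String.Join(", ", columns)
                + " FROM Entries WHERE Author = @author";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                command.Parameters.Add("@author", SqlDbType.NVarChar, 64).Value = author;
                connection.Open();
                command.Prepare(); // the server caches and reuses the plan
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }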

As Jeff points out in this post, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.”

Taking that abstraction one step higher, you could also consider SQL itself to be a form of database intermediate language. A dynamic SQL engine generates SQL much like a compiler takes your C# code and generates IL. When that query is executed as a prepared parameterized query, it is “jitted” by the database server into a high-performance database operation.

It seems to me a decent analogy for a dynamic SQL engine such as those built into LLBLGen Pro, NHibernate, etc.


Craig Andera posts a technique for handling exceptions thrown by a web service. He takes the approach of adding a try/catch block to each method.

A while ago I tackled the same problem, but I was unhappy with the idea of wrapping every inner method call with a try/catch block. I figured there had to be a better way. Since SOAP is simply XML sent over the wire, there had to be a way for me to hook into the pipeline rather than modify my code.

What I came up with is my Exception Injection Technique Using a Custom Soap Extension. It allows you to simply add an additional attribute to each web method, as in the sample below, and have full control over how exceptions are handled and sent over the wire.

    [WebMethod, SerializedExceptionExtension]
    public string ThrowNormalException()
    {
        throw new ArgumentNullException("MyParameter",
            "Exception thrown for testing purposes");
    }

Read about the technique here and feel free to adapt it to your purposes.
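To give a feel for the moving parts, here is a rough skeleton of the extension side of such a technique. This is not the code from the article; aside from the attribute name used in the sample above, the type and member details are illustrative:

    using System;
    using System.Web.Services.Protocols;

    // The attribute's only real job is to tell ASP.NET which SoapExtension
    // to run for the decorated web method.
    [AttributeUsage(AttributeTargets.Method)]
    public class SerializedExceptionExtensionAttribute : SoapExtensionAttribute
    {
        private int priority;

        public override Type ExtensionType
        {
            get { return typeof(ExceptionSerializingExtension); }
        }

        public override int Priority
        {
            get { return priority; }
            set { priority = value; }
        }
    }

    public class ExceptionSerializingExtension : SoapExtension
    {
        public override object GetInitializer(LogicalMethodInfo methodInfo,
            SoapExtensionAttribute attribute)
        {
            return null; // no per-method state needed for this sketch
        }

        public override object GetInitializer(Type serviceType)
        {
            return null;
        }

        public override void Initialize(object initializer)
        {
        }

        public override void ProcessMessage(SoapMessage message)
        {
            // After the server serializes the response, inspect any exception
            // and rewrite the SOAP fault to carry a serialized copy of it
            // (the actual serialization details are in the referenced article).
            if (message.Stage == SoapMessageStage.AfterSerialize
                && message.Exception != null)
            {
                // ... custom fault serialization would go here ...
            }
        }
    }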


Now that ASP.NET 2.0 has been released, a lot of developers will start to really dig into the provider model design pattern and specification and its various implementations. The provider model is really a blending of several design patterns, but it most closely resembles the abstract factory.

Where the provider model really busts out the flashlight and shines is when an application (or a subset of an application) has a fairly fixed API but requires flexibility in the implementation. For example, the Membership Provider has a fixed API for dealing with users and roles, but depending on the configured provider, it could be manipulating users in a database, in Active Directory, or on a 4’x6’ piece of plywood. The user of the provider doesn’t need to know.
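As a minimal sketch of the pattern (the names here are invented for illustration, not ASP.NET’s actual providers), the fixed API lives in an abstract class, and the concrete implementation is chosen at runtime:

    using System;

    // The fixed API: calling code programs against this contract and never
    // needs to know which implementation it received.
    public abstract class GreetingProvider
    {
        public abstract string GetGreeting(string userName);
    }

    // One interchangeable implementation; another might talk to Active
    // Directory (or consult that piece of plywood).
    public class SqlGreetingProvider : GreetingProvider
    {
        public override string GetGreeting(string userName)
        {
            // Imagine a database lookup here.
            return "Hello from the database, " + userName;
        }
    }

    public class Greetings
    {
        // In the real provider model this type name comes from web.config;
        // it is hard-coded here only to keep the sketch self-contained.
        private static readonly GreetingProvider provider =
            (GreetingProvider)Activator.CreateInstance(
                Type.GetType("SqlGreetingProvider"));

        public static string Greet(string userName)
        {
            return provider.GetGreeting(userName);
        }
    }

Swapping in a different implementation is then purely a configuration change; no calling code is touched.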

Provider Misuse

However, one common area where I’ve seen providers misused is in attempts to abstract the underlying database implementation away from the application. For example, in many open source products such as DotNetNuke, an underlying goal is to support multiple database providers. However, the provider model in these applications tends to become a free-for-all data access API. In the .TEXT (and currently Subtext) codebase, for example, there is one provider that is responsible for nearly all data access. It has a huge number of methods which aren’t factored into well-organized, coherent APIs.

The other inherent flaw in many of these approaches is that they violate a key principle of good object-oriented design…

Good design seeks to insulate code from the impact of changes to other code.

Take Subtext as an example. Suppose we want to add a column to a table. Not only do we have to update the base provider to account for the change, we also have to update every single concrete provider that implements it (assuming we had some). Effectively, the provider model in this case amplifies the effect of a schema change. The result is that it makes your proverbial butt look fat.

This is why you see an appalling lack of concrete providers for applications such as DotNetNuke, .TEXT, Subtext, etc. Despite the fact that they all implement the provider model, very few people take (or have) the time to implement a concrete provider.

A better way

For most professional web projects, this is not really an issue, since your client probably has little need to switch the database upon which the application is built. However, if you are building a system (such as an open source blogging engine) in which the user may want to plug in nearly any database, this is a much bigger issue.

After a bit of research and using an ORM tool on a project, I’ve stepped away from being a religious stored procedure fanatic and am much more open to looking at object/relational mapping tools and dynamic query engines. ORM tools such as LLBLGen Pro and NHibernate make use of dynamically generated, prepared SQL statements. Now before you dismiss this approach, bear with me. Because the statements are prepared, the performance difference between these queries and a stored procedure is marginal. In fact, a dynamic SQL statement can often even outperform a stored proc because it is targeted to the specific case, whereas stored procs tend to support the general case. One example is a dynamic query that selects only the needed columns from a table rather than every column.

Better Insulation

The key design benefit of such tools is that they insulate the main application from the impact of schema changes. Add a column to a table and perhaps you only need to change one class and a mapping file. The other key benefit is that these tools already support multiple databases. Every time the ORM vendor spends time implementing support for a new database system, your application supports that database for free! That’s a lot of upside.
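To give a feel for that insulation, here is a hand-wavy, NHibernate-flavored sketch (the Entry class, mapping file name, and query are invented, not Subtext’s actual schema). The application codes against a plain class, and the SQL for whichever database is configured gets generated from an external mapping file:

    using System;
    using NHibernate;
    using NHibernate.Cfg;

    // A plain entity class. Its properties map to table columns in an
    // external mapping file (say, Entry.hbm.xml). Adding a column means
    // touching this class and that file, not a base provider plus every
    // concrete provider for every supported database.
    public class Entry
    {
        private int id;
        private string title;

        public virtual int Id
        {
            get { return id; }
            set { id = value; }
        }

        public virtual string Title
        {
            get { return title; }
            set { title = value; }
        }
    }

    public class EntryReport
    {
        public static void PrintTitles()
        {
            Configuration cfg = new Configuration();
            cfg.AddAssembly(typeof(Entry).Assembly); // finds the .hbm.xml mappings

            ISessionFactory factory = cfg.BuildSessionFactory();
            using (ISession session = factory.OpenSession())
            {
                // NHibernate turns this object-level query into prepared,
                // parameterized SQL for whichever database is configured.
                foreach (Entry entry in session.CreateQuery("from Entry").List())
                {
                    Console.WriteLine(entry.Title);
                }
            }
            factory.Close();
        }
    }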

Think about that for a moment. NHibernate currently supports DB2, Firebird, Access, MySQL, PostgreSQL, SQL Server 2000, and SQLite. Think about how much time and effort it would take to implement a provider for each of these databases.

The very fact that you don’t see a plethora of database providers for DNN, .TEXT, etc. is convincing evidence that the provider model falls short of being a great solution for abstracting the database layer from application code. It is great for small, well-defined APIs, but not well suited to a generalized data API where there tends to be a lot of code churn.

To this end, the Subtext developers are investigating the potential for using NHibernate in a future version of Subtext.



It seems Sony has crossed the line with some DRM it used to protect its music CDs.

The article is very technical and goes deep into the internals of how Windows works, but for you non-techies, the bottom line is that Sony wrote what amounts to spyware to protect its music. They used the very same techniques that virus and spyware writers use to cloak their programs. To add insult to injury, the malware is poorly written, with no clear way to uninstall it. It cloaks itself and also creates an exploit: any program whose name starts with “$sys$” is cloaked from the operating system. This is most likely in violation of a variety of laws against this sort of thing (like the SPY Act).

The shortsighted outcome of this approach is that Sony is effectively planting malware on those who choose to LEGALLY purchase its music. In other words, Sony is infecting the systems of the good guys and creating a disincentive to purchase music legally.

As many people pointed out in the comments on the article, they would feel safer downloading music from an illegal source than installing the proprietary software required to listen to DRM-protected music. Perception is everything, and if the public perceives that Sony requires installing spyware to play its copy-protected music, they will look for alternative means to get the music.

I wouldn’t be surprised if a class action suit resulted from this.

via Jon Galloway


I just had to post this in its full glory. Great lead-up, Matt!

You just came to Texas Tech University as a freshman…

You are SO PROUD that you were chosen to be the “Bell Ringer”. Your job is to ring the school’s bell during the big game to help pump up the crowd…

Your whole family, all your friends, and 15 million ESPN viewers see you on Saturday’s telecast ringing the team’s bell. But, due to the tragically unfortunate placement of the bell, the camera, and your body, your whole family, all of your friends, and 15 million ESPN viewers see this instead…

[Via public MattBerther : ISerializable]


While it may be exciting to see Microsoft jumping aboard the web-based application bandwagon, something I am experiencing right now reminds me why I think there is still a strong place for rich “smart” clients.

There is an important piece of information in an email someone sent me, and when I try to log in to Gmail I get…

Gmail is temporarily unavailable. Cross your fingers and try again in a few minutes. We’re sorry for the inconvenience.

At least with a rich client like Outlook, I would have had that email on my local machine. I also use Yahoo Notepad for important information, and that site has been down a few times when I needed a critical piece of information. It makes me realize that I shouldn’t trust these services to host my important data. I want it on my own machine where I can get to it.


I have a question for those of you who host a blog with a hosting provider such as WebHost4Life. Do you make sure to remove write access to the bin directory for the ASPNET user? If so, would you be willing to enable write access for an installation process?

The reason I ask is that I’ve created a proof of concept for a potential, nearly no-touch tool for upgrading .TEXT to Subtext. This particular tool is geared toward those who have .TEXT hosted with a third-party host, although even those who host their own server may want to take advantage of it.

The way it works is that you simply drop a single upgrader assembly into the bin directory of an existing .TEXT installation. You also drop an UpgradeToSubtext.aspx file in your admin directory (this provides a bit of safety, so that only an admin can initiate the upgrade process).

Afterwards, you simply point your browser to Admin/UpgradeToSubtext.aspx, which initiates the upgrade process.

The upgrade tool finds the connection string in the existing web.config and displays a message with the actions it is about to take. After you hit the upgrade button, it backs up important .TEXT files and unzips an embedded zip file which contains all the binaries and content files for Subtext. It also runs an embedded SQL script to create all the new Subtext tables and stored procedures and copies your .TEXT data over. It doesn’t modify any existing tables, so it is possible to roll back in case something goes wrong. Finally, it overwrites the web.config file with a Subtext web.config file, making sure to set the connection string properly.
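Purely to illustrate the shape of such a page (this is not the actual tool’s code; every member name below is hypothetical), the code-behind might sequence the steps like this:

    using System;
    using System.Configuration;
    using System.Web.UI;

    // Hypothetical sketch of the drop-in upgrade page. Living under the
    // admin directory means the blog's own security keeps non-admins out.
    public class UpgradeToSubtextPage : Page
    {
        protected void OnUpgradeClick(object sender, EventArgs e)
        {
            // .TEXT keeps its connection string in web.config
            // (the key name here is illustrative).
            string connectionString =
                ConfigurationSettings.AppSettings["ConnectionString"];

            BackupDotTextFiles();                    // safety net first
            ExtractEmbeddedSubtextFiles();           // binaries and content files
            RunEmbeddedSqlScript(connectionString);  // new tables and procs only
            CopyDotTextData(connectionString);       // existing tables untouched
            OverwriteWebConfig(connectionString);    // Subtext config, same database
        }

        // Stand-ins for the real work described in the post.
        private void BackupDotTextFiles() { }
        private void ExtractEmbeddedSubtextFiles() { }
        private void RunEmbeddedSqlScript(string connectionString) { }
        private void CopyDotTextData(string connectionString) { }
        private void OverwriteWebConfig(string connectionString) { }
    }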

It’s a very nice and automated procedure, but it has a key flaw. It requires write access to your bin directory.

An alternate approach that avoids writing to the bin directory is to have the user manually deploy all the Subtext binaries to the bin directory. The upgrade process would run the same, but it would only need write access to the web directory to deploy the various content files. Giving the ASPNET user write access to the web directory is not an unreasonable request, since the gallery feature of .TEXT did create folders and thus required write access.

If you are considering upgrading from .TEXT to Subtext, or if you just have an opinion, please chime in.


The subject of this post is the title of an interesting article on page 58 of this month’s Wired magazine. The author, Patrick Di Justo, shows that compared to 1950 prices, we are paying more of our annual income for houses, but we get a lot more for our dollar.

For example, the average number of square feet per person in 1950 was 289.1, compared to 896.2 today. Price per square foot, when adjusted for inflation, is actually lower today than in 1950. One of the more striking numbers is how many square feet a year’s income buys today compared to then: in 1950 one could buy 429.3 square feet, while today one can buy 930.1.

What I would love to see is this analysis applied to Los Angeles home prices from 1950 to present.


Software pundit Joel Spolsky finally added titles to his RSS feed (among other site improvements), and it’s about time. A title like “November 5, 2005” tells me nothing about what he’s saying. Love him or hate him (why choose one or the other? Choose both!), Joel is definitely worth reading.


Every day I look at my current code and go, “Damn, that’s some sweet shit!” But I also look at code I wrote a month ago and say, “What a freakin’ idiot I was to write that!” So in a month, the code I’m writing today will have been written by an idiot.

It looks like I am not the only one who feels that way.

It seems that no matter the date, if I look back at the code I wrote six months prior, I always wonder “what kind of crack was I smoking when I wrote that?” For the most part it’s not likely to end up on The Daily WTF, but still, does this cycle ever end? Or at least get longer?

I suppose the optimistic way to look at it is that I am still learning pretty steadily and not becoming stagnant. I’m also able to resist the temptation to go back and fiddle with what isn’t broken. I do kinda feel bad for anyone that has to maintain any of my older stuff (actually not really, suckers).

[Via Pragmatic Entropy]