comments edit

So when should you choose to build a smart client rather than a web application (or in addition to one)? The typical answer I’ve seen is when extreme usability is required. As AJAX techniques mature, I think this will become less of a consideration.

As I thought about it more, it hit me. The same thing Jeff Atwood said about stored procedures, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations,” applies to applications.

Smart Clients should be considered Application assembly language: for use in only the most performance critical situations.

This is why you won’t see the next version of Halo running in a browser (though you might see the first version someday). This is also why you won’t run Photoshop in a browser. Performance is critical in such applications.

There are other important considerations as well, such as security. I wouldn’t run an RSA key ring in a browser. The same goes for apps that run constantly and perform a service on your machine, such as system tray applications, though even that concept seems to be changing.

Anyways, I need to chew on this some more. At this point in time, usability is still a concern. That is why I run a client RSS Aggregator and use w.Bloggar to post to my blog.

Although the deployment story for web-based applications is great, the development environments for AJAX applications pale in comparison to those for writing a rich UI. There’s just something about writing object-oriented compiled code that makes me cringe at the thought of writing everything in JavaScript.

comments edit

Rob Howard asks the question, is the “Smart Client” a “Dumb Idea”? Obviously I don’t necessarily think so, as I pointed out in my post Overlooked Problem With Web Based Applications.

However, as I thought about it more, I realized that part of the excitement over web applications is that they are starting to really deliver on the failed promises that Java made…

Write once, run anywhere.

Although closer to the truth is…

Write once, debug CSS and JavaScript quirks everywhere.

The missing piece in my mind is that there is no built-in support for managing local storage of your web-based data. As websites get richer and richer, perhaps smart clients aren’t the only way to solve this problem. All that is necessary is to develop an HTML specification for saving user data to the desktop.

Well, we have such a thing now; they’re called “cookies.” But cookies are limited in size and not very useful for document management. The idea I have in mind is a specification similar to cookies, but one that allows full structured documents to be stored on the client by the web server. JavaScript running in the browser would have permission to modify these documents (subject to the same restrictions as modifying a cookie).

In this scenario, if you are offline and need to read a document or email, you simply navigate to the URL of the web application. The browser would then load the site from its internal cache. Or better yet, rather than loading the site (which might not be very useful), it loads a list of document “viewers” that the site registers with the browser. You choose a viewer, which is nothing more than a bit of JavaScript capable of listing and viewing locally stored documents.

When you reconnect, your browser sends the document to the site which merges your changes, giving you the option to resolve conflicts. It sounds a lot like smart clients, doesn’t it? The obvious difference is that your application would theoretically run on nearly any machine with a modern browser that supports these new standards and would not require the .NET platform nor a Java virtual machine.

In any case, this is my hand waving half-baked view of where we’re headed with web applications.

code, sql comments edit

Perhaps there is a better term I could have used when I referred to “dynamic SQL” in my last post. In my defense, I did mention using prepared statements.

The key point to keep in mind while reading the last post is that dynamic SQL does not necessarily imply inline SQL. By inline SQL, I mean concatenated SQL statements flung all over the code the way a first-year classic ASP developer would write them.

Like any good security-minded developer, I detest inline SQL (as I define it here). A much better and safer approach is to use prepared parameterized SQL, as Jeff Atwood outlines in this post.

So when I refer to Dynamic SQL I am referring to dynamically generated prepared parameterized SQL (that’s a mouthful). These are prepared parameterized SQL statements that are generated by machine and not by hand.
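To illustrate the distinction, here is a minimal sketch with hypothetical table and column names (this isn’t code from any particular query engine):

using System;
using System.Data.SqlClient;

public class DynamicSqlSample
{
    public static void PrintTitles(string connectionString, string author)
    {
        // Inline SQL concatenates the value straight into the statement text:
        // "SELECT Title FROM blog_Content WHERE Author = '" + author + "'". Don't do that.

        // Prepared parameterized SQL keeps the statement text constant and sends the
        // value as a typed parameter, so the server can reuse the plan and injection
        // isn't a concern.
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT Title FROM blog_Content WHERE Author = @Author", connection))
        {
            command.Parameters.AddWithValue("@Author", author);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["Title"]);
            }
        }
    }
}

A dynamic SQL engine simply builds statements like that one for you, based on the object model or mapping you give it.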

As Jeff points out in this post, “Stored Procedures should be considered database assembly language: for use in only the most performance critical situations.”

Taking that abstraction one step higher, you could also consider SQL itself to be a form of database intermediate language. A dynamic SQL engine generates SQL much like a compiler takes your C# code and generates IL. When that query is executed as a prepared parameterized query, it is “jitted” by the database server into a high-performance database operation.

It seems to me a decent analogy for a Dynamic SQL engine such as those built into LLBLGen Pro, NHibernate, etc…

comments edit

Craig Andera posts a technique for handling exceptions thrown by a web service. He takes the approach of adding a try/catch block to each method.

A while ago I tackled the same problem, but I was unhappy with the idea of wrapping every inner method call with a try/catch block. I figured there had to be a better way. Since SOAP is simply XML being sent over a wire, I figured there had to be a way to hook into the pipeline rather than modify my code.

What I came up with is my Exception Injection Technique Using a Custom Soap Extension. This allows you to simply add an additional attribute to each web method as in the sample below and have full control over how exceptions are handled and sent over the wire.

[WebMethod, SerializedExceptionExtension]
public string ThrowNormalException()
{
    throw new ArgumentNullException("MyParameter",
        "Exception thrown for testing purposes");
}

Read about the technique here and feel free to adapt it to your purposes.
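If you haven’t written a SOAP extension before, the general shape is below. This is just a skeleton showing where the hook lives, not the implementation from the article; the class names simply mirror the attribute used in the sample above.

using System;
using System.Web.Services.Protocols;

public class SerializedExceptionExtensionAttribute : SoapExtensionAttribute
{
    private int priority;

    // Tells ASP.NET which extension class to run for methods carrying this attribute.
    public override Type ExtensionType
    {
        get { return typeof(SerializedExceptionExtension); }
    }

    public override int Priority
    {
        get { return priority; }
        set { priority = value; }
    }
}

public class SerializedExceptionExtension : SoapExtension
{
    public override object GetInitializer(Type serviceType)
    {
        return null;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo,
        SoapExtensionAttribute attribute)
    {
        return null;
    }

    public override void Initialize(object initializer)
    {
    }

    public override void ProcessMessage(SoapMessage message)
    {
        // AfterSerialize fires once the response (including any fault) has been
        // serialized, which is where the exception details can be rewritten before
        // the message goes over the wire.
        if (message.Stage == SoapMessageStage.AfterSerialize && message.Exception != null)
        {
            // Inject the serialized exception into the SOAP fault here.
        }
    }
}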

comments edit

Now that ASP.NET 2.0 is released, a lot of developers will start to really dig into the provider model design pattern and specification and its various implementations. The provider model is really a blending of several design patterns, but most closely resembles the abstract factory.

Where the provider model really busts out the flashlight and shines is when an application (or a subset of an application) has a fairly fixed API but requires flexibility in the implementation. For example, the Membership provider has a fixed API for dealing with users and roles, but depending on the configured provider, it could be manipulating users in a database, in Active Directory, or on a 4’x6’ piece of plywood. The user of the provider doesn’t need to know.
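Stripped down to its essence (and leaving out the ProviderBase plumbing that ASP.NET 2.0 actually supplies), the pattern looks roughly like this sketch; the names here are made up for illustration:

using System;
using System.Configuration;

// The fixed API the application codes against.
public abstract class GreetingProvider
{
    public abstract string GetGreeting(string userName);
}

// One concrete implementation; a SqlGreetingProvider or ActiveDirectoryGreetingProvider
// could be swapped in through configuration without touching the calling code.
public class PlainGreetingProvider : GreetingProvider
{
    public override string GetGreeting(string userName)
    {
        return "Hello, " + userName;
    }
}

public static class Greetings
{
    // Reads the configured type name and instantiates it, so callers never know
    // which implementation they are talking to.
    public static GreetingProvider CreateProvider()
    {
        string typeName = ConfigurationManager.AppSettings["greetingProvider"];
        return (GreetingProvider)Activator.CreateInstance(Type.GetType(typeName));
    }
}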

Provider Misuse

However, one common area where I’ve seen providers misused is in an attempt to abstract the underlying database implementation away from the application. For example, in many open source products such as DotNetNuke, an underlying goal is to support multiple database providers. However, the provider model in these applications tends to become a free-for-all data access API. For example, in the .TEXT (and currently Subtext) codebase, there is one provider that is responsible for nearly all data access. It has a huge number of methods which aren’t factored into well-organized, coherent APIs.

The other inherent flaw in many of these approaches is they violate a key principle of good object oriented design…

Good design seeks to insulate code from the impact of changes to other code.

Take Subtext as an example. Suppose we want to add a column to a table. Not only do we have to update the base provider to account for the change, we also have to update every single concrete provider that implements it (assuming we had any). Effectively, the provider model in this case amplifies the effect of a schema change. The result is that it makes your proverbial butt look fat.

This is why you see an appalling lack of concrete providers for applications such as DotNetNuke, .TEXT, Subtext, etc. Despite the fact that they all implement the provider model, very few people take (or have) the time to implement a concrete provider.

A better way

For most professional web projects, this is not really an issue, since your client probably has little need to switch the database upon which the application is built. However, if you are building a system (such as an open source blogging engine) in which the user may want to plug in nearly any database, this is a much bigger issue.

After a bit of research and using an ORM tool on a project, I’ve stepped away from being a religious stored procedure fanatic and am much more open to looking at object/relational mapping tools and dynamic query engines. ORM tools such as LLBLGen Pro and NHibernate make use of dynamically generated prepared SQL statements. Now before you dismiss this approach, bear with me. Because the statements are prepared, the performance difference between these queries and a stored procedure is marginal. In fact, a dynamic SQL statement can often outperform a stored proc because it is targeted to the specific case, whereas stored procs tend to support the general case. One example is a dynamic query that selects only the columns it needs from a table rather than every column.

Better Insulation

The key design benefit of such tools is that they insulate the main application from the impact of schema changes. Add a column to a table and perhaps you only need to change one class and a mapping file. The other key benefit is that these tools already support multiple databases. Every time the ORM vendor spends time implementing support for a new database system, your application supports that database for free! That’s a lot of upside.

Think about that for a moment. NHibernate currently supports DB2, Firebird, Access, MySQL, PostgreSQL, SQL Server 2000, and SQLite. Think about how much time and effort it would take to implement a provider for each of these databases.
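To make the insulation point concrete, here is a hypothetical NHibernate-style mapped class (the names are invented and not the Subtext schema). Adding a column to the underlying table means adding one property here plus one <property> element in the corresponding .hbm.xml mapping file; nothing that uses the class has to change.

using System;

// Hypothetical entity mapped to a blog_Entry table via an Entry.hbm.xml file.
public class Entry
{
    private int id;
    private string title;
    private DateTime datePublished;

    public virtual int Id
    {
        get { return id; }
        set { id = value; }
    }

    public virtual string Title
    {
        get { return title; }
        set { title = value; }
    }

    // Suppose the schema grows a DatePublished column: this property plus a single
    // <property name="DatePublished" /> line in the mapping file is the whole change.
    public virtual DateTime DatePublished
    {
        get { return datePublished; }
        set { datePublished = value; }
    }
}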

The very fact that you don’t see a plethora of database providers for DNN, .TEXT, etc. is convincing evidence that the provider model falls short of being a great solution for abstracting the database layer from application code. It is great for small, well-defined APIs, but not well suited for a generalized data API where there tends to be a lot of code churn.

To this end, the Subtext developers are investigating the potential for using NHibernate in a future version of Subtext.


comments edit

It seems Sony has overstepped the line with some DRM it used to protect its music CDs.

The article is very technical and goes deep into the internals of how Windows works, but for you non-techies, the bottom line is that Sony wrote what amounts to spyware to protect its music. They used the very same techniques that virus and spyware writers use to cloak their programs. To add insult to injury, the malware is poorly written, with no clear way to uninstall it. It cloaks itself and also creates an exploit in that any program whose name starts with “$sys$” is cloaked from the operating system. This is most likely in violation of a variety of laws against this sort of thing (like the SPY Act).

The shortsighted outcome of this approach is that Sony is effectively planting malware on those who choose to LEGALLY purchase their music. In effect, Sony is infecting the systems of the good guys and creating a disincentive to purchase music legally.

As many people pointed out in the comments of the article, they would feel safer downloading music from an illegal source rather than installing proprietary software used to listen to DRM protected music. Perception is everything and if the public perceives that Sony requires installing spyware to play their copy protected music, they will look for alternative means to get the music.

I wouldn’t be surprised if a class action suit resulted from this.

via Jon Galloway

comments edit

I just had to post this in its full glory. Great leadup Matt!

You just came to Texas Tech University as a freshman…

You are SO PROUD that you were chosen to be the “Bell Ringer”. Your job is to ring the school’s bell during the big game to help pump up the crowd…

Your whole family, all your friends, and 15 million ESPN viewers see you on Saturday’s telecast ringing the team’s bell. But, due to the tragically unfortunate placement of the bell, the camera, and your body, your whole family, all of your friends, and 15 million ESPN viewers see this instead….

[Via public MattBerther : ISerializable]

comments edit

While it may be exciting to see Microsoft jumping aboard the web-based application bandwagon, something that I am experiencing right now reminds me why I think there is still a strong place for rich “smart” clients.

There is an important piece of information in an email someone sent me, and when I try to log in to Gmail I get…

Gmail is temporarily unavailable. Cross your fingers and try again in a few minutes. We’re sorry for the inconvenience.

At least with a rich client like Outlook, I would have had that email on my local machine. I also use Yahoo Notepad for important information and have had that site be down a few times when I needed a critical piece of information. It makes me realize that I shouldn’t trust these services to host my important data. I want it on my own machine where I can get to it.

comments edit

I have a question for those of you who host a blog with a hosting provider such as WebHost4Life. Do you make sure to remove write access for the ASPNET user to the bin directory? If so, would you be willing to enable write access for an installation process?

The reason I ask is that I’ve created a proof of concept for a potential nearly no-touch upgrade tool for upgrading .TEXT to Subtext. This particular tool is geared towards those who have .TEXT hosted with 3rd party hosting, although even those who host their own server may want to take advantage of it.

The way it works is that you simply drop a single upgrader assembly into the bin directory of an existing .TEXT installation. You also drop an UpgradeToSubtext.aspx file in your admin directory (This provides a bit of safety so that only an admin can initiate the upgrade process).

Afterwards, you simply point your browser to Admin/UpgradeToSubtext.aspx, which initiates the upgrade process.

The upgrade tool finds the connection string in the existing web.config and displays a message with the actions it is about to take. After you hit the upgrade button, it backs up important .TEXT files and unzips an embedded zip file which contains all the binaries and content files for Subtext. It also runs an embedded SQL script to create all the new Subtext tables and stored procedures and copies your .TEXT data over. It doesn’t modify any existing tables, so it is possible to roll back in case something goes wrong. Finally, it overwrites the web.config file with a Subtext web.config file, making sure to set the connection string properly.
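For the curious, the embedded zip is the interesting bit. Roughly, and with made-up resource and path names, it works like the sketch below (this assumes a zip library such as SharpZipLib ships alongside the upgrader; it is not the actual upgrader code).

using System.IO;
using System.Reflection;
using ICSharpCode.SharpZipLib.Zip; // assumes #ziplib is deployed with the upgrader assembly

public class EmbeddedZipExtractor
{
    // Extracts a zip file embedded as a resource in the upgrader assembly
    // into the target web directory.
    public static void Extract(string resourceName, string webRoot)
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        using (Stream resource = assembly.GetManifestResourceStream(resourceName))
        using (ZipInputStream zip = new ZipInputStream(resource))
        {
            ZipEntry entry;
            byte[] buffer = new byte[4096];
            while ((entry = zip.GetNextEntry()) != null)
            {
                string target = Path.Combine(webRoot, entry.Name);
                if (entry.IsDirectory)
                {
                    Directory.CreateDirectory(target);
                    continue;
                }
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                using (FileStream output = File.Create(target))
                {
                    int read;
                    while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
                        output.Write(buffer, 0, read);
                }
            }
        }
    }
}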

It’s a very nice and automated procedure, but it has a key flaw. It requires write access to your bin directory.

An alternate approach that avoids writing to the bin directory is to have the user manually deploy all the Subtext binaries to the bin directory. The upgrade process would run the same, but it would only need write access to the web directory to deploy the various content files. Giving the ASPNET user write access to the web directory is not an unreasonable request since the gallery feature of .TEXT did create folders and require write access.

If you are considering upgrading from .TEXT to Subtext, or if you just have an opinion, please chime in.

comments edit

The subject of this post is the title of an interesting article on page 58 of this month’s Wired magazine. The author, Patrick Di Justo, shows that compared to 1950 prices, we are paying more of our annual income for houses, but we get a lot more for our dollar.

For example, the average square feet per person in 1950 was 289.1, compared to 896.2 today. Price per square foot, when adjusted for inflation, is actually lower today than in 1950. One of the more striking numbers is how many square feet a year’s income buys today compared to then. Then, one could buy 429.3 square feet, while today one can buy 930.1.

What I would love to see is this analysis applied to Los Angeles home prices from 1950 to present.

comments edit

Software pundit Joel Spolsky finally added titles to his RSS feeds (among other site improvements) and it’s about time. The title “November 5, 2005” tells me nothing about what he’s saying. Love him or hate him (why choose one or the other? Choose both!), Joel is definitely worth reading.

comments edit

Every day I look at my current code and go, “Damn, that’s some sweet shit!” But I also look at code I wrote a month ago and say, “What a freakin’ idiot I was to write that!” So in a month, the code I’m writing today will have been written by an idiot.

It looks like I am not the only one who feels that way.

It seems that no matter the date, if I look back at the code I wrote six months prior, I always wonder “what kind of crack was I smoking when I wrote that?” For the most part it’s not likely to end up on The Daily WTF, but still, does this cycle ever end? Or at least get longer?

I suppose the optimistic way to look at it is that I am still learning pretty steadily, and not becoming stagnant. I’m also able to resist the temptation to go back and fiddle with what isn’t broke. I do kinda feel bad for anyone that has to maintain any of my older stuff (actually not really, suckers).

[Via Pragmatic Entropy]

comments edit

By now, every developer and his mother knows that VS.NET 2005 and SQL Server 2005 have been released. Prepare for the generics explosion as legions of .NET developers retrofit code, happily slapping <T> wherever it fits.

I predict the number of angle brackets in C# code will initially increase by 250%, only to settle over time to around 75% above current numbers. If you don’t count the angle brackets in C# comments, the increase could be even higher.

But before you go too hog wild with generics, do consider that generics have an overhead associated with them, especially generics involving a value type. Their benefits do not come completely free.

As Rico Mariani pointed out in his PDC talk, generics involve a form of code generation by the runtime. His rule of thumb was that when your collection contains around 500 or so items, the benefits outweigh the overhead. But as always: measure, measure, measure.

In general, the strong typing and improved code productivity outweigh any performance concerns I have with small collections.
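As a trivial illustration of the strong-typing win (this is about boxing with value types, not Rico’s code generation numbers):

using System;
using System.Collections;
using System.Collections.Generic;

public class GenericsSample
{
    public static void Main()
    {
        // The 1.1 way: every int gets boxed going in and cast coming out,
        // and the cast is only checked at runtime.
        ArrayList oldSchool = new ArrayList();
        oldSchool.Add(42);
        int a = (int)oldSchool[0];

        // The 2.0 way: no boxing, no casts, and type mistakes fail at compile time.
        List<int> newSchool = new List<int>();
        newSchool.Add(42);
        int b = newSchool[0];

        Console.WriteLine(a + b);
    }
}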

UPDATE: Whoops, I mistyped the number of items Rico mentioned. He said 500, not 50. Thanks for the correction Rico.

comments edit

The great people at WebHost4Life moved my database and web server to new Windows 2003 servers. They put them in the same server block, and I noticed a significant improvement in the time it takes my blog to load. This explains why my site was down this morning.

Hopefully, this server has much less abusive tenants than my last one did.

comments edit

Eric Lippert does a great job of defining the term Idempotent. I’ve used this term many times both to sound smart and because it is so succinct.

The one place I find idempotence really important is in creating update scripts for a database such as the Subtext database. In an open source project geared toward other devs, you just have to assume that people are tweaking and applying various updates to the database. You really have no idea what condition the database is going to be in. That’s where idempotence can help.

For example, if an update script is going to add a column to a table, I try to make sure the column isn’t already there before adding it. That way, if I run the script twice, three times, or twenty times, the table is the same as if I ran it once. I don’t end up adding the column multiple times.
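Here is roughly what that looks like, with a made-up table and column name rather than the actual Subtext schema:

using System.Data.SqlClient;

public class UpgradeScripts
{
    // Guarded so that running it once or twenty times leaves the table in the same state.
    const string AddColumnScript = @"
        IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_NAME = 'blog_Content'
                         AND COLUMN_NAME = 'DateSyndicated')
        BEGIN
            ALTER TABLE blog_Content ADD DateSyndicated DATETIME NULL
        END";

    public static void AddDateSyndicatedColumn(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(AddColumnScript, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}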

comments edit

Sometimes someone writes a post that makes you say, “Oh shit!”. For example, Jon Galloway writes that writing a windows service just to run a scheduled process is a bad idea.

And he presents a very nice case. Nice enough that I take back the times I have condescendingly said that Windows Services are easy to write in .NET. I probably should look through some of the services I have written in the past. I know of one I could easily convert to a console app and gain functionality.

However, I think the decision sometimes isn’t as easy as that. One service I have written in the past is a socket server that takes in encrypted connections and communicates back with the client. That obviously needs to run all the time and is best served as a Windows Service. The problem was that since I had written the Windows Service code to be generalized, I was able to implement many other services very quickly, even ones with timers that ran on a schedule.

However, the most challenging ones to write happened to be the ones that ran on a schedule, since the scheduling requirements kept changing and I realized I was going down the path of implementing… well… the Windows Task Scheduler.

In general, I think Jon’s right. If all you are doing is running a scheduled task, use the Windows Task Scheduler until you reach the point that your system’s needs are no longer met by the scheduler. This follows the principle of doing only what is necessary and implementing the simplest solution that works.

In a conversation, Jon mentioned that a lot of developers perceive Windows Services to be a more “professional” solution than a task-scheduled console app. But one way to think of a service is as an application that responds to requests and system events, not necessarily as a scheduled task. So to satisfy both camps, you could consider creating a service that takes in requests and a scheduled task to make the requests. For example, a service might have a file system watcher active while a scheduled task writes the file. I don’t suggest adding all this complexity to something that can be very simple.
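As a rough sketch of that split, with hypothetical paths (the scheduled half would just be a console app that the Task Scheduler runs to drop a file into the watched folder):

using System;
using System.IO;

// The always-on half: a service (or a console app while testing) that reacts
// to a request file showing up in a watched folder.
public class RequestWatcher
{
    public static void Main()
    {
        FileSystemWatcher watcher = new FileSystemWatcher(@"C:\ServiceRequests", "*.request");
        watcher.Created += new FileSystemEventHandler(OnRequestCreated);
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for requests. Press Enter to quit.");
        Console.ReadLine();
    }

    static void OnRequestCreated(object sender, FileSystemEventArgs e)
    {
        // Do the real work here; the scheduled task only had to write the file.
        Console.WriteLine("Handling request: " + e.FullPath);
    }
}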

For me, I also like writing Windows Services because I have a system for quickly creating installation packages. What I need to do is spend some time creating an installer task for setting up a Windows Task Scheduler job. That way I can do the same for a scheduled console app.