
You probably didn’t know this, but this blog is more than a technical blog. It is also your source for solving all the world’s problems. For example, there is the age-old question, “Where did the matching sock go?” Did you notice the extra amount of lint in the dryer? Problem solved.

Today I noticed something interesting in the cupboard. We bought some Arizona Iced Tea Mix. It’s the powder stuff you mix with water to produce a refreshing drink. I noticed that not only is it packaged in a tin, but right there on the tin it lets you know that it is a collectible tin. Really? Wow! I can’t wait to get the whole set!

So that got me thinking: it makes a ton of sense to label it as a collectible. Otherwise that tin becomes an ordinary piece of garbage when the little tea packages run out. But now, my friends, it is a wonderful piece for the mantel, ready to be displayed with pride to other collectors. Perhaps I will be the one to puff out my chest and brag about how I found the rare Peach Tea Tin with the misprint logo, eliciting a bit of envy among other collectors.

But why stop there? I notice that candy bars produce a lot of litter on the streets. Why not have our Snickers bars come in collectible wrappers? Or that impossible-to-break plastic shell that electronic devices are packaged in? Maybe if we made every newspaper issue a special collector’s edition, we could really reduce our landfill clutter.

The end result is that we will have moved from a centralized landfill structure (and all us geeks know centralized is bad! Booo!) to a decentralized peer-to-peer (good! yay!) landfill structure. It is the Napsterization of garbage…er, collectibles (though perhaps BitTorrentization is more accurate, Napsterization just sounds better).


Okay, it is survey time for all you Subtext users! I would like to know which Subtext skin you use for your blog. Please leave a comment with the skin that you use. If you use a custom skin, then just say Custom.

I have heard many say that the skins in Subtext look like they were designed by a developer rather than a designer. Well, surprise surprise: that’s probably because they were! I have a sneaking suspicion that many of them are not in use.

The reason this is important to me is that rather than simply continuing to add more and more skins to Subtext (generating a huge list), I would like to weed out some of the more drab skins. But instead of deleting skins, I was thinking I could simply replace them with newer skins that still fit the spirit of the original skin’s name. For example, I might replace the skin White with a new design that is very light in color and tone. Make sense?


I have been considering using a separate library for generating the RSS and Atom feeds in Subtext. My first thought was to use RSS.NET, but I noticed that there seemed to be no recent activity on the project.

I contacted the admin and found out that RSS.NET has been bought by ToolButton Inc and will be released as a product. Very cool!

In the meantime, I still need an open source RSS library to package with Subtext. Fortunately, RSS.NET was developed under the MIT license which, as I mentioned before, is very compatible with our BSD license.

So one option is to simply copy the code into our Subtext codebase. My only qualm about this approach is that I would like to keep stand-alone libraries that are not central to the Subtext domain out of the Subtext codebase as much as possible, preferring to reference them as external libraries.

Ideally, I would like to start a new project that is essentially a fork of RSS.NET, perhaps called FeedGenerator.NET (call me the forkmaster). I could probably host it on CodePlex, which would give me an opportunity to try CodePlex out and provide feedback. Would anyone other than us blog engine developers find such a library useful? Anyone have a better name?

I probably wouldn’t spend much time on this project except to provide changes and bug fixes as needed by Subtext. It would by no means be intended to compete with Web 2.0 Tools products, since they are probably going to be much more full featured than our humble needs. Besides, under the MIT license, any improvements we make would be available for them to roll into their product (following the terms of the license of course). It is the beauty of the MIT and BSD licenses.

Any thoughts? Suggestions? Etc…?


In my last post, one of the restrictions listed for running in medium trust is that HTTP access is only allowed to the same domain. It is possible in web.config to add a single domain via the originUrl attribute of the <trust> element, as described by Cathal.
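For example, something like this (the domain here is hypothetical; note that originUrl takes a regular expression):

<system.web>
    <trust level="Medium" originUrl="http://example\.com/.*" />
</system.web>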

To add more than one domain requires editing machine.config or creating a custom trust policy, neither of which will be accessible to many users in a hosted environment. This may pose a big problem for those who care about trackbacks, since even if you could modify machine.config, there is no way to predetermine every domain you will send a trackback to.

One solution is to beg your hosting environment to relax the WebPermission in medium trust. If trackbacks and pingbacks are important to you, you shouldn’t be above begging. ;)

Another is for someone to create a passthrough trackback system in a fully trusted environment. Essentially, this would act on behalf of the medium-trust trackback creator and forward trackbacks to their final destinations. It would require blogging engines affected by medium trust to trust this single domain. Of course, the potential for abuse is high and the rewards are low (unless people out there absolutely love trackbacks).


UPDATE: Mea Culpa! It seems like Log4Net has no problems with medium trust and an external log4net file. I have written an updated post that talks about the problem I did run into and how I solved it.

A while ago I wrote a quick and dirty guide to configuring Log4Net for ASP.NET. I originally wrote here that this technique does not work with ASP.NET 2.0 when running in medium trust, but as the update above explains, the technique actually continues to work with medium trust!

While digging into the problem I found this blog post (from an aptly titled blog) by Kevin Jones.

This article from Microsoft discusses the ramifications of running ASP.NET 2.0 in medium trust more thoroughly. Here is a list of constraints placed on medium trust applications.

The main constraints placed on medium trust Web applications are:

  • OleDbPermission is not available. This means you cannot use the ADO.NET managed OLE DB data provider to access databases. However, you can use the managed SQL Server provider to access SQL Server databases.
  • EventLogPermission is not available. This means you cannot access the Windows event log.
  • ReflectionPermission is not available. This means you cannot use reflection.
  • RegistryPermission is not available. This means you cannot access the registry.
  • WebPermission is restricted. This means your application can only communicate with an address or range of addresses that you define in the <trust> element.
  • FileIOPermission is restricted. This means you can only access files in your application’s virtual directory hierarchy. Your application is granted Read, Write, Append, and PathDiscovery permissions for your application’s virtual directory hierarchy.

You are also prevented from calling unmanaged code or from using Enterprise Services.

Fortunately there is a way to specify that a configuration section within web.config should not require ConfigurationPermission. Simply add the requirePermission="false" attribute to the <section> declaration within the <configSections> area like so:

<configSections>
    <section name="log4net"
        type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"
        requirePermission="false" />
</configSections>

Unfortunately this only applies to configuration sections within the web.config file itself. I have not found a way to specify that ASP.NET should not require ConfigurationPermission for an external configuration file. As I stated in my post on Log4Net, I prefer to put my Log4Net configuration settings in a separate configuration file. If anyone knows a way to do this, please let me know!

So in order to get Log4Net to work, I added the declaration above to the web.config file and copied the settings from the Log4Net.config file (pretty much cut and paste everything except the top XML declaration) into the web.config file. I then removed the assembly-level XmlConfigurator attribute from AssemblyInfo.cs, as it is no longer needed. Instead, I added the following line to the Application_Start method in Global.asax.cs.

protected void Application_Start(Object sender, EventArgs e)
{
    log4net.Config.XmlConfigurator.Configure();
}
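For completeness, the <log4net> section that gets moved into web.config might look something like this minimal sketch (the appender, file path, and level here are just examples; your settings will differ):

<log4net>
    <appender name="RollingFileAppender"
        type="log4net.Appender.RollingFileAppender">
        <!-- Medium trust allows file IO within the application's
             directory hierarchy, so App_Data is a safe log location. -->
        <file value="App_Data/log.txt" />
        <appendToFile value="true" />
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern
                value="%date [%thread] %-5level %logger - %message%newline" />
        </layout>
    </appender>
    <root>
        <level value="INFO" />
        <appender-ref ref="RollingFileAppender" />
    </root>
</log4net>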

So in summary, here are the changes I made to get Log4Net to work again in medium trust.

  • Added the log4net section declaration in the configSections section of web.config and made sure the requirePermission attribute is set to false.
  • Moved the log4net settings into web.config.
  • Removed the assembly-level XmlConfigurator attribute from AssemblyInfo.cs.
  • Added a call to XmlConfigurator.Configure() to the Application_Start method in Global.asax.cs.

I have been working on getting the version of Subtext in our Subversion trunk to run in a medium trust environment, but there have been many challenges. Some of the components we use, such as the FreeTextBox, do not appear to run in a medium trust environment. Fortunately, we have a workaround for that issue, which is to change the RichTextEditor node in web.config to use the PlainTextRichTextEditorProvider (which is a mouthful and should probably be renamed to PlainTextEditorProvider).


Recently I poked lighthearted fun at one of the recent Apple commercials, the one called “Network.”

There are many who dislike the ads and claim that they are blatantly untrue. One example is this humorous and well written rebuttal by Seth Stevenson.

Seth points out…

The final straw, for me, is that the spots make unconvincing claims. The one titled “Network” has a funny bit where “that new digital camera from Japan” is represented by a Japanese woman in a minidress. While Hodgman has trouble talking with the woman, Long speaks Japanese and shares giggles with her because “everything just kind of works with a Mac.” Now, I happen to have a digital camera from Japan, and it works just fine with my PC. It did from the moment I connected it.

Good point, Seth. Perhaps the ad would have been more accurate had it been a Japanese printer rather than a digital camera. And I mean a printer that really is from Japan. My wife’s dad left us an Epson PM-950C printer (among other things). This printer is not listed on the Epson U.S. site, but presumably can be found on the Japanese site.

In order to test it out, we plugged it into the USB port on her five-year-old iBook and printed a web page. Worked like a charm.

Fast forward a couple of weeks, and we wanted to use the printer on my PC. We plugged it in, got the “Windows cannot find the driver for the hardware” dialog, and were unable to print. Effectively, it didn’t just work. I then downloaded a driver that is supposed to be the U.S. equivalent. Still didn’t work.

My wife laughed out loud and said, “See! The ads are right!” I caught myself making excuses, explaining to her that there are technical reasons beyond her comprehension and that I just needed to toy with it for a moment to get it to work.

But I caught myself. Why am I making excuses for PCs and Microsoft? The experiment was quite simple. I plugged a Japanese printer into a Macintosh, and we were able to print. I plugged the same printer into our PC (which was recently re-installed) and it didn’t work. Anecdotal evidence, sure, but it adds weight to the Apple claim that things just work with a Mac and don’t just work with Windows.

And responding with, “Well, it will with Vista” is not a satisfactory answer. Tomorrow cars will fly, the U.S. will win the World Cup, and we will control our computers by having conversations with them and manipulating holograms. But the fact of the matter is that we all live in today, and I will give Apple a bit of credit here.


A while ago I had the idea of posting a picture of the day. Of course I didn’t mean every day, but the title might lead one to believe so. Therefore in order to reduce confusion, here is the next photo in my Picture of the Moment series.

Geisha Trio

This is the most favorited and most interesting picture in my Flickr account, according to Flickr. So naturally that tells you that it wasn’t me who took this photo, but my lovely wife while on a trip to Japan. I was reminded of this picture because someone recently added a comment.

Speaking of Flickr, I love how you can comment on specific regions of a picture by drawing a box on the region. If you click on the image, you can see my feeble attempts at humor by moving your mouse over the photo. My comments are in sharp contrast to the elegance of three Geisha walking through a crowd.


Jeff asks the question, “Isn’t programming supposed to be fun?” Ha ha ha, naive little bunny. As if the person who invented programming had fun in mind. Silly rabbit.

You’ve heard the cliche. The reason it is called work is because it is work. Programming is fun when it is just for fun.

Cynical jokes aside, I actually did work for fun at my last job. Well, technically, I worked for FUN. My former employer was bought by a company that then changed its name to Fun Technologies. It is listed under the symbol FUN on the Toronto Stock Exchange and the Alternative Investment Market of the London Stock Exchange.

When I visited headquarters in Toronto, the CEO gave me a hat with the ticker symbol emblazoned on the front. I think I still have it somewhere. I would wear it and go around telling people that I work for FUN. It was mildly funny for about two minutes. But as is my style, I dragged it out for thirty.


Working as a team against a common database schema can be a real challenge. Some teams prefer to have their local code connect to a centralized database, but this approach can create many headaches. If I make a schema change to a shared database but am not ready to check in my code, I can break the site for another developer. For a project like Subtext, a central database is simply not feasible.

Instead, I prefer to work on a local copy of the database and propagate changes via versioned change scripts. That way, when I check in my code, I can let others know which scripts to run on their local database when they get the latest source code. Of course, this can also become a big challenge as the number of scripts starts to grow and developers are stuck keeping track of which scripts they have run and which they haven’t.

That is why I always recommend to my teams that we script schema and data changes in an idempotent manner whenever possible. That way, updates can be batched together in a single file (per release, for example), and a developer simply runs that single script any time an update is made.

As an example, suppose we have a Customer table and we need to add a column for the customer’s favorite color. I would script it like so:

IF NOT EXISTS 
(
    SELECT * FROM [information_schema].[columns] 
    WHERE   table_name = 'Customer' 
    AND table_schema = 'dbo'
    AND column_name = 'FavoriteColorId'
)
BEGIN
    ALTER TABLE [dbo].[Customer]
    ADD FavoriteColorId int
END

This script basically checks for the existence of the FavoriteColorId column on the table Customer and if it doesn’t exist, it adds it. You can run this script a million times, and it will only make the schema change once.

You’ll notice that I didn’t query against the system tables, instead choosing to look up the information in an INFORMATION_SCHEMA view named COLUMNS. This is the Microsoft recommendation, as they reserve the right to change the system tables at any time. The information schema views are part of the SQL-92 standard, so they are not likely to change.

There are 20 schema views in all, listed below with their purpose (aggregated from SQL Server Books Online). Note that in all cases, only data accessible to the user executing the query against the information_schema views is returned.

CHECK_CONSTRAINTS: Every check constraint.
COLUMN_DOMAIN_USAGE: Every column that has a user-defined data type.
COLUMN_PRIVILEGES: Every column with a privilege granted to or by the current user in the current database.
COLUMNS: Every column in the database.
CONSTRAINT_COLUMN_USAGE: Every column that has a constraint defined on it.
CONSTRAINT_TABLE_USAGE: Every table that has a constraint defined on it.
DOMAIN_CONSTRAINTS: Every user-defined data type with a rule bound to it.
DOMAINS: Every user-defined data type.
KEY_COLUMN_USAGE: Every column that is constrained as a key.
PARAMETERS: Every parameter for every user-defined function or stored procedure in the database. For functions, this returns one row with return value information.
REFERENTIAL_CONSTRAINTS: Every foreign key constraint in the system.
ROUTINE_COLUMNS: Every column returned by table-valued functions.
ROUTINES: Every stored procedure and function in the database.
SCHEMATA: Every database in the system.
TABLE_CONSTRAINTS: Every table constraint.
TABLE_PRIVILEGES: Every table privilege granted to or by the current user.
TABLES: Every table in the system.
VIEW_COLUMN_USAGE: Every column used in a view definition.
VIEW_TABLE_USAGE: Every table used in a view definition.
VIEWS: Every view.

When selecting rows from these views, the view name must be prefixed with information_schema, as in SELECT * FROM information_schema.tables.

Please note that the information schema views are based on the SQL-92 standard, so some of the terms used in these views differ from the terms in Microsoft SQL Server. For example, in the example above, I set table_schema = 'dbo'. The term schema refers to the owner of the database object.

Here is another code example in which I add a constraint to the Customer table.

IF NOT EXISTS(
    SELECT * 
    FROM [information_schema].[referential_constraints] 
    WHERE constraint_name = 'FK_Customer_Color' 
      AND constraint_schema = 'dbo'
)
BEGIN
  ALTER TABLE dbo.Customer WITH NOCHECK 
  ADD CONSTRAINT
  FK_Customer_Color FOREIGN KEY
  (
    FavoriteColorId
  ) REFERENCES dbo.Color
  (
    Id
  )
END

I generally don’t go to all this trouble for stored procedures, user-defined functions, and views. In those cases I use Enterprise Manager to generate a full drop-and-create script. When a stored procedure is dropped and re-created, you don’t lose data as you would if you dropped and re-created a table that contained some data.
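The generated script follows a pattern something like this (a sketch using a hypothetical GetCustomer procedure):

IF EXISTS
(
    SELECT * FROM [information_schema].[routines]
    WHERE routine_name = 'GetCustomer'
    AND routine_schema = 'dbo'
)
BEGIN
    DROP PROCEDURE [dbo].[GetCustomer]
END
GO

CREATE PROCEDURE [dbo].[GetCustomer]
    @Id int
AS
    SELECT * FROM [dbo].[Customer] WHERE Id = @Id
GO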

With this approach in hand, I can run an update script containing new schema changes, confident that any changes in the script that I have already applied will not be applied again. The same approach works for lookup data as well. Simply check for the data’s existence before inserting it. It is a little bit more work up front, but it is worth the trouble, and schema changes happen less frequently than code or stored procedure changes.
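For example, seeding the Color lookup table from the earlier example might look like this (assuming a Name column and an identity Id column):

IF NOT EXISTS
(
    SELECT * FROM [dbo].[Color] WHERE Name = 'Blue'
)
BEGIN
    INSERT INTO [dbo].[Color] (Name) VALUES ('Blue')
END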


I got a lot of great feedback from my post on Building Plugins Resilient to Versioning, which proposes an event-based self-subscription model for plugins.

Craig Andera points out that we can get many of the same benefits by having plugins implement an abstract base class instead of an interface. This is definitely a workable solution and is probably isomorphic to the event-based approach.

Dimitri Glazkov was the voice of dissent in the comments to the post, pointing out that the application loses granular control over plugins in this approach. I was not convinced at the time, as I was focused on keeping the surface area of the plugin interface that is exposed to the application very small. When the surface area is small, there is less reason for the interface to change and thus less reason to break it.

However, a simple thought experiment made me realize that we do need to have the application retain granular control over which plugins can respond to which events. Here is the scenario.

Suppose our plugin framework defines three events: MessageSending, MessageSent, and MessageReceiving, and someone writes a plugin that responds to all three. Later, someone else writes a plugin that only responds to MessageReceiving. If the blog user wants to chain the functionality of the new plugin to the existing plugin, so that both fire when a message is received, then all is well.

But suppose this new plugin’s handling of the MessageReceiving event should replace the handling of the old plugin. How would we do this? We can’t just remove the old plugin, because then we lose its handling of the other two events. Dimitri was right all along on this point: we need more granular control.

It makes sense to have some sort of admin interface in which we can check and uncheck individual plugins and control whether or not they are allowed to respond to specific events. This turns out to be not too difficult with the event-based approach.

.NET’s event pattern is really an implementation of the Observer pattern, but using delegates rather than interfaces. After all, what is a delegate under the hood but yet another class? When any code attaches a method to an event, it is in effect registering a callback method with the event source. This is the step where we can obtain more granular information about our plugins.

In the application that hosts the plugins, events that require this granular control (not every event will) could be defined like so:

private event EventHandler messageReceived;

public event EventHandler MessageReceived
{
    add
    {
        RegisterPlugin(value.Method.DeclaringType);
        AddEvent(value);
    }
    
    remove
    {
        UnRegisterPlugin(value.Method.DeclaringType);
        RemoveEvent(value);
    }
}

So when a handler is added or removed, we register (or unregister) the plugin with the system and then add (or remove) the handler in some internal structure. For the purposes of this discussion, I’ll present some simple implementations.

void AddEvent(EventHandler someEvent)
{
    //We could choose to add the event 
    //to a hash table or some other structure
    this.messageReceived += someEvent;
}

void RemoveEvent(EventHandler someEvent)
{
    this.messageReceived -= someEvent;
}
                
private void RegisterPlugin(Type type)
{
    //using System.Diagnostics;
    StackTrace stack = new StackTrace();
    StackFrame currentFrame = stack.GetFrame(1);
    Console.WriteLine("Registering: " + type.Name 
         + " to event " + currentFrame.GetMethod().Name);
}

private void UnRegisterPlugin(Type type)
{
    StackTrace stack = new StackTrace();
    StackFrame currentFrame = stack.GetFrame(1);

    Console.WriteLine("UnRegistering: " + type.Name 
        + " to event " + currentFrame.GetMethod().Name);
}

As stated in the comments, the AddEvent method attaches the event handler in the standard way. I could have chosen to put it in a hash table or some other structure. Perhaps in a real implementation I would.

The RegisterPlugin method examines the call stack so that it knows which event the plugin is being registered for. In a real implementation, this would probably insert or update a record in a database somewhere so the application knows about the plugin. Note that this should happen when the application is starting up, or at least sometime before the user can start using the plugin. Otherwise there is no point to having access control. Then, when the event is raised, we only invoke the handlers belonging to plugins that are currently enabled:

public void OnMessageReceived()
{
    EventHandler messageEvent = this.messageReceived;
    if(messageEvent != null)
    {
        Delegate[] delegates = messageEvent.GetInvocationList();
        foreach(Delegate del in delegates)
        {
            //EnabledForEvent would consult the plugin settings
            //(a database, perhaps) to see whether the user has
            //enabled this plugin type for the named event.
            if (EnabledForEvent(del.Method.DeclaringType, 
                "MessageReceived"))
            {
                del.DynamicInvoke(this, EventArgs.Empty);
            }
        }
    }
}

Now, when we invoke the event handler, instead of simply invoking the event, we examine the delegate chain (depending on how we store the event handlers) and dynamically invoke only the event handlers that we allow. How is that for granular control?

In this approach, the implementation for the application host is a bit more complicated, but that complexity is totally hidden from the plugin developer, as it should be.


Seems like every day now someone asks me if I plan on moving Subtext over to CodePlex. I figure it would save me a lot of trouble if I just answer this question here once and for all.

Though of course I can’t answer once and for all. I can only answer for the here and now. And right now, I have thought about it, but have not strongly considered it for the following reasons.

Not Feeling the Pain

First of all, I am not really feeling a lot of pain with our current setup at SourceForge. We have CruiseControl.NET humming along nicely, a great build process, and are very happy with Subversion. Life is good, why should we change?

Also, we already made one switch from CVS to Subversion. To yet again switch source control systems is a big hassle. There would have to be a huge benefit to doing so to make it worthwhile. A minor benefit is not enough.

Source Control

As you know, I am a big fan of Subversion and TortoiseSVN. Source control bindings in Visual Studio are the second biggest nightmare I have had the pleasure to deal with, second only to FrontPage extensions. For example, I work with one client who uses Vault and another who uses Visual SourceSafe. Switching between the two is such a pain in the rear, as I have to remember to switch the SCC provider before I start working on one or the other.

As far as I am concerned, there is a big hump to overcome before I am comfortable using SCC bindings again. I understand that the CodePlex people are working on Turtle, which is a TortoiseSVN-like interface to CodePlex. When it is as solid as TortoiseSVN, perhaps we can talk.

Also, does Team System source control version renames and moves? That is a big plus in Subversion. Does it work over HTTPS? Are checkins atomic? Are branching and tagging fast? I haven’t looked into this and would love to know.

Source Control History

Can we import our Subversion history into Team System at CodePlex? Our version history is very important to us. At least to me. I would hate to lose that.

CruiseControl.NET

It is probably only a matter of time before someone writes a plugin for CruiseControl.NET that works with Team System, but this would be important to me. Now, Simone tells me that Team System has something equivalent that would replace CCNet as part of CodePlex. If that is the case, I would love to see details.

MbUnit

As you might also know, I love me some MbUnit. I made the switch from NUnit a while ago and have never looked back. If CodePlex has a CCNet replacement, will it integrate with MbUnit? I know Team System has its own unit test framework, but does it have the Rollback and RowTest attributes and a TypeFixture or equivalents? And if you tell me about its extensibility model and that I can write my own, I ask in response: why should I? I already have those things.

Summary

At this point, I would love to hear more details about CodePlex that address my concerns. Perhaps a demo video that shows me what we’re missing. But until these issues are addressed, or unless all the other Subtext developers are chomping at the bit for CodePlex and threaten a mutiny if we do not switch over, I do not see any urgency or reason to switch now. Sometimes being bleeding edge just leaves you with a bloody mess.


You’ve probably seen the recent Apple commercials with the two guys holding hands. One introduces himself as a PC and the other introduces himself as an Apple Macintosh. They hold hands because they speak each other’s language. Along comes a Japanese woman, representing a Japanese digital camera, who sidles up to the Mac guy and holds his hand. The Mac speaks her language too. If you haven’t seen it, you can watch it on YouTube by clicking on the image below.

Apple Commercial

At the end of the commercial, she looks over at the PC guy and says something in Japanese to the Mac guy that elicits chuckling between the two. The inside joke is that she thinks the PC guy looks like an Otaku.

Otaku refers to a specific flavor of Japanese geekdom in which the geek is obsessed with anime, manga, action figures, and video games. In Japan it traditionally has very negative connotations and would be considered a pejorative. It probably has similar connotations to the term Nerd back in the 80s before Nerds became rich and drove Bentleys.

However, this month’s issue of Wired (14.07) has a short article by Tony McNicol that describes how this subculture has morphed into a thriving industry and a trendy lifestyle export.

“Otaku have joined the mainstream to become a major cultural icon,” says Tokyo journalist and social observer Kaori Shoji. “They’ve been lurking on the edge of hip for some years. Now they’ve gone completely legit.” In a recent column for the Japan Times, Shoji wrote about women who were desperately trying to land otaku boyfriends and the trouble they were having competing with the ultrageeks’ preferred romantic companions: racy images of anime idols freely available online.

OK, so this still isn’t necessarily an unqualified compliment, but associating the PC with a hip cultural icon is probably the last thing Apple desires. Apple is supposed to be the hip one wearing shades in the room. Perhaps if the commercial intended to reflect modern shifts in Japanese culture, we should see the woman trying to sidle up to the PC guy, who would spurn her advances because he is too busy playing Oblivion.


With Ted Neward’s recent post on the morass that is Object-Relational mapping, there has been a lot of discussion going around on the topic. In the comments on Atwood’s post on the subject, some commenters ask why we put data in a relational database at all. Why not use an object database?

The Relational Model is a general theory of data management created by Edgar F. Codd based on predicate logic and set theory. As such, it has a firm mathematical foundation for storing data with integrity and for efficiently pulling data using set based operations. Also, as a timeless mathematical theory it has no specific ties to any particular framework, platform, or application.

Now enter object databases, which I am intrigued by but have yet to really dig into. From what I have read (and if I am off base, please enlighten me), these databases allow you to store your object instances directly in the database, probably using some sort of serialization format.

It seems to me this could introduce several problems. First, it potentially makes set based operations that are not based on the object model inefficient. For example, to build a quick ad hoc report, I would have to write some code to traverse the object hierarchy, which might not be an efficient means of obtaining the particular data. Perhaps an object query language would help mitigate or even solve this. I don’t know.

Another issue is that your data is now more opaque. There are all sorts of third party tools that work with relational data almost without regard to the database platform. It is quite easy to take Access and generate a report against an existing SQL database, or to use other tools for exporting data out of a relational database. But since object-oriented databases lack a formal mathematical foundation, it may be difficult to create a standard for connecting to and querying object databases that every vendor will agree on.

One last issue is more big picture. It seems to me an object database ties the data too closely to the current code implementation. I have worked on a project that was originally written in classic ASP with no objects. The code I wrote used .NET and objects to access the same data repository. Fortunately, since the data was in a normalized relational database, it was not a problem to understand the data simply from looking at the schema and to load it into my new objects.

How would that work with an object database? If I stored my Java objects in an OO database today, would I be able to load that data into my .NET objects tomorrow without having to completely change the database? What about in the future, when I move on from .NET objects to message-oriented programming or agent-oriented programming?

Ultimately, the choice between an OO database and a relational database really depends on the particular requirements of the project at hand. However, the thought of tying an application to an OO database at this point in time gives me pause. It could lock me into a technology that works today but is superseded tomorrow. On several projects I have worked on, we totally revamped the core technology (typically ASP to ASP.NET), but we rarely scrapped and recreated the database. The database engine might change over the years (SQL Server 6.5 to 7.0 to 2000 to 2005), but the data model survives.


UPDATE: Added a followup part 2 to this post on the topic of granular control.

We have all experienced the trouble caused by plugins that break when the application that hosts them gets upgraded. This seems to happen every time I upgrade Firefox or Reflector.

On a certain level, this is the inevitable result of balancing stability with innovation and improvements. But I believe it is possible to insulate a plugin architecture from versioning issues so that such breakage happens very infrequently. In designing a plugin architecture for Subtext, I hope to avoid breaking existing plugins during upgrades except for major version upgrades. Even then, I would hope to avoid breaking changes unless absolutely necessary.

I am not going to focus on how to build a plugin architecture that dynamically loads plugins. There are many examples of that out there. The focus of this post is how to design plugins for change.

Common Plugin Design Pattern

A common plugin design defines a base interface for all plugins. This interface typically has an initialization method that takes in a parameter which represents the application via an interface. This might be a reference to the actual application or some other instance that can represent the application to the plugin on the application’s behalf (such as an application context).

public interface IPlugin
{
   void Initialize(IApplicationHost host);
}

public interface IApplicationHost
{
   //To be determined.
}

This plugin interface not only provides the application with a means to initialize the plugin, but it also serves as a marker interface which helps the application find it and determine that it is a plugin.

For applications with simple plugin needs, this plugin interface might also have a method that provides a service to the application. For example, suppose we are building a spam filtering plugin. We might add a method to the interface like so:

public interface IPlugin
{
   void Initialize(IApplicationHost host);

   bool IsSpam(IMessage message);
}

Now we can write an actual plugin class that implements this interface.

public class KillSpamPlugin : IPlugin
{
    public void Initialize(IApplicationHost host)
    {
    }

    public bool IsSpam(IMessage message)
    {
        //It is all spam to me!
        return true;
    }
}

For applications that will have many different plugins, it is common to have multiple plugin interfaces that all inherit from IPlugin such as ISpamFilterPlugin, ISendMessagePlugin, etc…

Problems with this approach

This approach is not resilient to change. The application and the plugin interface are tightly coupled. Should we want to add a new operation to the application that plugins can handle, we would have to add a new method to the interface. This would break any plugins that have already been written against the interface. We would like to be able to add new features to the application without having to change the plugin interface.

Immutable Interfaces

When discussing interfaces, you often hear that an interface is an invariant contract. This is true when considering code that implements the interface. Adding a method to an interface in truth creates a new interface. Any existing classes that implemented the old interface are broken by changing the interface.

As an example, consider our plugin example above. Suppose IPlugin is compiled in its own assembly. We also compile KillSpamPlugin into the assembly KillSpamPlugin, which references the IPlugin assembly. Now in our host application, we try to load our plugin. The following example is for demonstration purposes only.

string pluginType  = "KillSpamPlugin, KillSpamPlugin";
Type t = Type.GetType(pluginType);
ConstructorInfo ctor = t.GetConstructor(Type.EmptyTypes);
IPlugin plugin = (IPlugin)ctor.Invoke(null);

This works just fine. Now add a method to IPlugin and just compile that assembly. When you run this client code, you get a System.TypeLoadException.

A Loophole In Invariant Interfaces?

However in some cases this invariance does not apply to the client code that references an object via an interface. In this case, there is a bit of room for change. Specifically, you can add new methods and properties to that interface without breaking the client code. Of course the code that implements the interface has to be recompiled, but at least you do not have to recompile the client.

In the above example, did you notice that we didn’t have to recompile the application when we changed the IPlugin interface? This is true for two reasons. First, the application does not reference the new method added to the IPlugin interface. If you had changed an existing signature, there would have been problems. Second, the application doesn’t implement the interface, so changing it doesn’t require the application to be rebuilt.

A Better Approach

So how can we apply this to our plugin design? First, we need to look at our goal. In this case, we want to isolate changes in the application from the plugin. In particular, we want to make it so that the plugin interface does not have to change, but allow the application interface to change.

We can accomplish this by creating a looser coupling between the application and the plugin interface. One means of doing this is with events. Rather than having the plugin define various methods that the application can call, we return to the first plugin definition above, which only has one method, Initialize, which takes in an instance of IApplicationHost. IApplicationHost looks like the following:

public interface IApplicationHost
{
    event EventHandler<CommentArgs> CommentReceived;
}
    
//For Demonstration purposes only.
public class CommentArgs : EventArgs
{
    public bool IsSpam;
}

Now if we wish to write a spam plugin, it might look like this:

public class KillSpamPlugin : IPlugin
{
    public void Initialize(IApplicationHost host)
    {
        host.CommentReceived 
               += new EventHandler<CommentArgs>(OnReceived);
    }

    void OnReceived(object sender, CommentArgs e)
    {
        //It is still all spam to me!
        e.IsSpam = true;
    }
}

Now the application knows very little about a plugin other than that it has a single method. Rather than the application calling methods on the plugin, plugins simply choose which application events they wish to respond to.

This is the loose coupling we hoped to achieve. The benefit of this approach is that the plugin interface pretty much never needs to change, yet we can change the application without breaking existing plugins. Specifically, we are free to add new events to the IApplicationHost interface without problems. Existing plugins will ignore these new events while new plugins can take advantage of them.

Of course, it is still possible to break existing plugins with changes to the application. By tracking dependencies, we can see that the plugin references both the IApplicationHost interface and the CommentArgs class. Any changes to the signature of an existing property or method in these types could break an existing plugin.

Event Overload

One danger of this approach is that if your application is highly extensible, IApplicationHost could end up with a laundry list of events. One way around that is to categorize events into groups via properties of the IApplicationHost. Here is an example of how that can be done:

public interface IApplicationHost
{
    UIEventSource UIEvents { get; }
    MessageEventSource MessageEvents { get; }
    SecurityEventSource SecurityEvents { get; }
}

public class UIEventSource
{
    public event EventHandler PageLoad;
}

public class SecurityEventSource
{
    public event EventHandler UserAuthenticating;
    public event EventHandler UserAuthenticated;
}

public class MessageEventSource
{
    public event EventHandler Receiving;
    public event EventHandler Received;
    public event EventHandler Sending;
    public event EventHandler Sent;
}

In the above example, I group events into event source classes. This way, the IApplicationHost interface stays a bit more uncluttered.

Caveats and Summary

So in the end, having the plugins respond to application events gives the application the luxury of not having to know much about the plugin interfaces. This insulates existing plugins from breaking when the application changes, because there is less need for the plugin interface to change often. Note that I did not cover dealing with strongly named assemblies. In that scenario, you may have to take the additional step of publishing a publisher policy to redirect the version of the application interface assembly that the plugin expects.
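For illustration, the redirect itself boils down to a binding redirect like the following, whether it ships in a publisher policy assembly or simply in the host’s configuration file (the assembly name, public key token, and version numbers here are hypothetical):

<configuration>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <!-- Hypothetical host interface assembly. -->
                <assemblyIdentity name="ApplicationHost"
                    publicKeyToken="0123456789abcdef" />
                <bindingRedirect oldVersion="1.0.0.0-1.9.9.9"
                    newVersion="2.0.0.0" />
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>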


We just had a short-lived but crazy loud lightning storm. I was up late working on Subtext because I couldn’t sleep, when I started hearing loud claps of thunder. The eerie part is that between the claps, it was relatively silent outside until the next one hit.

I counted less than a second between some of the lightning flashes and the resulting thunder, meaning the bolts were pretty damn close. The immense loudness was awesome and set off all the car alarms in the neighborhood, as well as a building alarm, perhaps at the school or church nearby. I was busy trying to shut down my computer.

As the lightning and thunder subsided, the rain started to come down really hard for about 15 minutes. Then nothing. Strangely enough, my dog didn’t make a sound. I wonder if she slept through the whole thing.


Watched a free screening of An Inconvenient Truth last night in Downtown Los Angeles as part of a film festival. Prior to the screening, Jon Bon Jovi and Richie Sambora played some live music, and Al Gore was on hand to introduce the movie. Mayor Villaraigosa was also there, oddly suited in a tux and comically referring to Richie as “Richie Santora.”

The basic premise of the documentary is that humans are affecting the Earth’s climate, this is a very bad thing, we can do something about it, and we don’t have to destroy the economy to do it.

This particular premise is also the subject of much controversy, which Al addresses in the documentary. He points out that there is an overwhelming scientific consensus that humans are affecting the Earth’s climate. A study of 928 peer-reviewed abstracts in scientific journals between 1993 and 2003 on the topic of climate change showed that absolutely none disagreed with the consensus. The controversy, in large part, has been hyped up by the popular media as well as a minority of naysayers.

The movie also points out that the idea of humans affecting the climate is not so farfetched and in fact has a precedent. Consider the hole in the ozone layer that was a big worry back in the 80s. The EPA states that ozone depletion was caused by human activities.

Recent headlines are showing the effects of global warming.

The key point to stress is that Al Gore does not believe that we have to sacrifice our economy or jobs in order to become more ecologically friendly. In fact, he believes it will lead to more jobs. As a case in point, Japan’s auto industry has the most stringent auto emission standards and the highest eco rankings, and it is kicking the butts of Ford and GM in profits.

One of the most revealing points (as well as embarrassing for the U.S.) is when he shows that U.S. auto emission standards are significantly below China’s! China! Since when is China the example to the U.S. on clean air?

At the end, Al points out that turning around the negative effects of human activities can be done, and there is a precedent. We don’t worry much about ozone depletion today because the U.S. took the lead in creating treaties to ban CFC production.

This is a worthwhile movie to see whatever your political affiliation, because it is an issue that will eventually affect everyone regardless of politics. Republicans aren’t immune to the weather. If you are skeptical that we are causing global warming, then see it anyway and refute the claims.


I am working on some code using the Model View Presenter pattern for an article I am writing. I am using an event-based approach based on the work that Fowler did. For the sake of this discussion, here is an example of a simplified view interface.

public interface IView
{
    event EventHandler Load;
}

In the spirit of TDD, I follow this up with the shell of my Presenter class:

public class Presenter
{
    public Presenter(IView view)
    {
        throw new NotImplementedException("Not implemented.");
    }
}

And this is where I reached my first dilemma. What is the best way to write my first unit test, to verify that the presenter class attaches itself to the view’s events? Well, I could write a stub class that implements the interface and add a method to the stub that raises the event. In this example, that would be quite easy, but in the real world the interface might have multiple properties or methods, and why bother going through the trouble of implementing them all just to test one event? This is where a mock testing framework such as Rhino Mocks comes into play.
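For comparison, a hand-rolled stub might look something like this (a hypothetical sketch; a realistic interface would have many more members to fake):

public class StubView : IView
{
    public event EventHandler Load;

    //A test helper to raise the Load event on demand.
    public void RaiseLoad()
    {
        if (Load != null)
        {
            Load(this, EventArgs.Empty);
        }
    }
}

With Rhino Mocks, none of that boilerplate is necessary: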

[Test]
public void VerifyAttachesToViewEvents()
{
    MockRepository mocks = new MockRepository();
    IView viewMock = (IView)mocks.CreateMock(typeof(IView));
    viewMock.Load += null;
    LastCall.IgnoreArguments();
    mocks.ReplayAll();
    new Presenter(viewMock);
    mocks.VerifyAll();   
}

The second line of code creates a dynamic proxy that implements the IView interface. In the third line, I set an expectation that the Load event will be attached to. The line afterwards tells Rhino Mocks to ignore the arguments in the last call. In other words, Rhino Mocks will expect that the Load event will be attached, but won’t worry about which method delegate gets attached to the event. Without that line, the test would expect that null is attached to the Load event, which we do not want.

Finally we call ReplayAll(). I kinda think this is a misnomer since what it really is doing, as far as I know, is telling the mock framework that we are done setting all our expectations and we are going to actually conduct the test now. Up until this method call, every method, property, or event set on the mock instance is telling the mock that we are going to call that particular member. If one of the expected members is not invoked, the test has failed.

So finally after setting all these expectations, I create an instance of Presenter which is the code being tested. I then ask the mock framework to verify that all our expectations were met. Of course this test fails, which is good, since I haven’t yet implemented the presenter. Implementing the presenter is pretty straightforward.

public class Presenter
{
    IView view;
    public Presenter(IView view)
    {
        this.view = view;
        this.view.Load += new EventHandler(view_Load);
    }

    void view_Load(object sender, EventArgs e)
    {
        throw new Exception("Not implemented.");
    }
}

Now my test passes. But wait! It gets better. Now suppose I want to write a new test to test that the presenter handles the Load event. How do I raise the Load event on my mock IView instance? Rhino Mocks provides a way. First I will add a boolean property to the Presenter class named EventLoaded and then write the following test. This will allow me to know whether or not the event was raised. This is a contrived example of course. In a real project, you probably have some other condition you could test to verify that an event was raised.

I then write my test.

[Test]
public void VerifyLoadEventHandled()
{
    MockRepository mocks = new MockRepository();
    IView viewMock = (IView)mocks.CreateMock(typeof(IView));
    viewMock.Load += null;
    IEventRaiser loadRaiser 
         = LastCall.IgnoreArguments().GetEventRaiser();
    mocks.ReplayAll();
    Presenter presenter = new Presenter(viewMock);
    loadRaiser.Raise(viewMock, EventArgs.Empty);
    mocks.VerifyAll();
    Assert.IsTrue(presenter.EventLoaded);
}

This test looks similar to the last test, but note the fourth line. This line creates an event raiser for the Load event (ignoring arguments to the event, of course). I can use the event raiser later to raise the event after I create the presenter. Running this test fails, as expected. We have to finish the implementation of the Presenter class as follows:

public class Presenter
{
    IView view;
    public Presenter(IView view)
    {
        this.view = view;
        this.view.Load += new EventHandler(view_Load);
    }

    public bool EventLoaded
    {
        get { return this.eventLoaded; }
        set { this.eventLoaded = value; }
    }

    bool eventLoaded;

    void view_Load(object sender, EventArgs e)
    {
        this.eventLoaded = true;
    }
}

Now when I run the test, it succeeds. Pretty dang nifty. Many thanks to Ayende for clearing up some confusion I had with the Rhino Mocks documentation surrounding events.


Via a reader, I learned about iBox, which is similar to Lightbox JS but with more features. It boasts the following benefits:

  • Small size. Total JavaScript is under 11 KB.
  • Versatile. Handles images, external HTML pages, inline DIV elements.
  • Self contained. The script does not require any external libraries.
  • Support for non-JS users, which isn’t hugely important to me.

Check out the iBox Demo.


Greg Duncan is now my hero for this find. This just proves that I am still unable to let go of the 80s.

Unfortunately, some of the flagship songs of certain groups are missing, replaced by other videos. For example, instead of “Pour Some Sugar on Me” from Def Leppard, we get “Photograph.” Likewise, where is “Love Shack” from the B-52s?

Some notable videos (both good and bad):

There are too many to list.