comments edit

Yesterday afternoon the wife and I took a bike ride along Ballona Creek. Now the word creek might conjure up an image for you of tall brushy trees on grassy shores following the languid contours of a waterway, occasionally broken up by a small rapid or waterfall. Maybe the picture on the right reflects that vision.

If so, you do not live in Los Angeles.

The picture at the left here reflects the concrete reality of Los Angeles waterways. Los Angeles rivers and creeks are harshly angular and typically filled with a murky green substance one hesitates to call “water” for fear of insulting the life-sustaining liquid.

Now this may not look like a nice place for a bike ride, but consider for a moment the alternative: biking down Venice Boulevard. Dodging gas-guzzling H2 drivers in vehicles so large they require binoculars to see us lowly bikers. Stopping at a light every five seconds. Breathing in the smog-ridden fumes of cars that I know have failed their smog check. Nah, that is not for me.

Personally, my favorite rides are up in the ocean-tinged, fresh-air hills of the Santa Monica Mountains (those of us who have lived in Alaska pretty much call everything here in Los Angeles a “hill”, with the exception of Mt. Baldy).

But there is something to be said for a ride that one can take right from the door of his or her own home. A short ride down the street takes us to the Ballona Creek Bike Trail entrance. From there, miles and miles of trail without interruption from stop lights and motorized vehicles. And for the most part, it doesn’t really smell all that bad, and there are some nice views of L.A. here and there.

But what really makes this ride great is the prize at the end: Marina Del Rey and Playa Del Rey. As the water turns from green sludge to a bluer color that reminds you that it is indeed water we are riding next to, the view opens up to the harbor. Sailboats drift by as the trail continues on and takes you right along the beach, with people playing volleyball, jumping in the water, and throwing sand all over the place. It is really beautiful there.

We stopped at Tanner’s Coffee to sit back for a moment and enjoy a frosty beverage (it was the only thing we could find at that moment) and then headed back. Next time I will be sure to take some pictures.

comments edit

I recently ran into a perplexing problem that I believe is a bug in ASP.NET 2.0.

Subtext dynamically loads UserControls into the page when fulfilling a request. When commenting on a post, we load a user control that contains the comment form fields and some instances of the RequiredFieldValidator validation control.

While testing, I noticed I kept getting javascript errors when trying to post a comment. Here is the error message:

missing ; before statement

Viewing the source, I noticed the error occurs in the javascript generated by the ASP.NET runtime for client side validation. Here is a tiny snippet of the line with the problem.

var ControlWithValidators.ascx_validateThat = document.all ? ...

Notice the problem? There is a dot in the variable name, which JavaScript does not like since ControlWithValidators is not an object. What the?

After some digging around, I found the culprit. I won’t bore you with the nitty gritty details. When dynamically adding controls to a page, it is a good idea to specify an ID before adding them to the Controls collection, so that the controls can reload their state on postback. However, the snippet of code I found was giving the controls an ID that contained a period.

To prove this was indeed the culprit, I created a new simple VS.NET 2005 Web Application Project that exhibits the bug. The page dynamically loads a user control that contains a validation control. Here is the Page_Load method of the web page. When you compile this and run the page, you will see the javascript error.

protected void Page_Load(object sender, EventArgs e)
{
    Control control = LoadControl("ControlWithValidators.ascx");
    control.ID = "ControlWithValidators.ascx";
    placeholder.Controls.Add(control);
}

The quick fix was to simply replace the period with an underscore when assigning the ID. In other words, the Page_Load above becomes something like this:
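
protected void Page_Load(object sender, EventArgs e)
{
    Control control = LoadControl("ControlWithValidators.ascx");
    //Replace the period so the client script variable that
    //ASP.NET generates is a legal JavaScript identifier.
    control.ID = "ControlWithValidators.ascx".Replace('.', '_');
    placeholder.Controls.Add(control);
}

Hopefully this helps you if you ever run into something this obscure.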

If you are interested in duplicating the bug, you can download the validator bug demo solution here. By the way, does anyone know the best place to report this kind of thing?

comments edit

Here is a quick little nugget for you custom provider implementers. I recently scanned through this article on MSDN that describes how to implement a custom provider and found some areas for improvement.

Reading the section Loading and Initializing Custom Providers I soon encountered a bad smell. No, it was not my upper lip, but rather a code smell. Following the samples when implementing custom providers would lead to a lot of duplicate code.

It seemed to me that much of that code is very generic. Did I just say generics?

Simone (blog in Italian), a Subtext developer, recently refactored all our Providers to inherit from the Microsoft ProviderBase class.

One of the first things he did was to create a generic provider collection:

using System;
using System.Configuration.Provider;

public class GenericProviderCollection<T> : ProviderCollection
    where T : ProviderBase
{

    public new T this[string name]
    {
        get { return (T)base[name]; }
    }

    public override void Add(ProviderBase provider)
    {
        if (provider == null)
            throw new ArgumentNullException("provider");

        if (!(provider is T))
            throw new ArgumentException
                ("Invalid provider type", "provider");

        base.Add(provider);
    }
}

That relatively small bit of code should keep you from having to write a bunch of cookie-cutter provider collections. But there is more that can be done. Take a look at the LoadProviders method in Listing 6 of that article.

There are two things that bother me about that method. First is the unnecessary double-checked locking, which Jeffrey Richter pooh-poohs in his book CLR via C#. The second is that the method is begging for code re-use. I created a static helper class with the following method to encapsulate this logic:

/// <summary>
/// Helper method for populating a provider collection
/// from a provider section handler.
/// </summary>
/// <typeparam name="T">The type of provider to load.</typeparam>
/// <returns>The collection of configured providers.</returns>
public static GenericProviderCollection<T> LoadProviders<T>(
    string sectionName, out T provider) where T : ProviderBase
{
    // Get a reference to the provider section
    ProviderSectionHandler section = (ProviderSectionHandler)
        WebConfigurationManager.GetSection(sectionName);

    // Load registered providers and point provider
    // to the default provider
    GenericProviderCollection<T> providers =
        new GenericProviderCollection<T>();
    ProvidersHelper.InstantiateProviders(
        section.Providers, providers, typeof(T));

    provider = providers[section.DefaultProvider];
    if (provider == null)
        throw new ProviderException(string.Format(
            "Unable to load default '{0}' provider", sectionName));

    return providers;
}

This method returns a collection of providers for the specified section name. It also returns the default provider via an out parameter. So now, within my custom provider class, I can let the static constructor instantiate the provider collection and set the default provider in one fell swoop like so:

public abstract class SearchProvider : ProviderBase
{
    private static SearchProvider provider = null;
    private static GenericProviderCollection<SearchProvider> 
       providers = ProviderHelper.LoadProviders<SearchProvider>
           ("SearchProvider", out provider);
}
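
For reference, the configuration section this code expects would look something like the following sketch. The section handler and provider type names here are illustrative placeholders, not actual Subtext types; point them at your own implementations:

<configSections>
    <section name="SearchProvider"
        type="MyApp.Configuration.ProviderSectionHandler, MyApp" />
</configSections>

<SearchProvider defaultProvider="MySearchProvider">
    <providers>
        <add name="MySearchProvider"
            type="MyApp.Search.MySearchProvider, MyApp" />
    </providers>
</SearchProvider>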

By employing the power of generics, writing new custom providers with a minimal amount of code is a snap. Hope you find this code helpful.

comments edit

Jim Holmes and James Avery announced that their new book Windows Developer Power Tools is going to be published in November.

Since I haven’t read the book yet, I cannot claim it will absolutely be the must read book of the summer. But I have a good feeling about this one for some reason.

Ok. Ok. I admit, I am biased because I am a contributor. Take a look at the table of contents here. My contributions are in Chapter 11, “Accessing Subversion and CVS with TortoiseSVN/CVS” as well as in Chapter 13 where I wrote about one of my favorite topics, Subtext.

I also reviewed the chapter on MbUnit with Andrew Stopford.

Having seen a couple other draft chapters, I am really excited about this book and not just because I contributed. Open source projects can be notorious for lacking in documentation. I can’t be sure, but I imagine that this book may be the impetus for some of these projects to actually write some documentation. But I am going out on a limb by saying that.

My contribution to this book is much more significant and original than my pseudo-contribution to another book this year.

comments edit

You probably didn’t know this, but this blog is more than a technical blog. It is also your source for solving all the world’s problems. For example, there is the age-old question, “Where did the matching sock go?” Did you notice the extra amount of lint in the dryer? Problem solved.

Today I noticed something interesting in the cupboard. We bought some Arizona Iced Tea Mix. It’s the powder stuff you mix with water to produce a refreshing drink. I noticed that not only is it packaged in a tin, but right there on the tin it lets you know that it is a collectible tin. Really? Wow! I can’t wait to get the whole set!

So that got me thinking: it makes a ton of sense to label it as a collectible. Otherwise that tin becomes an ordinary piece of garbage when the little tea packages run out. But now, my friends, it is a wonderful mantelpiece ready to be displayed with pride to other collectors. Perhaps I will be the one to puff out my chest and brag about how I found the rare Peach Tea Tin with the misprinted logo, eliciting a bit of envy among other collectors.

But why stop there? I notice that candy bars produce a lot of litter on the streets. Why not have our Snickers bars come in collectible wrappers? Or that impossible-to-break plastic shell that electronic devices are packaged in? Maybe if we made every newspaper issue a special collector’s edition, we could really reduce our landfill clutter.

The end result is that we will have moved from a centralized landfill structure (and all us geeks know centralized is bad! Booo!) to a decentralized peer-to-peer (good! yay!) landfill structure. It is the Napsterization of garbage…er, collectibles (though perhaps BitTorrentization is more accurate, Napsterization just sounds better).

comments edit

Okay, it is survey time for all you Subtext users! I would like to know which Subtext skin you use for your blog. Please leave a comment with the skin that you use. If you use a custom skin, then just say Custom.

I have heard many say that the skins in Subtext look like they were designed by a developer rather than a designer. Well, surprise, surprise, that is probably because they were! I have a sneaking suspicion that many of them are not in use.

The reason this is important to me is that rather than simply continuing to add more and more skins to Subtext (generating a huge list), I would like to weed out some of the more drab skins. But instead of deleting skins, I was thinking I could simply replace them with newer skins that still fit the spirit of the original skin’s name. For example, I might replace the skin White with a new design that is very light in color and tone. Make sense?

comments edit

I have been considering using a separate library for generating the RSS and Atom feeds in Subtext. My first thought was to use RSS.NET but I noticed that there seemed to be no recent activity.

I contacted the admin and found out that RSS.NET has been bought by ToolButton Inc and will be released as a product. Very cool!

In the meantime, I still need an open source RSS library to package up with Subtext. Fortunately, RSS.NET was developed under the MIT license which, as I mentioned before, is very compatible with our BSD license.

So one option is to simply copy the code into our Subtext code base. My only qualm about this approach is that I would like to keep stand-alone libraries that are not central to the Subtext domain out of the Subtext codebase as much as possible, preferring to reference them as external libraries.

Ideally, I would like to start a new project that is essentially a fork of RSS.NET, perhaps called FeedGenerator.NET (call me the forkmaster). I could probably host it on CodePlex, which would give me an opportunity to try CodePlex out and provide feedback. Would anyone find such a library useful other than us blog engine developers? Anyone have a better name?

I probably wouldn’t spend much time on this project except to provide changes and bug fixes as needed by Subtext. It would by no means be intended to compete with Web 2.0 Tools products, since they are probably going to be much more full featured than our humble needs. Besides, under the MIT license, any improvements we make would be available for them to roll into their product (following the terms of the license of course). It is the beauty of the MIT and BSD licenses.

Any thoughts? Suggestions? Etc.?

comments edit

In my last post, one of the restrictions listed when running in medium trust is that HTTP access is only allowed to the same domain. It is possible in web.config to add a single domain via the originUrl attribute of the <trust> element as described by Cathal.
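
For example, the web.config entry might look something like this (the originUrl value is treated as a regular expression, and example.com here is just a placeholder for the domain you need to reach):

<system.web>
    <!-- Allow outbound HTTP requests to one additional
         domain while running in medium trust. -->
    <trust level="Medium" originUrl="http://www\.example\.com/.*" />
</system.web>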

Adding more than one domain requires editing machine.config or creating a custom trust policy, neither of which will be accessible to many users in a hosted environment. This may pose a big problem for those who care about trackbacks, since even if you could modify machine.config, there is no way to predetermine every domain you will send a trackback to.

One solution is to beg your hosting environment to relax the WebPermission in medium trust. If trackbacks and pingbacks are important to you, you shouldn’t be above begging. ;)

Another option is for someone to create a passthrough trackback system in a fully trusted environment. Essentially, this would act on behalf of the medium trust trackback creator and forward the trackback to its final destination. It would require blogging engines affected by medium trust to trust this single domain. Of course, the potential for abuse is high and the rewards are low (unless people out there absolutely love trackbacks).

log4net logging aspnet comments edit

UPDATE: Mea culpa! It seems that Log4Net has no problems with medium trust and an external log4net file. I have written an updated post that talks about the problem I did run into and how I solved it.

A while ago I wrote a quick and dirty guide to configuring Log4Net for ASP.NET. I originally believed this technique did not work with ASP.NET 2.0 when running in medium trust, but as the update above notes, it continues to work with medium trust!

While digging into the problem I found this blog post (from an aptly titled blog) by Kevin Jones.

This article from Microsoft discusses the ramifications of running ASP.NET 2.0 in medium trust more thoroughly. Here is a list of constraints placed on medium trust applications.

The main constraints placed on medium trust Web applications are:

  • OleDbPermission is not available. This means you cannot use the ADO.NET managed OLE DB data provider to access databases. However, you can use the managed SQL Server provider to access SQL Server databases.
  • EventLogPermission is not available. This means you cannot access the Windows event log.
  • ReflectionPermission is not available. This means you cannot use reflection.
  • RegistryPermission is not available. This means you cannot access the registry.
  • WebPermission is restricted. This means your application can only communicate with an address or range of addresses that you define in the <trust> element.
  • FileIOPermission is restricted. This means you can only access files in your application’s virtual directory hierarchy. Your application is granted Read, Write, Append, and PathDiscovery permissions for your application’s virtual directory hierarchy.

You are also prevented from calling unmanaged code or from using Enterprise Services.

Fortunately there is a way to specify that a configuration section within web.config should not require ConfigurationPermission. Simply add the requirePermission="false" attribute to the <section> declaration within the <configSections> area like so:

<configSections>
    <section name="log4net"
        type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"
        requirePermission="false" />
</configSections>

Unfortunately, this only applies to configuration sections within the web.config file itself. I have not found a way to specify that ASP.NET should not require ConfigurationPermission on an external configuration file. As I stated in my post on Log4Net, I prefer to put my Log4Net configuration settings in a separate configuration file. If anyone knows a way to do this, please let me know!

So in order to get Log4Net to work, I added the declaration above to the web.config file and copied the settings from the Log4Net.config file (pretty much everything except the top xml declaration) into web.config. I then removed the assembly level XmlConfigurator attribute from AssemblyInfo.cs, as it is no longer needed. Instead, I added the following line to the Application_Start method in Global.asax.cs:

protected void Application_Start(Object sender, EventArgs e)
{
    log4net.Config.XmlConfigurator.Configure();
}
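
For reference, the assembly level attribute I removed looked something like this (assuming the separate Log4Net.config file from my earlier guide):

//Removed from AssemblyInfo.cs - no longer needed.
[assembly: log4net.Config.XmlConfigurator(
    ConfigFile = "Log4Net.config", Watch = true)]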

So in summary, here are the changes I made to get Log4Net to work again in medium trust.

  • Added the log4Net section declaration in the configSections section of web.config and made sure the requirePermission attribute is set to the value false.
  • Moved the log4Net settings into web.config.
  • Removed the assembly level XmlConfigurator attribute from AssemblyInfo.cs.
  • Added the call to XmlConfigurator.Configure() to the Application_Start method in Global.asax.cs.

I have been working on getting the version of Subtext in our Subversion trunk to run in a medium trust environment, but there have been many challenges. Some of the components we use, such as the FreeTextBox, do not appear to run in a medium trust environment. Fortunately, we have a workaround for that issue, which is to change the RichTextEditor node in web.config to use the PlainTextRichTextEditorProvider (which is a mouthful and should probably be renamed to PlainTextEditorProvider).

comments edit

I recently poked lighthearted fun at one of the Apple commercials, the one called “Network”.

There are many who dislike the ads and claim that they are blatantly untrue. One example is this humorous and well-written rebuttal by Seth Stevenson.

Seth points out…

The final straw, for me, is that the spots make unconvincing claims. The one titled “Network” has a funny bit where “that new digital camera from Japan” is represented by a Japanese woman in a minidress. While Hodgman has trouble talking with the woman, Long speaks Japanese and shares giggles with her because “everything just kind of works with a Mac.” Now, I happen to have a digital camera from Japan, and it works just fine with my PC. It did from the moment I connected it.

Good point, Seth. Perhaps the ad would have been more accurate had it been a Japanese printer rather than a digital camera. And I mean a printer that really is from Japan. My wife’s dad left us an Epson PM-950C printer (among other things). This printer is not listed on the Epson U.S. site, but presumably can be found on the Japanese site.

In order to test it out, we plugged it into the USB port on her five-year-old iBook and printed a web page. Worked like a charm.

Fast forward a couple of weeks, and we wanted to use the printer on my PC. We plugged it in, got the Windows cannot find the driver for the hardware dialog, and were unable to print. Effectively, it didn’t just work. I then downloaded a driver that is supposed to be the U.S. equivalent. Still didn’t work.

My wife laughed out loud and said, “See! The ads are right!” I caught myself making excuses, explaining to her that there are technical reasons beyond her comprehension and that I just needed to toy with it for a moment to get it to work.

But I caught myself. Why am I making excuses for PCs and Microsoft? The experiment was quite simple. I plugged a Japanese printer into a Macintosh, and we were able to print. I plugged the same printer into our PC (which was recently re-installed), and it didn’t work. Anecdotal evidence, sure, but it adds weight to the Apple claim that things just work with the Apple and don’t just work with Windows.

And responding with, “Well it will with Vista” is not a satisfactory answer. Tomorrow cars will fly, the U.S. will win the World Cup, and we will control our computers by having conversations with them and manipulating holograms. But the fact of the matter is that we all live in today and I will give Apple a bit of credit here.

comments edit

A while ago I had the idea of posting a picture of the day. Of course I didn’t mean every day, but the title might lead one to believe so. Therefore in order to reduce confusion, here is the next photo in my Picture of the Moment series.

Geisha Trio

This is the most favorited and, according to Flickr, the most interesting picture in my Flickr account. So naturally that tells you that it wasn’t me who took this photo, but my lovely wife while on a trip to Japan. I was reminded of this picture because someone recently added a comment.

Speaking of Flickr, I love how you can comment on specific regions of a picture by drawing a box around the region. If you click on the image, you can see my feeble attempts at humor by moving your mouse over the photo. My comments are in sharp contrast to the elegance of three Geisha walking through a crowd.

comments edit

Jeff asks the question, “Isn’t programming supposed to be fun?” Ha ha ha, naive little bunny. As if the person who invented programming had fun in mind. Silly rabbit.

You’ve heard the cliche: the reason it is called work is because it is work. Programming is fun when it is just for fun.

Cynical jokes aside, I actually did work for fun at my last job. Well, technically, I worked for FUN. My former employer was bought by a company that then changed its name to Fun Technologies. It is listed under the symbol FUN on the Toronto Stock Exchange and the Alternative Investment Market of the London Stock Exchange.

When I visited headquarters in Toronto, the CEO gave me a hat with the ticker symbol emblazoned on the front. I think I still have it somewhere. I would wear it and go around telling people that I work for FUN. It was mildly funny for about two minutes. But as is my style, I dragged it out for thirty.

sql comments edit

Working as a team against a common database schema can be a real challenge. Some teams prefer to have their local code connect to a centralized database, but this approach can create many headaches. If I make a schema change to a shared database but am not ready to check in my code, that change can break the site for another developer. For a project like Subtext, a central database is simply not feasible.

Instead, I prefer to work on a local copy of the database and propagate changes via versioned change scripts. That way, when I check in my code, I can let others know which scripts to run on their local database when they get the latest source code. Of course, this can also become a big challenge as the number of scripts starts to grow and developers are stuck keeping track of which scripts they have run and which they haven’t.

That is why I always recommend to my teams that we script schema and data changes in an idempotent manner whenever possible. That way, it is much easier to batch updates together in a single file (per release, for example), and a developer simply runs that single script any time an update is made.

As an example, suppose we have a Customer table and we need to add a column for the customer’s favorite color. I would script it like so:

IF NOT EXISTS 
(
    SELECT * FROM [information_schema].[columns] 
    WHERE   table_name = 'Customer' 
    AND table_schema = 'dbo'
    AND column_name = 'FavoriteColorId'
)
BEGIN
    ALTER TABLE [dbo].[Customer]
    ADD FavoriteColorId int
END

This script checks for the existence of the FavoriteColorId column on the Customer table and adds the column if it doesn’t exist. You can run this script a million times, and it will only make the schema change once.

You’ll notice that I didn’t query against the system tables, instead choosing to look up the information in an INFORMATION_SCHEMA view named COLUMNS. This is the Microsoft recommendation, as they reserve the right to change the system tables at any time. The INFORMATION_SCHEMA views are part of the SQL-92 standard, so they are not likely to change.

There are 20 schema views in all, listed below with their purpose (aggregated from SQL Server Books Online). Note that in all cases, only data accessible to the user executing the query against the INFORMATION_SCHEMA views is returned.

Name                      Returns
CHECK_CONSTRAINTS         Every check constraint.
COLUMN_DOMAIN_USAGE       Every column that has a user-defined data type.
COLUMN_PRIVILEGES         Every column with a privilege granted to or by the current user in the current database.
COLUMNS                   Every column in the system.
CONSTRAINT_COLUMN_USAGE   Every column that has a constraint defined on it.
CONSTRAINT_TABLE_USAGE    Every table that has a constraint defined on it.
DOMAIN_CONSTRAINTS        Every user-defined data type with a rule bound to it.
DOMAINS                   Every user-defined data type.
KEY_COLUMN_USAGE          Every column that is constrained as a key.
PARAMETERS                Every parameter for every user-defined function or stored procedure in the database. For functions, this returns one row with return value information.
REFERENTIAL_CONSTRAINTS   Every foreign key constraint in the system.
ROUTINE_COLUMNS           Every column returned by table-valued functions.
ROUTINES                  Every stored procedure and function in the database.
SCHEMATA                  Every database in the system.
TABLE_CONSTRAINTS         Every table constraint.
TABLE_PRIVILEGES          Every table privilege granted to or by the current user.
TABLES                    Every table in the system.
VIEW_COLUMN_USAGE         Every column used in a view definition.
VIEW_TABLE_USAGE          Every table used in a view definition.
VIEWS                     Every view in the system.

When selecting rows from these views, the view name must be prefixed with information_schema, as in SELECT * FROM information_schema.tables.

Please note that the INFORMATION_SCHEMA views are based on the SQL-92 standard, so some of the terms used in these views differ from the terms used in Microsoft SQL Server. For example, in the example above, I set table_schema = 'dbo'. The term schema here refers to the owner of the database object.

Here is another code example in which I add a constraint to the Customer table.

IF NOT EXISTS(
    SELECT * 
    FROM [information_schema].[referential_constraints] 
    WHERE constraint_name = 'FK_Customer_Color' 
      AND constraint_schema = 'dbo'
)
BEGIN
  ALTER TABLE dbo.Customer WITH NOCHECK 
  ADD CONSTRAINT
  FK_Customer_Color FOREIGN KEY
  (
    FavoriteColorId
  ) REFERENCES dbo.Color
  (
    Id
  )
END

I generally don’t go to all this trouble for stored procedures, user-defined functions, and views. In those cases, I use Enterprise Manager to generate a full drop and create script. When a stored procedure is dropped and re-created, you don’t lose data as you would if you dropped and re-created a table that contained data.

With this approach in hand, I can run an update script with new schema changes, confident that any changes in the script I have already applied will not be applied again. The same approach works for lookup data as well: simply check for the data’s existence before inserting it, as in the sketch below. It is a little bit more work up front, but it is worth the trouble, and schema changes happen less frequently than code or stored procedure changes.
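
For example, a lookup data script might look something like this. I am assuming a Color table with a Title column here purely for illustration; substitute your own table and values:

IF NOT EXISTS
(
    SELECT * FROM [dbo].[Color]
    WHERE Title = 'Red'
)
BEGIN
    INSERT INTO [dbo].[Color](Title)
    VALUES('Red')
END

Run it once and the row is inserted; run it a million times and you still end up with exactly one 'Red' row.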

code comments edit

I got a lot of great feedback from my post on Building Plugins Resilient to Versioning, which proposes an event-based self-subscription model for plugins.

Craig Andera points out that we can get many of the same benefits by having plugins implement an abstract base class instead of an interface. This is definitely a workable solution and is probably isomorphic to the event-based approach.

Dimitri Glazkov was the voice of dissent in the comments to the post, pointing out that the application loses granular control over plugins in this approach. I was not convinced at the time, as I was focused on keeping the surface area of the plugin interface that is exposed to the application very small. When the surface area is small, there is less reason for the interface to change, and thus less reason to break it.

However, a simple thought experiment makes me realize that we do need to have the application retain granular control over which plugins can respond to which events. This is the scenario.

Suppose our plugin framework defines three events: MessageSending, MessageSent, and MessageReceiving. Someone writes a plugin that responds to all three events. Later, someone else writes a plugin that responds only to MessageReceiving. If the blog user wants to chain the functionality of that plugin to the existing plugin, so that both fire when a message is received, then all is well.

But suppose this new plugin’s handling of the MessageReceiving event should replace the handling of the old plugin. How would we do this? We can’t just remove the old plugin, because then we lose its handling of the other two events. Dimitri was right all along on this point: we need more granular control.

It makes sense to have some sort of admin interface in which we can check and uncheck individual plugins and control whether or not they are allowed to respond to specific events. Fortunately, this is not too difficult with the event-based approach.

.NET’s event pattern is really an implementation of the Observer pattern, but using delegates rather than interfaces. After all, what is a delegate under the hood but yet another class? When any code attaches a method to an event, it is in effect registering a callback method with the event source. This is the step where we can obtain more granular information about our plugins.

In the application that hosts the plugins, events that require this granular control (not every event will) could be defined like so.

private event EventHandler messageReceived;

public event EventHandler MessageReceived
{
    add
    {
        RegisterPlugin(value.Method.DeclaringType);
        AddEvent(value);
    }
    
    remove
    {
        UnRegisterPlugin(value.Method.DeclaringType);
        RemoveEvent(value);
    }
}

So when adding or removing a handler, we register or unregister the plugin with the system and then add or remove the handler from some internal structure. For the purposes of this discussion, I’ll present some simple implementations.

void AddEvent(EventHandler someEvent)
{
    //We could choose to add the event 
    //to a hash table or some other structure
    this.messageReceived += someEvent;
}

void RemoveEvent(EventHandler someEvent)
{
    this.messageReceived -= someEvent;
}
                
private void RegisterPlugin(Type type)
{
    //using System.Diagnostics;
    StackTrace stack = new StackTrace();
    StackFrame currentFrame = stack.GetFrame(1);
    Console.WriteLine("Registering: " + type.Name 
         + " to event " + currentFrame.GetMethod().Name);
}

private void UnRegisterPlugin(Type type)
{
    StackTrace stack = new StackTrace();
    StackFrame currentFrame = stack.GetFrame(1);

    Console.WriteLine("UnRegistering: " + type.Name 
        + " to event " + currentFrame.GetMethod().Name);
}

As stated in the comments, the AddEvent method attaches the event handler in the standard way. I could have chosen to put it in a hash table or some other structure. Perhaps in a real implementation I would.

The RegisterPlugin method examines the call stack so that it knows which event the plugin is registering for. In a real implementation, this would probably insert or update a record in a database somewhere so the application knows about the plugin. Note that this should happen when the application is starting up, or at least sometime before the user can start using the plugin. Otherwise, there is no point to having access control.

public void OnMessageReceived()
{
    EventHandler messageEvent = this.messageReceived;
    if(messageEvent != null)
    {
        //Walk the delegate chain and invoke only the handlers
        //whose declaring type is enabled for this event.
        Delegate[] delegates = messageEvent.GetInvocationList();
        foreach(Delegate del in delegates)
        {
            if (EnabledForEvent(del.Method.DeclaringType, 
                "MessageReceived"))
            {
                del.DynamicInvoke(this, EventArgs.Empty);
            }
        }
    }
}

Now, when we invoke the event handler, instead of simply invoking the event, we examine the delegate chain (depending on how we store the event handlers) and dynamically invoke only the event handlers that we allow. How is that for granular control?
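
For completeness, here is a trivial in-memory sketch of what an EnabledForEvent check might look like. This is purely illustrative: a real implementation would consult whatever store RegisterPlugin writes to, and the dictionary, key format, and default-to-enabled behavior here are all just assumptions for the sketch.

//using System.Collections.Generic;
private Dictionary<string, bool> enabledPlugins =
    new Dictionary<string, bool>();

private bool EnabledForEvent(Type pluginType, string eventName)
{
    //Key on the plugin type plus the event name.
    string key = pluginType.FullName + ":" + eventName;
    if (enabledPlugins.ContainsKey(key))
        return enabledPlugins[key];

    //Default to enabled until an admin disables the plugin.
    return true;
}

An admin page would then simply flip the boolean for a given plugin and event.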

In this approach, the implementation for the application host is a bit more complicated, but that complexity is totally hidden from the plugin developer, as it should be.

subtext comments edit

It seems like every day now someone asks me if I plan on moving Subtext over to CodePlex. I figure it would save me a lot of trouble if I just answered this question here once and for all.

Though of course I can’t answer once and for all; I can only answer for the here and now. And right now, I have thought about it but have not strongly considered it, for the following reasons.

Not Feeling the Pain

First of all, I am not really feeling a lot of pain with our current setup at SourceForge. We have CruiseControl.NET humming along nicely, we have a great build process, and we are very happy with Subversion. Life is good. Why should we change?

Also, we already made one switch from CVS to Subversion. To yet again switch source control systems is a big hassle. There would have to be a huge benefit to doing so to make it worthwhile. A minor benefit is not enough.

Source Control

As you know, I am a big fan of Subversion and TortoiseSVN. Source control bindings in Visual Studio have been the second biggest nightmare I have had the pleasure to deal with, topped only by FrontPage extensions. For example, I work with one client who uses Vault and another who uses Visual SourceSafe. Switching between the two is such a pain in the rear, as I have to remember to switch the SCC provider before I start working on one or the other.

As far as I am concerned, there is a big hump to overcome to get me comfortable with using SCC bindings again. I understand that the CodePlex people are working on Turtle, which is a TortoiseSVN-like interface to CodePlex. When it is as solid as TortoiseSVN, perhaps we can talk.

Also, does Team System source control version renames and moves? That is a big plus in Subversion. Does it work over HTTPS? Are checkins atomic? Are branching and tagging fast? I haven’t looked into this and would love to know.

Source Control History

Can we import our Subversion history into Team System at CodePlex? Our version history is very important to us. At least to me. I would hate to lose that.

CruiseControl.NET

It is probably only a matter of time before someone writes a plugin for CruiseControl.NET that works with Team System, but this would be important to me. Simone tells me that Team System has something equivalent that would replace CCNET as part of CodePlex. If that is the case, I would love to see details.

MbUnit

As you might also know, I love me some MbUnit. I made the switch from NUnit a while ago and have never looked back. If CodePlex has a CCNET replacement, will it integrate with MbUnit? I know Team System has its own unit testing framework, but does it have the Rollback and RowTest attributes, or a TypeFixture equivalent? And if you tell me about its extensibility model and that I can write my own, I ask in response: why should I? I already have those things.

Summary

At this point, I would love to hear more details about CodePlex that address my concerns. Perhaps a demo video that shows me what we’re missing. But until these issues are addressed, or all the other Subtext developers are chomping at the bit for CodePlex and threatening a mutiny if we do not switch over, I do not see any urgency or reason to switch now. Sometimes being on the bleeding edge just leaves you with a bloody mess.

comments edit

You’ve probably seen the recent Apple commercials with the two guys holding hands. One introduces himself as a PC and the other introduces himself as an Apple Macintosh. They hold hands because they speak each other’s language. Along comes a Japanese woman, representing a Japanese digital camera, who sidles up to the Mac guy and holds his hand. The Mac speaks her language too. If you haven’t seen it, you can watch it on YouTube by clicking on the image below.

Apple Commercial

At the end of the commercial, she looks over at the PC guy and says something in Japanese to the Mac guy that elicits chuckling between the two. The inside joke is that she thinks the PC guy looks like an Otaku.

Otaku refers to a specific flavor of Japanese geekdom in which the geek is obsessed with anime, manga, action figures, and video games. In Japan it traditionally has very negative connotations and would be considered a pejorative. It probably has connotations similar to the term nerd back in the 80s, before nerds became rich and drove Bentleys.

However, this month’s issue of Wired (14.07) has a short article by Tony McNicol that describes how this subculture has morphed into a thriving industry and a trendy lifestyle export.

“Otaku have joined the mainstream to become a major cultural icon,” says Tokyo journalist and social observer Kaori Shoji. “They’ve been lurking on the edge of hip for some years. Now they’ve gone completely legit.” In a recent column for the Japan Times, Shoji wrote about women who were desperately trying to land otaku boyfriends and the trouble they were having competing with the ultrageeks’ preferred romantic companions: racy images of anime idols freely available online.

Ok, so this still isn’t necessarily an unqualified compliment, but associating the PC with becoming a hip cultural icon is probably the last thing Apple desires. Apple is supposed to be the hip one wearing shades in the room. Perhaps if the commercial intended to reflect modern shifts in Japanese culture, we should see the woman trying to sidle up to the PC guy, who would spurn her advances because he is too busy playing Oblivion.

sql comments edit

With Ted Neward’s recent post on the morass that is Object-Relational mapping, there has been a lot of discussion going around on the topic. In the comments on Atwood’s post on the subject, some commenters ask why put data in a relational database. Why not use an object database?

The Relational Model is a general theory of data management created by Edgar F. Codd based on predicate logic and set theory. As such, it has a firm mathematical foundation for storing data with integrity and for efficiently pulling data using set based operations. Also, as a timeless mathematical theory it has no specific ties to any particular framework, platform, or application.

Now enter object databases, which I am intrigued by but have yet to really dig into. From what I have read (and if I am off base, please enlighten me), these databases allow you to store your object instances directly in the database, probably using some sort of serialization format.

Seems to me this could introduce several problems. First, it potentially makes set-based operations that are not based on the object model inefficient. For example, to build a quick ad hoc report, I would have to write some code to traverse the object hierarchy, which might not be an efficient means of obtaining the particular data. Perhaps an object query language would help mitigate or even solve this. I don’t know.

Another issue is that your data becomes more opaque. There are all sorts of third-party tools that work with relational data almost without regard to the database platform. It is quite easy to use Access to generate a report against an existing SQL database, or to use other tools to export data out of a relational database. But since object-oriented databases lack a formal mathematical foundation, it may be difficult to create a standard for connecting to and querying object databases that every vendor will agree on.

One last issue is more big picture: it seems to me that this approach ties the data too closely to the current code implementation. I have worked on a project that was originally written in classic ASP with no objects. The code I wrote used .NET and objects to access the same data repository. Fortunately, since the data was in a normalized relational database, it was not a problem to understand the data simply from looking at the schema and load it into my new objects.

How would that work with an object database? If I stored my Java objects in an OO database today, would I be able to load that data into my .NET objects tomorrow without having to completely change the database? What about in the future when I move on from .NET objects to message oriented programming or agent oriented programming?

Ultimately, the choice between an OO database and a relational database really depends on the particular requirements of the project at hand. However, the thought of tying an application to an OO database at this point in time gives me reason to pause. It could lock me into a technology that works today but is superseded tomorrow. On several projects I have worked on, we totally revamped the core technology (typically ASP to ASP.NET), but we rarely scrapped and recreated the database. The database engine might change over the years (SQL Server 6.5 to 7.0 to 2000 to 2005), but the data model survives.

code comments edit

UPDATE: Added a followup part 2 to this post on the topic of granular control.

We have all experienced the trouble caused by plugins that break when the application that hosts them gets upgraded. This seems to happen every time I upgrade Firefox or Reflector.

On a certain level, this is the inevitable result of balancing stability with innovation and improvement. But I believe it is possible to insulate your plugin architecture from versioning problems so that breaking changes happen very infrequently. In designing a plugin architecture for Subtext, I hope to avoid breaking existing plugins during upgrades, except perhaps for major version upgrades. Even then I would hope to avoid breaking changes unless absolutely necessary.

I am not going to focus on how to build a plugin architecture that dynamically loads plugins. There are many examples of that out there. The focus of this post is how to design plugins for change.

Common Plugin Design Pattern

A common plugin design defines a base interface for all plugins. This interface typically has an initialization method that takes in a parameter which represents the application via an interface. This might be a reference to the actual application or some other instance that can represent the application to the plugin on the application’s behalf (such as an application context).

public interface IPlugin
{
   void Initialize(IApplicationHost host);
}

public interface IApplicationHost
{
   //To be determined.
}

This plugin interface not only provides the application with a means to initialize the plugin, but it also serves as a marker interface which helps the application find it and determine that it is a plugin.

For applications with simple plugin needs, this plugin interface might also have a method that provides a service to the application. For example, suppose we are building a spam filtering plugin. We might add a method to the interface like so:

public interface IPlugin
{
   void Initialize(IApplicationHost host);

   bool IsSpam(IMessage message);
}

Now we can write an actual plugin class that implements this interface.

public class KillSpamPlugin : IPlugin
{
    public void Initialize(IApplicationHost host)
    {
    }

    public bool IsSpam(IMessage message)
    {
        //It is all spam to me!
        return true;
    }
}

For applications that will have many different plugins, it is common to have multiple plugin interfaces that all inherit from IPlugin such as ISpamFilterPlugin, ISendMessagePlugin, etc…

Problems with this approach

This approach is not resilient to change because the application and the plugin interface are tightly coupled. Should we want to add a new operation to the application that plugins can handle, we would have to add a new method to the interface. This would break any plugins that have already been written against the interface. We would like to be able to add new features to the application without having to change the plugin interface.

Immutable Interfaces

When discussing interfaces, you often hear that an interface is an invariant contract. This is true when considering code that implements the interface. Adding a method to an interface in truth creates a new interface. Any existing classes that implemented the old interface are broken by changing the interface.

As an example, consider our plugin example above. Suppose IPlugin is compiled in its own assembly. We also compile KillSpamPlugin into its own assembly, KillSpamPlugin, which references the IPlugin assembly. Now in our host application, we try to load our plugin. The following example is for demonstration purposes only.

string pluginType  = "KillSpamPlugin, KillSpamPlugin";
Type t = Type.GetType(pluginType);
ConstructorInfo ctor = t.GetConstructor(Type.EmptyTypes);
IPlugin plugin = (IPlugin)ctor.Invoke(null);

This works just fine. Now add a method to IPlugin and recompile only that assembly. When you run this client code, you get a System.TypeLoadException.

A Loophole In Invariant Interfaces?

However, in some cases this invariance does not apply to the client code that references an object via an interface. In this case, there is a bit of room for change. Specifically, you can add new methods and properties to the interface without breaking the client code. Of course, the code that implements the interface has to be recompiled, but at least you do not have to recompile the client.

In the above example, did you notice that we didn’t have to recompile the application when we changed the IPlugin interface? This is true for two reasons. First, the application does not reference the new method added to the IPlugin interface. If we had changed an existing signature, there would have been problems. Second, the application doesn’t implement the interface, so changing it doesn’t require the application to be rebuilt.

A Better Approach

So how can we apply this to our plugin design? First, we need to look at our goal. In this case, we want to insulate the plugin from changes in the application. In particular, we want to make it so that the plugin interface does not have to change, while allowing the application interface to change.

We can accomplish this by creating a looser coupling between the application and the plugin interface. One means of doing this is with events. So rather than having the plugin define various methods that the application can call, we return to the first plugin definition above which only has one method, Initialize which takes in an instance of IApplicationHost. IApplicationHost looks like the following:

public interface IApplicationHost
{
    event EventHandler<CommentArgs> CommentReceived;
}
    
//For Demonstration purposes only.
public class CommentArgs : EventArgs
{
    public bool IsSpam;
}

Now if we wish to write a spam plugin, it might look like this:

public class KillSpamPlugin
{
    public void Initialize(IApplicationHost host)
    {
        host.CommentReceived 
               += new EventHandler<CommentArgs>(OnReceived);
    }

    void OnReceived(object sender, CommentArgs e)
    {
        //It is still all spam to me!
        e.IsSpam = true;
    }
}

Now the application knows very little about a plugin other than that it has a single method. Rather than the application calling methods on the plugin, the plugin simply chooses which application events it wishes to respond to.

This is the loose coupling we hoped to achieve. The benefit of this approach is that the plugin interface pretty much never needs to change, yet we can change the application without breaking existing plugins. Specifically, we are free to add new events to the IApplicationHost interface without problems. Existing plugins will ignore these new events while new plugins can take advantage of them.

Of course, it is still possible to break existing plugins with changes to the application. By tracking dependencies, we can see that the plugin references both the IApplicationHost and CommentArgs classes. Any change to the signature of an existing property or method in these classes could break an existing plugin.

Event Overload

One danger of this approach is that if your application is highly extensible, IApplicationHost could end up with a laundry list of events. One way around that is to categorize events into groups via properties of the IApplicationHost. Here is an example of how that can be done:

public interface IApplicationHost
{
    UIEventSource UIEvents { get; }
    MessageEventSource MessageEvents { get; }
    SecurityEventSource SecurityEvents { get; }
}

public class UIEventSource
{
    public event EventHandler PageLoad;
}

public class SecurityEventSource
{
    public event EventHandler UserAuthenticating;
    public event EventHandler UserAuthenticated;
}

public class MessageEventSource
{
    public event EventHandler Receiving;
    public event EventHandler Received;
    public event EventHandler Sending;
    public event EventHandler Sent;
}

In the above example, I group events into event source classes. This way, the IApplicationHost interface stays a bit more uncluttered.
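
A plugin written against these grouped event sources might look something like the following sketch. AuditPlugin and its handlers are hypothetical names for illustration; the sketch assumes the host hands the plugin an IApplicationHost as in the examples above.

public class AuditPlugin
{
    public void Initialize(IApplicationHost host)
    {
        //Subscribe only to the event groups this plugin cares about.
        host.MessageEvents.Received +=
            new EventHandler(OnMessageReceived);
        host.SecurityEvents.UserAuthenticated +=
            new EventHandler(OnUserAuthenticated);
    }

    void OnMessageReceived(object sender, EventArgs e)
    {
        //Inspect or log the received message here.
    }

    void OnUserAuthenticated(object sender, EventArgs e)
    {
        //Record the successful login here.
    }
}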

Caveats and Summary

So in the end, having plugins respond to application events gives the application the luxury of not having to know much about the plugin interfaces. This insulates existing plugins from breaking when the application changes, because there is less need for the plugin interface to change often. Note that I did not cover dealing with strongly named assemblies. In that scenario, you may have to take the additional step of publishing a publisher policy to redirect the version of the application interface assembly that the plugin expects.

personal comments edit

We just had a short-lived but crazy loud lightning storm. I was up late working on Subtext because I couldn’t sleep when I started hearing the loud claps of thunder. The eerie part is that between the thunder claps, it was relatively silent outside until the next one hit.

I counted less than a second between some of the lightning flashes and the resulting thunder, meaning the bolts were pretty damn close. The immense loudness was awesome and set off all the car alarms in the neighborhood, as well as a building alarm, perhaps at the school or church nearby. I was busy trying to shut down my computer.

As the lightning and thunder subsided, the rain started to come down really hard for about 15 minutes. Then nothing. Strangely enough, my dog didn’t make a sound. I wonder if she slept through the whole thing.