
Scott Hanselman sets the geek-o-sphere abuzz with his latest (and apparently now annual) Ultimate Developer and Power Users Tool List for Windows.  The publishing of this list usually coincides with a productivity drop for me as I find many new toys to play with.  Unfortunately, many tools don’t work so well when running as a non-admin.

This year I was pleased to find my name and my humble little blog on his list.  Quite pleased in fact until it struck me. 

Wait one doggone minute!

Is Scott calling me a tool?  An ultimate tool no less.  We’ll see about that!


Scott writes about making DasBlog work on Mobile Devices.  The approach he takes is to programmatically detect that the device is a mobile device and then present an optimized TinyHTML (his term) theme.

Ideally though, wouldn’t it be nice to have mobile versions of every theme?  In fact, I thought this could be handled without any code at all via CSS media types.

Unfortunately (or is that fortunately) I don’t own a BlackBerry or any such mobile device with a web browser, so I can’t test this, but in theory, another approach would be to declare a CSS file specifically for mobile devices like so:

<link rel="stylesheet" href="mobile.css" type="text/css" 
    media="handheld" />

The mobile browser should use this CSS to render its screen while a regular browser would ignore this.  Should being the operative word here.  Unfortunately, at least for Scott’s Blackberry, it doesn’t.  He told me he does include a mobile stylesheet declaration and the BlackBerry doesn’t pick it up.  Does anyone know which devices, if any, do support this attribute?
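In theory, a site would pair the handheld stylesheet with its regular one, something like this (the file names here are just for illustration):

```html
<!-- Regular browsers use screen.css; handheld browsers that
     honor the media attribute pick up mobile.css instead. -->
<link rel="stylesheet" href="screen.css" type="text/css" media="screen" />
<link rel="stylesheet" href="mobile.css" type="text/css" media="handheld" />
```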

For those devices that do, a skin in Subtext can be made mobile-ready by specifying the media attribute in the Style element of Skin.config like so (note this feature is available in Subtext 1.5).

<Style href="mobile.css" media="handheld" />

Refer to my recent overview of Subtext skinning to see the media attribute in play for printable views, which does seem to work for IE and Firefox.


Update: Rob renamed his project to SubSonic.

Rob Conery just released ASP.NET ActionPack 1.0.1 on his blog today.  This project is definitely one to watch!  He is essentially taking some of the principles of developing web apps with Ruby on Rails and porting those ideas to ASP.NET.  Just watch this great screencast to get a taste of the progress he has made in a short time.

So far I am very impressed with this guy.  Yesterday evening I sent the link to the screencast to Jon Galloway who then wondered why he was using strings for table names.  I told him to quit bothering me about it and post something in the Codeplex forum.  But Jon, being the simultaneous type of guy he is, had already posted a comment on Rob’s blog before I could finish my sentence.  This all happened last night.  This morning I notice the sixth bullet point in Rob’s announcement states that he added structs in classes for column names.  Apparently he had received the comment, made the change, and sent a reply to Jon in two hours.

Now that is a quick turnaround and good customer service! ;)

Not only that, but this guy lives in Kauai, Hawaii! I don’t know how he gets anything done unless it’s the rainy season right now. Subtext would definitely languish if I lived in Kauai.


Lately my blog has been hit with a torrential downpour of comment spam.  I’ve been able to fight much of it off with some creative regular expressions in my ReverseDOS configuration file.  Of course keyword filtering, even Bayesian filtering, can only go so far.  We need to supplement these approaches with something else.

But first, in order to combat spam, we need to identify the enemy.  Are we fighting automated bots relentlessly crawling the web and posting comments?  Or are these low-paid humans behind the keyboards?  Are they attacking via the Comment API or posting to an HTML form?

My assumption has been that these are bots, but I plan to add some diagnostics to my blog to test that assumption someday soon.  Let’s run with the assumption that the bulk of comment spam is generated by bots.  In this case, we need to examine the behavioral differences between bots and humans for clues on how to combat spam.

For example, an automated script can pretty much post a spam comment instantaneously.  What if your blog engine timed the interval between sending out the content and receiving a comment?  If the comment came back too quickly, then we have high confidence that it is spam.

Certainly this is easily defeated by a spammer by adding a delay, but an artificial delay is costly to an automated script trying to hit the most blogs possible in the shortest amount of time.  Anything to slow down the spammers is worthwhile.
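A minimal sketch of the timing idea (the class and the five-second threshold are mine, not actual Subtext code): stamp the page when the comment form is served, and compare that stamp against the time the comment comes back.

```csharp
using System;

// Sketch only: names and threshold are assumptions for illustration.
public static class CommentTimer
{
    // Assumed threshold: a human rarely reads a post and writes
    // a comment in under five seconds.
    public static readonly TimeSpan MinimumDelay = TimeSpan.FromSeconds(5);

    // renderedAtUtc would come from a hidden field (or encrypted
    // ticket) written into the page when the comment form was served.
    public static bool LooksAutomated(DateTime renderedAtUtc, DateTime postedAtUtc)
    {
        return (postedAtUtc - renderedAtUtc) < MinimumDelay;
    }
}
```

A spammer can defeat this by delaying the post, but as noted below, that delay itself has a cost.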

Another potential approach is to require javascript to comment.  Perhaps your comment form doesn’t even exist without some javascript to insert it in there.  The theory behind this approach is that most automated scripts won’t evaluate javascript. They simply want to post to some form fields.  Unfortunately this hinders the accessibility of your site for users who turn off javascript, but it may be worth the price.  Spammers will eventually figure this one out too, but it does add a nice computation cost to implement javascript handling in an automated spambot.
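The script-required form could look something like this in a skin (a sketch; the field names and post URL are made up, not Subtext's):

```html
<div id="commentArea"><!-- stays empty for clients without script --></div>
<script type="text/javascript">
// A bot that doesn't evaluate script never sees a form to post to.
document.getElementById("commentArea").innerHTML =
    '<form method="post" action="comments.aspx">' +
    '<textarea name="commentBody"></textarea>' +
    '<input type="submit" value="Post Comment" /></form>';
</script>
```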

Ultimately, these approaches are more about the behavior of the spammer than the content.  For example, when I first started working on Subtext, I added two features that at the time blocked a significant amount of spam for me.  The first was to not allow duplicate comments.  I found that a lot of comment spam simply posted the same thing over and over.

The second feature was to require a delay between comment spam originating from the same IP address.  Using a sliding timeout of only two minutes seemed to defuse spam bombs which would try to post a large number of comments in a short period of time.
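The sliding timeout can be sketched like so (again, names are mine for illustration, not the actual Subtext implementation). The important detail is that even a blocked attempt slides the window forward, so a rapid-fire spam bomb keeps getting blocked.

```csharp
using System;
using System.Collections.Generic;

// Sketch of a sliding per-IP comment timeout.
public class SpamBombDefuser
{
    private readonly TimeSpan window;
    private readonly Dictionary<string, DateTime> lastCommentAt =
        new Dictionary<string, DateTime>();

    public SpamBombDefuser(TimeSpan slidingWindow)
    {
        this.window = slidingWindow;
    }

    // Returns true if this IP may comment now.  Either way, the
    // timeout slides forward, so rapid retries stay blocked.
    public bool TryComment(string ipAddress, DateTime nowUtc)
    {
        DateTime last;
        bool allowed = !lastCommentAt.TryGetValue(ipAddress, out last)
            || (nowUtc - last) >= window;
        lastCommentAt[ipAddress] = nowUtc;  // slide the window
        return allowed;
    }
}
```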

Later, I added ReverseDOS to help catch the spam that made it through these approaches.  Over time, I’ve noticed that comment spam starts to look more and more like legitimate messages, like the current crop of “Nice Site!” spam. 

The one thing that every comment spam has in common is a link.  Ultimately, the only way to stop content spam via a content-based approach is to simply not allow any comment that contains a link in any way shape or form.  But how awful would that be for the many legitimate commenters who wish to share a link?
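For what it's worth, the blunt link-blocking check described above would be as simple as this (a sketch, not actual Subtext code; the pattern is deliberately crude):

```csharp
using System.Text.RegularExpressions;

// Sketch: flag any comment containing something link-shaped.
public static class LinkDetector
{
    // Catches raw URLs, www-style addresses, and anchor tags.
    private static readonly Regex LinkPattern = new Regex(
        @"(https?://|www\.|<a\s)", RegexOptions.IgnoreCase);

    public static bool ContainsLink(string comment)
    {
        return LinkPattern.IsMatch(comment);
    }
}
```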

No, we must do something better. I currently don’t think we’ll ever win the battle, but we can work to stay one step ahead.


The other tactic I neglected to mention in my previous post on combating comment spam is more big picture.  How do we remove the incentive for spammers to comment spam in the first place?

Apparently the rel=”nofollow” approach has done little to curb comment spammers despite many predictions (including my own).  I still think it is an important step in removing one incentive, but what else can be done to remove this incentive?

With the lack of results from the rel=”nofollow” approach, the lesson we learn is that either the incentive for comment spam isn’t necessarily Google rankings, or that there are enough unpatched blogs out in the wild that it still does help the Google rank to post comments indiscriminately.  Or both.

If a spam commenter can put a link in the comments of several thousand blogs, then certainly that translates to tens to hundreds of thousands of eyeballs on that link, and maybe a few hundred clickthroughs (yes, I’m pulling these numbers out of my rear).  When someone clicks through, the spammer gets paid a small amount from the owner of the site.

Warning, here is where I go off the deep end in brainstorming solutions.  Forgive my naivete.

What if the marketers who pay for these links to be spread around found out that comment spammers were creating negative feelings for their products by posting comments on sites that are vehemently against having these advertisements?  Would they care?  Would they be interested in not paying for clickthroughs from sites that have specifically opted out of such links?

I’m probably dreaming here, but stay with me for a moment as I flesh out a quick thought experiment.  Suppose these sites did care.  One option is for them to not pay for links that originate from sites that specifically opt-out of comment advertising.  For example, by registering with some central opt-out site.

Another approach would be for sites that receive clickthroughs to initiate a trackback-like mechanism in which they request a comment spam policy from the blog.  If the blog does not explicitly endorse their product, the link does not get paid.

Of course the big flaw in this experiment is that these sites probably do not care and wouldn’t go to the trouble to implement these approaches to being a good citizen.  They just want the links to come in.  Even negative publicity is good publicity.  So what can we do? Is there a way to make them care? Is there a way to make comment spam less lucrative?



Another blog linked to this post and mentioned watching the video up to the slow motion practice session.  Be sure to keep watching past it to see the woman juggling a soccer ball, while playing double dutch, with a flaming soccer ball and jump rope.  Ronaldinho never did that!


With the Subtext 1.9 release just around the corner, this is probably a good time to highlight some minor, but important, changes to skinning in Subtext.

We made some breaking changes to the Skins.config file format to make the naming more consistent with its purpose.  There was a lot of confusion before.  The following is a snippet from a pre-Subtext 1.9 Skins.config file.

<?xml version="1.0"?>
<SkinTemplates xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <SkinTemplate SkinID="RedBook" Skin="RedBook" SecondaryCss="Red.css">
        <Script Src="~/Admin/Resources/Scripts/niceForms.js" />
        <Style href="niceforms-default.css" />
        <Style href="print.css" media="print" />
    </SkinTemplate>
</SkinTemplates>

And here is how that snippet will change in Subtext 1.9.

<?xml version="1.0"?>
<SkinTemplates xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <SkinTemplate Name="RedBook" TemplateFolder="RedBook" StyleSheet="Red.css">
        <Script Src="~/Admin/Resources/Scripts/niceForms.js" />
        <Style href="niceforms-default.css" />
        <Style href="print.css" media="print" />
    </SkinTemplate>
</SkinTemplates>

The key differences are in the SkinTemplate element. The following attributes have been renamed:

  • SkinID was changed to Name
  • Skin was changed to TemplateFolder
  • SecondaryCss was changed to StyleSheet

Another new change is that the Style element now supports a new attribute named conditional. If specified, Subtext will wrap the stylesheet declaration with an IE specific conditional comment. This is commonly used for stylesheets that contain IE specific CSS hacks. For example…

<Style href="IEHacks.css" conditional="if ie" />

Gets rendered as…

<!--[if ie]>
<link rel="stylesheet" type="text/css" href="IEHacks.css" />
<![endif]-->

Thus only IE will see that style declaration.

tags: Subtext, Skinning, Skins, Blogs


In my previous post, I outlined some minor changes to the skinning model for Subtext. In this post, I will give a high level overview of how skinning works in Subtext.

Subtext renders a Skin by combining a set of CSS stylesheets with a set of .ascx controls located in a specific skin folder.  If you look in the Skins directory for example, you might see a set of folders like this.

Subtext Skin

Skin Template

A common misperception is that each folder represents a Skin.  In fact, each folder represents something we call a Skin Template, and can be used to render multiple skins.  One way to think of it is that each folder contains a family of skins.

Each folder contains a series of .ascx controls used to render each skin in that skin family as well as some CSS stylesheets and images used for individual skins or for the entire family.

For example, the Redbook template folder contains three skins, Redbook, BlueBook, and GreenBook.  In the screenshot below, we can see that there are three CSS stylesheets that specifically correspond to these three skins.  How does Subtext know that these three stylesheets define three different skins while the other stylesheets in this folder do not?


The answer is that this is defined within the Skins.config file.


The Skins.config file is located in the Admin directory and contains an XML definition for every skin.  Here is a snippet of the file containing the definition for the Redbook family of skins. This snippet shows the definition of the BlueBook skin.

<?xml version="1.0"?>
<SkinTemplates xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <SkinTemplate Name="BlueBook" TemplateFolder="RedBook" StyleSheet="Blue.css">
        <Script Src="~/Admin/Resources/Scripts/niceForms.js" />
        <Style href="~/skins/_System/csharp.css" />
        <Style href="~/skins/_System/commonstyle.css" />
        <Style href="~/skins/_System/commonlayout.css" />
        <Style href="niceforms-default.css" />
        <Style href="print.css" media="print" />
    </SkinTemplate>
</SkinTemplates>

There is a SkinTemplate node for each Skin within the system (I know, not quite consistent now that I think of it. Should probably be named Skin). 

The Name attribute defines the name of this particular skin. 

The TemplateFolder attribute specifies the physical skin template folder in which all the .ascx controls and images are located. 

The StyleSheet attribute specifies which stylesheet defines the primary CSS stylesheet for this skin. 

For example, the GreenBook skin definition looks just like the BlueBook skin definition except that the StyleSheet attribute references Green.css instead of Blue.css.

Within the SkinTemplate node is a collection of Script nodes and a collection of Style nodes.  These specify any client scripts (such as Javascript) and other CSS files that should be included when rendering this skin.  As you can see, the tilde (~) syntax works for specifying a path to a file and a developer can specify a media and a conditional for each CSS stylesheet.


I keep mentioning that Subtext depends on a collection of .ascx user controls when it renders a family of skins. Let’s talk about them for a moment. 

In the second screenshot above, you may have noticed a directory named Controls.  This contains the bulk of the .ascx controls used to render a specific skin.  There was also a control named PageTemplate.ascx in the parent directory.


Each skin in a family of skins is rendered by the same set of UserControl instances.  The only difference between two skins within the same family is the CSS stylesheet used (which can account for quite a difference as we learn from CSS Zen Garden).

The PageTemplate.ascx control defines the overall template for a skin, and then each of the user controls in the Controls directory is responsible for rendering a specific portion of the blog.

Select a different skin from another skin family, and Subtext will load in a completely different set of UserControl files, that all follow the same naming convention.

Drawbacks and Future Direction

The naming convention is one of the drawbacks of the current implementation.  Rather than using data binding syntax, each .ascx file is required to define certain WebControl instances with specific IDs.  The underlying Subtext code then performs a FindControl searching for these specific controls and sets their values in order to populate them.  This naming convention is often the most confusing part of implementing a new skin for developers.

It used to be that if a WebControl was removed from an .ascx file (perhaps you didn’t want to display a particular piece of information), this would cause an exception as Subtext could not find that control. We’ve tried to remedy that as much as possible.

In the future, we hope to implement a more flexible MasterPage based approach in which the underlying code provides a rich data source and each skin can bind to whichever data it wishes to display via data binding syntax.

From a software perspective, this changes the dependency arrow in the opposite direction.  Rather than the underlying code having to know exactly which controls a skin must provide, it will simply provide data and it is up to the individual skin to pick and choose which data it wishes to bind to.


We provided the Naked skin so that developers creating custom skins could play around with an absolutely bare-bone skin and see just how each of the controls participates in rendering a blog.

tags: Subtext, Skins, Skinning, Blogs


This is my third post about Skinning in Subtext. Previously I talked about some breaking changes.  Then I gave a high level overview of skinning in Subtext.  In this post I want to mention one new feature for those who use custom skins.

Subtext 1.9 actually reduces the number of pre-packaged skins that come with it out of the box.  That’s right, we got rid of the skins that screamed, “Hey! I was designed by a developer who wears plaid pants with flannel shirts!”  Over time, we hope to add more polished designs.

Of course we don’t want to leave developers with custom developer-designed skins in the lurch.  Taking an informal poll, I found that a majority of Subtext users deploy a custom skin, typically based on one of the out-of-the-box skins. 

As I described in the overview, skins are configured via a file named Skins.config.  One problem with having all the skin definitions in this file is that any customizations a user might make are overwritten when upgrading to a new version of Subtext.

It is incumbent upon the user to merge new changes in.  We thought we could make this better so we have introduced the new Skins.User.config file.

The format for this file is exactly the same as the format for Skins.config.  The only difference is that we do not include such a file in the Subtext distribution.  Thus you can place your custom skin definitions in this file and it will not get overwritten when upgrading.
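A Skins.User.config file might look something like this (the skin name and stylesheet names are invented for illustration):

```xml
<?xml version="1.0"?>
<SkinTemplates xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <SkinTemplate Name="MyCustomSkin" TemplateFolder="MyCustomSkin"
        StyleSheet="custom.css">
        <Style href="print.css" media="print" />
    </SkinTemplate>
</SkinTemplates>
```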

From now on, it is recommended that if you customize an existing skin, you should rename the folder and place your skin definition in Skins.User.config.

tags: Subtext, Skins, Skinning, Blogs


Gratuitous nature pic for no good reason other than I love Windows Writer

Just sticking my head above water long enough to take a breath and to link to some rubbish called the Programmer’s Bill of Rights that Jeff Atwood declares on his blog.

I don’t understand this guy.  You let this sort of dangerous propaganda spread and software departments will become much more efficient and be able to build better systems with less money. 

You realize what that means, don’t you?  Companies will be able to get more done with fewer people.

For those who lose their jobs because of this, blame Atwood.  Then again, if you’re reading his blog, you’re probably not the target audience that would get laid off due to increased efficiency.  Readers of my blog on the other hand …


This post is an ode to one of my favorite, albeit extremely minor, additions to .NET 2.0.  This is the method that I am sure we have all written in some sort of StringHelper library, but are now glad is included in the framework, as it makes our code a tad cleaner and shuts up that pesky FxCop warning about using the length of the string to test for empty strings.

And under the hood, it does the right thing.

public static bool IsNullOrEmpty(string value)
{
      if (value != null)
            return (value.Length == 0);
      return true;
}

If you haven’t met this method, do get well acquainted.
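In other words, the old hand-rolled check collapses to a single framework call:

```csharp
string title = null;

// Pre-2.0 style: the check we all wrote by hand (Length test
// rather than == "" to keep FxCop happy).
bool emptyOld = (title == null || title.Length == 0);

// .NET 2.0 style:
bool emptyNew = string.IsNullOrEmpty(title);

// Both are true here; IsNullOrEmpty also returns true for "".
```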


Tonight we had our first game of the new season against Hollywood United who now feature Alexi Lalas and Frank Le Boeuf, among several other former pro and national team players.  Let’s just say the result wasn’t pretty.  At least not pretty for us as they dismantled us 12 to 0.

As bad a beating as it was, it was exciting to be playing against such a high caliber team.  And we managed to put together some good plays, but pretty much looked erratic and rushed with the ball.

I asked one of my teammates’ wives to take some photos, but being a night game, none of them turned out too well.  Here is the best I got showing both of these players.

Frank and Alexi on the

I had one good play in which I dummied the ball, faking Alexi and taking him out of the play as he was hot on my back (I know, I’m reaching here).  Of course later in the game, he embarrassed me with a nutmeg.

After the game we were chatting with a player from their team who is going to be in a new TV show called Heroes.  He told us his superpower in the show was that if he touches you, he paralyzes you.  Now we know what happened on the field.

Next game we hope to hold them to single digits.  We’re in need of a keeper if you know any former standout keepers.


Stephen Colbert of Comedy Central’s The Colbert Report is in my opinion the funniest comedian on television right now. As Mugatu (played by Will Ferrell, the funniest comedian in movies) would say, “It’s that damn Stephen Colbert! He’s so hot right now! Stephen Colbert.”

This clip had me rolling on the floor laughing for its spot-on parody of Dungeons and Dragons. It is probably funnier to those of us who geeked out with D&D back in the day than to the uninitiated.

After watching it, one gets the sense that he really was a player. It turns out he is a true fan, based on this Gamespy report.


Interesting post on the 37Signals blog regarding competing against Google.  Harb references a post by Paul Graham about Kiko’s founders putting their site up on eBay. 

Long story short, it sounds like Kiko came out with a web-based calendar, but when Google came out with theirs, Kiko’s growth stopped, users defected, and they threw in the towel.

Paul’s solution is to stay out of Google’s way.

Kind of sounds like what could happen to Blogjet with the advent of Windows Live Writer.  However Harb offers a more optimistic take on the events.

No. Don’t run, don’t hide. Be different. You can’t outdo Google by trying to match them point-by-point, but you don’t have to. There are other, better ways to fight. Compete differently.

Perhaps the same can be said for the creator of BlogJet vs Microsoft.  I was chatting with Jon Galloway about this and I suggested this would really hurt BlogJet, perhaps even spell its demise in the long run.  How does a tiny company challenge Microsoft and compete against a free product?  Jon was more optimistic and pointed to SourceGear as a counterexample.

I was not convinced.  Writing software to post to a blog has a much lower bar of entry compared to a source control system.  But so does writing a Calendar app, no?

It will be interesting to see if Dmitry can compete differently and come up with some creative means to keep BlogJet viable.  Good luck.


Work on Subtext 1.9 is progressing well, with more and more contributors chipping in. No firm release date yet, but hopefully soon.  The latest builds are pretty stable, but there are a few more minor bug fixes to get in there. Also, we plan to implement the MetaWeblog API newMediaObject method in order to better support Windows Live Writer.

If you like to live dangerously, you can download one of the latest builds and give it a twirl.  Please note that link is running off of a VMWare Server within a Shuttle box in my office/guest room. So if we have guests, that link is going down for the night.

Of course, should you find a bug, you know the drill.

In the meanwhile, you can track our status by checking out our CruiseControl.NET statistics page.  As you can see, our code coverage from unit tests is steadily climbing, currently at 34%.  There are 23584 lines of code with 1367 FxCop warnings.


That is a fine question in need of a good answer.  The answer for implementors is easily found in the spec.  For the rest of us there is the exceedingly sparse entry (at the time of this writing) in Wikipedia.

That entry is somewhat pathetic at the moment. I mean who wrote that crap?! Hmmm.. Well taking a look at the history… I see that the perpetrator was…oh! me.

Well I am sure one of you can do a much better job than I did of elaborating on the topic.


Since my blog has been getting a bit geek heavy for my wife’s taste (see comment


The above pic was taken by my brother-in-law when the in-laws visited this past spring.

Lazy Sea

On that same trip, we took a boat ride in Oceanside and took a pic of these orgyastic sea lions. In public no less! Shameful.


In Jeff Atwood’s latest post, entitled Source Control: Anything But SourceSafe, he is preaching the gospel message to choose something other than Visual SourceSafe, and I am screaming amen in the choir section.

There are three common reasons I hear for sticking with Visual Source Crap (I sometimes swap that last word with one that doesn’t break the acronym).

1. It is free!

UPDATE: As a lot of people pointed out, VSS isn’t free. What I meant was that it comes with the MSDN Universal Subscription, so many companies already have a copy of VSS.

So is Subversion.  I was on a project recently in which VSS corrupted the code twice!  The time spent administering it and the time lost were a lot more costly than a license to SourceGear Vault.

2. We know how to use it and don’t want to learn a new system.

When I hear this, what I am really hearing is we like our bad habits and don’t want to spend the time to learn good habits.  Besides, Eric Sink already wrote a wonderful tutorial.

3. We have so much invested in VSS.

Well you had a lot invested in classic ASP (or other such technology) and that didn’t stop you from switching over to ASP.NET (Ruby on Rails, Java, etc…), did it?

The reason I spend time and energy trying to convince clients to switch is that it saves them money and saves me headaches.  It really is worth the effort.

For Open Source projects (or any project that receives user code contributions), Subversion and CVS have the nice benefit of a patching feature making it easy to contribute without having write access.

tags: Source Control


Oh boy, are you in for a roller coaster ride now!

Let me start with a question: how do you iterate through a large collection of data without loading the entire collection into memory?

The following scenario probably sounds quite familiar to you. You have a lot of data to present to the user. Rather than slapping all of the data onto a page, you display one page of data at a time.

One technique for this approach is to define an interface for paged collections like so…

/// <summary>
/// Base interface for paged collections.
/// </summary>
public interface IPagedCollection
{
    /// <summary>
    /// The total number of items being paged through.
    /// </summary>
    int MaxItems { get; set; }
}

/// <summary>
/// Base interface for generic paged collections.
/// </summary>
public interface IPagedCollection<T> 
    : IList<T>, IPagedCollection
{
}

The concrete implementation of a generic paged collection is really really simple.

/// <summary>
/// Concrete generic base class for paged collections.
/// </summary>
/// <typeparam name="T"></typeparam>
public class PagedCollection<T> : List<T>, IPagedCollection<T>
{
    private int maxItems;

    /// <summary>
    /// Returns the max number of items to display on a page.
    /// </summary>
    public int MaxItems
    {
        get { return this.maxItems; }
        set { this.maxItems = value; }
    }
}

A method that returns such a collection will typically have a signature like so:

public IPagedCollection<DateTime> GetDates(int pageIndex
    , int pageSize)
{
    //Some code to pull the data from database 
    //for this page index and size.
    return new PagedCollection<DateTime>();
}

A PagedCollection represents one page of data from the data source (typically a database). As you can see from the above method, the consumer of the PagedCollection handles tracking the current page to display. This logic is not encapsulated by the PagedCollection at all. This makes a lot of sense in a web application since you will only show one page at a time.

But there are times when you might wish to iterate over every page as in a streaming situation.

For example, suppose you need to perform some batch transformation of a large number of objects stored in the database, such as serializing every object into a file.

Rather than pulling every object into memory and then iterating over the huge collection, ending with one really big call to Flush() at the end (or calling Flush() after each iteration, which is too much flushing), a better approach might be to page through the objects, calling the Flush() method after each page of objects.

The CollectionBook class is useful just for that purpose. It is a class that makes use of iterators to iterate over every page in a set of data without having to load every record into memory.

You instantiate the CollectionBook with a PagedCollectionSource delegate. This delegate is used to populate the individual pages of the data we are iterating over.

public delegate IPagedCollection<T> 
    PagedCollectionSource<T>(int pageIndex, int pageSize);

When iterating over the pages of a CollectionBook instance, each iteration will call the delegate to retrieve the next page (an instance of IPagedCollection<T>) of data. This uses the new iterators feature of C# 2.0.

Here is the code for the enumerator.

/// <summary>
/// Iterates through each page one at a time, calling the 
/// PagedCollectionSource delegate to retrieve the next page.
/// </summary>
public IEnumerator<IPagedCollection<T>> GetEnumerator()
{
  if (this.pageSize <= 0)
    throw new InvalidOperationException
      ("Cannot iterate a page of size zero or less");

  int pageIndex = 0;
  int pageCount = 0;

  if (pageCount == 0)
  {
    IPagedCollection<T> page 
      = pageSource(pageIndex, this.pageSize);
    pageCount = (int)Math.Ceiling((double)page.MaxItems / this.pageSize);
    yield return page;
  }

  //We've already yielded page 0, so start at 1
  while (++pageIndex < pageCount)
    yield return pageSource(pageIndex, this.pageSize);
}

The following is an example of instantiating a CollectionBook using an anonymous delegate.

CollectionBook<string> book = new CollectionBook<string>(
    delegate(int pageIndex, int pageSize)
    {
        return pages[pageIndex];
    }, 3);

I wrote some source code and a unit test you can download that demonstrates this technique. I am including a C# project library that contains these classes and one unit test. To get the unit test to work, simply reference your unit testing assembly of choice and uncomment a few lines.
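Putting it all together, the batch-serialization scenario from earlier might read like this (a sketch: Customer, repository, Serialize, and writer are placeholder names I made up, not part of the download):

```csharp
// Page through customers 100 at a time; only one page is ever in memory.
CollectionBook<Customer> book = new CollectionBook<Customer>(
    delegate(int pageIndex, int pageSize)
    {
        return repository.GetCustomers(pageIndex, pageSize);
    }, 100);

foreach (IPagedCollection<Customer> page in book)
{
    foreach (Customer customer in page)
        Serialize(customer, writer);
    writer.Flush(); // one flush per page, not per record
}
```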

Technorati Tags: Tips, TDD, C#, Generics, Iterators


There is a lot of buzz around Windows Live Writer.  I might as well throw my two cents into the fray.  Keep in mind that I do understand this is a beta.

The Good

  • It uses the paragraph tag <p> around your paragraphs rather than <br /> tags.
  • Keyboard shortcuts for switching views.
  • Integrates your blog’s style when previewing.  This is an impressive feature.  When I preview the post I am working on, it displays the post as if it was posted on my site.  I have seen a temporary blog post with the message This is a temporary post that was not deleted. Please delete this manually. (c3eaf4ab-0941-4881-9d17-d4f62bde069e)  in a few blogs related to obtaining a blog’s style. My guess is that Windows Live Writer posts to the blog, then deletes the post.  It takes a diff to figure out what changed.  That’s just my guess.
  • When adding a link, it has a field for the rel tag.  Nice!
  • It handles my unusual FTP setup using an FTP virtual directory. Not every tool deals with that correctly.
  • Inserting images allows you to apply a few simple effects. Very neat!
  • Support for Really Simple Discovery (RSD).  I just added RSD to Subtext.
  • Plugins!

The Needs Improvement

Since this is a beta, I didn’t feel right calling it “The Bad”.

  • Doesn’t let you specify the title attribute of a tag.
  • Inserts &nbsp; all over the place as you edit text.
  • What key do I hit to Find Next?
  • No Find and Replace? Really?
  • Be nice to have HTML highlighting in the HTML View.
  • If I mark up a word with the <code> tag, then cut and paste that word elsewhere in the Normal view, it marks up the pasted word with the font tag rather than using the underlying tag.

The Wish

  • Using typographic characters for single quotes and double quotes à la MS Word.
  • Integration with CSS styles from the blog. Be nice to have a pulldown for various CSS classes.
  • Configurable shortcuts. I use the <acronym /> and <code /> tags all the time and have shortcuts for these configured in w.bloggar.

All in all, I really like it so far.  It has a nice look and feel to it and if they iron out a lot of these kinks, this may be the one to beat.

Unfortunately, this may cause some delays in Subtext 1.9 unless I can fight off the vicious urge to write some Microformat plugins. Why must these people introduce such interesting toys into my life! Where will I find the time? ;)

Speaking of which, Tim Heuer has already written a couple of plugins.

[Download Windows Live Writer]