comments edit

In Jeff Atwood’s latest post, Source Control: Anything But SourceSafe, he preaches the gospel of choosing something other than Visual SourceSafe, and I am screaming amen in the choir section.

There are three common reasons I hear for sticking with Visual Source Crap (I sometimes swap that last word with one that doesn’t break the acronym).

1. It is free!

UPDATE: As a lot of people pointed out, VSS isn’t free. What I meant was that it comes with the MSDN Universal Subscription, so many companies already have a copy of VSS.

So is Subversion.  I was recently on a project in which VSS corrupted the code twice!  The time spent administering it, plus the time lost, cost a lot more than a license for SourceGear Vault.

2. We know how to use it and don’t want to learn a new system.

When I hear this, what I am really hearing is “we like our bad habits and don’t want to spend the time to learn good habits.”  Besides, Eric Sink already wrote a wonderful tutorial.

3. We have so much invested in VSS.

Well you had a lot invested in classic ASP (or other such technology) and that didn’t stop you from switching over to ASP.NET (Ruby on Rails, Java, etc…), did it?

The reason I spend time and energy trying to convince clients to switch is that it saves them money and saves me headaches.  It really is worth the effort.

For Open Source projects (or any project that receives user code contributions), Subversion and CVS have the nice benefit of a patching feature making it easy to contribute without having write access.

tags: Source Control

comments edit

Oh boy, are you in for a roller coaster ride now!

Let me start with a question: how do you iterate through a large collection of data without loading the entire collection into memory?

The following scenario probably sounds quite familiar to you. You have a lot of data to present to the user. Rather than slapping all of the data onto a page, you display one page of data at a time.

One technique for this approach is to define an interface for paged collections like so…

/// <summary>
/// Base interface for paged collections.
/// </summary>
public interface IPagedCollection
{
    /// <summary>
    /// The Total number of items being paged through.
    /// </summary>
    int MaxItems
    {
        get;
        set;
    }
}

/// <summary>
/// Base interface for generic paged collections.
/// </summary>
public interface IPagedCollection<T> 
    : IList<T>, IPagedCollection
{ 
}

The concrete implementation of a generic paged collection is really really simple.

/// <summary>
/// Concrete generic base class for paged collections.
/// </summary>
/// <typeparam name="T"></typeparam>
public class PagedCollection<T> : List<T>, IPagedCollection<T>
{
    private int maxItems;

    /// <summary>
    /// Returns the max number of items to display on a page.
    /// </summary>
    public int MaxItems
    {
        get { return this.maxItems; }
        set { this.maxItems = value; }
    }
}

A method that returns such a collection will typically have a signature like so:

public IPagedCollection<DateTime> GetDates(int pageIndex
    , int pageSize)
{
    //Some code to pull the data from database 
    //for this page index and size.
    return new PagedCollection<DateTime>();
}

A PagedCollection represents one page of data from the data source (typically a database). As you can see from the above method, the consumer of the PagedCollection handles tracking the current page to display. This logic is not encapsulated by the PagedCollection at all. This makes a lot of sense in a web application since you will only show one page at a time.
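To make that division of labor concrete, here is a minimal, self-contained sketch of a consumer computing the page count from MaxItems. The GetDates implementation and the 25-item data set are invented for illustration; in a real application the data would come from the database.

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the article's types so the sketch compiles on its own.
public interface IPagedCollection { int MaxItems { get; set; } }

public class PagedCollection<T> : List<T>, IPagedCollection
{
    public int MaxItems { get; set; }
}

public static class PagingDemo
{
    // Hypothetical data source: 25 dates total, served one page at a time.
    public static PagedCollection<DateTime> GetDates(int pageIndex, int pageSize)
    {
        PagedCollection<DateTime> page = new PagedCollection<DateTime>();
        page.MaxItems = 25;
        for (int i = pageIndex * pageSize;
             i < Math.Min((pageIndex + 1) * pageSize, 25); i++)
        {
            page.Add(new DateTime(2006, 1, 1).AddDays(i));
        }
        return page;
    }

    // The consumer derives the total page count from MaxItems.
    public static int TotalPages(int pageSize)
    {
        IPagedCollection page = GetDates(0, pageSize);
        return (int)Math.Ceiling((double)page.MaxItems / pageSize);
    }
}
```

With a page size of 10 and 25 total items, TotalPages returns 3, and the last page (index 2) holds the remaining 5 items.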

But there are times when you might wish to iterate over every page as in a streaming situation.

For example, suppose you need to perform some batch transformation of a large number of objects stored in the database, such as serializing every object into a file.

Rather than pulling every object into memory, iterating over one huge collection, and ending with one really big call to Flush() at the end (or calling Flush() after every single item, which is too much flushing), a better approach might be to page through the objects, calling Flush() after each page.

The CollectionBook class is useful just for that purpose. It is a class that makes use of iterators to iterate over every page in a set of data without having to load every record into memory.

You instantiate the CollectionBook with a PagedCollectionSource delegate. This delegate is used to populate the individual pages of the data we are iterating over.

public delegate IPagedCollection<T> 
    PagedCollectionSource<T>(int pageIndex, int pageSize);

When iterating over the pages of a CollectionBook instance, each iteration will call the delegate to retrieve the next page (an instance of IPagedCollection<T>) of data. This uses the new iterators feature of C# 2.0.

Here is the code for the enumerator.

///<summary>
///Iterates through each page one at a time, calling the 
/// PagedCollectionSource delegate to retrieve the next page.
///</summary>
public IEnumerator<IPagedCollection<T>> GetEnumerator()
{
  if (this.pageSize <= 0)
    throw new InvalidOperationException
      ("Cannot iterate a page of size zero or less");

  int pageIndex = 0;

  //Fetch the first page; its MaxItems tells us how many pages exist.
  IPagedCollection<T> page 
    = pageSource(pageIndex, this.pageSize);
  int pageCount = (int)Math.Ceiling((double)page.MaxItems / 
    this.pageSize);
  yield return page;

  //We've already yielded page 0, so start at 1
  while (++pageIndex < pageCount)
  {
    yield return pageSource(pageIndex, this.pageSize);
  }
}

The following is an example of instantiating a CollectionBook using an anonymous delegate.

CollectionBook<string> book = new CollectionBook<string>(
    delegate(int pageIndex, int pageSize)
    {
        return pages[pageIndex];
    }, 3);
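To tie this back to the batch-flushing scenario, here is a self-contained sketch (with minimal stand-ins for the article’s types) that pages through eight items three at a time, “flushing” once per page. The data set and the StringBuilder standing in for a real writer are invented for illustration.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Text;

// Minimal stand-ins so this sketch compiles on its own;
// the article's real classes carry more members.
public interface IPagedCollection { int MaxItems { get; set; } }

public class PagedCollection<T> : List<T>, IPagedCollection
{
    public int MaxItems { get; set; }
}

public delegate PagedCollection<T> PagedCollectionSource<T>(int pageIndex, int pageSize);

public class CollectionBook<T> : IEnumerable<PagedCollection<T>>
{
    readonly PagedCollectionSource<T> pageSource;
    readonly int pageSize;

    public CollectionBook(PagedCollectionSource<T> pageSource, int pageSize)
    {
        this.pageSource = pageSource;
        this.pageSize = pageSize;
    }

    public IEnumerator<PagedCollection<T>> GetEnumerator()
    {
        int pageIndex = 0;
        PagedCollection<T> page = pageSource(pageIndex, pageSize);
        int pageCount = (int)Math.Ceiling((double)page.MaxItems / pageSize);
        yield return page;
        while (++pageIndex < pageCount)
            yield return pageSource(pageIndex, pageSize);
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public static class BatchDemo
{
    // Serialize eight items, flushing once per page rather than
    // once per item or once at the very end.
    public static int Run()
    {
        string[] data = { "a", "b", "c", "d", "e", "f", "g", "h" };
        CollectionBook<string> book = new CollectionBook<string>(
            delegate(int pageIndex, int pageSize)
            {
                PagedCollection<string> page = new PagedCollection<string>();
                page.MaxItems = data.Length;
                for (int i = pageIndex * pageSize;
                     i < Math.Min((pageIndex + 1) * pageSize, data.Length); i++)
                    page.Add(data[i]);
                return page;
            }, 3);

        int flushes = 0;
        StringBuilder writer = new StringBuilder();
        foreach (PagedCollection<string> page in book)
        {
            foreach (string item in page)
                writer.Append(item);   // "serialize" the item
            flushes++;                 // stands in for writer.Flush()
        }
        return flushes; // 8 items at a page size of 3 => 3 pages, 3 flushes
    }
}
```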

I wrote some source code and a unit test you can download that demonstrates this technique. I am including a C# project library that contains these classes and one unit test. To get the unit test to work, simply reference your unit testing assembly of choice and uncomment a few lines.

Technorati Tags: Tips, TDD, C#, Generics, Iterators

comments edit

There is a lot of buzz around Windows Live Writer.  I might as well throw my two cents into the fray.  Keep in mind that I do understand this is a beta.

The Good

  • It uses the paragraph tag <p> around your paragraphs rather than <br /> tags.
  • Keyboard shortcuts for switching views.
  • Integrates your blog’s style when previewing.  This is an impressive feature.  When I preview the post I am working on, it displays the post as if it were posted on my site.  I have seen a temporary post with the message This is a temporary post that was not deleted. Please delete this manually. (c3eaf4ab-0941-4881-9d17-d4f62bde069e) on a few blogs, which appears to be related to obtaining a blog’s style. My guess is that Windows Live Writer posts to the blog, deletes the post, and does a diff to figure out what changed.  That’s just my guess.
  • When adding a link, it has a field for the rel tag.  Nice!
  • It handles my unusual FTP setup using an FTP virtual directory. Not every tool deals with that correctly.
  • Inserting images allows you to apply a few simple effects. Very neat!
  • Support for Really Simple Discovery (RSD).  I just added RSD to Subtext.
  • Plugins!

The Needs Improvement

Since this is a beta, I didn’t feel right calling it “The Bad”.

  • Doesn’t let you specify the title attribute of a tag.
  • Inserts &nbsp; entities all over the place as you edit text.
  • What key do I hit to Find Next?
  • No Find and Replace? Really?
  • Be nice to have HTML highlighting in the HTML View.
  • If I markup a word with the <code> tag, then cut and paste that word elsewhere in the Normal view, it marks up the pasted word with the font tag rather than using the underlying tag.

The Wish

  • Using typographic characters for single quotes and double quotes à la MS Word.
  • Integration with CSS styles from the blog. Be nice to have a pulldown for various CSS classes.
  • Configurable shortcuts. I use the <acronym /> and <code /> tags all the time and have shortcuts for these configured in w.bloggar.

All in all, I really like it so far.  It has a nice look and feel to it and if they iron out a lot of these kinks, this may be the one to beat.

Unfortunately, this may cause some delays in Subtext 1.9 unless I can fight off the vicious urge to write some Microformat plugins. Why must these people introduce such interesting toys into my life! Where will I find the time? ;)

Speaking of which, Tim Heuer has already written a couple of plugins.

[Download Windows Live Writer]

comments edit

No, this is not a case of breaking and entering. With preparations for the next release of Subtext, apparently our SourceForge ranking has climbed into the top 15. Now keep in mind this is a ranking of the most active projects, not a ranking of the success or value of a project. It is an amalgamation of source control checkins, discussion forum activity, and other statistical factors.

Even so, it is satisfying to see our project listed as #13 in the most active projects last week. It is an indication that more and more people are getting involved and having fun building this puppy.

I hadn’t even noticed until several people pointed it out.

I imagine that number will dip after we release when we all take a collective sigh of relief.

comments edit

Or did I?

JackAce, a former coworker of mine and total poker addict, happened upon this fine piece of artwork and astutely snapped this picture for posterity with his T-Mobile (click through to go to the original flickr pic).

Picture of my last name graffiti'd on a park bench

Soon the entire city of Angels will know the name of Haack!!! BWAHAHA!

Unfortunately the idiot taggers didn’t follow my instructions. I told them to tag my URL, not my name.

Eeeeediots!

Good taggers are hard to find. You might promote your blog by writing good content, posting comments in other blogs, and other such nonsense, and hey that’s cool for you. But this is how we roll in L.A.

Westside!

comments edit

Here are a couple of useful methods for getting information about the caller of a method. The first returns the calling method of the current method. The second returns the type of the caller. Both of these methods require the System.Diagnostics and System.Reflection namespaces.

private static MethodBase GetCallingMethod()
{
  return new StackFrame(2, false).GetMethod();
}

private static Type GetCallingType()
{
  return new StackFrame(2, false).GetMethod().DeclaringType;
}
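As a hypothetical usage example (the Logger and App classes are invented for illustration), a logging helper can use the first method to record who called it. Note that the JIT is free to inline small methods, which would throw off the frame index, so this sketch pins the methods with MethodImplOptions.NoInlining.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.CompilerServices;

public static class CallerInfo
{
    // Frame 0 is this method and frame 1 is our direct caller,
    // so frame 2 is the method we want. NoInlining keeps the
    // frame count honest under JIT optimization.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static MethodBase GetCallingMethod()
    {
        return new StackFrame(2, false).GetMethod();
    }
}

public static class Logger
{
    // Hypothetical helper: prefixes the message with whoever called Log.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static string Log(string message)
    {
        MethodBase caller = CallerInfo.GetCallingMethod();
        return caller.DeclaringType.Name + "." + caller.Name
            + ": " + message;
    }
}

public static class App
{
    public static string Demo()
    {
        // Logger.Log identifies App.Demo as the caller.
        return Logger.Log("hello");
    }
}
```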

Pop Quiz! Why didn’t I apply the principle of code re-use and implement the second method like so?

public static Type GetCallingType()
{
    return GetCallingMethod().DeclaringType;
}

A virtual cigar and the admiration of your peers to the first person to answer correctly.

comments edit

A while ago Ian Griffiths wrote about an improvement to his TimedLock class in which he changed it from a class to a struct. This change resulted in a value type that implements IDisposable. I had a nagging question in the back of my mind at the time that I quickly forgot about. The question is wouldn’t instances of that type get boxed when calling Dispose?

So why would I wonder that? Well let’s take a look at some code and go spelunking in IL. The following humble struct is the star of this investigation.

struct MyStruct : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("Disposing");
    }
}

Let’s write an application that will instantiate this struct and call its Dispose method via the interface.

public class App
{
    public void DemoDisposable()
    {
        IDisposable disposable = new MyStruct();
        DisposeIt(disposable);
    }
    
    public void DisposeIt(IDisposable disposable)
    {
        disposable.Dispose();
    }
}

Finally we will take our trusty Reflector out and examine the IL for DemoDisposable (I will leave out the method header).

.maxstack 2
.locals init (
    [0] [mscorlib]System.IDisposable disposable1,
    [1] NeverLockThis.MyStruct struct1)
L_0000: ldloca.s struct1
L_0002: initobj NeverLockThis.MyStruct
L_0008: ldloc.1 
L_0009: box NeverLockThis.MyStruct
L_000e: stloc.0 
L_000f: ldarg.0 
L_0010: ldloc.0 
L_0011: call instance void 
    NeverLockThis.App::DisposeIt([mscorlib]System.IDisposable)
L_0016: ret 

Notice the box instruction at offset L_0009. As we can see, our struct gets boxed before the Dispose method is called.

The using statement requires that the object provided to it implements IDisposable. Here is a snippet from the MSDN2 docs on the subject.

The using statement allows the programmer to specify when objects that use resources should release them. The object provided to the using statement must implement the IDisposable interface. This interface provides the Dispose method, which should release the object’s resources.

I wondered if the using statement enforced the IDisposable constraint in the same way a method would. Let’s find out. We will add the following new method to the App class.

public void UseMyStruct()
{
    MyStruct structure = new MyStruct();
    using (structure)
    {
        Console.WriteLine(structure.ToString());
    }
}

This code now implicitly calls the Dispose method via the using block. Cracking it open with Reflector reveals…

.maxstack 1
.locals init (
    [0] NeverLockThis.MyStruct struct1,
    [1] NeverLockThis.MyStruct struct2)
L_0000: ldloca.s struct1
L_0002: initobj NeverLockThis.MyStruct
L_0008: ldloc.0 
L_0009: stloc.1 
L_000a: ldloca.s struct1
L_000c: constrained NeverLockThis.MyStruct
L_0012: callvirt instance string object::ToString()
L_0017: call void [mscorlib]System.Console::WriteLine(string)
L_001c: leave.s L_002c
L_001e: ldloca.s struct2
L_0020: constrained NeverLockThis.MyStruct
L_0026: callvirt instance void 
    [mscorlib]System.IDisposable::Dispose()
L_002b: endfinally 
L_002c: ret 
.try L_000a to L_001e finally handler L_001e to L_002c

As you can see, there is no sign of a box statement anywhere to be seen. Forgive me for ever doubting you .NET team. As expected, it does the right thing. I just had to be sure. But do realize that if you pass in a value type that implements IDisposable to a method that takes in IDisposable, a box instruction will occur.
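One practical note to close on: if you control the method, a generic type parameter constrained to IDisposable sidesteps the box entirely, because the compiler emits a constrained callvirt for value types, just as the using block does. Here is a minimal sketch; the static counter is invented purely so we can observe that Dispose actually ran.

```csharp
using System;

public struct MyStruct : IDisposable
{
    // Counter so we can observe that Dispose actually ran.
    public static int DisposeCount;
    public void Dispose() { DisposeCount++; }
}

public static class BoxingDemo
{
    // Passing the struct as IDisposable boxes it, as the IL above shows.
    public static void DisposeBoxed(IDisposable disposable)
    {
        disposable.Dispose();
    }

    // A generic method with a constraint avoids the box: for a value
    // type T the compiler emits a constrained callvirt instead.
    public static void DisposeUnboxed<T>(T disposable) where T : IDisposable
    {
        disposable.Dispose();
    }
}
```

Both calls run Dispose; only the first pays for a heap allocation to box the struct.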

comments edit

Today I was thinking about how much I enjoy doing business with a particular company and realized my natural inclination was not to blog about it. Yet when a company really drops the ball, I have no problem airing my criticisms. I don’t know about you, but I don’t like to be around people who are only critical and never offer praise. It gets to be tiresome.

So today I would like to give a shout out to Paychex (I am not in any way affiliated with them other than being a customer). We signed up with them to handle our payroll and everything has been extremely smooth. We had a few problems (not related to them) when we initially started and they went way above and beyond to help us straighten our books related to payroll.

Not only that, the payroll specialist who handles our account is extremely courteous and helpful on the phone. Just an all around cool guy. It is very telling when you are looking forward to calling in payroll because you like talking to the rep. Paychex just seems to understand customer service and gets it right.

So tell me, who deserves your shout out today?

asp.net, code, asp.net mvc comments edit

UPDATE: For a more full featured implementation of this pattern for ASP.NET Web Forms, check out the WebForms MVP project! It’s available as a NuGet package! Install-Package WebFormsMVP

Martin Fowler recently split the Model-View-Presenter pattern into two new patterns, Supervising Controller and Passive View. They are pretty much two different flavors of the MVP pattern differentiated by how much application logic is placed within the view.

The goal of this post is to demonstrate an end-to-end walk-through of the process of implementing the Supervising Controller pattern starting with a rough schematic. My goal is not to explain the Supervising Controller pattern in detail but to help others take it out of the “sounds nice in theory” bucket and move it into the “I can and will use this in a real project, I promise” bucket. There are many ways to implement the pattern for any given scenario, so keep in mind that this is not the one true way, just one way.

The Schematic

For this fictional scenario, I might as well pick an interface that I am familiar with, a blogging engine. In particular I will create a very very simple page to edit the body of a blog post and add and remove tags associated with the post. In trying to keep the example simple, I leave out all minor and extraneous details such as a title for the blog post. How important is a title, really?

Tag UI Schematic

The chicken scrawl above is a hand-drawn quick and dirty schematic for the user interface. No surprises here. There is a text area for entering the body of the blog post. There is a list of existing tags for the current post on the right. At the bottom of the list of tags is a text box. In order to add a new tag, the user can simply type in the name of the tag and click the add button. This will add a new tag to the list and associate it to the blog post. Note that when adding tags, any changes to the blog post should not be lost. The user can also remove tags by clicking the [x] next to the tag name.

When the user is finally ready to save the blog entry, the user clicks the Save button.

Defining the View

The next step is to analyze the schematic and define a view interface that can appropriately represent this interface. For the sake of this discussion, I will implement a single view that will represent this entire page. An alternative approach would be to break the page into two user controls and implement each user control independently with its own view and presenter.

Examining the schematic reveals what properties we need to populate the view interface. Obviously the view should have a getter and setter for the body text. It is probably good to have a property that returns a collection of tags as well. We will want a getter and setter for the tag textbox, and an event for each of the buttons.

The entire process of defining a view interface may take multiple iterations. I am going to skip that long process (mainly because I cannot bear to type that much) and show you what I ended up with. Before I created the specific view interface, I defined a more generic base interface, IView. This is the base interface I will use for all my ASP.NET views.

public interface IView
{
    event EventHandler Init;

    event EventHandler Load;

    bool IsPostBack { get; }

    void DataBind();

    bool IsValid { get;}
}

One thing to notice is this interface defines two events, Init and Load. Most implementations of MVP that I’ve seen place an Initialize method on the Presenter/Controller class that every concrete view implementation must remember to call at the right time.

But when you have an ASP.NET Control or Page implement this interface, you don’t have to remember to have every concrete view call anything, and you don’t have to implement these events yourself. It is already done for you by the ASP.NET runtime. You get the initialization call for free. Less code is better code, I always say.

One common complaint with such an approach is that events on an interface are hard to test. I thought the same until I discovered Rhino Mocks. Now testing events is quite easy.

Here is the final view interface for the blog post edit page.

public interface IPostEditView : IView
{
    string BlogPostBody { get;set;}

    ICollection<Tag> Tags { get; set; }

    string NewTag { get; set;}

    int BlogPostId {get;}

    event EventHandler PostSaved;

    event EventHandler TagCreated;

    event EventHandler<TagRemovedEventArgs> TagRemoved;
}

Again, no surprises. One thing to note is the BlogPostId property. In this scenario, it is the view that is responsible for figuring out which blog post to edit. This makes sense in a lot of scenarios as the id may be coming in from a query string or via some other means. However, in other scenarios, you might want the controller to be responsible for figuring out which post to edit.

Writing a Unit Test

Now our next major task is to implement the presenter. But before we do that, we should start off with a basic unit test. The first thing I want to test is that the presenter properly attaches to the events on the view. The following is the test I wrote. Notice that there is some code in the SetUp method that I am not presenting here. You can see that code later.

[Test]
public void VerifyAttachesToViewEvents()
{
    viewMock.Load += null;
    LastCall.IgnoreArguments();
    viewMock.PostSaved += null;
    LastCall.IgnoreArguments();
    mocks.ReplayAll();
    new PostEditController(viewMock, 
      this.dataServiceMock);
    mocks.VerifyAll();
}

Defining the Presenter

With the test in place, I can move forward and start implementing the controller. In practice, I only implement enough to get my test to pass, at which point I write another test and the cycle continues. In order not to bore you, I will skip ahead and show you the entire Controller implementation.

public class PostEditController
{
    BlogPost blogPost;
    IPostEditView view;
    IBlogDataService dataService;
    
    //Attaches this presenter to the view’s events.
    public PostEditController(IPostEditView view, 
      IBlogDataService dataService)
    {
        this.view = view;
        this.dataService = dataService;
        SubscribeViewToEvents();
    }
    
    void SubscribeViewToEvents()
    {
        view.Load += OnViewLoad;
        view.PostSaved += OnPostSaved;
        view.TagCreated += OnTagCreated;
        view.TagRemoved += OnTagRemoved;
    }

    void OnTagRemoved(object sender, TagRemovedEventArgs e)
    {
        this.dataService.RemoveTag(e.Title);
        this.blogPost = this.dataService.GetById(view.BlogPostId);
        view.Tags = blogPost.Tags;
        view.DataBind();
    }

    void OnPostSaved(object sender, EventArgs e)
    {
        Save();
    }

    void OnTagCreated(object sender, EventArgs e)
    {
        CreateAndAddTag();
    }

    void OnViewLoad(object sender, EventArgs e)
    {
        if (!view.IsPostBack)
        {
            LoadViewFromModel();
            view.DataBind();
        }
    }
    
    public Tag GetTagById(int id)
    {
        //Normally we’d probably just have a method 
        //of the service just return this.
        foreach (Tag tag in this.blogPost.Tags)
        {
            if(tag.Id == id)
                return tag;
        }
        return null;
    }

    void LoadViewFromModel()
    {
        this.blogPost = this.dataService.GetById(view.BlogPostId);
        view.Tags = blogPost.Tags;
        view.BlogPostBody = blogPost.Description;
    }
       
    void Save()
    {
        this.dataService.Save(view.BlogPostId, view.BlogPostBody);
        LoadViewFromModel();
        view.DataBind();
    }
    
    void CreateAndAddTag()
    {
        this.dataService.AddTag(view.NewTag);
        //Need to rebind the tags. retrieve tags from db.
        this.blogPost = this.dataService.GetById(view.BlogPostId);
        view.Tags = blogPost.Tags;
        view.NewTag = string.Empty;
        view.DataBind();
    }
}

Implementing the View

I have yet to implement the ASP.NET page that will implement the view, yet I am able to write a bunch of unit tests (which I will provide) against the presenter to make sure it behaves appropriately. This is the benefit of this pattern: much more of the UI logic is now testable.

Implementing the ASP.NET page is pretty straightforward. I drop a few controls on a page, wire up the controls declaratively to their data sources, and then implement the IPostEditView interface. As much as possible, I want to leverage ASP.NET declarative data binding. The point isn’t to force developers to write more code. Here is the code-behind for the page. I apologize for the code heaviness of this article.

public partial class _Default : System.Web.UI.Page, IPostEditView
{
    PostEditController controller;
    ICollection<Tag> tags;
    
    public _Default()
    {
         this.controller = 
             new PostEditController(this, new BlogDataService());
    }
    
    protected void Page_Load(object sender, EventArgs e)
    {
    }
    
    public void Update()
    {
        DataBind();
    }

    public int BlogPostId
    {
        get { return GetBlogId(); }
    }
   
    private int GetBlogId()
    {
        string idText = Request.QueryString["id"];
        int result;
        if(int.TryParse(idText, out result))
        {
            return result;
        }
        return 1;
    }

    public string BlogPostBody
    {
        get { return this.txtDescription.Text; }
        set { this.txtDescription.Text = value; }
    }

    public ICollection<Tag> Tags
    {
        get { return this.tags; }
        set { this.tags = value; }
    }

    public string NewTag
    {
        get { return this.txtNewTag.Text; }
        set { this.txtNewTag.Text = value; }
    }
    
    protected void OnSaveClick(object sender, EventArgs e)
    {
        EventHandler postSaved = this.PostSaved;
        if (postSaved != null)
            postSaved(this, EventArgs.Empty);
    }
    
    protected void OnAddTagClick(object sender, EventArgs e)
    {
        EventHandler tagCreated = this.TagCreated;
        if (tagCreated != null)
            tagCreated(this, EventArgs.Empty);
    }

    void OnTagDeleteClick(object source, 
                          RepeaterCommandEventArgs e)
    {
        EventHandler<TagRemovedEventArgs> tagRemoved 
        = this.TagRemoved;
        if(tagRemoved != null)
        {
            string tagTitle 
          = ((Literal)e.Item.FindControl("ltlTag")).Text;
            tagRemoved(this, new TagRemovedEventArgs(tagTitle));
        }
    }

    public event EventHandler TagCreated;

    public event EventHandler PostSaved;

    public event EventHandler<TagRemovedEventArgs> TagRemoved;
    
    protected override void OnInit(EventArgs args)
    {
        base.OnInit(args);
        this.btnSave.Click += OnSaveClick;
        this.btnNewTag.Click += OnAddTagClick;
        this.rptTags.ItemCommand += OnTagDeleteClick;
    }
}

When I finally compile all this and run it, here is the view I see in the browser.

Blog Post Editor Page

The benefits of this exercise become clear when you find bugs in the UI logic. Even while going through this exercise, I would find minor little bugs that I could expose by writing a failing unit test. I would then fix the bug which would cause the test to pass. That is a great way to work!

As an exercise for the reader, I left in a bug. When you leave the tag textbox blank but click the Add button, it adds a blank tag. We should just ignore a blank tag. Try writing a unit test that fails because it assumes a tag will not be added when the NewTag field is left blank. Then make the test pass by fixing the code in the Controller class. Finally, verify that the actual UI works by trying it out in the browser.

I apologize for glossing over so many details, but I did not want to turn this post into a book. If you want to read more on MVP, check out Bill McCafferty’s detailed treatment at the CodeProject. Also worth checking out is Jeremy D. Miller’s post on Model View Presenter.

Finally, to really understand what I am trying to present here, I encourage you to download the source code and try it out. I have included extensive comments and unit tests. This requires the free Web Application Project Model from Microsoft to compile and run.

The code doesn’t require a database connection, instead simulating a database via some static members. Please understand, that code is just a simulation and is not meant to be emulated in production code.

comments edit

I knew this question would come up, so I figure I would address it in its own blog post. Mike asks a great question about my MVP implementation (actually he asks two).

One observation…don’t you seem to be tying the presenter to the ASP.NET event model? If not, can you use the same presenter for a WinForms app?

The answer is that I am absolutely tying my presenter to ASP.NET.

Why?

Well, when I was first working on the article, I planned on creating an abstracted IView and presenter that would work for both ASP.NET and Windows Forms, but ran into a few problems. The biggest problem is that I rarely have to write Windows Forms applications. In fact, I almost never do. So why spend all this time on something I won’t need? I had to call YAGNI on my efforts.

Premature Generalization

Besides, I didn’t want to run afoul of Eric Gunnerson’s #1 deadly sin of programming, premature generalization. There is no point in writing an IView and Presenter to work with both winforms and ASP.NET unless I am also implementing concrete instances of both at the same time. Otherwise I will write it for one platform and hope it will work for the other. If I ever do implement it for the other, I will probably have to rewrite it anyways.

Parity is a rarity

Secondly, even if I did need it, there are some other issues to deal with. First, trying to write a single presenter for both ASP.NET and a WinForms app assumes the user interaction with the application and the view is going to be roughly the same. That is rarely the case. If I have to go to the trouble to write a Winforms app, I will certainly take advantage of its UI benefits.

Leaky Abstractions Rear Their Head

Thirdly, despite all the hoops that ASP.NET jumps through to abstract the fact that it is a web application and present an API that feels like a desktop platform, it is still a web application platform. The abstraction is leaky and trying to abstract it even more causes problems.

For example, in a Winforms view, you only need to call the Initialize method once because the data is persistent in memory. With an ASP.NET view by default, you have to essentially repopulate every data field every time a user clicks a button. Can you imagine a Winforms app written like that?

Of course you could more closely simulate the Winforms view in ASP.NET view by storing these fields in ViewState or, shudder, Session, but this then becomes a constraint on your ASP.NET view in order to support this pattern, forcing you to take a Winforms approach to a web based app. Ideally a presenter for an ASP.NET application should not have to assume that the ASP.NET view is going to store fields in a persistent manner.

Conclusion

So that is a long-winded answer to a short question. I believe if I had to, I could get the same Presenter to work for both a Winforms App and an ASP.NET app. These problems I mention are not insurmountable. However, I would need to be properly motivated to do so, i.e., have a real hard requirement to do so.

comments edit

A while ago I wrote that you should never lock a value type and never lock this. I presented a code snippet to illustrate the point but I violated the cardinal rule for code examples: compile and test it in context. Mea Culpa! Today in my comments, someone named Jack rightly pointed out that my example doesn’t demonstrate a deadlock due to locking this. As he points out, if the code were in a Finalizer, then my example would be believable.

To my defense, I was just testing to see if you were paying attention. ;) Nice find Jack!

My example was loosely based on Richter’s example in his article on Safe Thread Synchronization. Instead of rewriting his example, I will just link to it.

His example properly demonstrates the problem with a Finalizer thread attempting to lock on the object. However Jack goes on to say that locking on this in an ordinary method is fine. I still beg to differ, and have a better code example to prove it.

Again, suppose you carefully craft a class to handle threading internally. You have certain methods that carefully protect against reentrancy by locking on the this keyword. Sounds great in theory, no? However now you pass an instance of that class to some method of another class. That class should not have a way to use the same SyncBlock for thread synchronization that your methods do internally, right?

But it does!

Because of the way locking is implemented in the .NET Framework, an object’s SyncBlock is not private. Thus if you lock this, you are using the current object’s SyncBlock for thread synchronization, and that same SyncBlock is available to any external class holding a reference to the object.

Richter’s article explains this well. But enough theory you say, show me the code! I will demonstrate this with a simple console app that has a somewhat realistic scenario. Here is the application code. It simply creates a WorkDispatcher that dispatches a Worker to do some work. Simple, eh?

class Program
{
    static void Main()
    {
        WorkDispatcher dispatcher = new WorkDispatcher();
        dispatcher.Dispatch(new Worker());
    }
}

Next we have the carefully crafted WorkDispatcher. It has a single method Dispatch that takes a lock on this (for some very good reason, I am sure) and then dispatches an instance of IWorker to do something by calling its DoWork method.

public class WorkDispatcher
{
    int dispatchCount = 0;
    
    public void Dispatch(IWorker worker)
    {
        Console.WriteLine("Locking this");
        lock(this)
        {
            Thread thread = new Thread(worker.DoWork);
            Console.WriteLine("Starting a thread to do work.");
            dispatchCount++;
            Console.WriteLine("Dispatched " + dispatchCount);
            thread.Start(this);
            
            Console.WriteLine("Wait for the thread to join.");
            thread.Join();
        }
        Console.WriteLine("Never get here.");
    }
}

From the look of it, there should be no reason for this class to deadlock in and of itself. But now let us suppose this is part of a plugin architecture in which the user can plug in various implementations of the IWorker interface. The user downloads a really swell plugin from the internet and plugs it in there. Unfortunately, this worker was written by a malicious eeeeevil developer.

public class Worker : IWorker
{        
    public void DoWork(object dispatcher)
    {
        Console.WriteLine("Cause Deadlock.");
        lock (dispatcher)
        {
            Console.WriteLine("Simulating some work");
        }
    }
}

The evil worker disrupts the carefully constructed synchronization plans of the WorkDispatcher class. This is a somewhat contrived example, but in a real-world multi-threaded application, this type of scenario can quite easily surface.

If the WorkDispatcher was really concerned about thread safety and protecting its synchronization code, it would lock on something private that no external class could lock on. Here is a corrected example of the WorkDispatcher.

public class WorkDispatcher
{
    readonly object syncBlock = new object();
    int dispatchCount = 0;
    
    public void Dispatch(IWorker worker)
    {
        Console.WriteLine("Locking private sync object");
        lock (this.syncBlock)
        {
            Thread thread = new Thread(worker.DoWork);
            Console.WriteLine("Starting a thread to do work.");
            dispatchCount++;
            Console.WriteLine("Dispatched " + dispatchCount);
            thread.Start(this);
            
            Console.WriteLine("Wait for the thread to join.");
            thread.Join();
        }
        Console.WriteLine("Now we DO get here.");
    }
}

So Jack, if you are reading this, I hope it convinces you (and everyone else) that locking on this, even in a normal method, is a pretty bad idea. It won’t always lead to problems, but why risk it?

comments edit

Recently I wrote that I could not seem to get Log4Net to work with an external configuration file while running ASP.NET 2.0 in Medium Trust. It turns out that I should have been more explicit. I could not get Subtext to work with Log4Net in Medium Trust, but it had nothing to do with Medium Trust. Mea culpa!

My best guess is that there was a small breaking code change in Log4Net that led to this issue since we hadn’t changed the logging code. Here is a breakdown of what happened just in case you run into this problem.

In Subtext, we wrap the Log4Net classes in our own Log class which is in the Subtext.Framework assembly. This is how we declare a logger within a class.

private readonly static ILog log 
    = new Subtext.Framework.Logging.Log();

In the Subtext.Web project, we have the following attribute in AssemblyInfo.cs which specifies the location of the log4net configuration file.

[assembly: log4net.Config.XmlConfigurator(ConfigFile 
    = "Log4Net.config", Watch = true)]

This worked fine and dandy up until ASP.NET 2.0. When you use the attribute approach, you have to make a log4net call early to jump start the engine so to speak. An attribute just sits there until somebody is told to look at it and do something about it. In our case, the line of code I showed above does the trick within Global.asax.cs.

I started digging into the Log4Net code to figure out how it uses the attribute to find the configuration file. I finally ended up at this code.

public static ILog GetLogger(Type type) 
{
    return GetLogger(Assembly.GetCallingAssembly(), 
        type.FullName);
}

GetLogger searches the attributes on the calling assembly to find out which configuration file to use. Since the calling assembly in our case is always Subtext.Framework (since we wrap all calls to Log4Net), Log4Net searches the Subtext.Framework assembly for the XmlConfiguratorAttribute. Well that won’t work because we have the attribute declared on the Subtext.Web assembly.

My initial fix was to move the attribute declaration to AssemblyInfo.cs within Subtext.Framework. That worked, but I felt the attribute belonged in the web project, since that is the natural place to look when trying to figure out where the config file is specified. So I updated the code to call log4net directly within Global.asax.cs like so.

//This call is to kickstart log4net.
//log4net Configuration Attribute is in AssemblyInfo
private readonly static ILog log 
    = LogManager.GetLogger(typeof(Global));

static Global()
{
    //Wrap the logger with our own.
    log = new Subtext.Framework.Logging.Log(log);
}

I only point this out to show there are two ways to solve it, each with its pluses and minuses. If you run into this problem, hopefully this guide will help you.

comments edit

In response to my blog post on ViewState-backed properties and the null coalescing operator, Scott Watermasysk, in the comments of his own blog post, expresses a worry that the null coalescing operator opens one up to a race condition.

He provides a code example of a thread safe means of reading the ViewState that copies the value from the ViewState into a local variable before performing the null check.

That got me worried as well. Not so much about the ViewState but about applying the null coalescing operator against the Cache or Session. These are classes where you are more likely to run into thread contention. Take a look at this method.

public void Demo(ref object obj)
{
    Console.WriteLine((int)(obj ?? "null"));
}

The worry is that it might be possible for another thread to set the reference obj to null in between the null coalescing operator’s check for null and the subsequent cast, potentially causing a NullReferenceException.

However, looking at the generated IL (with my comments interspersed), it seems to me (and I am no IL expert so correct me if I am wrong) that everything is just fine. It seems to copy the value before it performs the null check. So it looks like the null coalescing operator is roughly equivalent to the code Scott uses.

.method public hidebysig instance void Demo(object& obj) cil managed
{
    .maxstack 8
    L_0000: nop
    L_0001: ldarg.1
    L_0002: ldind.ref
    L_0003: dup                 // copy value to stack
    L_0004: brtrue.s L_000c     // jump to L_000c if value isn’t null
    L_0006: pop
    L_0007: ldstr "null"
    L_000c: unbox.any int32
    L_0011: call void [mscorlib]System.Console::WriteLine(int32)
    L_0016: nop
    L_0017: ret
}

So it looks like this is a thread safe operation to me. I look forward to any IL experts informing me if I happen to be missing something.
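In C# terms, my reading of that IL is that the operator expands to something like the following, with the operand copied into a temporary before the null check (the method and variable names here are mine, purely for illustration):

```csharp
using System;

class CoalesceDemo
{
    // What "obj ?? \"null\"" effectively compiles to: the operand is
    // evaluated once into a temporary, so a concurrent write to the
    // original reference cannot sneak in between the null check and
    // the use of the value.
    public static object CoalesceExpanded(object obj)
    {
        object temp = obj;      // copy the reference once
        if (temp != null)
            return temp;        // non-null: use the copied value
        return "null";          // null: fall back to the default
    }
}
```

That single up-front copy is the same thing Scott's explicit local variable buys him.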

aspnet comments edit

This might be almost too obvious for many of you, but I thought I’d share it anyways. Back in the day, this was the typical code I would write for a value type property of an ASP.NET Control that was backed by the ViewState.

public bool WillSucceed
{
    get
    {
        if (ViewState["WillSucceed"] == null)
            return false;
        return (bool)ViewState["WillSucceed"];
    }
    set
    {
        ViewState["WillSucceed"] = value;
    }
}

I have seen code that tried to avoid the null check in the getter by initializing the property in the constructor. But since the getters and setters for the ViewState are virtual, this violates the warning against calling virtual methods in the constructor. You also can’t initialize it in the OnInit method because the property might be set declaratively which happens before Init.

With C# 2.0 out, I figured I could use the null coalescing operator to produce cleaner code. Here is what I naively tried.

public bool WillSucceed
{
    get
    {
        return (bool)ViewState["WillSucceed"] ?? false;
    }
    set
    {
        ViewState["WillSucceed"] = value;
    }
}

Well of course that won’t compile. It doesn’t make sense to apply the null coalescing operator on a value type that is not nullable. Now if I had stopped to think about it for a second, I would have realized how simple the fix would be, but I was in a hurry and quickly moved on and dropped the issue. What an eeediot! All I had to do was move the cast outside of the expression.

public bool WillSucceed
{
    get
    {
        return (bool)(ViewState["WillSucceed"] ?? false);
    }
    set
    {
        ViewState["WillSucceed"] = value;
    }
}

I am probably the last one to realize this improvement and everyone reading this is thinking, “well duh!”. But in case there is someone out there even slower than me, here you go!

And if I spend this much time trying to write a property, you gotta wonder how I get anything done. ;)

comments edit

Tonight I attended our local Los Angeles .NET Developers Group meeting for the first time in years. I pretty much never go to these meetings because I just haven’t found them worth dealing with the congestion of rush hour traffic in the UCLA area, which is really bad. Of course I should probably view user group meetings in the same way Jeff Atwood views conferences - I am not there for the talks, I am there to meet you.

However the local group does bring in some great speakers via INETA. Tonight’s meeting featured Rob Howard, the CEO of Telligent. I first met Rob at Mix06 and it was good to see him again at this meeting. He gave a great talk on ASP.NET tips and tricks. The one trick that stood out to me had nothing to do with his talk. I noticed at one point he had SQL code with expandable regions much like code regions via the #region directive. Instead of the pound sign, they used --region. I just tried that with a .sql file and it didn’t work for me. Probably requires a database project. I’ll have to ask him about that.

I recognized one guy in attendance who happened to be Michael Washington, a member of the DotNetNuke core team. He patiently listened to my constructive criticism and we discussed ideas for improvements to module development. One thing I hope to help him with is incorporating more unit tests into DNN code he is working on. Andrew Stopford will be pleased that I am trying to steer Michael towards MbUnit.

The challenge will be how to integrate unit tests into the ASP.NET Web Site model, since VS.NET Web Developer Express does not support class library projects. This may be a no-brainer, but I have never tried it. The tests will probably just be dropped in the App_Code folder, but will TD.NET run all the tests by right-clicking on App_Code and selecting Run Tests? I assume so, but we’ll see.

comments edit

I am a little late in reporting this, but I hadn’t realized the problem until I had to maintain an older project that used Log4Net 1.2.8. I upgraded it to log4net 1.2.10 and noticed it stopped working. I then found this comment in the log4net mailing list archives.

There were a number of breaking changes in 1.2.9

http://logging.apache.org/log4net/release/release-notes.html#1.2.9

In your config file “log4net.spi.LevelEvaluator” needs to be updated to “log4net.Core.LevelEvaluator”.

I hope changes that would break existing config files are few and far between.
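As an illustrative sketch based on that mailing list comment (the surrounding appender configuration is omitted here), the fix is a one-word namespace change in the evaluator’s type attribute:

```xml
<!-- log4net 1.2.8 and earlier -->
<evaluator type="log4net.spi.LevelEvaluator">
  <threshold value="ERROR" />
</evaluator>

<!-- log4net 1.2.9 and later -->
<evaluator type="log4net.Core.LevelEvaluator">
  <threshold value="ERROR" />
</evaluator>
```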

comments edit

The Evil Monkey In Chris's
Closet This story from Boing Boing just cracks me up. Apparently monkeys have been harassing passengers of India’s Delhi Metro. This has become such a problem that they have had to hire some langurs (a fierce looking primate) along with a langur wrangler to scare away the monkeys. Here is a description of one such incident.

In that incident, a monkey boarded a train at the underground Chawri Bazaar station and reportedly scared passengers by scowling at them for three stops. It then disembarked at Civil Lines station. Passengers had to be moved to another car while staff chased the dexterous creature, causing delays.

Bad monkey! Bad monkey!

As the Simpsons showed us, the monkeys are trying to take over the world. The image above is from Family Guy.

comments edit

If you read this blog outside of an aggregator, you might notice a few minor new tweaks. I am dogfooding Subtext 1.9 which runs on ASP.NET 2.0. We are very close to preparing a release, so I figured I would beta test this one on my own blog and see if everything works well.

A couple of new things you might notice: there is now a simple search field in the left-hand sidebar that displays its results in an overlaying div, and when you view an individual post, there are links to the next and previous posts. I have also added gravatar support to the comments.

It took me a while to warm up to the idea, but I really like the gravatars. I have participated in various message boards and sites (such as flickr) in which users choose an avatar to represent themselves. It is a small thing, but it adds to the fun and sense of identity for the visually focused. However, in most cases you have to set up a separate avatar for each site.

With gravatars, you register an avatar with their site and in any system that supports it, your avatar is displayed when you supply your email address to the software. Subtext takes your email address, creates a one-way MD5 hash of it, and then requests your gravatar from gravatar.com. If none is found, then a default placeholder is displayed. I will post a comment to this post as a demonstration.
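Roughly, building that request looks like the sketch below. This is not the actual Subtext code, just an illustration of the hashing step; the helper name is mine, and the exact URL format is whatever gravatar.com documents.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class GravatarHelper
{
    // Builds a gravatar.com URL from an email address by hashing the
    // trimmed, lowercased address with MD5. The hash is one-way, so
    // the email address itself is never exposed in the page markup.
    public static string GetGravatarUrl(string email)
    {
        byte[] hash;
        using (MD5 md5 = MD5.Create())
        {
            hash = md5.ComputeHash(
                Encoding.UTF8.GetBytes(email.Trim().ToLowerInvariant()));
        }

        // Format the 16 hash bytes as lowercase hex.
        StringBuilder sb = new StringBuilder();
        foreach (byte b in hash)
            sb.Append(b.ToString("x2"));

        return "https://www.gravatar.com/avatar/" + sb.ToString();
    }
}
```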

code, open source comments edit

Orchid Jeff “The CodingHorror” Atwood takes issue with the idea that software developers have any moral obligation to contribute to Open Source projects. And you know what? I agree. However I do take issue with his conclusion as well as some of the points that he makes in an attempt to bolster his argument.

The whole point of open source– the reason any open source project exists– is to save us time. To keep us from rewriting the same software over and over.

snip…

It’s exactly what open source is about: maximum benefit, minimum effort.

Well no. Not really. Back in the day, software was just an afterthought provided by hardware companies to complement and run their hardware. This software was widely shared freely, until companies began to impose restrictions with licensing agreements and started protecting their code by hiding the source.

Many developers grew frustrated as they lost control over their working environment and were unable to modify the programs to fit their needs. Hence Richard Stallman started the Free Software movement, hoping to give users the right to:

  1. run the program, for any purpose.
  2. modify the program to suit their needs. (To make this freedom effective in practice, they must have access to the source code, since making changes in a program without having the source code is exceedingly difficult.)
  3. redistribute copies, either gratis or for a fee.
  4. distribute modified versions of the program, so that the community can benefit from your improvements.

Thus originally, the whole point of open source was to provide users the freedom to regain control over software in order to get their jobs done.

Of course later, the movement split into the Free Software movement which was absolutist about the freedom issue, and the Open Source movement which sought to be more pragmatic and provide a business case for open source software. Open Source focuses on the intrinsic benefit of having communities of developers improving a software codebase.

Enough with the history lesson. And yes, I know I glossed over it with a big hand wave; that is called a rough history.

The highest compliment you can pay any piece of open source software is to simply use it, because it’s worth using. The more you use it, the more that open source project becomes a part of the fabric of your life, your organization, and ultimately the world.

Isn’t that the greatest contribution of all?

If I read a book you wrote because I found it worth reading, have I made a contribution? Hardly! Apart from the contribution I paid at Amazon.com for the pleasure of reading the book, I am merely using something worth using. That is not a contribution, it is just good sense.

Now if I post a positive review on my blog, that would be a contribution.

The way I see it, Jeff’s right, there is no moral obligation to contribute to an Open Source project. However I am not quite sure the puppy analogy he mentions suggests that. At the risk of yet another bad analogy, allow me to propose one that works for me. I think of Open Source software as being like a nice set of flowers in a common space such as a courtyard. Nobody who lives around the common space owns the flowers, yet they all enjoy the presence of the flowers.

Fortunately, one or two volunteers tend to the flowers and the flowers thrive. Some flowers seem to do fine with very little care, others require lots of care. But everybody benefits and there is no cost, nor moral obligation to others to contribute to the upkeep of the flowers.

However, should the caretakers decide to stop tending to the flowers, the others certainly have no right to complain to the caretakers. And while it is true that in some cases, others may take up the slack and tend to the flowers, or other flowers might grow as replacements, if nobody takes up the cause, or if the new flowers are simply weeds, then everybody loses out. Including those who had the time, means, and even desire to help with the flowers, but just never thought to because someone else was taking care of it.

So while I do not think contributing to Open Source software is a moral obligation, I do think it is a worthy practical consideration. No offense to my homie, but I think it is Jeff who misunderstands the economics of open source. Minimum effort does not equate to free in cost.

For example, should you begin to rely more and more on some piece of Open Source software, consider the effort to replace the software (which might tack on a monetary cost for a proprietary replacement) and learn a new toolkit if development of the software dies out. Hardly a minimal effort.

True, a contribution in this situation is no guarantee that a project will succeed. But doesn’t the same risk apply to purchasing proprietary software? It certainly applied to contributing to a political party (if you were a Democrat in the last couple of elections).

And while it may be infeasible for a large portion of the audience (Jeff pulls the figure 99% out of his rear, which is probably close to the truth) to contribute the extravagance of a bug fix, for the remainder for whom it is feasible, it is certainly worth considering. Every contribution not only helps in a small way to keep the project viable, but it improves the software for your own purposes. It is an entirely selfish benefit.

So again, please do not confuse exhortations to contribute to Open Source as a claim for moral obligation. I honestly believe that those who contribute will benefit from the contribution as much as the recipients of the contribution, which is why I am such a big proponent for contributing.

If you have the means, why not water the flowers for a bit? There are many forms of contributing to an open source project. I list several on the Subtext contribution web page.