Tyler, an old friend and an outstanding contractor for VelocIT, recently wrote a post suggesting that you get better performance by passing an array of objects to the Microsoft Data Access Application Block methods rather than an array of SqlParameter instances. He cited this article.

The article suggests that instead of this:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlParameter[] parameters = new SqlParameter[]
  {
    new SqlParameter("@id", aUser.Id)
    , new SqlParameter("@name", aUser.Name)
    , new SqlParameter("@name", aUser.Email)
    , new SqlParameter("@name", aUser.LastLogin)
    , new SqlParameter("@name", aUser.LastLogOut)
  };

  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , parameters);
}

You should do something like this for performance reasons:

public void GetWithObjects(SystemUser aUser)
{
  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , aUser.Id
    , aUser.Name
    , aUser.Email
    , aUser.LastLogin
    , aUser.LastLogout);
}
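
I haven’t dug into the Data Access Application Block’s source for this post, but the usual trick behind an overload that accepts bare values is to ask SQL Server for the procedure’s parameter list once, cache it, and bind the values positionally on every call. Here is a minimal sketch of that technique; ObjectParamHelper, the cache key format, and the clone-on-reuse details are my own assumptions for illustration, not the application block’s actual code:

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public static class ObjectParamHelper
{
  // Parameter templates cached per connection string + procedure name.
  private static readonly Hashtable paramCache =
    Hashtable.Synchronized(new Hashtable());

  public static int ExecuteNonQuery(string connectionString
    , string procName
    , params object[] values)
  {
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(procName, connection))
    {
      command.CommandType = CommandType.StoredProcedure;
      connection.Open();

      string key = connectionString + ":" + procName;
      SqlParameter[] template = (SqlParameter[])paramCache[key];
      if (template == null)
      {
        // First call: one extra round trip to discover the parameters.
        SqlCommandBuilder.DeriveParameters(command);
        template = new SqlParameter[command.Parameters.Count];
        for (int i = 0; i < template.Length; i++)
          template[i] = (SqlParameter)((ICloneable)command.Parameters[i]).Clone();
        paramCache[key] = template;
      }
      else
      {
        // Subsequent calls: reuse clones of the cached parameters.
        foreach (SqlParameter p in template)
          command.Parameters.Add((SqlParameter)((ICloneable)p).Clone());
      }

      // Index 0 is @RETURN_VALUE, so assign the caller's values starting
      // at index 1, in declaration order (null handling omitted).
      for (int i = 0; i < values.Length; i++)
        command.Parameters[i + 1].Value = values[i];

      return command.ExecuteNonQuery();
    }
  }
}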

Naturally, when given such advice, I fall back on the first rule of good performance from the perf guru himself, Rico Mariani: Rule #1 is to measure. So I mentioned to Tyler that I’d love to see metrics on both approaches. He posted the results on his blog.

Calling the methods included in a previous post, 5000 times each:

With parameters took 1203.125 milliseconds.
With objects took 1250 milliseconds.
Objects took 46.875 milliseconds longer.

20000 times each:

With parameters took 4859.375 milliseconds.
With objects took 5015.625 milliseconds.
Objects took 156.25 milliseconds longer.
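
Tyler’s numbers speak for themselves, but for the curious, the general shape of such a measurement is simple. Here is a rough sketch using Stopwatch, assuming the two methods above live on a hypothetical UserRepository class; the warm-up and loop structure are my assumptions, not necessarily Tyler’s harness:

using System;
using System.Diagnostics;

public static class ParamPerfTest
{
  public static void Run(SystemUser aUser, int iterations)
  {
    UserRepository repository = new UserRepository();

    // Warm up both code paths so JIT compilation and connection
    // pooling don't skew the first timed loop.
    repository.GetWithSqlParams(aUser);
    repository.GetWithObjects(aUser);

    Stopwatch watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
      repository.GetWithSqlParams(aUser);
    watch.Stop();
    Console.WriteLine("With parameters took {0} milliseconds."
      , watch.Elapsed.TotalMilliseconds);

    watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
      repository.GetWithObjects(aUser);
    watch.Stop();
    Console.WriteLine("With objects took {0} milliseconds."
      , watch.Elapsed.TotalMilliseconds);
  }
}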

The results show that the performance difference is negligible. But even before seeing the numbers, I would have agreed with the article’s advice to choose the second approach, though for a different reason: it is a lot less code. As I have said before, Less code is better code.

I tend to optimize for productivity all the time, but optimize for performance only after careful measurement reveals a bottleneck.

There’s also a basic economics question hidden in this story. The first approach does seem to squeeze out slightly better performance, but at what cost? That’s a lot of extra code to write for a savings of around 47 milliseconds across 5000 method calls. Is it really worth it?

This particular case may not be the best illustration of the principle of wasting time optimizing at the expense of productivity, because the first approach has one redeeming quality worth mentioning.

Because the first approach explicitly names each parameter, the parameters can be listed in any order, whereas the second approach requires that the values be listed in the same order the parameters are declared in the stored procedure. Based on that, some may find the first approach preferable, as the sketch below shows. Me, I prefer the second approach because it is cleaner and easier to read, and I don’t find keeping the order intact to be much more work than getting the parameter names correct.
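
To make that trade-off concrete, here are both calls against the same procedure. The reordering below is hypothetical, purely for illustration:

// Named parameters: the array order is irrelevant, so this call survives
// any reordering of the stored procedure's parameter list.
SqlParameter[] parameters = new SqlParameter[]
{
  new SqlParameter("@lastLogout", aUser.LastLogout)
  , new SqlParameter("@id", aUser.Id)
  , new SqlParameter("@email", aUser.Email)
  , new SqlParameter("@lastLogin", aUser.LastLogin)
  , new SqlParameter("@name", aUser.Name)
};

SqlHelper.ExecuteNonQuery(Settings.ConnectionString
  , CommandType.StoredProcedure
  , "User_Update"
  , parameters);

// Positional values: these must match the order declared in User_Update.
// If someone swaps @name and @email in the procedure, this call can still
// succeed while quietly writing the name into the email column.
SqlHelper.ExecuteNonQuery(Settings.ConnectionString
  , CommandType.StoredProcedure
  , "User_Update"
  , aUser.Id
  , aUser.Name
  , aUser.Email
  , aUser.LastLogin
  , aUser.LastLogout);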

But that’s just me.