
I’m currently doing some app building with ASP.NET MVC in which I try to cover a bunch of different scenarios. One scenario in particular I wanted to cover is approaching an application using a Test Driven Development approach. I especially wanted to cover using various Dependency Injection frameworks, to make sure everything plays nice.

Since I’ve already seen demos with Castle Windsor and Spring.NET, I wanted to give StructureMap a try. Here is the problem I’ve run into.

Say I have a class like so:

public class HomeController : IController
{
  MembershipProvider membership;
  public HomeController(MembershipProvider provider)
  {
    this.membership = provider;
  }
}

As you can see, this class has a dependency on the abstract MembershipProvider class, which is passed in via a constructor argument. In my unit tests, I can use RhinoMocks to dynamically create a mock that inherits from MembershipProvider and pass that mock to the controller. It’s nice for testing.
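For illustration, here is roughly what that looks like with RhinoMocks; treat this as a sketch rather than a complete test:

```csharp
MockRepository mocks = new MockRepository();

// MembershipProvider is abstract, so RhinoMocks generates a
// dynamic subclass with stubbed-out members at runtime.
MembershipProvider provider = mocks.DynamicMock<MembershipProvider>();
mocks.ReplayAll();

// The controller never knows it received a fake.
HomeController controller = new HomeController(provider);
```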

But eventually, I need to use this class in a real app and I would like a DI framework container to create the controller for me. Here is my StructureMap.config file with some details left out.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication" />
  </PluginFamily>
</StructureMap>

If I add an empty constructor to HomeController, this code allows me to create an instance of HomeController like so.

HomeController c = 
  ObjectFactory.GetNamedInstance<IController>("HomeController")
  as HomeController;

But when I remove the empty constructor, StructureMap cannot create an instance of HomeController. I would need to tell StructureMap (via StructureMap.config) how to construct an instance of MembershipProvider to pass into the constructor for HomeController.

Normally, I would just specify a type to instantiate as another PluginFamily entry. But what I really want to happen in this case is for StructureMap to call a method or delegate and use the value returned as the constructor argument.

In other words, I pretty much want something like this:

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication">
      <Instance>
        <Property Name="provider">
          <![CDATA[
            return Membership.Provider;
          ]]>
        </Property>
      </Instance>
    </Plugin>
  </PluginFamily>
</StructureMap>

The made up syntax I am using here is stating that when StructureMap is creating an instance of HomeController, execute the code in the CDATA section to get the instance to pass in as the constructor argument named provider.
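Expressed in code rather than XML, the same wish might look something like the following. To be clear, ConstructedBy here is every bit as made up as the XML above; I am describing the API I want, not one I know to exist:

```csharp
// Made-up, wished-for API -- not actual StructureMap syntax.
// The idea: register a delegate whose return value is used as
// the constructor argument when building HomeController.
ObjectFactory.ForRequestedType<MembershipProvider>()
    .ConstructedBy(delegate { return Membership.Provider; });
```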

Does anyone know if something like this is possible with any of the Dependency Injection frameworks out there? Whether via code or configuration?

Tags: TDD , Dependency Injection , IoC , StructureMap


Frans Bouma wrote an interesting response to my last post, Writing Testable Code Is About Managing Complexity entitled Correctness Provability should be the goal, not Testability.

He states in his post:

When focusing on testability, one can fall into the trap of believing that the tests prove that your code is correct.

God I hope not. Perhaps someone could fall into that trap in theory, but that same person would probably also fall for the modestly priced bridge I have to sell them in the Bay Area. This seems like a straw man fallacy to me.

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

Instead, what you often hear testing proponents talk about is confidence. For example, in my post Unit Tests cost More To Write I make the following point (emphasis added):

They reduce the true cost of software development by promoting cleaner code with less bugs. They reduce the TCO by documenting how code works and by serving as regression tests, giving maintainers more confidence to make changes to the system.

Frans goes on to say (emphasis mine)…

Proving code to be correct isn’t easy, but it should be your main focus when writing solid software. Your first step should be to prove that your algorithms are correct. If an algorithm fails to be correct, you can save yourself the trouble typing the executable form of it (the code representing the algorithm) as it will never result in solid correct software.

Before I address this, let me tell you a short story from my past. I promise it’ll be brief.

When I was a young bright eyed bushy tailed math major in college, I took a fantastic class called Differential Equations that covered equations which describe continuous phenomena in one or more dimension.

During the section on partial differential equations, we wracked our brains going through crazy mental gymnastics in order to find an explicit formula that solved a set of equations with multiple independent variables. With these techniques, it seemed like we could solve anything. Until of course, near the end of the semester when the cruel joke was finally revealed.

The sets of equations we solved were heavily contrived examples. As difficult as they were to solve, it turns out that only the most trivial sets of differential equations can be solved by an explicit formula. All that mental gymnastics we were doing up until that point was essentially just mental masturbation. Real world phenomena are hardly ever described by sets of equations that line up so nicely.

Instead, mathematicians use techniques like Numerical Analysis (the Monte Carlo Method is one classic example) to attempt to find approximate solutions with reasonable error bounds.

Disillusioned, I never ended up taking Numerical Analysis (the next class in the series), choosing to try my hand at studying stochastic processes as well as number theory at that point.

The point of this story is that trying to prove the correctness of computer programs is a lot like trying to solve a set of partial differential equations. It works great on small trivial programs, but is incredibly hard and costly on anything resembling a real world software system.

Not only that, what exactly are you trying to prove?

In mathematics, a mathematician will take a set of axioms, a postulate, and then spend years converting caffeine into a long beard (whether you are male or female) and little scribbles on paper (which mathematicians call equations) that hopefully result in a proof that the postulate is true. At that point, the postulate becomes a theorem.

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker, because the only unambiguous complete spec for what an application does is code. Even then, you have to ask, how do you prove that the specification does what the customers want?

This is why proving code should not be your main focus, unless, maybe, you write code for the Space Shuttle.

Like differential equations, it’s too costly to explicitly prove code in all but the most trivial cases. If you are an algorithms developer writing the next sort algorithm, perhaps it is worth your time to prove your code because that cost is amortized over the life of such a small reusable unit of code. You have to look at your situation and see if the cost is worth it.

For large real world data driven applications, proving code correctness is just not reasonable because it calls for an extremely costly specification process, whereas tests are very easy to specify and cheap to write and maintain.

This is somewhat more obvious with an example. Suppose I asked you to write a program that could break a CAPTCHA. Writing the program is very time consuming and difficult. But first, before you write the program, what if I asked you to write some tests for the program you will write. That’s trivially easy, isn’t it? You just feed in some CAPTCHA images and then check that the program spits out the correct value. How do you know your tests are correct? You apply the red-green-refactor cycle along with the principle of triangulation. ;)
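To make that concrete, such a test might look like the following sketch, where CaptchaBreaker and the image files are entirely made up for the example:

```csharp
[Test]
public void DecodesKnownCaptchaImages()
{
  // CaptchaBreaker is the hypothetical (and very hard to write)
  // program under test. The expected values are known because we
  // generated or hand-labeled these images ahead of time.
  CaptchaBreaker breaker = new CaptchaBreaker();

  Assert.AreEqual("XK4F9", breaker.Decode("captcha1.png"));
  Assert.AreEqual("7TQ2M", breaker.Decode("captcha2.png"));
}
```

Writing the test took a minute. Writing code that makes it pass could take months.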

As we see, specifying the tests is easy. Now, how would you go about proving the program’s correctness? Is that anywhere near as easy as testing it?

As I said before, testing doesn’t give you a proof of correctness, but like the approaches of numerical analysis, it can give you an approximate proof with reasonable error bounds, aka, a confidence factor. The more tests, the smaller the error bounds and the better your confidence. This is a way better use of your time than trying to prove everything you write.

Technorati Tags: TDD,Math,Code Correctness


When discussing the upcoming ASP.NET MVC framework, one of the key benefits I like to tout is how this framework will improve testability of your web applications.

The response I often get is the same question I get when I mention patterns such as Dependency Injection, IoC, etc…

Why would I want to do XYZ just to improve testability?

I think to myself in response

Just to improve testability? Isn’t that enough of a reason!

That’s how excited I am about test driven development. Testing seems enough of a reason for me!

Of course, when I’m done un-bunching my knickers, I realize that despite all the benefits of unit testable code, the real benefit of testable code is how it helps tackle software development’s biggest problem since time immemorial: managing complexity.

There are two ways that testable code helps manage complexity.

1. It directly helps manage complexity, assuming that you not only write testable code, but also write the unit tests to go along with it. With decent code coverage, you now have a nice suite of regression tests, which helps manage complexity by alerting you to potential bugs introduced during maintenance of a large project long before they become a problem in production.

2. It indirectly helps manage complexity because writing testable code forces you to apply the principle of separation of concerns.

Separating concerns within an application is an excellent tool for managing complexity when writing code. And writing code is complex!

The MVC pattern, for example, separates an application into three main components: the Model, the View, and the Controller. Not only does it separate these three components, it outlines the loosely coupled relationships and communication between these components.

Key Benefits of Separating Concerns

This separation combined with loose coupling allows a developer to manage complexity because it allows the developer to focus on one aspect of the problem at a time.

Martin Fowler writes about this benefit in his paper, Separating User Interface Code (pdf):

A clear separation lets you concentrate on each aspect of the problem separately—and one complicated thing at a time is enough. It also lets different people work on the separate pieces, which is useful when people want to hone more specialized skills.

The ability to divide work into parallel tracks is a great benefit of this approach. In a well separated application, if Alice needs time to implement the controller or business logic, she can quickly stub out the model so that Bob can work on the view without being blocked by Alice. Meanwhile, Alice continues developing the business layer without the added stress that Bob is waiting on her.
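To make that concrete, Alice’s stub might be nothing more than a class that returns canned data. IProductRepository and Product here are invented for the sake of the example:

```csharp
public interface IProductRepository
{
  IList<Product> GetProducts();
}

// Alice's quick stub: returns canned data so Bob can build and
// style the view before the real data access code exists.
public class StubProductRepository : IProductRepository
{
  public IList<Product> GetProducts()
  {
    return new List<Product>
    {
      new Product { Name = "Widget", Price = 9.99m },
      new Product { Name = "Gadget", Price = 19.99m }
    };
  }
}
```

When Alice finishes the real repository, it slides in behind the same interface and Bob’s view never notices the difference.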

Bring it home

The MVC example above talks about separation of concerns on a large architectural scale. But the same benefits apply on a much smaller scale outside of the MVC context. And all of these benefits can be yours as a side-effect of writing testable code.

So to summarize, when you write testable code, whether it is via Test Driven Development (TDD) or Test After Development, you get the following side effects.

  1. A nice suite of regression tests.
  2. Well separated code that helps manage complexity.
  3. Well separated code that helps enable concurrent development.

Compare that list of side effects with the list of side effects of the latest pharmaceutical wonder drug for curing restless legs or whatever. What’s not to like!?

Technorati Tags: TDD, Separation of Concerns, MVC, Loose Coupling, Design


In his book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (title long enough for you?), James Surowiecki argues that decisions made by a crowd are generally better than those made by any single individual in the group.

Seems like a lot of theoretical hogwash until you see this thesis put to action in the real world via a prediction market. A prediction market (also called a decision market) is, as its name implies, a market created specifically to predict the likelihood of a specific outcome. They are most successful when participants have something invested in their decisions. Money often does the trick, but is not necessary.

The Hollywood Stock Exchange, a prediction market focused around the film industry, demonstrated the potential accuracy of such markets when it went 32 for 39 in predicting 2006’s Oscar Nominees (in the major categories that are the only ones people care about). They didn’t do so bad in 2007 either.

It’s no surprise then that there is a lot of research going on in predictive markets. Preliminary results show they seem to work very well when properly structured.

Contrast the success of prediction markets to another decision making process that involves a group. A recent comment by a reader got me thinking on this subject as he described one of the symptoms of this process.

If you try to satisfy all parties, you’ll end up with mediocre product that does not satisfy everybody.

An even stronger way to state this is to say that trying to satisfy everyone leaves almost everyone unsatisfied.

This is one typical result of a decision making process known as Groupthink. Another colorful term used to describe it is decision by committee. These terms do not, by any means, have a positive connotation.

There seems to be a paradox here between the power of markets and the ineptitude of groupthink. Why do two somewhat similar means of decision making vary so widely in accuracy?

Perhaps counter-intuitively, it has a lot to do with the coercive effects of seeking consensus in decision making. Groupthink is often hamstrung by the drive for consensus, whereas participants in an effective market are largely independent of one another.

Before I continue, let me head off that insightful comment about how I am trying to promote anarchy (and save your fingers the pain of some typing) by pointing out that this is not an indictment of all consensus driven decision making. In many cases, consensus is absolutely required. It doesn’t help if a decision is optimal but nobody is willing to participate in its result. This sort of decision making is especially essential in one particular form of decision making - negotiation, which is well covered by the book Getting to Yes: Negotiating Agreement Without Giving In.

The problem lies in applying negotiation to make decisions that should not be made by consensus. At least not without applying non-consensus based fact gathering first, so that negotiations can occur using data gathered in a dispassionate manner.

So what makes groupthink particularly inadequate? Social psychologist Irving Janis identified the following eight symptoms of groupthink that hamstring good decision making:

  1. Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
  2. Collective rationalization – Members discount warnings and do not reconsider their assumptions.
  3. Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
  4. Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.
  5. Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
  6. Self-censorship – Doubts and deviations from the perceived group consensus are not expressed.
  7. Illusion of unanimity – The majority view and judgments are assumed to be unanimous.
  8. Self-appointed ‘mindguards’ – Members protect the group and the leader from information that is problematic or contradictory to the group’s cohesiveness, view, and/or decisions.

One symptom not listed here, though it probably fits in with #6, is the fear of causing offense.

One theme in common in the list is that many of these symptoms are the result of seeking some form of consensus with others within the group. The net effect is that all of these symptoms provide incentives not to deviate from the group.

Prediction Markets counter these symptoms through independence and proper incentive. Because the participants in a market are not directly working together, there is no way for peer pressure to burrow itself into the decision making process. There is no fear of offending others in such a market. Putting something valuable (such as money) on the line (or having a huge investment in the correct outcome, such as predicting disaster) has an eye-opening habit of making people truthful and helps to avoid self-censorship etc…

Another necessary factor in the success of markets, as pointed out in an insightful comment to this post, is diversity. Markets and committees that lack diversity of viewpoint fail to take advantage of all available information in a meaningful way and succumb to a form of tunnel vision. This phenomenon is very evident in a leader who surrounds him or herself with sycophants and thus makes decisions based on what they want to hear rather than on facts.

Prediction markets are not perfect and are not suitable for all decisions. In fact, they are probably not suitable for most decisions as the cost to set up a market isn’t justified for everyday decisions like whether you should get the bottle of beer or the pitcher of sangria.

However, understanding what makes predictive markets work and what the symptoms of groupthink look like can be of great benefit the next time you are making a decision within a group and start to see groupthink emerging.

Technorati Tags: Decision Making,Consensus,Predictive Markets,Groupthink


Seen in Twitter today

CodingHorror: Guitar Hero III for the PC/Mac has truly insane system requirements http://tinyurl.com/2c42fm and USB guitar = dongle :(

LazyCoder: @codinghorror So for only 1 billion times the computing power required to put a man on the moon, you too can fake guitar playing.

It is a good thing we put all that computing power to good use.

Technorati Tags: Humor


Last week I was busy in Las Vegas at the DevConnections/OpenForce conferences, and contrary to that pithy but over-used catch-phrase, what happens at a conference in Vegas should definitely not stay in Vegas; it should be blogged (at least the parts of the sessions that won’t get anyone in trouble).

It was my first time at DevConnections and I was not disappointed. This was also the first ever OpenForce conference, put on by the DotNetNuke corporation, and from all appearances it was quite a success.

Carl Franklin and Rob Conery; Rob Conery and Rick Strahl

The first night, as I passed by the hotel bar, I noticed Carl Franklin, well known for .NET Rocks, a podcast on .NET technologies. As I had an appearance on the show, it was nice to finally meet the guy in person.

Also met up with the Flyin’ Hawaiians, Rob Conery (from Kauai) and Rick Strahl (from Maui).

This is a big part of the value and fun in such conferences. The back-channel conversations on various topics that provide new insights you might not have otherwise received.

The next day, I attended the ASP.NET MVC talk given by Scott Hanselman and Eilon Lipton. The talk was well attended for a technology in which there are no CTP bits available yet. There were quite a few who stuck around to ask questions. I also attended their next talk that was part 3 of 3 in a series in which Scott Guthrie gave the first two parts.

Scott and Eilon are like the Dean Martin and Jerry Lewis of conference talks (I won’t say which is which). They play off each other quite well in giving a humorous, but informative, talk.

The best part for me was watching Eilon squirm as a star-struck attendee asked to have her picture taken with him after having done the same with Scott Hanselman. I think we expect this sort of geek-worship with Scott, but Eilon seemed genuinely uncomfortable. She was quite excited to get a pic with the guy who wrote the UpdatePanel!

An admirer gazes at Scott; an admirer gazes at Eilon

One experience that was particularly fun for me: I got to go around the exhibitor floor, camera-man in tow, to interview attendees about their impressions of the conference. Normally such work goes to charismatic and well-spoken guys like Carl Franklin and Scott Hanselman, but both were too busy at the time, so Scott pointed the cameraman to me.

I try to remain open to new experiences, even ones that take me out of my comfort zone (I have stage fright). I walked around with a microphone interviewing people and found that the attendees really loved this conference and felt they got a lot out of it. At least the ones willing to talk about it on camera. ;)

When asked what their favorite talk was, a couple attendees mentioned the MVC talk, which was good to hear.


While I was on the floor, they held a drawing and gave away a Harley. The winner happened to be a Harley-loving motorcycle rider, so it worked out pretty well.

On Wednesday and Thursday, I participated on two panels for the OpenForce conference on the topic of Open Source. They taped these panels so hopefully the videos will be up soon. I’ll write more about what we discussed later. I need to get some sleep as after leaving Vegas, I flew out the next day to Redmond and it is very late.

Technorati Tags: DevConnections,OpenForce


While at DevConnections/OpenForce, I had some great conversations with various people on the topic of ASP.NET MVC. While many expressed their excitement about the framework and asked when they could see the bits (soon, I promise), there were several who had mixed feelings about it. I relish these conversations because it helps highlight the areas in which we need to put more work in and helps me become a better communicator about it.

One thing I’ve noticed is that most of my conversations focused too much on the MVC part of the equation. Dino Esposito (who I met very briefly), wrote an insightful post pointing out that it isn’t the MVC part of the framework that is most compelling:

So what’s IMHO the main aspect of the MVC framework? It uses a REST-like approach to ASP.NET Web development. It implements each request to the Web server as an HTTP call to something that can be logically described as a “remote service endpoint”. The target URL contains all that is needed to identify the controller that will process the request up to generating the response–whatever response format you need. I see more REST than MVC in this model. And, more importantly, REST is a more appropriate pattern to describe what pages created with the MVC framework actually do.

In describing the framework, I’ve tended to focus on the MVC part of it and the benefits in separation of concerns and testability. However, others have pointed out that by keeping the UI thin, a good developer could do all these things without MVC. So what’s the benefit of the MVC framework?

I agree, yet I still think that MVC provides even greater support for Test Driven Development than before both in substance and in style, so even in that regard, there’s a benefit. I need to elaborate on this point, but I’ll save that for another time.

But MVC is not the only benefit of the MVC framework. I think the REST-like nature is a big selling point. Naturally, the next question is, well why should I care about that?

Fair question. Many developers won’t care and perhaps shouldn’t. In those cases, this framework might not be a good fit. Some developers do care and desire a more REST-like approach. In this case, I think the MVC framework will be a good fit.

This is not a satisfying answer, I know. In a future post, I hope to answer that question better. In what situations should developers care about REST and in which situations, should they not? For now, I really should get some sleep. Over and out.

Technorati Tags: ASP.NET MVC,REST


UPDATE: This content is a bit outdated as these interfaces have changed in ASP.NET MVC since the writing of this post.

One task that I relish as a PM on the ASP.NET MVC project is to build code samples and sample applications to put the platform through its paces and try to suss out any problems with the design or usability of the API.

Since testability is a key goal of this framework, I’ve been trying to apply a Test Driven Development (TDD) approach as I build out the sample applications. This has led to some fun discoveries in terms of using new language features of C# to improve my tests.

For example, the MVC framework will include interfaces for the ASP.NET intrinsics. So to mock up the HTTP context using Rhino Mocks, you might do the following.

MockRepository mocks = new MockRepository();
      
IHttpContext context = mocks.DynamicMock<IHttpContext>();
IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();
IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();

SetupResult.For(context.Request).Return(request);
SetupResult.For(context.Response).Return(response);
SetupResult.For(context.Server).Return(server);
SetupResult.For(context.Session).Return(session);

mocks.ReplayAll();
//Ready to use the mock now

Kind of a mouthful, no?

Then it occurred to me. I should use C# 3.0 Extension Methods to create a mini DSL (to abuse the term) for building HTTP mock objects. First, I wrote a simple proof of concept class with extension methods.

public static class MvcMockHelpers
{
  public static IHttpContext 
    DynamicIHttpContext(this MockRepository mocks)
  {
    IHttpContext context = mocks.DynamicMock<IHttpContext>();
    IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
    IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
    IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();
    IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();

    SetupResult.For(context.Request).Return(request);
    SetupResult.For(context.Response).Return(response);
    SetupResult.For(context.Session).Return(session);
    SetupResult.For(context.Server).Return(server);

    mocks.Replay(context);
    return context;
  }

  public static void SetFakeHttpMethod(
    this IHttpRequest request, string httpMethod)
  { 
    SetupResult.For(request.HttpMethod).Return(httpMethod);
  }
}

And then I rewrote the setup part for the test (the rest of the test is omitted for brevity).

MockRepository mocks = new MockRepository();
IHttpContext context = mocks.DynamicIHttpContext();
context.Request.SetFakeHttpMethod("GET");
mocks.ReplayAll();

That’s much cleaner, isn’t it?

Please note that I call the Replay method on the IHttpContext mock. That means you won’t be able to setup any more expectations on the context. But in most cases, you won’t need to.

This is just a proof-of-concept, but I could potentially add a bunch of SetFakeXYZ extension methods on the various intrinsics to make setting up expectations and results much easier. I chose the pattern of using the SetFake prefix to help differentiate these test helper methods.
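For example, extending the pattern to fake out the query string might look something like this. It’s a sketch I haven’t actually written yet, and it assumes IHttpRequest mirrors HttpRequest’s QueryString property:

```csharp
public static void SetFakeQueryString(
  this IHttpRequest request, NameValueCollection queryString)
{
  // Same record/replay idiom as SetFakeHttpMethod: any read of
  // request.QueryString returns the supplied collection.
  SetupResult.For(request.QueryString).Return(queryString);
}
```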

Note that this technique isn’t specific to ASP.NET MVC. As you start to build apps with C# 3.0, you can build extensions for commonly used mocks to make it easier to write unit tests with mocked objects. That takes a lot of the drudgery out of setting up a mocked object.

Oh, and if you’re lamenting the fact that you’re writing ASP.NET 2.0 apps that don’t have interfaces for the HTTP intrinsics, you should read my post on IHttpContext and Duck Typing in which I provide such interfaces.

Happy testing to you!

I have a follow-up post on testing routes. The project includes a slightly more full featured version of the MvcMockHelpers class.

Tags: ASP.NET MVC , TDD, Rhino Mocks


Recently I gave out a few free copies of a book I co-wrote, but ran out quickly. This is the same book that Jeff Atwood (a co-author) told everyone, Do Not Buy This Book.

Well, if you didn’t get a copy, there is another opportunity to get a free copy. DotNetSlackers is running a contest and will reward the top 3 contributors to their forums (ooh, that could get…interesting. First Post! or I Agree! Count it up.) with great prizes.

ASP.NET 2.0 Anthology

Unfortunately, our book is part of the second prize package, which includes ANTS Profiler Pro. Our book is also the third prize.

Why is that unfortunate?

Xbox 360 Elite

Because if you want a free copy of our book, you have to try real hard not to do so well that you win first prize, which is an Xbox 360 Elite along with Telerik RadControls. Once you got your hands on Halo 3 or Gears of War, you’d have no time to read our book! And you wouldn’t want that, would you?

In any case, for more contest details, check out the contest page.

Tags: Contest , Book


One thing I’ve found with various open source projects is that many of them contain very useful code nuggets that could be generally useful to developers writing different kinds of apps. Unfortunately, in many cases, these nuggets are hidden. If you’ve ever found yourself thinking, Man, I wonder how that one open source app does XYZ because I could use that in this app, then you know what I mean.

One goal I have with Subtext is to try and expose code that I think would be useful to others. It’s part of the reason I started the Subkismet project.

Another library you might find useful in Subtext is our SQL script execution library, encapsulated in the Subtext.Scripting.dll assembly.

A loooong time ago, Jon Galloway wrote a post entitled Handling GO Separators in SQL Scripts - the easy way that tackled the subject of executing SQL Scripts that contain GO separators using SQL Server Management Objects (SMO). SMO handles GO separators, but it doesn’t (AFAIK) handle SQL template parameters.

So rather than go the easy way, we went the hard way and wrote our own library for parsing and executing SQL scripts that contain GO separators (much harder than it sounds) and template parameters. Here’s a code sample that demonstrates the usage.

string script = @"SELECT * FROM <table1, nvarchar(256), Products>
GO
SELECT * FROM <table2, nvarchar(256), Users>";

SqlScriptRunner runner = new SqlScriptRunner(script);
runner.TemplateParameters["table1"] = "Post";
runner.TemplateParameters["table2"] = "Comment";

using(SqlConnection conn = new SqlConnection(connectionString))
{
  conn.Open();
  using(SqlTransaction transaction = conn.BeginTransaction())
  {
    runner.Execute(transaction);
    transaction.Commit();
  }
}            

The above code uses the SqlScriptRunner class to parse the script into its constituent scripts (you can access them via a ScriptCollection property) and then sets the value of two template parameters before executing all of the constituent scripts within a transaction.

Currently, the class only has one Execute method which takes in a SqlTransaction instance. This is slightly cumbersome and it would be nice to have a version that didn’t need all this setup, but this was all we needed for Subtext.
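If we did add such a convenience method, it might look something like the following sketch. This overload is hypothetical and not part of the shipped assembly; it just wraps the existing Execute(SqlTransaction) call with its own connection and transaction management.

```csharp
// Hypothetical convenience overload (NOT in the shipped
// Subtext.Scripting assembly): opens its own connection and wraps
// the whole script run in a transaction, so callers only need a
// connection string.
public void Execute(string connectionString)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction transaction = conn.BeginTransaction())
        {
            // Delegate to the existing overload.
            Execute(transaction);
            transaction.Commit();
        }
    }
}
```

The using blocks guarantee the connection and transaction are disposed even if a script fails partway through.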

When I started writing this post, I thought about adding some overloads that would make this class even easier to use. But instead, I will provide a copy of the assembly, point people to our Subversion repository, and hope that someone out there will find this useful and have enough incentive to submit improvements!

Also, be sure to check out our unit tests for this class to understand what I mean when I said it was harder than it looks. As a hint, think about dealing with GO statements in comments and quoted strings. Also, GO doesn’t have to be the only thing on the line; certain elements can come before or after a GO statement on the same line.
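To give a flavor of what the parser has to handle, here are a few illustrative cases. These examples are made up for this post; the real edge cases live in the unit tests mentioned above.

```csharp
// Scripts that a naive "split on lines equal to GO" approach would
// mangle. A correct parser must track comments and string literals
// while scanning for the batch separator.
string trickyScript = @"
-- The word GO in a line comment should not split the batch
SELECT * FROM Posts
GO
/* Nor should GO
   inside a block comment */
INSERT INTO Posts (Title) VALUES ('Ready, set, GO') -- GO in a string literal
GO 2  -- a repeat count and a trailing comment may share the line with GO
SELECT 1";
```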

In case you missed the link, DOWNLOAD IT HERE.

Tags: Subtext , SQL , Sql Script Runner

comments edit

A while back, Jon Galloway asked the question, Can Operating Systems tell if they’re running in a Virtual Machine? What a silly question! When was the last time an Operating System questioned its own existence? Is that going to be in the next version of Windows - Windows Vista Into Its Own Soul? Or perhaps Apple will come out with Mac OS Existentialist?

Perhaps a more interesting question is whether or not you can tell that a web server is running in a virtual machine? Last weekend I migrated my blog into a virtual server running on a pretty sweet host machine and so far, my site seems to have gained an extra spring in its step.

Given that it’s hosted on the same server as that bandwidth hog, CodingHorror.com, I’m very pleased with the performance thus far. At least until he gets Dugged again.

In Jeff’s post, he mentioned that CrystalTech hooked us up with a beefy 64-bit dedicated server. Several commenters noted that there was no 64-bit offering on their site. The reason is that they hadn’t received much demand for 64-bit servers until we came along with our bullying tactics.

Through a contact over there, I wanted to see if we could work out a hosting deal. Jeff was adamant that we get a 64-bit server, which they didn’t have at the time, but could certainly order. I pretty much didn’t want them to go through all that trouble and was ready to move on, but they insisted it was not a big deal.

They lied…er…understated their case. Rather than simply build a single 64-bit server, they took this as an opportunity to build out a whole new product offering of 64 bit dedicated servers.

So what started off as me trying to scam some discounted or free hosting ended up spurring these guys to start a new product offering. Nice!

I’m now hosting this site and SubtextProject.com on this box, but our CCNET server is still in a virtual machine hosted generously by Eric Kemp of Subsonic fame.

I used to be skeptical of hosting my site in a virtual machine, as I felt like if I hosted closer to the metal, I could wring out extra performance. But let’s get real here, I’m not taxing this machine in any way.

I’m sold on the benefits and convenience of virtualization.

Tags: CrystalTech , Hosting , Virtual Machine

comments edit

From Monday night to Thursday afternoon next week I will be in Las Vegas attending both DevConnections/ASPConnections as well as the DotNetNuke OpenForce conference. After that, I will be up in Redmond for the next week.

I wrote before that I would be speaking on a couple panels at OpenForce conference talking about open source and .NET.

If you’re interested, the panels will be:

Wednesday, Nov 7 - 8:00 AM - 9:15 AM Lagoon L

DOS101: Panel Discussion: Open Source on the Microsoft Technology Stack
Scott Guthrie, Phil Haack, Rob Conery and Shaun Walker

Thursday, Nov 8 - 9:30 AM - 10:45 AM Lagoon F

DOS102: Panel Discussion: .NET Open Source Architectural Models
Joe Brinkman, Phil Haack, Jay Flowers, Jon Galloway and Rob Conery

It’s unfortunate that the first panel is at 8:00 AM because I never really sleep much at all when I’m in Vegas. I pretty much have too much fun (unless I’m losing bad). So if you run into me, I apologize in advance for the fool I make of myself. ;)

Another talk you’ll not want to miss is the MVC talk given by Scott Hanselman and Eilon Lipton.

Tuesday, Nov 6 - 2:30 PM - 3:30 PM Room 2

AMS304: Introduction to the new ASP.NET Model View Controller (MVC) Framework

Scott Hanselman, Eilon Lipton

I’ll definitely be at this talk. These two are giving a couple of talks together you can read about on Scott’s post. Come by and say hi.

Tags: Microsoft , DevConnections , AspConnections , OpenForce , Conference

comments edit

Pop quiz for you C# developers out there. Will the following code compile?

//In Foo.dll
public class Kitty
{
  protected internal virtual void MakeSomeNoise()
  {
    Console.WriteLine("I'm in ur serverz fixing things...");
  }
}

//In Bar.dll
public class Lion : Kitty
{
  protected override void MakeSomeNoise()
  {
    Console.WriteLine("LOL!");
  }
}

If you had asked me that yesterday, I would have said hell no. You can’t override an internal method in another assembly.

Of course, I would have been WRONG!

Well the truth of the matter is, I was wrong. This came up in an internal discussion in which I was unfairly complaining that certain methods I needed to override were internal. In fact, they were protected internal. Doesn’t that mean that the method is both protected and internal?

Had I simply tried to override them, I would have learned that my assumption was wrong. For the record…

protected internal means protected OR internal

It’s very clear when you think of the keywords as the union of accessibility rather than the intersection. Thus protected internal means the method is accessible by anything that can access the protected method UNION with anything that can access the internal method.
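To make the union concrete, here is a small sketch reusing the earlier classes. The Trainer class is made up for illustration.

```csharp
// In Foo.dll
public class Kitty
{
    protected internal virtual void MakeSomeNoise() { }
}

// Also in Foo.dll: a non-derived class in the SAME assembly
// can call the method via the internal half of the union.
public class Trainer
{
    public void Poke(Kitty kitty)
    {
        kitty.MakeSomeNoise(); // OK: internal access
    }
}

// In Bar.dll: a derived class in a DIFFERENT assembly
// can call (and override) the method via the protected half.
public class Lion : Kitty
{
    public void Roar()
    {
        MakeSomeNoise(); // OK: protected access
    }
}
```

If protected internal meant the intersection, the Lion override in Bar.dll would not compile, which is exactly the assumption the post corrects.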

As the old saying goes, when you assume, you make an ass out of u and me. I never understood this saying, because when I assume, I only make an ass of me. I really think the word should simply be assme. As in…

Never assme something won’t work without at least trying it.

UPDATE: Eilon, sent me an email to point out that…

BTW the CLR does have the notion of ProtectedANDInternal, but C# has no syntax to specify it. If you look at the CLR’s System.Reflection.MethodAttributes enum you’ll see both FamANDAssem as well as FamORAssem (“Family” is the CLR’s term for C#’s protected and “Assem” is C#’s internal).

If you don’t know Eilon, he’s a freaking sharp developer I get to work with on the MVC project and was the one who kindly disabused me of my ignorance on this subject. He keeps a blog at http://weblogs.asp.net/leftslipper/.

Apparently he’s the one with the clever idea of using a C# 3.0 anonymous type as a dictionary, that many of you saw in ScottGu’s ALT.NET Conference talk. Very cool.

subtext comments edit

An undisclosed source informed me that MySpace China is using a modified version of Subtext for its blogging engine.

I had to check it out for myself to confirm it and it is true! Check out my first MySpace China blog post. How do I know for a fact that this is running on Subtext? I just viewed source and saw this little bit of javascript…

var subtextBlogInfo = new blogInfo('/', '/1304049400/');

So if anyone is wondering if Subtext can scale, it sure can. MySpace China gets around 100 million page views, approximately a million of which go to the blog.

My source tells me the MySpace China developers found some bugs with Subtext they had to fix that were only exposed when they put a huge load on it. Although they are under no requirements to do so under our license, I hope that they would contribute those fixes as patches back to Subtext.

So to all you Chinese users of Subtext (via MySpace China), 你好 to you.

Technorati Tags: Subtext , MySpace China

comments edit

Like a lovesick puppy, my good friend Rob Conery is following me to Microsoft.

I’m excited (not yet super excited) that Rob is going to be joining us working on Subsonic as the sugar on top of the work we’re doing with the MVC framework. Good times! We’re definitely going to have to celebrate in Vegas at the DotNetNuke conference and DevConnections (you all will be there, won’t you?)

This is perhaps another item to add to the list I made of signs of progress in regards to how Microsoft is approaching Open Source.

Rob, be sure to read Dan Fernandez’s post on the stages of new employees at Microsoft.

I really think our first order of business is to build a sample MVC application using LOLCode.NET. Until your framework supports LOLCode, nobody takes you seriously.

Technorati Tags: Microsoft , Rob Conery

comments edit

For those starting out at Microsoft, an analogy that you’re likely to hear a lot is “Drinking from the firehose”. The first time I’ve ever heard this phrase was when Dare used it in a post about the flood of information due to subscribing to multiple RSS feeds.

Bronx Summer. Photographer unknown.

It’s entirely apropos (just love that word ever since The Matrix) as a description of starting as a new employee at Microsoft. My buddy Walter said his brother had the same feeling when starting at CalTech. That resonates with me, because unlike my college, with its emphasis on the liberal arts, Microsoft very much feels like an engineering college.

For example, everyone around me is technically adept and incredibly smart. It’s funny to hear myself say that. Over the past few years, I’ve read many blog posts from people I highly respect, real brainiacs in the industry, talk about their transition to Microsoft and they would often say something similar. Something along the lines of…

“I’m surrounded by really really smart people.”

“Everyone here is super smart.” (Note: they like to use “super” as a prefacing adjective a lot around here, especially in the phrases “Super Smart” and “Super Excited.”)

“I feel humbled by the smart people around me.”

I used to read these statements and think to myself, Bullshit! You’re freakin’ Don Box! Or Chris Sells! Or Scott Hanselman! Or John Lam! (I could go on…) I know you’re just saying that to be nice. I mean, how could you really say otherwise since you have to work with them?

Sure I bet these people are smart, and many of them might even be scary smart, but you know you’re a big dog over there. Admit it. Go ahead, admit it.

Ahhhh yes, the ignorant arrogance of an outsider. Now here I find myself saying the same things these guys have said, though admittedly, the bar is lower in my case than the aforementioned highly respected gentlemen.

I’m sure there must be some stupid people around here somewhere. They’re just not in my group as far as I can tell.

Investing for the long run

I had a great meeting with my manager on my third day of work. Rather than focusing on technology, tasks, and features, we spent a couple hours talking about passion, personal mission, goals, long term outlook, etc… And not the typical bullcrap I can regurgitate in an interview (whoops!). I was forced to really think deeply about these issues, about what I really want out of a career, which is quite frankly something I haven’t done in a long time. I’ve been too busy doing and not spending any time pondering. It is really important to have a balance of both.

In part, I think this is a reflection of a company that can afford to invest in the long run and be strategic, rather than always investing in the short term and feeling like a chicken with its head cut off. It is a refreshing change of pace.

But don’t confuse that with a slow pace, it was crazy busy over here last week. Right off the bat I was put to work on preparing materials for a private Software Design Review (SDR) we had with select customers and partners, which kept me busy over the weekend. The great part of course is that I’m pretty much in control over my own schedule for the most part, as long as I’m producing results.

You Sound Like You Totally Drank The Kool-Aid?

It may seem like I have completely drunk the kool-aid, but I like to think that I held it in my cheek waiting for a moment when nobody was looking to spit it out. Besides, the kool-aid would be extremely diluted from the fire hose.

The phrase “drinking the kool-aid” implies a cult or herd mentality, which is something I hope to avoid. In fact, it makes me a more valuable employee if I can keep some of my naive outsider thinking intact, though perhaps over time, I will be super assimilated (doh!).

I think my enthusiasm for my work has a lot to do with the particular group I am in and the particular project I am on. I know there are some people very dissatisfied at Microsoft, so it’s not all roses and ice cream.

I have also heard second-hand comments that show that some people here still have misconceptions about Open Source. Not a willful antagonism, just a misunderstanding, which is easily remedied via education.

This post has gone on long enough. If people are interested, I’d be happy to write more about my experiences and impressions of Microsoft as things progress.

The main thing I want to say is that I still plan to work on Subtext, though my involvement will be scant in the near term. I also still plan to continue blogging and not drop off the face of the blogodome as some have feared.

Technorati Tags: Microsoft

comments edit

If you live in the Seattle area and like code, talking about code, or listening to people talk about code, you owe it to yourself to check out the Seattle Code Camp.

  • WHO: You and a bunch of other code junkies
  • WHAT: Code Camp Seattle!
  • WHEN: November 17, 18 (Sat and Sun) 2007
  • WHERE: DigiPen Institute of Technology, Redmond, WA
  • WHY: Did you not see the first paragraph of this post?
  • HOW: I leave that up to you, but consider car pooling.

Sadly, I won’t be able to make this one since I still live in Los Angeles, but Jason Haley and others will be there. So at the very least, make nice with Jason, get on his Interesting Finds list, and get your blog some exposure.

Check out the website for more information and a brief FAQ on the code camp ethos.

comments edit

Many of you noticed that my blog was down. Thanks for the heads up. For some reason, it was pegging the CPU at 100% all of a sudden. Not sure why this was happening since nobody made any changes to the server. At least no changes they would fess up to ;).

I migrated the blog to a dedicated server courtesy of Rob Conery and still had the problem. Migrating allowed me to narrow down the problem without affecting anyone else’s site.

After much trial and error, I narrowed it down to the Recent Comments and Recent Posts controls in the footer of my skin. If you scroll down, you’ll see I’ve removed them.

So again, thanks for those who pointed it out and sorry to anyone who was inconvenienced (like me!) looking for information etc….

comments edit

UPDATE: We decided on the Three Lions Pub in Redmond at 7:30 PM

beer-and-dinner I’m going to be in Redmond next week and would love to get a geek dinner together around 7:30 PM. My flight gets in at 5:18 PM so I hope that’s doable. Anybody interested in joining us?

I think Scott Hanselman, Brad Wilson, Scott Koon, and Scott Densmore will grace us with their presence. If you’re interested in showing up, post a comment so we get a rough idea of numbers.

Topic for discussion, Forget Separation of Concerns, what does MS Dev Div have against Spaghetti Code? I like spaghetti!

Another potential topic of discussion, Render on POST - A fool’s http response or a pragmatic approach? Choose a side and let the all-out bare knuckle bloody brawl begin.

Final topic, if you believe GET is the one true way to render and prefer redirect on POST, which one do you choose? 302 or 303?

comments edit

Here’s the dirty little secret about being a software developer. No matter how good the code you write is, it’s crap to another developer.

It doesn’t matter if the code is so clean you could eat sushi off of it. Doesn’t matter if both John Carmack and Linus Torvalds bow down in respect every time the code is shown on the screen. Some developer out there will call it crap, and it’s usually the developer who inherits the code when you leave.

The reasons are many and petty:

  • Your code uses string concatenation in that one method rather than using a StringBuilder. So what if in this one situation, that was a conscious decision because on average that method only concatenates three or four strings together. The next guy doesn’t care.
  • You put your curly braces on the same line rather than its own line as God intended (or vice versa).
  • You used a switch statement when everyone (including the next developer) knows you’re supposed to replace that with the State or Strategy pattern, always! Didn’t you read Design Patterns? Never mind the fact that there’s only one switch statement and thus no code duplication.
  • You’re using Spring.NET for dependency injection, but the next guy loves Windsor. Only idiots choose Spring.NET (or vice versa, again).
  • Or perhaps you used dependency injection at all. What the hell is dependency injection? I don’t understand the code now! :(

While we strive for perfect code, it is unattainable on real projects because real code is weighed down by the pressure of constraints such as time pressure. Unfortunately, these constraints aren’t reflected in the code, just the effect of the constraints. The next developer reading your code didn’t know that code was written with one hour left to deliver the project.

Although I admit, having been burned by misguided criticism before, it’s hard not to be tempted to take a pre-emptive strike at criticism by trying to embed the constraints in the code via comments.

For example,

public string SomeMethod()
{
  /*
  At most, there will only be 4 to 5 foos, so string concatenation
  is just fine in this situation. Here are links to five blog posts that
  talk about the perf implications. Give me a break, it’s
  3 AM, I’m hopped up on Jolt, this project is 3 months
  late, and I have no social life anymore. Cut me some slack!
  ...
  */
  string result = string.Empty;
  foreach (Foo foo in Foos)
  {
    result += foo;
  }
  return result;
}

Seems awful defensive, no? There’s nothing wrong with leaving a comment to highlight why a particular non-obvious design decision is made. In fact, that’s exactly what comments are for, rather than simply reiterating what the code does.

The problem though, is that developers sometimes cut each other so little slack, you start writing a treatise in green (or whichever color you have comments set to in your IDE) to justify every line of code because you have no idea what is going to be obvious to the next developer.

That’s why I was particularly pleased to receive an email the other day from a developer who inherited some code I wrote and said that the solutions were, and I quote, “really well written”.

Seriously? Am I being Punk’d? Ashton, where the hell are you hiding?

This is quite possibly the highest compliment you can receive from another developer. And I don’t think it’s because I’m such a great developer. I really think the person who deserves credit here is the one giving the compliment.

I mean, my reaction when I’ve inherited code was typically, why the hell did they write this this way!? Did they learn to code from the back of a Cracker Jack Box!? Who better to serve as the scapegoat than the developer who just left?

Fortunately I had enough tact to keep those thoughts to myself. In the future, I’ll work harder on the empathy side of things. When I inherit code, I’ll assume the developer wrote it in a 72 hour straight coding binge, his World of Warcraft character held hostage, bees all over his body, with only an hour to finish the code on a 386 before everything really starts to go south.

Given those circumstances, it’s no wonder the idiot didn’t use a using block around that IDisposable instance.