comments edit

Oren Eini, aka Ayende, writes about his dissatisfaction with Microsoft reproducing the efforts of the OSS community. His post was sparked by the following thread in the ALT.NET mailing list:

Brad: If you’re simply angry because we had the audacity to make our own object factory with DI, then I can’t help you; the fact that P&P did ObjectBuilder does not invalidate any other object factory and/or DI container.

Ayende: No, it doesn’t. But it is a waste of time and effort.

Brad: In all seriousness: why should you care if I waste my time?

Ayende’s response is:

  • I care because it means that people are going to get a product that is a duplication of work already done elsewhere, usually with less maturity and flexibility.
  • I care because people are choosing Microsoft blindly, and that puts MS in a position of considerable responsibility.
  • I care because I see this as continued rejection of the community efforts and hard work.
  • I care because it, frankly, shows contempt to anything except what is coming from Microsoft.
  • I care because it so often ends up causing me grief.
  • I care because it is doing disservice to the community.

As a newly minted employee of Microsoft, it may seem like I am incapable of having a balanced opinion on this, but I am also an OSS developer and was one before I joined, so hopefully I am not so totally unbalanced ;).

I think his sentiment comes from certain specific efforts by Microsoft that, how can I put this delicately, sucked in comparison to the existing open source alternatives.

Two specific implementations come to mind: MSTest and Sandcastle.

However, as much as I tend to enjoy and agree with much of what Ayende says in his blog, I have to disagree with him on this point that duplication of effort is the problem.

After all, open source projects are just as guilty of this duplication. Why do we need BlogEngine.NET when there is already Subtext? And why do we need Subtext when there is already DasBlog? Why do we need MbUnit when there is NUnit? For that matter, why do we need Monorail when there is Ruby on Rails or RhinoMocks when there is NMock?

I think Ayende is well suited to answer that question. When he created RhinoMocks, there was already an open source mocking framework out there, NMock. But NMock perhaps didn’t meet Ayende’s need. Or perhaps he thought he could do better. In any case, he went out and duplicated the efforts of NMock, but in many (but maybe not all) ways, made it better. I personally love using RhinoMocks.

The thing is, there is no way for NMock or RhinoMocks to meet all the needs of every possible constituency for a mocking framework. Technical superiority isn’t always the deciding factor. Sometimes political realities come into play. For example, whether we like it or not, some companies won’t use open source software. In an environment like that, neither NMock nor RhinoMocks will make any headway, leaving the door open for yet another mocking framework to make a dent.

Projects that seem to duplicate efforts never make perfect copies. They each have a slightly different set of requirements they seek to address. In an evolutionary sense, each duplicate contains mutations. And like evolution, survival of the fittest ensues. Except this isn’t a global winner takes all zero sum game.

What works in one area might not survive in another. As in the real world, niches form, and whatever is strong in a niche will survive in that niche.

I’m reminded of this when I read that the Opera Mini browser beats Apple Safari, Netscape, and Mozilla combined in the Ukraine. Another reminder is how Google built yet another social platform that is really big in Brazil.

So again, Duplication Is Not The Problem. Competition is healthy. If anything, the problem, to stick with the evolution analogy, is that Microsoft’s sheer might gives its creations quite the head start, allowing them to survive where the same product would have died had it been released by a smaller company. We’ve seen this before when Microsoft let IE 6 rot on the vine, and it risks doing the same with IE 7. The fact of the matter is, Microsoft has a lot of influence.

So can we really fault Microsoft for duplicating efforts? Or only for doing a half-assed job of it sometimes? As I wrote before when I asked the question Should Microsoft Really Bundle Open Source Software?, I’d like to see some balance that recognizes the business realities that push Microsoft to duplicate community efforts while at the same time supporting the community.

After all, Microsoft can’t let what is out there completely dictate its product strategy, but it also can’t ignore the open source ecosystem which is a boon to the .NET Framework.

Disclaimer: I shouldn’t need to say it, but to be clear, these are my opinions and not necessarily those of my employer.

Tags: Microsoft , Open Source , OSS

comments edit

Despite an international team of committers to Subtext and the fact that MySpace China uses a customized version of Subtext for its blog, I am ashamed to say that Subtext’s support for internationalization has been quite weak.


True, I did once write that The Only Universal Language in Software is English, but I didn’t mean that English is the only language that matters, especially on the web.

One area that we need to improve is in dealing with international URLs. For example, if I’m a user in Korea, I should be able to write a post with a Korean domain and a Korean title and thus have a friendly URL like so:

http://하쿹.com/blog/안녕하십니까.aspx

(As an aside, roughly speaking, 하쿹 would be pronounced hah-kut. About as close as I can get to haacked which is pronounced like hackt.)

If you’re a kind soul, you will forgive us for punting on this issue for so long. After all, RFC 2396, which defines the syntax for Uniform Resource Identifiers (URIs), only allows a subset of ASCII (about 60 characters).

But then again, I’ve been hiding behind this RFC as an excuse for a while fully knowing there are workarounds. I have just been too busy to fix this.

There are actually two issues here: the hostname (aka domain name), which is quite restrictive and cannot be URL encoded, AFAIK, and the rest of the URL, which can be encoded.

The domain name issue is resolved by the diminutively named Punycode (described in RFC 3492). Punycode is a protocol for converting Unicode strings into the more limited set of ASCII characters for network host names.

For example, http://你好.com/ translates to http://xn--6qq79v.com/ in Punycode.

Fortunately, this issue is pretty easy to fix. Since the browser is responsible for converting the Unicode domain name in the URL to Punycode, all we need to do in Subtext is allow users to set up a hostname that contains Unicode, which we can then convert to Punycode using something like the Punycode / IDN library for .NET 2.0. For this blog post, I used the web-based phlyLabs IDNA Converter to convert Unicode to Punycode.
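
As an illustration of the same conversion in code, here’s a minimal sketch using the IdnMapping class in System.Globalization (an API added to the framework after the .NET 2.0 timeframe of the library mentioned above, so consider this a sketch of the same idea rather than what Subtext actually uses):

using System;
using System.Globalization;

class PunycodeDemo
{
  static void Main()
  {
    IdnMapping idn = new IdnMapping();

    // Each Unicode label in the host name is converted to its
    // ASCII-compatible ("xn--") form, and back again.
    Console.WriteLine(idn.GetAscii("你好.com"));          // xn--6qq79v.com
    Console.WriteLine(idn.GetUnicode("xn--6qq79v.com")); // 你好.com
  }
}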

The second issue is the rest of the URL. When you enter a title for a blog post in Subtext, we convert it to a human- and URL-friendly ASCII “slug”. For example, if you enter the title “I like lamp”, Subtext creates a friendly URL ending with “i_like_lamp.aspx”.

We haven’t totally ignored international URLs. For international western languages, we have code that effectively replaces accented characters with a close ASCII equivalent. A couple of examples (there are more in our unit tests) are:

Åñçhòr çùè becomes Anchor_cue

Héllò wörld becomes Hello_world

Unfortunately for my Korean brethren, something like 안녕하십니까 becomes (empty string). Well that totally sucks!
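
Subtext’s actual implementation is more involved, but a minimal sketch of this style of accent folding (a hypothetical helper, not our exact code) shows both how the Western cases work and why Hangul falls through to nothing:

using System.Globalization;
using System.Text;

public static class SlugHelper
{
  public static string FoldToAscii(string title)
  {
    // Decompose characters into base letters plus combining marks,
    // e.g. "ò" becomes "o" followed by a combining grave accent.
    string decomposed = title.Normalize(NormalizationForm.FormD);
    StringBuilder builder = new StringBuilder();
    foreach (char c in decomposed)
    {
      // Keep only ASCII base characters and drop the combining marks.
      // Hangul decomposes into Jamo with no ASCII equivalents, which
      // is why a title like 안녕하십니까 ends up as an empty string.
      if (c < 128 &&
        CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
      {
        builder.Append(c);
      }
    }
    return builder.ToString().Replace(' ', '_');
  }
}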

The thing is, the simple solution in this case is to just allow the Unicode Korean word as the slug. Browsers will apply the correct URL encoding to the URL. Thus http://haacked.com/안녕하십니까/ would become a request for http://haacked.com/%EC%95%88%EB%85%95%ED%95%98%EC%8B%AD%EB%8B%88%EA%B9%8C/ and everything works just fine as far as I can tell. Please note that Firefox 2.0 actually replaces the Unicode string in the address bar with the encoded string, while IE7 displays the Unicode as-is but makes the request using the encoded URL (as confirmed by Fiddler).
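
You can reproduce the encoding the browser applies with Uri.EscapeDataString; a quick illustration (Subtext doesn’t need to run this, since the browser does the work):

string slug = "안녕하십니까";

// Percent-encodes the UTF-8 bytes of each character, which matches
// what the browser actually sends on the wire.
string encoded = Uri.EscapeDataString(slug);
Console.WriteLine("http://haacked.com/" + encoded + "/");
// Prints: http://haacked.com/%EC%95%88%EB%85%95%ED%95%98%EC%8B%AD%EB%8B%88%EA%B9%8C/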

For western languages in which we can do a decent enough conversion to ASCII, the benefit there is the URL remains somewhat readable and “friendlier” than a long URL encoded string. But for non-western scripts, we have no choice but to deal with these ugly URL encoded strings (at least in Firefox).

The interesting thing is, when researching how sites in China handle internationalized URLs, I discovered that in the same way we did, they simply punt on the issue. For example, http://baidu.com/, the most popular search engine in China last I checked, has English URLs.

Tags: URL , Localization , Punycode

code, tdd comments edit

My friend (and former boss and business partner) Micah found this gem of a quote from Donald Knuth addressing code proofs.

Beware of bugs in the above code; I have only proved it correct, not tried it.

Micah writes more on the topic and reminds me of why I enjoyed working with him so much. He’s always been quite thoughtful in his approach to problems. And I’m not just saying that because he agrees with me. ;)

On another note, several commenters pointed out that one thing I didn’t mention before, but should have, is that verifying the quality of code is only one small aspect of unit testing and Test Driven Development.

The more important factor is that TDD is a design process. Employing TDD is one approach (not the only one, but I think it is a good one) for improving the design of your code, and especially its usability. By usability, I mean from another developer’s perspective.

If I have to create twenty different objects in order to call a method on your class, your class is probably not very usable to other developers. TDD is one approach that forces you to find that out sooner, rather than later.

A code proof won’t necessarily find that “flaw” because it is not a flaw in logic.

Tags: TDD , Code Provability

code, tdd comments edit

I’m currently doing some app building with ASP.NET MVC in which I try to cover a bunch of different scenarios. One scenario in particular I wanted to cover is building an application using a Test Driven Development approach. I especially wanted to cover using various Dependency Injection frameworks, to make sure everything plays nice.

Since I’ve already seen demos with Castle Windsor and Spring.NET, I wanted to give StructureMap a try. Here is the problem I’ve run into.

Say I have a class like so:

public class HomeController : IController
{
  MembershipProvider membership;
  public HomeController(MembershipProvider provider)
  {
    this.membership = provider;
  }
}

As you can see, this class has a dependency on the abstract MembershipProvider class, which is passed in via a constructor argument. In my unit tests, I can use RhinoMocks to dynamically create a mock that inherits from MembershipProvider and pass that mock to this controller class. It’s nice for testing.

But eventually, I need to use this class in a real app and I would like a DI framework container to create the controller for me. Here is my StructureMap.config file with some details left out.

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication" />
  </PluginFamily>
</StructureMap>

If I add an empty constructor to HomeController, this code allows me to create an instance of HomeController like so.

HomeController c = 
  ObjectFactory.GetNamedInstance<IController>("HomeController")
  as HomeController;

But when I remove the empty constructor, StructureMap cannot create an instance of HomeController. I would need to tell StructureMap (via StructureMap.config) how to construct an instance of MembershipProvider to pass into the constructor for HomeController.

Normally, I would just specify a type to instantiate as another PluginFamily entry. But what I really want to happen in this case is for StructureMap to call a method or delegate and use the value returned as the constructor argument.

In other words, I pretty much want something like this:

<?xml version="1.0" encoding="utf-8" ?>
<StructureMap>
  <PluginFamily Type="IController" DefaultKey="HomeController"
      Assembly="...">
    <Plugin Type="HomeController" ConcreteKey="HomeController"
        Assembly="MvcApplication">
      <Instance>
        <Property Name="provider">
          <![CDATA[
            return Membership.Provider;
          ]]>
        </Property>
      </Instance>
    </Plugin>
  </PluginFamily>
</StructureMap>

The made up syntax I am using here is stating that when StructureMap is creating an instance of HomeController, execute the code in the CDATA section to get the instance to pass in as the constructor argument named provider.

Does anyone know if something like this is possible with any of the Dependency Injection frameworks out there? Whether via code or configuration?
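
For what it’s worth, later versions of StructureMap grew a code-based registration API that supports exactly this via a lambda. Here’s a sketch using that newer fluent syntax (from memory, so treat the details as an assumption rather than gospel):

// Code-based registration instead of StructureMap.config; the lambda
// runs whenever the container needs a MembershipProvider instance.
ObjectFactory.Initialize(x =>
{
  x.For<MembershipProvider>().Use(() => Membership.Provider);
  x.For<IController>().Use<HomeController>();
});

IController controller = ObjectFactory.GetInstance<IController>();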

Tags: TDD , Dependency Injection , IoC , StructureMap

code, tdd comments edit

Frans Bouma wrote an interesting response to my last post (Writing Testable Code Is About Managing Complexity) entitled Correctness Provability should be the goal, not Testability.

He states in his post:

When focusing on testability, one can fall into the trap of believing that the tests prove that your code is correct.

God I hope not. Perhaps someone in theory could fall into that trap, but such a person could also fall for the modestly priced bridge I have for sale in the bay area. This seems like a straw man fallacy to me.

Certainly no major TDD proponent has ever stated that testing provides proof that your code is correct. That would be outlandish.

Instead, what you often hear testing proponents talk about is confidence. For example, in my post Unit Tests cost More To Write I make the following point (emphasis added):

They reduce the true cost of software development by promoting cleaner code with less bugs. They reduce the TCO by documenting how code works and by serving as regression tests, giving maintainers more confidence to make changes to the system.

Frans goes on to say (emphasis mine)…

Proving code to be correct isn’t easy, but it should be your main focus when writing solid software. Your first step should be to prove that your algorithms are correct. If an algorithm fails to be correct, you can save yourself the trouble typing the executable form of it (the code representing the algorithm) as it will never result in solid correct software.

Before I address this, let me tell you a short story from my past. I promise it’ll be brief.

When I was a young, bright-eyed, bushy-tailed math major in college, I took a fantastic class called Differential Equations that covered equations describing continuous phenomena in one or more dimensions.

During the section on partial differential equations, we wracked our brains going through crazy mental gymnastics in order to find an explicit formula that solved a set of equations with multiple independent variables. With these techniques, it seemed like we could solve anything. Until of course, near the end of the semester when the cruel joke was finally revealed.

The sets of equations we solved were heavily contrived examples. As difficult as they were to solve, it turns out that only the most trivial sets of differential equations can be solved by an explicit formula. All the mental gymnastics we were doing up until that point was essentially just mental masturbation. Real-world phenomena are hardly ever described by sets of equations that line up so nicely.

Instead, mathematicians use techniques like Numerical Analysis (the Monte Carlo Method is one classic example) to attempt to find approximate solutions with reasonable error bounds.

Disillusioned, I never ended up taking Numerical Analysis (the next class in the series), choosing to try my hand at studying stochastic processes as well as number theory at that point.

The point of this story is that trying to prove the correctness of computer programs is a lot like trying to solve a set of partial differential equations. It works great on small trivial programs, but is incredibly hard and costly on anything resembling a real world software system.

Not only that, what exactly are you trying to prove?

In mathematics, a mathematician will take a set of axioms, a postulate, and then spend years converting caffeine into a long beard (whether you are male or female) and little scribbles on paper (which mathematicians call equations) that hopefully result in a proof that the postulate is true. At that point, the postulate becomes a theorem.

The key here is that the postulate is an unambiguous specification of a truth you wish to prove. To prove the correctness of code, you need to know exactly what correct behavior is for the code, i.e. a complete and unambiguous specification for what the code should do. So tell me dear reader, when was the last time you received an unambiguous fully detailed specification of an application?

If I ever received such a thing, I would simply execute that sucker, because the only unambiguous complete spec for what an application does is code. Even then, you have to ask, how do you prove that the specification does what the customers want?

This is why proving code should not be your main focus, unless, maybe, you write code for the Space Shuttle.

Like differential equations, it’s too costly to explicitly prove code in all but the most trivial cases. If you are an algorithms developer writing the next sort algorithm, perhaps it is worth your time to prove your code because that cost is amortized over the life of such a small reusable unit of code. You have to look at your situation and see if the cost is worth it.

For large real world data driven applications, proving code correctness is just not reasonable because it calls for an extremely costly specification process, whereas tests are very easy to specify and cheap to write and maintain.

This is somewhat more obvious with an example. Suppose I asked you to write a program that could break a CAPTCHA. Writing the program is very time consuming and difficult. But first, before you write the program, what if I asked you to write some tests for the program you will write. That’s trivially easy, isn’t it? You just feed in some CAPTCHA images and then check that the program spits out the correct value. How do you know your tests are correct? You apply the red-green-refactor cycle along with the principle of triangulation. ;)
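
To make that concrete, here is roughly what such an NUnit-style test might look like (CaptchaBreaker and the image file are hypothetical; the point is how little effort the test takes compared to the implementation it verifies):

using System.Drawing;
using NUnit.Framework;

[TestFixture]
public class CaptchaBreakerTests
{
  [Test]
  public void Decode_ReturnsTextEmbeddedInKnownImage()
  {
    // CaptchaBreaker is a made-up class standing in for the hard
    // program we haven't written yet.
    CaptchaBreaker breaker = new CaptchaBreaker();
    using (Image captcha = Image.FromFile("captcha-xk4f9.png"))
    {
      Assert.AreEqual("XK4F9", breaker.Decode(captcha));
    }
  }
}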

As we see, testing the program is easy. But how would you prove its correctness? Is that as easy as testing it?

As I said before, testing doesn’t give you a proof of correctness, but like the approaches of numerical analysis, it can give you an approximate proof with reasonable error bounds, aka, a confidence factor. The more tests, the smaller the error bounds and the better your confidence. This is a way better use of your time than trying to prove everything you write.

Technorati Tags: TDD,Math,Code Correctness

code, tdd comments edit

When discussing the upcoming ASP.NET MVC framework, one of the key benefits I like to tout is how this framework will improve testability of your web applications.

The response I often get is the same question I get when I mention patterns such as Dependency Injection, IoC, etc…

Why would I want to do XYZ just to improve testability?

I think to myself in response

Just to improve testability? Isn’t that enough of a reason!

That’s how excited I am about test driven development. Testing seems enough of a reason for me!

Of course, when I’m done un-bunching my knickers, I realize that despite all the benefits of unit testable code, the real benefit of testable code is how it helps handle software development’s biggest problem since time immemorial: managing complexity.

There are two ways that testable code helps manage complexity.

1. It directly helps manage complexity, assuming that you not only write testable code but also write the unit tests to go along with it. With decent code coverage, you now have a nice suite of regression tests, which helps manage complexity by alerting you to potential bugs introduced during code maintenance in a large project long before they become a problem in production.

2. It indirectly helps manage complexity because writing testable code forces you to employ the principle of separation of concerns.

Separating concerns within an application is an excellent tool for managing complexity when writing code. And writing code is complex!

The MVC pattern, for example, separates an application into three main components: the Model, the View, and the Controller. Not only does it separate these three components, it outlines the loosely coupled relationships and communication between these components.

Key Benefits of Separating Concerns

This separation combined with loose coupling allows a developer to manage complexity because it allows the developer to focus on one aspect of the problem at a time.

Martin Fowler writes about this benefit in his paper, Separating User Interface Code (pdf):

A clear separation lets you concentrate on each aspect of the problem separately—and one complicated thing at a time is enough. It also lets different people work on the separate pieces, which is useful when people want to hone more specialized skills.

The ability to divide work into parallel tracks is a great benefit of this approach. In a well separated application, if Alice needs time to implement the controller or business logic, she can quickly stub out the model so that Bob can work on the view without being blocked by Alice. Meanwhile, Alice continues developing the business layer without the added stress that Bob is waiting on her.
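
A minimal sketch of that hand-off (all names hypothetical) might look like this:

using System.Collections.Generic;

public class Product
{
  public Product(string name) { Name = name; }
  public string Name { get; private set; }
}

// The contract Alice and Bob agree on up front.
public interface IProductRepository
{
  IList<Product> GetProducts();
}

// Bob binds his view against this stub while Alice builds the real
// repository behind the same interface.
public class StubProductRepository : IProductRepository
{
  public IList<Product> GetProducts()
  {
    return new List<Product> { new Product("Widget") };
  }
}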

Bring it home

The MVC example above talks about separation of concerns on a large architectural scale. But the same benefits apply on a much smaller scale outside of the MVC context. And all of these benefits can be yours as a side-effect of writing testable code.

So to summarize, when you write testable code, whether it is via Test Driven Development (TDD) or Test After Development, you get the following side effects.

  1. A nice suite of regression tests.
  2. Well separated code that helps manage complexity.
  3. Well separated code that helps enable concurrent development.

Compare that list of side effects with the list of side effects of the latest pharmaceutical wonder drug for curing restless legs or whatever. What’s not to like!?

Technorati Tags: TDD, Separation of Concerns, MVC, Loose Coupling, Design

comments edit

In his book, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations (title long enough for you?), James Surowiecki argues that decisions made by a crowd are generally better than those made by any single individual in the group.


Seems like a lot of theoretical hogwash until you see this thesis put into action in the real world via a prediction market. A prediction market (also called a decision market) is, as its name implies, a market created specifically to predict the likelihood of a specific outcome. Such markets are most successful when participants have something invested in their decisions. Money often does the trick, but it is not necessary.

The Hollywood Stock Exchange, a prediction market focused on the film industry, demonstrated the potential accuracy of such markets when it went 32 for 39 in predicting the 2006 Oscar nominees (in the major categories, which are the only ones people care about). They didn’t do so badly in 2007 either.

It’s no surprise then that there is a lot of research going on in predictive markets. Preliminary results show they seem to work very well when properly structured.

Contrast the success of prediction markets to another decision making process that involves a group. A recent comment by a reader got me thinking on this subject as he described one of the symptoms of this process.

If you try to satisfy all parties, you’ll end up with mediocre product that does not satisfy everybody.

An even stronger way to state this is that trying to satisfy everyone leaves almost everyone unsatisfied.

This is one typical result of a decision making process known as Groupthink. Another colorful term used to describe it is decision by committee. These terms do not, by any means, have a positive connotation.


There seems to be a paradox here between the power of markets and the ineptitude of Group Think. Why do these two somewhat similar means of decision making have such a wide variance in accuracy?

Perhaps counter-intuitively, it has a lot to do with the coercive effects of seeking consensus in decision making. Group think is often hamstrung by seeking consensus whereas participants in an effective market are largely independent of one another.

Before I continue, let me head off that insightful comment about how I am trying to promote anarchy, and save your fingers the pain of some typing, by pointing out that this is not an indictment of all consensus-driven decision making. In many cases, consensus is absolutely required. It doesn’t help if a decision is optimal but nobody is willing to participate in its result. Consensus is especially essential in one particular form of decision making, negotiation, which is well covered by the book Getting to Yes: Negotiating Agreement Without Giving In.

The problem lies in applying negotiation to decisions that should not be made by consensus, at least not without first applying non-consensus-based fact gathering, so that negotiations can proceed from data gathered in a dispassionate manner.

So what makes groupthink particularly inadequate? Social psychologist Irving Janis identified the following eight symptoms of groupthink that hamstring good decision making:

  1. Illusion of invulnerability – Creates excessive optimism that encourages taking extreme risks.
  2. Collective rationalization – Members discount warnings and do not reconsider their assumptions.
  3. Belief in inherent morality – Members believe in the rightness of their cause and therefore ignore the ethical or moral consequences of their decisions.
  4. Stereotyped views of out-groups – Negative views of “enemy” make effective responses to conflict seem unnecessary.
  5. Direct pressure on dissenters – Members are under pressure not to express arguments against any of the group’s views.
  6. Self-censorship – Doubts and deviations from the perceived group consensus are not expressed.
  7. Illusion of unanimity – The majority view and judgments are assumed to be unanimous.
  8. Self-appointed ‘mindguards’ – Members protect the group and the leader from information that is problematic or contradictory to the group’s cohesiveness, view, and/or decisions.

One symptom not listed here, but which probably fits in with #6, is the fear of causing offense.

One theme common to the list is that many of these symptoms are the result of seeking some form of consensus with others in the group. The net effect is that all of these symptoms provide incentives not to deviate from the group.

Prediction Markets counter these symptoms through independence and proper incentive. Because the participants in a market are not directly working together, there is no way for peer pressure to burrow itself into the decision making process. There is no fear of offending others in such a market. Putting something valuable (such as money) on the line (or having a huge investment in the correct outcome, such as predicting disaster) has an eye-opening habit of making people truthful and helps to avoid self-censorship etc…

Another necessary factor in the success of markets, as pointed out in an insightful comment to this post, is diversity. Markets and committees that lack diversity of viewpoint fail to take advantage of all available information in a meaningful way and succumb to a form of tunnel vision. This phenomenon is very evident in a leader who surrounds him or herself with sycophants and thus makes decisions based on what they want to hear rather than on facts.

Prediction markets are not perfect and are not suitable for all decisions. In fact, they are probably not suitable for most decisions as the cost to set up a market isn’t justified for everyday decisions like whether you should get the bottle of beer or the pitcher of sangria.

However, understanding what makes predictive markets work and what the symptoms of groupthink look like can be a great benefit the next time you are making a decision within a group and start to see groupthink emerging.

Technorati Tags: Decision Making,Consensus,Predictive Markets,Groupthink

comments edit

Seen on Twitter today:

CodingHorror: Guitar Hero III for the PC/Mac has truly insane system requirements http://tinyurl.com/2c42fm and USB guitar = dongle :(

LazyCoder: @codinghorror So for only 1 billion times the computing power required to put a man on the moon, you too can fake guitar playing.

It is a good thing we put all that computing power to good use.

Technorati Tags: Humor

comments edit

Last week I was busy in Las Vegas at the DevConnections/OpenForce conferences. Unlike that pithy but over-used catch-phrase, what happens at a conference in Vegas should definitely not stay in Vegas; it should be blogged (at least the things during sessions that won’t get anyone in trouble).

It was my first time at DevConnections and I was not disappointed. This was also the first ever OpenForce conference put on by the DotNetNuke corporation, and from all appearances, it was quite a success.

[Photos: Carl Franklin and Rob Conery; Rob Conery and Rick Strahl]

The first night, as I passed by the hotel bar, I noticed Carl Franklin, well known for .NET Rocks, a podcast on .NET technologies. As I had an appearance on the show, it was nice to finally meet the guy in person.

Also met up with the Flyin’ Hawaiians, Rob Conery (from Kauai) and Rick Strahl (from Maui).

This is a big part of the value and fun of such conferences: the back-channel conversations on various topics provide new insights you might not have otherwise received.

The next day, I attended the ASP.NET MVC talk given by Scott Hanselman and Eilon Lipton. The talk was well attended for a technology for which no CTP bits are available yet. Quite a few people stuck around to ask questions. I also attended their next talk, part 3 of 3 in a series in which Scott Guthrie gave the first two parts.

Scott and Eilon are like the Dean Martin and Jerry Lewis of conference talks (I won’t say which is which). They play off each other quite well in giving a humorous, but informative, talk.

The best part for me was watching Eilon squirm as a star-struck attendee asked to have her picture taken with him after having done the same with Scott Hanselman. I think we expect this sort of geek-worship with Scott, but Eilon seemed genuinely uncomfortable. She was quite excited to get a pic with the guy who wrote the UpdatePanel!

[Photos: An admirer gazes at Scott; an admirer gazes at Eilon]

One experience that was particularly fun for me was getting to go around the exhibitor floor, camera-man in tow, to interview attendees about their impressions of the conference. Normally such work goes to charismatic and well-spoken guys like Carl Franklin and Scott Hanselman, but both were too busy at the time, and Scott pointed the cameraman to me.

I try to remain open to new experiences, even ones that take me out of my comfort zone (I have stage fright). I walked around with a microphone interviewing people and saw that the attendees really love this conference and felt they got a lot out of it. At least the ones willing to talk about it on camera. ;)

When asked what their favorite talk was, a couple attendees mentioned the MVC talk, which was good to hear.


While I was on the floor, they held a drawing in which they gave out a Harley. The winner happened to be a Harley-loving motorcycle rider, so it worked out pretty well.

On Wednesday and Thursday, I participated in two panels at the OpenForce conference on the topic of Open Source. They taped these panels, so hopefully the videos will be up soon. I’ll write more about what we discussed later. I need to get some sleep; after leaving Vegas, I flew out the next day to Redmond, and it is very late.

Technorati Tags: DevConnections,OpenForce

asp.net, asp.net mvc comments edit

While at DevConnections/OpenForce, I had some great conversations with various people on the topic of ASP.NET MVC. While many expressed their excitement about the framework and asked when they could see the bits (soon, I promise), there were several who had mixed feelings about it. I relish these conversations because it helps highlight the areas in which we need to put more work in and helps me become a better communicator about it.

One thing I’ve noticed is that most of my conversations focused too much on the MVC part of the equation. Dino Esposito (whom I met very briefly) wrote an insightful post pointing out that it isn’t the MVC part of the framework that is most compelling:

So what’s IMHO the main aspect of the MVC framework? It uses a REST-like approach to ASP.NET Web development. It implements each request to the Web server as an HTTP call to something that can be logically described as a “remote service endpoint”. The target URL contains all that is needed to identify the controller that will process the request up to generating the response–whatever response format you need. I see more REST than MVC in this model. And, more importantly, REST is a more appropriate pattern to describe what pages created with the MVC framework actually do.

In describing the framework, I’ve tended to focus on the MVC part of it and the benefits in separation of concerns and testability. However, others have pointed out that by keeping the UI thin, a good developer could do all these things without MVC. So what’s the benefit of the MVC framework?

I agree, yet I still think that MVC provides even greater support for Test Driven Development than before both in substance and in style, so even in that regard, there’s a benefit. I need to elaborate on this point, but I’ll save that for another time.

But MVC is not the only benefit of the MVC framework. I think the REST-like nature is a big selling point. Naturally, the next question is, well why should I care about that?

Fair question. Many developers won’t care and perhaps shouldn’t. In those cases, this framework might not be a good fit. Some developers do care and desire a more REST-like approach. In this case, I think the MVC framework will be a good fit.

This is not a satisfying answer, I know. In a future post, I hope to answer that question better. In what situations should developers care about REST and in which situations, should they not? For now, I really should get some sleep. Over and out.

Technorati Tags: ASP.NET MVC,REST

asp.net, code, asp.net mvc, tdd comments edit

UPDATE: This content is a bit outdated as these interfaces have changed in ASP.NET MVC since the writing of this post.

One task that I relish as a PM on the ASP.NET MVC project is to build code samples and sample applications to put the platform through its paces and try to suss out any problems with the design or usability of the API.

Since testability is a key goal of this framework, I’ve been trying to apply a Test Driven Development (TDD) approach as I build out the sample applications. This has led to some fun discoveries in terms of using new language features of C# to improve my tests.

For example, the MVC framework will include interfaces for the ASP.NET intrinsics. So to mock up the HTTP context using Rhino Mocks, you might do the following.

MockRepository mocks = new MockRepository();
      
IHttpContext context = mocks.DynamicMock<IHttpContext>();
IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();
IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();

SetupResult.For(context.Request).Return(request);
SetupResult.For(context.Response).Return(response);
SetupResult.For(context.Server).Return(server);
SetupResult.For(context.Session).Return(session);

mocks.ReplayAll();
//Ready to use the mock now

Kind of a mouthful, no?

Then it occurred to me. I should use C# 3.0 Extension Methods to create a mini DSL (to abuse the term) for building HTTP mock objects. First, I wrote a simple proof of concept class with extension methods.

public static class MvcMockHelpers
{
  public static IHttpContext 
    DynamicIHttpContext(this MockRepository mocks)
  {
    IHttpContext context = mocks.DynamicMock<IHttpContext>();
    IHttpRequest request = mocks.DynamicMock<IHttpRequest>();
    IHttpResponse response = mocks.DynamicMock<IHttpResponse>();
    IHttpSessionState session = mocks.DynamicMock<IHttpSessionState>();
    IHttpServerUtility server = mocks.DynamicMock<IHttpServerUtility>();

    SetupResult.For(context.Request).Return(request);
    SetupResult.For(context.Response).Return(response);
    SetupResult.For(context.Session).Return(session);
    SetupResult.For(context.Server).Return(server);

    mocks.Replay(context);
    return context;
  }

  public static void SetFakeHttpMethod(
    this IHttpRequest request, string httpMethod)
  { 
    SetupResult.For(request.HttpMethod).Return(httpMethod);
  }
}

And then I rewrote the setup part for the test (the rest of the test is omitted for brevity).

MockRepository mocks = new MockRepository();
IHttpContext context = mocks.DynamicIHttpContext();
context.Request.SetFakeHttpMethod("GET");
mocks.ReplayAll();

That’s much cleaner, isn’t it?

Please note that I call the Replay method on the IHttpContext mock. That means you won’t be able to set up any more expectations on the context. But in most cases, you won’t need to.

This is just a proof-of-concept, but I could potentially add a bunch of SetFakeXYZ extension methods on the various intrinsics to make setting up expectations and results much easier. I chose the pattern of using the SetFake prefix to help differentiate these test helper methods.

Note that this technique isn’t specific to ASP.NET MVC. As you start to build apps with C# 3.0, you can build extensions for commonly used mocks to make it easier to write unit tests with mocked objects. That takes a lot of the drudgery out of setting up a mocked object.

Oh, and if you’re lamenting the fact that you’re writing ASP.NET 2.0 apps that don’t have interfaces for the HTTP intrinsics, you should read my post on IHttpContext and Duck Typing in which I provide such interfaces.

Happy testing to you!

I have a follow-up post on testing routes. The project includes a slightly more full featured version of the MvcMockHelpers class.

Tags: ASP.NET MVC , TDD, Rhino Mocks

comments edit

Recently I gave out a few free copies of a book I co-wrote, but the supply ran out quickly. This is the same book about which Jeff Atwood (a co-author) told everyone, Do Not Buy This Book.

Well, if you didn’t get a copy, there is another opportunity to get one free. DotNetSlackers is running a contest and will reward the top 3 contributors to their forums (ooh, that could get…interesting. First Post! or I Agree! Count it up.) with great prizes.

Unfortunately, our book is part of the second prize package, which includes ANTS Profiler Pro. Our book is also the third prize.

Why is that unfortunate?

Because if you want a free copy of our book, you have to try real hard not to do so well that you win first prize, which is an Xbox 360 Elite along with Telerik RadControls. Once you get your hands on Halo 3 or Gears of War, you’d have no time to read our book! And you wouldn’t want that, would you?

In any case, for more contest details, check out the contest page.

Tags: Contest , Book

code comments edit

One thing I’ve found with various open source projects is that many of them contain very useful code nuggets that could be generally useful to developers writing different kinds of apps. Unfortunately, in many cases, these nuggets are hidden. If you’ve ever found yourself thinking, Man, I wonder how that one open source app does XYZ because I could use that in this app, then you know what I mean.

One goal I have with Subtext is to try and expose code that I think would be useful to others. It’s part of the reason I started the Subkismet project.

Another code nugget you might find useful in Subtext is our SQL script execution library, encapsulated in the Subtext.Scripting.dll assembly.

A loooong time ago, Jon Galloway wrote a post entitled Handling GO Separators in SQL Scripts - the easy way that tackled the subject of executing SQL Scripts that contain GO separators using SQL Server Management Objects (SMO). SMO handles GO separators, but it doesn’t (AFAIK) handle SQL template parameters.

So rather than go the easy way, we went the hard way and wrote our own library for parsing and executing SQL scripts that contain GO separators (much harder than it sounds) and template parameters. Here’s a code sample that demonstrates the usage.

string script = @"SELECT * FROM <table1, nvarchar(256), Products>
GO
SELECT * FROM <table2, nvarchar(256), Users>";

SqlScriptRunner runner = new SqlScriptRunner(script);
runner.TemplateParameters["table1"] = "Post";
runner.TemplateParameters["table2"] = "Comment";

using(SqlConnection conn = new SqlConnection(connectionString))
{
  conn.Open();
  using(SqlTransaction transaction = conn.BeginTransaction())
  {
    runner.Execute(transaction);
    transaction.Commit();
  }
}            

The above code uses the SqlScriptRunner class to parse the script into its constituent scripts (you can access them via a ScriptCollection property) and then sets the value of two template parameters before executing all of the constituent scripts within a transaction.

Currently, the class only has one Execute method which takes in a SqlTransaction instance. This is slightly cumbersome and it would be nice to have a version that didn’t need all this setup, but this was all we needed for Subtext.
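
One possible convenience overload might look something like this (a sketch, not part of the current library):

// Open the connection and manage the transaction internally so that
// callers only have to supply a connection string.
public void Execute(string connectionString)
{
  using (SqlConnection conn = new SqlConnection(connectionString))
  {
    conn.Open();
    using (SqlTransaction transaction = conn.BeginTransaction())
    {
      Execute(transaction);
      transaction.Commit();
    }
  }
}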

When I started writing this post, I thought about adding some overloads like the one sketched above, but instead, I will provide a copy of the assembly and point people to our Subversion repository, in the hope that someone out there will find this useful and have enough incentive to submit improvements!

Also, be sure to check out our unit tests for this class to understand what I mean when I say it was harder than it looks. As a hint, think about dealing with GO statements that appear inside comments and quoted strings. Also, GO doesn’t have to be the only thing on the line; certain specific elements can come before or after a GO statement on the same line.

In case you missed the link, DOWNLOAD IT HERE.

Tags: Subtext , SQL , Sql Script Runner

comments edit

A while back, Jon Galloway asked the question, Can Operating Systems tell if they’re running in a Virtual Machine? What a silly question! When was the last time an Operating System questioned its own existence? Is that going to be in the next version of Windows - Windows Vista Into Its Own Soul? Or perhaps Apple will come out with Mac OS Existentialist?

Perhaps a more interesting question is whether or not you can tell that a web server is running in a virtual machine? Last weekend I migrated my blog into a virtual server running on a pretty sweet host machine and so far, my site seems to have gained an extra spring in its step.

Given that it’s hosted on the same server as that bandwidth hog, CodingHorror.com, I’m very pleased with the performance thus far. At least until he gets Dugged again.

In Jeff’s post, he mentioned that CrystalTech hooked us up with a beefy 64-bit dedicated server. Several commenters noted that there was no 64-bit offering on their site. The reason is that they hadn’t received much demand for 64-bit servers until we came along with our bullying tactics.

Dedicated Dual & Quad
Core Through a contact over there, I wanted to see if we could work out a hosting deal. Jeff was adamant that we get a 64-bit server, which they didn’t have at the time, but could certainly order. I pretty much didn’t want them to go through all that trouble and was ready to move on, but they insisted it was not a big deal.

They lied…er…understated their case. Rather than simply build a single 64-bit server, they took it as an opportunity to build out a whole new product offering of 64-bit dedicated servers.

So what started off as me trying to scam some discounted or free hosting ended up spurring these guys to start a new product offering. Nice!

I’m now hosting this site and SubtextProject.com on this box, but our CCNET server is still in a virtual machine hosted generously by Eric Kemp of Subsonic fame.

I used to be skeptical of hosting my site in a virtual machine, as I felt like if I hosted closer to the metal, I could wring out extra performance. But let’s get real here, I’m not taxing this machine in any way.

I’m sold on the benefits and convenience of virtualization.

Tags: CrystalTech , Hosting , Virtual Machine

comments edit

From Monday night to Thursday afternoon next week I will be in Las Vegas attending both DevConnections/ASPConnections as well as the DotNetNuke OpenForce conference. After that, I will be up in Redmond for the next week.

I wrote before that I would be speaking on a couple panels at OpenForce conference talking about open source and .NET.

If you’re interested, the panels will be:

Wednesday, Nov 7 - 8:00 AM - 9:15 AM Lagoon L

DOS101: Panel Discussion: Open Source on the Microsoft Technology Stack
Scott Guthrie, Phil Haack, Rob Conery and Shaun Walker

Thursday, Nov 8 - 9:30 AM - 10:45 AM Lagoon F

DOS102: Panel Discussion: .NET Open Source Architectural Models
Joe Brinkman, Phil Haack, Jay Flowers, Jon Galloway and Rob Conery

It’s unfortunate that the first panel is at 8:00 AM because I never really sleep much at all when I’m in Vegas. I pretty much have too much fun (unless I’m losing bad). So if you run into me, I apologize in advance for the fool I make of myself. ;)

Another talk you’ll not want to miss is the MVC talk given by Scott Hanselman and Eilon Lipton.

Tuesday, Nov 6 - 2:30 PM - 3:30 PM Room 2

AMS304: Introduction to the new ASP.NET Model View Controller (MVC) Framework

Scott Hanselman, Eilon Lipton

I’ll definitely be at this talk. These two are giving a couple of talks together you can read about on Scott’s post. Come by and say hi.

Tags: Microsoft , DevConnections , AspConnections , OpenForce , Conference

comments edit

Pop quiz for you C# developers out there. Will the following code compile?

//In Foo.dll
public class Kitty
{
  protected internal virtual void MakeSomeNoise()
  {
    Console.WriteLine("I'm in ur serverz fixing things...");
  }
}

//In Bar.dll
public class Lion : Kitty
{
  protected override void MakeSomeNoise()
  {
    Console.WriteLine("LOL!");
  }
}

If you had asked me that yesterday, I would have said hell no. You can’t override an internal method in another assembly.

Of course, I would have been WRONG!

Well the truth of the matter is, I was wrong. This came up in an internal discussion in which I was unfairly complaining that certain methods I needed to override were internal. In fact, they were protected internal. Doesn’t that mean that the method is both protected and internal?

Had I simply tried to override them, I would have learned that my assumption was wrong. For the record…

protected internal means protected OR internal

It’s very clear when you think of the keywords as the union of accessibility rather than the intersection. Thus protected internal means the method is accessible by anything that can access the protected method UNION with anything that can access the internal method.
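
A quick way to see the OR semantics in action is to call the method from a non-derived class in the same assembly (Zookeeper is made up for illustration):

//Also in Foo.dll
public class Zookeeper
{
  public void PokeTheCat()
  {
    // Compiles fine: we're in the same assembly as Kitty, so the
    // "internal" half of protected internal grants access even
    // though Zookeeper doesn't inherit from Kitty.
    new Kitty().MakeSomeNoise();
  }
}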

[Image: A Donkey Named Lester - Creative Commons By Attribution - ninjapoodles]

As the old saying goes, when you assume, you make an ass out of u and me. I never understood this saying, because when I assume, I only make an ass of me. I really think the word should simply be assme. As in…

Never assme something won’t work without at least trying it.

UPDATE: Eilon, sent me an email to point out that…

BTW the CLR does have the notion of ProtectedANDInternal, but C# has no syntax to specify it. If you look at the CLR’s System.Reflection.MethodAttributes enum you’ll see both FamANDAssem as well as FamORAssem (“Family” is the CLR’s term for C#’s protected and “Assem” is C#’s internal).

If you don’t know Eilon, he’s a freaking sharp developer I get to work with on the MVC project and was the one who kindly disabused me of my ignorance on this subject. He keeps a blog at http://weblogs.asp.net/leftslipper/.

Apparently he’s the one with the clever idea of using a C# 3.0 anonymous type as a dictionary, that many of you saw in ScottGu’s ALT.NET Conference talk. Very cool.

subtext comments edit

An undisclosed source informed me that MySpace China is using a modified version of Subtext for its blogging engine.

I had to check it out for myself to confirm it and it is true! Check out my first MySpace China blog post. How do I know for a fact that this is running on Subtext? I just viewed source and saw this little bit of javascript…

var subtextBlogInfo = new blogInfo('/', '/1304049400/');

So if anyone is wondering if Subtext can scale, it sure can. MySpace China gets around 100 million page views, approximately a million of which go to the blog.

My source tells me the MySpace China developers found some bugs in Subtext that they had to fix, bugs that were only exposed when they put a huge load on it. Although they are under no obligation to do so under our license, I hope they will contribute those fixes back to Subtext as patches.

So to all you Chinese users of Subtext (via MySpace China), 你好 to you.

Technorati Tags: Subtext , MySpace China

comments edit

Like a lovesick puppy, my good friend Rob Conery is following me to Microsoft.

I’m excited (not yet super excited) that Rob is going to be joining us working on Subsonic as the sugar on top of the work we’re doing with the MVC framework. Good times! We’re definitely going to have to celebrate in Vegas at the DotNetNuke conference and DevConnections (you all will be there, won’t you?)

This is perhaps another item to add to the list I made of signs of progress in regards to how Microsoft is approaching Open Source.

Rob, be sure to read Dan Fernandez’s post on the stages of new employees at Microsoft.

I really think our first order of business is to build a sample MVC application using LOLCode.NET. Until your framework supports LOLCode, nobody takes you seriously.

Technorati Tags: Microsoft , Rob Conery

comments edit

For those starting out at Microsoft, an analogy you’re likely to hear a lot is “drinking from the firehose”. The first time I ever heard this phrase was when Dare used it in a post about the flood of information that comes from subscribing to multiple RSS feeds.

[Photo: Bronx Summer. Photographer unknown.]

It’s entirely apropos (I’ve loved that word ever since The Matrix) as a description of starting as a new employee at Microsoft. My buddy Walter said his brother had the same feeling when he started at CalTech. That resonates with me because, unlike my college with its emphasis on the liberal arts, Microsoft very much feels like an engineering college.

For example, everyone around me is technically adept and incredibly smart. It’s funny to hear myself say that. Over the past few years, I’ve read many blog posts from people I highly respect, real brainiacs in the industry, talk about their transition to Microsoft and they would often say something similar. Something along the lines of…

“I’m surrounded by really really smart people.”

“Everyone here is super smart.” (Note: they like to use “super” as a prefacing adjective a lot around here, especially in the phrases “super smart” and “super excited”.)

“I feel humbled by the smart people around me.”

I used to read these statements and think to myself, Bullshit! You’re freakin’ Don Box! Or Chris Sells! Or Scott Hanselman! Or John Lam! (I could go on…) I know you’re just saying that to be nice. I mean, how could you really say otherwise, since you have to work with them?

Sure I bet these people are smart, and many of them might even be scary smart, but you know you’re a big dog over there. Admit it. Go ahead, admit it.

Ahhhh yes, the ignorant arrogance of an outsider. Now here I find myself saying the same things these guys have said, though admittedly, the bar is lower in my case than the aforementioned highly respected gentlemen.

I’m sure there must be some stupid people around here somewhere. They’re just not in my group as far as I can tell.

Investing for the long run

I had a great meeting with my manager on my third day of work. Rather than focusing on technology, tasks, and features, we spent a couple hours talking about passion, personal mission, goals, long term outlook, etc… And not the typical bullcrap I can regurgitate in an interview (whoops!). I was forced to really think deeply about these issues, about what I really want out of a career, which is quite frankly something I haven’t done in a long time. I’ve been too busy doing and not spending any time pondering. It is really important to have a balance of both.

In part, I think this is a reflection of a company that can afford to invest in the long run and be strategic, rather than always investing in the short term and feeling like a chicken with its head cut off. It is a refreshing change of pace.

But don’t confuse that with a slow pace; it was crazy busy over here last week. Right off the bat I was put to work preparing materials for a private Software Design Review (SDR) we had with select customers and partners, which kept me busy over the weekend. The great part, of course, is that I’m pretty much in control of my own schedule, as long as I’m producing results.

You Sound Like You Totally Drank The Kool-Aid?

It may seem like I have completely drunk the kool-aid, but I like to think that I held it in my cheek waiting for a moment when nobody was looking to spit it out. Besides, the kool-aid would be extremely diluted from the fire hose.

The phrase “drinking the kool-aid” implies a cult or herd mentality, which is something I hope to avoid. In fact, it makes me a more valuable employee if I can keep some of my naive outsider thinking intact, though perhaps over time, I will be super assimilated (doh!).

I think my enthusiasm for my work has a lot to do with the particular group I am in and the particular project I am on. I know there are some people very dissatisfied at Microsoft, so it’s not all roses and ice cream.

I have also heard second-hand comments that show that some people here still have misconceptions about Open Source. Not a willful antagonism, just a misunderstanding, which is easily remedied via education.

This post has gone on long enough. If people are interested, I’d be happy to write more about my experiences and impressions of Microsoft as things progress.

The main thing I want to say is that I still plan to work on Subtext, though my involvement will be scant in the near term. I also still plan to continue blogging and not drop off the face of the blogodome as some have feared.

Technorati Tags: Microsoft

comments edit

If you live in the Seattle area and like code, talking about code, or listening to people talk about code, you owe it to yourself to check out the Seattle Code Camp.

  • WHO: You and a bunch of other code junkies
  • WHAT: Code Camp Seattle!
  • WHEN: November 17, 18 (Sat and Sun) 2007
  • WHERE: DigiPen Institute of Technology, Redmond, WA
  • WHY: Did you not see the first paragraph of this post?
  • HOW: I leave that up to you, but consider car pooling.

Sadly, I won’t be able to make this one since I still live in Los Angeles, but Jason Haley and others will be there. So at the very least, make nice with Jason, get on his Interesting Finds list, and get your blog some exposure.

Check out the website for more information and a brief FAQ on the code camp ethos.