
Chameleon

Having trouble sleeping lately. Thought I’d start an intermittent blog series about the questions that keep me up at night.

For example, this question popped into my head tonight and would not let me rest.

What the hell is a Karma Chameleon?

Ponder amongst yourselves.  I bid you adieu.


UPDATE: A bug was reported that blog posts could not be deleted. We have updated the release files with a fixed version.  There’s also a quick and dirty workaround.

Reading over my last blog post, I realized I can be a bit long-winded when describing the latest release of Subtext. Can you blame me? I pour my blood, sweat, and tears (minus the blood) into this project. Doesn’t that give me some leeway to be a total blowhard? Wink

This post is for those who just want the meat. Subtext 1.9.2 has the goal of making the world safe for trackbacks and comments. It adds significant comment spam blocking support. Here are the key takeaways for upgrading.

  • As always, back up your database and site before upgrading. We implemented a major schema change that requires the upgrade process to move some data to a new table.
  • If you are upgrading from Subtext 1.5 or earlier, read this.
  • Instructions for upgrading.
  • Instructions for a clean installation. This is easier than upgrading.
  • When upgrading, make sure to merge your customizations to web.config into the new web.config.
  • If you use Akismet, make sure not to use ReverseDOS until we resolve some issues.
  • After upgrading, log in to the admin and select the correct time zone for your location.

Download it here.


In response to a question about integrating my custom search engine with the browser, Oran left a comment with a link to a post on how to search your FogBugz database from your browser via an OpenSearch provider, which is supported by Firefox 2.0 and IE7.

So I went ahead and used this as a guide to implementing OpenSearch for my custom search engine on my blog. When you visit my blog, you should notice that the search icon in the top left corner of your browser is highlighted (screenshot from Firefox 2).

Open search box.

Click on the down arrow and you will see my own search engine Haack Attack in the list of search providers.

Haack Attack

Now you can search using Haack Attack via your browser. Implementing this required two easy steps. First, I created an OpenSearch.xml file and dropped it in the root of my website. Here is my file with some of the gunk removed from the URL.

<?xml version="1.0" encoding="UTF-8" ?>
<OpenSearchDescription 
    xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Haack Attack</ShortName>
  <Tags>Software Development C# ASP.NET</Tags>
  <Description>
    Search the web for relevant 
    .NET and software development content.
  </Description>
  <Url type="text/html" 
    template="http://www.google.com/custom?
    StuffOmitted&amp;q={searchTerms}" />
</OpenSearchDescription>

Remember to use &amp;amp; for query string ampersands, since this is an XML file. Also, if you are using your own Google Custom Search engine, the actual template value looks something like:

http://www.google.com/custom?cx=016071428520527893278%3A3kvxtxmsfga &q={searchTerms}&sa=Search&cof=GFNT%3A%23000000%3BGALT%3A%23000066% 3BLH%3A23%3BCX%3AHaack%2520 Attack%2520The%2520Web%3BVLC% 3A%23663399%3BLW%3A100%3BDIV%3A%23336699%3BFORID%3A0%3BT%3A%23000000 %3BALC%3A%23660000%3BLC %3A%23660000%3BS%3Ahttp%3A%2F%2Fhaacked%2Ecom %2F%3BL%3Ahttp%3A%2F%2Fhaacked%2Ecom%2Fskins%2Fhaacked%2Fimages%2F Header%2Ejpg%3BGIMP%3A%23000000%3BLP%3A1%3BBGC%3A%23FFFFFF%3BAH%3Aleft& client=pub-7694059317326582

So be sure to change it appropriately for your own search engine.

The second step is to add auto-discovery. I added the following <link /> tag to my blog. Obviously, you would want to customize the title and href for your own needs.

<link title="Haack Attack" 
  type="application/opensearchdescription+xml" 
  rel="search" 
  href="https://haacked.com/OpenSearch.xml">
</link>

So give it a try and let me know what you think. Be sure to add sites you think are relevant to this search engine.


The Viper Room

Last night I went to the “World Famous Viper Room”. Gotta respect that their website makes sure to mention the World Famous part. I suppose you’d have to have a real club inferiority complex to promote your club as The In This Neighborhood Sorta Famous Viper Room.

Anyways, the purpose of my visit was to see my soccer (sorry, football) teammate and team captain Pete perform in the acoustic lounge, an intimate (read: tiny) lounge downstairs from the main room.

Pete’s the one from Glasgow with a heavy Scottish brogue.  We can hardly understand him most of the time, though I’m getting better at it.  Usually I just nod my head in agreement.

After our last game we asked Pete if he’d get us on the guest list. “Not after that performance I won’t,” was his reply… I think. He was referring to the 12 to 1 drubbing we received at the hands of Hollywood United. That’s the team fielding nine internationally capped players and two World Cup players, one of whom was on the squad that won the 1998 World Cup.

In contrast, we field one internationally capped player, who played for the Cayman Islands. We should be making some acquisitions for next season that will help. Our goal for next season is to keep them in the single digits. Incremental improvements, ya know?


Google Beta

Google just launched a neat build-your-own-search-engine feature. You can choose sites that should be emphasized in the search results, or even restrict search results to that list.

It offers a lot of customization, but I seem to have run into a problem where it isn’t saving the images for my profile, for some reason.

One particularly neat feature is the ability to allow others to collaborate on the search engine.  For example, check out my search engine called Haack Attack the Web.  I entered a few .NET and software development focused websites to the list of sites to search, restricting results to just those sites.

Feel free to add your favorite websites to the search engine. I think I’ll actually find this a useful first stop for finding .NET content.

The next step is to integrate this into the search features for Firefox and IE7.  Anyone want to help?


Space Shuttle Landing

Jeff Atwood writes a great post about The Last Responsible Moment. Take a second to read it and come back here. I’ll wait.

In the comments, someone named Steve makes this comment:

This is madness. Today’s minds have been overly affected by short attention span music videos, video games, film edits that skip around every .4 seconds, etc.

People are no longer able to focus and hold a thought, hence their “requirements” never settle down, hence “agile development”, extreme coding, etc.

I wonder what methodology the space shuttle folks use.

You shouldn’t humor this stuff, it’s a serious disease.

Ahhhh yes. The Space Shuttle. The paragon of software development. Never mind the fact that 99.999% of developers are not developing software for the Space Shuttle; we should all write code like the Shuttle developers. Let’s delve into that idea a bit, shall we?

Invoking the Space Shuttle is common when arguing against iterative or agile methodologies for building software. Do people really think, hey, if agile won’t work for the Space Shuttle, how the hell is it going to work for my client’s widget tracking application? Gee, your widget app is doomed.

The Space Shuttle is a different beast entirely from what most developers deal with day in and day out. This is not to say that there aren’t lessons to be learned from how the Shuttle software is built; there certainly are. Good lessons. No, great lessons! But in order to make good use of those lessons, you must understand how your client is very different from the Shuttle developers’ client.

One reason that the requirements for the Space Shuttle can be more formally specified up front is that the requirements have very little need to change once the project is underway. When was the last time the laws of gravity changed? The Shuttle code is mostly an autonomous system, which means the “users” of the code are the Shuttle itself and the various electronic and mechanical systems it must coordinate. These are well-specified systems that do not need to change often, if at all, in the course of a project.

Contrast this to business processes, which are constantly evolving and heavily people-centric. Many times, the users of a system aren’t even sure how to solve the business problem they are trying to solve with software. This is partly why they don’t know exactly what they want until they have the solution in hand. We can wave activity diagrams, lists of requirements, user stories, and use cases in front of them all day long, but these are rough approximations of what the final system will do and look like. It’s like showing the user a pencil sketch of a stick figure and hoping they see the Mona Lisa.

Later in the comments, the same guy, Steve, responds with what we should do to focus users:

1) what do you want it to do?
2) understand the business as much as you can
3) draw a line in the sand for that which you can safely deliver pretty soon
4) build a system that is extensible, something that can be added on to fairly easily, because changes are coming (that is agile-ness)
5) charge for changes

And I agree. One of the common misperceptions of agile approaches is that you never draw a line in the sand. This is flat out wrong. You do draw a line in the sand, but you do it every iteration.

Unlike the BDUF Waterfall approach, which requires that you force the user to spew requirements until he or she is blue in the face, with iterative approaches you gather a list of requirements and prioritize them across iterations. This goes a long way toward avoiding poor requirements due to design fatigue. The user can change requirements for later iterations, but once an iteration has commenced, the line in the sand is drawn for that iteration.

To me, this sounds like the last responsible moment for deciding on a set of requirements to implement. You don’t have to decide on the entire system at the beginning. You get some leeway to trade requirements for later iterations. You are only forced to decide for the current iteration.

 


Mona Lisa

When I was a bright-eyed, bushy-tailed senior in college, I remember wading through pages and pages of job ads in Jobtrak (which has since been absorbed into Monster.com).

Most of the ads held my attention in the same way reading a phone book does. The bulk of them had something like the following format.

Responsibilities:

Design and develop data-driven internet based applications that meet functional and technical specifications. Use [technology X] and [technology Y] following the [methodology du jour] set of best practices. Create documentation, review code, and perform testing.

Required Skills and Experience:

Must have X years in [now outdated language]. Must be proficient in BLAH, BLAH, and BLAH. Ability to work in a team environment. Able to work in a fast-paced [meaning we’ll work your ass off] environment.

I know what you’re thinking. Where do I sign up!

Yaaaaawn. Keep in mind, this was in 1997 just as the job market was starting to reach the stratosphere. Competition was tight back then. Do a search on Dice.com right now and you’ll still see a lot of the same.

I’m sorry, but your job posting is not the place to spew forth a list of Must have this and Must have that and a list of responsibilities so plain and vanilla that…that… I just don’t have a good analogy for it. Sorry.

These types of ads attempt to filter out candidates who do not meet some laundry list of requirements. But that is not the goal of a good job ad. A good job ad should not explain what hoops the candidate must jump through to join your company; it should explain why the candidate would even want to jump through those hoops in the first place.

This of course assumes you are attempting to hire some star developers away from their current jobs rather than resume padders who have spent most of their careers in training classes so they can place your laundry list of technology TLAs underneath their name on their resume.

Certainly, a job posting should briefly explain the type of work and the experience desired for the role. No point in having a person whose only experience is in sales and marketing apply for your senior DBA position (true story). But first and foremost, you want to catch the attention of a great candidate. Boring job ads that read like the one above do not capture the imagination of good developers.

Back to my story. As I said, most of the ads fit this mold, but there were a few here and there that popped off the screen. I wish I had saved the one that really caught my attention. It was from a small company named Sequoia Softworks (which is still around, but now named Solien). My memory of it is vague, so I’ll just make something up that resembles the spirit of the ad. All I remember is that it started off by asking questions.

Are you a fast learner and good problem solver? Are you interested in solving interesting business problems using the latest web technologies and building dynamic data driven web applications?

We’re a small web development company in Seal Beach (right by the beach in fact!) looking for bright developers. We have a fun casual environment (we sometimes wear shorts to work) with some seriously interesting software projects.

Experience in Perl, VBScript, and SQL is helpful, but being a quick learner is even more important. If you’ve got what it takes, give us a call.

I ended up working there for six years, moving up the ranks to become the Manager of Software Development (never once writing a line of Perl). They did several things right in their ad, as I recall.

Challenge the reader and demand an answer!

Do you have two years’ experience in C#? is not a challenge to the reader. It is not a question that captures my attention or draws me in, demanding an answer.

Do you know C# like Paris Hilton knows manufacturing fame? Now that is a challenge to my intellect! Hell yeah I know C# like nobody’s business. That kind of question demands an answer. And a good candidate is more likely to drive over to your headquarters and give it to you.

Appeal to vanity

Not every appeal to vanity is a bad thing. It doesn’t always amount to sucking up. This point is closely related to the last point in that an appeal to vanity is also a challenge to a candidate to show just how great they are. Asking someone if they are a good problem solver, for example, conjures up a desire to prove it.

Show some personality

Sure, many corporations seem like soulless cubicle farms in which workers are seen as mindless drones. But surely not your company, right? So why does your job posting have a tombstone all over it?

Who wants to be another cog in a machine performing mundane tasks for god knows what reason? Your ad should express a bit of your company’s personality and culture.  It should also indicate to the reader that people who come to work for you are going to work with people. Interesting people. And they are going to work on interesting projects.

I write all this because of an article I read about business schools. It was a throw-away quote in which an employer mentioned how a new hire fresh out of business school helped rewrite some job postings, and they were quickly able to fill, with high-quality candidates, positions they had been struggling to fill.

A well written job posting makes a difference.

I mentioned before that I am participating in the HiddenNetwork job board network because I really believe in the power of blogs to connect good people with good jobs. It’s in its nascent stages so I really don’t know how well it is fulfilling that mission yet, but I believe that it will do well.

If you do post a job via my blog (yes, I get a little something something if you do), be sure to make it a good posting that really captures the imagination of good developers (as all my readers are!  See. Appeal to vanity.). It’s even more of a challenge given how few words you have at your disposal for these ads.


In a recent post I ranted about how ASP.NET denies WebPermission in Medium Trust. I also mentioned that there may be some legitimate reasons to deny this permission based on this hosting guide.

Then Cathal (thanks!) emailed me and pointed out that the originUrl attribute does not take wildcards; it takes a regular expression.

So I updated the <trust /> element of web.config like so:

<trust level="Medium" originUrl=".*" />

Lo and Behold, it works! Akismet works. Trackbacks work. All in Medium Trust.

Of course, a hosting provider can easily override this as Scott Guthrie points out in my comments. I need to stop blogging while sleep deprived. I have a tendency to say stupid things.

Now a smart hosting company can probably create a custom medium trust policy in order to make sure this doesn’t work, but as far as I can tell, this completely works around the whole idea of denying WebPermission in Medium Trust.

If I can simply add a regular expression to allow all web requests, what’s the point of denying WebPermission?


Tyler, an old friend and an outstanding contractor for VelocIT, recently wrote a post suggesting that you would get better performance by passing an array of objects to the Microsoft Data Access Application Block methods rather than an array of SqlParameter instances. He cited this article.

The article suggests that instead of this:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlParameter[] parameters = new SqlParameter[]
  {
    new SqlParameter("@id", aUser.Id)
    , new SqlParameter("@name", aUser.Name)
    , new SqlParameter("@name", aUser.Email)
    , new SqlParameter("@name", aUser.LastLogin)
    , new SqlParameter("@name", aUser.LastLogOut)
  };

  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , parameters);
}

You should do something like this for performance reasons:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , aUser.Id
    , aUser.Name
    , aUser.Email
    , aUser.LastLogin
    , aUser.LastLogout);
}

Naturally, when given such advice, I fall back to the first rule of good performance from the perf guru himself, Rico Mariani. Rule #1 Is to Measure. So I mentioned to Tyler that I’d love to see metrics on both approaches. He posted the result on his blog.

Calling the methods included in a previous post, 5000 times each,

With parameters took 1203.125 milliseconds. With objects took 1250 milliseconds. Objects took -46.875 milliseconds less.

20000 times each:

With parameters took 4859.375 milliseconds.
With objects took 5015.625 milliseconds. Objects took -156.25 milliseconds less.
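For the curious, reproducing this kind of measurement takes only a few lines with System.Diagnostics.Stopwatch. Here is a rough sketch, not Tyler’s actual harness; the variable names and iteration count are illustrative:

// Assumes a SystemUser instance named "user" populated elsewhere.
Stopwatch watch = Stopwatch.StartNew();
for (int i = 0; i < 5000; i++)
{
  GetWithSqlParams(user); // swap in the object-array version for the second run
}
watch.Stop();
Console.WriteLine("With parameters took {0} milliseconds.", watch.Elapsed.TotalMilliseconds);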

The results show that the performance difference is negligible. However, even before seeing the performance results, I would have agreed with the article and chosen the second approach, though for different reasons: it results in a lot less code. As I have said before, less code is better code.

I tend to optimize for productivity all the time, but I only optimize for performance after carefully measuring for bottlenecks.

There’s also a basic economics question hidden in this story. The first approach does seem to eke out slightly better performance, but at what cost? That’s a lot more code to write to eke out 47 milliseconds worth of performance out of 5000 method calls. Is it really worth it?

This particular example may not be the best illustration of that principle (wasting time on optimization at the expense of productivity), because there is one redeeming factor worth mentioning about the first approach.

By explicitly specifying the parameters, they can be listed in any order, whereas the second approach requires the parameters to be listed in the same order they are specified in the stored procedure. Based on that, some may find the first approach preferable. Me, I prefer the second approach because it is cleaner and easier to read, and I don’t see keeping the order intact as much more work than getting the parameter names correct.

But that’s just me.


Source: http://macibolt.hu/pag/goldilock.html

This is a bit of a rant born out of some frustrations I have with ASP.NET. When setting the trust level of an ASP.NET site, you have the following options:

Full, High, Medium, Low, Minimal

It turns out that many web hosting companies have congregated around Medium trust as a sweet spot: tightened security while still allowing decent functionality. Only natural, as it is the one in the middle.

For the most part, I am sure there are very good reasons why some permissions make it into Medium trust and others do not. But some decisions seem rather arbitrary. For example, WebPermission. Why couldn’t that be part of the default Medium trust configuration? I mean really, why not? (Ok, there are really good reasons, but remember, this is a rant, not careful analysis. Bear with me. Let me get it off my chest.)

Web applications have very good reason to make web requests (ever hear of something called a web service? They may take off someday), and how damaging is that going to be to a hosting environment? I mean, put a throttle on it if you are that concerned, but don’t rule it out entirely!

I really do want to be a good ASP.NET citizen and support Medium Trust with applications such as Subtext, but what a huge pain it is when some of the best features do not work under Medium Trust. For example, Akismet.

Akismet makes a web request in order to check incoming comments for spam. I tried messing around with wildcards for the originUrl attribute of the <trust /> element, but they don’t work. In fact, I only found a single blog post that said it would work, but no documentation that backed that claim up.

Instead, you need access to the machine.config file (as the previously linked blog post describes), which no self-respecting host is going to give you willy-nilly. Nope. In order to get Akismet to work under Medium Trust, I have to tell Subtext users that they must beg, canoodle, and plead with their hosting provider to update the machine.config file to allow unrestricted access to WebPermission. Good luck with that.

If they don’t grant unrestricted access, then they need to add an originUrl entry for each URL you wish to request. Hopefully machine.config entries do allow wildcards, because the URL for an Akismet request includes the Akismet API key. Otherwise running an Akismet-enabled multiple-user blog in a custom Medium Trust environment would be a royal pain.
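For reference, the entry the host would need to add to its trust policy file looks roughly like this. This is a sketch modeled on the stock ASP.NET policy files; the file name varies by trust level, and the uri value (a regular expression) is just an illustration, not the exact pattern Akismet requires:

<IPermission
    class="WebPermission"
    version="1">
  <ConnectAccess>
    <URI uri="http://.*\.rest\.akismet\.com/.*"/>
  </ConnectAccess>
</IPermission>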

Hopefully you can see the reason behind all my bitching and moaning. A major goal for Subtext is to provide a really simple and easy installation experience, or at least as easy as possible for a web application. Having an installation step that requires groveling does not make for a good user experience. But then again, security and usability have always been in tension.

Scott Watermasysk points out a great guide to enabling WebPermission in Medium Trust for hosters. So if you’re going to be groveling, at least you have a helpful guide to give them. The guide also points out the security risks involved with Medium Trust.



For a project I worked on, we had an automated build server running CruiseControl.NET hosted in a virtual machine. We did the same thing for Subtext (the project is now dead).

Some of you may have multiple virtual servers running on the same machine.  Typically in such a setup (at least typically for me), each virtual server won’t have its own public IP Address, instead sharing the public IP of the host computer.

This makes it a tad bit difficult to manage the virtual servers since using Remote Desktop to connect to the public IP will connect to the host computer and not the virtual machine.  The same thing applies to multiple real computers behind a firewall.

One solution (and the one I use) is to set up each virtual server to run Terminal Services, with each one listening on a different port. Then set up port forwarding on your firewall to forward requests on the respective ports to the correct virtual machine.

Configuring the Server

The setting for the Terminal Services port lives in the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Open up Regedit, find this key, and look for the PortNumber value.

PortNumber Setting

Double-click the PortNumber value and enter the port number you wish to use. Unless you think in hex (pat yourself on the back if you do), you might want to click on Decimal before entering your new port number.

Port Number Value Dialog
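If you’d rather script it than click through Regedit, the same change can be made from a command prompt with reg.exe (the port 3900 here is just an example). Either way, the new port doesn’t take effect until the Terminal Services service, or the machine, is restarted:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3900 /f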

Or, you can use my creatively named Terminal Services Port Changer application, which is available with source on GitHub. It’s a simple five-minute application that does one thing and one thing only: it changes the port number that Terminal Services listens on.

Terminal Services Port Changer

Remember, all the usual caveats apply about tinkering with the registry. You do so at your own risk.

Connecting via Remote Desktop to the non-standard Port

Now that you have the server all set up, you need to connect to it. This is pretty easy. Suppose you changed the port for the virtual machine to 3900. You simply append :3900 to the server name (or IP) when connecting via Remote Desktop.

Port Changer App Screenshot
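The same thing works from the command line with the Remote Desktop client; the server name here is made up:

mstsc /v:buildvm01:3900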

Keep In Mind

Keep in mind that the user you attempt to connect with must have the log on interactively right, as well as permission to log on to the Terminal Services session. For more on that, check out this extremely helpful article and its troubleshooting section.

That’s pretty easy, no?  Now you should have no problem managing your legions of virtual servers.



Just upgraded my blog to the latest version of Subtext in the Subversion 1.9 branch, not that you needed to know that. I’d just appreciate you letting me know, via my Contact page, if you run into problems leaving a comment and such.

Before I release 1.9.2 (long story why it ends with a 2 and not a 1), I need to update the Contact page so that the spam filters apply there as well.


Weird how coding on Subtext relaxes me. For the past couple of days I’ve been feeling a bit under the weather and getting worse. The weird part is that anytime I try to eat something, there’s a terrible aftertaste. And no, it’s not my breath. I couldn’t finish my pizza tonight. Pizza!

Anyways, I couldn’t sleep tonight, so I figured hacking on some Subtext code might relax me. I fixed some bugs and implemented FeedBurner support in Subtext, using the DasBlog code as a guide, though the way we implement feeds is quite different. It’ll come out in the next version of Subtext.

Unfortunately, it may take me longer to release because although coding on Subtext feels good when I’m sick, QA testing doesn’t.


Quick tip for you if you need to remotely connect to a server with VMware Server installed in order to manage the virtual server.

The VMware Server Console doesn’t work correctly if you Remote Desktop or Terminal Services in. You have to be physically at the machine or Remote Desktop into the console session.
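For what it’s worth, the XP/2003-era Remote Desktop client can reach the console session with the /console switch (later clients renamed it /admin); the server name below is made up:

mstsc /v:vmhost01 /console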

The symptoms I ran into were that I could not open a virtual machine, and when I tried to create a new one, I got an “Invalid Handle” error.



Recently I wrote a .NET-based Akismet API component for Subtext. In attempting to make as clean an interface as possible, I made the property that stores the commenter’s IP address of type IPAddress.

This sort of falls in line with the Framework Design Guidelines, which mention using the Uri class in your public interface rather than a string to represent a URL. I figured this advice applied equally well to IP addresses.

To obtain the user’s IP Address, I simply used the UserHostAddress property of the HttpRequest object like so.

HttpContext.Current.Request.UserHostAddress

The UserHostAddress property is simply a wrapper around the REMOTE_ADDR server variable which can be accessed like so.

HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]

For users behind a proxy (or router), this returns only one IP Address, the IP Address of the proxy (or router).  After some more digging, I learned that many large proxy servers will append their IP Address to a list maintained within another HTTP Header, HTTP_X_FORWARDED_FOR or HTTP_FORWARDED.

For example, if you make a request from a country outside of the U.S., your proxy server might add the header HTTP_X_FORWARDED_FOR and put in your real IP and append its own IP Address to the end. If your request then goes through yet another proxy server, it may append its IP Address to the end.  Note that not all proxy servers follow this convention, the notable exception being anonymizing proxies.

Thus to get the real IP address for the user, it makes sense to check the value of this first:

HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]

If that value is empty or null, then check the UserHostAddress property.

So what does this mean for my Akismet implementation?  I could simply change that property to be a string and return the entire list of IP addresses.  That’s probably the best choice, but I am not sure whether or not Akismet accepts multiple IPs.  Not only that, I’m really tired and lazy, and this change would require that I change the Subtext schema since we store the commenter’s IP in a field just large enough to hold a single IP address.

So unless smart people slap me upside the head and call me crazy for this approach, I plan to look at the HTTP_X_FORWARDED_FOR header first and take the first IP address in the list, if there is one. Otherwise I will grab the value of UserHostAddress. As far as I am concerned, it’s not that important that I identify the remote IP with 100% accuracy; I just need something consistent to pass to Akismet.
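In code, that plan boils down to something like the following sketch. This isn’t the actual Subtext implementation; the helper name is made up, and it assumes System.Net and System.Web are referenced:

public static IPAddress GetCommenterIpAddress(HttpRequest request)
{
  // Proxies that follow the X-Forwarded-For convention append their own
  // address to the end of the list, so the original client should be first.
  string forwardedFor = request.ServerVariables["HTTP_X_FORWARDED_FOR"];

  string candidate;
  if (!String.IsNullOrEmpty(forwardedFor))
    candidate = forwardedFor.Split(',')[0].Trim();
  else
    candidate = request.UserHostAddress;

  IPAddress address;
  if (IPAddress.TryParse(candidate, out address))
    return address;

  // Fall back to loopback rather than throwing; consistency matters more
  // here than perfect accuracy.
  return IPAddress.Loopback;
}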


UPDATE: I’ve since supplemented this with another approach.

Jeremy Miller asks the question, “How do you organize your NUnit test code?” My answer? I don’t; I organize my MbUnit test code.

Bad jokes aside, I do understand that his question is more focused on the structure of unit testing code and not the structure of any particular unit testing framework.

I pretty much follow the same structure that Jeremy does in that I have a test fixture per class (sometimes more than one per class for special cases).  I experimented with having a test fixture per method, but gave up on that as it became a maintenance headache.  Too many files!

One convention I use is to prefix my unit test projects with “UnitTests”. Thus the unit tests for Subtext are in the project UnitTests.Subtext.dll. The main reason for this, besides the obvious fact that it’s a sensible name for a project that contains unit tests, is that for most projects the unit test assembly then shows up at the bottom of Solution Explorer because of alphabetic ordering.

So then I co-founded a company whose name starts with the letter V. Doh!

UPDATE: I neglected to point out (as David Hayden did) that with VS.NET 2005 I can use Solution Folders to group tests. We actually use Solution Folders within Subtext. Unfortunately, much of my company’s work is still in VS.NET 2003, which does not boast such a nice feature.

One thing I don’t do is separate my unit tests and integration tests into two separate assemblies.  Currently I don’t separate those tests at all, though I have plans to start. 

Even when I do start separating tests, one issue with having unit tests in two separate assemblies is that I don’t know how to produce NCover reports that merge the results of coverage from two separate assemblies.

One solution I proposed in the comments to Jeremy’s post is to use a single assembly for tests, but have unit tests and integration tests live in two separate top-level namespaces. Then in MbUnit or TD.NET, you can simply run the tests for one namespace or the other.

Example Namespaces: Tests.Unit and Tests.Integration
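Structurally, that would look something like the following; the fixture names are illustrative, not taken from the actual Subtext test suite:

using MbUnit.Framework;

namespace UnitTests.Subtext.Tests.Unit
{
  [TestFixture]
  public class EntryFormatTests
  {
    // Fast, isolated tests with no external dependencies.
  }
}

namespace UnitTests.Subtext.Tests.Integration
{
  [TestFixture]
  public class BlogRepositoryTests
  {
    // Tests that hit the database or file system.
  }
}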

In the root of a unit test project, I tend to have a few helper classes, such as UnitTestHelper, which contains static methods useful for unit tests. I also have a ReflectionHelper class, just in case I need to “cheat” a little. Any other classes I might find useful, such as my SimulatedHttpRequest class, typically go in the root as well.


Tivo Icon

Ever prolific Jon Galloway has released another tool on our tools site. When we started the tools site, I talked some trash to spur some friendly competition between the two of us. Let’s just say Jon is kicking my arse so hard my relatives in Korea can’t sit down.

His latest tool, RegmonToRegfile, works with yet another Sysinternals tool, Regmon.

Winternals (maker of Sysinternals) released many fantastic tools for managing and spelunking your system.

So great, in fact, that Robb feels he owes them his child in gratitude.

Kudos to Microsoft for snatching up Mark Russinovich and Winternals Software.

Regmon is essentially a Tivo for your registry, allowing you to record and play back changes to the registry.

Regmon lacks the ability to export to a registry (.reg) file, which is where Jon’s tool comes into play. It can parse Regmon log files and translate them to .reg files.

Here is a link to Jon’s blog post on this tool.