
The great people at WebHost4Life moved my database and web server to new Windows 2003 servers. They put them in the same server block, and I noticed a significant decrease in the time it takes my blog to load. This explains why my site was down this morning.

Hopefully, this server’s tenants are much less abusive than my last one’s were.


Eric Lippert does a great job of defining the term Idempotent. I’ve used this term many times both to sound smart and because it is so succinct.

The one place I find idempotence really important is in writing update scripts for a database, such as the Subtext database. In an open source project geared toward other devs, you just have to assume that people are tweaking and applying various updates to the database. You really have no idea what condition the database is going to be in. That’s where idempotence can help.

For example, if an update script is going to add a column to a table, I try to make sure the column isn’t already there before adding it. That way, if I run the script twice, three times, twenty times, the table ends up the same as if I had run it once. I don’t end up adding the column multiple times.
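
A minimal T-SQL sketch of that kind of guard (the table and column names are made-up examples, not actual Subtext schema):

```sql
-- Only add the column if it isn't there yet, so the script
-- can run any number of times with the same end result.
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'blog_Content'
                 AND COLUMN_NAME = 'DateModified')
BEGIN
    ALTER TABLE blog_Content ADD DateModified DATETIME NULL
END
```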


Sometimes someone writes a post that makes you say, “Oh shit!” For example, Jon Galloway writes that writing a Windows Service just to run a scheduled process is a bad idea.

And he presents a very nice case. Nice enough that I take back the times I have condescendingly said that Windows Services are easy to write in .NET. I probably should look through some of the services I have written in the past. I know of one I could easily convert to a console app and gain functionality.

However, I think the decision isn’t always as easy as that. One service I have written in the past is a socket server that takes in encrypted connections and communicates back with the client. That obviously needs to run all the time and is best served as a Windows Service. The catch was that, since I had written the Windows Service code to be generalized, I was able to implement many other services very quickly, even ones with timers that ran on a schedule.

However, the most challenging ones to write happened to be the ones that ran on a schedule, since the scheduling requirements kept changing and I realized I was going down the path of implementing…well…the Windows Task Scheduler.

In general, I think Jon’s right. If all you are doing is running a scheduled task, use the Windows Task Scheduler until your system’s needs are no longer met by it. This follows the principle of doing only what is necessary and implementing the simplest solution that works.

In a conversation, Jon mentioned that a lot of developers perceive Windows Services to be a more “professional” solution than a task-scheduled console app. But one way to think of a service is as an application that responds to requests and system events, not necessarily a scheduled task. So to satisfy both camps, you could consider creating a service that takes in requests and a scheduled task that makes the requests. For example, the service might have a file system watcher active, and the scheduled task might write the file. I don’t suggest adding all this complexity to something that can be very simple, though.
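
As a rough sketch of that split (the folder path and worker method are made up for illustration), the service half might look something like this; the scheduled console app would simply drop a file into the watched folder:

```csharp
using System.IO;

public class RequestWatcher
{
    private FileSystemWatcher watcher;

    // Called from the service's OnStart. The scheduled console app
    // "requests" work by dropping a file into the watched folder.
    public void StartWatching()
    {
        watcher = new FileSystemWatcher(@"C:\DropFolder", "*.request");
        watcher.Created += new FileSystemEventHandler(OnRequestCreated);
        watcher.EnableRaisingEvents = true;
    }

    private void OnRequestCreated(object sender, FileSystemEventArgs e)
    {
        ProcessRequest(e.FullPath); // hypothetical worker method
    }

    private void ProcessRequest(string path)
    {
        // Do the actual work here.
    }
}
```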

For me, I also like writing Windows Services because I have a system for creating installation packages very quickly. What I need to do is spend some time creating an installer task that sets up a Windows Task Scheduler job. That way I can do the same for a scheduled console app.
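
For what it’s worth, the built-in schtasks.exe can register such a job from an installer script. A rough sketch (the task name, path, and schedule are placeholders, and the exact switch syntax varies a bit between Windows versions):

```
schtasks /create /tn "NightlyJob" /tr "C:\Apps\NightlyJob.exe" /sc daily /st 02:00:00
```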


Welcome to the new pissing contest.

Oh my, I couldn’t possibly link to that blog. It’s only worth half what my blog is worth.

According to this site, my blog is worth…

My blog is worth $92,020.02. How much is your blog worth?

Ok, I’m ready to sell out! ;)

UPDATE: Others around the block have been posting their blog’s worth.

Of course, as Scott Reynolds points out,

A thing is only worth what someone will pay you for it.

Which is actually encouraging, because I can probably find some fool willing to part with twice the figure listed above for my blog. But I’d better sell fast before the bubble bursts and blogs go through a huge market slump. Wouldn’t want my pedigreed blog having to resort to McDonald’s pitches to bring up its own worth.


I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone. – Jack Valenti, former head of the MPAA

I know that Dave Winer has dismissed Google Print as a bad idea, but Dave is often hit or miss with his opinions. However, I was surprised to see Dare’s criticism of the effort.

Yes, it is true that Google’s “Do No Evil” motto is pure marketing schtick, and now that they are a large corporation, they can be just as evil as any other corporation. But that still doesn’t take away from the benefits of the Google Print project. Not all profit-driven operations are evil. Those at Microsoft should know that.

Is It Going To Harm Publishers?

Dare uses himself as an example: he almost never buys technical books, choosing instead to search the web for the references he needs. However, if Google is to be believed, there is a big difference between web search and the search-within-a-book feature.

The difference is that with web search, you get the full content of what you are searching for. With Google Print, you get a snippet of a page in the book. Perhaps it contains all that you need; perhaps not. If it doesn’t, you’ll spend a lot of time searching, trying to hit that exact page. Can you imagine trying to read through a volume of The Art of Computer Programming like that? You might as well just physically go to the bookstore.

If you only needed one little piece of information from the book, you probably wouldn’t have bought it anyway, right? For example, Dare already admits he never buys technical books. So how will Google Print take Dare’s money from publishers? They aren’t getting his dollars as it is.

Personally, I find reading a book to be a great way to get a focused education on a technical topic. However, I would want to be able to search within the book to see that it does cover the topic in the depth I expect, and I hate running to Borders Book Store to do so. It’s a great relief when a book I am considering is part of Amazon.com’s Search Within a Book program.

Technical References Are a Small Part of the Total Market

Another key point: technical reference books are a very tiny part of the entire book market. I certainly don’t want to read The Invisible Man via search. The general public is not going to search its way through the latest Stephen King novel. I don’t see how searching within a book is going to hurt the huge majority of publishers. As many point out, it will be an enabler of the long tail, perhaps selling books long forgotten by their publishers.

Legality

As for the legality of the program, you should read Lawrence Lessig’s take on it. In his opinion, it most definitely constitutes fair use. If that is the case, whether or not it hurts the publishers becomes a moot point. Much like the pain that the VCR caused the movie industry was a moot point. Oh wait, the movie industry made millions off of the VCR…


Ok, I may have misfired with the last video, but this one is truly hilarious. It passed the “Wife Test” (the last one didn’t).

I hear this is an old one, but it is a clip from the humorous improv show “Whose Line Is It Anyway?” with Drew Carey, Wayne Brady, et al.

In this one, Richard Simmons is a guest.

Watch it! (this time, I did not forget to link to it). It takes a bit of time to download and get started, so be patient.



I’ve talked before about the various physical pains that software developers face on the job. For me, my pain seems to like to migrate around my body. If I have pain, it almost always comes one ailment at a time.

For example, recently, my hands started hurting again, but my back felt much better. More recently, my hands and back felt good, but my eyes started bugging out due to eye strain. Now I am back to my back hurting, and everything else is feeling good. I hope to get back to where everything feels good, but I think that situation only occurs in the womb.


UPDATE: For the most part, I think young Phil Haack is full of shit in these first two paragraphs. I definitely now think unit tests should NOT touch the database. Instead, I do separate those into a separate integration test suite, as I had suggested in the last paragraph. So maybe Young Phil wasn’t so full of shit after all.

I know there are unit testing purists who say unit tests by definition should never touch the database. Instead you should use mock objects or some other contraption. And in part, I agree with them. Given unlimited time, I will gladly take that approach.

But I work on real projects with real clients and tight deadlines. So I will secretly admit that this is one area I am willing to sacrifice a bit of purity. Besides, at some point, you just have to test the full interaction of your objects with the database. You want to make sure your stored procedures are correct etc…

However, I do follow a few rules to make sure that this is as pure as possible.

First, I always try and test against a local database. Ideally, I will script the schema and lookup data so that my unit tests will create the database. MbUnit has an attribute that allows you to perform a setup operation on assembly load and teardown when the tested assembly unloads. That would be a good place to set up the database so you don’t have to do it for every test. However, often, I set up the database by hand once and let my tests just assume a clean database is already there.

Except for lookup data, my tests create all the data they use through whichever API and objects I am testing. Each test runs within a COM+ 1.5 transaction using a RollBack attribute, so no changes are persisted after each test. This ensures that every test runs against the exact same database.
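
For illustration, a test following those rules might look something like this (the Entry/Entries API is invented for the example):

```csharp
[Test]
[RollBack]
public void CreateEntryAssignsId()
{
    // The test creates its own data through the API under test.
    Entry entry = new Entry("Test Title", "Test Body");
    int id = Entries.Create(entry);

    Assert.IsTrue(id > 0, "Expected the new entry to get a database id.");
    // The COM+ transaction rolls back after the test,
    // so the database is left exactly as it was found.
}
```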

This is the reason I can be a bit lazy and set up the database by hand: none of the tests will change the data in the database. Still, I would prefer a no-touch approach where the unit tests set up the database. For that, there is TestFu, which is now part of TestDriven.Net.

From my experience, I think this approach is a good middle ground for many projects. A more purist approach might separate the tests that touch the database into a separate assembly, but still use NUnit or MbUnit to run them. Perhaps that assembly would be called IntegrationTests.dll instead of UnitTests.dll. It’s your choice.


You know you’re a big geek when a sequence of numbers with an interesting property just pops into your head. No, I’m not talking about myself (this time). Jayson Knight is the big geek, as he noticed a pattern in a sequence of numbers that popped into his head…

This just popped into my head the other day for no other reason than to bug me: Square all odd numbers starting with 1…subtract 1 from the result…then divide by 8. Now look for the pattern in the results.

He even provides a code sample to do the math for you, but you can easily do it by hand on paper. The pattern he noticed can be phrased another way: the square of any odd number, when divided by eight, leaves a remainder of 1.
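
A quick sketch of those steps (this is my own loop, not Jayson’s code sample):

```csharp
using System;

class OddSquarePattern
{
    static void Main()
    {
        // Square each odd number, subtract 1, then divide by 8.
        for (int x = 1; x <= 19; x += 2)
        {
            Console.WriteLine((x * x - 1) / 8); // 0, 1, 3, 6, 10, 15, ...
        }
    }
}
```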

This is actually a pattern noticed by John Horton Conway and Richard Guy in 1996. They stated that in general, the odd squares are congruent to 1 (mod 8).

I couldn’t find their proof, but it is easily proved by induction. I’ll walk you through the steps.

The Proposition

We want to prove that


(x^2^ - 1) mod 8 = 0 for all odd integers x >= 1.

Note that this is the same as proving that x^2^ mod 8 = 1. In other words, if we prove this, we prove the interesting property Jayson noticed.

Verify the Base Case

Here’s where our heavy duty third grade math skills come into play. We try out the case where x = 1.

(1^2^ - 1) mod 8 = 0 mod 8

So yes, 0 mod 8 is zero, so we’re good with the base case.

Formulate the Inductive Hypothesis

Ok, having demonstrated the fact for x = 1, let’s hypothesize that it is indeed true that

(x^2^ - 1) mod 8 = 0 for some odd integer x >= 1

Now prove it

Here we prove the next case. Assuming our hypothesis above is true, we want to show that it must be true for the next odd number. That is, we want to show that

((x + 2)^2^ - 1) mod 8 = 0

Well that can be multiplied out to…

((x^2^ + 4x + 4) - 1) mod 8 = 0

Note I don’t subtract the one from the four.

So just re-arranging the numbers a bit we get…

((x^2^ - 1) + 4x + 4) mod 8 = 0

Now I factor the 4x + 4 term (you do remember factoring, right?) and get…

((x^2^ - 1) + 4(x + 1)) mod 8 = 0

Ok, you should notice here that the (x^2^ - 1) term is certainly divisible by 8 due to our hypothesis. So we just need to prove that 4(x + 1) is also divisible by 8, because if two numbers are each divisible by another number, their sum is also divisible by that number.

Well it should be pretty clear that 4(x+1) is divisible by eight. How? Well since x is an odd number, x + 1 must be an EVEN number. We can rewrite (x + 1) as 2n where n is an integer (the very definition of an even number). So our equation becomes…

((x^2^ - 1) + 4(2n)) mod 8 = 0

Which is naturally…

((x^2^ - 1) + 8n) mod 8 = 0

And we’re pretty much done. We know that (x^2^ - 1) is divisible by eight due to our inductive hypothesis. We also know 8n is divisible by eight. Therefore the sum of the two numbers must be divisible by 8. And the proof is in the pudding.

Ok, some of you are probably thinking I am hand waving that last conclusion. So I will quickly prove the last step. Since we know that the (x^2^ - 1) term is divisible by eight, we can substitute 8m for it, where m is an integer (the very definition of a number divisible by eight).

That leaves us with…

(8m + 8n) mod 8 = 0

which factors to…

(8(m + n)) mod 8 = 0

Conclude the proof for formality’s sake

And thus, the proposition is true for all odd integers.

No need to thank me for a long and scintillating math post, but it’s been a loooong time since I’ve stretched my math muscles. This was a fun exercise in inductive proofs.

So how does an inductive proof prove anything? At first glance, for those unfamiliar with inductive proofs, it hardly seems like we proved anything. Our proof rests on an assumption. We stated that if our assumption is true for one odd number, then the next odd number must exhibit the same behavior. We went ahead and proved that to be true, but on its own that still leaves the possibility that it isn’t true for any odd number at all.

That’s where our base case comes in. We showed that for x = 1, it is indeed true. So since it is true for x = 1, we’ve proved it is true for x = 3. Since it is true for x = 3, we know it is true for x = 5. Ad infinitum.

And that concludes today’s math lesson.

UPDATE: Fixed a couple typos. Thanks Jeremy! Also, optionsScalper in my comments lists a lot of great links about number theory and congruences. I applied his correct suggestion to clarify the mod operations by putting parentheses around the left-hand side.


In my last post, I didn’t explain the pattern to Jayson’s satisfaction and I had a typo in my proof that I have since corrected.

My proof demonstrated one pattern, namely that the square of an odd number minus one is divisible by eight. However, Jayson noticed that if you start with the first few odd numbers and go through those mathematical steps, the result of the operation leaves you with another series with interesting properties.

It turns out that series is the triangular series. I believe what Jayson wanted to know was why his function yielded this sequence. I shall dig into this here (notice I used the word shall? That’s a mathematician thang. You wouldn’t understand ;)) Here are the first few numbers in the sequence…

0, 1, 3, 6, 10,…

Another way to look at the series is…

f(0) = 0
f(1) = 0 + 1 = 1
f(2) = 0 + 1 + 2 = 3
f(3) = 0 + 1 + 2 + 3 = 6
f(4) = 0 + 1 + 2 + 3 + 4 = 10
…
f(n) = 0 + 1 + 2 + … + (n - 1) + n = ???

The n^th^ number in the series is the sum of all the numbers up to and including n. There’s a simple formula to get the n^th^ number in this series. Legend has it that Karl Friedrich Gauss discovered this as a very young student. He was told to sum the numbers from 1 to 100 as a way to keep him busy for a long time. In a very short while, he came up with the answer. He observed that you could simply pair the numbers up like so…

1 + 100 = 101
2 + 99 = 101
3 + 98 = 101
…
50 + 51 = 101

That makes 50 pairs of 101, so the sum is 50 * 101 = 5050.

It turns out that the sum of all numbers n and below can be described by the simple formula…

n(n+1)/2

So how does this equation relate to the one Jayson showed us? Well, to refresh your memory, his equation could be described as follows…

f(x~i~) = (x~i~^2^ - 1)/8 = T~i~

In English, that means that applying his function to the i^th^ odd number yields the i^th^ triangular number.

So let’s start doing some simple algebraic substitutions. First, we need to define what we mean by the “i^th^” odd number. What is the odd number at i=0? Well that should clearly be the first odd number, one. So we state…

x~i~ = 2i + 1

That’ll make sure we are only dealing with odd numbers. Now let’s substitute for x~i~

f(x~i~) = f(2i + 1)

Ok, this next step is a little tricky. By definition, f(x) = (x^2^ - 1)/8. This is Jayson’s formula. So let’s expand out f(2i + 1) into this formula.

f(x~i~) = ((2i + 1)^2^ - 1)/8

By now, I am really wishing HTML supported math symbols easily. Now doing some multiplying.

f(x~i~) = (4i^2^ + 4i + 1 - 1)/8

Doing a bit of arithmetic leads us to

f(x~i~) = (4i^2^ + 4i)/8

Some factorization…

f(x~i~) = 4i(i + 1)/8

Doing some division (man this math stuff is hard)

f(x~i~) = i(i + 1)/2

Does that look familiar? I hope you are having an aha moment (if you didn’t have it a long time ago). That is the formula for the i^th^ triangular number! Thus with a bit of algebra, I have demonstrated that…

f(x~i~) = (x~i~^2^ - 1)/8 = i(i + 1)/2 = T~i~

So that is why his function reveals the triangular number series.
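
If you’d rather let the computer double-check the algebra, here’s a small sketch that compares Jayson’s function against the triangular number formula:

```csharp
using System;

class TriangularCheck
{
    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            int x = 2 * i + 1;                // the i-th odd number
            int jayson = (x * x - 1) / 8;     // Jayson's function
            int triangular = i * (i + 1) / 2; // the i-th triangular number
            Console.WriteLine("{0} == {1}", jayson, triangular);
        }
    }
}
```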

One interesting thing about triangular numbers is their connection to Pascal’s triangle, as evident in this image I found at this site.

Pascal's Triangle

Trippy eh? You gotta love the various diversions mathematicians come up with to keep themselves busy.


I noticed a recent check-in has added a TimeOut property to the RollBack attribute in MbUnit. Woohoo!

A while ago I presented the source code for a RollBack attribute for NUnit based on Roy Osherove’s work in the area. Well I found a little problem with using the RollBack attribute that affects the one I presented along with the one that comes packaged with MbUnit.

I uncovered the problem while running a particularly long-running unit test. Every time I ran the test, it failed at just about exactly 61 seconds in (I know, a unit test taking that long is kind of useless for TDD, but I’ll get that time down to something manageable. I promise!).

I reran the test multiple times, and the line of code it failed on would be different, but MbUnit showed it failing at 61 seconds every time. To prove it, I removed the RollBack attribute and ran the test, and it succeeded after around 90 seconds (yeah, I have some heavy perf work to do, but it is a BIG test).

The error message I got each time was Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.

Not a helpful message because I wasn’t attempting to complete the transaction. But the timing of the matter made it obvious to me I was running into a timeout issue.

The RollBack attribute works by enlisting a COM+ 1.5 transaction, which lets you use Enterprise Services without inheriting from ServicedComponent via a feature called Services Without Components, or SWC for short (gotta love them TLAs). To work around the issue in MbUnit, I simply removed the RollBack attribute and added the code to start a COM+ transaction directly to the method. The one change I made was to set the TransactionTimeout property, which takes an integer timeout value in seconds.

```csharp
[Test]
public void MyTest()
{
    ServiceConfig config = new ServiceConfig();
    config.TransactionTimeout = 120;
    config.Transaction = TransactionOption.RequiresNew;
    ServiceDomain.Enter(config);
    try
    {
        //Run my test code…
    }
    finally
    {
        if(ContextUtil.IsInTransaction)
        {
            //Abort the transaction.
            ContextUtil.SetAbort();
        }
        ServiceDomain.Leave();
    }
}
```

At the same time, I revisited the RollBack attribute I put together for NUnit and added a TransactionTimeout property to the attribute. That way you can mark up a test like so…

```csharp
[Test]
[RollBack(120)]
public void MyTest()
{
    //Run my test code…
}
```

You can download the new version of the attribute for NUnit here.

As for MbUnit, I’ll mention this to the maintainers and we’ll hopefully see a fix soon.


The problem with extremists is that they inevitably color the mainstream’s perception of a thing, whether it be a race, a culture, or a software development practice.

In truth though, it is also important for the mainstream to use better judgement and stop falling for that trap. For example, I’ve read several articles and blog posts that attack unit testing (and by extension Test Driven Development) as a practice. What is interesting is that many of the points used to pillory unit testing are examples of taking the practice of unit testing to the extreme, and not necessarily a reasonable and mainstream usage of the practice.

So let’s make this very clear using a simple logical statement.

The fact that Unit Testing is a fundamental part of Extreme Programming does not imply that Extreme Programming is a fundamental part of Unit Testing.

For example, as I have said many times, code coverage is not the end goal of unit testing; claiming it is would be extremist. Your time is better spent focusing on automating tests for the most troublesome or important code.

Automated unit tests are NOT a replacement for system testing, beta testing, integration testing, nor any other kind of testing. Unit tests are only one small part of the testing equation, but they are an important part, just as all the other types of testing are important.

So in most cases, it pays to stop looking to the extremists to make a case against a practice (such as unit testing) and start talking to those using it in the real world and getting real results.


Open Bar

Last night the missus and I attended the launch party for American Idol Underground, the site I’ve been working on for a client.

The second best part of the party was arriving to find two huge lines right outside the Cabana Club. We expected a relatively small party, but it ended up swelling into a major event. Two lines extending in opposite directions, full of meticulously coiffed “industry” types.

Figuring this was off to a bad start, we walked to the spot between the two lines to figure out which one we were supposed to wait in. Waiting in line. That’s what us little people do. We wait in lines.

So when we made it to the center, we ran into the client’s administrative assistant. I asked her which line we were supposed to wait in, and I loved her reply: “Oooh nooo. There’s no line for you.” She motioned to the bouncers to let us straight in, and we were given a staff badge that gave us VIP access.

I have to admit we felt just a bit like rockstars, except without all the lines of coke, bad hair, and breakups and reunion tours. So this is the special treatment that celebrities get at clubs. Cutting ahead of the masses of peons waiting in line. The little people.

Inside we had access to the supposed VIP rooms and saw Spinderella from Salt-N-Pepa, as well as Clifton Powell, the guy who played the bus driver and band manager in the movie Ray.

The best part of the party was the open bar. The music was fine too, but you have to love an open bar staffed by talented professional bartenders. Each drink was a worthy concoction, pleasing to the tastebuds, and pleasant on the eyes.

My former SkillJam coworkers in attendance certainly were livened by the open bar. You know how mixing water and potassium can cause an explosion? It’s a bit like that when you mix alcohol and my former coworkers…a party breaks out. Always a good time with them folks.

[Listening to: Solitude (Duke Ellington) - Ella Fitzgerald - Love Songs: Best Of The Verve Song Books (2:09)]


For the longest time now, I’ve been a fan of MbUnit, but I never really used it on a real project. In part, I stuck with NUnit despite MbUnit’s superiority because NUnit was the de facto standard.

Well, you know what? Pretty much every company or client I’ve moved to has had no unit testing in place before I arrived. I’ve always been the one to introduce unit testing. So on my latest project, when I finally met another developer who writes unit tests, and he was using MbUnit, I decided to make the switch.

And that, my friends, was a great decision… Why?

TypeFixture

Say you write an interface IFoo (though it could just as easily have been an abstract class or any base class, for that matter). You then proceed to write a couple of implementations of IFoo. Wouldn’t it be nice to write some unit tests specific to the interface? Here’s how you do it in MbUnit.

```csharp
interface IFoo {}

class Bar : IFoo {}

class Baz : IFoo {}

[TypeFixture(typeof(IFoo), "Tests the IFoo interface.")]
public class IFooTests
{
    [Provider(typeof(Bar))]
    public Bar ProvideBar()
    {
        return new Bar();
    }

    [Provider(typeof(Baz))]
    public Baz ProvideBaz()
    {
        return new Baz();
    }

    [Test]
    public void TestIFoo(IFoo instance)
    {
        //Test that the IFoo instance
        //behaves properly.
    }
}
```

What you are seeing is a TypeFixture, which is a type of TestFixture useful for testing an interface. There is only one test method, TestIFoo. However, you should notice that it takes a parameter of type IFoo.

This deviates from the typical NUnit test which does not allow any parameters. So just who is passing the test that parameter? The other methods in the fixture that have been marked with the Provider attribute. The test method is called once for every provider. The provider methods simply instantiate the concrete instance of the interface you are testing. So the next time you implement the interface, you simply add another provider method. Pretty sweet, eh?

Row Based Testing

I already wrote a post on the RowTest attribute for MbUnit. It supports a very common testing paradigm: using the same method to test a wide variety of inputs.
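
For a taste of the syntax, here’s a minimal sketch (the MyMath.Add method under test is hypothetical):

```csharp
[TestFixture]
public class MathTests
{
    [RowTest]
    [Row(1, 2, 3)]
    [Row(0, 0, 0)]
    [Row(-1, 1, 0)]
    public void AddReturnsSum(int x, int y, int expected)
    {
        // The test runs once per Row, with the row's values
        // bound to the method parameters.
        Assert.AreEqual(expected, MyMath.Add(x, y));
    }
}
```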

Test Runner

Matt Berther shows how easy it is to write an executable that will run all your unit tests.

RollBack Attribute

Attach this attribute to a test method and MbUnit makes sure that any database transactions are rolled back at the end of the test. There is an implementation of a RollBack attribute for NUnit out there, but the extensibility model is tricky; I couldn’t get that RollBack attribute to work with an ExpectedException attribute.

And More…

MbUnit also has test attributes for repeating a test and for repeating a test on multiple threads. It also has a test fixture designed for testing custom collections that implement IEnumerable and IEnumerator.

For more information, check out this Code Project article and the MbUnit wiki.

In the near future, I’ll be switching the unit tests for Subtext to use MbUnit. And assuming Dare and Torsten are ok with it, the unit tests for RSS Bandit.


Jeff links to a post by Wil Shipley criticizing unit testing. You knew I had to chime in on this… ;)

I won’t rehash what I’ve already written on the subject, but will merely try to add a couple key points.

There are a couple of misconceptions I want to clear up.

First, the proponents of unit testing are not promoting it as the be-all and end-all of software development. However, I would say that unit tests are very important when done right, much in the same way version control is important when done right. Unit tests should be applied using a cost-benefit analysis, just as you’d apply anything else. For example, problematic or tricky code should have more unit tests. Important code (such as the code in a banking system that performs a calculation) should have more unit tests. Simple getters and setters, on the other hand, can do without.

But I’ve NEVER, EVER seen a structured test program that (a) didn’t take like 100 man-hours of setup time, (b) didn’t suck down a ton of engineering resources, and (c) actually found any particularly relevant bugs.

Then perhaps you haven’t seen unit testing done right. My setup time for unit testing is about as long as it takes to set up a class library and run NUnit or MbUnit. Marginal.

Most unit tests are written as code is developed, not tacked on after the fact. The design aspect of writing unit tests cannot be overstated. Especially in teams where one person writes a piece of code that another person is going to call. It’s very easy to create a really awful API to a class library that then costs other developers who have to use the API extra time to fiddle around with it and understand it. With unit tests, at least the author has had to “dogfood” his own medicine and the API is more likely to be usable. If it’s still confusing, well the unit test can serve as a code sample. I tend to learn better from code samples than Intellisense.

1) When you modify your program, test it yourself. Your goal should be to break it, NOT to verify your code.

Yes, unit testing does not take away the need to test your own code. However, pages and pages of studies show how easy it is for even very talented developers to develop blind spots when testing their own code. That’s why you still have a QA team dedicated to the task without the baggage that the developer carries.

However, testing the feature you changed isn’t necessarily good enough. For example, suppose you make a slight schema change. Are you sure you haven’t broken a feature developed by another developer? Are you prepared to test the entire system for every change you make? With unit tests you have some degree of regression testing at your fingertips.

While it is true that unit testing can take some additional upfront time, in my experience, especially if you work on a team, it always produces a cost savings overall. The time and cost savings of unit tests cannot be overstated. Yes. Savings!

One TDD practice I am a firm proponent of is to make sure that when a bug is discovered in the code, before you fix the bug, you write a unit test that exposes the bug (if possible and cost effective). By “exposing” the bug I mean you write a test that would pass if the code was working properly, but fails because of the bug. Afterwards you fix the bug, make sure the test passes and then check in the code and the unit test. Now you have a fair degree of certainty that particular bug won’t crop up again. By the very existence of the bug in the first place, you know that area of code is troublesome and deserves to have unit tests testing it.
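
As a sketch of that flow (the slug-generating API here is invented for illustration), the test you’d check in alongside the fix might look like:

```csharp
// Hypothetical bug report: titles with spaces produced broken URL slugs.
// This test fails while the bug exists and passes once it is fixed,
// guarding against the bug creeping back in.
[Test]
public void GetSlugReplacesSpacesWithDashes()
{
    string slug = UrlHelper.GetSlug("Hello World");
    Assert.AreEqual("hello-world", slug);
}
```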

These sorts of unit tests address the complaint that unit tests are too soft on the code, since they are effectively generated as a result of human interaction with the system.

On a recent project, a developer checked in a schema change and tested the system and it seemed to work just fine. Meanwhile, I had gotten latest and noticed several of my unit tests were suddenly failing. After a few minutes of digging, I called the other developer and confirmed the schema change. It required a small change in my code and everything was running smoothly. Without my suite of unit tests, I would have no easy way to judge the true impact of that schema change. In fact, it may have taken hours for me to even notice the problem, as things “seemed” to be working fine from a UI perspective. It was the underlying calculations that were broken.

Real testers hate your code. A unit test simply verifies that something works. This makes it far, far too easy on the code. Real testers hate your code and will do whatever it takes to break it– feed it garbage, send absurdly large inputs, enter unicode values, double-click every button in your app, etcetera.

Yes they do! And when they throw in garbage that breaks the code, I make sure to codify that as a unit test so it doesn’t break again. In fact, over time as I gain experience with unit testing, I realize I can just as easily throw garbage at my code as a human tester can. So I write my unit tests to be harsh. To be mean angry bad mofos. I make sure the tests probe the limits of my code. It takes me a bit of upfront time, but you know what? My automated unit test can throw garbage at my code faster than a human can. Plus, don’t forget, it is a human who is writing the test.

Testing is hugely important. Much too important to trust to machines. Test your program with actual users who have actual data, and you’ll get actual results.

Certainly true, but regression testing is much too boring to be left to humans. Humans make mistakes, especially when performing boring, repetitive tasks. You would never tell a human to manually sum up a long row of numbers; that’s what computers are for. You let the machine do the tasks it’s well suited for, and the humans do what they are well suited for. It all works hand in hand. Unit testing is no substitute for beta testing. But beta testing is certainly no substitute for unit testing.


It rained the night before last (with thunder!), it rained last night, and it is still raining. I think the forces of nature conspire against me. I have no choice but to crawl back into bed with a good book and listen to the rain. If a nap should overtake me, so be it.


Writing proper custom exceptions can amount to a lot of busy work. Oh sure, it’s easy to simply inherit from System.Exception and stop there. But try running that baby through FxCop or passing that exception across AppDomains and you’re in for a world of hurt (hyperbole alert!).

What makes writing a custom exception a pain? First, there are all those constructors you have to implement. You also need to remember to mark the class with the Serializable attribute. And if your exception has at least one custom property, you’ll want to implement ISerializable, a special serialization constructor, and more constructors that accept the new property.
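
To give a feel for that boilerplate, here’s a minimal sketch of the pattern (the exception name is made up, and this follows the general recipe rather than reproducing Richter’s steps verbatim):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class OrderProcessingException : Exception
{
    public OrderProcessingException() { }

    public OrderProcessingException(string message) : base(message) { }

    public OrderProcessingException(string message, Exception innerException)
        : base(message, innerException) { }

    // The serialization constructor, needed so the exception can
    // cross AppDomain and remoting boundaries intact.
    protected OrderProcessingException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}
```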

On page 411 of Applied .NET Framework Programming, Jeffrey Richter outlines the steps to write a proper custom exception class.

If you have this book, you should definitely read and learn these steps. Or if you are a ReSharper user, you can be lazy and just use the Live Template (akin to a Whidbey Code Snippet) I’ve created and posted here for your exceptional enjoyment.

Unfortunately, I do not know of any way to export and import live templates within ReSharper, so you’ll have to follow the steps I outlined in a previous post.

I have included two templates. The first is for a full-blown sealed custom exception with a single custom property. It’s easy enough to add more properties if you need them. The second is for a simple custom exception with no custom properties. The ReadMe.txt file included outlines a couple of settings you need to make for a couple template variables.

I ended up using the abbreviation excc to expand the full exception class and excs for the simple exception class. This ought to save you a lot of typing. Below is a screenshot of the full template…

Exception Live Template