# Patterns in Number Sequences

You know you’re a big geek when a sequence of numbers with an interesting property just pops into your head. No, I’m not talking about myself (this time). Jayson Knight is the big geek, as he noticed a pattern in a sequence of numbers that popped into his head…

> This just popped into my head the other day for no other reason than to bug me: Square all odd numbers starting with 1…subtract 1 from the result…then divide by 8. Now look for the pattern in the results.

He even provides a code sample to do the math for you, but you can easily do it by hand on paper. The pattern he noticed can be phrased another way: the square of any odd number, when divided by eight, leaves a remainder of 1.

This is actually a pattern noticed by John Horton Conway and Richard Guy in 1996. They stated that in general, the odd squares are congruent to 1 (mod 8).

I couldn’t find their proof, but it is easily proved by induction. I’ll walk you through the steps.

The Proposition\ We want to prove that

`(x^2 - 1) mod 8 = 0 for all odd integers x >= 1.`

Note that this is the same as proving that `x^2 mod 8 = 1`. In other words, if we prove this, we prove the interesting property Jayson noticed.
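Before diving into the proof, it’s easy to spot-check the claim numerically. Here’s a quick sketch in Python (my illustration, not part of the original post):

```python
# Spot-check: (x^2 - 1) should be divisible by 8 for every odd x.
for x in range(1, 20, 2):
    assert (x * x - 1) % 8 == 0
    print(x, "->", (x * x - 1) // 8)
```

The quotients printed (0, 1, 3, 6, 10, …) are exactly the results Jayson asked you to look at.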

Verify the Base Case\ Here’s where our heavy-duty third-grade math skills come into play. We try out the case where `x = 1`.

`(1^2 - 1) mod 8 = 0 mod 8`

So yes, 0 mod 8 is zero, so we’re good with the base case.

Formulate the Inductive Hypothesis\ Ok, having demonstrated the fact for x = 1, let’s hypothesize that it is indeed true that

`(x^2 - 1) mod 8 = 0 for some odd integer x >= 1`

Now prove it\ Here we prove the next case. So assuming our above hypothesis is true, we want to show that it must be true for the next odd number. We want to show that

`((x+2)^2 - 1) mod 8 = 0`

Well that can be multiplied out to…

`((x^2 + 4x + 4) - 1) mod 8 = 0` Note I don’t subtract the one from the four.

So just re-arranging the numbers a bit we get…

`((x^2 - 1) + 4x + 4) mod 8 = 0`

Now I factor the right side and get (you do remember factoring, right?)

`((x^2 - 1) + 4(x + 1)) mod 8 = 0`

Ok, you should notice here that the first term is certainly divisible by 8 due to our hypothesis. So we just need to prove that 4(x+1) is also divisible by 8. If two numbers are each divisible by another number, the sum of the two numbers is also divisible by that number.

Well it should be pretty clear that 4(x+1) is divisible by eight. How? Well since x is an odd number, x + 1 must be an EVEN number. We can rewrite (x + 1) as 2n where n is an integer (the very definition of an even number). So our equation becomes…

`((x^2 - 1) + 4(2n)) mod 8 = 0`

Which is naturally…

`((x^2 - 1) + 8n) mod 8 = 0`

And we’re pretty much done. We know that (x^2^ - 1) is divisible by eight due to our inductive hypothesis. We also know 8n is divisible by eight. Therefore the sum of the two numbers must be divisible by 8. And the proof is in the pudding.

Ok, some of you are probably thinking I am hand waving that last conclusion. So I will quickly prove the last step. Since we know that the (x^2^ - 1) term is divisible by eight, we can substitute 8m for it, where m is an integer (the very definition of a number divisible by eight).

That leaves us with…

`(8m + 8n) mod 8 = 0`

which factors to…

`(8(m + n)) mod 8 = 0`

Conclude the proof for formality’s sake\ And thus, the proposition is true for all odd integers.

~~Sorry for such a long boring~~ No need to thank me for a long and scintillating math post, but it’s been a loooong time since I’ve stretched my math muscles. This was a fun exercise in inductive proofs.

So how does an inductive proof prove anything? At first glance, for those unfamiliar with inductive proofs, it hardly seems like we proved anything. Our proof rests on an assumption. We stated that if our assumption is true for one odd number, then the next odd number must exhibit the same behavior. We went ahead and proved that to be true, but it still leaves the possibility that this isn’t true for any odd number at all.

That’s where our base case comes in. We showed that for x = 1, it is indeed true. So since it is true for x = 1, we’ve proved it is true for x = 3. Since it is true for x = 3, we know it is true for x = 5. Ad infinitum.

And that concludes today’s math lesson.

UPDATE: Fixed a couple typos. Thanks Jeremy! Also, optionsScalper in my comments lists a lot of great links about number theory and congruences. I applied his correct suggestion to clarify the mod operations by putting parentheses around the left hand side.

# Digging Deeper Into the Triangular Series

In my last post, I didn’t explain the pattern to Jayson’s satisfaction and I had a typo in my proof that I have since corrected.

My proof demonstrated one pattern, namely that the square of an odd number minus one is divisible by eight. However, Jayson noticed that if you start with the first few odd numbers and go through those mathematical steps, the result of the operation leaves you with another series with interesting properties.

It turns out that series is the triangular series. I believe what Jayson wanted to know was why his function yielded this sequence. I shall dig into this here (notice I used the word shall? That’s a mathematician thang. You wouldn’t understand ;)) Here are the first few numbers in the sequence…

0, 1, 3, 6, 10,…

Another way to look at the series is…

```
f(0) = 0
f(1) = 0 + 1 = 1
f(2) = 0 + 1 + 2 = 3
f(3) = 0 + 1 + 2 + 3 = 6
f(4) = 0 + 1 + 2 + 3 + 4 = 10
. . .
f(n) = 0 + 1 + 2 + ... + (n - 1) + n = ???
```

The n^th^ number in the series is the sum of all the numbers up to and including n. There’s a simple formula to get the n^th^ number in this series. Legend has it that Carl Friedrich Gauss discovered this while a very young student. He was told to sum up the numbers from 1 to 100 as a means to keep him busy for a long time. In a very short while, he came up with the answer. He observed that you could simply pair the numbers up like so…

```
 1 + 100 = 101
 2 +  99 = 101
 3 +  98 = 101
. . .
50 +  51 = 101

50 pairs * 101 = 5050
```

It turns out that the sum of all numbers n and below can be described by the simple formula…

`n(n+1)/2`
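Gauss’s trick is easy to verify in code; here’s a minimal sketch in Python (mine, not from the original post):

```python
def triangular(n):
    # Gauss's closed form: pair numbers from opposite ends of 1..n.
    return n * (n + 1) // 2

# The closed form agrees with brute-force summation.
for n in (4, 50, 100):
    assert triangular(n) == sum(range(1, n + 1))

print(triangular(100))  # Gauss's schoolroom answer: 5050
```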

So how does this equation relate to the one Jayson showed us? Well to refresh your memory, that equation could be described as such

`f(x_i) = (x_i^2 - 1)/8 = T_i`

In English, that means that applying his function to the i^th^ odd number yields the i^th^ triangular number.

So let’s start doing some simple algebraic substitutions. First, we need to define what we mean by the “i^th^” odd number. What is the odd number at i=0? Well that should clearly be the first odd number, one. So we state…

`x_i = 2i + 1`

That’ll make sure we are only dealing with odd numbers. Now let’s substitute for x~i~

`f(x_i) = f(2i + 1)`

Ok, this next step is a little tricky. By definition, f(x) = (x^2^ - 1)/8. This is Jayson’s formula. So let’s expand f(2i + 1) using this formula.

`f(x_i) = ((2i + 1)^2 - 1)/8`

By now, I am really wishing HTML supported math symbols easily. Now doing some multiplying.

`f(x_i) = (4i^2 + 4i + 1 - 1)/8`

Doing a bit of arithmetic leads us to

`f(x_i) = (4i^2 + 4i)/8`

Some factorization…

`f(x_i) = 4i(i + 1)/8`

Doing some division (man this math stuff is hard)

`f(x_i) = i(i + 1)/2`

Does that look familiar? I hope you are having an aha moment (if you didn’t have it a long time ago). That is the formula for the i^th^ triangular number! Thus with a bit of algebra, I have demonstrated that…

`f(x_i) = (x_i^2 - 1)/8 = i(i + 1)/2 = T_i`

So that is why his function reveals the triangular number series.
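The whole chain of substitutions can also be double-checked numerically; a quick Python sketch (my own illustration, not part of the original post):

```python
def f(x):
    # Jayson's operation: square, subtract one, divide by eight.
    return (x * x - 1) // 8

def triangular(i):
    return i * (i + 1) // 2

# The i-th odd number is 2i + 1; applying f to it should
# yield the i-th triangular number.
for i in range(1000):
    assert f(2 * i + 1) == triangular(i)
```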

One interesting thing about triangular numbers is their connection to Pascal’s triangle, where they appear along the third diagonal.

Trippy eh? You gotta love the various diversions mathematicians come up with to keep themselves busy.

# Transaction Timeout When Using the RollBack Attribute

I noticed a recent check-in has added a `TimeOut` property to the `RollBack` attribute in MbUnit. Woohoo!

A while ago I presented the source code for a `RollBack` attribute for NUnit based on Roy Osherove’s work in the area. Well I found a little problem with using the RollBack attribute that affects the one I presented along with the one that comes packaged with MbUnit.

I uncovered the problem while running a particularly long running unit test. Every time I ran the test, it failed at just about exactly 61 seconds into it (I know, a unit test taking that long is kind of useless for TDD, but I’ll get that time down to something manageable. I promise!).

I reran the test multiple times and the line of code it failed on would be different, but MbUnit was showing me that it was failing at 61 seconds every time. To prove it, I removed the RollBack attribute and ran the test and it succeeded after around 90 seconds (yeah, I have some heavy perf work to do, but it is a BIG test).

The error message I got each time was “Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.”

Not a helpful message because I wasn’t attempting to complete the transaction. But the timing of the matter made it obvious to me I was running into a timeout issue.

The RollBack attribute works by enlisting a COM+ 1.5 transaction, which allows you to use Enterprise Services without inheriting from `ServicedComponent` using a feature called Services Without Components or SWC for short (gotta love them TLAs). To work around the issue in MbUnit, I simply removed the RollBack attribute and added the code to start a COM+ transaction directly to the method. The one change I made was to set the `TransactionTimeout` property which takes an integer timeout value in seconds.

```csharp
[Test]
public void MyTest()
{
    ServiceConfig config = new ServiceConfig();
    config.TransactionTimeout = 120;
    config.Transaction = TransactionOption.RequiresNew;
    ServiceDomain.Enter(config);
    try
    {
        //Run my test code…
    }
    finally
    {
        if(ContextUtil.IsInTransaction)
        {
            //Abort the transaction.
            ContextUtil.SetAbort();
        }
        ServiceDomain.Leave();
    }
}
```

At the same time, I revisited the `RollBack` attribute I put together for NUnit and added a `TransactionTimeout` property to the attribute. That way you can mark up a test like so…

```csharp
[Test]
[RollBack(120)]
public void MyTest()
{
    //Run my test code…
}
```

As for MbUnit, I’ll mention this to the maintainers and we’ll hopefully see a fix soon.

# The Problem With Extremism

The problem with extremists is that they inevitably color the mainstream’s perception of a thing, whether it be a race, a culture, or a software development practice.

In truth though, it is also important for the mainstream to use better judgement and stop falling for that trap. For example, I’ve read several articles and blog posts that attack unit testing (and by extension Test Driven Development) as a practice. What is interesting is that many of the points used to pillory unit testing are examples of taking the practice of unit testing to the extreme, and not necessarily a reasonable and mainstream usage of the practice.

So let’s make this very clear using a simple logical statement.

The fact that Unit Testing is a fundamental part of Extreme Programming does not imply that Extreme Programming is a fundamental part of Unit Testing.

For example, as I’ve said many times, code coverage is not the end goal of unit testing. To claim that it is would be extremist. Your time is better spent focusing on automating tests for the most troublesome or important code.

Automated unit tests are NOT a replacement for system testing, beta testing, integration testing, or any other kind of testing. Unit tests are only one small part of the testing equation, but they are an important part, just as all the other types of testing are important.

So in most cases, it pays to stop looking to the extremists to make a case against a practice (such as unit testing) and start talking to those using it in the real world and getting real results.

# Two Words That Always Put a Smile On My Face...

Open Bar

Last night the missus and I attended the launch party for American Idol Underground, the site I’ve been working on for a client.

The second best part of the party was when we arrived to find two huge lines right outside of the Cabana Club. We expected a relatively small party, but it ended up swelling into a major event. Two lines extending in opposite directions, full of meticulously coiffed “industry” types.

Figuring this was off to a bad start, we walked to the center of the two lines to figure out which line we were supposed to wait in. Waiting in line. That’s what us little people do. We wait in lines.

So when we made it to the center, we ran into the administrative assistant for the client. I asked her which line we were supposed to wait in and I loved her reply. “Oooh nooo. There’s no line for you.” She motioned the bouncers to let us straight in and we were given a staff badge that gave us VIP access.

I have to admit we felt just a bit like rockstars, except without all the lines of coke, bad hair, and breakups and reunion tours. So this is the special treatment that celebrities get at clubs. Cutting ahead of the masses of peons waiting in line. The little people.

Inside we had access to the supposed VIP rooms, saw Spinderella from Salt ’n Pepa as well as that guy who played the bus driver and band manager in the movie Ray, Clifton Powell.

The best part of the party was the open bar. The music was fine too, but you have to love an open bar staffed by talented professional bartenders. Each drink was a worthy concoction, pleasing to the tastebuds, and pleasant on the eyes.

My former SkillJam coworkers in attendance certainly were livened by the open bar. You know how mixing water and potassium can cause an explosion? It’s a bit like that when you mix alcohol and my former coworkers…a party breaks out. Always a good time with them folks.

[Listening to: Solitude (Duke Ellington) - Ella Fitzgerald - Love Songs: Best Of The Verve Song Books (2:09)]

# Switching to MbUnit

For the longest time now, I’ve been a fan of MbUnit, but never really used it on a real project. In part, I stuck with NUnit despite MbUnit’s superiority because NUnit was the de facto standard.

Well you know what? Pretty much every company or client I end up moving to has not had unit testing in place before I arrived. I’ve always been the one to introduce unit testing. So on my latest project, when I finally met another developer who writes unit tests, and he was using MbUnit, I decided to make the switch.

And that, my friends, was a great decision… Why?

TypeFixture\ Say you write an interface `IFoo` (though it could just as easily have been an abstract class or any base class for that matter). You then proceed to implement a couple implementations of `IFoo`. Wouldn’t it be nice to write some unit tests specific to the interface? Here’s how you do it in MbUnit.

```csharp
interface IFoo {}
class Bar : IFoo {}
class Baz : IFoo {}

[TypeFixture(typeof(IFoo), "Tests the IFoo interface.")]
public class IFooTests
{
    [Provider(typeof(Bar))]
    public Bar ProvideBar()
    {
        return new Bar();
    }

    [Provider(typeof(Baz))]
    public Baz ProvideBaz()
    {
        return new Baz();
    }

    [Test]
    public void TestIFoo(IFoo instance)
    {
        //Test that the IFoo instance
        //behaves properly.
    }
}
```

What you are seeing is a `TypeFixture` which is a type of `TestFixture` that is useful for testing an interface. There is only one test method `TestIFoo`. However you should notice that it takes in a parameter of type `IFoo`.

This deviates from the typical NUnit test which does not allow any parameters. So just who is passing the test that parameter? The other methods in the fixture that have been marked with the `Provider` attribute. The test method is called once for every provider. The provider methods simply instantiate the concrete instance of the interface you are testing. So the next time you implement the interface, you simply add another provider method. Pretty sweet, eh?

Row Based Testing\ I already wrote a post on the `RowTest` attribute for MbUnit. It supports a very common test paradigm of using the same method to test a wide variety of inputs.

Test Runner\ Matt Berther shows how easy it is to write an executable that will run all your unit tests.

RollBack Attribute\ Attach this attribute to a test method and MbUnit makes sure that any database transactions are rolled back at the end of the test. There is an implementation of a `RollBack` attribute for NUnit out there, but the extensibility model is tricky, as I found I couldn’t get the RollBack attribute to work with an `ExpectedException` attribute.

And More…\ MbUnit also has test attributes for repeating a test and repeating a test on multiple threads. It also has a test fixture designed to test custom collections that implement `IEnumerable` and `IEnumerator`.

For more information, check out this Code Project article and the MbUnit wiki.

In the near future, I’ll be switching the unit tests for Subtext to use MbUnit. And assuming Dare and Torsten are ok with it, the unit tests for RSS Bandit.

# Unit Testing Loves Beta Testing And Vice Versa

Jeff links to a post by Wil Shipley criticizing unit testing. You knew I had to chime in on this… ;)

I won’t rehash what I’ve already written on the subject, but will merely try to add a couple key points.

There are a couple of misconceptions I want to clear up.

First, the proponents of unit testing are not promoting it as the be-all and end-all of software development. However, I would say that unit tests are very important when done right, much in the same way version control is important when done right. Unit tests should be applied using a cost-benefit analysis, just as you’d do with anything else. For example, problematic or tricky code should have more unit tests. Important code (such as the code in a banking system that performs a calculation) should have more unit tests. Simple getters and setters, on the other hand, can do without.

> But I’ve NEVER, EVER seen a structured test program that (a) didn’t take like 100 man-hours of setup time, (b) didn’t suck down a ton of engineering resources, and (c) actually found any particularly relevant bugs.

Then perhaps you haven’t seen unit testing done right. My setup time for unit testing is about as long as it takes to set up a class library and run NUnit or MbUnit. Marginal.

Most unit tests are written as code is developed, not tacked on after the fact. The design aspect of writing unit tests cannot be overstated. Especially in teams where one person writes a piece of code that another person is going to call. It’s very easy to create a really awful API to a class library that then costs other developers who have to use the API extra time to fiddle around with it and understand it. With unit tests, at least the author has had to “dogfood” his own medicine and the API is more likely to be usable. If it’s still confusing, well the unit test can serve as a code sample. I tend to learn better from code samples than Intellisense.

> 1) When you modify your program, test it yourself. Your goal should be to break it, NOT to verify your code.

Yes, unit testing does not take away the need to test your own code. However, pages and pages of studies show how easy it is for even very talented developers to develop blind spots when testing their own code. That’s why you still have a QA team dedicated to the task without the baggage that the developer carries.

However, testing the feature you changed isn’t necessarily good enough. For example, suppose you make a slight schema change. Are you sure you haven’t broken a feature developed by another developer? Are you prepared to test the entire system for every change you make? With unit tests you have some degree of regression testing at your fingertips.

While it is true that unit testing can take some additional upfront time, in my experience, especially if you work in a team, it always produces a cost savings overall. The time and cost savings of unit tests cannot be overstated. Yes. Savings!

One TDD practice I am a firm proponent of is to make sure that when a bug is discovered in the code, before you fix the bug, you write a unit test that exposes the bug (if possible and cost effective). By “exposing” the bug I mean you write a test that would pass if the code was working properly, but fails because of the bug. Afterwards you fix the bug, make sure the test passes and then check in the code and the unit test. Now you have a fair degree of certainty that particular bug won’t crop up again. By the very existence of the bug in the first place, you know that area of code is troublesome and deserves to have unit tests testing it.

These sorts of unit tests address the criticism that unit tests are too soft on the code, since they are effectively generated as a result of human interaction with the system.

On a recent project, a developer checked in a schema change and tested the system and it seemed to work just fine. Meanwhile, I had gotten latest and noticed several of my unit tests were suddenly failing. After a few minutes of digging, I called the other developer and confirmed the schema change. It required a small change in my code and everything was running smoothly. Without my suite of unit tests, I would have no easy way to judge the true impact of that schema change. In fact, it may have taken hours for me to even notice the problem, as things “seemed” to be working fine from a UI perspective. It was the underlying calculations that were broken.

> Real testers hate your code. A unit test simply verifies that something works. This makes it far, far too easy on the code. Real testers hate your code and will do whatever it takes to break it– feed it garbage, send absurdly large inputs, enter unicode values, double-click every button in your app, etcetera.

Yes they do! And when they throw in garbage that breaks the code, I make sure to codify that as a unit test so it doesn’t break again. In fact, over time as I gain experience with unit testing, I realize I can just as easily throw garbage at my code as a human tester can. So I write my unit tests to be harsh. To be mean angry bad mofos. I make sure the tests probe the limits of my code. It takes me a bit of upfront time, but you know what? My automated unit test can throw garbage at my code faster than a human can. Plus, don’t forget, it is a human who is writing the test.

> Testing is hugely important. Much too important to trust to machines. Test your program with actual users who have actual data, and you’ll get actual results.

Certainly true, but regression testing is much too boring to be left to humans. Humans make mistakes, especially when performing boring repetitive tasks. You would never tell a human to manually sum up a long row of numbers; that’s what computers are for. You let the machine do the tasks it’s well suited for, and the humans can do what they are well suited for. It all works hand in hand. Unit testing is no substitute for Beta testing. But Beta testing is certainly no substitute for unit testing.

# It's Raining in Southern California

It rained the night before last night (with Thunder!) and it rained last night and it is still raining. I think the forces of nature conspire against me. I have no choice but to crawl back into bed with a good book and listen to the rain. If a nap should overtake me, so be it.

# Writing Custom Exceptions Using Resharper Live Templates

Writing proper custom exceptions can amount to a lot of busy work. Oh sure, it’s easy to simply inherit from `System.Exception` and stop there. But try running that baby through FxCop or passing that exception across AppDomains and you’re in for a world of hurt (hyperbole alert!).

What makes writing a custom exception a pain? First, there are all those constructors you have to implement. You also need to remember to mark the class with the `Serializable` attribute. Also, if your exception has at least one custom property, then you’ll want to implement `ISerializable`, a special serialization constructor, and more constructors that accept the new property.

On page 411 of Applied .NET Framework Programming, Jeffrey Richter outlines the steps to write a proper custom exception class.

If you have this book, you should definitely read and learn these steps. Or if you are a ReSharper user, you can be lazy and just use the Live Template (akin to a Whidbey Code Snippet) I’ve created and posted here for your exceptional enjoyment.

Unfortunately, I do not know of any way to export and import live templates within ReSharper, so you’ll have to follow the steps I outlined in a previous post.

I have included two templates. The first is for a full-blown sealed custom exception with a single custom property. It’s easy enough to add more properties if you need them. The second is for a simple custom exception with no custom properties. The ReadMe.txt file included outlines a couple of settings you need to make for a couple template variables.

I ended up using the abbreviation `excc` to expand the full exception class and `excs` for the simple exception class. This ought to save you a lot of typing. Below is a screenshot of the full template…

# Fatwa Against Soccer

It appears that one Islamic extremist has issued a Fatwa against playing soccer by the regular rules. Here are some choice examples.

> 4. Do not follow the heretics, the Jews, the Christians and especially evil America regarding the number of players. Do not play with 11 people. Instead, add to this number or decrease it.

> 5. Play in your regular clothes or pajamas or something like that, but not colored shorts and numbered T-shirts, because shorts and T-shirts are not Muslim clothing. Rather they are heretical and Western clothing, so beware of imitating their fashion.

> 8. Do not play in two halves. Rather play in one half or three halves in order to completely differentiate yourselves from the heretics, the polytheists, the corrupted and the disobedient.

Thanks to Walt for sending this to me.

I should note that this edict was issued by an extremist and is not representative of general Islamic scholarship and thought. I am still waiting for the Christian set of soccer rules. They might look something like…

1. A woman shall not be allowed to referee as it sayeth in the Bible, 1 Timothy 2:12 “I do not permit a woman to teach or to have authority over a man”. If a woman attempteth to have authority and become a judge, her name and character shall be impugned and criticism by other Christians appeareth in every newspaper.

2. Whence the ball shall enter the goal, the goal shall be postponed till the scorer recite a verse from memory chosen at random by the referee. Afterwards, to celebrate, the team shall be allowed to pray, but not cheer.

3. Players shall not lust after scoring a goal for lust is sinful. Only through prayer and should the good lord will it, shall a goal be allowed. One who scores a goal shall not say “I scored a goal” but merely say, “The Lord Jesus Scored a Goal and I was merely his vessel.”

# Decompression

My weekends just seem to get better and better every week. Since going independent and then starting a company with a friend, my work weeks have been much more enjoyable. Fortunately, my weekends have kept up as well.

This past Saturday we went to the Decompression LA party. Decompression is a big party held by various Burning Man regional groups. I hear the best (and biggest) one is the one held in San Francisco.

The idea behind it is that it serves as decompression from returning to our normal pressure filled lives after the bliss that is Burning Man. It’s a way to bring a bit of that Burning Man spirit to the city. For those that have been curious about Burning Man, Decompression is a once a year party that is very much like a mini-Burning Man, but without all the annoying Playa dust and everyone is clean.

It’s billed as a 12 hour street fair (noon to midnight) with performers, art, art cars, and of course, everyone in their Playa wear. Oh, and I shouldn’t forget to mention the three city blocks of thumping music.

Unfortunately, my camera ran out of batteries, so I didn’t take many pics. But here’s one of something I really wanted to see someone operate. At the party we met up with Bruce and Kelly, two people we met at the Playa. We also met up with Dane, Mark, and Erika, the crew I went with.

The rest of my weekend consisted of the usual: soccer games, working on Subtext, working on an article, relaxing.

# The Simple Answer To VS.NET Designer Woe

It’s happened to all of us. You are happily coding along (in Visual Studio .NET 1.x), minding your own business when you decide to switch from the code view to the designer and back to the code view. That’s when you experience…The Woe.

Now to prevent the woe, this post has some great tips.

However, I discovered something quite by accident, and I’m not sure if it works in all cases. But I was working on an ASPX page and switched to design mode and then switched back and noticed that the HTML was completely messed up. Various tags had been upper-cased (for god knows what reason) and my indenting was kicked in the nuts.

So I hit `CTRL+Z` twice.

It appears that VS.NET took two steps to fubar my code, but both steps were still in the command stack. So undoing twice restored my ASPX markup to its beautiful pristine state.

# Connecting to Terminal Services When All Active Sessions are Used

UPDATE: If you are using Windows Server 2008, the switch is `/admin` not `/console`. See this post for details.

We use Remote Desktop (Terminal Services) to remotely manage a Windows 2003 server that is not part of our domain. Recently we ran into the two user limit for remote desktop connections, which barred anyone from connecting.

Jon discovered a neat little trick that got us in. He ran the following command from the command line:

`mstsc -console`

It turns out that mstsc.exe is the remote desktop connection application. The `-console` flag specifies that we want to connect to the console session of a server. Since we generally launch Remote Desktop from the icon, we almost always leave this console session free. Nice!

When I got back into the server, I used the Terminal Services Manager tool to reset the disconnected and idle sessions. I then used the Terminal Services Configuration tool to set a timeout for disconnected sessions. Finally, I remembered to log out rather than simply close the remote desktop window. Simply closing remote desktop doesn’t reset the session.

# Humans Are Not Random Number Generators

There’s an interesting discussion in the comments on the Coding Horror blog in which Jeff suggests that

> Your password alone should be enough information for the computer to know who you are.

And I definitely agree, assuming a couple of constraints:

- You’re on a home computer or a system with a small number of users.
- You enforce pass-phrases rather than passwords.

A while ago I referenced an article on the insecurity of passwords as compared to pass-phrases. The article discusses how dictionary attacks and their ilk (brute-force, pre-computation, etc…) are becoming more and more successful at breaking into systems because people generally choose poor passwords.

However, in a sufficiently large system, a pass-phrase alone is no substitute for a username, pass-phrase combination during authentication. The reason is not that a 30+ character pass-phrase is theoretically statistically insecure. One commenter in Jeff’s post mentioned:

> I honestly don’t care how improbable it would be, I want it to be impossible.

Sorry, no system is unhackable.\ Impossible? The only system impossible to hack is one that does not allow logins. Perhaps a lump of rock would be more to your taste? Even with a username and password combination, it is not impossible to guess a username and password combination by pure accident. I might by pure chance in haste mistype my credentials in such a way that I inadvertently type in the username and password of another user. That’s possible.

That’s probably within the same range of probability (and I’m hand waving here) as guessing a 30+ character cryptographically generated pass-phrase.

But there’s just one problem: humans are not cryptographically strong random number generators.

True Story\ When I was giving a presentation in college about random number sequences, I asked my classmates to “generate” two random sequences of ones and zeroes, each fifty numbers long. I stepped out of the room and they generated the first sequence by just writing ones and zeroes on the board as they saw fit, attempting to generate a random sequence. For the second sequence, they flipped a coin fifty times and wrote those numbers on the board.

They then summoned me into the classroom. I took a look at the two sequences and quickly discerned which was generated by coin toss and which was generated by consensus.

It turns out that we have a tendency, in an attempt to be random, to assume that there will not be very long runs of the same number. So in the sequence generated by hand, the longest run of the same character was only three or four long. But in the random sequence of 50 coin tosses, I expected at least one run of the same number to be around five or six characters long.
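You can see the same tell with a quick simulation. `longest_run` here is a helper I’m making up for illustration, not a library function:

```python
import random

def longest_run(bits):
    """Length of the longest run of identical symbols in a sequence."""
    best = current = 1
    for prev, cur in zip(bits, bits[1:]):
        current = current + 1 if cur == prev else 1
        best = max(best, current)
    return best

# A plausible "human-generated" sequence: nothing but short runs.
human = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0] * 5

# A genuine sequence of 50 fair coin tosses.
coins = [random.randint(0, 1) for _ in range(50)]

print(longest_run(human))  # 2 -- short runs betray the human
print(longest_run(coins))  # usually 5 or 6 for 50 fair tosses
```

Run it a few times: the coin-toss sequence almost always contains a noticeably longer run than anything a person writes by hand.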

Psychology of secrets\ So back to the point. The problem in a system with a large number of users is that psychology comes into play. You just know one or two people are going to choose the phrase “Who let the dogs out?” If you didn’t require a username and pass-phrase combo when authenticating, a person could inadvertently access another user’s account. Instead of attempting to guess one user’s account at a time, a hacker could be guessing at ALL users’ accounts at the same time.

Now there are some potential ideas that could make this work, assuming the benefit is worth it. One is to require that the pass-phrase contain a number and a punctuation mark. Another option is to also require that the pass-phrase contain the username. So instead of the earlier pass-phrase I mentioned, my pass-phrase might be “Who let the dogs out Mr. Haacked?”
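A minimal sketch of those two checks; the function names are hypothetical, not from any real library:

```python
import string

def contains_required_symbols(passphrase: str) -> bool:
    """Idea one: the phrase must contain a digit and a punctuation mark."""
    has_digit = any(c.isdigit() for c in passphrase)
    has_punct = any(c in string.punctuation for c in passphrase)
    return has_digit and has_punct

def contains_username(passphrase: str, username: str) -> bool:
    """Idea two: the phrase must contain the username itself."""
    return username.lower() in passphrase.lower()

print(contains_username("Who let the dogs out Mr. Haacked?", "Haacked"))  # True
```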

# Teaching Solid HTML and CSS Production Work

I want to teach a friend everything I know about HTML Production work (which won’t take long). By production work, I mean the process of receiving a Photoshop file, cutting it up, and producing nice clean semantic (X)HTML and CSS.

I’m not a master of such things (though I am pretty handy with CSS these days), but I do know there’s a difference between producing HTML for a static web page and producing HTML for a dynamically rendered page such as an ASP.NET page. It’s those details, which you don’t learn at many design shops, that I feel I can teach her well.

However, in preparation, I have a few books and websites I want her to check out. Let me know what you think of this list and if you’d make any additions, deletions, etc…

Books

Websites

• QuirksMode\ This is a great site for learning CSS and Javascript tips and how to deal with browser compatibility issues.
• A List Apart\ This site sports insightful articles on web design and CSS. Looking at the code behind the site itself is quite educational.
• Listamatic\ This is more of a reference and tutorial on how you can use CSS and the unordered list `<ul>` element to produce all sorts of lists, navbars, menus, etc…
• 20 CSS Tips and Tricks\ Tips and tricks for achieving common tasks using CSS.

Also, I asked some friends what applications they use to cut a Photoshop file, and one mentioned Macromedia Fireworks while another just uses Photoshop. Any tips?

Also, after learning the basics, the following links are important for understanding the box model problem between Firefox and IE.

Box Model Hacking

• The IE Box Model and Doctype Modes\ Explains the Box Model problem and how the Doctype affects it.
• BoxModelHack\ Explains some hacks to get around the Box Model problem.
• Box Model Tweaking\ Discusses an upcoming CSS3 declaration that allows a page to change which box model the browser uses. This (and vendor-prefixed variants) is already supported by some browsers, perhaps making it possible to have them use the IE box model.
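The CSS3 declaration in question is `box-sizing` (Gecko builds of this era expose it with a `-moz-` prefix). A minimal sketch of opting an element into the IE-style model; the class name is just an example:

```css
/* Make width include padding and border, as IE's traditional
   box model does. */
.sidebar {
  -moz-box-sizing: border-box; /* Firefox, prefixed */
  box-sizing: border-box;      /* CSS3 */
  width: 200px;  /* total rendered width, padding and border included */
  padding: 10px;
  border: 2px solid #333;
}
```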

# Who The Hell Cares What a Blogger Is Listening To?

Every now and then on various blogs (including mine), you’ll notice a little snippet at the bottom of a post that looks something like…

[Listening to: Never Forget - Paul Van Dyk - Reflections (5:26)]

This is usually inserted by a plug-in to some media player (in my case iTunes) that allows the blogger to easily insert information about what is currently playing.

So in the comments of my last post, Jeff Atwood asks the pertinent question (and I’m paraphrasing here).

Why the hell do I care what you were listening to when you wrote that post?

I attempted some bullshit answer about how writing is art and music influences art. Blah blah blah.

However, in an attempt at introspection and honesty, the real answer is:

Because I’m an egomaniacal Bandwagon jumper!

Yeah, that’s right! I saw some others do it, so I started doing it. I jumped on the bandwagon.

The egomaniacal aspect relates to the belief that someday, I am going to be so freaking famous that everyone will scour my trash to discover what brand of floss I use. Even more, they’ll want to know what music I listened to when I wrote. They’ll have college courses where they deconstruct my writings in the context of the music I was listening to at the time. They’ll even write alternative histories such as…

How would the texture of the article on the Poor Management Epidemic have changed were he listening to Rage Against The Machine when he wrote it as opposed to Röyksopp?

Oh yes, it will happen. Oh yes. Just you watch Jeff.

# Hilarious Video: The Yes Man

This video answers the question, “Just who is on the other side of the call?”

It’s abso-freakin-hilarious! Watch the whole thing.

DISCLAIMER: Some foul language, so keep the speakers low at work.

[Listening to: Manga - Timo Maas - Loud (6:24)]

# The Poor Management Epidemic

I’ve noticed a bit of talk lately about poor management such as this piece from Dare and this one from Mini-Microsoft.

While they focus on poor management at Microsoft, Microsoft does not have a monopoly on poor management. Indeed, it is rampant in the industry.

In general, I see two main afflictions that affect corporate thinking in America. It is from these two that all other problems seem to sprout.

• Placing short-sighted goals above all strategic and long-term planning.
• Making decisions based on hopes rather than analysis and objective data.

A typical scenario might look like this. A company is starting a software project and asks its tech team (or consultants), “Hey, how long will it take to build this?” A reasonable question, but surprisingly difficult to answer because, as we know, business types often don’t really know what they want.

After spending some time gathering requirements, the whole time being pressured by the business team, the tech team delivers a rushed estimate. Unfortunately for the tech team, the business types have already promised delivery of the product in half the time of the estimate.

So what happens now? Perhaps the company offers some token incentive and a pep talk about pulling together and taking one for the team by entering crunch mode from the start. Maybe they’ll even hire a few more developers attempting to prove that nine women can indeed have a baby in one month.

Perhaps the tech team understands the principle of the project triangle.

There are three goals of every project: Good, Fast, Cheap. You can pick two.

But try telling that to management. You ask them if they will prioritize features, and they come back with a list where every feature is priority #1.

At this point, the company is managing by hopes and fairy tales. They hoped the time they promised was reasonable. They hope the tech team can complete the project in time. The tech team wants to put in place a longer design and planning phase, but management wants them to get coding because they hope there won’t be any coding problems. The management team doesn’t keep a list of risks because they hope nothing bad will happen.

Invariably, by putting these shortsighted goals above the long term success of the project, they manage to make the project progress even slower than had they allocated the correct amount of time. Certainly it is possible they will get a deliverable on time. But a deliverable that is very much a house of cards.

To see the epidemic nature of this scenario, you only need to read the paper. The recent classic example is the Virtual Case File project. After more than three years and \$170 million spent, the entire project was scrapped. That is \$170 million in taxpayer money down the drain due to complete ineptitude and poor management by both the client (the FBI) and its vendor.

We could probably compile a huge catalog of business failures due to short-sightedness and management through hopes.

UPDATE: Jon Galloway turned me on to Johanna Rothman’s blog. She has a great example of how managers can be “penny-wise and pound-foolish”. Just another example of being short-sighted.

[Listening to: Circuit Breaker - Röyksopp - The Understanding (5:25)]