comments edit

I just had to post this in its full glory. Great lead-up, Matt!

You just came to Texas Tech University as a freshman…

You are SO PROUD that you were chosen to be the “Bell Ringer”. Your job is to ring the school’s bell during the big game to help pump up the crowd…

Your whole family, all your friends, and 15 million ESPN viewers see you on Saturday’s telecast ringing the team’s bell. But, due to the tragically unfortunate placement of the bell, the camera, and your body, your whole family, all of your friends, and 15 million ESPN viewers see this instead….

[Via public MattBerther : ISerializable]

comments edit

While it may be exciting to see Microsoft jumping aboard the web-based application bandwagon, something that I am experiencing right now reminds me why I think there is still a strong place for rich “smart” clients.

There is an important piece of information in an email someone sent me, and when I try to log in to Gmail I get…

Gmail is temporarily unavailable. Cross your fingers and try again in a few minutes. We’re sorry for the inconvenience.

At least with a rich client like Outlook, I would have had that email on my local machine. I also use Yahoo Notepad for important information and have had that site be down a few times when I needed a critical piece of information. It makes me realize that I shouldn’t trust these services to host my important data. I want it on my own machine where I can get to it.

comments edit

I have a question for those of you who host a blog with a hosting provider such as WebHost4Life. Do you make sure to remove write access for the ASPNET user to the bin directory? If so, would you be willing to enable write access for an installation process?

The reason I ask is that I’ve created a proof of concept for a potential nearly no-touch upgrade tool for upgrading .TEXT to Subtext. This particular tool is geared towards those who have .TEXT hosted with 3rd party hosting, although even those who host their own server may want to take advantage of it.

The way it works is that you simply drop a single upgrader assembly into the bin directory of an existing .TEXT installation. You also drop an UpgradeToSubtext.aspx file in your admin directory (This provides a bit of safety so that only an admin can initiate the upgrade process).

Afterwards, you simply point your browser to Admin/UpgradeToSubtext.aspx, which initiates the upgrade process.

The upgrade tool finds the connection string in the existing web.config and displays a message with the actions it is about to take. After you hit the upgrade button, it backs up important .TEXT files and unzips an embedded zip file which contains all the binaries and content files for Subtext. It also runs an embedded SQL script to create all the new Subtext tables and stored procedures and copies your .TEXT data over. It doesn’t modify any existing tables, so it is possible to roll back in case something goes wrong. Finally, it overwrites the web.config file with a Subtext web.config file, making sure to set the connection string properly.

It’s a very nice and automated procedure, but it has a key flaw. It requires write access to your bin directory.
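One way to soften that flaw would be for the upgrader to probe for write access up front and stop with instructions, rather than dying halfway through. Here is a minimal sketch of such a probe, using a hypothetical helper class of my own invention (this is not part of the actual tool):

```csharp
using System;
using System.IO;

// Probes whether the current account (e.g. ASPNET) can write to a directory
// by creating and then deleting a randomly named file.
public static class WriteAccessProbe // hypothetical helper, not in the upgrader
{
    public static bool CanWriteTo(string directory)
    {
        string probePath = Path.Combine(directory, Path.GetRandomFileName());
        try
        {
            using (File.Create(probePath)) { }
            File.Delete(probePath);
            return true;
        }
        catch (UnauthorizedAccessException) { return false; }
        catch (IOException) { return false; }
    }
}
```

The upgrade page could then call something like WriteAccessProbe.CanWriteTo(Server.MapPath("bin")) and fall back to manual deployment instructions when it returns false.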

An alternate approach that avoids writing to the bin directory is to have the user manually deploy all the Subtext binaries to the bin directory. The upgrade process would run the same, but it would only need write access to the web directory to deploy the various content files. Giving the ASPNET user write access to the web directory is not an unreasonable request since the gallery feature of .TEXT did create folders and require write access.

If you are considering upgrading from .TEXT to Subtext, or if you just have an opinion, please chime in.

comments edit

The subject of this post is the title of an interesting article on page 58 of this month’s Wired magazine. The author, Patrick Di Justo, shows that compared to 1950 prices, we are paying more of our annual income for houses, but we get a lot more for our dollar.

For example, the average square feet per person in 1950 was 289.1, compared to 896.2 today. Price per square foot, when adjusted for inflation, is actually lower today than in 1950. One of the more striking numbers is how many square feet an annual income buys today as compared to then. Then, one could buy 429.3 square feet, while today one can buy 930.1.

What I would love to see is this analysis applied to Los Angeles home prices from 1950 to present.

comments edit

Software pundit Joel Spolsky finally added titles to his RSS feeds (among other site improvements), and it’s about time. The title “November 5, 2005” tells me nothing about what he’s saying. Love him or hate him (why choose one or the other? Choose both!), Joel is definitely worth reading.

comments edit

Every day I look at my current code and go, “Damn, that’s some sweet shit!” But I also look at code I wrote a month ago and say, “What a freakin’ idiot I was to write that!” So in a month, the code I’m writing today will have been written by an idiot.

It looks like I am not the only one who feels that way.

It seems that no matter the date, if I look back at the code I wrote six months prior, I always wonder “what kind of crack was I smoking when I wrote that?” For the most part it’s not likely to end up on The Daily WTF, but still, does this cycle ever end? Or at least get longer?

I suppose the optimistic way to look at it is that I am still learning pretty steadily, and not becoming stagnant. I’m also able to resist the temptation to go back and fiddle with what isn’t broke. I do kinda feel bad for anyone that has to maintain any of my older stuff (actually not really, suckers).

[Via Pragmatic Entropy]

comments edit

By now, every developer and his mother knows that VS.NET 2005 and SQL Server 2005 have been released. Prepare for the generics explosion as legions of .NET developers retrofit code, happily slapping <T> wherever it fits.

I predict the number of angle brackets in C# code will initially increase by 250%, only to settle over time to around 75% above current numbers. If you don’t count the angle brackets in C# comments, the numbers could be even higher.

But before you go too hog wild with generics, do consider that generics have an overhead associated with them, especially generics involving a value type. Their benefits do not come completely free.

As Rico Mariani pointed out in his PDC talk, generics involve a form of code generation by the run-time. His rule of thumb was that when your collection contains around 500 or so items, the benefits outweigh the overhead. But as always: measure, measure, measure.
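In the spirit of measuring, here is a rough harness (my own sketch, not Rico’s) that contrasts ArrayList, which boxes every int it stores, against List<int>, which stores them unboxed. The item count is arbitrary, and the results will vary by machine:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

class BoxingComparison
{
    static void Main()
    {
        const int count = 1000000;

        Stopwatch watch = Stopwatch.StartNew();
        ArrayList boxed = new ArrayList();
        for (int i = 0; i < count; i++)
            boxed.Add(i); // each Add boxes the int into a heap object
        Console.WriteLine("ArrayList: {0} ms", watch.ElapsedMilliseconds);

        watch = Stopwatch.StartNew();
        List<int> unboxed = new List<int>();
        for (int i = 0; i < count; i++)
            unboxed.Add(i); // stored unboxed, no per-item allocation
        Console.WriteLine("List<int>: {0} ms", watch.ElapsedMilliseconds);
    }
}
```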

In general, the strong typing and improved code productivity outweigh any performance concerns I have with small collections.

UPDATE: Whoops, I mistyped the number of items Rico mentioned. He said 500, not 50. Thanks for the correction Rico.

comments edit

The great people at WebHost4Life moved my database and web server to new Windows 2003 servers. They put them in the same server block, and I noticed a significant decrease in the time it takes my blog to load. This explains why my site was down this morning.

Hopefully, this server has much less abusive tenants than my last one did.

comments edit

Eric Lippert does a great job of defining the term Idempotent. I’ve used this term many times both to sound smart and because it is so succinct.

The one place I find idempotence really important is in creating update scripts for a database such as the Subtext database. In an open source project geared towards other devs, you just have to assume that people are tweaking and applying various updates to the database. You really have no idea what condition the database is going to be in. That’s where idempotence can help.

For example, if an update script is going to add a column to a table, I try to make sure the column isn’t already there before adding it. That way, if I run the script twice, three times, twenty times, the table is the same as if I ran it once. I don’t end up adding the column multiple times.
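For instance, here is a sketch of such a guard, executed from C# against SQL Server the way an update tool might run it. The table and column names are hypothetical, not actual Subtext schema:

```csharp
using System.Data.SqlClient;

class IdempotentColumnAdd
{
    // Adds the column only if it isn't there yet, so running this once or
    // twenty times leaves the table in the same state.
    const string Script = @"
        IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                       WHERE TABLE_NAME = 'blog_Content'
                         AND COLUMN_NAME = 'DateSyndicated')
        BEGIN
            ALTER TABLE blog_Content ADD DateSyndicated DATETIME NULL
        END";

    static void Main()
    {
        using (SqlConnection connection = new SqlConnection(
            "server=(local);database=blog;Integrated Security=SSPI"))
        {
            connection.Open();
            new SqlCommand(Script, connection).ExecuteNonQuery();
        }
    }
}
```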

comments edit

Sometimes someone writes a post that makes you say, “Oh shit!” For example, Jon Galloway writes that creating a Windows service just to run a scheduled process is a bad idea.

And he presents a very nice case. Nice enough that I take back the times I have condescendingly said that Windows Services are easy to write in .NET. I probably should look through some of the services I have written in the past. I know of one I could easily convert to a console app and gain functionality.

However, I think the decision sometimes isn’t as easy as that. One service I have written in the past is a socket server that takes in encrypted connections and communicates back with the client. Now that obviously needs to run all the time and is best served as a Windows service. The problem then was that, since I had written the Windows service code to be generalized, I was able to implement many other services very quickly, even ones with timers that ran on a schedule.

However, the most challenging ones to write happened to be the ones that ran on a schedule, since the scheduling requirements kept changing and I realized I was going down the path of implementing…well…the Windows Task Scheduler.

In general, I think Jon’s right. If all you are doing is running a scheduled task, use the Windows Task Scheduler until you reach the point that your system’s needs are no longer met by the scheduler. This follows the principle of doing only what is necessary and implementing the simplest solution that works.

In a conversation, Jon mentioned that a lot of developers perceive Windows services to be a more “professional” solution than task scheduling a console app. But one way to think of a service is as an application that responds to requests and system events, not necessarily a scheduled task. So to satisfy both camps, you could consider creating a service that takes in requests and a scheduled task to make the requests. For example, a service might have a file system watcher active while a scheduled task writes the files it watches for, as in the sketch below. I don’t suggest adding all this complexity to something that can be very simple, though.
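Here is a bare-bones sketch of the service half of that split, hosted as a console app for brevity. The folder, filter, and processing are all made up, and a real deployment would wrap this in a ServiceBase-derived class:

```csharp
using System;
using System.IO;

class RequestWatcher
{
    static void Main()
    {
        // Watch for request files that a scheduled task (or anything else) drops off.
        FileSystemWatcher watcher = new FileSystemWatcher(@"C:\requests", "*.xml");
        watcher.Created += delegate(object sender, FileSystemEventArgs e)
        {
            Console.WriteLine("Processing request: " + e.FullPath);
            // ...handle the request here...
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for requests. Press Enter to quit.");
        Console.ReadLine();
    }
}
```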

As for me, I also like writing Windows services because I have a system for creating installation packages very quickly. What I need to do is spend some time creating an installer task for setting up a Windows Task Scheduler job. That way I can do the same for a scheduled console app.

comments edit

Welcome to the new pissing contest.

Oh my, I couldn’t possibly link to that blog. It’s only worth half what my blog is worth.

According to this site, my blog is worth…

My blog is worth $92,020.02. How much is your blog worth?

Ok, I’m ready to sell out! ;)

UPDATE: Others around the block have been posting their blog’s worth.

Of course, as Scott Reynolds points out,

A thing is only worth what someone will pay you for it.

Which is actually encouraging, because I can probably find some fool willing to part with twice the figure listed above for my blog. But I’d better sell fast before the bubble bursts and blogs go through a huge market slump. Wouldn’t want my pedigreed blog having to resort to McDonald’s pitches to bring up its own worth.

tech comments edit

I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone. – Jack Valenti, former head of the MPAA

I know that Dave Winer has dismissed Google Print as a bad idea, but Dave is often hit or miss with his opinions. However, I was surprised to see Dare’s criticism of the effort.

Yes, it is true that Google’s “Do No Evil” motto is pure marketing schtick, and now that they are a large corporation, they can be just as evil as any other corporation. But that still doesn’t take away from the benefits of the Google Print project. Not all profit-driven operations are evil. Those at Microsoft should know that.

Is It Going To Harm Publishers?

Dare uses himself as an example in that he almost never buys technical books, but chooses to search the web to find the references he needs. However, if Google is to be believed, there is a big difference between web search and the search-within-a-book feature.

The difference is that with web search, you get the full content of what you are searching for. With Google Print, you get a snippet of a page in the book. Perhaps it contains all that you need, perhaps not. If it doesn’t, you’ll spend a lot of time searching, trying to hit that exact page. Can you imagine trying to read through a volume of The Art of Computer Programming like that? You might as well just physically go to the bookstore.

If you only needed one little piece of information from the book, you probably wouldn’t have bought it anyways, right? For example, Dare already admits he never buys technical books. So how will Google Print take Dare’s money away from publishers? They aren’t getting his dollar as it is.

Personally, I find reading a book to be a great way to get a focused education on a technical topic. However, I would want to be able to search within the book to see that it does cover the topic in the depth I expect, and I hate running to Borders Book Store to do so. It’s a great relief when a book I am considering is part of Amazon.com’s Search Within a Book program.

Technical References are a small part of the total market

Another key point to make is that technical reference books are a very tiny part of the entire book market. I certainly don’t want to read The Invisible Man via search. The general public is not going to search their way through the latest Stephen King novel. I don’t see how searching within a book is going to hurt the huge majority of publishers. As many point out, it will be an enabler of the long tail, perhaps selling books long forgotten by their publishers.

Legality

As for the legality of the program, you should read Lawrence Lessig’s take on it. In his opinion, it most definitely constitutes fair use. If that is the case, whether or not it hurts the publishers becomes a moot point. Much like the pain that the VCR caused the movie industry was a moot point. Oh wait, the movie industry made millions off of the VCR…

comments edit

Ok, I may have misfired with the last video, but this is truly hilarious. It passed the “Wife Test” (the last one didn’t).

I hear this is an old one, but it is a clip from the humorous improv show “Whose Line Is It Anyway?” with Drew Carey, Wayne Brady, et al.

In this one, Richard Simmons is a guest.

Watch it! (this time, I did not forget to link to it). It takes a bit of time to download and get started, so be patient.


comments edit

I’ve talked before about the various physical pains that software developers face on the job. For me, it seems that my pain likes to migrate around my body. If I have pain, it almost always shows up in one place at a time.

For example, recently, my hands started hurting again, but my back felt much better. More recently, my hands and back felt good, but my eyes started bugging out due to eye strain. Now I am back to my back hurting, and everything else is feeling good. I hope to get back to where everything feels good, but I think that situation only occurs in the womb.

tdd database integration comments edit

UPDATE: For the most part, I think young Phil Haack is full of shit in these first two paragraphs. I definitely now think unit tests should NOT touch the database. Instead, I do separate those into a separate integration test suite, as I had suggested in the last paragraph. So maybe Young Phil wasn’t so full of shit after all.

I know there are unit testing purists who say unit tests by definition should never touch the database. Instead you should use mock objects or some other contraption. And in part, I agree with them. Given unlimited time, I will gladly take that approach.

But I work on real projects with real clients and tight deadlines. So I will secretly admit that this is one area I am willing to sacrifice a bit of purity. Besides, at some point, you just have to test the full interaction of your objects with the database. You want to make sure your stored procedures are correct etc…

However, I do follow a few rules to make sure that this is as pure as possible.

First, I always try and test against a local database. Ideally, I will script the schema and lookup data so that my unit tests will create the database. MbUnit has an attribute that allows you to perform a setup operation on assembly load and teardown when the tested assembly unloads. That would be a good place to set up the database so you don’t have to do it for every test. However, often, I set up the database by hand once and let my tests just assume a clean database is already there.

Except for lookup data, my tests create all the data they will use via whichever API and objects I am testing. Each test runs within a COM+ 1.5 transaction using a RollBack attribute, so that no changes are stored after each test. This ensures that every test runs against the exact same database.
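To make that concrete, here is roughly what one of those tests might look like using MbUnit’s RollBack attribute. The connection string and the Posts table are placeholders, not a real schema:

```csharp
using System.Data.SqlClient;
using MbUnit.Framework;

[TestFixture]
public class PostPersistenceTests
{
    const string ConnectionString =
        "server=(local);database=BlogTest;Integrated Security=SSPI";

    [Test]
    [RollBack] // wraps the test in a COM+ 1.5 transaction and rolls it back
    public void InsertedPostCanBeReadBack()
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            new SqlCommand(
                "INSERT INTO Posts (Title) VALUES ('Hello World')", connection)
                .ExecuteNonQuery();

            int count = (int)new SqlCommand(
                "SELECT COUNT(*) FROM Posts WHERE Title = 'Hello World'",
                connection).ExecuteScalar();
            Assert.AreEqual(1, count);
        }
    }
}
```

Because the transaction rolls back when the test exits, the inserted row never survives, and the next test sees the same clean database.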

This is the reason I can be a bit lazy and set up the database by hand, since none of the tests will change the data in the database. Although I would prefer a no-touch approach where the unit tests set up the database. For that, there is TestFu, which is now part of TestDriven.Net.

From my experience, I think this approach is a good middle ground for many projects. A more purist approach might separate the tests that touch the database into a separate assembly, but still use NUnit or MbUnit to run them. Perhaps that assembly would be called IntegrationTests.dll instead of UnitTests.dll. It’s your choice.

comments edit

You know you’re a big geek when a sequence of numbers with an interesting property just pops into your head. No, I’m not talking about myself (this time). Jayson Knight is the big geek, as he noticed a pattern in a sequence of numbers that popped into his head…

This just popped into my head the other day for no other reason than to bug me: Square all odd numbers starting with 1…subtract 1 from the result…then divide by 8. Now look for the pattern in the results.

He even provides a code sample to do the math for you, but you can easily do it by hand on paper. The pattern he noticed can be phrased another way: the square of any odd number, when divided by eight, leaves a remainder of 1.
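Jayson’s sample isn’t reproduced here, but a quick stand-in that runs the same arithmetic might look like this:

```csharp
using System;

class OddSquareRemainders
{
    static void Main()
    {
        // Square each odd number, subtract 1, divide by 8 -- and note that the
        // square itself always leaves a remainder of 1 when divided by 8.
        for (int x = 1; x <= 19; x += 2)
        {
            int square = x * x;
            Console.WriteLine("{0,2}^2 = {1,3}; ({1} - 1)/8 = {2,2}; {1} mod 8 = {3}",
                x, square, (square - 1) / 8, square % 8);
        }
    }
}
```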

This is actually a pattern noticed by John Horton Conway and Richard Guy in 1996. They stated that in general, the odd squares are congruent to 1 (mod 8).

I couldn’t find their proof, but it is easily proved by induction. I’ll walk you through the steps.

The Proposition

We want to prove that

(x^2 - 1) mod 8 = 0 for all odd integers x >= 1.

Note that this is the same as proving that x^2 mod 8 = 1. In other words, if we prove this, we prove the interesting property Jayson noticed.

Verify the Base Case

Here’s where our heavy-duty third-grade math skills come into play. We try out the case where x = 1.

(1^2 - 1) mod 8 = 0 mod 8

So yes, 0 mod 8 is zero, so we’re good with the base case.

Formulate the Inductive Hypothesis

Ok, having demonstrated the fact for x = 1, let’s hypothesize that it is indeed true that

(x^2 - 1) mod 8 = 0 for some particular odd integer x >= 1.

Now prove it

Here we prove the next case. So, assuming our hypothesis above is true, we want to show that it must be true for the next odd number. That is, we want to show that

((x+2)^2 - 1) mod 8 = 0

Well that can be multiplied out to…

((x^2 + 4x + 4) - 1) mod 8 = 0

Note I don’t subtract the one from the four.

So just re-arranging the numbers a bit we get…

((x^2 - 1) + 4x + 4) mod 8 = 0

Now I factor the last two terms and get (you do remember factoring, right?)

((x^2 - 1) + 4(x + 1)) mod 8 = 0

Ok, you should notice here that the (x^2 - 1) term is certainly divisible by 8 due to our hypothesis. So we just need to prove that 4(x + 1) is also divisible by 8. If two numbers are each divisible by another number, we know their sum is also divisible by that number.

Well it should be pretty clear that 4(x+1) is divisible by eight. How? Well since x is an odd number, x + 1 must be an EVEN number. We can rewrite (x + 1) as 2n where n is an integer (the very definition of an even number). So our equation becomes…

((x^2 - 1) + 4(2n)) mod 8 = 0

Which is naturally…

((x^2 - 1) + 8n) mod 8 = 0

And we’re pretty much done. We know that (x^2 - 1) is divisible by eight due to our inductive hypothesis. We also know 8n is divisible by eight. Therefore the sum of the two numbers must be divisible by 8. And the proof is in the pudding.

Ok, some of you are probably thinking I am hand-waving that last conclusion, so I will quickly prove the last step. Since we know that (x^2 - 1) is divisible by eight, we can substitute 8m for it, where m is an integer (the very definition of a number divisible by eight).

That leaves us with…

(8m + 8n) mod 8 = 0

which factors to…

(8(m + n)) mod 8 = 0
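To compress the whole induction step into one line, in standard congruence notation:

```latex
\[
(x+2)^2 - 1 = (x^2 - 1) + 4(x+1) = 8m + 4(2n) = 8(m+n) \equiv 0 \pmod{8}
\]
```

where x^2 - 1 = 8m comes from the inductive hypothesis and x + 1 = 2n from x being odd.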

Conclude the proof for formality’s sake

And thus, the proposition is true for all odd integers x >= 1.

No need to thank me for a long and scintillating math post, but it’s been a loooong time since I’ve stretched my math muscles. This was a fun exercise in inductive proofs.

So how does an inductive proof prove anything? At first glance, for those unfamiliar with inductive proofs, it hardly seems like we proved anything. Our proof rests on an assumption. We stated that if our assumption is true for one odd number, then the next odd number must exhibit the same behavior. We went ahead and proved that to be true, but by itself that still leaves the possibility that the claim isn’t true for any odd number at all.

That’s where our base case comes in. We showed that for x = 1, it is indeed true. So since it is true for x = 1, we’ve proved it is true for x = 3. Since it is true for x = 3, we know it is true for x = 5. Ad infinitum.

And that concludes today’s math lesson.

UPDATE: Fixed a couple of typos. Thanks Jeremy! Also, optionsScalper lists a lot of great links about number theory and congruences in my comments. I applied his correct suggestion to clarify the mod operations by putting parentheses around the left-hand side.