code comments edit

branches Scott Allen writes about a Branching and Merging primer (doc) written by Chris Birmele. It is a short but useful tool agnostic look at branching and merging in the abstract. This is a nice complement to my favorite tutorial on source control, Eric Sink’s Source Control HOWTO.

Another useful resource on branching strategies is Steve Harman’s guide to branching with CVS.

The primer takes a tool agnostic look at branching and points out several branching models. One thing to keep in mind is that not every model makes use of your source control tool’s branching feature. In particular, let’s take a closer look at the Branch-per-task model. This model is almost universally in use via what I call implicit branches, which are private and not shared with other team members.

Using a pessimistic locking source control system like Visual SourceSafe (VSS), every time you check out a file (which grants you an exclusive lock on that file), you are implicitly making a branch as soon as you edit that file. However, this is not a branch that VSS recognizes. It is merely a branch by virtue of the fact that the code on your system is not the same as the code in the repository. Also consider that other team members may be making changes to other files in the same codebase, perhaps files that contain classes on which the file you are working on depends. So when you check that file back in, you are performing an implicit merge.

This type of implicit branching pretty much maps to the primer’s Branch-per-Task model. Optimistic locking source control systems such as CVS and Subversion make this implicit branching and merging a bit more explicit. Rather than checking out a file, you typically update your local workspace with the latest version from the repository and just start working on files. There is no need to exclusively lock files by checking them out, which only gives you the illusion of safety anyway.

When you are ready to commit your changes back into the system, you typically get latest again and merge any changes committed by other team members into your local workspace. Finally, you commit your local changes (assuming everything builds), resolving any merge conflicts (which may not be very likely since you just pulled all changes from the repository into your local workspace, unless there is a lot of repository activity going on).

The point here is to recognize that the implicit branching model (branch-per-task) is almost certainly already in use in your day to day work. It is not necessary to use your source control tool’s branching feature to employ this branching model, unless you need multiple developers working on that single task. In that case, you would create an explicit branch for that task so that it can be shared. However, keep in mind that when multiple developers work on an explicit branch, the branching and merging model for that individual branch will look like the implicit branch-per-task model I described above.

comments edit

I have been looking for this for a looong time.

Jon Galloway writes about two sites created by Dan Vine that allow you to enter a URL and display a screenshot of the website as rendered by IE7 or Safari. This is extremely useful for testing out web designs on alternate platforms. Much cheaper than buying a Mac to test your work in Safari.

I also want to commend Dan for such a clean design. Each site is like a well-written function: it does one thing and it does it well. A joy to use, really.

ieCapture iCapture

One thing to note, iCapture produces a screenshot of the entire page, potentially producing a tall image. At the time of this writing, ieCapture takes a screenshot of the actual browser window, which doesn’t show the entire document.

comments edit

Greg Young takes my Testing Mail Server to task and asks the question, what does it test that a mock provider doesn’t?

It is a very good question and as he points out in his blog post on the subject, it seems like a lot of overhead for very little benefit. For the most part, he is right.

In my defense, and as Greg points out, I would not start with such a test when writing email functionality. I would start with the mock email provider and follow the typical Red-Green TDD pattern of development. However, there are cases where this approach does not test enough, and this testing server was necessitated by some real world scenarios I ran into.

For example, in some situations, it is very important to understand the exact format of the raw SMTP message that is sent. Some systems actually use email from server to server to kick off automated tasks. In that situation, it helps to know that the SMTP message is formatted as expected by the receiving server. For example, you may want to make sure that the appropriate headers are sent and that the message is not a multi-part message. This approach lets you get at the raw SMTP message in a way that the mock provider approach cannot.

A more common issue is when sending mass mailings such as newsletters to subscribers. At one former employer, we had real difficulty getting our emails to land in our users’ mailboxes despite adhering to the appropriate SPAM laws and only mailing to subscribed users.

It turns out that actually landing a mass mailing, even to users who want the email, is very tough when dealing with Hotmail, Yahoo, and AOL accounts. Something as seemingly innocuous as the X-Mailer header value can trigger the spam filters.

In this case, this very much falls under the rubric of an integration test, as I am testing the actual mailing component in use. But I am not only testing the particular mailing component; I am also testing that my code uses the mailing component in a correct manner.

So in answer to the question, “Where’s the red bar?” The red bar comes into play when I write my unit test and assert that the X-Mailer header is missing. The green bar comes into play when I make sure to remove the header. I could probably test this with a mock object as well, but I have been burnt by mailing components that did not remove the X-Mailer header but simply set the value to blank, when I really intended it to be removed. That is not something a mock object would have told me.
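
To make that concrete, here is roughly what such a test looks like using my testing mail server. This is an illustrative sketch (it assumes the RawSmtpMessage property exposes the raw message as a string), not the exact test from the Subtext codebase.

DotNetOpenMailProvider provider = new DotNetOpenMailProvider();
NameValueCollection configValue = new NameValueCollection();
configValue["smtpServer"] = "127.0.0.1";
configValue["port"] = "8081";
provider.Initialize("providerTest", configValue);

TestSmtpServer receivingServer = new TestSmtpServer();
try
{
    receivingServer.Start("127.0.0.1", 8081);
    provider.Send("to@example.com",
        "from@example.com",
        "Subject to nothing",
        "Does this message include an X-Mailer header?");
}
finally
{
    receivingServer.Stop();
}

ReceivedEmailMessage received = receivingServer.Inbox[0];

// Red bar: this fails if the component merely blanks the
// X-Mailer header rather than removing it entirely.
Assert.IsFalse(received.RawSmtpMessage.Contains("X-Mailer"));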

comments edit

In spirit, this is a follow-up to my recent post on unit-testing email functionality.

This probably doesn’t apply to those of you who have reached O/RM nirvana with such tools as NHibernate etc… ADO is probably just a distant memory, not unlike those vague embarrassing recollections of soiling yourself in a public location long ago.

But I digress.

For the rest of us, we sometimes need to get knee-deep in ADO. For example, even though Subtext abstracts away the data access via the provider model, I still want to test the data provider itself, right?

Subtext’s data provider has a series of methods that each call a stored procedure and return an IDataReader. The returned data reader is passed to another class which populates entity objects using the data reader.

In my unit tests, it would be nice to have a means to attach a data reader to an in-memory object structure rather than directly to the database. That is where my StubDataReader class comes into play.

It implements the IDataReader interface and provides a quick and dirty way to create an in-memory object structure. By quick and dirty I mean that you do not need to build out a table schema first. The code to set up the stub data reader is quite simple.

Single Result Set

If you are dealing with a Data Reader that should only return one result set (which seems to be the vast majority of cases), then setting it up would look like this:

DateTime testDate = DateTime.Now;
StubResultSet resultSet 
   = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);
resultSet.AddRow(2, "Test2", testDate.AddDays(1));
            
StubDataReader reader = new StubDataReader(resultSet);

//Advance to first row.
Assert.IsTrue(reader.Read());

// Assertions            
Assert.AreEqual(1, reader["col0"]);
Assert.AreEqual("Test", reader["col1"]);
Assert.AreEqual(testDate, reader["col2"]);

In the above snippet, I create an instance of StubResultSet with a list of the column names. I then make a couple of calls to AddRow. Notice that AddRow takes in a param array of object instances. This is the quick and dirty part. Since the StubDataReader doesn’t require setting up a schema beforehand, it will not validate that the objects added to the columns of the rows are the correct type. It just doesn’t have that information. But this isn’t all that important since this class is specifically for use in unit testing scenarios.
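
If you are curious how little machinery this requires, here is a simplified sketch of the core idea (the real code splits these responsibilities between StubResultSet and StubDataReader, so go look at the actual source in our Subversion repository rather than treating this as gospel):

public class StubResultSet
{
    private string[] fieldNames;
    private List<object[]> rows = new List<object[]>();
    private int currentRowIndex = -1;

    public StubResultSet(params string[] fieldNames)
    {
        this.fieldNames = fieldNames;
    }

    public void AddRow(params object[] values)
    {
        // There is no schema to validate against, so we simply
        // trust that the caller passed sensible values.
        rows.Add(values);
    }

    public bool Read()
    {
        // Advances to the next row, returning false when done.
        return ++currentRowIndex < rows.Count;
    }

    public object this[string name]
    {
        // Looks up the column by name within the current row.
        get { return rows[currentRowIndex][Array.IndexOf(fieldNames, name)]; }
    }
}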

Multiple Result Sets

Not everyone realizes this, but you can iterate over multiple result sets with a single data reader instance. Simulating that scenario is quite easy.

DateTime testDate = DateTime.Now;
StubResultSet resultSet 
   = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);

StubResultSet anotherResultSet 
   = new StubResultSet("first", "second");
anotherResultSet.AddRow((decimal)1.618, "Foo");
anotherResultSet.AddRow((decimal)2.718, "Bar");
anotherResultSet.AddRow((decimal)3.142, "Baz");

StubDataReader reader 
   = new StubDataReader(resultSet, anotherResultSet);

//Advance to first row.
Assert.IsTrue(reader.Read());

Assert.AreEqual(1, reader["col0"]);

//Advance to second ResultSet.
Assert.IsTrue(reader.NextResult(), "Expected next result set");

//Advance to first row.
Assert.IsTrue(reader.Read());
Assert.AreEqual((decimal)1.618, reader["first"]);
Assert.AreEqual("Foo", reader["second"]);

In this snippet, I create two StubResultSet instances and pass them to the constructor of the StubDataReader. Afterwards, you can see that the code makes sure to test that NextResult functions properly.

The code snippets above are excerpts from the unit tests I wrote for this code. Although this code is more complete than the mail server example, there are a couple of methods that haven’t been well tested because I have never run into a situation in which I needed them. I put in various comments, so feel free to improve this and let me know about it. This code is within the UnitTests.Subtext project in the Subtext solution in our Subversion repository.

You can download the code here, but as before, I do not guarantee I will update the link to have the latest code. You can access our Subversion repository for the latest.

comments edit

Mud Bath

My wife received a free day at the Glen Ivy Hot Springs Spa from our friends Dan and Judy (the same Dan to whom my last non-geek post was dedicated).

So the four of us headed over there yesterday for a day of relaxation. The day consisted of sitting in a stinky hot sulfur mineral jacuzzi bath, then swimming in the pool, taking a nap, eating lunch, and finally covering ourselves from head to toe in mud.

I was a little too aggressive in covering myself in mud, slathering it on and getting plenty of it in my eyes. I didn’t pay attention to the memo to rub it around the eyes and not in the eyes. You don’t say!

I remember as a kid always being admonished about getting too muddy when playing outside. Now as an adult, I pay for the experience. Must be some form of latent rebellion.

comments edit

Mail

So you are coding along riding that TDD high when you reach the point at which your code needs to send an email. What do you do now?

You might consider writing something that looks like:

EmailMessage email = new EmailMessage();
email.FromAddress = new EmailAddress(from);
email.AddToAddress(new EmailAddress(to));
email.Subject = subject;
email.BodyText = message;

SmtpServer smtpServer = new SmtpServer(smtpServerName, port);
email.Send(smtpServer);

But you, being a TDD god(dess), know better and quickly refactor that into some sleek code that uses an EmailProvider. This ensures that your code is not tied to any specific email implementation and will make unit testing your code easier. Just swap out your concrete email provider for a unit test specific email provider. Now your code looks like:

EmailProvider.Instance().Send(to, from, subject, message);
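
If you are wondering what that abstraction looks like, its shape is roughly the following. This is an illustrative sketch, not Subtext’s exact EmailProvider; the static setter is what lets a unit test swap in a fake provider.

public abstract class EmailProvider
{
    private static EmailProvider instance;

    public static EmailProvider Instance()
    {
        // In a full provider model implementation, the concrete
        // provider would be loaded from web.config on first use.
        return instance;
    }

    // Unit tests can swap in a mock or fake provider here.
    public static void SetInstance(EmailProvider provider)
    {
        instance = provider;
    }

    public abstract void Send(string to, string from, string subject, string message);
}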

But a nagging thought still pulls at the edge of your consciousness. “Shouldn’t I unit test my concrete email provider and actually make sure the email gets sent correctly?”

I certainly think so. As for the semantic arguments around whether this really constitutes an Integration Test as opposed to a Unit Test, please don’t bore me with your hang-ups. Either way, it deserves a test, and what better way to test it than using something like MbUnit or NUnit?

Wouldn’t it be nice to test your email code like so?

DotNetOpenMailProvider provider = new DotNetOpenMailProvider();
NameValueCollection configValue = new NameValueCollection();
configValue["smtpServer"] = "127.0.0.1";
configValue["port"] = "8081";
provider.Initialize("providerTest", configValue);

TestSmtpServer receivingServer = new TestSmtpServer();
try
{
    receivingServer.Start("127.0.0.1", 8081);
    provider.Send("phil@example.com", 
                "nobody@example.com", 
                "Subject to nothing", 
                "Mr. Watson. Come here. I need you.");
}
finally
{
    receivingServer.Stop();
}

// So Did It Work?
Assert.AreEqual(1, receivingServer.Inbox.Count);
ReceivedEmailMessage received = receivingServer.Inbox[0];
Assert.AreEqual("phil@example.com", received.ToAddress.Email);

That there code starts up a mail server, sends an email to it, and then checks that the mail server received the email. It also quickly checks the to address.

This is a snippet of an actual unit test within the Subtext codebase.

A long while ago I discovered a wonderful .NET based freeware mail server written by Ivar Lumi. I decided to write a wrapper specifically for unit testing scenarios. I added the TestSmtpServer to a new project named Subtext.UnitTesting.Servers in the Subtext VS.NET solution.

The wrapper parses incoming SMTP messages and adds a ReceivedEmailMessage instance to the Inbox custom collection. This makes it easy to quickly examine the email messages sent via SMTP in your unit test.

As this is a very early draft, there are some key limitations. I have yet to implement multi-part messages and attachments in the object model. I also punted on dealing with multiple to addresses. However, the ReceivedEmailMessage class does have a RawSmtpMessage property you can examine. For now, it works very well for simple text based emails.

Over time, I hope to implement these more complicated testing features as the need arises. However, if you find this useful and would like to contribute, please do!

If you want to view the latest code, check out these instructions for downloading the latest Subtext code using Subversion.

Or you can simply download this one project here, though keep in mind that I will be updating this project, but not necessarily this link to the project.

Since the project is specifically for unit testing purposes, I went ahead and embedded the unit tests for this server within the project itself using MbUnit references. However, you can simply swap out the assemblies and references to use NUnit if that is your preference.

comments edit

Last night we went out with friends to celebrate Akumi’s birthday. Somehow the topic of my blog came up in conversation. Perhaps I have a tendency to interject the topic of “blogging” every chance I get. I can be annoying that way.

In any case, my friend Dan points out that he wishes I would write a little more personal content. His poor eyes get tired from scrolling down through the reams of code which is all meaningless gibberish to him.

Well Dan, this post is for you.

Unfortunately, I have nothing to say in this particular post. I slept in late today and my day is just beginning. I have a bit of work to do for a client, so I need to get my head down into coding.

In the meanwhile, for all my friends who don’t care for my technical gibberish, you can subscribe to my Day to Day (rss) category. This contains all my non-technical posts 100% free of code.

comments edit

I noticed this odd post on SimpleBits, Dan Cederholm’s website. It is a list of words that he can easily type with his left hand. One has to ask for what reason he is keeping his right hand free? But I, being a man of good taste, won’t go there.

For me, this list is quite different. Several years ago I was suffering from a lot of wrist pain due to typing. I started looking into all sorts of remedies. One remedy I tried was taking some time to learn the Dvorak Simplified Keyboard layout. My coworker at the time (and now business partner) Micah also did the same.

We simply full on took the plunge at work. It was a slow period so we downloaded a little practice app, switched our regional settings to Dvorak, and started practicing. When I had to respond to emails or write code, it was quite laughable how slow and clumsy I was…at first.

Soon enough I picked up speed and probably type faster in Dvorak than I ever did in QWERTY. Since I never got around to buying a Dvorak keyboard, I was forced to really learn touch typing. If you watched me type slowly on a keyboard, it would confuse the heck out of you, as I would be hitting all the wrong keys to produce the right letters.

In any case, here are a few words that I can type with my left hand using the Dvorak layout.

  • puke
  • pee
  • keep
  • peak
  • quake
  • pique
  • oak
  • quux (metasyntactic variable such as foo, bar, baz)

That is quite a limited vocabulary.

comments edit

I don’t write much about my personal life here because most days are pretty mundane and not unlike other days I’ve had. If I were to write about my day, most entries would look like the following…

Today I woke up, had some breakfast, said goodbye to the wife, read my blogs, wrote some code, walked the dog, said hello to the wife, ate dinner, spent time with the wife, worked some more, snuck in a bit of Oblivion, went to sleep.

What a travesty of a run-on sentence!

So, my dear readers, I have done you a service of sparing you the banality of my life.

However, this weekend is a bit special, as my wife’s family (mom, brother, and brother’s wife) are in town from Japan to observe the one year anniversary of her otosan (dad) passing away.

While last year was an understandably somber affair, this year has been very light and fun. We drove down to Chula Vista to visit the location in which he was found. Afterwards, we drove up to San Diego and had the best sushi around at Sushi Ota. Mr. Ota (or Ota-san as we call him) is a family friend and took very good care of us, making all sorts of creative interstitial treats between our orders.

Jon Galloway also stopped by the Residence Inn where we were staying so I could trash him in table tennis. I had to lighten up on my vicious serve a bit otherwise it just would’ve been ugly.

We also took a boat ride in Oceanside to the point at which we spread Otosan’s ashes. My brother-in-law took some great photos, such as the sea lions basking on a buoy.

Sea Lions

Every time I ride the boat I start to wonder what it would be like to sell our place and live on a boat. But I realize they have the same parking congestion that we have.

Today I am back in Los Angeles and back to work while they are out shopping. It is interesting to see their shopping choices. They were so excited to purchase some sets of tupperware at Ikea because it was a fraction of the cost of similar containers in Tokyo.

company culture comments edit

Implied policies are policies that are never written in any employee manual, but are implied due to real world practices or are side effects of explicit policies. The classic example is when an employee gives notice to an employer and the employer counter-offers with a raise. In some cases, it is a raise that was refused earlier.

This was recently well illustrated by Scott Adams in the Dilbert comic strip on May 14.

This is probably all too common in many workplaces. I certainly have worked at places in which the only means of receiving a raise was to threaten to quit. At one workplace, I knew of a couple of coworkers who over the years each threatened to quit several times, receiving a raise or some other form of compensation each time.

In most cases, this is symptomatic of a dysfunctional work environment that is incapable of valuing employees and paying them what they are worth.

Good managers pay attention to implied policies as much as they do the explicit policies. This is sometimes easier said than done, as it is not always clear what unintended side-effects a policy might create. Mary Poppendieck highlights several examples (pdf) of the unintended side-effects of common popular compensation policies. The recent announcement to dismiss the infamous Microsoft Curve is perhaps a recognition of the negative side effects of peer competitive approaches to compensation.

Johanna Rothman points out another implied policy when management is unwilling to budge on any of the four key constraints of software development:

  • Resources
  • Quality
  • Scope
  • Time

If management stubbornly persists in asking for all features (scope) without being willing to budge on time, resources, or quality, then management is making an implicit decision. As Johanna states (and I reword), not making a decision is an implicit decision. By not deciding which features to prioritize, management is effectively delegating strategic decisions concerning which projects to staff and which to postpone.

Once you start taking a hard look at your workplace, you can probably come up with a laundry list of implicit policies. What are some of the ones you’ve experienced?

personal comments edit

Security expert Bruce Schneier writes a fantastic essay on the value of privacy. This is a great response to the rhetorical question “If you aren’t doing anything wrong, what do you have to hide?” often used to counter privacy advocates.

A couple of key points he makes:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

It reminds me of this political cartoon in the paper today.

What do terrorists hate?

via the Washington Post

comments edit

Not too long ago I mentioned that the Subtext team is using CruiseControl.NET for continuous integration. Well Simone Chiaretta, the developer who set this up, wrote up an article describing Continuous Integration and the various utilities that Subtext uses in its CI process.

As you can see in the screenshot, the last build succeeded. Check out this small snippet from our NCover report.

As you can see, we have a bit of work to do. But remember, code coverage isn’t everything.

comments edit

Better grab this before they take away my DNN license. But first, let me give you a bit of background.

Background

Past versions of DotNetNuke typically came with a source code release and an installation release. Many developers (myself included) look at DNN as a platform and prefer not to touch the DNN source code. Once you start tweaking the source code, you open up a world of headaches if you plan on upgrading to the next version of DNN, since you add the pain of having to migrate your own changes. DNN provides plenty of integration and extensibility points so that, for the most part, touching the source code is unnecessary.

Instead, I set up my projects to only reference the DNN assemblies and include the *.aspx, *.ascx, etc… files without the code behind. If you’ve worked with DNN before, you may be familiar with the My Modules technique which included the famous _DNNStub project.

But now comes ASP.NET 2.0 which introduces a new web project model. To put it mildly, there was a bit of a negative reaction in some circles of the community around this new project model, which to be fair, serves its purpose but is not for everybody.

Naturally, when DNN 4.* was released, it was built upon this new model. Unfortunately for module developers used to the existing manner of development, the recommended method for developing modules now involves adding code directly into the special App_Code directory of the DNN web project. Shaun Walker, the creator and maintainer of DNN, wrote up a helpful guide to module development for DNN 4.* using the new Starter Kits.

Web Application Projects Introduced

But now that Microsoft released the new ASP.NET 2.0 Web Application Projects model, I thought there had to be a better way to develop modules that took advantage of the Web Application projects and was more in line with the old manner of doing it. I figured it couldn’t be that hard.

Also, I wanted to take advantage of the WebDev.WebServer (aka Cassini) that comes with VS.NET 2005. Shaun had mentioned that they had problems with running DNN using it, but I had to see for myself. The benefits of a completely self-contained build as well as being able to run the local development site on a webroot (for example http://localhost:8080/) on WinXP was well worth an attempt.

Web Application Projects Unleashed

So after installing the Web Application Project templates and add-in, I created a new web application project in VS.NET. To give myself a bit of a challenge (and since I may decide to add a custom page for some reason later), I chose to create a C# project as shown in the screenshot.

New Web Application Project Dialog

As per my usual process, I created a folder named ExternalDependencies in the project and copied all the DNN assemblies from the Installation distribution (DotNetNuke_4.0.3_Install.zip) into that folder (this is just the way I roll). To add those assemblies as assembly references, I right-clicked the project, selected Add Reference, and then selected all the assemblies in that folder.

Add Reference Dialog

The next step was to add the special App_GlobalResources folder to the project by simply right clicking on the project and selecting Add | Add ASP.NET Folder | App_GlobalResources.

Adding Global Resources Context Menu

After copying the contents of App_GlobalResources from the installation distribution into that folder, I copied all the other non-code files, *.ascx, *.aspx etc… into the project. At this point I was almost done getting the basic project tree set up. The one last issue to deal with was the code-behind for Global.asax. Even with an installation distribution of DNN 4, this is included because under the Web Site project model, it gets compiled at runtime (unless pre-deploying). Personally, I think this code could be put in an HttpModule. In any case, I translated the file into C#. This was actually a bit trickier than I expected because of the use of global variables.
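
For the curious, the HttpModule approach would look something like the following bare-bones sketch. The class name is hypothetical, the DNN-specific logic would move into the event handlers, and the module would need to be registered in the httpModules section of web.config.

using System;
using System.Web;

public class DnnApplicationModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Wire up the same pipeline events that Global.asax handles.
        application.BeginRequest += new EventHandler(OnBeginRequest);
        application.AuthenticateRequest += new EventHandler(OnAuthenticateRequest);
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        // Per-request initialization from Global.asax would go here.
    }

    private void OnAuthenticateRequest(object sender, EventArgs e)
    {
        // Authentication logic from Global.asax would go here.
    }

    public void Dispose()
    {
    }
}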

After completing these steps, I renamed release.config to web.config, updated the connection string, and hit CTRL+F5. The WebDev.Webserver started up pointing to the web application project using the URL http://localhost:2334/ (your results may vary) and it all worked!

One major benefit to using WebDev.WebServer is that getting this site running on a new development machine takes one less step. No need to futz around with IIS. Not only that, since I do my development on Windows XP which only allows one website, I used to have to develop DNN sites in a virtual application. This caused a problem when deploying the site because static image and css file references had to be updated.

With this approach, my URLs on my dev server match the URLs in the production site. One caveat to be aware of is that this approach only works if you are not using any special features of IIS. I recommend testing on a staging server that is running IIS before deploying to a production server with IIS. I only use Cassini for development purposes, not to actually host a site.

Module Development

I went ahead and added some pre-existing modules to the project (upgrading them to .NET 2.0) as separate projects. I was able to add project references from my Web Application Project to the individual module projects. As far as I can tell, there is no longer the need to have a BuildSupport project with this approach.

Download

To save you some time, I am including the barebones solution and project here, based on the DNN 4.0.3 distribution.

Keep in mind that this is a “pre-install” project meaning that after you set it up, you will need to rename release.config to web.config and update the connection string settings to point to your database. Afterwards, hit CTRL+F5 and walk through the DNN web-based installation process. That process will make filesystem changes so make sure you have appropriate write access.

Let me know if this works for you or if you find any mistakes, problems, issues with it.

personal, tech comments edit

So Adam Kinney isn’t quite as ga-ga over Oblivion as I. Understandable. As he points out, it is missing the key ingredient of social interaction with other real humans.

Now why would you want to interact with other humans when you have the computer? ;) I suppose it is true that conversation via a drop down list doesn’t produce quite as stimulating a conversation. What if the AI reaches the point that a game like Oblivion is indistinguishable from an online multi-player game? Would that be as satisfying?

I digress. As Adam states,

I don’t think I’ve ever enjoyed any RPG video game as much as carefree pencil, paper and dice role-playing from the high school years.

Well that’s because no amount of HDR lighting, anti-aliasing, or large texture maps is going to match the lighting effects and graphics going on in your noggin.

I admit, I was into the paper and dice game back in the day. I lived in Guam at the time and kept it on the D-L for very self-conscious reasons. The funny part is that my friends, all in different circles (Hawaiian volleyball player, skateboarder, heavy metal dude, African American dude, etc…) didn’t know there was any stigma (imagined or real) to the game. I would cringe when they would tell our friends we were heading to so and so’s house to play Dungeons and Dragons.

But again, I digress…

My company regularly hosts internal conference calls via Skype. It got me thinking one day that Skype would be a wonderful means to play paper and dice Role Playing Games. The difficulty in getting a game together after high school was not only the lack of time, but also the sparseness of interested parties. There is no way you are going to get six people to drive across town to meet all on the same day and time.

With Skype, geographical location is no longer a limitation. Granted you still lose some of the benefits of physical presence such as passing the Doritos and knocking over your friend’s figurine when he accidentally hits you with his fireball. But at least you have a much larger pool of people to choose from to start a game. Is anyone doing this?

comments edit

When starting a new DotNetNuke based website, I like to develop it on my local machine, and when everything is ready for a first deployment, I deploy to whatever staging or production server is relevant.

This has worked fine over the years, but I ran into a problem recently when applying this approach to DNN 4.0.3. I had everything working just fine on my local machine, but after deploying to our production server, I could not get the site to work. It would give me some message about a NullReferenceException when trying to get the portal.

Opening up Query Analyzer, I could select the records from the dnn_PortalAlias table and see that everything matched up. I banged my head on this for a long time.

I finally had the idea to change the connection string to point to a brand new database. I thought maybe I would find some discrepancy in the database records. Perhaps I had deleted something important. After the change, I hit the site, which invoked the web-based installation process. Once that was complete, I tried to get a list of records from dnn_PortalAlias and got an error message: Invalid object name 'dnn_PortalAlias'. Huh?

Executing sp_tables showed there was no dnn_PortalAlias table. Instead, there was a PortalAlias table. Aha! I looked in web.config and indeed the ObjectQualifier value was set to the empty string. So how did that change from my development machine to the production machine?

Well the source zip archive for DNN 4.0 ships with two config files. One named development.config and one named release.config. Before deploying, you are supposed to rename release.config to web.config. However, I had assumed that on my local machine, I could simply rename development.config to web.config for development purposes. I assumed that the only differences were in some debug settings. Boy was I wrong!

It turns out that the ObjectQualifier setting was set to dnn_ in development.config. This is the value I would expect as this was the typical installation I used in previous versions. In any case, I hope this saves you time if you happen to run into it. The fix on my production server was simply to change the ObjectQualifier value to be dnn_.
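
For reference, the setting lives in the data provider section of web.config. In DNN 4 the relevant entry looks roughly like this (attributes trimmed for brevity; check your own config file for the exact shape):

<data defaultProvider="SqlDataProvider">
  <providers>
    <clear />
    <add name="SqlDataProvider"
         type="DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
         connectionStringName="SiteSqlServer"
         objectQualifier="dnn_"
         databaseOwner="dbo" />
  </providers>
</data>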

comments edit

Seems like everyone and their mother has an opinion on the “right” way to have comment threads. Currently Subtext supports the same model as .TEXT did, a simple linear sequential list of comments. It is simple and gets the job done.

The 37Signals blog addresses the question of comments and presents several examples of how different sites handle it.

Personally I like the first example. It retains the simplicity and fluidity of the linear approach, while adding a bit of useful meta-data. What do you think?

comments edit

Party Mode Button

It is so easy to get caught up in your day to day work and home duties and forget to take a break to really cut loose. The answer, my friends, is that big red button in the image to the left there. That there initiates Party Mode! Set this sucker up in your office or home bedroom, and whenever life catches up to you… Instant Party!

This here is the invention of some MIT students who pimped their dorm room with an instant rave setup. We are talking six video cameras, electric blinds, lights, lasers, LED screens, a music server, voice activation, blacklights, a fog machine, etc etc…

Scroll down to see a couple of videos they posted of the setup in action. Now all they have to do to complete the club experience is charge $5 for a bottle of water and $12 for a crappily mixed drink in a plastic cup. Brilliant!

comments edit

Lest you think I sit around spending all my time on computer games and soccer, I also try to write occasionally.

Today an article I have been working on for a while was finally published on DevSource. It is entitled A Developer’s Introduction to Microformats and attempts to present a clear introductory look at Microformats. This is my second article for DevSource, the first being one I helped that crazy Bob Reselman write.

I was fired up to write this article after attending the Mix06 conference. Hearing Bill Gates mention Microformats (whether O’Reilly fed it to him or not) highlights the fact that Microformats are poised to really take off. There are some detractors and potential real problems with syndicating Microformats, so it will be interesting to see how they are solved.

In any case, check it out and let me know what you think. Did I present it well?

And before I forget, big ups to the Microformats mailing list for helping me think through some of these topics I covered.

comments edit

I once thought I was a bit of a blogging addict. To get settled into work I would read my blogs. I’d tune back in while eating my lunch. And if I went on vacation, I thought about the huge number of unread feeds. Heck, I even went and got involved in RSS Bandit and Subtext so that I could work on the means of delivering blogs.

Oblivion Box

But now I realize that my blogging addiction was merely the mild craving for milk after a cookie. I have discovered what true addiction is, and its name is Oblivion.

Steve Yegge was right when he said…

…if you’re not playing Oblivion, then I highly, nay strongly recommend that you don’t start, or you’ll suddenly develop an aversion to Real Life…

This is quite simply the best computer game I have ever had the pleasure to play. I remember spending hours as a kid playing such classics as Phantasie, Ultima III, Ultima IV, The Bard’s Tale, and Dungeon Master. Dungeon Master at the time elevated the FRPG genre for me because it was the first that really incorporated first person realtime playing. But I remember drawing up plans for the ultimate game. Apparently Bethesda swiped those plans from my brain and decided to do even better.

So why is this game so damn addicting? It is a combination of a lot of things, really. First, the skill based system really seems to mean something. I remember there was never a point in playing a thief in most role playing games because you would just get killed first. Most games were simply hack and slash: fight your way out of every situation.

Oblivion Screenshot

But with Oblivion, you have the opportunity to really put those sneaking and lockpicking skills to good use in daring missions where simply blasting your way through really isn’t a good option. I also like the fact that lock-picking isn’t simply rolling a die and comparing it to a skill (though you can resort to that option). You have the ability to actually try and pick that lock.

If there were no other characters in the game, it would be like Myst, but with the ability to fully explore your environment. The scenery in this game is jaw dropping.

But ultimately, I think the open-ended gameplay really kicks it up a notch. After a short stint as a gladiator (got my ass handed to me), my character is now working his way up the Thieves Guild and trying to advance in the Mages Guild. At the beginning of the game, some important Emperor got shanked and I am supposed to deliver his amulet somewhere, but I sort of got sidetracked.

Now I am traveling around, checking out the scenery, and getting way too little sleep. I suppose I should look into delivering this amulet, but first I have some pilfered goods to fence and I want to help this half-orc reclaim his heritage.