humor

As you may well know, today is June 6, 2006, or in shorthand notation, 6/6/06: the mark of the beast. As the Church Lady would say, “mmmmm, isn’t that special?”

Dana Carvey as the Church Lady

This day reminds me of a song I heard a loooong time ago called “Sweat Loaf” by the Butthole Surfers (pleasant name). It starts off with a nice conversation between an all-American boy and his all-American dad.

Son: Daddy?

Dad: Yes, son?

Son: What does regret mean?

Dad: Well, a funny thing about regret is that it’s better to regret something you have done than to regret something you haven’t done… Oh, and if you see your mother this weekend, remember to tell her: SATAN, SATAN, SATAN!

So in commemoration of this numerologically significant day, be sure to show the evil horns gesture to somebody today and include a little head banging. Our President will lead us with a showing of the horns.

Bush Horns

I always knew he was in with the Devil

tdd

If you’ve read my blog for any length of time, you know that I am a fan of the MbUnit generative unit test framework.

What I haven’t been a fan of is linking to the MbUnit website. If you want to refer someone to NUnit, you simply link to nunit.org. But if you want to refer someone to MbUnit, you have to type out a monstrosity of a URL.

So in addition to complaining about it, I decided to do something about it. I purchased mbunit.com and pointed it to their website. Now when I mention MbUnit in a post, I can spare my fingers a few keystrokes.

One other issue I hope this helps solve is that the MbUnit website comes up second in Google search results; first is its old hosting location. Hopefully everyone will start updating their links to point to mbunit.com instead and help fix that.

open source, blogging

Recently I highlighted a site named DotNetKicks, which is like Digg, but targeted to .NET technology. In particular, I thought it was a smart move for them to share their advertising revenue with those who submit stories.

Well, to make it easier to kick stories from the convenience of your favorite RSS aggregator, I wrote an IBlogExtension plugin so that you can submit/kick/unkick stories from RSS Bandit or any RSS aggregator that supports the IBlogExtension plugin model (.NET 1.1 must also be on the machine).
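For the curious, the IBlogExtension contract that RSS Bandit consumes lives in the Syndication.Extensibility namespace and, writing from memory, looks roughly like the sketch below. Treat the exact member names and signatures as best-effort recollection rather than authoritative:

```csharp
using System.Windows.Forms;
using System.Xml.XPath;

namespace Syndication.Extensibility
{
    // Rough sketch of the contract an aggregator plugin implements
    // (from memory; member names are best-effort, not authoritative).
    public interface IBlogExtension
    {
        // Text shown in the aggregator's context menu.
        string DisplayName { get; }

        // Whether the plugin offers a configuration dialog.
        bool HasConfiguration { get; }

        // Whether the plugin shows its own UI when invoked on an item.
        bool HasEditingGUI { get; }

        // Display the configuration dialog.
        void Configure(IWin32Window parent);

        // Invoked with the RSS item the user right-clicked.
        void BlogItem(IXPathNavigable rssFragment, bool edited);
    }
}
```

A plugin like the DotNetKicks one implements this interface in a class library, and the aggregator discovers it by scanning the plugins directory.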

Just download and unzip the extension to your plugins directory. The default location for RSS Bandit would be c:\Program Files\RssBandit\plugins.

Once you have it installed, restart RSS Bandit, right-click on any feed item, and select the DotNetKick This - Configure… menu option.

Context Menu

This will bring up the plugin configuration dialog. You should leave the URLs as they are; I left them configurable in case the URL ever changes. Just enter your DotNetKicks username and password and click OK.

This will save your username and password in an XML settings file with the password heavily encrypted.

Now you can right-click on a story to submit it to DotNetKicks. If the story hasn’t been submitted yet, you will get the following dialog.

Submit a Story Dialog

This form is pretty self-explanatory.

If a story has already been submitted, you will see the following dialog, which allows you to kick or unkick it (essentially adding your vote to the story or removing it). Editor’s note: at the time of this writing, the unkick function was not working.

Kick/Unkick a story dialog

The API for DotNetKicks was published today on Gavin Joyce’s wiki. This was quite a turnaround, as I emailed him on Friday asking if there was a web-based API. We went back and forth formulating the API, and he said he would work on it over the weekend. This morning, he sent me the URL to his wiki page describing the API! Much of the API was inspired by the del.icio.us API.

If you want to learn to write an IBlogExtension from scratch, check out my tutorial here. In this case, I was able to get a jumpstart by using Dare’s excellent plugin as a starting point.

Another plugin I wrote a while ago is the improved w.bloggar plugin for RSS Bandit that should hopefully be included in the next release of RSS Bandit.

Once again, in case you missed the first link to this DotNetKicks plugin, [DOWNLOAD] it.

blogging

I recently learned about DotNetKicks due to a referral in my referrer logs. It is essentially a Digg knockoff, but tightly focused on the .NET community, which makes it a nice complement to Digg.

Recently, security expert Bruce Schneier wrote a piece on how security can be improved by aligning interest with capability. One example he gives is how some retail stores promise refunds if you don’t get a receipt. This keeps the employee working the register honest by aligning the store owner’s security interest with the interests of the customers. The owner is effectively hiring the customers to keep employees honest.

In a similar manner, DotNetKicks has done the same thing to promote its own growth. If you have an AdSense account, you can submit your AdSense ID and earn 50% of the advertisement revenue on the site for all stories that you submit. This aligns story submitters’ interests with DotNetKicks’ interest and should go a long way toward ensuring that more stories are submitted.

Of course, the signal-to-noise ratio of submitted stories may go down as a result, but that is hardly a concern, because only the stories the community deems interesting, via the voting system, percolate to the front page and have any chance of really generating advertising revenue. The question I have is: what is the incentive for users to kick stories, apart from the community benefits?

code

branches

Scott Allen writes about a Branching and Merging primer (doc) written by Chris Birmele. It is a short but useful, tool-agnostic look at branching and merging in the abstract. This is a nice complement to my favorite tutorial on source control, Eric Sink’s Source Control HOWTO.

Another useful resource on branching strategies is Steve Harman’s guide to branching with CVS.

The primer takes a tool-agnostic look at branching and points out several branching models. One thing to keep in mind is that not every model makes use of your source control tool’s branching feature. In particular, let’s take a closer look at the branch-per-task model. This model is almost universally in use via what I call implicit branches, which are private and not shared with other team members.

Using a pessimistic-locking source control system like Visual SourceSafe (VSS), every time you check out a file (which grants you an exclusive lock on that file), you are implicitly creating a branch as soon as you edit that file. However, this is not a branch that VSS recognizes; it is a branch only by virtue of the fact that the code on your system is no longer the same as the code in the repository. Also consider that other team members may be making changes to other files in the same codebase, perhaps files containing classes on which the file you are working on depends. So when you check that file back in, you are performing an implicit merge.

This type of implicit branching pretty much maps to the primer’s branch-per-task model. Optimistic-locking source control systems such as CVS and Subversion make this implicit branching and merging a bit more explicit. Rather than checking out a file, you typically update your local workspace with the latest version from the repository and just start working on files. There is no need to exclusively lock files by checking them out, which only gives you the illusion of safety.

When you are ready to commit your changes back into the repository, you typically get the latest version again and merge any changes committed by other team members into your local workspace. Finally, you commit your local changes (assuming everything builds) and resolve any merge conflicts (which are unlikely, since you just pulled all changes from the repository into your local workspace, unless there is a lot of repository activity going on).

The point here is to recognize that the implicit branching model (branch-per-task) is almost certainly already in use in your day-to-day work. It is not necessary to employ your source control tool’s branching feature to use this model, unless you need multiple developers working on a single task. In that case, you would create an explicit branch for the task so that it can be shared. However, keep in mind that when multiple developers work on an explicit branch, the branching and merging model within that branch will look like the implicit branch-per-task model I described.


I have been looking for this for a looong time.

Jon Galloway writes about two sites created by Dan Vine that allow you to enter a URL and see a screenshot of the website as rendered by IE7 or Safari. This is extremely useful for testing web designs on alternate platforms. Much cheaper than buying a Mac just to test your work in Safari.

I also want to commend Dan for such a clean design. Each site is like a well-written function: it does one thing and it does it well. A joy to use, really.

ieCapture iCapture

One thing to note: iCapture produces a screenshot of the entire page, potentially producing a tall image. At the time of this writing, ieCapture takes a screenshot of the actual browser window, which doesn’t show the entire document.


Greg Young takes my Testing Mail Server to task and asks the question, what does it test that a mock provider doesn’t?

It is a very good question and as he points out in his blog post on the subject, it seems like a lot of overhead for very little benefit. For the most part, he is right.

In my defense, and as Greg points out, I would not start with such a test when writing email functionality. I would start with the mock email provider and follow the typical red-green TDD pattern of development. However, there are cases where this approach does not test enough, and this testing server was necessitated by some real-world scenarios I ran into.

For example, in some situations, it is very important to understand the exact format of the raw SMTP message that is sent. Some systems actually use email from server to server to kick off automated tasks. In that situation, it helps to know that the SMTP message is formatted as expected by the receiving server. For example, you may want to make sure that the appropriate headers are sent and that the message is not a multi-part message. This approach lets you get at the raw SMTP message in a way that the mock provider approach cannot.

A more common issue is when sending mass mailings, such as newsletters, to subscribers. At one former employer, we had real difficulty getting our emails to land in our users’ mailboxes despite adhering to the appropriate SPAM laws and only mailing to subscribed users.

It turns out that actually landing a mass mailing, even to users who want the email, is very tough when dealing with Hotmail, Yahoo, and AOL accounts. Something as seemingly innocuous as the X-Mailer header value can trigger the spam filters.

In this case, this very much falls under the rubric of an integration test, as I am testing the actual mailing component in use. But I am not only testing the particular mailing component; I am also testing that my code uses the mailing component in a correct manner.

So in answer to the question “Where’s the red bar?”: the red bar comes into play when I write my unit test and assert that the X-Mailer header is missing. The green bar comes into play when I make sure to remove the header. I could probably test this with a mock object as well, but I have been burned by mailing components that did not remove the X-Mailer header but simply set its value to blank, when I really intended it to be removed. That is not something a mock object would have told me.
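To make that concrete, here is a sketch of what the red-bar assertion can look like, using the TestSmtpServer and RawSmtpMessage property described in my testing mail server post. The setup of the mailing component is elided, and the header check itself is my illustration, not the exact Subtext test:

```csharp
// Sketch: verify the X-Mailer header was removed, not merely blanked.
// TestSmtpServer and ReceivedEmailMessage come from the
// Subtext.UnitTesting.Servers project.
TestSmtpServer receivingServer = new TestSmtpServer();
receivingServer.Start("127.0.0.1", 8081);

// ... send a message through the mailing component under test ...

ReceivedEmailMessage received = receivingServer.Inbox[0];
string raw = received.RawSmtpMessage;

// A component that merely blanks the header still emits "X-Mailer:",
// so inspecting the raw message catches what a mock provider cannot.
Assert.IsFalse(raw.IndexOf("X-Mailer") >= 0,
    "X-Mailer header should be removed entirely, not blanked.");
```

The key point is that the assertion runs against the raw SMTP text, so a blanked-but-present header still fails the test.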


In spirit, this is a follow-up to my recent post on unit-testing email functionality.

This probably doesn’t apply to those of you who have reached O/RM nirvana with tools such as NHibernate. ADO is probably just a distant memory, not unlike those vague embarrassing recollections of soiling yourself in a public location long ago.

But I digress.

For the rest of us, we sometimes need to get knee-deep in ADO. For example, even though Subtext abstracts away the data access via the provider model, I still want to test the data provider itself, right?

Subtext’s data provider has a series of methods that each call a stored procedure and return an IDataReader. The returned data reader is passed to another class, which populates entity objects using the data reader.

In my unit tests, it would be nice to have a means to attach a data reader to an in-memory object structure rather than directly to the database. That is where my StubDataReader class comes into play.

It implements the IDataReader interface and provides a quick and dirty way to create an in-memory object structure. By quick and dirty I mean that you do not need to build out a table schema first. The code to set up the stub data reader is quite simple.

Single Result Set

If you are dealing with a Data Reader that should only return one result set (which seems to be the vast majority of cases), then setting it up would look like this:

DateTime testDate = DateTime.Now;
StubResultSet resultSet 
   = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);
resultSet.AddRow(2, "Test2", testDate.AddDays(1));
StubDataReader reader = new StubDataReader(resultSet);

//Advance to first row.
reader.Read();

// Assertions            
Assert.AreEqual(1, reader["col0"]);
Assert.AreEqual("Test", reader["col1"]);
Assert.AreEqual(testDate, reader["col2"]);

In the above snippet, I create an instance of StubResultSet with a list of the column names. I then make a couple of calls to AddRow. Notice that AddRow takes in a param array of object instances. This is the quick and dirty part. Since the StubDataReader doesn’t require setting up a schema beforehand, it will not validate that the objects added to the columns of the rows are of the correct type; it just doesn’t have that information. But this isn’t all that important, since this class is specifically for use in unit testing scenarios.
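To give an idea of why no up-front schema is needed, here is a minimal sketch of what a class like StubResultSet might look like internally. The names match the snippets above, but the internals are my assumption; the real class lives in the UnitTests.Subtext project, and the real StubDataReader delegates calls like Read() and the indexer to its current result set:

```csharp
using System;
using System.Collections;

// Minimal sketch: a result set is just column names plus untyped rows.
public class StubResultSet
{
    private string[] columnNames;
    private ArrayList rows = new ArrayList();
    private int currentRowIndex = -1;

    public StubResultSet(params string[] columnNames)
    {
        this.columnNames = columnNames;
    }

    // Quick and dirty: no type checking against a schema, just store values.
    public void AddRow(params object[] values)
    {
        if (values.Length != columnNames.Length)
            throw new ArgumentException("Row must have one value per column.");
        rows.Add(values);
    }

    // Advance to the next row; returns false when no rows remain.
    public bool Read()
    {
        return ++currentRowIndex < rows.Count;
    }

    // Look up a value in the current row by column name.
    public object this[string columnName]
    {
        get
        {
            object[] row = (object[])rows[currentRowIndex];
            return row[Array.IndexOf(columnNames, columnName)];
        }
    }
}
```

Because rows are just object arrays, the type of each cell is whatever you passed to AddRow, which is exactly the quick-and-dirty tradeoff described above.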

Multiple Result Sets

Not everyone realizes this, but you can iterate over multiple result sets with a single data reader instance. Simulating that scenario is quite easy.

DateTime testDate = DateTime.Now;
StubResultSet resultSet 
   = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);

StubResultSet anotherResultSet 
   = new StubResultSet("first", "second");
anotherResultSet.AddRow((decimal)1.618, "Foo");
anotherResultSet.AddRow((decimal)2.718, "Bar");
anotherResultSet.AddRow((decimal)3.142, "Baz");

StubDataReader reader 
   = new StubDataReader(resultSet, anotherResultSet);

//Advance to first row.
reader.Read();

Assert.AreEqual(1, reader["col0"]);

//Advance to second ResultSet.
Assert.IsTrue(reader.NextResult(), "Expected next result set");

//Advance to first row.
reader.Read();
Assert.AreEqual((decimal)1.618, reader["first"]);
Assert.AreEqual("Foo", reader["second"]);

In this snippet, I create two StubResultSet instances and pass them to the StubDataReader constructor. Afterwards, you can see that the code tests that NextResult functions properly.

The code snippets above are excerpts from the unit tests I wrote for this code. Although this code is more complete than the mail server example, there are a couple of methods that haven’t been well tested because I have never run into a situation in which I needed them. I put in various comments, so feel free to improve this and let me know about it. This code is within the UnitTests.Subtext project in the Subtext solution in our Subversion repository.

You can download the code here, but as before, I do not guarantee I will update the link to have the latest code. You can access our Subversion repository for the latest.


Mud Bath

My wife received a free day at the Glen Ivy Hot Springs Spa from our friends Dan and Judy (the same Dan to whom my last non-geek post was dedicated).

So the four of us headed over there yesterday for a day of relaxation. The day consisted of sitting in a stinky hot sulfur mineral jacuzzi bath, then swimming in the pool, taking a nap, eating lunch, and finally covering ourselves from head to toe in mud.

I was a little too aggressive in covering myself in mud, slathering it on and getting plenty of it in my eyes. I didn’t pay attention to the memo to rub it around the eyes and not in the eyes. You don’t say!

I remember as a kid always being admonished about getting too muddy when playing outside. Now as an adult, I pay for the experience. Must be some form of latent rebellion.


Mail

So you are coding along, riding that TDD high, when you reach the point at which your code needs to send an email. What do you do now?

You might consider writing something that looks like:

EmailMessage email = new EmailMessage();
email.FromAddress = new EmailAddress(from);
email.AddToAddress(new EmailAddress(to));
email.Subject = subject;
email.BodyText = message;

SmtpServer smtpServer = new SmtpServer(smtpServerHost, port);
email.Send(smtpServer);

But you, being a TDD god(dess), know better and quickly refactor that into some sleek code that uses an EmailProvider. This ensures that your code is not tied to any specific email implementation and makes unit testing your code easier: just swap out your concrete email provider for a unit-test-specific email provider. Now your code looks like:
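A unit-test-specific provider along these lines can be as simple as one that records what would have been sent. The following is a sketch of the idea, not Subtext’s actual EmailProvider base class, whose shape differs:

```csharp
using System;
using System.Collections;

// Sketch of the provider abstraction: application code talks to the base
// class, and tests swap in a provider that records instead of sending.
public abstract class EmailProvider
{
    private static EmailProvider instance;

    public static EmailProvider Instance() { return instance; }
    public static void SetInstance(EmailProvider provider) { instance = provider; }

    public abstract bool Send(string to, string from, string subject, string message);
}

// Test double: captures outgoing mail in memory for assertions.
public class RecordingEmailProvider : EmailProvider
{
    public ArrayList Sent = new ArrayList();

    public override bool Send(string to, string from, string subject, string message)
    {
        Sent.Add(new string[] { to, from, subject, message });
        return true;
    }
}
```

In a test fixture you would swap the recording provider in during setup, exercise the code under test, and then assert against the Sent collection.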

EmailProvider.Instance().Send(to, from, subject, message);

But a nagging thought still pulls at the edge of your consciousness. “Shouldn’t I unit test my concrete email provider and actually make sure the email gets sent correctly?”

I certainly think so. As for the semantic arguments around whether this really constitutes an integration test as opposed to a unit test, please don’t bore me with your hang-ups. Either way, it deserves a test, and what better way to test it than with something like MbUnit or NUnit?

Wouldn’t it be nice to test your email code like so?

// Note: the host and email addresses below are placeholders.
DotNetOpenMailProvider provider = new DotNetOpenMailProvider();
NameValueCollection configValue = new NameValueCollection();
configValue["smtpServer"] = "127.0.0.1";
configValue["port"] = "8081";
provider.Initialize("providerTest", configValue);

TestSmtpServer receivingServer = new TestSmtpServer();
receivingServer.Start("127.0.0.1", 8081);

provider.Send("phil@example.com", 
                "nobody@example.com", 
                "Subject to nothing", 
                "Mr. Watson. Come here. I need you.");

// So Did It Work?
Assert.AreEqual(1, receivingServer.Inbox.Count);
ReceivedEmailMessage received = receivingServer.Inbox[0];
Assert.AreEqual("phil@example.com", received.ToAddress.Email);

That there code starts up a mail server, sends an email to it, and then checks that the mail server received the email. It also quickly checks the to address.

This is a snippet of an actual unit test within the Subtext codebase.

A long while ago I discovered a wonderful .NET based freeware mail server written by Ivar Lumi. I decided to write a wrapper specifically for unit testing scenarios. I added the TestSmtpServer to a new project named Subtext.UnitTesting.Servers in the Subtext VS.NET solution.

The wrapper parses incoming SMTP messages and adds a ReceivedEmailMessage instance to the Inbox custom collection. This makes it easy to quickly examine the email messages sent via SMTP in your unit test.

As this is a very early draft, there are some key limitations. I have yet to implement multi-part messages and attachments in the object model. I also punted on dealing with multiple to addresses. However, the ReceivedEmailMessage class does have a RawSmtpMessage property you can examine. For now, it works very well for simple text based emails.

Over time, I hope to implement these more complicated testing features as the need arises. However, if you find this useful and would like to contribute, please do!

If you want to view the latest code, check out these instructions for downloading the latest Subtext code using Subversion.

Or you can simply download this one project here, though keep in mind that I will be updating this project, but not necessarily this link to the project.

Since the project is specifically for unit testing purposes, I went ahead and embedded the unit tests for this server within the project itself using MbUnit references. However, you can simply swap out the assemblies and references to use NUnit if that is your preference.


Last night we went out with friends to celebrate Akumi’s birthday. Somehow the topic of my blog came up in conversation. Perhaps I have a tendency to interject the topic of “blogging” every chance I get. I can be annoying that way.

In any case, my friend Dan points out that he wishes I would write a little more personal content. His poor eyes get tired from scrolling down through the reams of code which is all meaningless gibberish to him.

Well Dan, this post is for you.

Unfortunately I have nothing to say in this particular post. I slept in late today and my day is just beginning. I have a bit of work to do for a client, so I need to get my head down and code.

In the meanwhile, for all my friends who don’t care for my technical gibberish, you can subscribe to my Day to Day (rss) category. This contains all my non-technical posts 100% free of code.


I noticed this odd post on SimpleBits, Dan Cederholm’s website. It is a list of words that he can easily type with his left hand. One has to ask for what reason he is keeping his right hand free? But I, being a man of good taste, won’t go there.

For me, this list is quite different. Several years ago I was suffering from a lot of wrist pain due to typing. I started looking into all sorts of remedies. One remedy I tried was taking some time to learn the Dvorak Simplified Keyboard layout. My coworker at the time (and now business partner) Micah also did the same.

We simply took the plunge full-on at work. It was a slow period, so we downloaded a little practice app, switched our regional settings to Dvorak, and started practicing. When I had to respond to emails or write code, it was quite laughable how slow and clumsy I was… at first.

Soon enough I picked up speed and probably type faster in Dvorak than I ever did in QWERTY. Since I never got around to buying a Dvorak keyboard, I was forced to really learn touch typing. If you watch me type slowly on a keyboard, it would confuse the heck out of you as I am hitting all the wrong keys to produce the right letters.

In any case, here are a few words that I can type with my left hand using the Dvorak layout.

  • puke
  • pee
  • keep
  • peak
  • quake
  • pique
  • oak
  • quux (metasyntactic variable such as foo, bar, baz)

That is quite a limited vocabulary.


I don’t write much about my personal life here because most days are pretty mundane and not unlike other days I’ve had. If I were to write about my day, most entries would look like the following…

Today I woke up, had some breakfast, said goodbye to the wife, read my blogs, wrote some code, walked the dog, said hello to the wife, ate dinner, spent time with the wife, worked some more, snuck in a bit of Oblivion, went to sleep.

What a travesty of a run-on sentence!

So, my dear readers, I have done you a service of sparing you the banality of my life.

However, this weekend is a bit special, as my wife’s family (mom, brother, and brother’s wife) are in town from Japan to observe the one-year anniversary of her otosan (dad) passing away.

While last year was an understandably somber affair, this year has been very light and fun. We drove down to Chula Vista to visit the location in which he was found. Afterwards, we drove up to San Diego and had the best sushi around at Sushi Ota. Mr. Ota (or Ota-san as we call him) is a family friend and took very good care of us, making all sorts of creative interstitial treats between our orders.

Jon Galloway also stopped by the Residence Inn where we were staying so I could trash him in table tennis. I had to lighten up on my vicious serve a bit otherwise it just would’ve been ugly.

We also took a boat ride in Oceanside to the point at which we spread Otosan’s ashes. My brother-in-law took some great photos, such as the sea lions basking on a buoy.

Sea Lions

Every time I ride the boat I start to wonder what it would be like to sell our place and live on a boat. But then I realize they have the same parking congestion that we have.

Today I am back in Los Angeles and back to work while they are out shopping. It is interesting to see their shopping choices. They were so excited to purchase some sets of Tupperware at Ikea because it was a fraction of the cost of similar containers in Tokyo.

company culture

Implied policies are policies that are never written in any employee manual but are implied by real-world practices or arise as side effects of explicit policies. The classic example is when an employee gives notice and the employer counter-offers with a raise; in some cases, a raise that was refused earlier.

This was recently well illustrated by Scott Adams in the Dilbert comic strip on May 14 (click the image to see it full-size).

This is probably all too common in many workplaces. I certainly have worked at places in which the only means to receiving a raise is to threaten to quit. At one work place, I knew of a couple coworkers who over the years threatened to quit several times each, receiving a raise in compensation of one form or another each time.

In most cases, this is symptomatic of a dysfunctional work environment that is incapable of valuing employees and paying them what they are worth.

Good managers pay attention to implied policies as much as they do to explicit policies. This is sometimes easier said than done, as it is not always clear what unintended side effects a policy might create. Mary Poppendieck highlights several examples (pdf) of the unintended side effects of common compensation policies. The recent announcement dismissing the infamous Microsoft curve is perhaps a recognition of the negative side effects of peer-competitive approaches to compensation.

Johanna Rothman points out another implied policy when management is unwilling to budge on any of the four key constraints of software development:

  • Resources
  • Quality
  • Scope
  • Time

If management stubbornly insists on all features (scope) without being willing to budge on time, resources, or quality, then management is making an implicit decision. As Johanna states (and I paraphrase), not making a decision is itself a decision. By not deciding which features to prioritize, management is effectively delegating strategic decisions concerning which projects to staff and which to postpone.

Once you start taking a hard look at your workplace, you can probably come up with a laundry list of implicit policies. What are some of the ones you’ve experienced?

personal

Security expert Bruce Schneier writes a fantastic essay on the value of privacy. This is a great response to the rhetorical question “If you aren’t doing anything wrong, what do you have to hide?” often used to counter privacy advocates.

A couple of key points he makes:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

It reminds me of this political cartoon in the paper today.

What do terrorists

via the Washington Post


Not too long ago I mentioned that the Subtext team is using CruiseControl.NET for continuous integration. Well Simone Chiaretta, the developer who set this up, wrote up an article describing Continuous Integration and the various utilities that Subtext uses in its CI process.

As you can see in the screenshot, the last build succeeded. Check out this small snippet from our NCover report:

As you can see, we have a bit of work to do. But remember, code coverage isn’t everything.


Better grab this before they take away my DNN license. But first, let me give you a bit of background.


Past versions of DotNetNuke typically came with a source code release and an installation release. Many developers (myself included) look at DNN as a platform and prefer not to touch the DNN source code. Once you start tweaking the source code, you open up a world of headaches if you plan on upgrading to the next version of DNN since you add the pain of making sure to migrate your own changes. DNN provides plenty of integration and extensibility points that for the most part, touching the source code is unnecessary.

Instead, I set up my projects to only reference the DNN assemblies and include the *.aspx, *.ascx, etc… files without the code behind. If you’ve worked with DNN before, you may be familiar with the My Modules technique which included the famous _DNNStub project.

But now comes ASP.NET 2.0, which introduces a new web project model. To put it mildly, there was a bit of a negative reaction in some circles of the community to this new project model, which, to be fair, serves its purpose but is not for everybody.

Naturally, when DNN 4.* was released, it was built upon this new model. Unfortunately for module developers used to the existing manner of development, the recommended method for developing modules now involves adding code directly into the special App_Code directory of the DNN web project. Shaun Walker, the creator and maintainer of DNN, wrote up a helpful guide to module development for DNN 4.* using the new Starter Kits.

Web Application Projects Introduced

But now that Microsoft released the new ASP.NET 2.0 Web Application Projects model, I thought there had to be a better way to develop modules that took advantage of the Web Application projects and was more in line with the old manner of doing it. I figured it couldn’t be that hard.

Also, I wanted to take advantage of the WebDev.WebServer (aka Cassini) that comes with VS.NET 2005. Shaun had mentioned that they had problems with running DNN using it, but I had to see for myself. The benefits of a completely self-contained build as well as being able to run the local development site on a webroot (for example http://localhost:8080/) on WinXP was well worth an attempt.

Web Application Projects Unleashed

So after installing the Web Application Project templates and add-in, I created a new web application project in VS.NET. To give myself a bit of a challenge (and since I may decide to add a custom page for some reason later), I chose to create a C# project as shown in the screenshot.

New Web Application Project

As per my usual process, I created a folder named ExternalDependencies in the project and copied all the DNN assemblies from the installation distribution into that folder (this is just the way I roll). To add those assemblies as assembly references, I right-clicked the project, selected Add Reference, and then selected all the assemblies in that folder.

Add Reference Dialog

The next step was to add the special App_GlobalResources folder to the project by simply right clicking on the project and selecting Add | Add ASP.NET Folder | App_GlobalResources.

Adding Global Resources Context

After copying the contents of App_GlobalResources from the installation distribution into that folder, I copied all the other non-code files (*.ascx, *.aspx, etc.) into the project. At this point I was almost done getting the basic project tree set up. The last issue to deal with was the code-behind for Global.asax. Even in an installation distribution of DNN 4 this file is included as source, because under the Web Site project model it gets compiled at runtime (unless you pre-deploy). Personally, I think this code could be put in an HttpModule. In any case, I translated the file into C#. This was actually a bit trickier than I expected because of its use of global variables.

After completing these steps, I renamed release.config to web.config, updated the connection string, and hit CTRL+F5. The WebDev.Webserver started up pointing to the web application project using the URL http://localhost:2334/ (your results may vary) and it all worked!
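For reference, the connection string edit amounts to something like the sketch below. The server and database names are placeholders, not from my actual setup, and as I recall DNN 4 reads the SiteSqlServer setting from both the connectionStrings section and a legacy appSettings key, so the two should be kept in sync:

```xml
<!-- web.config (renamed from release.config); server/database values are placeholders -->
<connectionStrings>
  <add name="SiteSqlServer"
       connectionString="Server=localhost;Database=MyDnnSite;Integrated Security=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
<appSettings>
  <!-- Legacy key also read by DNN 4; keep it in sync with the entry above -->
  <add key="SiteSqlServer"
       value="Server=localhost;Database=MyDnnSite;Integrated Security=True;" />
</appSettings>
```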

One major benefit of using WebDev.WebServer is that getting this site running on a new development machine takes one less step: no need to futz around with IIS. Not only that, since I do my development on Windows XP, which only allows one IIS website, I used to have to develop DNN sites in a virtual application. This caused a problem when deploying the site because static image and CSS file references had to be updated.

With this approach, my URLs on my dev server match the URLs in the production site. One caveat to be aware of is that this approach only works if you are not using any special features of IIS. I recommend testing on a staging server that is running IIS before deploying to a production server with IIS. I only use Cassini for development purposes, not to actually host a site.

Module Development

I went ahead and added some pre-existing modules to the project (upgrading them to .NET 2.0) as separate projects. I was able to add project references from my Web Application Project to the individual module projects. As far as I can tell, there is no longer the need to have a BuildSupport project with this approach.


To save you some time, I am including the barebones solution and project here, based on the DNN 4.0.3 distribution.

Keep in mind that this is a “pre-install” project meaning that after you set it up, you will need to rename release.config to web.config and update the connection string settings to point to your database. Afterwards, hit CTRL+F5 and walk through the DNN web-based installation process. That process will make filesystem changes so make sure you have appropriate write access.
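The pre-install rename step described above can be sketched as a couple of shell commands (a placeholder file stands in for the real release.config here; editing the connection string itself you would do in a text editor):

```shell
# Simulate the DNN "pre-install" step: release.config ships with the
# distribution and must be renamed to web.config before first run.
# (A placeholder file stands in for the real release.config.)
printf '<configuration></configuration>' > release.config
mv release.config web.config
# Verify the rename took effect before hitting CTRL+F5
ls web.config
```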

Let me know if this works for you or if you find any mistakes, problems, or issues with it.

personal, tech comments suggest edit

So Adam Kinney isn't quite as ga-ga over Oblivion as I am. Understandable. As he points out, it is missing the key ingredient of social interaction with other real humans.

Now why would you want to interact with other humans when you have the computer? ;) I suppose it is true that conversation via a drop-down list doesn't produce quite as stimulating an exchange. But what if AI reaches the point that a game like Oblivion is indistinguishable from an online multiplayer game? Would that be as satisfying?

I digress. As Adam states,

I don’t think I’ve ever enjoyed any RPG video game as much as carefree pencil, paper and dice role-playing from the high school years.

Well, that's because no amount of HDR lighting, anti-aliasing, or large texture maps is going to match the lighting effects and graphics going on in your noggin.

I admit, I was into the paper-and-dice game back in the day. I lived in Guam at the time and kept it on the D-L for very self-conscious reasons. The funny part is that my friends, all in different circles (Hawaiian volleyball player, skateboarder, heavy metal dude, African American dude, etc.), didn't know there was any stigma (imagined or real) attached to the game. I would cringe when they would tell our friends we were heading to so-and-so's house to play Dungeons and Dragons.

But again, I digress…

My company regularly hosts internal conference calls via Skype. It got me thinking one day that Skype would be a wonderful way to play paper-and-dice role-playing games. The difficulty in getting a game together after high school was not only the lack of time, but also the sparseness of interested parties. There is no way you are going to get six people to drive across town to meet on the same day and time.

With Skype, geographical location is no longer a limitation. Granted you still lose some of the benefits of physical presence such as passing the Doritos and knocking over your friend’s figurine when he accidentally hits you with his fireball. But at least you have a much larger pool of people to choose from to start a game. Is anyone doing this?

comments suggest edit

When starting a new DotNetNuke based website, I like to develop it on my local machine, and when everything is ready for a first deployment, I deploy to whatever staging or production server is relevant.

This has worked fine over the years, but I ran into a problem recently when applying this approach to DNN 4.03. I had everything working just fine on my local machine, but after deploying to our production server, I could not get the site to work. It would give me some message about a NullReferenceException when trying to get the portal.

Opening up Query Analyzer, I could select the records from the dnn_PortalAlias table and see that everything matched up. I banged my head on this for a long time.

I finally had the idea to change the connection string to point to a brand new database, thinking I might find some discrepancy in the database records. Perhaps I had deleted something important. After the change, I hit the site, which invoked the web-based installation process. Once that was complete, I tried to get a list of records from dnn_PortalAlias and got the error message Invalid object name 'dnn_PortalAlias'. Huh?

Executing sp_tables showed there was no dnn_PortalAlias table. Instead, there was a PortalAlias table. Aha! I looked in web.config and indeed the ObjectQualifier value was set to the empty string. So how did that change from my development machine to the production machine?

Well, the source zip archive for DNN 4.0 ships with two config files: one named development.config and one named release.config. Before deploying, you are supposed to rename release.config to web.config. However, I had assumed that on my local machine I could simply rename development.config to web.config for development purposes, figuring the only differences were some debug settings. Boy, was I wrong!

It turns out that the ObjectQualifier setting was set to dnn_ in development.config. This is the value I would expect, as it matches the typical installation I used in previous versions. In any case, I hope this saves you some time if you happen to run into the same problem. The fix on my production server was simply to change the ObjectQualifier value to dnn_.
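For anyone hunting for the setting, the relevant fragment of web.config looks roughly like this. The attribute list is abbreviated and written from memory, so treat it as a sketch rather than a verbatim copy; the key point is that objectQualifier="dnn_" is what prefixes table names like dnn_PortalAlias, while an empty value produces plain PortalAlias:

```xml
<!-- web.config: the SqlDataProvider entry where ObjectQualifier lives
     (attributes abbreviated for illustration) -->
<data defaultProvider="SqlDataProvider">
  <providers>
    <clear />
    <add name="SqlDataProvider"
         type="DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
         connectionStringName="SiteSqlServer"
         objectQualifier="dnn_"
         databaseOwner="dbo" />
  </providers>
</data>
```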