
Not too long ago I mentioned that the Subtext team is using CruiseControl.NET for continuous integration. Well Simone Chiaretta, the developer who set this up, wrote up an article describing Continuous Integration and the various utilities that Subtext uses in its CI process.

As you can see in the screenshot, the last build succeeded. Check out this small snippet from our NCover report.

As you can see, we have a bit of work to do. But remember, code coverage isn’t everything.


Better grab this before they take away my DNN license. But first, let me give you a bit of background.

Background

Past versions of DotNetNuke typically came with a source code release and an installation release. Many developers (myself included) look at DNN as a platform and prefer not to touch the DNN source code. Once you start tweaking the source code, you open up a world of headaches if you plan on upgrading to the next version of DNN, since you also take on the pain of migrating your own changes. DNN provides plenty of integration and extensibility points, so for the most part, touching the source code is unnecessary.

Instead, I set up my projects to only reference the DNN assemblies and include the *.aspx, *.ascx, etc… files without the code behind. If you’ve worked with DNN before, you may be familiar with the My Modules technique which included the famous _DNNStub project.

But now comes ASP.NET 2.0, which introduces a new web project model. To put it mildly, there was a bit of a negative reaction in some circles of the community around this new project model, which, to be fair, serves its purpose but is not for everybody.

Naturally, when DNN 4.* was released, it was built upon this new model. Unfortunately for module developers used to the existing manner of development, the recommended method for developing modules now involves adding code directly into the special App_Code directory of the DNN web project. Shaun Walker, the creator and maintainer of DNN, wrote up a helpful guide to module development for DNN 4.* using the new Starter Kits.

Web Application Projects Introduced

But now that Microsoft has released the new ASP.NET 2.0 Web Application Projects model, I thought there had to be a better way to develop modules, one that took advantage of Web Application Projects and was more in line with the old manner of doing things. I figured it couldn’t be that hard.

Also, I wanted to take advantage of the WebDev.WebServer (aka Cassini) that comes with VS.NET 2005. Shaun had mentioned that they had problems with running DNN using it, but I had to see for myself. The benefits of a completely self-contained build, as well as being able to run the local development site on a webroot (for example http://localhost:8080/) on WinXP, were well worth an attempt.

Web Application Projects Unleashed

So after installing the Web Application Project templates and add-in, I created a new web application project in VS.NET. To give myself a bit of a challenge (and since I may decide to add a custom page for some reason later), I chose to create a C# project as shown in the screenshot.

New Web Application Project Dialog

As per my usual process, I created a folder named ExternalDependencies in the project and copied all the DNN assemblies from the Installation distribution (DotNetNuke_4.0.3_Install.zip) into that folder (this is just the way I roll). To add those assemblies as assembly references, I right-clicked the project, selected Add Reference, and then selected all the assemblies in that folder.

Add Reference Dialog
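For the curious, the result in the .csproj file is roughly the following. This is a hypothetical fragment; the assembly names are just examples, not the full DNN list.

    <!-- Hypothetical fragment: references resolved out of the ExternalDependencies folder -->
    <ItemGroup>
      <Reference Include="DotNetNuke">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>ExternalDependencies\DotNetNuke.dll</HintPath>
      </Reference>
      <Reference Include="DotNetNuke.WebControls">
        <SpecificVersion>False</SpecificVersion>
        <HintPath>ExternalDependencies\DotNetNuke.WebControls.dll</HintPath>
      </Reference>
    </ItemGroup>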

The next step was to add the special App_GlobalResources folder to the project by simply right-clicking on the project and selecting Add | Add ASP.NET Folder | App_GlobalResources.

Adding Global Resources Context Menu

After copying the contents of App_GlobalResources from the installation distribution into that folder, I copied all the other non-code files, *.ascx, *.aspx, etc… into the project. At this point I was almost done getting the basic project tree set up. The one last issue to deal with was the code behind for Global.asax. Even the installation distribution of DNN 4 includes this file because, under the Web Site project model, it gets compiled at runtime (unless you precompile before deploying). Personally I think this code could be put in an HttpModule. In any case, I translated the file into C#. This was actually a bit trickier than I expected because of the use of global variables.
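Just to give a flavor, the translated code-behind ends up shaped roughly like this. This is a bare-bones sketch; the member names are illustrative and the real DNN Global handles much more.

    using System;
    using System.Web;

    // Rough C# shape of the VB Global.asax code-behind (illustrative only).
    public class Global : HttpApplication
    {
        // The VB version exposes module-level "Global" variables;
        // in C# they become static members on the class.
        public static string ApplicationMapPath;

        protected void Application_Start(object sender, EventArgs e)
        {
            ApplicationMapPath = Server.MapPath("~").TrimEnd('\\');
        }

        protected void Application_Error(object sender, EventArgs e)
        {
            // DNN logs unhandled exceptions here.
        }
    }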

After completing these steps, I renamed release.config to web.config, updated the connection string, and hit CTRL+F5. The WebDev.WebServer started up pointing to the web application project using the URL http://localhost:2334/ (your results may vary) and it all worked!

One major benefit to using WebDev.WebServer is that getting this site running on a new development machine takes one less step. No need to futz around with IIS. Not only that, since I do my development on Windows XP, which only allows one website in IIS, I used to have to develop DNN sites in a virtual application. This caused a problem when deploying the site because static image and CSS file references had to be updated.

With this approach, my URLs on my dev server match the URLs in the production site. One caveat to be aware of is that this approach only works if you are not using any special features of IIS. I recommend testing on a staging server that is running IIS before deploying to a production server with IIS. I only use Cassini for development purposes, not to actually host a site.

Module Development

I went ahead and added some pre-existing modules to the project (upgrading them to .NET 2.0) as separate projects. I was able to add project references from my Web Application Project to the individual module projects. As far as I can tell, there is no longer a need for a BuildSupport project with this approach.

Download

To save you some time, I am including the barebones solution and project here, based on the DNN 4.0.3 distribution.

Keep in mind that this is a “pre-install” project, meaning that after you set it up, you will need to rename release.config to web.config and update the connection string settings to point to your database. Afterwards, hit CTRL+F5 and walk through the DNN web-based installation process. That process will make filesystem changes, so make sure you have appropriate write access.
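If memory serves, the connection string shows up in two places in the DNN 4 config. The values below are placeholders; point them at your own database.

    <connectionStrings>
      <add name="SiteSqlServer"
           connectionString="Server=(local);Database=DotNetNuke;Integrated Security=True;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>
    <appSettings>
      <!-- Legacy modules still read the connection string from this appSetting -->
      <add key="SiteSqlServer"
           value="Server=(local);Database=DotNetNuke;Integrated Security=True;" />
    </appSettings>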

Let me know if this works for you or if you find any mistakes, problems, issues with it.


So Adam Kinney isn’t quite as ga-ga over Oblivion as I. Understandable. As he points out, it is missing the key ingredient of social interaction with other real humans.

Now why would you want to interact with other humans when you have the computer? ;) I suppose it is true that conversation via a drop-down list doesn’t produce quite as stimulating an exchange. What if the AI reaches the point that a game like Oblivion is indistinguishable from an online multi-player game? Would that be as satisfying?

I digress. As Adam states,

I don’t think I’ve ever enjoyed any RPG video game as much as carefree pencil, paper and dice role-playing from the high school years.

Well that’s because no amount of HDR lighting, anti-aliasing, or large texture maps is going to match the lighting effects and graphics going on in your noggin.

I admit, I was into the paper and dice game back in the day. I lived in Guam at the time and kept it on the D-L for very self-conscious reasons. The funny part is that my friends, all in different circles (Hawaiian volleyball player, skateboarder, heavy metal dude, African American dude, etc…) didn’t know there was any stigma (imagined or real) to the game. I would cringe when they would tell our friends we were heading to so-and-so’s house to play Dungeons and Dragons.

But again, I digress…

My company regularly hosts internal conference calls via Skype. It got me thinking one day that Skype would be a wonderful means to play paper and dice role-playing games. The difficulty in getting a game together after high school was not only the lack of time, but also the scarcity of interested parties. There is no way you are going to get six people to drive across town and all meet on the same day and time.

With Skype, geographical location is no longer a limitation. Granted you still lose some of the benefits of physical presence such as passing the Doritos and knocking over your friend’s figurine when he accidentally hits you with his fireball. But at least you have a much larger pool of people to choose from to start a game. Is anyone doing this?


When starting a new DotNetNuke-based website, I like to develop it on my local machine, and when everything is ready for a first deployment, I deploy to whatever staging or production server is relevant.

This has worked fine over the years, but I ran into a problem recently when applying this approach to DNN 4.0.3. I had everything working just fine on my local machine, but after deploying to our production server, I could not get the site to work. It would give me some message about a NullReferenceException when trying to get the portal.

Opening up Query Analyzer, I could select the records from the dnn_PortalAlias table and see that everything matched up. I banged my head on this for a long time.

I finally had the idea to change the connection string to point to a brand new database. I thought maybe I would find some discrepancy in the database records. Perhaps I had deleted something important. After the change, I hit the site, which invoked the web-based installation process. Once that was complete, I tried to get a list of records from dnn_PortalAlias and got the error message Invalid object name 'dnn_PortalAlias'. Huh?

Executing sp_tables showed there was no dnn_PortalAlias table. Instead, there was a PortalAlias table. Aha! I looked in web.config and indeed the ObjectQualifier value was set to the empty string. So how did that change from my development machine to the production machine?

Well the source zip archive for DNN 4.0 ships with two config files. One named development.config and one named release.config. Before deploying, you are supposed to rename release.config to web.config. However, I had assumed that on my local machine, I could simply rename development.config to web.config for development purposes. I assumed that the only differences were in some debug settings. Boy was I wrong!

It turns out that the ObjectQualifier setting was set to dnn_ in development.config. This is the value I would expect as this was the typical installation I used in previous versions. In any case, I hope this saves you time if you happen to run into it. The fix on my production server was simply to change the ObjectQualifier value to be dnn_.
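For reference, the setting lives on the SqlDataProvider entry in web.config and looks roughly like this (attribute list trimmed, and the exact shape may vary by DNN version); development.config ships with the dnn_ qualifier while release.config leaves it empty.

    <data defaultProvider="SqlDataProvider">
      <providers>
        <add name="SqlDataProvider"
             type="DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
             connectionStringName="SiteSqlServer"
             objectQualifier="dnn_"
             databaseOwner="dbo" />
      </providers>
    </data>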


Seems like everyone and their mother has an opinion on the “right” way to have comment threads. Currently Subtext supports the same model as .TEXT did, a simple linear sequential list of comments. It is simple and gets the job done.

The 37Signals blog addresses the question of comments and presents several examples of how different sites handle it.

Personally I like the first example. It retains the simplicity and fluidity of the linear approach, while adding a bit of useful meta-data. What do you think?


It is so easy to get caught up in your day-to-day work and home duties and forget to take a break to really cut loose. The answer, my friends, is that big red button in the image to the left there. That there initiates Party Mode! Set this sucker up in your office or home bedroom, and whenever life catches up to you… Instant Party!

This here is the invention of some MIT students who pimped their dorm room with an instant rave setup. We are talking six video cameras, electric blinds, lights, laser, LED screens, music server, voice activation, blacklights, fog machine, etc etc…

Scroll down to see a couple of videos they posted of the setup in action. Now all they have to do to complete the club experience is charge $5 for a bottle of water and $12 for a crappily mixed drink in a plastic cup. Brilliant!


Lest you think I sit around spending all my time on computer games and soccer, I also try to write occasionally.

Today an article I have been working on for a while was finally published on DevSource. It is entitled A Developer’s Introduction to Microformats and attempts to present a clear introductory look at Microformats. This is my second article for DevSource, the first being one I helped that crazy Bob Reselman write.

I was fired up to write this article after attending the Mix06 conference. Hearing Bill Gates mention Microformats (whether O’Reilly fed it to him or not) highlights the fact that Microformats are poised to really take off. There are some detractors and potential real problems with syndicating Microformats, so it will be interesting to see how they are solved.

In any case, check it out and let me know what you think. Did I present it well?

And before I forget, big ups to the Microformats mailing list for helping me think through some of these topics I covered.


I once thought I was a bit of a blogging addict. To get settled into work I would read my blogs. I’d tune back in while eating my lunch. And if I went on vacation, I thought about the huge number of unread feeds. Heck, I even went and got involved in RSS Bandit and Subtext so that I could work on the means of delivering blogs.

But now I realize that my blogging addiction is merely the mild craving for milk after a cookie. I have discovered what true addiction is, and its name is Oblivion.

Steve Yegge was right when he said…

…if you’re not playing Oblivion, then I highly, nay strongly recommend that you don’t start, or you’ll suddenly develop an aversion to Real Life…

This is quite simply the best computer game I have ever had the pleasure to play. I remember spending hours as a kid playing such classics as Phantasie, Ultima III, Ultima IV, The Bard’s Tale, and Dungeon Master. Dungeon Master at the time elevated the FRPG genre for me because it was the first that really incorporated first-person, real-time play. But I remember drawing up plans for the ultimate game. Apparently Bethesda swiped those plans from my brain and decided to do even better.

So why is this game so damn addicting? It is a combination of a lot of things, really. First, the skill-based system really seems to mean something. I remember there was never a point in playing a thief in most role-playing games because you would just get killed first. Most games were simply hack and slash: fight your way out of every situation.

But with Oblivion, you have the opportunity to really put those sneaking and lockpicking skills to good use in daring missions where simply blasting your way through really isn’t a good option. I also like the fact that lockpicking isn’t simply rolling a die and comparing it to a skill (though you can resort to that option). You have the ability to actually try and pick that lock.

If there were no other characters in the game, it would be like Myst, but with the ability to fully explore your environment. The scenery in this game is jaw dropping.

But ultimately, I think the open-ended gameplay really kicks it up a notch. After a short stint as a gladiator (got my ass handed to me), my character is now working his way up the Thieves Guild and trying to advance in the Mages Guild. At the beginning of the game, some important Emperor got shanked and I am supposed to deliver his amulet somewhere, but I sort of got sidetracked.

Now I am traveling around, checking out the scenery, and getting way too little sleep. I suppose I should look into delivering this amulet, but first I have some pilfered goods to fence and I want to help this half-orc reclaim his heritage.


I write this blog post with apologies to Dale Carnegie for the play on the title of his book.

Today, Jeff Atwood writes about the difference between writing and copywriting. His essential point is that good copywriting is marketing and is boring. Good writing, on the other hand, is engaging and not boring. Understand the difference?

I think this dovetails nicely into another article I read recently at A List Apart entitled Calling All Designers: Learn to Write!

Derek Powazek points out that creating a good user experience goes beyond rounded corners and visual design. Good writing is an essential part of creating a great user experience. He cites Flickr as one example of getting it right. Rather than a button that says Submit, they have a button that says Get in there. That really is friendlier, isn’t it?

When you think about it, using plain casual English is much more natural for people to read. How often in the real world do you hear people asking you to submit anything, except a drug test or tax forms in triplicate?

So I took a look at my blog and noticed that on the front end there is pretty much only one button people use on a daily basis. It said Comment. So I changed it to Leave Your Mark and sat back waiting for the accolades to roll in on the improved user experience. Anybody hear crickets?

Well it is going to take more than changing a single button to improve the overall user experience here. I will actually have to start writing well and quit using this random copy generator. But these are definitely insights I want to take into consideration when I get around to tweaking and updating the admin interface to Subtext. What are areas in which we can improve the writing? How can we improve the user experience? Little touches add up to a lot in creating a great experience.


I recently set up payroll via Paychex for my company. It is an eye-opener to see exactly what taxes an employer pays on top of the taxes already deducted from each employee’s paycheck. I mean, I always heard that my employers were paying taxes for me when I was an employee, but I never knew how much. Till now.

This is helpful when figuring out your total compensation, as it is part of the hidden cost of going into business for yourself. Of course, we are a C-Corp, so these figures may be different for other types of businesses. I wouldn’t know, and this does not qualify as tax advice.

Tax Breakdown

  • Social Security: 6.2%
  • Medicare: 1.45%
  • Federal Unemployment: 0.8%
  • State Unemployment: 0.8% (State of CA; this changes)

Some Notes:

Social Security has a wage base limit of $94,200. So if an employee makes more than that (including bonuses etc…), the employer will only be taxed 6.2% of $94,200.

Medicare has no wage base limit.

The last two taxes are only taxed on the first $7000 of wages per employee per year. So the employer pays 3.4% of $7000 for each employee assuming each makes $7000 or more a year.
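To make the notes above concrete, here is a rough sketch of the employer-side math in C#. The rates and wage bases are the 2006 figures from above; the unemployment rate is left as a parameter since it varies by state and year, and this is emphatically not tax advice.

    using System;

    // Rough sketch of the employer-side payroll tax math described above (illustrative only).
    static class PayrollMath
    {
        public static decimal EmployerTaxes(decimal annualWages, decimal unemploymentRate)
        {
            const decimal socialSecurityRate = 0.062m;
            const decimal socialSecurityWageBase = 94200m; // wages above this aren't taxed for SS
            const decimal medicareRate = 0.0145m;          // no wage base limit
            const decimal unemploymentWageBase = 7000m;    // FUTA/SUTA hit only the first $7,000

            decimal socialSecurity = socialSecurityRate * Math.Min(annualWages, socialSecurityWageBase);
            decimal medicare = medicareRate * annualWages;
            decimal unemployment = unemploymentRate * Math.Min(annualWages, unemploymentWageBase);
            return socialSecurity + medicare + unemployment;
        }
    }

For a $100,000 salary, that works out to $5,840.40 of Social Security plus $1,450 of Medicare, plus whatever the unemployment rates come to on the first $7,000.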

So make sure these figure into your cash-flow estimates. Also, don’t forget that by law, most companies are required to carry Workman’s compensation insurance. That will cost you a small chunk of change per year as well.


Since I had a rough week last week, I thought I would post something fun today. While some people are just jumping on the dual-monitor bandwagon, I have recently moved on to three screens.

Three Screens

Of course that is not exactly true. The two screens on the right are attached to my new Dell Dimension 9150 workstation. The one on the left is attached to my old Shuttle system. That there is running the VMWare Server that hosts Subtext’s CruiseControl.NET build server.

The only reason I got the third screen is that, because of a deal they were offering, it actually cost less to lease the system with this screen than without it. You can’t beat a deal like that!

Rather than using a KVM, I am using MaxiVista to remote-control the computer via the third monitor. That works pretty nicely, though MaxiVista seems to hiccup a lot.


So in the hustle and bustle of trying to get my Yahoo account back (it has been returned), I forgot to show some love for JackAce of the Code Turkey blog. He and I used to work at SkillJam and he was the one who alerted me via email that my account had been jacked.

In this post, he describes the general tactic that an Instant Messaging based attack takes to spread itself.

He also provides some tips to avoid phishing and talks about what to do if you are phished. Be careful out there.


So after I got my Yahoo password phished, my wife reminded me that we should put a fraud alert on our credit file. I first heard about this from my friend Walter a while ago, but we never got around to it.

This is a flag that the major credit bureaus (Experian, Equifax, and TransUnion) attach to your credit report. If someone (including yourself) tries to open a new credit account, the lender is supposed to (though not required by law) contact you by phone to make sure that you really do want to open a new account.

Keep in mind that this applies to applying for a new credit card, obtaining a car loan, purchasing a cell phone, etc…

Setting up a fraud alert is pretty easy. There are three major credit bureaus you can call, but I prefer to do these things online. If you go to https://www.experian.com/fraud/, you can apply for the initial security alert (90 days) via the internet. They will forward the alert to the other two credit bureaus, so you shouldn’t have to call them. One other benefit is that they let you print out your credit history online for free.

If you live in California, the protections are much better. According to California Law SB 168, you have the right to freeze your credit record at each bureau. This makes it impossible to issue credit in your name, even for someone armed with your name, address, Social Security Number, etc… To do this, you do need to contact each bureau in writing and send in $10.

For instructions on the benefits of a credit freeze and how to contact each credit bureau, check out this page on the Fight Identity Theft website.

Apparently similar laws apply in the following states at the time of this writing: CT, IL, LA, ME, NV, NC, TX, VT, and WA.


UPDATE: I am back in business. I have regained control over my Yahoo account, so the IM messages you receive from me really are from me. I won’t make this mistake twice.

Never operate a computer while sleep deprived. In fact, I am starting to think people should be licensed to get on the internet, much like you are to drive a car. I am absolutely mortified to admit this, but I got suckered by a phishing attack that came in via Yahoo Messenger.

I received an IM from a former boss with a link to a geocities photo gallery. When I clicked on the link, it looked just like a Yahoo photo gallery. Thinking (or rather not thinking), “Oh yeah, Yahoo owns Geocities now, right?” I logged in to see the photos. Big mistake. Right then I had the sneaking suspicion that I had done something painfully wrong.

And today it was confirmed when a friend emailed me to tell me that I got my password jacked. If you see an IM from me or anyone else with the link http://www.geocities.com/ladivabev/photos_pics.html (or really any Geocities link), DO NOT CLICK ON IT.

I cannot believe I fell for this. I am usually excellent at spotting and ignoring these, but everybody has their off days. And lately, I have had a string of them. I recently accidentally deleted all my backup data on my external hard-drive. Sleep deprivation is a killer.

And if you receive an IM or Yahoo message from me, please know it is not from me until further notice.


Well, this recent phishing attack is a clear demonstration of the inherent dangers of homogeneity. Biologists and epidemiologists have known this stuff for decades. Having given out my Yahoo password would have been much more disastrous if I were using Yahoo for my primary email address. Fortunately I use Gmail. Imagine the damage had I given out my Passport password. Egads!

Unfortunately I do use Yahoo Messenger. But I also use MSN and Skype. One password does not connect the bad guys with everything I use to communicate. But it is enough for them to do some damage. When you get an IM from a credible source, it is hard to resist clicking. It naturally brings your defenses down. A clever example of social engineering.


Prolific blogger Mr. Jeff Atwood, author of the CodingHorror blog, paid us a surprise visit last night. He is in town for a couple of days to do something or other unimportant. He tried to explain something about presenting Team System to important people, but all I heard was “blah blah TS blah blah”.

After a fine dinner at the new Ford Filling Station (owned by Harrison Ford’s son), we gathered around the screen and had a chat with the lately not-so-prolific blogger Jon Galloway.

Jeff and Jon

So that there is Jeff on the left getting cozy with Jon on the right, who couldn’t make it in person but would like to thank the academy via live video feed courtesy of Skype™.

Jeff is one of the few people who regularly reads my blog through one of these antiquated mediums called a browser. Which is actually great since he gets to experience the very cool drop-shadow effects I apply to my photos. Go CSS!

After a bit of plotting to take over the planet and the typical jokes at each other’s expense, we all went our merry ways. Except for me; I live here.


With many thanks to Simone Chiaretta (blog in Italian) for his effort, we now have a working CruiseControl.NET setup for Subtext. Check out the chrome (or lack thereof) on our CCNET dashboard.

Though we have some kinks to work out (the build is apparently broken according to CCNET), I am particularly happy about getting this up and running. As a distributed open source project, it is part of our master plan to follow agile development practices that are well suited to building Subtext. Continuous integration is particularly important for us since we are in different time zones and locations.

The CCNet server is running on Windows 2003 within a VMWare Virtual Server on my old development workstation. That makes our build server very portable should we decide to host it elsewhere someday.

Once we get the kinks worked out, you can download the CCTray system tray applet and keep tabs on the development of Subtext. You’ll know exactly who broke the build and when. How is that for open source?

To get CCTray to work, make sure your firewall allows TCP traffic over port 21234. Then add the server build.subtextproject.com:21234.

Though for now, let’s be adults and keep the teasing to a minimum. I apparently broke the build, but I am betting it is a configuration issue with moving the virtual server from Italy to Los Angeles. Ciao!


This is a story of intrigue.

Ok, perhaps that is a bit overblown. This is really a story of schizophrenia. It is the story of a method, PageParser.GetCompiledPageInstance, that exhibits different behavior depending on whether you have the <compilation> tag’s debug attribute set to true or false.

The problem first came up when deploying the most recent builds of Subtext with this attribute set to false. This was the natural response to Scott Guthrie’s admonishment, Don’t Run Production ASP.NET Applications with debug=”true” enabled.

However, this affected Subtext in an unusual manner. Subtext employs a URL rewriting mechanism I wrote about before. It relies on an IHttpHandler that is created by calling PageParser.GetCompiledPageInstance.

I will spare you all the details and cut to the chase. GetCompiledPageInstance takes in three parameters:

  • virtualPath (string)
  • inputFile (string)
  • context (HttpContext).

In the initial request to the Subtext root, the values for those parameters on my local machine are:

  • virtualPath = “http://localhost/Subtext.Web/Default.aspx”
  • inputFile = “c:\projects\Subtext.Web\DTP.aspx”
  • context = (the current context passed in by the ASP.NET runtime)
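To give a rough idea of how the method gets used, here is a simplified sketch of a rewriting handler factory. The names are illustrative; Subtext’s actual plumbing is more involved.

    using System.Web;
    using System.Web.UI;

    // Simplified sketch: serve DTP.aspx for whatever URL was actually requested.
    public class UrlRewriteHandlerFactory : IHttpHandlerFactory
    {
        public IHttpHandler GetHandler(HttpContext context, string requestType,
                                       string virtualPath, string physicalPath)
        {
            // virtualPath is the URL the browser asked for; inputFile is the
            // physical page we want ASP.NET to compile and execute instead.
            string inputFile = context.Server.MapPath("~/DTP.aspx");
            return PageParser.GetCompiledPageInstance(virtualPath, inputFile, context);
        }

        public void ReleaseHandler(IHttpHandler handler)
        {
        }
    }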

The interesting thing to note is that there is an actual aspx file named Default.aspx located at http://localhost/Subtext.Web/Default.aspx. When the debug compilation option was set to true, this method would return a compiled instance of DTP.aspx (hence the URL rewriting).

But when I set debug="false", it would return a compiled instance of Default.aspx. Holy moly!

I confirmed this by attaching a debugger and going through the process multiple times. Using Reflector, I started walking through the code for GetCompiledPageInstance until my eyes started to burst. There is a lot of machinery at work under the hood. I eventually found some code that appears to generate a URL path differently based on debugging options. Not sure if this was the culprit, but it is possible.

Setting debug="false" causes the runtime to perform a batch compilation. Thus a request for /Default.aspx is going to compile all *.aspx files in that folder into a single DLL. Setting that debug value to true causes ASP.NET to compile every page into its own assembly.
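For reference, both knobs live on the same element in web.config (the batch attribute defaults to true):

    <system.web>
      <!-- debug="false" lets ASP.NET batch-compile each folder's pages into one assembly -->
      <compilation debug="false" batch="true" />
    </system.web>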

My fix is a bit of a hack, until I can get a deeper understanding of what is really happening. As I see it, calling GetCompiledPageInstance with a virtualPath that points to one file while passing in a different physical file path for inputFile is causing some confusion. Perhaps due to the batch compilation.

To remedy this, I simply check the end of the virtualPath for /Default.aspx (case insensitive, of course) before calling GetCompiledPageInstance. If that string is found, the code truncates the Default.aspx portion. That seems to do the trick for now, since this is pretty much the one place in which URL rewriting would attempt to rewrite a URL that itself points to a real page.
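The gist of the workaround looks something like this (simplified from the actual fix; the helper name is mine):

    using System;

    static class UrlRewriteHelper
    {
        // If the requested virtual path ends with /Default.aspx, truncate the page name
        // so GetCompiledPageInstance doesn't hand back the real Default.aspx instead of
        // the rewritten page.
        public static string NormalizeVirtualPath(string virtualPath)
        {
            const string defaultPage = "/Default.aspx";
            if (virtualPath.EndsWith(defaultPage, StringComparison.InvariantCultureIgnoreCase))
            {
                return virtualPath.Substring(0, virtualPath.Length - "Default.aspx".Length);
            }
            return virtualPath;
        }
    }

The normalized value is then what gets passed as the virtualPath argument to GetCompiledPageInstance.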

For a nice look under the hood regarding the compilation option, check out this post by Milan Negovan.

Please keep in mind that this is a separate issue from deploying your compiled assemblies in debug mode or with debug symbols. This has to do with the ASP.NET runtime compiling the ASPX files at runtime.