May 2006 Blog Posts
Greg Young takes my Testing Mail Server to task and asks the question, what does it test that a mock provider doesn’t?
It is a very good question and as he points out in his blog post on the subject, it seems like a lot of overhead for very little benefit. For the most part, he is right.
In my defense, and as Greg points out, I would not start with such a test when writing email functionality. I would start with the mock email provider and follow the typical Red-Green TDD pattern of development. However, there are cases where this approach does not test enough, and this testing server was necessitated by some real world scenarios I ran into.
For example, in some situations, it is very important to understand the exact format of the raw SMTP message that is sent. Some systems actually use email from server to server to kick off automated tasks. In that situation, it helps to know that the SMTP message is formatted as expected by the receiving server. For example, you may want to make sure that the appropriate headers are sent and that the message is not a multi-part message. This approach lets you get at the raw SMTP message in a way that the mock provider approach cannot.
A more common issue arises when sending mass mailings such as newsletters to subscribers. At one former employer, we had real difficulty getting our emails to land in our users’ mailboxes despite adhering to the appropriate SPAM laws and only mailing to subscribed users.
It turns out that actually landing a mass mailing, even to users who want the email, is very tough when dealing with Hotmail, Yahoo, and AOL accounts. Something as seemingly innocuous as the X-Mailer header value can trigger the spam filters.
In this case, this very much falls under the rubric of an integration test, as I am testing the actual mailing component in use. But I am not only testing the particular mailing component. I am also testing that my code uses the mailing component in a correct manner.
So in answer to the question, where’s the red bar? The red bar comes into play when I write my unit test and assert that the X-Mailer header is missing. The green bar comes into play when I make sure to remove the header. I could probably test this with a mock object as well, but I have been burnt by mailing components that did not remove the X-Mailer header but simply set the value to blank, when I really intended it to be removed. That is not something a mock object would have told me.
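The distinction matters because a blank header and a missing header look identical to a mock that only records Send calls. Here is a minimal sketch of the kind of check I mean; the helper name and its naive line-scanning approach are my own for illustration, not the actual Subtext test code:

```csharp
using System;

public static class XMailerCheck
{
    // Returns true only if the raw SMTP message contains no X-Mailer
    // header at all. A blank header ("X-Mailer: ") still counts as present,
    // which is exactly the failure a mock provider would not catch.
    // (Simplification: scans every line, not just the header section.)
    public static bool HasNoXMailerHeader(string rawSmtpMessage)
    {
        foreach (string line in rawSmtpMessage.Split('\n'))
        {
            if (line.TrimEnd('\r').StartsWith("X-Mailer:", StringComparison.OrdinalIgnoreCase))
                return false;
        }
        return true;
    }
}
```

In the real test, the raw message would come from the testing mail server rather than a hand-built string.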
In spirit, this is a follow-up to my recent post on unit-testing email functionality.
This probably doesn’t apply to those of you who have reached O/RM nirvana with such tools as NHibernate etc... ADO is probably just a distant memory, not unlike those vague embarrassing recollections of soiling yourself in a public location long ago.
But I digress.
For the rest of us, we sometimes need to get knee-deep in ADO. For example, even though Subtext abstracts away the data access via the provider model, I still want to test the data provider itself, right?
Subtext has a series of methods that each call a stored procedure and return an IDataReader. The returned data reader is passed to another class which populates entity objects using the data reader.
In my unit tests, it would be nice to have a means to attach a data reader to an in-memory object structure rather than directly to the database. That is where my StubDataReader class comes into play. It implements the IDataReader interface and provides a quick and dirty way to create an in-memory object structure. By quick and dirty I mean that you do not need to build out a table schema first. The code to set up the stub data reader is quite simple.
Single Result Set
If you are dealing with a Data Reader that should only return one result set (which seems to be the vast majority of cases), then setting it up would look like this:
DateTime testDate = DateTime.Now;
StubResultSet resultSet = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);
resultSet.AddRow(2, "Test2", testDate.AddDays(1));
StubDataReader reader = new StubDataReader(resultSet);
reader.Read(); //Advance to first row.
In the above snippet, I create an instance of StubResultSet with a list of the column names. I then make a couple of calls to AddRow. Notice that AddRow takes in a param array of object instances. This is the quick and dirty part. Since the StubDataReader doesn’t require setting up a schema beforehand, it will not validate that the objects added to the columns of the rows are of the correct type. It just doesn’t have that information. But this isn’t all that important since this class is specifically for use in unit testing scenarios.
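For the curious, the storage behind such a stub can be tiny. This is a stripped-down sketch of the idea only; the real StubResultSet in Subtext implements more of the IDataReader plumbing than shown here:

```csharp
using System;
using System.Collections.Generic;

public class StubResultSet
{
    private readonly string[] columnNames;
    private readonly List<object[]> rows = new List<object[]>();
    private int currentRowIndex = -1;

    public StubResultSet(params string[] columnNames)
    {
        this.columnNames = columnNames;
    }

    // No schema, hence no type validation: any object goes in any column.
    public void AddRow(params object[] values)
    {
        rows.Add(values);
    }

    // Mimics IDataReader.Read: advances to the next row, if any.
    public bool Read()
    {
        return ++currentRowIndex < rows.Count;
    }

    // Column access by name, like reader["col0"].
    public object this[string name]
    {
        get { return rows[currentRowIndex][Array.IndexOf(columnNames, name)]; }
    }
}
```

That lack of schema is the whole trade-off: setup is one line per row, at the cost of type errors surfacing only when the test reads the value back.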
Multiple Result Sets
Not everyone realizes this, but you can iterate over multiple result sets with a single data reader instance. Simulating that scenario is quite easy.
DateTime testDate = DateTime.Now;
StubResultSet resultSet = new StubResultSet("col0", "col1", "col2");
resultSet.AddRow(1, "Test", testDate);
StubResultSet anotherResultSet = new StubResultSet("first", "second");
StubDataReader reader = new StubDataReader(resultSet, anotherResultSet);
reader.Read(); //Advance to first row.
//Advance to second ResultSet.
Assert.IsTrue(reader.NextResult(), "Expected next result set");
reader.Read(); //Advance to first row.
In this snippet, I create two StubResultSet instances and pass them to the constructor of the StubDataReader. Afterwards, you can see that the code makes sure to test that NextResult functions properly.
The code snippets above are excerpts from the unit tests I wrote for this code. Although this code is more complete than the mail server example, there are a couple of methods that haven’t been well tested because I have never run into a situation in which I needed them. I put in various comments, so feel free to improve this and let me know about it. This code is within the UnitTests.Subtext project in the Subtext solution in our Subversion repository.
You can download the code here, but as before, I do not guarantee I will update the link to have the latest code. You can access our Subversion repository for the latest.
My wife received a free day at the Glen Ivy Hot Springs Spa from our friends Dan and Judy (the same Dan to whom my last non-geek post was dedicated).
So the four of us headed over there yesterday for a day of relaxation. The day consisted of sitting in a stinky hot sulfur mineral jacuzzi bath, then swimming in the pool, taking a nap, eating lunch, and finally covering ourselves from head to toe in mud.
I was a little too aggressive in covering myself in mud, slathering it on and getting plenty of it in my eyes. I didn’t pay attention to the memo to rub it around the eyes and not in the eyes. You don’t say!
I remember as a kid always being admonished about getting too muddy when playing outside. Now as an adult, I pay for the experience. Must be some form of latent rebellion.
So you are coding along riding that TDD high when you reach the point at which your code needs to send an email. What do you do now?
You might consider writing something that looks like:
EmailMessage email = new EmailMessage();
email.FromAddress = new EmailAddress(from);
email.AddToAddress(new EmailAddress(to));
email.Subject = subject;
email.BodyText = message;
SmtpServer smtpServer = new SmtpServer(SmtpServer, Port);
email.Send(smtpServer);
But you, being a TDD god(dess), know better and quickly refactor that into some sleek code that uses an EmailProvider. This ensures that your code is not tied to any specific email implementation and will make unit testing your code easier. Just swap out your concrete email provider for a unit test specific email provider. Now your code looks like:
EmailProvider.Instance().Send(to, from, subject, message);
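The swap itself can be sketched like this. Note this is a hand-rolled stand-in: the real Subtext EmailProvider is wired up through the ASP.NET provider model and configuration, and Instance() there is a method rather than the settable property I use here for brevity:

```csharp
using System.Collections.Generic;

// Abstract base: calling code only ever talks to this.
public abstract class EmailProvider
{
    public static EmailProvider Instance { get; set; }
    public abstract bool Send(string to, string from, string subject, string message);
}

// Unit test specific provider: records messages instead of sending them.
public class RecordingEmailProvider : EmailProvider
{
    public readonly List<string> SentSubjects = new List<string>();

    public override bool Send(string to, string from, string subject, string message)
    {
        SentSubjects.Add(subject);
        return true;
    }
}
```

In a test fixture you assign a RecordingEmailProvider to Instance and assert against what it captured; in production, the concrete SMTP-backed provider gets plugged in instead.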
But a nagging thought still pulls at the edge of your consciousness. “Shouldn’t I unit test my concrete email provider and actually make sure the email gets sent correctly?”
I certainly think so. As for the semantic arguments around whether this really constitutes an integration test as opposed to a unit test, please don’t bore me with your hang-ups. Either way, it deserves a test, and what better way to test it than with something like MbUnit or NUnit?
Wouldn’t it be nice to test your email code like so?
DotNetOpenMailProvider provider = new DotNetOpenMailProvider();
NameValueCollection configValue = new NameValueCollection();
configValue["smtpServer"] = "127.0.0.1";
configValue["port"] = "8081";
provider.Initialize("providerTest", configValue);

TestSmtpServer receivingServer = new TestSmtpServer();
//...start the receiving server and send the message (some setup elided here)...
provider.Send(to, from,
    "Subject to nothing",
    "Mr. Watson. Come here. I need you.");

// So Did It Work?
ReceivedEmailMessage received = receivingServer.Inbox[0];
That there code starts up a mail server, sends an email to it, and then checks that the mail server received the email. It also quickly checks the to address.
This is a snippet of an actual unit test within the Subtext codebase.
A long while ago I discovered a wonderful .NET based freeware mail server written by Ivar Lumi. I decided to write a wrapper specifically for unit testing scenarios. I added the TestSmtpServer to a new project named Subtext.UnitTesting.Servers in the Subtext VS.NET solution.
The wrapper parses incoming SMTP messages and adds a ReceivedEmailMessage instance to the Inbox custom collection. This makes it easy to quickly examine the email messages sent via SMTP in your unit test.
As this is a very early draft, there are some key limitations. I have yet to implement multi-part messages and attachments in the object model. I also punted on dealing with multiple to addresses. However, the ReceivedEmailMessage class does have a RawSmtpMessage property you can examine. For now, it works very well for simple text based emails.
Over time, I hope to implement these more complicated testing features as the need arises. However, if you find this useful and would like to contribute, please do!
If you want to view the latest code, check out these instructions for downloading the latest Subtext code using Subversion.
Or you can simply download this one project here, though keep in mind that I will be updating the project, but not necessarily this link to it.
Since the project is specifically for unit testing purposes, I went ahead and embedded the unit tests for this server within the project itself using MbUnit references. However, you can simply swap out the assemblies and references to use NUnit if that is your preference.
Last night we went out with friends to celebrate Akumi’s birthday. Somehow the topic of my blog came up in conversation. Perhaps I have a tendency to interject the topic of “blogging” every chance I get. I can be annoying that way.
In any case, my friend Dan points out that he wishes I would write a little more personal content. His poor eyes get tired from scrolling down through the reams of code which is all meaningless gibberish to him.
Well Dan, this post is for you.
Unfortunately I have nothing to say in this particular post. I slept in late today and my day is just beginning. I have a bit of work to do for a client, so I need to get my head down into coding.
In the meanwhile, for all my friends who don’t care for my technical gibberish, you can subscribe to my Day to Day (rss) category. This contains all my non-technical posts 100% free of code.
I noticed this odd post on SimpleBits, Dan Cederholm’s website. It is a list of words that he can easily type with his left hand. One has to ask for what reason he is keeping his right hand free? But I, being a man of good taste, won’t go there.
For me, this list is quite different. Several years ago I was suffering from a lot of wrist pain due to typing. I started looking into all sorts of remedies. One remedy I tried was taking some time to learn the Dvorak Simplified Keyboard layout. My coworker at the time (and now business partner) Micah also did the same.
We simply full on took the plunge at work. It was a slow period so we downloaded a little practice app, switched our regional settings to Dvorak, and started practicing. When I had to respond to emails or write code, it was quite laughable how slow and clumsy I was...at first.
Soon enough I picked up speed and probably type faster in Dvorak than I ever did in QWERTY. Since I never got around to buying a Dvorak keyboard, I was forced to really learn touch typing. If you watch me type slowly on a keyboard, it would confuse the heck out of you as I am hitting all the wrong keys to produce the right letters.
In any case, here are a few words that I can type with my left hand using the Dvorak layout.
- quux (metasyntactic variable such as foo, bar, baz)
That is quite a limited vocabulary.
I don’t write much about my personal life here because most days are pretty mundane and not unlike other days I’ve had. If I were to write about my day, most entries would look like the following...
Today I woke up, had some breakfast, said goodbye to the wife, read my blogs, wrote some code, walked the dog, said hello to the wife, ate dinner, spent time with the wife, worked some more, snuck in a bit of Oblivion, went to sleep.
What a travesty of a run-on sentence!
So, my dear readers, I have done you a service of sparing you the banality of my life.
However, this weekend is a bit special as my wife’s family (mom, brother, and brother’s wife) are in town from Japan to observe the one year anniversary of her otosan (dad) passing away.
While last year was an understandably somber affair, this year has been very light and fun. We drove down to Chula Vista to visit the location in which he was found. Afterwards, we drove up to San Diego and had the best sushi around at Sushi Ota. Mr. Ota (or Ota-san as we call him) is a family friend and took very good care of us, making all sorts of creative interstitial treats between our orders.
Jon Galloway also stopped by the Residence Inn where we were staying so I could trash him in table tennis. I had to lighten up on my vicious serve a bit otherwise it just would’ve been ugly.
We also took a boat ride in Oceanside to the point at which we spread Otosan’s ashes. My brother-in-law took some great photos, such as the sea lions basking on a buoy.
Every time I ride the boat I start to wonder what it would be like to sell our place and live on a boat. But then I realize they have the same parking congestion that we have.
Today I am back in Los Angeles and back to work while they are out shopping. It is interesting to see their shopping choices. They were so excited to purchase some sets of tupperware at Ikea because it was a fraction of the cost of similar containers in Tokyo.
Implied policies are policies that are never written in any employee manual, but are implied due to real world practices or are side effects of explicit policies. The classic example is when an employee gives notice to an employer and the employer counter-offers with a raise. In some cases, a raise that was refused earlier.
This was recently well illustrated by Scott Adams in the Dilbert comic strip on May 14.
This is probably all too common in many workplaces. I certainly have worked at places in which the only means of receiving a raise was to threaten to quit. At one workplace, I knew of a couple of coworkers who over the years each threatened to quit several times, receiving compensation of one form or another each time.
In most cases, this is symptomatic of a dysfunctional work environment that is incapable of valuing employees and paying them what they are worth.
Good managers pay attention to implied policies as much as they do the explicit policies. This is sometimes easier said than done, as it is not always clear what unintended side-effects a policy might create. Mary Poppendieck highlights several examples (pdf) of the unintended side-effects of common popular compensation policies. The recent announcement dismissing the infamous Microsoft Curve is perhaps a recognition of the negative side effects of peer-competitive approaches to compensation.
Johanna Rothman points out another implied policy when management is unwilling to budge on any of the four key constraints of software development:
If management stubbornly persists in asking for all features (scope) without being willing to budge on time, resources, or quality, then management is making an implicit decision. As Johanna states (and I reword), not making a decision is an implicit decision. By not deciding which features to prioritize, management is effectively delegating strategic decisions concerning which projects to staff and which to postpone.
Once you start taking a hard look at your workplace, you can probably come up with a laundry list of implicit policies.
What are some of the ones you’ve experienced?
Security expert Bruce Schneier writes a fantastic essay on the value of privacy. This is a great response to the rhetorical question “If you aren’t doing anything wrong, what do you have to hide?” often used to counter privacy advocates.
A couple key points he makes.
Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance.
Too many wrongly characterize the debate as "security versus privacy." The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that's why we should champion privacy even when we have nothing to hide.
It reminds me of this political cartoon in the paper today.
via the Washington Post
Better grab this before they take away my DNN license. But first, let me give you a bit of background.
Past versions of DotNetNuke typically came with a source code release and an installation release. Many developers (myself included) look at DNN as a platform and prefer not to touch the DNN source code. Once you start tweaking the source code, you open up a world of headaches if you plan on upgrading to the next version of DNN, since you add the pain of having to migrate your own changes. DNN provides plenty of integration and extensibility points, such that for the most part, touching the source code is unnecessary.
Instead, I set up my projects to only reference the DNN assemblies and include the *.aspx, *.ascx, etc... files without the code behind. If you’ve worked with DNN before, you may be familiar with the My Modules technique which included the famous
But now comes ASP.NET 2.0 which introduces a new web project model. To put it mildly, there was a bit of a negative reaction in some circles of the community around this new project model, which to be fair, serves its purpose but is not for everybody.
Naturally, when DNN 4.* was released, it was built upon this new model. Unfortunately for module developers used to the existing manner of development, the recommended method for developing modules now involves adding code directly into the special App_Code directory of the DNN web project. Shaun Walker, the creator and maintainer of DNN, wrote up a helpful guide to module development for DNN 4.* using the new Starter Kits.
Web Application Projects Introduced
But now that Microsoft released the new ASP.NET 2.0 Web Application Projects model, I thought there had to be a better way to develop modules that took advantage of the Web Application projects and was more in line with the old manner of doing it. I figured it couldn’t be that hard.
Also, I wanted to take advantage of the WebDev.WebServer (aka Cassini) that comes with VS.NET 2005. Shaun had mentioned that they had problems with running DNN using it, but I had to see for myself. The benefits of a completely self-contained build as well as being able to run the local development site on a webroot (for example http://localhost:8080/) on WinXP was well worth an attempt.
Web Application Projects Unleashed
So after installing the Web Application Project templates and add-in, I created a new web application project in VS.NET. To give myself a bit of a challenge (and since I may decide to add a custom page for some reason later), I chose to create a C# project as shown in the screenshot.
As per my usual process, I created a folder named ExternalDependencies in the project and copied all the DNN assemblies from the Installation distribution (DotNetNuke_4.0.3_Install.zip) into that folder (this is just the way I roll). To add those assemblies as assembly references, I right-clicked the project, selected Add Reference, and then selected all the assemblies in that folder.
The next step was to add the special App_GlobalResources folder to the project by simply right clicking on the project and selecting Add | Add ASP.NET Folder | App_GlobalResources.
After copying the contents of App_GlobalResources from the installation distribution into that folder, I copied all the other non-code files, *.ascx, *.aspx etc... into the project. At this point I am almost done getting the basic project tree set up. The one last issue to deal with is the code behind for Global.asax. Even with an installation distribution of DNN 4, this file is included because under the Web Site project model, it gets compiled at runtime (unless you pre-deploy). Personally, I think this code could be put in an HttpModule. In any case, I translated the file into C#. This was actually a bit trickier than I expected because of the use of global variables.
After completing these steps, I renamed release.config to web.config, updated the connection string, and hit CTRL+F5. The WebDev.Webserver started up pointing to the web application project using the URL http://localhost:2334/ (your results may vary) and it all worked!
One major benefit to using WebDev.WebServer is that getting this site running on a new development machine takes one less step. No need to futz around with IIS. Not only that, since I do my development on Windows XP which only allows one website, I used to have to develop DNN sites in a virtual application. This caused a problem when deploying the site because static image and css file references had to be updated.
With this approach, my URLs on my dev server match the URLs in the production site. One caveat to be aware of is that this approach only works if you are not using any special features of IIS. I recommend testing on a staging server that is running IIS before deploying to a production server with IIS. I only use Cassini for development purposes, not to actually host a site.
I went ahead and added some pre-existing modules to the project (upgrading them to .NET 2.0) as separate projects. I was able to add project references from my Web Application Project to the individual module projects. As far as I can tell, there is no longer the need to have a BuildSupport project with this approach.
To save you some time I am including the barebone solution and project here based on the DNN 4.0.3 distribution.
Keep in mind that this is a “pre-install” project meaning that after you set it up, you will need to rename release.config to web.config and update the connection string settings to point to your database. Afterwards, hit CTRL+F5 and walk through the DNN web-based installation process. That process will make filesystem changes so make sure you have appropriate write access.
Let me know if this works for you or if you find any mistakes, problems, issues with it.
Not too long ago I mentioned that the Subtext team is using CruiseControl.NET for continuous integration. Well Simone Chiaretta, the developer who set this up, wrote up an article describing Continuous Integration and the various utilities that Subtext uses in its CI process.
As you can see in the screenshot, the last build succeeded. Check out this small snippet from our NCover report
As you can see, we have a bit of work to do. But remember, code coverage isn’t everything.
When starting a new DotNetNuke based website, I like to develop it on my local machine, and when everything is ready for a first deployment, I deploy to whatever staging or production server is relevant.
This has worked fine over the years, but I ran into a problem recently when applying this approach to DNN 4.0.3. I had everything working just fine on my local machine, but after deploying to our production server, I could not get the site to work. It would give me some message about a NullReferenceException when trying to get the portal.
Opening up Query Analyzer, I could select the records from the dnn_PortalAlias table and see that everything matched up. I banged my head on this for a long time.
I finally had the idea to change the connection string to point to a brand new database. I thought maybe I would find some discrepancy in the database records. Perhaps I deleted something or other important. After the change, I hit the site, which invoked the web-based installation process. Once that was complete, I tried to get a list of records from dnn_PortalAlias and got an error message: Invalid object name 'dnn_PortalAlias'. Huh?
Running sp_tables showed there was no dnn_PortalAlias table. Instead, there was a PortalAlias table. Aha! I looked in web.config and indeed the ObjectQualifier value was set to the empty string. So how did that change from my development machine to the production machine?
Well the source zip archive for DNN 4.0 ships with two config files. One named development.config and one named release.config. Before deploying, you are supposed to rename release.config to web.config. However, I had assumed that on my local machine, I could simply rename development.config to web.config for development purposes. I assumed that the only differences were in some debug settings. Boy was I wrong!
It turns out that the ObjectQualifier setting was set to dnn_ in development.config. This is the value I would expect, as this was the typical installation I used in previous versions. In any case, I hope this saves you time if you happen to run into it. The fix on my production server was simply to change the ObjectQualifier value to dnn_.
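For reference, the setting lives on the data provider node in web.config. This is a trimmed sketch only; the type string and the attributes other than objectQualifier may differ between DNN versions, so check your own config rather than copying this verbatim:

```xml
<data defaultProvider="SqlDataProvider">
  <providers>
    <clear />
    <add name="SqlDataProvider"
         type="DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
         connectionStringName="SiteSqlServer"
         objectQualifier="dnn_"
         databaseOwner="dbo" />
  </providers>
</data>
```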
Seems like everyone and their mother has an opinion on the “right” way to have comment threads. Currently Subtext supports the same model as .TEXT did, a simple linear sequential list of comments. It is simple and gets the job done.
The 37Signals blog addresses the question of comments and presents several examples of how different sites handle it.
Personally I like the first example. It retains the simplicity and fluidity of the linear approach, while adding a bit of useful meta-data. What do you think?
So Adam Kinney isn’t quite as ga-ga over Oblivion as I. Understandable. As he points out, it is missing the key ingredient of social interaction with other real humans.
Now why would you want to interact with other humans when you have the computer? ;) I suppose it is true that conversation via a drop down list doesn’t produce quite as stimulating a conversation. What if the AI reaches the point that a game like Oblivion is indistinguishable from an online multi-player game? Would that be as satisfying?
I digress. As Adam states,
I don't think I've ever enjoyed any RPG video game as much as carefree pencil, paper and dice role-playing from the high school years.
Well that’s because no amount of HDR lighting, anti-aliasing, or large texture maps is going to match the lighting effects and graphics going on in your noggin.
I admit, I was into the paper and dice game back in the day. I lived in Guam at the time and kept it on D-L for very self-conscious reasons. The funny part is that my friends, all in different circles (Hawaiian volleyball player, skateboarder, heavy metal dude, African American dude, etc...) didn’t know there was any stigma (imagined or real) to the game. I would cringe when they would tell our friends we were heading to so and so’s house to play Dungeons and Dragons.
But again, I digress...
My company regularly hosts internal conference calls via Skype. It got me thinking one day that Skype would be a wonderful means to play paper and dice Role Playing Games. The difficulty in getting a game together after high school was not only the lack of time, but also the sparseness of interested parties. There is no way you are going to get six people to drive across town to meet, all on the same day and time.
With Skype, geographical location is no longer a limitation. Granted you still lose some of the benefits of physical presence such as passing the Doritos and knocking over your friend’s figurine when he accidentally hits you with his fireball. But at least you have a much larger pool of people to choose from to start a game. Is anyone doing this?
Steve Harman, a Subtext developer, writes up a helpful description of how to use the varyByCustom parameter of the OutputCache directive.
It is so easy to get caught up in your day to day work and home duties and forget to take a break to really cut loose. The answer, my friends, is that big red button in the image to the left there. That there initiates Party Mode! Set this sucker up in your office or home bedroom, and whenever life catches up to you... Instant Party!
This here is the invention of some MIT students who pimped their dorm room with an instant rave setup. We are talking six video cameras, electric blinds, lights, laser, LED screens, music server, voice activation, blacklights, fog machine, etc etc...
Scroll down to see a couple of videos they posted of the setup in action. Now all they have to do to complete the club experience is charge $5 for a bottle of water and $12 for a crappily mixed drink in a plastic cup. Brilliant!
Lest you think I sit around spending all my time on computer games and soccer, I also try to write occasionally.
Today, an article I have been working on for a while was finally published on DevSource. It is entitled A Developer’s Introduction to Microformats and attempts to present a clear introductory look at Microformats. This is my second article for DevSource, the first being one I helped that crazy Bob Reselman write.
I was fired up to write this article after attending the Mix06 conference. Hearing Bill Gates mention Microformats (whether O'Reilly fed it to him or not) highlights the fact that Microformats are poised to really take off. There are some detractors and potential real problems with syndicating Microformats, so it will be interesting to see how they are solved.
In any case, check it out and let me know what you think. Did I present it well?
And before I forget, big ups to the Microformats mailing list for helping me think through some of these topics I covered.
I once thought I was a bit of a blogging addict. To get settled into work I would read my blogs. I’d tune back in while eating my lunch. And if I went on vacation, I thought about the huge number of unread feeds. Heck, I even went and got involved in RSS Bandit and Subtext so that I could work on the means of delivering blogs.
But now I realized that my blogging addiction is merely the mild craving for milk after a cookie. I have discovered what true addiction is, and its name is Oblivion.
Steve Yegge was right when he said...
...if you’re not playing Oblivion, then I highly, nay strongly recommend that you don’t start, or you’ll suddenly develop an aversion to Real Life...
This is quite simply the best computer game I have ever had the pleasure to play. I remember spending hours as a kid playing such classics as the Phantasie, Ultima III, Ultima IV, The Bard’s Tale and Dungeon Master. Dungeon Master at the time elevated the FRPG genre for me because it was the first that really incorporated first person realtime playing. But I remember drawing up plans for the ultimate game. Apparently Bethesda swiped those plans from my brain and decided to do even better.
So why is this game so damn addicting? It is a combination of a lot of things really. First, the skill based system really seems to mean something. I remember there was never a point in playing a thief in most role playing games because you would just get killed first. Most games were simply hack and slash fight your way out of every situation.
But with Oblivion, you have the opportunity to really put those sneaking and lockpicking skills to good use in daring missions where simply blasting your way through really isn’t a good option. I also like the fact that lock-picking isn’t simply rolling a die and comparing it to a skill (though you can resort to that option). You have the ability to actually try and pick that lock.
If there were no other characters in the game, it would be like Myst, but with the ability to fully explore your environment. The scenery in this game is jaw dropping.
But ultimately, I think the open-ended gameplay really kicks it up a notch. After a short stint as a gladiator (got my ass handed to me), my character is now working his way up the Thieves Guild and trying to advance in the Mages Guild. At the beginning of the game, some important Emperor got shanked and I am supposed to deliver his amulet somewhere, but I sort of got sidetracked.
Now I am travelling around, checking out the scenery, and getting way too little sleep. I suppose I should look into delivering this amulet, but first I have some pilfered goods to fence off and I want to help this half-orc reclaim his heritage.
I write this blog post with apologies to Dale Carnegie for the play on the title of his book.
Today, Jeff Atwood writes about the difference between writing and copywriting. His essential point is that good copywriting is marketing and is boring. Good writing on the other hand is engaging and not boring. Understand the difference?
I think this dovetails nicely into another article I read recently at A List Apart entitled Calling All Designers: Learn to Write!
Derek Powazek points out that creating a good user experience goes beyond rounded corners and visual design. Good writing is an essential part of creating a great user experience. He cites Flickr as one example of getting it right. Rather than a button that says Submit, they have a button that says Get in there. That really is friendlier, isn’t it?
When you think about it, using plain casual English is much more natural for people to read. How often in the real-world do you hear people asking you to submit anything except when submitting a drug test or tax forms in triplicate?
So I took a look at my blog and noticed that in the front end, there is pretty much only one button that people use on a daily basis and it said Comment. So I changed it to Leave Your Mark and sat back waiting for the accolades to roll in on the improved user experience. Anybody hear crickets?
Well it is going to take more than changing a single button to improve the overall user experience here. I will actually have to start writing well and quit using this random copy generator. But these are definitely insights I want to take into consideration when I get around to tweaking and updating the admin interface to Subtext. What are areas in which we can improve the writing? How can we improve the user experience? Little touches add up to a lot in creating a great experience.
I recently set up Payroll via Paychex for my company. It is an eye opener to see exactly what taxes an employer pays on top of the taxes already deducted from each employee’s paycheck. I mean, I always heard that my employers were paying taxes for me when I was an employee, but I never knew how much. Till now.
This is helpful when figuring out your total compensation as it is part of the hidden cost of going into business for yourself. Of course, we are a C-Corp so these figures may be different for other types of businesses. I wouldn’t know and this does not qualify as tax advice.
State Unemployment: 0.8% (State of CA. This changes.)
Social Security has a wage base limit of $94,200. So if an employee makes more than that (including bonuses etc...), the employer will only be taxed 6.2% of $94,200.
Medicare has no wage base limit.
The last two taxes are only taxed on the first $7000 of wages per employee per year. So the employer pays 3.4% of $7000 for each employee assuming each makes $7000 or more a year.
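To see how the caps play out, here is a quick illustrative sketch in Python (my own numbers, not tax advice; the 1.45% Medicare rate is the standard employer rate, which I did not quote above, and I lump the unemployment taxes together at the 3.4% figure):

```python
def employer_payroll_tax(annual_wages):
    """Rough estimate of the employer-side payroll taxes for one
    employee, using the 2006-era figures discussed above."""
    social_security = 0.062 * min(annual_wages, 94200)  # capped at the wage base
    medicare = 0.0145 * annual_wages                    # no wage base limit
    unemployment = 0.034 * min(annual_wages, 7000)      # first $7,000 only
    return social_security + medicare + unemployment

# For a $60,000 salary, the employer pays roughly
# 3720 + 870 + 238 = $4,828 on top of the salary itself.
```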
So make sure these figure into your cash-flow estimates. Also, don’t forget that by law, most companies are required to carry Workman’s compensation insurance. That will cost you a small chunk of change per year as well.
So in the hustle and bustle of trying to get my Yahoo account back (it has been returned), I forgot to show some love for JackAce of the Code Turkey blog. He and I used to work at SkillJam and he was the one who alerted me via email that my account had been jacked.
In this post, he describes the general tactic that an Instant Messaging based attack takes to spread itself.
He also provides some tips to avoid phishing and talks about what to do if you are phished. Be careful out there.
Since I had a rough week last week, I thought I would post something fun today. While some people are just jumping on the dual-monitor bandwagon, I have recently moved on to three screens.
Of course that is not exactly true. The two screens on the right are attached to my new Dell Dimension 9150 workstation. The one on the left is attached to my old Shuttle system. That one is running the VMware Server that hosts Subtext’s CruiseControl.NET build server.
The only reason I got the third screen is that, because of a deal they were offering, it actually cost less on the lease to get this screen than not to get it. You can’t beat a deal like that!
Rather than using a KVM, I am using MaxiVista to remote control the other computer via the third monitor. That works pretty nicely, though MaxiVista seems to hiccup a lot.
When your company installs one of these babies for you.
Via Boing Boing
Come to think of it, I could have used that in the past.
So after getting my Yahoo password phished, my wife reminded me that we should put a fraud alert on our credit file. I first heard about this from my friend Walter a while ago, but we never got around to it.
This is a flag that the major credit bureaus (Experian, Equifax, and TransUnion) attach to your credit report. If someone (including yourself) tries to open a new credit account, the lender is supposed to (though not required by law) contact you by phone to make sure that you really do want to open a new account.
Keep in mind that this applies to applying for a new credit card, obtaining a car loan, purchasing a cell phone, etc...
Setting up a fraud alert is pretty easy. There are three major credit bureaus you can call, but I prefer to do these things online. If you go to https://www.experian.com/fraud/, you can apply for the initial security alert (90 days) via the internet. They will forward the alert to the other two credit bureaus, so you shouldn’t have to call them. One other benefit is that they let you print out your credit history online for free.
If you live in California, the protections are much better. According to California Law SB 168, you have the right to freeze your credit record at each bureau. This makes it impossible to issue credit in your name, even for someone armed with your name, address, Social Security Number, etc... To do this, you do need to contact each bureau in writing and send in $10.
For instructions on the benefits of a credit freeze and how to contact each credit bureau, check out this page on the Fight Identity Theft website.
Apparently similar laws apply in the following states at the time of this writing: CT, IL, LA, ME, NV, NC, TX, VT, and WA.
Prolific blogger Mr. Jeff Atwood, author of the CodingHorror blog, paid us a surprise visit last night. He is in town for a couple of days to do something or other unimportant. He tried to explain something about presenting Team System to important people but all I heard was “blah blah TS blah blah”.
After a fine dinner at the new Ford Filling Station (owned by Harrison Ford’s son) we gathered around the screen and had a chat with the not-so-prolific blogger lately, Jon Galloway.
So that there is Jeff on the left getting cozy with Jon on the right, who couldn’t make it in person but would like to thank the academy via live video feed courtesy of Skype™.
Jeff is one of the few people who regularly reads my blog through one of these antiquated mediums called a browser. Which is actually great since he gets to experience the very cool drop-shadow effects I apply to my photos. Go CSS!
After a bit of plotting to take over the planet and the typical jokes at each other’s expense, we all went our merry ways. Except for me, I live here.
Well this recent phishing attack is a clear demonstration of the inherent dangers of homogeneity. Biologists and epidemiologists have known this stuff for decades. Having given out my Yahoo password would have been much more disastrous if I were using Yahoo for my primary email address. Fortunately I use Gmail. Imagine the damage had I given out my Passport password. Egads!
Unfortunately I do use Yahoo Messenger. But I also use MSN and Skype. One password does not connect the bad guys with everything I use to communicate. But it is enough for them to do some damage. When you get an IM from a credible source, it is hard to resist. It naturally brings your defenses down. A clever example of social engineering.
UPDATE: I am back in business. I have re-obtained control over my Yahoo account. So the IM messages you receive from me are really from me. I won’t make this mistake twice.
Never operate a computer while sleep deprived. In fact, I am starting to think people should be licensed to get on the internet much like you do to drive a car. I am absolutely mortified to admit this, but I got suckered in a phishing attack that occurred via Yahoo Messenger.
I received an IM from a former boss with a link to a geocities photo gallery. When I clicked on the link, it looked just like a Yahoo photo gallery. Thinking (or rather not thinking), “Oh yeah, Yahoo owns Geocities now, right?” I logged in to see the photos. Big mistake. Right then I had the sneaking suspicion that I had done something painfully wrong.
And today, it was confirmed when a friend emailed me to tell me that I got my password jacked. If you see an IM from me or anyone with the link http://www.geocities.com/ladivabev/photos_pics.html (or rather any geocities link) DO NOT CLICK ON IT.
I cannot believe I fell for this. I am usually excellent at spotting and ignoring these, but everybody has their off days. And lately, I have had a string of them. I recently accidentally deleted all my backup data on my external hard-drive. Sleep deprivation is a killer.
And if you receive an IM or Yahoo message from me, please know it is not from me until further notice.
With many thanks to Simone Chiaretta (blog in Italian) for his effort, we now have a working CruiseControl.NET setup for Subtext. Check out the chrome (or lack thereof) on our CCNET dashboard.
Though we have some kinks to work out (the build is apparently broken according to CCNET), I am particularly happy about getting this up and running. It is part of our master plan to follow agile development practices, which are well suited to building a distributed open source project like Subtext. Continuous integration is especially important for us since we are spread across different time zones and locations.
The CCNet server is running on Windows 2003 within a VMWare Virtual Server on my old development workstation. That makes our build server very portable should we decide to host it elsewhere someday.
Once we get the kinks worked out, you can download the CCTray system tray applet and keep tabs on the development of Subtext. You’ll know exactly who broke the build and when. How is that for open source?
To get CCTray to work, make sure your firewall allows TCP traffic over port 21234. Then add the server build.subtextproject.com:21234.
Though for now, let’s be adults and keep the teasing to a minimum. I apparently broke the build, but I am betting it is a configuration issue with moving the virtual server from Italy to Los Angeles. Ciao!
Yes, yet again I have purchased tickets to Burning Man scheduled for August 28 through September 4, 2006. And you better believe I am bringing the prep back!
I must be an addict for pain, discomfort, and Playa dust to return a third time. But I had such a great time last time, and the time before, that I just couldn’t hold back. And this time, I am dragging my buddy (and business partner) Micah along. Still working on getting Kyle to come again.
This is a story of intrigue.
Ok, perhaps that is a bit overblown. This is really a story of schizophrenia. It is the story of a method, PageParser.GetCompiledPageInstance, that exhibits different behavior depending on whether or not you have the debug attribute set to true.
The problem first came up when deploying the most recent builds of Subtext with this attribute set to false. This was the natural response to Scott Guthrie’s admonishment, Don’t Run Production ASP.NET Applications with debug="true" enabled.
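For reference, the attribute in question lives on the compilation element in web.config; a minimal fragment looks like this:

```xml
<!-- web.config: debug="false" is the recommended production setting -->
<configuration>
  <system.web>
    <compilation debug="false" />
  </system.web>
</configuration>
```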
However, this affected Subtext in an unusual manner. Subtext employs a URL rewriting mechanism I wrote about before. It relies on an IHttpHandler that is created by calling GetCompiledPageInstance.
I will spare you all the details and cut to the chase.
GetCompiledPageInstance takes in three parameters:
- virtualPath (string)
- inputFile (string)
- context (HttpContext).
In the initial request to the Subtext root, the values for those parameters on my local machine are:
- virtualPath = "http://localhost/Subtext.Web/Default.aspx"
- inputFile = "c:\projects\Subtext.Web\DTP.aspx"
- context = (the current context passed in by the ASP.NET runtime)
The interesting thing to note is that there is an actual aspx file named Default.aspx located at http://localhost/Subtext.Web/Default.aspx. When the debug compilation option was set to true, this method would return a compiled instance of DTP.aspx (hence the URL rewriting). But when I set debug="false", it would return a compiled instance of Default.aspx. Holy moly!
I confirmed this by attaching a debugger and going through the process multiple times. Using Reflector, I started walking through the code for GetCompiledPageInstance until my eyes started to burst. There is a lot of machinery at work under the hood. I eventually found some code that appears to generate a URL path differently based on debugging options. Not sure if this was the culprit, but it is possible.
Setting debug="false" causes the runtime to perform batch compilation: a request for /Default.aspx will compile all *.aspx files in that folder into a single DLL. Setting debug to true causes ASP.NET to compile every page into its own assembly.
My fix is a bit of a hack until I can get a deeper understanding of what is really happening. As I see it, calling GetCompiledPageInstance with a virtualPath that points to one file while passing in a different physical file path as inputFile is causing some confusion, perhaps due to the batch compilation.
To remedy this, I simply added a check before we call GetCompiledPageInstance that looks at the end of the virtualPath for /Default.aspx (case insensitive, of course). If that string is found, the Default.aspx portion is truncated. That seems to do the trick for now, since this is pretty much the one place in which URL rewriting would attempt to rewrite a URL that itself points to a real page.
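To illustrate the shape of that check (sketched here in Python purely for illustration; the real code is C# inside Subtext’s URL rewriting handler, and the function name is my own invention):

```python
def normalize_virtual_path(virtual_path):
    """Strip a trailing Default.aspx (case-insensitively) from the
    virtual path before handing it to GetCompiledPageInstance, so the
    virtual path no longer points at a real page."""
    suffix = "/default.aspx"
    if virtual_path.lower().endswith(suffix):
        # Keep the trailing slash; drop only the "Default.aspx" part.
        return virtual_path[:len(virtual_path) - len(suffix) + 1]
    return virtual_path
```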
For a nice look under the hood regarding the debug compilation option, check out this post by Milan Negovan.
Please keep in mind that this is a separate issue from deploying your compiled assemblies in debug mode or with debug symbols. This has to do with the ASP.NET runtime compiling the ASPX files at runtime.
I am absolutely livid with my company’s bank right now and I need to blow off some steam. We had two recent deposits reversed because of a missing endorsement. This is odd because I am always careful to sign every check. Well it turns out that they changed their endorsement policy on March 31 and didn’t bother to notify us.
The problem is not that the new requirements are so onerous, they are not, but that without notification, I have no way of knowing the new requirements. Adding to the problem is that they mail the checks back (I live walking distance from our local Washington Mutual) and it has been a week already and we haven’t received our first check back. As any small business owner knows, cash flow is king. When the checks arrive is more important than the amounts of the checks.
I absolutely detest the horrendous level of service banks provide. When I moved to Los Angeles, I started with Bank of America and they were the absolute worst experience I have ever had. But WAMU is closing in on that record.
Well anyways, thanks for letting me blow some steam. I needed that.
UPDATE: I updated the article a bit to better explain expanding decimal numbers into negabinary.
As Scott’s script merrily iterates its way through the page’s elements, it checks the value of each element to see if the first character is a “-” (dash). And this works just fine for the majority of you people so thoroughly stuck on the “decimal” system.
But as I pointed out in his comments, this discriminates against negative base numbering systems such as ...drumroll... Negabinary!
Doesn’t negabinary sound like one in a long string of major villains to attack Godzilla and end up destroying Tokyo yet again?
Negabinary is a lot like binary’s evil twin. Rather than a base 2 system, negabinary is base -2. The beauty of negabinary is that there is no need for a negative sign (aka the sign bit). All integers, negative or positive, can be written as an unsigned stream of 1s and 0s.
To expand a decimal number into negabinary, you simply divide the number by -2 repeatedly. Each time you divide the number, you record the non-negative remainder of 0 or 1. Afterwards, you take those remainders in reverse order and there you have it, the negabinary expansion. Simple no?
Keep in mind that we are doing remainder division here. So -1/-2 is not one half, but 1 remainder 1. Likewise, 1/-2 is 0 remainder 1.
Keep in mind this simple algebraic formula: if a / b = c remainder d, then bc + d = a.
Thus, to expand decimal 2 in negabinary:
2 / -2 = -1 remainder 0
-1 / -2 = 1 remainder 1
1 / -2 = 0 remainder 1
Taking those remainders in reverse order we get 110. So 110 is the negabinary representation of decimal 2.
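The repeated division above is easy to code up. Here is a quick sketch in Python (my own illustration, not Scott’s script):

```python
def to_negabinary(n):
    """Expand an integer into negabinary (base -2) by repeatedly
    dividing by -2 and recording the non-negative remainder."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, remainder = divmod(n, -2)
        if remainder < 0:
            # divmod can hand back a negative remainder here; force it
            # non-negative and fix up the quotient so that bc + d = a.
            remainder += 2
            n += 1
        digits.append(str(remainder))
    # The remainders are read back in reverse order.
    return "".join(reversed(digits))

# to_negabinary(2) -> "110", matching the expansion above.
```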
I remember learning that there were computing systems built (perhaps experimental) that used negabinary instead of binary. Apparently there are benefits to representing a number without a sign bit. Unfortunately, like a good evil twin, negabinary makes arithmetic operations quite complicated.
I was going to write up a whole exposé on negabinary, but the Wikipedia did a much better job than I would have. My memory of my college math lectures on alternate numbering systems is pretty hazy. Throughout history, humans have tried out various numbering systems other than base 10. The Mayans used some sort of hybrid of base 20 and base 360. I kid you not.
So with a small alteration, we can adjust Scott’s script to accommodate negabinary enthusiasts.
Downtown Los Angeles experienced a huge march today to protest bill HR 4437 and support immigrant rights and immigration reforms. I had been waffling about attending since I really hate driving to downtown (bad traffic and parking), but realized that since both my mother and my wife are immigrants, I ought to come out and show some support. An IM from my friend Kyle telling me I won’t regret it also served to jolt me out of my complacency.
Besides, my wife works on the corner where the march starts so I could just park nearby, have lunch with her, and join in the march. So I hopped in my car, grabbed a white shirt for my wife (everyone was encouraged to wear white), and headed off to downtown. Traffic was actually better than I have ever seen it on the 10 East.
When I arrived, I was greeted with the sounds of helicopters hovering overhead and people cheering. I was then assaulted by the smell of street vendor cooking in the air which instantly made me hungry and ready to part with some money despite the boycott. I proceeded to walk right through the parade in order to get to Akumi’s office.
Once there, I ran up to the roof to take some photos.
The photos from the roof do not even begin to give you a sense of how many people were there. Multiple city blocks were chock full of people chanting, singing, and dancing. The air was electric.
Even the little ones were into it.
Though this one was tuckered out.
The crowd was primarily latino. I had hoped to see a more diverse crowd show up in support, but I did manage to find the one other white guy.
We took a shortcut to the end point of the march where everyone was gathering, but didn’t feel like braving the crowds much longer.
I stepped aside for a moment to get a better view for a picture and when I looked back, I could not see my wife. Since everyone was wearing white, it was easy for me to lose track of her. What was I going to do? Ask everyone if they’ve seen a woman wearing a white shirt, blue jeans, with black hair? That described half the entire crowd. It was a beautiful day out there.
UPDATE: I forgot to place a link to my photoset on Flickr. This contains more pictures that I took.