comments suggest edit

Just so we're all clear about this, the convenience of the CommentAPI, that nifty little service that allows users to post comments to your blog from the comfort of their favorite RSS aggregator, comes at a cost. Enabling the CommentAPI supplies a back door for comment spammers who want to bypass the CAPTCHA guard posted at the front door.

I was just chatting with Andrew about this and we realized it would be quite easy to add CAPTCHA support to the CommentAPI if we could get both RSS aggregator developers and blog engine developers to agree on how to update the CommentAPI to support a CAPTCHA image URL or a CAPTCHA text question. The RSS aggregator would then display this image or text and provide the user a field in the comment dialog to supply the answer to the CAPTCHA challenge, which the CommentAPI implementation would validate with the CAPTCHA control. Of course, this wouldn't close the back door for Trackbacks and Pingbacks.

In the meantime, I tend to favor non-CAPTCHA approaches to comment spam filtering for this very reason. I want to fight comment spam tooth and nail with every resource I have before I turn off the CommentAPI on my blog. Likewise, I still support Trackbacks because I personally have found them more beneficial than detrimental so far.

In any case, Subtext will provide configuration options to turn each of these services on or off individually so that users have full control of comment entry points.

comments suggest edit

UPDATE: I have recently posted a newer and better version of this code on my blog.

As I’ve stated before, I’m a big fan of completely self-contained unit tests. When unit testing ASP.NET pages, base classes, controls and such, I use the technique outlined by Scott Hanselman in this post. In fact, several of the unit tests for RSS Bandit use this technique in order to test RSS auto-discovery and similar features.

However, there are cases when I want a “lightweight” quick and dirty way to test library code that will be used by an ASP.NET project. For example, take a look at this very contrived example that follows a common pattern…

/// <summary>
/// Obtains some very important information.
/// </summary>
public SomeInfo GetSomeInfo()
{
  SomeInfo info = HttpContext.Current.Items["CacheKey"] as SomeInfo;
  if(info == null)
  {
    info = new SomeInfo();
    HttpContext.Current.Items["CacheKey"] = info;
  }
  return info;
}

The main purpose of this method is to get some information in the form of the SomeInfo class. Normally, this would be very straightforward to unit test except for one little problem. This method has a side-effect. Apparently, it’ll cost you to obtain this information, so the method checks the Context’s Items dictionary (which serves as the current request’s cache) first before paying the cost to create the SomeInfo instance. Afterwards it places that instance in the Items dictionary.

If I try and test this method in NUnit, I’ll run into a NullReferenceException when attempting to access the static Current property of HttpContext.

One option is to factor the logic sandwiched between the caching calls into its own method (something like the sketch below) and test that. But in this case, I want to test that the caching itself works and doesn't cause any unintended consequences.
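
For illustration only (CreateSomeInfo is a name I'm making up here), that first option might look something like this:

public SomeInfo GetSomeInfo()
{
  SomeInfo info = HttpContext.Current.Items["CacheKey"] as SomeInfo;
  if(info == null)
  {
    info = CreateSomeInfo();
    HttpContext.Current.Items["CacheKey"] = info;
  }
  return info;
}

// The extracted method is trivially testable without an HttpContext,
// but testing it tells you nothing about the caching behavior.
protected virtual SomeInfo CreateSomeInfo()
{
  return new SomeInfo();
}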

Another option is to fire up Cassini in my unit test, create a website that uses this code, and test the method that way, but that’s a “heavy” (and potentially very indirect) way to test this method.

As I stated before, I wanted a "lightweight" means to test this method. There wouldn't be a problem if HttpContext.Current were a valid instance of HttpContext. Luckily, the static Current property of HttpContext is both readable and writeable. All it takes is to set that property to a properly created instance of HttpContext. However, creating that instance wasn't as straightforward as I first assumed. I'll spare you the boring details and just show you what I ended up with.

I wrote the following static method in my UnitTestHelper class. The write statements to the console show the values of commonly accessed properties of the HttpContext. Note that this method could be made more general for your use; this is the version within Subtext.

/// <summary>
/// Sets the HTTP context with a valid simulated request
/// </summary>
/// <param name="host">Host.</param>
/// <param name="application">Application.</param>
public static void SetHttpContextWithSimulatedRequest(string host, string application)
{
  string appVirtualDir = "/";
  string appPhysicalDir = @"c:\projects\SubtextSystem\Subtext.Web\";
  string page = application.Replace("/", string.Empty) + "/default.aspx";
  string query = string.Empty;
  TextWriter output = null;

  SimulatedHttpRequest workerRequest = new SimulatedHttpRequest(appVirtualDir, appPhysicalDir, page, query, output, host);
  HttpContext.Current = new HttpContext(workerRequest);
  
  Console.WriteLine("Request.FilePath: " + HttpContext.Current.Request.FilePath);
  Console.WriteLine("Request.Path: " + HttpContext.Current.Request.Path);

  Console.WriteLine("Request.RawUrl: " + HttpContext.Current.Request.RawUrl);

  Console.WriteLine("Request.Url: " + HttpContext.Current.Request.Url);

  Console.WriteLine("Request.ApplicationPath: " + HttpContext.Current.Request.ApplicationPath);

  Console.WriteLine("Request.PhysicalPath: " + HttpContext.Current.Request.PhysicalPath);
}

You'll notice this code makes use of a class named SimulatedHttpRequest. This is a class that inherits from SimpleWorkerRequest, which itself inherits from HttpWorkerRequest. Using Reflector, I spent a bit of time looking at how the HttpContext class implements certain properties. This allowed me to tweak the SimulatedHttpRequest to mock up the type of request I want. The code for this class is…

/// <summary>
/// Used to simulate an HttpRequest.
/// </summary>
public class SimulatedHttpRequest : SimpleWorkerRequest
{
    string _host;

    /// <summary>
    /// Creates a new <see cref="SimulatedHttpRequest"/> instance.
    /// </summary>
    /// <param name="appVirtualDir">App virtual dir.</param>
    /// <param name="appPhysicalDir">App physical dir.</param>
    /// <param name="page">Page.</param>
    /// <param name="query">Query.</param>
    /// <param name="output">Output.</param>
    /// <param name="host">Host.</param>
    public SimulatedHttpRequest(string appVirtualDir, string appPhysicalDir,
        string page, string query, TextWriter output, string host)
        : base(appVirtualDir, appPhysicalDir, page, query, output)
    {
        if(host == null || host.Length == 0)
            throw new ArgumentNullException("host", "Host cannot be null nor empty.");
        _host = host;
    }

    /// <summary>
    /// Gets the name of the server.
    /// </summary>
    /// <returns></returns>
    public override string GetServerName()
    {
        return _host;
    }

    /// <summary>
    /// Maps the path to a filesystem path.
    /// </summary>
    /// <param name="virtualPath">Virtual path.</param>
    /// <returns></returns>
    public override string MapPath(string virtualPath)
    {
        return Path.Combine(this.GetAppPath(), virtualPath);
    }
}

Within the SetUp method of my TestFixture, I call this method like so…

[SetUp]
public void SetUp()
{
    _hostName = UnitTestHelper.GenerateUniqueHost();

    UnitTestHelper.SetHttpContextWithSimulatedRequest(_hostName, "MyBlog");
}
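
With the context in place, a test of the contrived GetSomeInfo method from earlier becomes a plain old NUnit test (assuming the method is reachable from the fixture; the test name here is just for illustration):

[Test]
public void GetSomeInfoCachesResultForTheCurrentRequest()
{
    SomeInfo first = GetSomeInfo();
    SomeInfo second = GetSomeInfo();

    Assert.IsNotNull(first);
    Assert.AreSame(first, second, "Expected the second call to return the cached instance.");
}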

Unfortunately, this so-called "lightweight" approach has its limits. Any call in your code to HttpContext.Current.Request.MapPath will throw an exception. I tried working around this, but it looks like I'm at an impasse. The MapPath method makes use of the HttpRuntime.AppDomainAppPath property, and unfortunately, I cannot simulate the HttpRuntime in a lightweight manner. There is a way to run the code being tested within an HttpRuntime, but that, of course, is the heavyweight Cassini method mentioned above.

comments suggest edit

Oh man, I have been head deep into “real” work lately, which explains the relative silence on my blog. In any case, it’s time to jump back in the fray with some light technical content.

Starting with version 2.2.1, the ever so handy NUnit unit-testing framework finally supports building custom test attributes. This allows you to create your own attributes that you can attach to tests to allow you to run custom code before and after a test and handle what to do if an exception is thrown or not thrown.

Roy Osherove, a unit-testing maestro, wrote up a simple abstract base class developers can implement that greatly simplifies the process of creating a custom test attribute.

In an MSDN article, Roy outlines various methods for dealing with database access within unit tests including a particularly promising method using COM+ 1.5. He mentions that he’s implemented a Rollback attribute for a custom version of NUnit he calls NUnitX.

Not wanting to run NUnitX, a custom implementation of NUnit, I quickly implemented a Rollback attribute for NUnit 2.2.1. Heck, it was quite easy considering that Roy did all the fieldwork.
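
To give you an idea of the shape of it, here is a simplified sketch rather than the exact code in the download. The CustomTestAttributeBase name and the AfterTestRun hook are my shorthand for Roy's abstract base class and its hooks (BeforeTestRun is the hook I mention below); the rollback itself is just a COM+ 1.5 transaction that always aborts:

// Requires a reference to System.EnterpriseServices.
[AttributeUsage(AttributeTargets.Method)]
public class RollbackAttribute : CustomTestAttributeBase
{
    public override void BeforeTestRun()
    {
        // Enlist everything the test does in a new distributed transaction.
        ServiceConfig config = new ServiceConfig();
        config.Transaction = TransactionOption.RequiresNew;
        ServiceDomain.Enter(config);
    }

    public override void AfterTestRun()
    {
        // Abort the transaction so any database changes the test made
        // are rolled back instead of committed.
        if(ContextUtil.IsInTransaction)
            ContextUtil.SetAbort();
        ServiceDomain.Leave();
    }
}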

Unfortunately, I ran into a few problems. The custom attribute framework is not quite fully baked yet. When you apply a custom test attribute, your attribute may break other attributes. Case in point: my ExpectedException attributes suddenly stopped working. Looking through the NUnit codebase, it appears that only the first attribute loaded handles the ProcessNoException and ProcessException method calls.

This is a known issue and an NUnit developer stated that he’s working on it. In the meanwhile, I worked around this issue with a beautiful kludge. I simply extended my Rollback attribute to absorb the functionality of an ExpectedException attribute. This is really ugly, but it does the job. So if you use this Rollback attribute, you can also specify an expected exception like so:

[Rollback(typeof(InvalidOperationException))]

I know, "ewwww!". This is essentially one attribute doing double duty. It would probably be better to name it the "RollbackExpectedException" attribute. But I hope to remove this functionality at a later date, when the custom attribute support in NUnit is more fully baked.

The second problem I ran into is that this approach enlists the native Distributed Transaction Manager for SQL Server (in my situation). In one project, I'm testing against a remote database and native transactions are turned off for security purposes. The solution in this case is to use TIP, or Transaction Internet Protocol. This requires modifying the "BeforeTestRun" method as follows (the TipUrl line is the addition):

ServiceConfig config = new ServiceConfig();
config.TipUrl = "http://YourTIPUrl/";
config.Transaction = TransactionOption.RequiresNew;
ServiceDomain.Enter(config);

So far, I haven’t been able to get our system administrator to enable TIP so I haven’t fully tested this last bit of chicanery.

In any case, make sure to read Roy’s articles noted above before downloading this code. I’ve made some slight modifications to his base class to reflect personal preferences. Let me know if you find this useful.

[Listening to: Wake Up - Rage Against The Machine - Rage Against The Machine (6:04)]

comments suggest edit

So it’s maybe thirteen years too late, but I finally have my first pair of Air Jordans. Yeah, they don’t have anywhere near the cachet they did back in the day, but hey, better late than never. Here’s a photo to commemorate the occasion.

Air Jordans

Michael thanks you… all the way to the bank.

I remember when I was a high-school ball player (made JV; at 5' 9", I was a bit short for Varsity), I positively drooled at the idea of owning a pair of Air Jordans when they came out. They defined hotness in the realm of basketball shoes, though they seemed to do little to actually improve my teammates' skills. I remember one point guard in particular who claimed the coveted title of being the "First" to own a pair (Remember how important it was to be the "First"? New toilet model coming out. I had it first!). He would show up to practice in head-to-toe Jordan gear, complete with the tongue sticking out, and proceed to build a very sturdy tenement with the bricks he tossed up.

For a military brat, these emblems of basketball glory were outside the realm of affordability. Now, as a hot-shot independent consultant, they are still outside the realm of affordability. The difference is that I have less sense, so I bought a pair anyway. They certainly are a lot cheaper than they were back in the day.

To justify this purchase, I give you a picture of my last pair. As you can see, I do wait till the last minute and beyond, before buying a new pair of shoes. Now excuse me while I purchase a new pair of Copa Mundials to replace my thoroughly trashed soccer cleats.

Ratty Shoe

Look what the cat dragged in.

personal comments suggest edit

This is what I was working on before the siren call of independent consulting lured me away.

The new product, which Philp projects to go live next month, will be known as SkillJam Mobile. For the initial product launch, SkillJam Mobile will be separate from the SkillJam.com site, giving users separate log-ons. Philp said the company hopes to combine the two sites in the future.

SkillJam Mobile will offer what Philp calls multipack gaming, an innovative concept in the mobile space. Most carriers have games on their systems but are able to deliver them only one at a time.

At the time that I left, much of the infrastructure work had been completed, though I made myself available to do some contract work to help finish up some of the loose ends, which they took advantage of. It’ll be interesting to see how well this does.

comments suggest edit

Blogging my way to PDC

This is my feeble attempt to win an all-expenses-paid trip to the PDC this year. All I have to do is explain why I want to attend. Unfortunately, I'll be judged on creativity, value to the community, and writing quality and style. It was my hope that my dashing good looks and slipping someone a freshly minted bill would suffice.

Well, the reason I want to attend is simple: it's to play XBox 360… AND to get a heads-up on the upcoming technology Microsoft is churning out. I'm particularly interested in future versions of VS.NET and ASP.NET. I hope to get some ideas to apply to Subtext.

As for sharing with the community, there’s this blog thing I have. I’ve recently inherited a video camera so I will very likely place videos and photographs on here, along with witty and insightful commentary (assuming I can hire that witty and insightful off-shore ghost writer). Like a relentless avalanche, I’ll go door to door and tell people in person if that’ll help.

Lastly, I'd like to drop the economic argument: I live in Los Angeles, so Microsoft can stay lean and mean and save a buck on the airfare voucher, though I'd gladly accept the hotel accommodations. Bill and Steve will pat you on the back for your resourcefulness.

personal, tech comments suggest edit

TRS-80 Color Computer

Via Rory's post here, I've discovered the Obsolete Technology Website.

The fact that this site evokes nostalgia only reinforces two facts about me: that I am a total geek and that I'm getting old. I love the write-up of my very first computer (which I still own and have lying around here somewhere).

The gray/silver color scheme was fetching for the original TRS-80 Model I computer, but it just doesn’t work on the Color Computer - it has to be one of the ugliest computers ever.

Ahhh yeah!

Commodore 128

Later, when several of my friends were riding the Commodore 64 wave, I jumped one step ahead with my second computer, the Commodore 128 (one-piece model). In a sense, this wasn't a step ahead at all, as the Commodore 128 was just a glorified Commodore 64 with a nicer looking case. Just about nobody jumped at the chance to write software that took advantage of the C128 mode or the CP/M mode. I pretty much spent most of the time using it in C64 mode.

Amiga 500

My third, and last, computer before switching over to the Wintel universe was every geek boy's wet dream at the time: the Amiga 500. Unfortunately, the site doesn't have a write-up of the Amiga 500 specifically, but you can read up on the Amiga 2000, which came out the same year.

Ahhh memories…

UPDATE 2013-06-24: Turns out I still have that TRS-80!

phil-with-trs-80

comments suggest edit

Looking at my SPAM filter, I notice that nearly a quarter of my emails appear to be from PayPal. Of course, these are all spoofed to appear that way.

If you get an email from PayPal, DO NOT CLICK ON ANY LINKS IN THE EMAIL!

Instead, fire up your browser of choice, and type in www.paypal.com in the address bar. Nearly all of these emails are fakes. Here’s an example of a particularly tricky one that raised alarms and almost caused a knee jerk reaction till I realized it was a fake. It played upon a simple fear.

You have added

brian12313@yahoo.com

as a new email address for your PayPal account.

 

If you did not authorize this change or if you need assistance with your account, please contact PayPal customer service at:

https://www.paypal.com/row/wf/f=ap_email

Thank you for using PayPal!

The PayPal Team

Please do not reply to this e-mail. Mail sent to this address cannot be answered. For assistance, log in to your PayPal account and choose the “Help” link in the header of any page.

PROTECT YOUR PASSWORD

NEVER give your password to anyone and ONLY log in at https://www.paypal.com/.

Protect yourself against fraudulent websites by opening a new web browser (e.g. Internet Explorer or Netscape) and typing in the PayPal URL every time you log in to your account.

 


PayPal Email ID PP007

This is a standard notice when adding a new email address to your PayPal account. What caught my attention is that the email address contains the name Brian. My brother is named Brian, so I instinctively wondered if he had made a mistake with his own PayPal account and added me as an address.

But soon, I realized that this has to be a scam, simply because EVERY email I seemingly get from PayPal appears to be a scam.

Notice the URL https://www.paypal.com/row/wf/f=ap_email so helpfully included to ostensibly help you contact PayPal customer service. In my email, this was a link. When I hovered my mouse over it, it displayed a completely different URL pointing at some server with the IP 220.80.212.211. A quick DNS lookup shows this is not a PayPal server.

In fact, EVERY link in the email points to that IP address, even the word brian12313@yahoo.com, which you would expect to be a mailto: link. Very sneaky.
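
If you're curious what that hover check amounts to, here's a crude sketch (a regex, not real HTML parsing) that lists each link's display text next to the URL it actually points to, which makes the mismatch jump out:

// Requires System and System.Text.RegularExpressions.
public static void ListLinks(string emailHtml)
{
    Regex anchor = new Regex(@"<a\s[^>]*href\s*=\s*[""']([^""']+)[""'][^>]*>(.*?)</a>",
        RegexOptions.IgnoreCase | RegexOptions.Singleline);

    foreach(Match match in anchor.Matches(emailHtml))
    {
        string href = match.Groups[1].Value;
        string text = Regex.Replace(match.Groups[2].Value, "<[^>]+>", string.Empty);
        Console.WriteLine("\"{0}\" --> {1}", text.Trim(), href);
    }
}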

comments suggest edit

I just finished reading part 2 of the Bill Venners interview with Erich Gamma and Erich so eloquently distills some of what I was trying to say in a recent post.

It’s interesting to note how thinking about building systems has changed in the ten years since Design Patterns was published. Bill Venners quotes the GOF book as saying

The key to maximizing reuse lies in anticipating new requirements and changes to existing requirements, and in designing your systems so they can evolve accordingly. To design a system so that it's robust to such changes, you must consider how the system might need to change over its lifetime. A design that doesn't take change into account risks major redesign in the future.

This is certainly something I was taught when I first started off as a developer, but I think it's becoming more and more clear that speculation carries a lot of risk and can be more harmful than helpful. I learned that the hard way, as clients are a fickle lot, and you can guess what they'll ask for next about as easily as you can guess the next super lotto numbers.

Erich's approach to building an extensibility model with Eclipse reflects how I try to approach projects I work on. In essence, experience a little pain (be it duplication, etc…) before refactoring with a pattern.

I eagerly anticipate part 3 of the interview. Be sure to also read Part 1 of the interview.

comments suggest edit

I've heard of writer's block, but never dealt with coder's block until today. Seriously, I've always been able to just unleash that kernel of code simmering inside in a big pop of keyboard slamming.

As an aside, my wife and her friend happened to walk in on a coding session one day and they remarked that they could easily do what I do. Why, I’m simply stabbing at the keys at random! They proceeded to mimic me jamming the keyboard as an insane pianist attempting to perform a Liszt piece at twice the speed might do. Hmm, hopefully my clients don’t find out and replace me with a monkey.

But today, alas, I’m tipping my head to the side, and nothing is pouring out. Zip. Guess it’s time to take a break and maybe buy some books.

comments suggest edit

I’ve been following with interest Shelley’s progress with WordForm, a blogging engine. I hadn’t realized that WordForm was a fork in WordPress until she recently mentioned it.

In this particular post she describes some of the work she's doing to handle metadata for images. She's extracting EXIF data from images and storing it as RDF statements in the database. She's also pulling EXIF data from Flickr via its RESTful API. This is some sweet stuff that I hope finds its way into Subtext sometime in the future, though we have more pressing immediate concerns.
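
I haven't looked at how WordForm actually does the extraction, but for the curious, reading EXIF properties in .NET doesn't take much code. Here's a minimal sketch using GDI+; 0x0110 is the standard EXIF property ID for the camera model:

// Requires System.Drawing, System.Drawing.Imaging and System.Text.
// Returns the camera model stored in the photo's EXIF data, or null
// if the photo doesn't carry that tag.
public static string GetCameraModel(string path)
{
    using(Image photo = Image.FromFile(path))
    {
        foreach(PropertyItem item in photo.PropertyItems)
        {
            if(item.Id == 0x0110)
                return Encoding.ASCII.GetString(item.Value).TrimEnd('\0');
        }
    }
    return null;
}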

I’ll just wait to see how it pans out for WordForm and we’ll STEAL STEAL STEAL (of course giving full credit and props back to Shelley). ;)

company culture comments suggest edit

In my limited experience so far, and from anecdotal evidence from nearly everyone I've ever met who had a boss at one time or another, managers as a whole still do not trust their employees. It's a real shame if you think about it, because the whole point of hiring employees is to scale up and create an infrastructure capable of handling more work (and ostensibly generating more profit) than you can handle on your own.

Instead, employees are often simply extensions of the boss, mere drones blindly following a script as if the boss were remotely controlling each one in a real-life game of The Sims. In order to herd these drones, bosses implement processes for the drones to follow. The end result is that overall productivity and customer satisfaction increase only a small amount with each new employee, while costs increase, creating a top-heavy organization.

Allow me to illustrate this point with something that occurred this past weekend, which serves as the source of this rant. I went to one of these newfangled "Destination" movie theaters to join some friends in watching Star Wars: Revenge of the Sith. This was the type of theater that compelled patrons to pay a premium for the convenience of assigned seating.

Upon arriving, a friend suggested we prepay $1.50 immediately for parking to get a discount. After doing so, we both realized we had made a mistake. With validation, parking is only $1.00 for four hours. We informed the young lady who marked our ticket as having been paid that we made a mistake, but she had no idea how to correct the situation. She merely assured us that if we get our tickets validated, we’ll be able to leave without having to pay again.

Well, I'm not one to be upset about 50 cents, so we left it at that, watched the movie, and then left. On my way out, I handed my ticket to the parking attendant. The ticket clearly displayed that I had already paid $1.50 for parking. When the attendant put the ticket into the system, it showed that I had validated the ticket as well. Good, so there's no problem, I thought.

The attendant then proceeded to inform me that his screen said I owed $4.50 for parking. I chuckled to myself, thinking, "Cool, we've uncovered a bug in the system that hadn't been anticipated by the QA team. How neat." Unfortunately, the attendant couldn't make the obvious call. It seemed awfully clear to me. The rules state that with validation, parking is only one dollar. His screen clearly showed that I had been at the theater less than four hours, that I had indeed validated my ticket, and that I had already paid more than one dollar.

Unfortunately, this attendant's training hadn't prepared him to make a freaking decision. Instead, I sat there waiting for him to find out the name of his supervisor from the other attendants (how did he not know this?) and then get permission from the supervisor.

You see, unless employees are trusted with decision making, they won’t make a decision. Instead, they’ll blindly follow a process and then become paralyzed when they uncover a glitch in the system. And there’s always a glitch in the system.

Instead, all that is needed is to provide employees with a vision and a set of principles and then empower them to make decisions. Give them the freedom to make mistakes and learn from them. In this particular case, the simple principle of trying to maintain customer satisfaction should have sufficed. It does not lead to customer satisfaction to make a customer wait several minutes to leave, with a line of cars behind him, after he has already paid for parking. The cost of a mistake here is very low, even if I really hadn't paid for parking. But the cost in the case that I had paid and was unhappy about being delayed (it was near midnight) is a dissatisfied customer. And trust me, you're not doing so well that you can afford to alienate customers.

In this scenario, it was a small incident, nothing business threatening. But scale it up a notch, and you begin to realize why so many companies falter with head strong leadership and unempowered employees.

comments suggest edit

I know it's been around a good while now and has been the darling of the blogging community for all that time, but I only recently started to play with Flickr. My initial resistance was due to my complete dissatisfaction with other online photo management tools such as oFoto, Yahoo Photos, SnapFish, etc.

However, after spending only a few moments with Flickr, I can see that it has put a lot of thought into photo management in an effort to get it right. It's so good that I am reconsidering whether I even need desktop photo management software. I probably won't give up Photoshop Album just yet, since I don't want EVERY photo online. Besides, you never know when a company will go out of business, taking my photos with it. However, my top feature request for the next version is Flickr integration.

I’ve been emailing some friends trying to get them to join. My photos are located at http://flickr.com/photos/haacked/. Feel free to add your own tags if you have relevant information.

There are two things I love about Flickr so far: its social tagging model (I can allow anyone to add tags to my photos, rather than trying to organize everything myself, and I can add tags to my friends' photos) and its API. I haven't played with the API directly, but the fact that there are some really cool tools for uploading photos quickly and easily is evidence that they've really thought through how to let others extend Flickr.
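
To give a sense of how approachable the API is, here's a rough sketch of calling one of its documented REST methods directly. You'd need your own API key; flickr.people.getPublicPhotos returns a user's public photo list as XML:

// Requires System.Xml. The API key and user id values are placeholders.
public static XmlDocument GetPublicPhotos(string apiKey, string userId)
{
    string url = "http://www.flickr.com/services/rest/"
        + "?method=flickr.people.getPublicPhotos"
        + "&api_key=" + apiKey
        + "&user_id=" + userId;

    // XmlDocument can load the response straight from the URL.
    XmlDocument doc = new XmlDocument();
    doc.Load(url);
    return doc;
}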

So give it a shot, and try not to waste too much time playing with it when you should be working.

code comments suggest edit

Many developers, especially those fresh out of college (though older developers are just as prone), fall into the trap of believing in an absolute concept of “the perfect design”. I hate to break such youthful idealism, but there’s just no such thing.

Design is always a series of trade-offs in an arduous struggle to implement the best solution given a set of competing constraints. And there are always constraints.

Not too long ago, I had an interesting discussion with a young developer who was unhappy with the design of a project he was working on. This project had a very aggressive schedule, and he complained about the poor design of the system.

“So why do you think it is poorly designed? The system appears to have met the requirements, especially given the short time constraint,” I asked him. He explained how he would have preferred a system that abstracted the data access via some form of Object Relational Mapping, rather than simply pulling data from the table and slapping it on a page via data binding. He also would have liked to clean up the object model. It wasn't, in his mind, “good design”.

I pointed out that it also wouldn’t have been good design to spend time choosing and getting up to speed with an ORM tool, only to deliver the software late (which was not an option). Sure, the code would have been well factored, but we had a hard deadline, and missing it would have been a huge burden on the company.

I suggested to him that constraints are necessary for a software project. I told him,

If a project doesn’t have a time constraint, it will never get finished.

That lit a lightbulb for this developer.

That explains why I never finish my personal projects.

Absolutely! With no time constraint, this developer would spend more and more time chasing that elusive goal of the “perfect design”. But that goal will never be reached, because perfect design is asymptotic. You can get infinitely close, but you can never reach it.

In the end, I told the developer that he’ll have the opportunity to refactor the code into a better design in the second phase of the project, as the time constraint is no longer so aggressive. I also suggested he skim Small Things Considered: Why There Is No Perfect Design by Henry Petroski. The book makes its main point in the first chapter, that design is about compromise and managing trade-offs to meet constraints. The rest of the book is a tour of various design decisions in history that illustrate this central theme.