comments edit

When I was a bright-eyed, bushy-tailed senior in college, I remember wading through pages and pages of job ads in Jobtrak (which has since been absorbed into Monster.com).

Most of the ads held my attention in the same way reading a phone book does. The bulk of them had something like the following format.

Responsibilities:

Design and develop data-driven internet based applications that meet functional and technical specifications. Use [technology X] and [technology Y] following the [methodology du jour] set of best practices. Create documentation, review code, and perform testing.

Required Skills and Experience:

Must have X years in language [now outdated language]. Must be proficient in BLAH, BLAH, and BLAH. Ability to work in a team environment. Able to work in a fast-paced [meaning we’ll work your ass off] environment.

I know what you’re thinking. Where do I sign up!

Yaaaaawn. Keep in mind, this was in 1997 just as the job market was starting to reach the stratosphere. Competition was tight back then. Do a search on Dice.com right now and you’ll still see a lot of the same.

I'm sorry, but your job posting is not the place to spew forth a list of "Must have this" and "Must have that" and a list of responsibilities so plain and vanilla that…that… I just don't have a good analogy for it. Sorry.

These types of ads attempt to filter out candidates who do not meet some laundry list of requirements. But that is not the goal of a good job ad. A good job ad should not explain what hoops the candidate must jump through to join your company; it should explain why the candidate should even want to jump through those hoops in the first place.

This of course assumes you are attempting to hire some star developers away from their current jobs rather than resume padders who have spent most of their careers in training classes so they can place your laundry list of technology TLAs underneath their name on their resume.

Certainly, a job posting should explain briefly the type of work and experience desired to fill the role. No point in having a person who only has experience in sales and marketing applying for your senior DBA position (true story). But first and foremost, you want to catch the attention of a great candidate. Boring job ads that read like the one above do not capture the imagination of good developers.

Back to my story. As I said, most of the ads fit this mold, but there were a few here and there that popped off the screen. I wish I had saved the one that really caught my attention. It was from a small company named Sequoia Softworks (which is still around, but now named Solien). My memory of it is vague, so I'll just make something up that resembles the spirit of the ad. All I remember is that it started off by asking questions.

Are you a fast learner and good problem solver? Are you interested in solving interesting business problems using the latest web technologies and building dynamic data driven web applications?

We’re a small web development company in Seal Beach (right by the beach in fact!) looking for bright developers. We have a fun casual environment (we sometimes wear shorts to work) with some seriously interesting software projects.

Experience in Perl, VBScript, and SQL is helpful, but being a quick learner is even more important. If you’ve got what it takes, give us a call.

I ended up working there for six years, moving up the ranks to become the Manager of Software Development (never once writing a line of Perl). They did several things right in their ad, as I recall.

Challenge the reader and demand an answer!

"Do you have two years of experience in C#?" is not a challenge to the reader. It is not a question that captures my attention or draws me in, demanding an answer.

"Do you know C# like Paris Hilton knows manufacturing fame?" Now that is a challenge to my intellect! Hell yeah, I know C# like nobody's business. That kind of question demands an answer. And a good candidate is more likely to drive over to your headquarters and give it to you.

Appeal to vanity

Not every appeal to vanity is a bad thing. It doesn’t always amount to sucking up. This point is closely related to the last point in that an appeal to vanity is also a challenge to a candidate to show just how great they are. Asking someone if they are a good problem solver, for example, conjures up a desire to prove it.

Show some personality

Sure, many corporations seem like soulless cubicle farms in which workers are seen as mindless drones. But surely not your company, right? So why does your job posting read like a tombstone?

Who wants to be another cog in a machine performing mundane tasks for god knows what reason? Your ad should express a bit of your company’s personality and culture.  It should also indicate to the reader that people who come to work for you are going to work with people. Interesting people. And they are going to work on interesting projects.

I write all this because of an article I read about business schools. It included a throwaway quote in which an employer mentioned how a new hire fresh out of business school helped rewrite some job postings, and positions they had been struggling to fill were quickly filled with high-quality candidates.

A well-written job posting makes a difference.

I mentioned before that I am participating in the HiddenNetwork job board network because I really believe in the power of blogs to connect good people with good jobs. It’s in its nascent stages so I really don’t know how well it is fulfilling that mission yet, but I believe that it will do well.

If you do post a job via my blog (yes, I get a little something something if you do), be sure to make it a good posting that really captures the imagination of good developers (as all my readers are! See? Appeal to vanity.). It's even more of a challenge given how few words you have at your disposal for these ads.

comments edit

In a recent post I ranted about how ASP.NET denies WebPermission in Medium Trust. I also mentioned that there may be some legitimate reasons to deny this permission based on this hosting guide.

Then Cathal (thanks!) emailed me and pointed out that the originUrl attribute does not take wildcards; it takes a regular expression.

So I updated the <trust /> element of web.config like so:

<trust level="Medium" originUrl=".*" />

Lo and Behold, it works! Akismet works. Trackbacks work. All in Medium Trust.

Of course, a hosting provider can easily override this as Scott Guthrie points out in my comments. I need to stop blogging while sleep deprived. I have a tendency to say stupid things.

Now a smart hosting company can probably create a custom Medium Trust policy to make sure this doesn't work, but as far as I can tell, this completely works around the whole idea of denying WebPermission in Medium Trust.
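To see why the regular expression matters: the default Medium Trust policy file (web_mediumtrust.config) grants WebPermission connect access to whatever $OriginHost$ expands to, and ASP.NET substitutes the originUrl value into that placeholder. From memory, the relevant entry looks roughly like this, so treat it as a sketch rather than gospel:

<IPermission
  class="WebPermission"
  version="1">
  <ConnectAccess>
    <URI uri="$OriginHost$"/>
  </ConnectAccess>
</IPermission>

Set originUrl to ".*" and that single URI entry effectively matches everything.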

If I can simply add a regular expression to allow all web requests, what’s the point of denying WebPermission?

comments edit

Tyler, an old friend and an outstanding contractor for VelocIT, recently wrote a post suggesting one would get better performance by passing an array of objects to the Microsoft Data Access Application Block methods rather than an array of SqlParameter instances. He cited this article.

The article suggests that instead of this:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlParameter[] parameters = new SqlParameter[]
  {
    new SqlParameter("@id", aUser.Id)
    , new SqlParameter("@name", aUser.Name)
    , new SqlParameter("@email", aUser.Email)
    , new SqlParameter("@lastLogin", aUser.LastLogin)
    , new SqlParameter("@lastLogOut", aUser.LastLogOut)
  };

  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , parameters);
}

You should do something like this for performance reasons:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , aUser.Id
    , aUser.Name
    , aUser.Email
    , aUser.LastLogin
    , aUser.LastLogOut);
}

Naturally, when given such advice, I fall back on the first rule of good performance from the perf guru himself, Rico Mariani: Rule #1 is to measure. So I mentioned to Tyler that I'd love to see metrics on both approaches. He posted the results on his blog.
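If you want to run a comparison like that yourself, a rough harness is all it takes. Here is a minimal sketch of the idea (the CreateTestUser helper is hypothetical, and GetWithSqlParams is the method shown above wired to a test database):

SystemUser user = CreateTestUser(); // hypothetical helper to build test data

// Time 5000 calls using the explicit SqlParameter approach.
System.Diagnostics.Stopwatch watch = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 5000; i++)
{
  GetWithSqlParams(user);
}
watch.Stop();
Console.WriteLine("With parameters took {0} milliseconds.",
  watch.Elapsed.TotalMilliseconds);

Repeat the same loop for the object-array version and compare the two numbers.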

Calling the methods included in a previous post, 5,000 times each:

With parameters took 1203.125 milliseconds. With objects took 1250 milliseconds. Objects took 46.875 milliseconds more.

20,000 times each:

With parameters took 4859.375 milliseconds. With objects took 5015.625 milliseconds. Objects took 156.25 milliseconds more.

The results show that the performance difference is negligible. However, even before seeing the performance results, I would agree with the article to choose the second approach, but for different reasons. It results in a lot less code. As I have said before, Less code is better code.

I prefer to optimize for productivity all the time, but to optimize for performance only after carefully measuring for bottlenecks.

There’s also a basic economics question hidden in this story. The first approach does seem to eke out slightly better performance, but at what cost? That’s a lot more code to write to eke out 47 milliseconds worth of performance out of 5000 method calls. Is it really worth it?

This particular example may not be the best illustration of the principle of wasting time on optimization at the expense of productivity, because there is one redeeming factor worth mentioning about the first approach.

By explicitly specifying the parameter names, the parameters can be listed in any order, whereas the second approach requires that the parameters be listed in the same order in which they are specified in the stored procedure. Based on that, some may find the first approach preferable. Me, I prefer the second approach because it is cleaner and easier to read, and I don't see keeping the order intact as much more work than getting the parameter names correct.

But that’s just me.

comments edit

Image source: http://macibolt.hu/pag/goldilock.html

This is a bit of a rant born out of some frustrations I have with ASP.NET. When setting the trust level of an ASP.NET site, you have the following options:

Full, High, Medium, Low, Minimal

It turns out that many web hosting companies have chosen to congregate around Medium Trust as a sweet spot: tightened security while still allowing decent functionality. Only natural, as it is the one in the middle.

For the most part, I am sure there are very good reasons behind which permissions make it into Medium Trust and which do not. But some decisions seem rather arbitrary. For example, WebPermission. Why couldn't that be part of the default Medium Trust configuration? I mean really? Why not? (OK, there are really good reasons, but remember, this is a rant, not careful analysis. Bear with me. Let me get it off my chest.)

Web applications have very good reasons to make web requests (ever hear of something called a web service? They may take off someday), and how damaging is that really going to be to a hosting environment? I mean, put a throttle on it if you are that concerned, but don't rule it out entirely!

I really do want to be a good ASP.NET citizen and support Medium Trust with applications such as Subtext, but what a huge pain it is when some of the best features do not work under Medium Trust. For example, Akismet.

Akismet makes a web request in order to check incoming comments for spam. I tried messing around with wildcards for the originUrl attribute of the <trust /> element, but they don’t work. In fact, I only found a single blog post that said it would work, but no documentation that backed that claim up.

Instead, you need access to the machine.config file (as the previously linked blog post describes), which no self-respecting host is going to just give you willy-nilly. Nope. In order to get Akismet to work under Medium Trust, I have to tell Subtext users that they must beg, canoodle, and plead with their hosting provider to update the machine.config file to allow unrestricted access to the WebPermission. Good luck with that.

If the host won't grant unrestricted access, then they need to add an originUrl entry for each URL you wish to request. Hopefully machine.config entries do allow wildcards, because the URL for an Akismet request includes the Akismet API key. Otherwise, running an Akismet-enabled multiple-user blog in a custom Medium Trust environment would be a royal pain.
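For reference, such an entry in a custom trust policy file would look roughly like the following; the Akismet host pattern here is just an illustration of the idea, not a tested configuration:

<IPermission
  class="WebPermission"
  version="1">
  <ConnectAccess>
    <URI uri="http://.*\.rest\.akismet\.com/.*"/>
  </ConnectAccess>
</IPermission>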

Hopefully you can see the reason behind all my bitching and moaning. A major goal for Subtext is to provide a really simple and easy installation experience, at least as easy as installing a web application can be. Having an installation step that requires groveling does not make for a good user experience. But then again, security and usability have always been in tension.

Scott Watermasysk points out a great guide to enabling WebPermission in Medium Trust for hosters. So if you're going to be groveling, at least you have a helpful guide to give them. The guide also points out the security risks involved with Medium Trust.


tech comments edit

For a project I worked on, we had an automated build server running CruiseControl.NET hosted in a virtual machine.  We did the same thing for Subtext (project is dead). 

Some of you may have multiple virtual servers running on the same machine.  Typically in such a setup (at least typically for me), each virtual server won’t have its own public IP Address, instead sharing the public IP of the host computer.

This makes it a tad bit difficult to manage the virtual servers since using Remote Desktop to connect to the public IP will connect to the host computer and not the virtual machine.  The same thing applies to multiple real computers behind a firewall.

One solution (and the one I use) is to set up each virtual server to run Terminal Services, but have each one listen on a different port. Then set up port forwarding on your firewall to forward requests for the respective ports to the correct virtual machine.

Configuring the Server

The setting for the Terminal Services port lives in the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Open up Regedit, find this key, and look for the PortNumber value.

PortNumber Setting

Double-click on the PortNumber setting and enter the port number you wish to use. Unless you think in hex (pat yourself on the back if you do), you might want to click on Decimal before entering your new port number.

Port Number Value Dialog

Or, you can use my creatively named Terminal Services Port Changer application, which is available with source on GitHub. This is a simple five-minute application that does one thing and one thing only: it lets you change the port number that Terminal Services listens on.

Terminal Services Port Changer

Remember, all the usual caveats apply about tinkering with the registry. You do so at your own risk.
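For the curious, the whole trick boils down to writing a single DWORD value, so a tool like this is tiny. Here is a minimal C# sketch (the class name is mine, the change needs administrative rights, and it only takes effect after the Terminal Services service or the machine restarts):

using Microsoft.Win32;

public static class TerminalServicesPort
{
  // Writes the PortNumber value that Terminal Services reads at startup.
  public static void Change(int newPort)
  {
    const string keyPath =
      @"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp";

    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath, true))
    {
      key.SetValue("PortNumber", newPort, RegistryValueKind.DWord);
    }
  }
}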

Connecting via Remote Desktop to the non-standard Port

Now that you have the server all set up, you need to connect to it. This is pretty easy. Suppose you change the port for the virtual machine to listen on port 3900. You simply append ":3900" to the server name (or IP) when connecting via Remote Desktop.

Port Changer App Screenshot
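The same thing works from the command line, too (server name hypothetical):

mstsc /v:myserver:3900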

Keep In Mind

Keep in mind that the user you attempt to connect with must have the "log on interactively" right as well as permission to log on to the Terminal Services session. For more on that, check out this extremely helpful article with its troubleshooting section.

That’s pretty easy, no?  Now you should have no problem managing your legions of virtual servers.


comments edit

Just upgraded my blog to the latest version of Subtext in the Subversion 1.9 branch, not that you needed to know that. I'd just appreciate you letting me know, via my Contact page, if you run into problems leaving a comment and such.

Before I release 1.9.2 (long story why that ends with a 2 and not a 1), I need to update the Contact page so that the spam filters also apply to it.

comments edit

Weird how coding on Subtext relaxes me. For the past couple of days I've been feeling a bit under the weather and getting worse. The weird part is that anytime I try to eat something, there's a terrible aftertaste. And no, it's not my breath. I couldn't finish my pizza tonight. Pizza!

Anyways, I couldn't sleep tonight, so I figured hacking on some Subtext code might relax me. I fixed some bugs and implemented FeedBurner support in Subtext, using the DasBlog code as a guide, though the way we implement feeds is quite different. It'll come out in the next edition of Subtext.

Unfortunately, it may take me longer to release because although coding on Subtext feels good when I’m sick, QA testing doesn’t.

comments edit

Quick tip for you if you need to remotely connect to a server with VMWare Server installed in order to manage the virtual server. 

VMWare Server Console doesn’t work correctly if you Remote Desktop or Terminal in. You have to physically be at the machine or Remote Desktop into the Console session.
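Connecting to the console session is just a switch on the Remote Desktop client. For example (server name hypothetical):

mstsc /v:myserver /console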

The symptoms I ran into were that I could not open a virtual machine, and when I tried to create a new one, I got an "Invalid Handle" error.

Technorati Tags: Tips

comments edit

Recently I wrote a .NET-based Akismet API component for Subtext. In attempting to make as clean an interface as possible, I made the property that stores the commenter's IP address of type IPAddress.

This sort of falls in line with the Framework Design Guidelines, which mention using the Uri class in your public interface rather than a string to represent a URL. I figured this advice applied equally to IP addresses.

To obtain the user’s IP Address, I simply used the UserHostAddress property of the HttpRequest object like so.

HttpContext.Current.Request.UserHostAddress

The UserHostAddress property is simply a wrapper around the REMOTE_ADDR server variable which can be accessed like so.

HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]

For users behind a proxy (or router), this returns only one IP Address, the IP Address of the proxy (or router).  After some more digging, I learned that many large proxy servers will append their IP Address to a list maintained within another HTTP Header, HTTP_X_FORWARDED_FOR or HTTP_FORWARDED.

For example, if you make a request from a country outside of the U.S., your proxy server might add the header HTTP_X_FORWARDED_FOR and put in your real IP and append its own IP Address to the end. If your request then goes through yet another proxy server, it may append its IP Address to the end.  Note that not all proxy servers follow this convention, the notable exception being anonymizing proxies.

Thus to get the real IP address for the user, it makes sense to check the value of this first:

HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]

If that value is empty or null, then check the UserHostAddress property.

So what does this mean for my Akismet implementation?  I could simply change that property to be a string and return the entire list of IP addresses.  That’s probably the best choice, but I am not sure whether or not Akismet accepts multiple IPs.  Not only that, I’m really tired and lazy, and this change would require that I change the Subtext schema since we store the commenter’s IP in a field just large enough to hold a single IP address.

So unless smarter people slap me upside the head and call me crazy for this approach, I plan to look at the HTTP_X_FORWARDED_FOR header first and take the first IP address in the list, if there are any. Otherwise, I will grab the value of UserHostAddress. As far as I am concerned, it's not that important that I be 100% accurate in identifying the remote IP; I just need something consistent to pass to Akismet.
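In code, the approach I'm describing amounts to something like this little sketch (not necessarily what will land in Subtext verbatim):

using System;
using System.Web;

public static class RequestHelper
{
  // Prefer the first address in HTTP_X_FORWARDED_FOR (by convention the
  // originating client), falling back to UserHostAddress.
  public static string GetClientIpAddress(HttpRequest request)
  {
    string forwardedFor = request.ServerVariables["HTTP_X_FORWARDED_FOR"];
    if (!String.IsNullOrEmpty(forwardedFor))
    {
      // The header may contain a comma-separated list of addresses.
      return forwardedFor.Split(',')[0].Trim();
    }
    return request.UserHostAddress;
  }
}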

code, tdd comments edit

UPDATE: I’ve since supplemented this with another approach.

Jeremy Miller asks the question, “How do you organize your NUnit test code?”.  My answer? I don’t, I organize my MbUnit test code.

Bad jokes aside, I do understand that his question is more focused on the structure of unit testing code and not the structure of any particular unit testing framework.

I pretty much follow the same structure that Jeremy does in that I have a test fixture per class (sometimes more than one per class for special cases).  I experimented with having a test fixture per method, but gave up on that as it became a maintenance headache.  Too many files!

One convention I use is to prefix my unit test projects with "UnitTests". Thus the unit tests for Subtext are in the project UnitTests.Subtext.dll. The main reason for this, besides the obvious fact that it's a sensible name for a project that contains unit tests, is that for most projects, the unit test assembly then shows up at the bottom of Solution Explorer thanks to alphabetic ordering.

So then I co-founded a company whose name starts with the letter V. Doh!

UPDATE: I neglected to point out (as David Hayden did) that with VS.NET 2005 I can use Solution Folders to group tests. We actually use Solution Folders within Subtext. Unfortunately, much of my company's work is still in VS.NET 2003, which does not boast such a nice feature.

One thing I don’t do is separate my unit tests and integration tests into two separate assemblies.  Currently I don’t separate those tests at all, though I have plans to start. 

Even when I do start separating tests, one issue with having unit tests in two separate assemblies is that I don’t know how to produce NCover reports that merge the results of coverage from two separate assemblies.

One solution I proposed in the comments to Jeremy’s post is to use a single assembly for tests, but have UnitTests and Integration Tests live in two separate top level namespaces.  Thus in MbUnit or in TD.NET, you can simply run the tests for one namespace or another.

Example Namespaces: Tests.Unit and Tests.Integration
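In other words, the layout inside the single test assembly would look something like this (fixture names purely illustrative):

using MbUnit.Framework;

namespace Tests.Unit
{
  [TestFixture]
  public class EntryTests
  {
    // Fast, isolated tests with no external dependencies go here.
  }
}

namespace Tests.Integration
{
  [TestFixture]
  public class BlogDataProviderTests
  {
    // Tests that hit the database or file system go here.
  }
}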

In the root of a unit test project, I tend to have a few helper classes such as UnitTestHelper, which contains static methods useful for unit tests. I also have a ReflectionHelper class, just in case I need to “cheat” a little. Any other classes I might find useful typically go in the root, such as my SimulatedHttpRequest class as well.

comments edit

The ever-prolific Jon Galloway has released another tool on our tools site. When we started the tools site, I talked some trash to spur some friendly competition between the two of us. Let's just say Jon is kicking my arse so hard my relatives in Korea can't sit down.

His latest tool, RegmonToRegfile, works with yet another Sysinternals tool, Regmon.

Winternals (maker of the Sysinternals tools) released many fantastic tools for managing and spelunking your system.

So great, in fact, that Robb feels he owes them his child in gratitude.

Kudos to Microsoft for snatching up Mark Russinovich and Winternals Software.

Regmon is essentially a Tivo for your registry, allowing you to record and play back changes to the registry.

Regmon lacks the ability to export to a registry (.reg) file, which is where Jon's tool comes into play. It can parse Regmon log files and translate them into .reg files.

Here is a link to Jon’s blog post on this tool.

 

comments edit

Jeff Atwood writes a great rebuttal to Steve Yegge’s rant on Agile methodologies.  I won’t expound on it too much except to point out this quote which should be an instant classic, emphasis mine:

Steve talks about “staying lightweight” as if it’s the easiest thing in the world, like it’s some natural state of grace that developers and organizations are born into. Telling developers they should stay lightweight is akin to telling depressed people they should cheer up.

Heh heh.  Jeff moves from a Coding Horror to a Coding Hero.

Now while I agree much of it is religion, I like to think that the goal is to remove as much religion from software development as possible. A key step, as Jeff points out, is recognizing which aspects are religion.

…the only truly dangerous people are the religious nuts who don’t realize they are religious nuts.

It's like alcoholism. You have to first accept that you are an alcoholic. But then, once you recognize that, you strive to make changes.

For example, Java vs .NET is a religious issue insofar as one attempts to make an absolute claim that one is superior to the other. 

However it is less a religious issue to say that I prefer .NET over Java for reasons X, Y, and Z based on my experience with both, or even to say that in situation X, Java is a preferred solution.

Likewise, just because double-blind tests are nearly impossible to conduct does not mean that we cannot increase the body of knowledge of software engineering. 

For the most part, we turn to the techniques of social scientists and economists by poring over historical data and looking at trends to extrapolate what information we can, with appropriate margins of error. 

Thus we can state, with a fair degree of certainty, that:

Design is a complex, iterative process. Initial design solutions are usually wrong and certainly not optimal.

That is fact 28 of Facts and Fallacies of Software Engineering by Robert L. Glass, who is by no means an agile zealot.  Yet this fact does show a weakness in waterfall methodologies that is addressed by agile methodologies, a differentiation that owes more to the scientific method than pure religion.

comments edit

Personal matters (good stuff) and work have been keeping me really busy lately, but every free moment I get I plod along, coding a bit here and there, getting Subtext 1.9.1 "Shields Up" ready for action.

There were a couple of innovations I wanted to include in this version as well as a TimeZone handling fix, but recent comment spam shit storms have created a sense of urgency to get what I have done out the door ASAP.

In retrospect, as soon as I finished the Akismet support, I should have released.

I have a working build that I am going to test on my own site tonight.  If it works out fine, I will deploy a beta to SourceForge.  This will be the first Subtext release that we label Beta.  I think it will be just as stable as any other release, but there’s a significant schema change involved and I want to test it more before I announce a full release.

Please note, there is a significant schema change in which data gets moved around, so back up your database and all applicable warnings apply. Upgrade at your own risk. I am going to copy my database over and upgrade offline to test it out before deploying.

The "Shields Up" edition will contain Akismet support and CAPTCHA. The Akismet support required adding comment "folders" to allow the user to report false positives and false negatives.

comments edit

Disk Defragmenter

For the most part, the Disk Defragmenter application (located at %SystemRoot%\system32\dfrg.msc) that comes with Windows XP does a decent enough job of defragmenting a hard drive for most users.

But if you’re a developer, you are not like most users, often dealing with very large files and installing and uninstalling applications like there’s no tomorrow.  For you, there are a couple of other free utilities you should have in your utility belt.

Recently I noticed my hard drive grinding a lot. After defragmenting my drive, I clicked on the View Report button this time (normally I skip it out of hurriedness).

Disk Defrag Dialog

This brings up a little report dialog.

Defrag Report

And at the bottom, there is a list of files that Disk Defragmenter could not defragment. In this case, I think the file was simply too large for the poor utility. So I reached into my utility belt and whipped out Contig.

Contig

Contig is a command-line utility from Sysinternals that can report on the fragmentation of individual files and defragment individual files.

I opened up a console window, changed directory to the Backup directory, and ran the command:

contig *.tib

That defragmented every file ending with the .tib extension (in this case, just one). It took a good while to complete working against a 29 GB file, but it successfully reduced the fragments from four to two, which made a huge difference. I may try again to see if it can bring it down to a single fragment.
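Contig can also just report on fragmentation without defragmenting anything, via its analyze switch, which is handy before committing to a long run:

contig -a *.tib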

I ran Disk Defragmenter again and here are the results.

Disk Defragmenter

Keep in mind that the disk usage before this pass with the defragger was the usage after running Disk Defragmenter once.  After using contig and then defragging again, I received much better results.

PageDefrag

Another limitation of Disk Defragmenter is that it cannot defragment files open for exclusive access, such as the Page File.  Again, reaching into my utility belt I pull yet another tool from Sysinternals (those guys rock!), PageDefrag.

Running PageDefrag brings up a list of page files, event log files, and registry files, along with how many clusters and fragments make up those files.

Page Defrag

This utility lets you specify which files to defragment, either on the next reboot or at every boot. As you can see in the screenshot, there was only one fragmented file, so the need for this tool is not great at the moment. But it is good to have it there when I need it.

With these tools in hand, you are ready to be a defragmenting ninja.

comments edit

Right now, there is no easy way to convert a time from one arbitrary time zone to another arbitrary time zone in .NET. Certainly you can convert from UTC to the local system time, or from the local system time to UTC. But how do you convert from PST to EST?

Well, Scott Hanselman recently pointed me to some ingenious code in DasBlog, originally written by Clemens Vasters, that does this. I recently submitted a patch to DasBlog so that this code properly handles daylight saving time, and I had planned to blog about it in more detail later. Unfortunately, we recently found out that changes in Vista may break this particular approach.

It turns out that the Orcas release introduces a new TimeZone2 class.  This class will finally allow conversions between arbitrary timezones.

Krzysztof Cwalina (who wins the award for the Microsoft blogger with the highest consonant-to-vowel ratio in a first name) points out that many people are not thrilled with the "2" suffix and provides context on the naming choice.

Kathy Kam of the BCL team points out some other proposed names for the new TimeZone2 class and the problems with each.

I’m fine with TimeZone2 or TimeZoneRegion.

 

comments edit

Hello World

Jeff Atwood asks in a recent post whether writing your own blog software is a form of procrastination (no, blogging is).

I remember reading something where someone equated rolling your own blog engine to the modern-day equivalent of the Hello World program. I wish I could remember where I heard that so I could give proper credit. UPDATE: Kent Sharkey reminds me that I read it on his blog. It was a quote from Scott Wigart. Thanks for the memory refresh, Kent!

Obviously, as the founder of an Open Source project building a blog engine, I have a biased opinion on this topic (I can own up to that). My feeling is that in most cases (not all), rolling your own blog engine is a waste of time given that there are several good open source blog engines such as DasBlog, SUB, and Subtext.

It isn’t so much that writing a rudimentary blog engine is hard.  It isn’t.  To get a basic blog engine up and running is quite easy.  The challenge lies in going beyond that basic engine.

The common complaint with these existing solutions (and motivation for rolling your own) is that they contain more features than a person needs.  Agreed.  There’s no way a blog engine designed for mass consumption is going to only have the features needed by any given individual.

However, there are a lot of features these blog engines support that you wouldn't realize you want or need until you get your own engine up and running. And in implementing these common features, a developer can spend a lot of time playing catch-up by reinventing the kitchen sink. Who has that kind of time?

Why reinvent the sink, when the sink is there for the taking?

For example, let’s look at fighting comment spam.

Implementing comments on a blog is quite easy. But then you go live with your blog and suddenly you're overwhelmed with insurance offers. Implementing comments is easy; implementing them well takes more time.

If you are going to roll your own blog engine, at least "steal" the Subtext Akismet API library from our Subversion repository. DasBlog did. However, even with that library, you still ought to build a UI for reporting false positives and false negatives back to Akismet, etc. Again, not difficult, but it is time-consuming and it has already been done before.

Some other features that modern blog engines provide that you might not have thought about (not all are supported by Subtext yet, but each is supported by at least one of the blog engines I mentioned):

  • RFC3229 with Feeds
  • BlogML
    • So you can get your posts in there.
  • Email to Weblog
  • Gravatars
  • Multiple Blog Support (more useful than you think)
  • Timezone Handling (for servers in other timezone)
  • Windows Live Writer support
  • Metablog API
  • Trackbacks/Pingbacks
  • Search
  • Easy Installation and Upgrade
  • XHTML Compliance
  • Live Comment Preview

My point isn’t necessarily to dissuade developers from rolling their own blog engine.  It’s fun code to write, I admit.  My point is really this (actually two points):

1. If you plan to write your own blog engine, take a good hard look at the code for existing Open Source blog engines and ask yourself if your needs wouldn't be better served by contributing to one of these projects. They could use your help, and it gets you a lot of features for free. Just don't use the ones you don't need.

Jerry Maguire

2. If you still want to write your own, at least take a look at the code contained in these projects and try to avail yourself of the gems contained therein. It'll help you keep your wheel reinventions to a minimum.

That’s all I’m trying to say.  Help us… help you.

comments edit

A friend of mine sent me an interesting item: Brad Greenspan, the founder of eUniverse (now Intermix Media), the company that created and owned MySpace.com, has issued an online report claiming that the sale of MySpace intentionally defrauded shareholders out of billions of dollars because MySpace's revenues were hidden from them.

Disclosure: Technically, I used to work for Intermix Media, as they owned my last employer, SkillJam, before SkillJam was sold to Fun Technologies.

The most surprising bit to me is that (according to the report)

Shareholders were not aware that Myspace’s revenue was growing at a 1200 percent annualized rate and increasing.

I wonder how much of this is true.  If it is true, what happens next?  Gotta love the smell of scandal in the morning. Wink

comments edit

This is a pretty sweet video demonstrating a system for sketching on a whiteboard using a mimio-like projection system. The instructor draws objects, adds a gravity vector, and then animates his drawings to see the result.

Another interesting take on user interfaces for industrial design.