comments edit

I really appreciate how Microsoft has opened up many of its internal developer groups. I only wish this also applied to more of their consumer products.

For example, I am trying to set up online services for a bank within Microsoft Money. My bank allows passwords up to 32 characters. Microsoft Money allows me to enter 8 characters and that’s it. WTF!? With all this emphasis on security, you’d think they would support more than 8 characters in a password, especially when the bank does.

So I try going to the support site and there is nothing helpful. Since my copy is over a year old, I get the lovely option to pay $35.00 for a support incident. Heck for that money, I might as well upgrade, but how do I know this problem is even fixed in the latest versions? I don’t.

So following the support instructions, I head over to the Microsoft Money community newsgroups to be greeted with the message…

Service Temporarily Unavailable

We apologize for this inconvenience. Please try again later.

Great! It appears there is no way for me to register a complaint or have someone tell me whether this is fixed in the latest version of Money for this institution. Maybe I should have bought Quicken.

code, open source comments edit

Producing Open Source Software
I just finished reading the book “Producing Open Source Software - How to Run a Successful Free Software Project” by Karl Fogel (pdf available). CollabNet has employed Karl Fogel for the past five years to work on Subversion; prior to that, he was involved with GNU Emacs and CVS development.

If you fall into one of the following categories, I highly recommend taking the time to read this book, especially if you fall into one of the first two.

  • Planning to start an open source project
  • Currently running an open source project
  • Involved with an open source project
  • Managing a team of distributed developers

Much of this book is really a primer on how to work with and manage people. After all, open source development is built on relationships even more than on technical know-how. Manage the relationships well, and people will happily contribute. Do a poor job, and you may find interest lacking (though interest may be lacking for many other reasons as well).

But Fogel also delves into how to structure a project and administer the day-to-day activities that are required to run a project smoothly. Some of the topics he covers include:

  • How to choose a license
  • Who to give commit access to
  • Writing Developer Guidelines
  • Hosting and choosing version control
  • Managing communications
  • Assigning roles
  • Voting

And the list goes on.

It is my hope to start applying some of the principles he writes about to open source projects I am involved with, such as Subtext. Though Fogel's experience and advice seem targeted at very large open source projects, I think much of it is useful for small projects as well. Besides, if you don't prepare for growth, you will never see growth. And if you do grow big suddenly, it is better to be prepared than caught off guard. Having said that, it is also important to adjust the level of formality in processes and structure to the size of the project, so I won't let myself get carried away.

Instead, I hope to start a short blog series to summarize and perhaps expand on certain principles gleaned from the book.

comments edit

Tim Bray writes to correct misperceptions of what “Open Source” is about.

They both paint a picture of misguided innocents who believe in some starry-eyed vision of post-capitalist intellectual collectivism, but are actually pawns in the hands of larger economic forces. They’re both really wrong. Granted: Open Source is not a nation or a corporation or a political party or a religion. (While there are “movement people”, organized into the skeptical-of-each-other Open Source and Free Software sects, they are a tiny—albeit noisy—minority.) Absent those things, what is left? A collection of people who like working on software and actively seek out opportunities, preferably but not necessarily paid, to do so. If that isn’t a “community”, what is?

Tim hits the mark. If Subtext is a pawn of some larger economic force, I'd be curious to find out which major corporate power stands to gain, and perhaps ask them for some funding. ;)

In truth, there are many reasons people work on open source software, and they are not all the same. Many simply find it more fun to work on something interesting than on the boring data-in, data-out systems they build at work. Some want to have a hand in building a better mousetrap. Many enjoy participating in a community and perhaps gaining a bit of recognition among their peers. A few see it as a political movement against capitalist interests. Still others are paid to work on open source projects because it benefits their employer. None of these reasons is inherently wrong, misguided, or immoral.

Many of these articles criticizing open source focus on the big projects. What they fail to recognize is that the majority of open source projects are very small. Many fill niche markets that corporations have no interest in serving, but for which there is nonetheless a long-tail demand.

comments edit

Consider this a more advanced followup to my Quickstart Guide to Open Source Development With CVS and SourceForge.

Intro

So you have finally decided to become a flower-power, card-carrying, community-loving member of an open source project that happens to be hosted on SourceForge. Good for you! Unfortunately, someone might expect you to actually contribute something. Suppose they give you the responsibility to update the project home page. SourceForge provides the ability to host project home pages within SourceForge itself, but how do you access those files? This guide will help you with that so you can earn the respect of your peers and graduate from n00b to contributor.

First, it is important to understand that you will not be able to fall back on your trusty FTP client to move your files to your SourceForge website. If you are a Windows developer unaccustomed to the *nix-y ways of doing things (*nix == unix, linux, anyothernix…), it's time to get your hands a bit dirty. But don't you worry, I'll present the most Windowsy manner of getting *nixy tasks done.

To access files on SourceForge, you are going to have to connect to their shell services via an SSH session. SSH is a protocol that is analogous to, but different from, FTP. Some applications adopt this protocol to provide secure communication between servers, such as SFTP (secure FTP) and SCP (secure copy). Applications that are not built on SSH can still use these services by communicating through an SSH tunnel.

WinSCP To Securely Transfer Files

The quick and easy way to do this for those of us who don’t work with *nix every day is to download and install WinSCP. WinSCP is both a SFTP (SSH File Transfer Protocol) and SCP (Secure Copy Protocol) client.

SourceForge Project Shell Info

Before you start with WinSCP, you’ll need some information about your SourceForge project handy. Remember, as with all things *nix, everything is case sensitive.

  • Hostname: shell.sourceforge.net (or shell.sf.net)
  • Username: as used to login to the SourceForge.net web site.
  • Password: Password authentication is not supported. You must configure an SSH key pair for authentication.
  • Project Group Directory: /home/groups/P/PR/PROJECTNAME
  • Project Web Directory (root): /home/groups/P/PR/PROJECTNAME/htdocs
  • Project Web CGI Script Directory: /home/groups/P/PR/PROJECTNAME/cgi-bin

For example, these values for me on Subtext are…

  • Hostname: shell.sourceforge.net (or shell.sf.net)
  • Username: haacked
  • Password: leave blank
  • Project Group Directory: /home/groups/s/su/subtext
  • Project Web Directory (root): /home/groups/s/su/subtext/htdocs
  • Project Web CGI Script Directory: /home/groups/s/su/subtext/cgi-bin

Using WinSCP

When WinSCP first starts, you will see a dialog box that requests various host information. Enter the following details in to the provided dialog box:

  • Host name: shell.sourceforge.net (or cf-shell.sourceforge.net)
  • Port number: 22
  • User name: YOUR_USERNAME
  • Password: leave this field blank
  • Private key file: Click on the “…” button to browse for the PuTTY private key you created previously following the instructions here. Load the desired key.
  • Protocol: SFTP (allow SCP fallback)

Below is a screenshot of this dialog and how I entered the fields.

Click Save and accept the default session name, which should match the username and hostname you entered previously (USERNAME@shell.sourceforge.net or USERNAME@cf-shell.sourceforge.net).

To start the session, click the Login button. The first time you do this for a session, you will get a dialog asking to compare the SSH host key fingerprint. This is to make sure you are connecting to the site you think you are connecting to.

If you followed the instructions as I described, you should see the following key:

4c:68:03:d4:5c:58:a6:1d:9d:17:13:24:14:48:ba:99

If yours differs, compare it against the list of keys here. If it does not match, please contact SourceForge.net staff by submitting a Support Request.

Once you are logged in, you can browse your project directories. Browse to your project root and if you choose the Explorer view as I did, it should look like the screenshot below.

WinSCP ScreenShot

Place your web files within the htdocs directory. Unfortunately, at the time of this writing, SourceForge won't run .NET code, but it does support CGI as well as PHP and MySQL.

References

comments edit

From reading other blogs, it seems many developers are unimpressed with the sheen of Windows Vista, the next version of the Microsoft operating system. There is definite appreciation for all the improvements under the hood, but the out-of-box experience (at least in the betas) leaves much to be desired.

That is why I love this post from Jon Galloway, which first points to some videos comparing Vista to Mac OS X (released in 2002). He then lists several ways that Microsoft could inexpensively do better. My favorite quote is this…

…you’ve got Paint.NET, which stomps MS Paint so badly I have to turn my head away and sob.

comments edit

A while ago I wrote up a Quick and Dirty Guide to Configuring Log4Net for Web Applications. Today I received an email asking how to set up logging for a web application that also consists of a business layer and a data access layer.

The Situation

This person had the following three projects set up as part of his VS.NET solution:

  • ASP.NET Web Application Project
  • Business Layer Class Library Project
  • Data Access Layer Class Library Project

Note that the Web Application project has a project reference (or assembly reference) to the Business Layer, which in turn probably has a project reference to the Data Access layer. These assemblies will be deployed with the web application and will not be hosted on separate servers, thus remoting does not come into play here.

The developer added a Log4Net.config file to each project as well as the AssemblyInfo directive described in my post. The goal was to get all three projects logging to the same file. For the two class library assemblies, the developer specified the full system path to the log file.

The Explanation

To understand why this doesn't necessarily work, we have to step back and look at how configuration settings are picked up by a .NET application in general. Suppose we weren't dealing with Log4Net for a second, but just wanted to configure some app settings. Would that require adding an App.config file to the Business Layer and Data Access Layer projects? No, it would not. These are class libraries; they do not contain an execution entry point the way an executable does. We simply add a web.config file to the Web Application project and we're set.

The main reason for this is that configuration settings apply to the executing application (in this case, a web app). You can certainly include code within the business layer assembly to read app settings, but it reads them from the web.config or App.config file of the application that started execution.
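
For example, a data access class might read a setting like this (a minimal sketch; the class name and the "connectionString" key are hypothetical, but ConfigurationSettings.AppSettings is the standard .NET 1.1 API):

using System.Configuration;

public class CustomerRepository
{
    public string GetConnectionString()
    {
        // Resolved against the config file of the running application
        // (web.config for a web app, MyApp.exe.config for an executable),
        // not against any file belonging to this class library.
        return ConfigurationSettings.AppSettings["connectionString"];
    }
}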

Note: I am doing a bit of hand-waving here. Technically, the ASP.NET web application assembly is not an executable; it is a class library. However, due to how the ASP.NET runtime works, it exhibits some of the behavior of an executable, and for the purposes of this discussion we'll leave it at that. One key difference, though, is that for an executable the config file must be named after the executable with a .config extension (MyApp.exe.config, for example) and placed in the same directory as the executable (typically bin), whereas for an ASP.NET application the config file is always named web.config and placed in the web root, not in the bin directory.

This is also how Log4Net configuration works. Remember that when you build the web application, your business layer and data access layer assemblies will be copied to the bin directory of the web application. Thus all three assemblies are in the same location, so there is no need to specify a different Log4Net.config file for each assembly.

When you think about it, this makes sense. Your business layer assembly is a class library, so it can be used and re-used in more than one project. It is not an execution starting point; it is called into by another executable. You wouldn't want that assembly to specify where it logs its messages. You would rather have the consumer of the assembly do that.

The Answer

So the answer to the question is to make sure that neither the business layer nor the data access layer project includes a Log4Net.config file or the AssemblyInfo directive. They do not need them. It is up to the consumer of these assemblies (the execution starting point) to configure logging.

All you need to do in these assemblies is to add an assembly reference to the Log4Net assembly and make calls to its logging methods in your code just as you would in the web application layer.
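
For instance, a business layer class might look something like this (a rough sketch; the class and method names are hypothetical, but ILog and LogManager are the standard log4net API):

using System;
using log4net;

public class OrderService
{
    // One logger per class; the type name becomes the logger name, which
    // makes it easy to tell which layer a message came from in the shared log.
    private static readonly ILog log = LogManager.GetLogger(typeof(OrderService));

    public void PlaceOrder(int orderId)
    {
        log.Debug("Placing order " + orderId);
        try
        {
            // ... business logic ...
        }
        catch (Exception ex)
        {
            log.Error("Failed to place order " + orderId, ex);
            throw;
        }
    }
}

Which appender actually receives these messages is decided entirely by the consuming application's configuration.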

Then configure your web application as mentioned in my dirty guide and you are all set. Log messages from all three assemblies should funnel nicely to your log file.
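
For reference, the log4net configuration attribute belongs in the web project's AssemblyInfo.cs only, not in the class libraries. This is a sketch; depending on your log4net version the attribute is XmlConfigurator or the older DOMConfigurator, and the ConfigFile/Watch properties are optional:

// Web project's AssemblyInfo.cs -- do not add this to the class libraries.
[assembly: log4net.Config.XmlConfigurator(ConfigFile = "Log4Net.config", Watch = true)]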

To demonstrate this, I set up another sample VS.NET 2003 solution. It is based on the same project that I included in my previous article on the subject, but includes a business layer class library. The web application references the class library and makes a method call that logs a message. The class library references Log4Net, but does not include the Assembly directive nor the Log4Net.config file.

Download it, set up the IIS directory, and visit the default page. You’ll see log messages from within the business layer as well as the web application in the log file.

code, css comments edit

On a recent project, my team pursued a CSS-based design, as we had two sites to build that were similar in layout but different in look and feel. We were brought in after the schematics and design had pretty much been worked out, but we felt we could work with the agreed-upon design.

The site had a typical corporate layout: a header, a body, and a footer. The body might have two or three columns. We started off writing markup that had a structure like so (not including the body columns).

<div id="main">
    <div id="header"></div>
    <div id="body">Body</div>
    <div id="footer">Footer</div>
</div>

We set the logo using CSS by applying a background image to the header div.

#header
{
    background: url(logoExample.jpg);
    width: 180px;
    height: 135px;
}

Which produces something that might look like this (Trust me, the real thing looks a lot better)…

So everything is fine and dandy till we place the site on a staging server and the client asks that the header logo link back to the main page. This wasn’t in any of the requirements or design spec, but it is perhaps something we could have guessed as it is quite common.

So how do we make the logo image be a clickable link to the main page? My first inclination was to abandon using a background image and make the logo a regular image. The markup would look like (changes in bold)…

<div id="main">
    <div id="header">
        <a href="/"><img src="images/logoExample.jpg"></a>
    </div>
    <div id="body">Body</div>
    <div id="footer">Footer</div>
</div>

But I found a better way to do this based on a technique I saw in “Bulletproof Web Design”. I changed the markup to be like so…

<div id="main">
    <div id="header">
        <a href="/" title="Home"><h1>Title</h1></a>
    </div>
    <div id="body">Body</div>
    <div id="footer">Footer</div>
</div>

I then changed the css for the anchor tag to have the same dimensions as the logo image. I positioned it so that it would fit exactly over the image.

#header
{
    background: url(logoExample.jpg);
    width: 180px;
    height: 135px;
    position: relative;
}

#header a
{
    position: absolute;
    top: 0;
    left: 0;
    width: 180px;
    height: 135px;
}

#header a h1
{
    display: none;
}

Notice that I had to add position: relative; to the header element. That ensures that the absolute positioning applied to the header link is relative to the header and not to the entire document.

Now the header logo image appears to be a clickable link. Problem solved. I am pretty sure that others have pioneered this trick, but I hadn't seen anything written up about it; what I had read applied to making clickable tabs.

UPDATE: As Klevo mentioned in the comments, I really shouldn’t have an anchor tag without any text. Including text would be good for search engine optimization and for those who view the site without CSS. Shame on me, especially after reading the bulletproof book.

But in my defense, it was peripheral to the main point I was making. However, that doesn't excuse it, as bad samples have a way of proliferating. So I corrected the sample above. The anchor tag now includes the title of the blog, but the title is set to be invisible.

comments edit

Can't use your favorite IM tool because of a pesky firewall at your place of work? Kyle points me to Meebo, a web app that appears to be in the alpha phase but worked fine for me. It allows you to sign in to one of several popular IM services and use a rich web-based client.

So if your worksite allows you to browse the web, you can also chat.

Here’s a screenshot of a section of the login page.

Login Screen

According to the site, passwords are encrypted with 1024-bit RSA keys. Below is a screenshot of a Yahoo Messenger session. Notice the nice transparent effects when highlighting a profile.

Meebo in action

comments edit

I can go a bit overboard with my virtual paths. I tend to prefer virtual paths over relative paths since they feel safer to use. For example, when applying a background image via CSS, I would tend to do this:

body
{
    background: url('/images/bg.gif');
}

My thinking was that since much of the code I write employs URL rewriting and master pages, I never know what the URL of the page referencing this CSS will be. However, my thinking was wrong.

One problem I ran into is that on my development box, I tended to run this code in a virtual directory. For example, I have Subtext running in the virtual directory /Subtext.Web. So I end up changing the CSS like so:

body
{
    background: url('/Subtext.Web/images/bg.gif');
}

Thus, when I deploy to my web server, where the site runs at the root of the website, I have to remove the /Subtext.Web part. Now, if I had read the CSS spec more closely, I would have noticed the following line:

Partial URLs are interpreted relative to the source of the style sheet, not relative to the document.

Thus, the correct CSS in my case (assuming my css file is in the root of the virtual application) is…

body
{
    background: url('images/bg.gif');
}

Now I have true portability between my dev box and my production box.

It turns out that Netscape Navigator 4.x incorrectly interprets partial URLs relative to the HTML document that references the CSS file rather than the CSS file itself. Perhaps this is where I got that wrongheaded notion embedded in my head way back in the day.

personal comments edit

Ryan Farley gives the lowdown on his tricked out desktop.

In the past I've tried to get into tricking out the desktop, but every time I switched to a new computer, I felt less and less inclined to invest the time. Besides, I remember some of these programs slowing down the OS. I like my desktop to be lean and mean.

But after seeing Ryan’s screenshot, I may have to consider playing around with some of the customizations.

It’s funny to me how many geeks I know would dread spending time selecting drapery and customizing the small details of their house (“Pick any color, honey. I don’t care.”) yet will obsess over every pixel of their desktop.

comments edit

Pic of miniature portapotties with superhero figurines next to real
portapotties

This photo was taken next to some portapotties that were close to our campsite. I hadn’t noticed these little guys.

comments edit

Jon Galloway has an interesting write-up on the latest changes to Google's search algorithm, code-named "Jagger".

The short and sweet summary is that rather than letting websites “vote” on a page’s relevancy with a link, the trustworthiness of a page is taken into account. For example, a site that has been around longer is potentially considered more trustworthy (assuming it meets other criteria). A page that has incoming links from trustworthy sources is itself more trustworthy.

I had always thought that this was how the PageRank algorithm worked all along. After all, the Google founders' original inspiration was the network of citations in academic papers and texts. A citation from a well-cited and trustworthy source boosted the respectability of the cited paper, whereas a citation from a nobody didn't count for much.

In the end, I am pretty happy about these changes, as my AdSense revenue has increased lately.

comments edit

As I mentioned in my last post, my redesign was inspired by some of the lessons in the book "Bulletproof Web Design" by Dan Cederholm.

The main focus of this book is how to use CSS and semantic (X)HTML markup to create flexible websites. By flexible, the author is referring to a web site’s ability to deal with the different ways a user may choose to view a site. For the most part, he covers how to make your site more accessible.

For example, many sites do not deal well with a user resizing the text; the change totally breaks the design. If you specify font sizes in pixels, for example, IE won't allow text resizing at all, which gives the designer control, but at the cost of accessibility for those with high-resolution monitors or poor eyesight.

Cederholm instructs the reader on several ways to make sites deal with text resizing in a more flexible manner while retaining control. For the most part though, the designer has to give up pixel perfect control in exchange for a better user experience.

The book also delves into accessibility tips, such as making sure the site is readable when images are turned off (for those on slow connections) and when CSS is turned off (for those using text-to-speech readers).

Each chapter presents a sample of a website design that is not flexible. Most of the samples come from real-world sites, though some were made up. He then walks through the steps to recreate the design element using clean, semantic XHTML and CSS. One key benefit of this approach, apart from the increased flexibility, is that the amount of markup is greatly reduced in most cases, since one-pixel spacer images and empty table cells are no longer needed.

Lest one think Cederholm is an anti-table zealot, he points out that there are situations where using a table is correct and semantic: when displaying tabular data, of course. He then demonstrates how to use tables and CSS properly to get the desired layout without resorting to nested tables and empty table cells. The key is that the table should model the data, not the layout, and he succeeds in showing how.

In the end, Bulletproof is a quick and worthwhile read with clear diagrams and plenty of CSS examples. There were some examples I wish he had taken further. For example, he mentions several uses of the definition list element (<dl>) for semantic markup, but only presents one example of styling one. That is understandable, since this was not meant to be a complete compendium of CSS examples. Even so, I found plenty of good advice, which I ended up applying to this site. The site now responds well to enlarging the text (up to a limit).

If you are a fan of “CSS Zen Garden”, this book would serve as a nice complement. “CSS Zen Garden” inspires designers with what is possible to do with CSS. “Bulletproof Web Design” provides some of the tools to get there.

comments edit

After completing two of the three books I said I would be reading in 2006, I decided to apply some of the lessons from the book Bulletproof Web Design by Dan Cederholm by slightly redesigning my site.

The change isn’t drastic on the surface, though I like to think it looks nicer and cleaner. Most of the changes are under the hood in the HTML and CSS. Most of you won’t notice since you read this via an RSS aggregator, but if you have a moment, take a look and let me know what you think.

A short book review is forthcoming.

csharp, code comments edit

While reviewing some code this weekend, I had the thought to search the codebase for the string "catch(Exception" (using regular expression search, of course, so it actually looked more like "catch\s*(\s*Exception\s*)").

My intent was to take a look to see how badly catch(Exception...) was being abused or whether it was being used correctly. One interesting pattern I noticed frequently was the following snippet…

FileStream fs = null;
try
{
    fs = new FileStream(filename, FileMode.Create);

    //Do Something
}
catch(Exception ex)
{
    throw ex;
}
finally
{
    if(fs != null)
        fs.Close();
}

My guess is that the developer who wrote this didn't realize that you don't need a catch block in order to use a finally block. The finally block will ALWAYS execute, whether or not an exception is thrown. Also, this code resets the call stack on the exception, as I've written about before.

This really should just be:

FileStream fs = null;
try
{
    fs = new FileStream(filename, FileMode.Create);

    //Do Something
}
finally
{
    if(fs != null)
        fs.Close();
}

Another common mistake I found is demonstrated by the following code snippet.

try
{
    //Do Something.
}
catch(Exception e)
{
    throw e;
}

This is another example where the author of the code is losing stack trace information. Even worse, there is no reason to have the try/catch at all, since all the developer is doing is rethrowing the exact exception that was caught. I ended up removing the try/catch blocks everywhere I found this pattern.
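
If a catch block genuinely adds value (logging or cleanup, say) before rethrowing, a bare throw; preserves the original call stack, whereas throw ex; resets it. Here is a minimal sketch (ProcessFile, log, and filename are placeholder names):

try
{
    ProcessFile(filename);
}
catch(Exception ex)
{
    // Do something useful with the exception first...
    log.Error("Failed to process " + filename, ex);

    // ...then rethrow without resetting the call stack.
    throw;
}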

personal comments edit

When you hear the phrase, “use your head” you are typically being told to think. There are other uses of the head that are quite unwise. For example, trying to clear a soccer ball away from another player rushing in on the attack when you are a step too late. Unfortunately that’s exactly what I tried today.

My head just happened to get in the way of the shoulder of the onrushing soccer player when we both jumped to try and win the ball. It was really no contest as his shoulder won, leaving a nice inch long laceration on top of my scalp. Fortunately it wasn’t very deep and I was not knocked unconscious, though I bled a lot and had a nice Tom & Jerry bump on the head.

tom-jerry-bump

This earned me a trip to the ER which is NOTHING like the TV show. If it were, the show would have been cancelled after the first episode. I fail to see how interesting a show would be where the patients wait around for four hours before a doctor sees them to perform a grand total of five to ten minutes of actual work.

In any case, the extremely busy doctor made quick work of cleaning out the wound and stapling it shut with two painful squeezes of the stapler (no local anesthesia). I hadn’t realized how helpful office supplies could be when applied to the head.

The doctor said I show no signs of a concussion and should be ready to play again in a few days as soon as I feel comfortable. I’m glad I’ll be able to play next week, but I won’t be using my head as much.

comments edit

I have a really old Kodak photography book lying around that delivers various tips on how to advance from your typical crapola™ snapshots to something worth boring your friends with on Flickr after your last vacation.

It is really too bad that I’ve forgotten everything the book had to say. Fortunately Robb Allen is starting a series of Photography lessons for our general photography improvement. Read lesson 1 and start taking better pics. Your friends and family will thank you for it.

My personal tip is to buy the biggest memory card you can afford, fill that sucker up when taking pictures, and delete vigorously before showing the pics off. Memory is getting cheaper and is way cheaper than paying for film and developing. Why settle for just one chance to get a great shot of your kid picking his nose when you can take three and keep the best? The odds are in your favor.

Just remember to delete vigorously because while memory is cheap, time isn’t.