code, tdd

UPDATE: I’ve since supplemented this with another approach.

Jeremy Miller asks the question, “How do you organize your NUnit test code?”.  My answer? I don’t, I organize my MbUnit test code.

Bad jokes aside, I do understand that his question is more focused on the structure of unit testing code and not the structure of any particular unit testing framework.

I pretty much follow the same structure that Jeremy does in that I have a test fixture per class (sometimes more than one per class for special cases).  I experimented with having a test fixture per method, but gave up on that as it became a maintenance headache.  Too many files!

One convention I use is to prefix my unit test projects with “UnitTests”.  Thus the unit tests for Subtext are in the project UnitTests.Subtext.dll.  The main reason for this, besides the obvious fact that it’s a sensible name for a project that contains unit tests, is that for most projects, the unit test assembly would show up at the bottom of Solution Explorer because of alphabetical ordering.

So then I co-found a company whose name starts with the letter V.  Doh!

UPDATE: I neglected to point out (as David Hayden did) that with VS.NET 2005 I can use Solution Folders to group tests. We actually use Solution Folders within Subtext. Unfortunately, much of my company’s work is still in VS.NET 2003, which does not boast such a nice feature.

One thing I don’t do is separate my unit tests and integration tests into two separate assemblies.  Currently I don’t separate those tests at all, though I have plans to start. 

Even when I do start separating tests, one issue with having unit tests in two separate assemblies is that I don’t know how to produce NCover reports that merge the results of coverage from two separate assemblies.

One solution I proposed in the comments to Jeremy’s post is to use a single assembly for tests, but have UnitTests and Integration Tests live in two separate top level namespaces.  Thus in MbUnit or in TD.NET, you can simply run the tests for one namespace or another.

Example Namespaces: Tests.Unit and Tests.Integration
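
Here’s a rough sketch of how that split might look (the fixture and class names are made up for illustration; the attributes are MbUnit’s):

using MbUnit.Framework;

// Unit tests: fast, in-memory, no external dependencies.
namespace Tests.Unit
{
  [TestFixture]
  public class StringHelperTests
  {
    [Test]
    public void SubstringReturnsFirstCharacters()
    {
      Assert.AreEqual("Sub", "Subtext".Substring(0, 3));
    }
  }
}

// Integration tests: touch the database, file system, or network.
namespace Tests.Integration
{
  [TestFixture]
  public class BlogRepositoryTests
  {
    [Test]
    public void CanRoundTripAnEntry()
    {
      // Would hit the real database; kept in its own namespace so it
      // can be excluded from a quick unit test run.
    }
  }
}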

In the root of a unit test project, I tend to have a few helper classes such as UnitTestHelper, which contains static methods useful for unit tests. I also have a ReflectionHelper class, just in case I need to “cheat” a little. Any other classes I might find useful typically go in the root, such as my SimulatedHttpRequest class as well.
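
A bare-bones version of that reflection “cheat” might look something like this (just a sketch, not the actual Subtext helper):

using System.Reflection;

public static class ReflectionHelper
{
  // Reads a private instance field so a test can inspect state that
  // the class under test does not expose publicly.
  public static object GetPrivateField(object instance, string fieldName)
  {
    FieldInfo field = instance.GetType().GetField(
      fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
    return field == null ? null : field.GetValue(instance);
  }
}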


Ever prolific Jon Galloway has released another tool on our tools site.  When we started the tools site, I talked some trash to spur some friendly competition between the two of us.  Let’s just say Jon is kicking my arse so hard my relatives in Korea can’t sit down.

His latest RegmonToRegfile tool works with yet another SysInternals tool, Regmon.

Winternals (maker of Sysinternals) has released many fantastic tools for managing and spelunking your system.

So great, in fact, that Robb feels he owes them his child in gratitude.

Kudos to Microsoft for snatching up Mark Russinovich and Winternals Software.

Regmon is essentially a Tivo for your registry, allowing you to record and play back changes to the registry.

Regmon lacks the ability to export to a registry (.reg) file, which is where Jon’s tool comes into play.  It can parse the Regmon log files and translate them into .reg files.

Here is a link to Jon’s blog post on this tool.



Jeff Atwood writes a great rebuttal to Steve Yegge’s rant on Agile methodologies.  I won’t expound on it too much except to point out this quote which should be an instant classic, emphasis mine:

Steve talks about “staying lightweight” as if it’s the easiest thing in the world, like it’s some natural state of grace that developers and organizations are born into. Telling developers they should stay lightweight is akin to telling depressed people they should cheer up.

Heh heh.  Jeff moves from a Coding Horror to a Coding Hero.

Now while I agree much of it is religion, I like to think that the goal is to remove as much religion out of software development as possible.  A key step, as Jeff points out, is recognizing which aspects are religion.

…the only truly dangerous people are the religious nuts who don’t realize they are religious nuts.

It’s like alcoholism.  You have to first accept you are an alcoholic.  But then, once recognizing that, you strive to make changes.

For example, Java vs .NET is a religious issue insofar as one attempts to make an absolute claim that one is superior to the other. 

However it is less a religious issue to say that I prefer .NET over Java for reasons X, Y, and Z based on my experience with both, or even to say that in situation X, Java is a preferred solution.

Likewise, just because double-blind tests are nearly impossible to conduct does not mean that we cannot increase the body of knowledge of software engineering. 

For the most part, we turn to the techniques of social scientists and economists by poring over historical data and looking at trends to extrapolate what information we can, with appropriate margins of error. 

Thus we can state, with a fair degree of certainty, that:

Design is a complex, iterative process. Initial design solutions are usually wrong and certainly not optimal.

That is fact 28 of Facts and Fallacies of Software Engineering by Robert L. Glass, who is by no means an agile zealot.  Yet this fact does show a weakness in waterfall methodologies that is addressed by agile methodologies, a differentiation that owes more to the scientific method than pure religion.


Personal matters (good stuff) and work have been keeping me really busy lately, but every free moment I get I plod along, coding a bit here and there, getting Subtext 1.9.1 “Shields Up” ready for action.

There were a couple of innovations I wanted to include in this version as well as a TimeZone handling fix, but recent comment spam shit storms have created a sense of urgency to get what I have done out the door ASAP.

In retrospect, as soon as I finished the Akismet support, I should have released.

I have a working build that I am going to test on my own site tonight.  If it works out fine, I will deploy a beta to SourceForge.  This will be the first Subtext release that we label Beta.  I think it will be just as stable as any other release, but there’s a significant schema change involved and I want to test it more before I announce a full release.

Please note, there is a significant schema change in which data gets moved around, so backup your database and all applicable warnings apply.  Upgrade at your own risk.  I am going to copy my database over and upgrade offline to test it out before deploying.

The Shields Up edition will contain Akismet support and CAPTCHA.  The Akismet support required adding comment “folders” to allow the user to report false positives and false negatives.


Disk Defragmenter

For the most part, the Disk Defragmenter application (located at %SystemRoot%\system32\dfrg.msc) that comes with Windows XP does a decent enough job of defragmenting a hard drive for most users.

But if you’re a developer, you are not like most users, often dealing with very large files and installing and uninstalling applications like there’s no tomorrow.  For you, there are a couple of other free utilities you should have in your utility belt.

Recently I noticed my hard drive grinding a lot.  After defragmenting my drive, I clicked the View Report button this time (something I normally skip in my hurry).

[Screenshot: Disk Defragmenter dialog]

This brings up a little report dialog.

[Screenshot: defragmentation report]

And at the bottom, there is a list of files that Disk Defragmenter could not defragment.  In this case, I think the file was simply too large for the poor utility.  So I reached into my utility belt and whipped out Contig.

Contig

Contig is a command line utility from SysInternals that can report on the fragmentation of individual files and defrag an individual file.

I opened up a console window, changed directory to the Backup directory, and ran the command:

contig *.tib

This defragmented every file ending with the .tib extension (in this case, just one).  It took a good while to complete against a 29 GB file, but successfully reduced the fragments from four to two, which made a huge difference.  I may try again to see if it can bring it down to a single fragment.
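
If you just want to see how badly a file is fragmented without actually defragmenting it, Contig also has an analysis switch (-a, if I recall correctly):

contig -a *.tib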

I ran Disk Defragmenter again and here are the results.

[Screenshot: Disk Defragmenter results after the second pass]

Keep in mind that the disk usage before this pass with the defragger was the usage after running Disk Defragmenter once.  After using contig and then defragging again, I received much better results.

PageDefrag

Another limitation of Disk Defragmenter is that it cannot defragment files open for exclusive access, such as the Page File.  Again, reaching into my utility belt I pull yet another tool from Sysinternals (those guys rock!), PageDefrag.

Running PageDefrag brings up a list of page files, event log files, registry files along with how many clusters and fragments make up those files.

[Screenshot: PageDefrag file list]

This utility allows you to specify which files to defrag and either defragment them on the next reboot, or have them defragmented at every boot.  As you can see in the screenshot, there was only one fragmented file, so the need for this tool is not great at the moment.  But it is good to have it there when I need it.

With these tools in hand, you are ready to be a defragmenting ninja.


Right now, there is no easy way to convert a time from one arbitrary timezone to another arbitrary timezone in .NET.  Certainly you can convert from UTC to the local system time, or from the local system time to UTC. But how do you convert from PST to EST?

Well Scott Hanselman recently pointed me to some ingenious code in DasBlog originally written by Clemens Vasters that does this.  I recently submitted a patch to DasBlog so that this code properly handles daylight savings and I had planned to blog about it in more detail later.  Unfortunately, we recently found out that changes in Vista may break this particular approach.

It turns out that the Orcas release introduces a new TimeZone2 class.  This class will finally allow conversions between arbitrary timezones.

Krzysztof Cwalina (who wins the award for Microsoft blogger with the highest consonant-to-vowel ratio in a first name) points out that many people are not thrilled with the “2” suffix and provides context on the naming choice.

Kathy Kam of the BCL team points out some other proposed names for the new TimeZone2 class and the problems with each.

I’m fine with TimeZone2 or TimeZoneRegion.
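
For reference, the conversion I’m after would look roughly like this. I’m using the TimeZoneInfo name and methods the class eventually shipped under, since the TimeZone2 API may still change; the zone ids are the Windows registry names.

using System;

class TimeZoneConversion
{
  static void Main()
  {
    // Convert 9:00 AM Pacific to Eastern time.
    TimeZoneInfo pacific = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
    TimeZoneInfo eastern = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");

    DateTime pacificTime = new DateTime(2006, 10, 1, 9, 0, 0, DateTimeKind.Unspecified);
    DateTime easternTime = TimeZoneInfo.ConvertTime(pacificTime, pacific, eastern);

    Console.WriteLine(easternTime); // noon Eastern
  }
}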




Jeff Atwood asks in a recent post whether writing your own blog software is a form of procrastination (no, blogging is).

I remember reading something where someone equated rolling your own blog engine to the modern-day equivalent of the Hello World program.  I wish I could remember where I heard that so I could give proper credit. UPDATE: Kent Sharkey reminds me that I read it on his blog. It was a quote from Scott Wigart. Thanks for the memory refresh, Kent!

Obviously, as an Open Source project founder building a blog engine, I have a biased opinion on this topic (I can own up to that).  My feeling is that for most cases (not all) rolling your own blog engine is a waste of time given that there are several good open source blog engines such as Dasblog, SUB, and Subtext.

It isn’t so much that writing a rudimentary blog engine is hard.  It isn’t.  To get a basic blog engine up and running is quite easy.  The challenge lies in going beyond that basic engine.

The common complaint with these existing solutions (and motivation for rolling your own) is that they contain more features than a person needs.  Agreed.  There’s no way a blog engine designed for mass consumption is going to only have the features needed by any given individual.

However, there are a lot of features these blog engines support that you wouldn’t realize you want or need till you get your own engine up and running.  And in implementing these common features, a developer can spend a lot of time playing catch-up by reinventing the kitchen sink.  Who has that kind of time?

Why reinvent the sink, when the sink is there for the taking?

For example, let’s look at fighting comment spam.

Implementing comments on a blog is quite easy. But then you go live with your blog and suddenly you’re overwhelmed with insurance offers.  Implementing comments is easy; implementing them well takes more time.

If you are going to roll your own blog engine, at least “steal” the Subtext Akismet API library in our Subversion repository. DasBlog did.  However, even with that library, you still ought to build a UI for reporting false positives and false negatives back to Akismet, etc.  Again, not difficult, but it is time consuming and it has already been done before.

Some other features that modern blog engines provide that you might not have thought about (not all are supported by Subtext yet, but each is supported by at least one of the blog engines I mentioned):

  • RFC3229 with Feeds
  • BlogML
    • So you can get your posts in there.
  • Email to Weblog
  • Gravatars
  • Multiple Blog Support (more useful than you think)
  • Timezone Handling (for servers in other timezones)
  • Windows Live Writer support
  • Metablog API
  • Trackbacks/Pingbacks
  • Search
  • Easy Installation and Upgrade
  • XHTML Compliance
  • Live Comment Preview

My point isn’t necessarily to dissuade developers from rolling their own blog engine.  It’s fun code to write, I admit.  My point is really this (actually two points):

1. If you plan to write your own blog engine, take a good hard look at the code for existing Open Source blog engines and ask yourself if your needs wouldn’t be better served by contributing to one of these projects.  They could use your help and it gets you a lot of features for free. Just don’t use the ones you don’t need.


2. If you still want to write your own, at least take a look at the code contained in these projects and try to avail yourself of the gems contained therein.  It’ll help you keep your wheel reinventions to a minimum.

That’s all I’m trying to say.  Help us… help you.


A friend of mine sent me an interesting report: Brad Greenspan, founder of eUniverse (now Intermix Media), the company that created and owned MySpace.com, has issued an online report alleging that the sale of MySpace intentionally defrauded shareholders out of multiple billions of dollars because MySpace revenues were hidden from them.

Disclosure: Technically, I used to work for Intermix Media as they owned my last employer, SkillJam, before SkillJam was sold to Fun technologies.

The most surprising bit to me is that (according to the report)

Shareholders were not aware that Myspace’s revenue was growing at a 1200 percent annualized rate and increasing.

I wonder how much of this is true.  If it is true, what happens next?  Gotta love the smell of scandal in the morning. Wink


This is a pretty sweet video that demonstrates a system for sketching on a whiteboard using a mimio-like projection system.  The instructor draws objects, adds a gravity vector, and then animates his drawings to see the result.

Another interesting take on user interfaces for industrial design.

code

Keyvan Nayyeri has a great tip for how to control the display of a type in the various debugger windows using a DebuggerTypeProxy attribute.  His post includes screenshots with this in use.

This is an attribute you can apply to a class, assembly, or struct to specify another class to examine within a debugger window. 

You still get access to the raw view of the original type, just in case some other developer plays a practical joke on you by specifying a DebuggerTypeProxy that displays the value of every field as being 42.
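
The basic pattern looks like this (the types here are made up for illustration): you point the attribute at a proxy class, and the debugger displays that proxy’s public properties instead of the raw fields.

using System.Collections.Generic;
using System.Diagnostics;

[DebuggerTypeProxy(typeof(BlogPostDebugView))]
public class BlogPost
{
  private readonly List<string> tags = new List<string>();
  public List<string> Tags { get { return tags; } }
}

// The debugger shows this class's public properties in place of BlogPost's fields.
internal class BlogPostDebugView
{
  private readonly BlogPost post;

  public BlogPostDebugView(BlogPost post)
  {
    this.post = post;
  }

  public string[] Tags
  {
    get { return post.Tags.ToArray(); }
  }
}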


Alex Papadimoulis, the man behind the ever entertaining (and depressing) TheDailyWTF, announces a new business venture to help connect employers offering good jobs with good developers interested in a career change.  At least that’s the hope.

The Hidden Network is the name of the venture and the plan is to pay bloggers $5.00 per thousand ad views (cha-ching!) for hosting pre-screened job postings in small ad blocks that look very similar to the standard Google AdSense ad.  In the interest of full disclosure, I will be taking part as a blogger hosting these job postings.

I’ve written before about how hiring is challenging and that blogs are a great means of connecting with and hiring good developers.  In fact, that’s how I recruited Jon Galloway to join my company, brought in Steve Harman (a Subtext developer) as a contractor, and almost landed another well known blogger, until his own company grabbed his leg and dragged him kicking and screaming back into the fold by giving him anything he wanted (I kid).

My hope is that somehow, my blog helps connect a good employer with a good developer. It’s worked for my company, it may work for yours.  Connecting good developers with good jobs is a win-win for all.  My fear is that the ads will turn into a bunch of phishing expeditions by headhunters looking to collect resumes.  It will be imperative that Hidden Network work hard at filtering out all but the high quality job postings.

As Alex states in the comments of that post, blogs are the new Trade Publications, so paying a $5.00 CPM is quite sustainable and viable.  It will be interesting to see if his grand experiment works out.


Great show on .NET Rocks today featuring Rob Conery, architect of the Commerce Starter Kit and SubSonic.

However, being the self-centered person that I am, the only thing I remember hearing is my last name being mispronounced.

For the record, my last name is “Haack” which is pronounced Hack, as in,

Yeah, I took a look at the new code Phil put into Subtext.  Talk about an ugly Haack!

On the show Rob pronounced it Hock which is only okay if you are British, which Rob is not.  I already gave Rob crap about it, so we’re cool. Wink


UPDATE: I could not slip the subtle beg for an MSDN subscription I surreptitiously embedded in this post past my astute readers. Many thanks to James Avery for contributing an MSDN subscription to this grateful developer. Now that I have my MSDN subscription, I say this whole VPC licensing thing is a non-issue and quit whining about it. (I joke, I joke!).

In a recent post I declared that Virtual PC is a suitable answer to the lack of backwards compatibility support for Visual Studio.NET 2003.  In the comments to that post Ryan Smith asks a great question surrounding the licensing issues involved.

Is Microsoft going to let me use my OEM license key from an ancient machine so that I can run Windows XP in a virtual machine on Vista to test and debug in VS 2003?

I think as developers, we take for granted that we are going to have MSDN subscriptions (I used to but I don’t right now) and plenty of OS licenses for development purposes.  But suppose I sell my old machine and purchase a new machine with Vista installed.  How can I apply the suggested workaround of installing Virtual PC with Windows XP if I don’t have a license to XP?

Ryan wrote Microsoft with this question and received a response that indicated that Microsoft hasn’t quite figured this out. Does this mean that developers need to shell out another $189 or so in order to develop with Visual Studio.NET 2003 in a Virtual PC running Windows XP on Vista?

code, blogging

I recently wrote about a lightweight invisible CAPTCHA validator control I built as a defensive measure against comment spam.  I wanted the control to work in as many situations as possible, so it doesn’t rely on ViewState or Session, since some users of the control may want to turn those things off.

Of course this raises the question: how do I know the answer submitted in the form is the answer to the question I asked?  Remember, never trust your inputs; even form submissions can easily be tampered with.

Well, one way is to give the client the answer in some form that can’t be read and can’t be tampered with.  Encryption to the rescue!

Using a few new objects from the System.Security.Cryptography namespace in .NET 2.0, I quickly put together code that would encrypt the answer along with the current system time into a base 64 encoded string.  That string would then be placed in a hidden input field.

When the form is submitted, I made sure that the encrypted value contained the answer and that the date inside was not too old, thus defeating replay attacks.

The first change was to initialize the encryption algorithm via a static field initializer.

The code can be hard to read in a browser, so I did include the source code in the download link at the end of this post.

static SymmetricAlgorithm encryptionAlgorithm 
    = InitializeEncryptionAlgorithm();

With that in place, I added a couple static methods to the control.

static SymmetricAlgorithm InitializeEncryptionAlgorithm()
{
  // Generates a random key and IV once per app domain, which means
  // encrypted answers will not survive an application restart.
  SymmetricAlgorithm rijndael = RijndaelManaged.Create();
  rijndael.GenerateKey();
  rijndael.GenerateIV();
  return rijndael;
}

public static string EncryptString(string clearText)
{
  byte[] clearTextBytes = Encoding.UTF8.GetBytes(clearText);
  byte[] encrypted = encryptionAlgorithm.CreateEncryptor()
    .TransformFinalBlock(clearTextBytes, 0
    , clearTextBytes.Length);
  return Convert.ToBase64String(encrypted);
}
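
The matching DecryptString method (called later in EvaluateIsValid) isn’t shown in the post body, but it is essentially the mirror image of EncryptString; the real version is in the download at the end of the post.

public static string DecryptString(string encryptedText)
{
  byte[] encryptedBytes = Convert.FromBase64String(encryptedText);
  byte[] decrypted = encryptionAlgorithm.CreateDecryptor()
    .TransformFinalBlock(encryptedBytes, 0, encryptedBytes.Length);
  return Encoding.UTF8.GetString(decrypted);
}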

In the PreRender method I simply took the answer, appended the date using a pipe character as a separator, encrypted the whole stew, and then slapped it into a hidden form field.

//Inside of OnPreRender
Page.ClientScript.RegisterHiddenField
    (this.HiddenEncryptedAnswerFieldName
    , EncryptAnswer(answer));

string EncryptAnswer(string answer)
{
  return EncryptString(answer 
    + "|" 
    + DateTime.Now.ToString("yyyy/MM/dd HH:mm"));
}

Now with all that in place, when the user submits the form, I can determine if the answer is valid by grabbing the value from the form field, calling decrypt on it, splitting it using the pipe character as a delimiter, and examining the result.

protected override bool EvaluateIsValid()
{
  string answer = GetClientSpecifiedAnswer();
    
  string encryptedAnswerFromForm = 
    Page.Request.Form[this.HiddenEncryptedAnswerFieldName];
    
  if(String.IsNullOrEmpty(encryptedAnswerFromForm))
    return false;
    
  string decryptedAnswer = DecryptString(encryptedAnswerFromForm);
    
  string[] answerParts = decryptedAnswer.Split('|');
  if(answerParts.Length < 2)
    return false;
    
  string expectedAnswer = answerParts[0];
  DateTime date = DateTime.ParseExact(answerParts[1]
    , "yyyy/MM/dd HH:mm", CultureInfo.InvariantCulture);
  // Use TotalMinutes rather than Minutes so that differences of an
  // hour or more still count as expired.
  if ((DateTime.Now - date).TotalMinutes > 30)
  {
    this.ErrorMessage = "Sorry, but this form has expired. "
      + "Please submit again.";
    return false;
  }

  return !String.IsNullOrEmpty(answer) 
    && answer == expectedAnswer;
}

// Gets the answer from the client, whether entered by 
// javascript or by the user.
private string GetClientSpecifiedAnswer()
{
  string answer = Page.Request.Form[this.HiddenAnswerFieldName];
  if(String.IsNullOrEmpty(answer))
    answer = Page.Request.Form[this.VisibleAnswerFieldName];
  return answer;
}

This technique could work particularly well for a visible CAPTCHA control as well. The request for a CAPTCHA image is an asynchronous request, and the code that renders that image has to know which CAPTCHA image to render. Implementations I’ve seen simply store the image in the cache using a GUID as a key when rendering the control. Thus when the asynchronous request to grab the CAPTCHA image arrives, the CAPTCHA image rendering HttpHandler looks up the image using the GUID and renders that baby out.

Using encryption, the URL for the CAPTCHA image could embed the answer (aka the word to render).
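
To make that concrete, the control could bake the encrypted word right into the image URL, something along these lines (the handler name and the “spec” parameter are made up for illustration):

// Hypothetical sketch: the handler name and the "spec" query string
// parameter are made up; requires System.Web for HttpUtility.
string BuildCaptchaImageUrl(string captchaWord)
{
  return "CaptchaImage.ashx?spec="
    + HttpUtility.UrlEncode(EncryptString(captchaWord));
}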

If you are interested, you can download an updated binary and source code for the Invisible CAPTCHA control which now includes the symmetric encryption from here.


UPDATE: I think a good measure of a blog is the intelligence and quality of the comments. The comments in response to this post make my blog look good (not all do).

As several commenters pointed out, the function returns a local DateTime adjusted from the specified UTC date. By calling ToUniversalTime() on the result, I get the behavior I am looking for. That’s why I ask you smart people before making an ass of myself on the bug report site.

Before I post this as a bug, can anyone tell me why this test fails when I think it should pass?

[Test]
public void ParseUsingAssumingUniversalReturnsDateTimeKindUtc()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture, 
    DateTimeStyles.AssumeUniversal);
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind, 
    "Expected AssumeUniversal would return a UTC date.");
}

What is going on here is that I am calling DateTime.Parse, passing in DateTimeStyles.AssumeUniversal as an argument. My understanding is that it should indicate to the Parse method that the passed-in string denotes a Coordinated Universal Time (aka UTC).

But when I check the Kind property of the resulting DateTime instance, it returns DateTimeKind.Local rather than DateTimeKind.Utc.

The unit test demonstrates what I think should happen. Either this really is a bug, or I am wrong in my assumptions, in which case I would like to know, how are you supposed to parse a string representing a date/time in the UTC timezone?
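
For completeness, here is the version of the test that passes, using the ToUniversalTime() workaround from the update above. Passing DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal to Parse accomplishes the same thing.

[Test]
public void ParseAssumingUniversalThenToUniversalTimeReturnsUtcKind()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture,
    DateTimeStyles.AssumeUniversal).ToUniversalTime();
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind,
    "Calling ToUniversalTime() on the parsed result yields a UTC kind.");
}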


[Image: Atlas with the weight of the codebase]

I read this article recently that describes the mind-frying complexity of the Windows development process.  With Vista sporting around 50 million lines of code, it’s no wonder Vista suffers from delays.  Quick, what does line #37,920,117 say?

Microsoft has acknowledged the need to release more often (as in sometime this millennium), but that agility is difficult to achieve with the current codebase due to its immense complexity, as well as Microsoft’s (stubbornly?) heroic efforts to maintain backward compatibility.  The author of the article labels this the Curse of Backward Compatibility.

I don’t think anyone doubts that maintaining backwards compatibility can be a Herculean effort, because it goes beyond supporting legacy specifications (which is challenging enough).  Just look at how Microsoft supports old code that broke the rules.  Additionally, the fact that old code poses a security threat requires even more code to patch those security holes.  Ideally a lot of that code would be removed outright, but it is challenging to remove or rewrite any of it for fear of breaking too many applications.

Of course there are very good business reasons for Microsoft to maintain this religious adherence to backwards compatibility (starts with an m, ends with a y, and has one in the middle).  The primary one being that they have a huge user base when compared to Apple, which does not give Microsoft the luxury of a “Do Over” as Apple had with OS X.

A different article (same magazine) points to virtualization technology as the answer.  This article suggests a virtualization layer that is core to the operating system.  I think we are already seeing hints of this in play with Microsoft’s answer to developers angry that Vista is not going to support Visual Studio.NET 2003.

The big technical challenge is with enabling scenarios like advanced debugging. Debuggers are incredibly invasive in a process, and so changes in how an OS handles memory layout can have big impacts on it. Vista did a lot of work in this release to tighten security and lock down process/memory usage - which is what is affecting both the VS debugger, as well as every other debugger out there. Since the VS debugger is particularly rich (multi-language, managed/native interop, COM + Jscript integration, etc) - it will need additional work to fully support all scenarios on Vista. That is also the reason we are releasing a special servicing release after VS 2005 SP1 specific to Vista - to make sure everything (and especially debugging and profiling) work in all scenarios. It is actually several man-months of work (we’ve had a team working on this for quite awhile). Note that the .NET 1.1 (and ASP.NET 1.1) is fully supported at runtime on Vista. VS 2003 will mostly work on Vista. What we are saying, though, is that there will be some scenarios where VS 2003 doesn’t work (or work well) on Vista - hence the reason it isn’t a supported scenario. Instead, we recommend using a VPC/VM image for VS 2003 development to ensure 100% compat.

This answer did not satisfy everyone (which answer does?), many seeing it as a copout as it pretty much states that to maintain backward compatibility, use Virtual PC.

Keep in mind that this particular scenario is not going to affect the average user.  Instead, it affects developers, who are notorious for being early adopters and, one would think, would be more amenable to adopting virtualization as an answer, because hey! It’s cool new technology!

Personally I am satisfied by this answer because I have no plans to upgrade to Vista any time soon (my very own copout).  Sure, it’s not the best answer I would’ve hoped for if I was planning an impending upgrade.  But given a choice between a more secure Vista released sooner, or a several months delay to make sure that developers with advanced debugging needs on VS.NET 2003 are happy, I’m going to have to say go ahead and break with backward compatibility.  But at the same time, push out the .NET 2.0 Framework as a required update to Windows XP.

With Windows XP, Microsoft finally released a consumer operating system that was good enough.  Many users will not need to upgrade to Vista for a looong time.  I think it is probably a good time to start looking at cleaning up and modularizing that 50 million line rambling historical record they call a codebase.

If my DOS app circa 1986 stops working on Vista, so be it.  If I’m still running DOS apps, am I really upgrading to Vista?  Using a virtual operating system may not be the best answer we could hope for, but I think it’s good enough and should hopefully free Microsoft up to really take Windows to the next level.  It may cause some difficulties, but there’s no easy path to paying off the immense design debt that Microsoft has accrued with Windows.


A few days back Jon Galloway and I were discussing a task he was working on to document a database for a client.  He had planned to use some code generation to initially populate a spreadsheet and would fill in the details by hand.  I suggested he store the data with the schema using SQL extended properties.

We looked around and found some stored procs for pulling properties out, but no useful applications for putting them in there in a nice, quick, and easy manner.
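
For context, the properties themselves go in through SQL Server’s built-in sp_addextendedproperty procedure (and come out via fn_listextendedproperty). A minimal sketch of writing one from C#, using SQL 2005 syntax with a placeholder table, column, and connection string:

using System.Data.SqlClient;

class ExtendedPropertyExample
{
  static void DocumentColumn(string connectionString)
  {
    const string sql = @"
      EXEC sys.sp_addextendedproperty
        @name = N'Description',
        @value = N'The customer''s primary email address',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'Customer',
        @level2type = N'COLUMN', @level2name = N'Email';";

    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(sql, connection))
    {
      connection.Open();
      command.ExecuteNonQuery();
    }
  }
}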

A few days later, the freaking guy releases this Database Dictionary Creator, a nice GUI tool to document your database, storing the documentation as part of your database schema.

[Screenshot: Database Dictionary entry form]

The tool allows you to add your own custom properties to track, which then get displayed in the data dictionary form grid, as seen in the screenshot above. Audit and Source are custom properties. It is a way to tag your database schema.

You ask the guy to build a house with playing cards and he comes back with the Taj Mahal.

Check it out.


As developers, I think we tend to take the definition of Version for granted.  What are the components of a version?  Well that’s easy, it is:

Major.Minor.Build.Revision

Where the Build and Revision numbers are optional.  At least that is the definition given by the MSDN documentation for the Version class.

But look up Version in Wikipedia and you get a different answer.

The most common software versioning scheme is a scheme in which different major releases of the software each receive a unique numerical identifier. This is typically expressed as three numbers, separated by periods, such as version 2.4.13. One very commonly followed structure for these numbers is:

major.minor[.revision[.build]]

or

major.minor[.maintenance[.build]]

Notice that this scheme differs from the Microsoft scheme in that it places the build number at the very end, rather than the revision number.

Other versioning schemes such as the Unicode Standard and Solaris/Linux figure that three components is enough for a version with Major, Minor, and Update (for Unicode Standard) or Micro (for Solaris/Linux).

According to the MSDN documentation, the build number represents a recompilation of the same source, so it seems to me that it belongs at the end of the version, as it is the least significant element.
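
To see the MSDN ordering in action, System.Version treats the third component as Build and the fourth as Revision:

using System;

class VersionExample
{
  static void Main()
  {
    Version version = new Version("1.9.1.24");
    Console.WriteLine(version.Major);    // 1
    Console.WriteLine(version.Minor);    // 9
    Console.WriteLine(version.Build);    // 1
    Console.WriteLine(version.Revision); // 24
  }
}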

In Subtext, we roughly view the version as follows, though it is not set in stone:

  • Major: Major update.  If a library assembly, probably not backwards compatible with older clients.  Most likely will include database schema changes and interface changes.
  • Minor: Minor change, may introduce new features, but backwards compatibility is mostly retained.  Likely will include schema changes.
  • Revision: Minor bug fixes, no significant new features implemented, though a few small improvements may be included.  May include a schema change.
  • Build: A recompilation of the code in progress towards a revision.  No schema changes.

Internally, we may have schema changes between build increments, but when we are prepared to release, a schema change between releases would require a revision (or higher) increment.

I know some developers like to embed the date and counter in the build number.  For example, 20060927002 would represent compilation #2 on September 27, 2006.

What versioning schemes are you fans of and why?


When Log4Net doesn’t work, it can be a very frustrating experience.  Unlike your typical application library, log4net doesn’t throw exceptions when it fails.  Well that is to be expected and makes a lot of sense since it is a logging library.  I wouldn’t want my application to fail because it had trouble logging a message.

Unfortunately, the downside of this is that problems with log4net aren’t immediately apparent.  99.9% of the time, when Log4Net doesn’t work, it is a configuration issue.  Here are a couple of troubleshooting tips that have helped me out.

Enable Internal Debugging

This tip is straight from the Log4Net FAQ, but not everyone notices it. To enable internal debugging, add the following app setting to your App.config (or Web.config for web applications) file.

<add key="log4net.Internal.Debug" value="true"/>

This will write internal log4net messages to the console as well as the System.Diagnostics.Trace system.  You can easily output the log4net internal debug messages by adding a trace listener.  The following snippet is taken from the log4net FAQ and goes in your <configuration> section of your application config file.

<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add 
        name="textWriterTraceListener" 
        type="System.Diagnostics.TextWriterTraceListener" 
        initializeData="C:\tmp\log4net.txt" />
    </listeners>
  </trace>
</system.diagnostics>

Passing Nulls For Value Types Into AdoNetAppender

Another common problem I’ve dealt with is logging via the AdoNetAppender; in particular, attempting to log a null value into an int parameter (or other value type), assuming your stored procedure allows null for that parameter.

The key here is to use the RawPropertyLayout for that parameter. Here is a snippet from a log4net.config file that does this.

<parameter>
  <parameterName value="@BlogId" />
  <dbType value="Int32" />
  <layout type="log4net.Layout.RawPropertyLayout">
    <key value="BlogId" />
  </layout>
</parameter>
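
For that layout to have something to log, the property needs to be set in the logging context before the logging call is made. With a recent log4net release that looks something like this (the class here is just an example; BlogId matches the property key in the config above):

using log4net;

class CommentService
{
  static readonly ILog log = LogManager.GetLogger(typeof(CommentService));

  public void LogComment(int? blogId)
  {
    // The AdoNetAppender's RawPropertyLayout picks this up by key.
    // A null value flows through to the nullable @BlogId parameter
    // instead of failing on the value type conversion.
    ThreadContext.Properties["BlogId"] = blogId;
    log.Info("Comment received.");
  }
}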

Hopefully this helps you with your log4net issues.

tags: Log4Net