
Personal matters (good stuff) and work have been keeping me really busy lately, but every free moment I get I plod along, coding a bit here and there, getting Subtext 1.9.1 “Shields Up” ready for action.

There were a couple of innovations I wanted to include in this version as well as a TimeZone handling fix, but recent comment spam shit storms have created a sense of urgency to get what I have done out the door ASAP.

In retrospect, as soon as I finished the Akismet support, I should have released.

I have a working build that I am going to test on my own site tonight.  If it works out fine, I will deploy a beta to SourceForge.  This will be the first Subtext release that we label Beta.  I think it will be just as stable as any other release, but there’s a significant schema change involved and I want to test it more before I announce a full release.

Please note, there is a significant schema change in which data gets moved around, so backup your database and all applicable warnings apply.  Upgrade at your own risk.  I am going to copy my database over and upgrade offline to test it out before deploying.

The “Shields Up” edition will contain Akismet support and CAPTCHA.  The Akismet support required adding comment “folders” to allow the user to report false positives and false negatives.


Disk Defragmenter

For the most part, the Disk Defragmenter application (located at %SystemRoot%\system32\dfrg.msc) that comes with Windows XP does a decent enough job of defragmenting a hard drive for most users.

But if you’re a developer, you are not like most users, often dealing with very large files and installing and uninstalling applications like there’s no tomorrow.  For you, there are a couple of other free utilities you should have in your utility belt.

Recently I noticed my hard drive grinding a lot.  After defragmenting my drive, I clicked the View Report button for once (I’m normally in too much of a hurry to bother).

Disk Defragmenter dialog

This brings up a little report dialog.

Defrag report

And at the bottom, there is a list of files that Disk Defragmenter could not defragment.  In this case, I think the file was simply too large for the poor utility.  So I reached into my utility belt and whipped out Contig.

Contig

Contig is a command line utility from SysInternals that can report on the fragmentation of individual files and defrag an individual file.

I opened up a console window, changed directory to the Backup directory, and ran the command:

contig *.tib

This defragmented every file ending with the .tib extension (in this case, just one).  It took a good while to complete working against a 29 GB file, but successfully reduced the fragments from four to two, which made a huge difference.  I may try again to see if it can bring it down to a single fragment.
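
By the way, Contig can also report on fragmentation without actually moving anything, which is handy for deciding whether another pass is worth the wait.  If I remember its switches correctly, the analysis-only mode looks like this:

contig -a *.tib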

I ran Disk Defragmenter again and here are the results.

Disk Defragmenter report

Keep in mind that the disk usage before this pass with the defragger was the usage after running Disk Defragmenter once.  After using contig and then defragging again, I received much better results.

PageDefrag

Another limitation of Disk Defragmenter is that it cannot defragment files open for exclusive access, such as the Page File.  Again, reaching into my utility belt I pull yet another tool from Sysinternals (those guys rock!), PageDefrag.

Running PageDefrag brings up a list of page files, event log files, and registry files, along with how many clusters and fragments make up those files.

PageDefrag

This utility allows you to specify which files to defrag and either defragment them on the next reboot or have them defragmented at every boot.  As you can see in the screenshot, there was only one fragmented file, so the need for this tool is not great at the moment.  But it is good to have it there when I need it.

With these tools in hand, you are ready to be a defragmenting ninja.


TimeZones

Right now, there is no easy way to convert a time from one arbitrary timezone to another arbitrary timezone in .NET.  Certainly you can convert from UTC to the local system time, or from the local system time to UTC. But how do you convert from PST to EST?
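
To be clear, the conversions that are easy today are the ones that go through the local system time zone.  A quick sketch:

DateTime local = DateTime.Now;            // local system time
DateTime utc = local.ToUniversalTime();   // local -> UTC
DateTime backAgain = utc.ToLocalTime();   // UTC -> local system time
// There is no built-in equivalent for going from, say, PST to EST.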

Well Scott Hanselman recently pointed me to some ingenious code in DasBlog originally written by Clemens Vasters that does this.  I recently submitted a patch to DasBlog so that this code properly handles daylight savings and I had planned to blog about it in more detail later.  Unfortunately, we recently found out that changes in Vista may break this particular approach.

It turns out that the Orcas release introduces a new TimeZone2 class.  This class will finally allow conversions between arbitrary timezones.

Krzysztof Cwalina (who wins the award for the Microsoft blogger with the highest consonant-to-vowel ratio in a first name) points out that many people are not thrilled with the “2” suffix and provides context on the naming choice.

Kathy Kam of the BCL team points out some other proposed names for the new TimeZone2 class and the problems with each.

I’m fine with TimeZone2 or TimeZoneRegion.

 


Hello World

Jeff Atwood asks in a recent post whether writing your own blog software is a form of procrastination (no, blogging is).

I remember reading something where someone called rolling your own blog engine the modern-day equivalent of the Hello World program.  I wish I could remember where I heard that so I can give proper credit. UPDATE: Kent Sharkey reminds me that I read it on his blog. It was a quote from Scott Wigart. Thanks for the memory refresh, Kent!

Obviously, as an Open Source project founder building a blog engine, I have a biased opinion on this topic (I can own up to that).  My feeling is that for most cases (not all) rolling your own blog engine is a waste of time given that there are several good open source blog engines such as Dasblog, SUB, and Subtext.

It isn’t so much that writing a rudimentary blog engine is hard.  It isn’t.  To get a basic blog engine up and running is quite easy.  The challenge lies in going beyond that basic engine.

The common complaint with these existing solutions (and motivation for rolling your own) is that they contain more features than a person needs.  Agreed.  There’s no way a blog engine designed for mass consumption is going to only have the features needed by any given individual.

However, there are a lot of features these blog engines support that you wouldn’t realize you want or need till you get your own engine up and running.  And in implementing these common features, a developer can spend a lot of time playing catch-up by reinventing the kitchen sink.  Who has that kind of time?

Why reinvent the sink, when the sink is there for the taking?

For example, let’s look at fighting comment spam.

Implementing comments on a blog is quite easy. But then you go live with your blog and suddenly you’re overwhelmed with insurance offers.  Implementing comments is easy; implementing them well takes more time.

If you are going to roll your own blog engine, at least “steal” the Subtext Akismet API library in our Subversion repository. DasBlog did.  However, even with that library, you still ought to build a UI for reporting false positives and false negatives back to Akismet, etc.  Again, not difficult, but it is time consuming and it has already been done before.

Some other features that modern blog engines provide that you might not have thought about (not all are supported by Subtext yet, but each is supported by at least one of the engines I mentioned):

  • RFC3229 with Feeds
  • BlogML
    • So you can get your posts in there.
  • Email to Weblog
  • Gravatars
  • Multiple Blog Support (more useful than you think)
  • Timezone Handling (for servers in other time zones)
  • Windows Live Writer support
  • Metablog API
  • Trackbacks/Pingbacks
  • Search
  • Easy Installation and Upgrade
  • XHTML Compliance
  • Live Comment Preview

My point isn’t necessarily to dissuade developers from rolling their own blog engine.  It’s fun code to write, I admit.  My point is really this (actually two points):

1. If you plan to write your own blog engine, take a good hard look at the code for existing Open Source blog engines and ask yourself if your needs wouldn’t be better served by contributing to one of these projects.  They could use your help and it gets you a lot of features for free. Just don’t use the ones you don’t need.

Jerry Maguire

2. If you still want to write your own, at least take a look at the code contained in these projects and try to avail yourself of the gems contained therein.  It’ll help you keep your wheel reinventions to a minimum.

That’s all I’m trying to say.  Help us… help you.


A friend of mine sent me an interesting item: Brad Greenspan, the founder of eUniverse (now Intermix Media), the company that created and owned MySpace.com, has issued an online report alleging that the sale of MySpace intentionally defrauded shareholders out of multiple billions of dollars by hiding MySpace revenues from them.

Disclosure: Technically, I used to work for Intermix Media, as they owned my last employer, SkillJam, before SkillJam was sold to Fun Technologies.

The most surprising bit to me is that (according to the report)

Shareholders were not aware that Myspace’s revenue was growing at a 1200 percent annualized rate and increasing.

I wonder how much of this is true.  If it is true, what happens next?  Gotta love the smell of scandal in the morning. Wink


This is a pretty sweet video that demonstrates a system for sketching on a whiteboard using a mimio-like projection system.  The instructor draws objects, adds a gravity vector, and then animates his drawings to see the result.

Another interesting take on user interfaces for industrial design.


Keyvan Nayyeri has a great tip for how to control the display of a type in the various debugger windows using a DebuggerTypeProxy attribute.  His post includes screenshots with this in use.

This is an attribute you can apply to a class, assembly, or struct to specify another class to examine within a debugger window. 
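
Here is a rough sketch of the pattern (the Customer and CustomerDebugView types are made up for illustration): you point the attribute at a proxy class, and the debugger constructs the proxy around your instance and displays its public members instead of the raw fields.

using System.Diagnostics;

[DebuggerTypeProxy(typeof(CustomerDebugView))]
public class Customer
{
  public string FirstName;
  public string LastName;
}

// The debugger shows this view in place of the raw Customer fields.
internal class CustomerDebugView
{
  private Customer customer;

  public CustomerDebugView(Customer customer)
  {
    this.customer = customer;
  }

  public string FullName
  {
    get { return customer.FirstName + " " + customer.LastName; }
  }
}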

You still get access to the raw view of the original type, just in case some other developer plays a practical joke on you by specifying a DebuggerTypeProxy that displays the value of every field as being 42.


Alex Papadimoulis, the man behind the ever entertaining (and depressing) TheDailyWTF, announces a new business venture to help connect employers offering good jobs with good developers interested in a career change.  At least that’s the hope.

The Hidden Network is the name of the venture and the plan is to pay bloggers $5.00 per thousand ad views (cha-ching!) for hosting pre-screened job postings in small ad blocks that look very similar to the standard Google AdSense ad.  In the interest of full disclosure, I will be taking part as a blogger hosting these job postings.

I’ve written before about how hiring is challenging and that blogs are a great means of connecting with and hiring good developers.  In fact, that’s how I recruited Jon Galloway to join my company, brought in Steve Harman (a Subtext developer) as a contractor, and almost landed another well known blogger, until his own company grabbed his leg and dragged him kicking and screaming back into the fold by giving him anything he wanted (I kid).

My hope is that somehow, my blog helps connect a good employer with a good developer. It’s worked for my company, and it may work for yours.  Connecting good developers with good jobs is a win-win for all.  My fear is that the ads will turn into a bunch of phishing expeditions by headhunters looking to collect resumes.  It will be imperative that the Hidden Network work hard at filtering out all but the high quality job postings.

As Alex states in the comments of that post, blogs are the new Trade Publications, so paying a $5.00 CPM is quite sustainable and viable.  It will be interesting to see if his grand experiment works out.


Great show on .NET Rocks today featuring Rob Conery, architect of the Commerce Starter Kit and SubSonic.

However, being the self-centered person that I am, the only thing I remember hearing is my last name being mispronounced.

For the record, my last name is “Haack” which is pronounced Hack, as in,

Yeah, I took a look at the new code Phil put into Subtext.  Talk about an ugly Haack!

On the show Rob pronounced it Hock which is only okay if you are British, which Rob is not.  I already gave Rob crap about it, so we’re cool. Wink


UPDATE: I could not slip the subtle beg for an MSDN subscription I surreptitiously embedded in this post past my astute readers. Many thanks to James Avery for contributing an MSDN subscription to this grateful developer. Now that I have my MSDN subscription, I say this whole VPC licensing thing is a non-issue and quit whining about it. (I joke, I joke!).

In a recent post I declared that Virtual PC is a suitable answer to the lack of backwards compatibility support for Visual Studio.NET 2003.  In the comments to that post Ryan Smith asks a great question surrounding the licensing issues involved.

Is Microsoft going to let me use my OEM license key from an ancient machine so that I can run Windows XP in a virtual machine on Vista to test and debug in VS 2003?

I think as developers, we take for granted that we are going to have MSDN subscriptions (I used to but I don’t right now) and plenty of OS licenses for development purposes.  But suppose I sell my old machine and purchase a new machine with Vista installed.  How can I apply the suggested workaround of installing Virtual PC with Windows XP if I don’t have a license to XP?

Ryan wrote Microsoft with this question and received a response that indicated that Microsoft hasn’t quite figured this out. Does this mean that developers need to shell out another $189 or so in order to develop with Visual Studio.NET 2003 in a Virtual PC running Windows XP on Vista?


I recently wrote about a lightweight invisible CAPTCHA validator control I built as a defensive measure against comment spam.  I wanted the control to work in as many situations as possible, so it doesn’t rely on ViewState nor Session since some users of the control may want to turn those things off.

Of course this raises the question: how do I know the answer submitted in the form is the answer to the question I asked?  Remember, never trust your inputs; even form submissions can easily be tampered with.

Well, one way is to give the client the answer in some form that can’t be read and can’t be tampered with.  Encryption to the rescue!

Using a few new objects from the System.Security.Cryptography namespace in .NET 2.0, I quickly put together code that would encrypt the answer along with the current system time into a base 64 encoded string.  That string would then be placed in a hidden input field.

When the form is submitted, I made sure that the encrypted value contained the answer and that the date inside was not too old, thus defeating replay attacks.

The first change was to initialize the encryption algorithm via a static field initializer.

The code can be hard to read in a browser, so I did include the source code in the download link at the end of this post.

static SymmetricAlgorithm encryptionAlgorithm 
    = InitializeEncryptionAlgorithm();

static SymmetricAlgorithm InitializeEncryptionAlgorithm()
{
  SymmetricAlgorithm rijaendel = RijndaelManaged.Create();
  rijaendel.GenerateKey();
  rijaendel.GenerateIV();
  return rijaendel;
}

With that in place, I added a couple of static methods to the control.

public static string EncryptString(string clearText)
{
  byte[] clearTextBytes = Encoding.UTF8.GetBytes(clearText);
  byte[] encrypted = encryptionAlgorithm.CreateEncryptor()
    .TransformFinalBlock(clearTextBytes, 0
    , clearTextBytes.Length);
  return Convert.ToBase64String(encrypted);
}
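
The validation code further down calls a DecryptString method that I don’t show inline; it is just the mirror image of EncryptString (a sketch, assuming the same shared algorithm instance):

public static string DecryptString(string encryptedText)
{
  byte[] encryptedBytes = Convert.FromBase64String(encryptedText);
  byte[] decrypted = encryptionAlgorithm.CreateDecryptor()
    .TransformFinalBlock(encryptedBytes, 0, encryptedBytes.Length);
  return Encoding.UTF8.GetString(decrypted);
}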

In the OnPreRender method, I simply took the answer, appended the date using a pipe character as a separator, encrypted the whole stew, and then slapped it into a hidden form field.

//Inside of OnPreRender
Page.ClientScript.RegisterHiddenField
    (this.HiddenEncryptedAnswerFieldName
    , EncryptAnswer(answer));

string EncryptAnswer(string answer)
{
  return EncryptString(answer 
    + "|" 
    + DateTime.Now.ToString("yyyy/MM/dd HH:mm"));
}

Now with all that in place, when the user submits the form, I can determine if the answer is valid by grabbing the value from the form field, calling decrypt on it, splitting it using the pipe character as a delimiter, and examining the result.

protected override bool EvaluateIsValid()
{
  string answer = GetClientSpecifiedAnswer();
    
  string encryptedAnswerFromForm = 
    Page.Request.Form[this.HiddenEncryptedAnswerFieldName];
    
  if(String.IsNullOrEmpty(encryptedAnswerFromForm))
    return false;
    
  string decryptedAnswer = DecryptString(encryptedAnswerFromForm);
    
  string[] answerParts = decryptedAnswer.Split('|');
  if(answerParts.Length < 2)
    return false;
    
  string expectedAnswer = answerParts[0];
  DateTime date = DateTime.ParseExact(answerParts[1]
    , "yyyy/MM/dd HH:mm", CultureInfo.InvariantCulture);
  if ((DateTime.Now - date).TotalMinutes > 30)
  {
    this.ErrorMessage = "Sorry, but this form has expired. "
      + "Please submit again.";
    return false;
  }

  return !String.IsNullOrEmpty(answer) 
    && answer == expectedAnswer;
}

// Gets the answer from the client, whether entered by 
// javascript or by the user.
private string GetClientSpecifiedAnswer()
{
  string answer = Page.Request.Form[this.HiddenAnswerFieldName];
  if(String.IsNullOrEmpty(answer))
    answer = Page.Request.Form[this.VisibleAnswerFieldName];
  return answer;
}

This technique could work particularly well for a visible CAPTCHA control as well. The request for a CAPTCHA image is an asynchronous request, and the code that renders that image has to know which CAPTCHA image to render. Implementations I’ve seen simply store an image in the cache using a GUID as a key when rendering the control. Thus, when the asynchronous request to grab the CAPTCHA image arrives, the CAPTCHA image rendering HttpHandler looks up the image using the GUID and renders that baby out.

Using encryption, the URL for the CAPTCHA image could embed the answer (aka the word to render).

If you are interested, you can download an updated binary and source code for the Invisible CAPTCHA control which now includes the symmetric encryption from here.


UPDATE: I think a good measure of a blog is the intelligence and quality of the comments. The comments in response to this post make my blog look good (not all do).

As several commenters pointed out, the function returns a local DateTime adjusted from the specified UTC date. By calling ToUniversalTime() on the result, I get the behavior I am looking for. That’s why I ask you smart people before making an ass of myself on the bug report site.

Before I post this as a bug, can anyone tell me why this test fails when I think it should pass?

[Test]
public void ParseUsingAssumingUniversalReturnsDateTimeKindUtc()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture, 
    DateTimeStyles.AssumeUniversal);
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind, 
    "Expected AssumeUniversal would return a UTC date.");
}

What is going on here is that I am calling the method DateTime.Parse, passing in DateTimeStyles.AssumeUniversal as an argument. My understanding is that it should indicate to the Parse method that the passed in string denotes a Coordinated Universal Time (aka UTC).

But when I check the Kind property of the resulting DateTime instance, it returns DateTimeKind.Local rather than DateTimeKind.Utc.

The unit test demonstrates what I think should happen. Either this really is a bug, or I am wrong in my assumptions, in which case I would like to know, how are you supposed to parse a string representing a date/time in the UTC timezone?
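
For completeness, here is the workaround from the update above expressed as a passing test (calling ToUniversalTime() on the parsed value):

[Test]
public void ParseAssumingUniversalThenConvertToUtc()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture, 
    DateTimeStyles.AssumeUniversal).ToUniversalTime();
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind);
  Assert.AreEqual(19, utcDate.Hour);
}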


Atlas With The Weight Of The Codebase

I read this article recently that describes the mind-frying complexity of the Windows development process.  With Vista sporting around 50 million lines of code, it’s no wonder Vista suffers from delays.  Quick, what does line #37,920,117 say?

Microsoft has acknowledged the need to release more often (as in sometime this millennium), but that agility is difficult to achieve with the current codebase due to its immense complexity as well as Microsoft’s (stubbornly?) heroic efforts to maintain backward compatibility.  The author of the article labels this the Curse of Backward Compatibility.

I don’t think anyone doubts that maintaining backwards compatibility can be a Herculean effort, because it goes beyond supporting legacy specifications (which is challenging enough).  Just look at how Microsoft supports old code that broke the rules.  Additionally, old code poses security threats, which requires even more code to patch.  Ideally a lot of that code would be removed outright, but it is challenging to remove or rewrite any of it for fear of breaking too many applications.

Of course there are very good business reasons for Microsoft to maintain this religious adherence to backwards compatibility (starts with an m, ends with a y, and has one in the middle).  The primary one being that they have a huge user base compared to Apple, which does not give Microsoft the luxury of a “Do Over” as Apple has done with OSX.

A different article (same magazine) points to virtualization technology as the answer.  This article suggests a virtualization layer that is core to the operating system.  I think we are already seeing hints of this in play with Microsoft’s answer to developers angry that Vista is not going to support Visual Studio.NET 2003.

The big technical challenge is with enabling scenarios like advanced debugging. Debuggers are incredibly invasive in a process, and so changes in how an OS handles memory layout can have big impacts on it. Vista did a lot of work in this release to tighten security and lock down process/memory usage - which is what is affecting both the VS debugger, as well as every other debugger out there. Since the VS debugger is particularly rich (multi-language, managed/native interop, COM + Jscript integration, etc) - it will need additional work to fully support all scenarios on Vista. That is also the reason we are releasing a special servicing release after VS 2005 SP1 specific to Vista - to make sure everything (and especially debugging and profiling) work in all scenarios. It is actually several man-months of work (we’ve had a team working on this for quite awhile). Note that the .NET 1.1 (and ASP.NET 1.1) is fully supported at runtime on Vista. VS 2003 will mostly work on Vista. What we are saying, though, is that there will be some scenarios where VS 2003 doesn’t work (or work well) on Vista - hence the reason it isn’t a supported scenario. Instead, we recommend using a VPC/VM image for VS 2003 development to ensure 100% compat.

This answer did not satisfy everyone (which answer does?), many seeing it as a copout as it pretty much states that to maintain backward compatibility, use Virtual PC.

Keep in mind that this particular scenario is not going to affect the average user.  Instead, it affects developers, who are notorious for being early adopters and, one would think, would be more amenable to adopting virtualization as an answer, because hey! It’s cool new technology!

Personally, I am satisfied by this answer because I have no plans to upgrade to Vista any time soon (my very own copout).  Sure, it’s not the answer I would’ve hoped for if I were planning an impending upgrade.  But given a choice between a more secure Vista released sooner, or a delay of several months to make sure that developers with advanced debugging needs on VS.NET 2003 are happy, I’m going to have to say go ahead and break with backward compatibility.  But at the same time, push out the .NET 2.0 Framework as a required update to Windows XP.

With Windows XP, Microsoft finally released a consumer operating system that was good enough.  Many users will not need to upgrade to Vista for a looong time.  I think it is probably a good time to start looking at cleaning up and modularizing that 50 million line rambling historical record they call a codebase.

If my DOS app circa 1986 stops working on Vista, so be it.  If I’m still running DOS apps, am I really upgrading to Vista?  Using a virtual operating system may not be the best answer we could hope for, but I think it’s good enough and should hopefully free Microsoft up to really take Windows to the next level.  It may cause some difficulties, but there’s no easy path to paying off the immense design debt that Microsoft has accrued with Windows.


A few days back Jon Galloway and I were discussing a task he was working on to document a database for a client.  He had planned to use some code generation to initially populate a spreadsheet and would fill in the details by hand.  I suggested he store the data with the schema using SQL extended properties.

We looked around and found some stored procs for pulling properties out, but no useful applications for putting them in there in a nice, quick, and easy manner.

A few days later, the freaking guy releases this Database Dictionary Creator, a nice GUI tool to document your database, storing the documentation as part of your database schema.

Database Dictionary Entry Form

The tool allows you to add your own custom properties to track, which then get displayed in the data dictionary form grid as seen above. Audit and Source are custom properties. It is a way to tag your database schema.

You ask the guy to build a house with playing cards and he comes back with the Taj Mahal.

Check it out.


As developers, I think we tend to take the definition of Version for granted.  What are the components of a version?  Well that’s easy, it is:

Major.Minor.Build.Revision

Where the Build and Revision numbers are optional.  At least that is the definition given by the MSDN documentation for the Version class.
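
A quick way to see that ordering is to poke at the Version class itself (the version number here is made up):

Version version = new Version("1.9.1.42");

Console.WriteLine(version.Major);    // 1
Console.WriteLine(version.Minor);    // 9
Console.WriteLine(version.Build);    // 1  (the third component)
Console.WriteLine(version.Revision); // 42 (the fourth component)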

But look up Version in Wikipedia and you get a different answer.

The most common software versioning scheme is a scheme in which different major releases of the software each receive a unique numerical identifier. This is typically expressed as three numbers, separated by periods, such as version 2.4.13. One very commonly followed structure for these numbers is:

major.minor[.revision[.build]]

or

major.minor[.maintenance[.build]]

Notice that this scheme differs from the Microsoft scheme in that it places the build number at the very end, rather than the revision number.

Other versioning schemes such as the Unicode Standard and Solaris/Linux figure that three components is enough for a version with Major, Minor, and Update (for Unicode Standard) or Micro (for Solaris/Linux).

According to the MSDN documentation, the build number represents a recompilation of the same source, so it seems to me that it belongs at the end of the version, as it is the least significant element.

In Subtext, we roughly view the version as follows, though it is not set in stone:

  • Major: Major update.  If a library assembly, probably not backwards compatible with older clients.  This would include major changes. Most likely will include database schema changes and interface changes.
  • Minor: Minor change, may introduce new features, but backwards compatibility is mostly retained.  Likely will include schema changes.
  • Revision: Minor bug fixes, no significant new features implemented, though a few small improvements may be included.  May include a schema change.
  • Build: A recompilation of the code in progress towards a revision.  No schema changes.

Internally, we may have schema changes between build increments, but when we are prepared to release, a schema change between releases would require a revision (or higher) increment.

I know some developers like to embed the date and counter in the build number.  For example, 20060927002 would represent compilation #2 on September 27, 2006.

What versioning schemes are you fans of and why?


When Log4Net doesn’t work, it can be a very frustrating experience.  Unlike your typical application library, log4net doesn’t throw exceptions when it fails.  Well that is to be expected and makes a lot of sense since it is a logging library.  I wouldn’t want my application to fail because it had trouble logging a message.

Unfortunately, the downside of this is that problems with log4net aren’t immediately apparent.  99.9% of the time, when Log4Net doesn’t work, it is a configuration issue.  Here are a couple of troubleshooting tips that have helped me out.

Enable Internal Debugging

This tip is straight from the Log4Net FAQ, but not everyone notices it. To enable internal debugging, add the following app setting to your App.config (or Web.config for web applications) file.

<add key="log4net.Internal.Debug" value="true"/>

This will write internal log4net messages to the console as well as the System.Diagnostics.Trace system.  You can easily output the log4net internal debug messages by adding a trace listener.  The following snippet is taken from the log4net FAQ and goes in your <configuration> section of your application config file.

<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add 
        name="textWriterTraceListener" 
        type="System.Diagnostics.TextWriterTraceListener" 
        initializeData="C:\tmp\log4net.txt" />
    </listeners>
  </trace>
</system.diagnostics>

Passing Nulls For Value Types Into AdoNetAppender

Another common problem I’ve dealt with is logging with the AdoNetAppender, in particular attempting to log a null value into an int parameter (or other value type), assuming your stored procedure allows null for that parameter.

The key here is to use the RawPropertyLayout for that parameter. Here is a snippet from a log4net.config file that does this.

<parameter>
  <parameterName value="@BlogId" />
  <dbType value="Int32" />
  <layout type="log4net.Layout.RawPropertyLayout">
    <key value="BlogId" />
  </layout>
</parameter>
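
For the RawPropertyLayout to find anything, the BlogId property has to be pushed onto the log4net context before the logging call.  A minimal sketch, assuming the log4net 1.2 ThreadContext API (the logger name and blogId value are hypothetical):

using log4net;

ILog log = LogManager.GetLogger("CommentService");

// RawPropertyLayout reads the "BlogId" property from the logging context,
// so set it before logging. A null here ends up as a database NULL.
int? blogId = null;
ThreadContext.Properties["BlogId"] = blogId;
log.Info("Comment received.");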

Hopefully this helps you with your log4net issues.

tags: Log4Net


Duncan Mackenzie writes about the issue of Categories vs. Tags in blogs and blog editors.  I tried to comment there with my thoughts, but received some weird javascript errors.

I’ve thought a lot about the same issues with Subtext. Originally my plan was to simply repurpose the existing category functionality by slapping a big tag sticker on its forehead, and from then on, a category would really be a tag.  One big rename and bam!, I’m done.

But the API issue Duncan describes is a problem.  After more thinking about it, I now plan to make tags a first class citizen alongside categories.  In my mind, they serve different purposes.

I see categories as a structural element and navigational aid.  It is a way to group posts into large high-level groupings.  Use sparingly.

By contrast, I see tags as metadata, to be used liberally.

One thought around the API issue is that there is a microformat for specifying tags (rel=”tag”) and Windows Live Writer has plugins for inserting tags into the body of a post. 

My current thinking is to pursue parsing tags from posted content and using that to tag content.
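
As a sketch of that approach (a deliberately simplified regular expression, not necessarily what Subtext will ship with), something along these lines could pull rel="tag" anchors out of a post body:

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Finds the link text of anchors marked up with the rel-tag microformat,
// e.g. <a href="http://technorati.com/tag/subtext" rel="tag">subtext</a>
public static IList<string> ParseTags(string html)
{
  List<string> tags = new List<string>();
  Regex regex = new Regex("<a[^>]+rel=\"tag\"[^>]*>(?<tag>.*?)</a>",
    RegexOptions.IgnoreCase | RegexOptions.Singleline);

  foreach (Match match in regex.Matches(html))
  {
    tags.Add(match.Groups["tag"].Value);
  }
  return tags;
}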

tags: Rel-Tag, Microformat, Categories, Tags


UPDATE: This code is now hosted in the Subkismet project on CodePlex.

(Image source: http://www.dpchallenge.com/image.php?IMAGE_ID=138743)

Not too long ago I wrote about using heuristics to fight comment spam.  A little later I pointed to the NoBot control as an independent implementation, using Atlas, of the ideas I mentioned.

I think that control is a great start, but it does suffer from a few minor issues that prevent me from using it immediately.

  1. It requires Atlas and Atlas is pretty heavyweight.
  2. Atlas is pre-release right now.
  3. We’re waiting on a bug fix in Atlas to be implemented.
  4. It is not accessible, as it doesn’t work if javascript is disabled.

Let me elaborate on the first point.  In order to get the NoBot control working, a developer needs to add a reference to two separate assemblies, Atlas and the Atlas Control Toolkit, as well as make a few changes to Web.config.  Some developers will simply want a control they can drop into their project and start using right away.

I wanted a control that meets the following requirements.

  1. Easy to use. Only one assembly to reference.
  2. Is invisible.
  3. Works when javascript is disabled.

The result is the InvisibleCaptcha control, which is a validation control (it inherits from BaseValidator), so it can be used just like any other validator, except this validator is invisible and should not have the ControlToValidate property set.  The way it works is that it renders some javascript to perform a really simple calculation and write the answer into a hidden text field.

What!  Javascript?  What about accessibility!? Calm down now, I’ll get to that.

When the user submits the form, we take the submitted value from the hidden form field, combine it with a secret salt value, and then hash the whole thing together.  We then compare this value with the hash of the expected answer, which is stored in a hidden form field base64 encoded.

The whole idea is that most comment bots currently don’t have the ability to evaluate javascript and thus will not be able to submit the form correctly.  Users with javascript enabled browsers have nothing to worry about.
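
In code, that server-side check looks roughly like this (the salt and field values are hypothetical, and I’m using SHA-256 here purely for illustration; the real control is in the download at the end of this post):

using System;
using System.Security.Cryptography;
using System.Text;

static string HashAnswer(string answer, string salt)
{
  byte[] bytes = Encoding.UTF8.GetBytes(answer + salt);
  using (SHA256 sha = SHA256.Create())
  {
    return Convert.ToBase64String(sha.ComputeHash(bytes));
  }
}

// Compares the hash of the submitted answer (plus salt) to the
// expected hash that was rendered into the hidden form field.
static bool IsAnswerCorrect(string submittedAnswer, string salt, string expectedHash)
{
  return HashAnswer(submittedAnswer, salt) == expectedHash;
}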

So what happens if javascript is disabled?

If javascript is disabled, then we render out the question as text alongside a visible text field, thus giving users reading your site via non-javascript browsers (think Lynx or those text-to-speech browsers for the blind) a chance to comment.

Accessible version of the Invisible CAPTCHA
control

This should be sufficient to block a lot of comment spam.

Quick Aside: As Atwood tells me, the idea that CAPTCHA has to be really strong is a big fallacy.  His blog simply asks you to type in orange every time and it blocks 99.9% of his comment spam.

I agree with Jeff on this point when it comes to websites and blogs with small audiences. Websites and blogs tend to implement different CAPTCHA systems from one to another, and beating each one brings diminishing returns.

However, for a site with a huge audience like Yahoo! or Hotmail, I think strong CAPTCHA is absolutely necessary as it is a central place for spammers to target.  (By the way, remind me to write a bot to post comment spam on Jeff’s blog)

If you do not care for accessibility, you can turn off the rendered form so that only javascript enabled browsers can post comments by setting the Accessible property to false.

I developed this control as part of the Subtext.Web.Control.dll assembly which is part of the Subtext project, thus you can grab this assembly from our Subversion repository.

To make things easier, I am also providing a link to a zip file that contains the assembly as well as the source code for the control. You can choose to either reference the assembly in order to get started right away, or choose to add the source code file and the javascript file (make sure to mark it as an embedded resource) to your own project.

Please note that if you add this control to your own assembly, you will need to add the following assembly level WebResource attribute in order to get the web resource handler working.

[assembly: WebResource("YourNameSpace.InvisibleCaptcha.js", 
    "text/javascript")]

You will also need to find the call to Page.ClientScript.GetWebResourceUrl inside InvisibleCaptcha.cs and change it to match the namespace specified in the WebResource attribute.

If you look at the code, you’ll notice I make use of several hidden input fields. I didn’t use ViewState for values the control absolutely needs to work because Subtext disables ViewState.  Likewise, I could have chosen to use ControlState, but that can also be disabled.  I took the most defensive route.

[Download InvisibleCaptcha here].

tags: CAPTCHA, Comment Spam, ASP.NET, Validator


Akismet is all the rage among the kids these days for blocking comment spam.  Started by the founder of Wordpress, Matt Mullenweg, Akismet is a RESTful web service used to filter comment spam.  Simply submit a comment to the service and it will give you a thumbs up or thumbs down on whether it thinks the comment is spam.
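
If you are curious what is going on under the hood, the round trip is simple enough to sketch with a raw HttpWebRequest (the values are hypothetical and error handling is omitted; this is the gist, not the library’s actual code):

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Web;

// Posts a comment to Akismet's comment-check endpoint. Akismet responds
// with the literal string "true" when it thinks the comment is spam.
static bool IsSpam(string apiKey, string blogUrl, string authorIp,
  string userAgent, string commentText)
{
  string url = "http://" + apiKey + ".rest.akismet.com/1.1/comment-check";
  string body = "blog=" + HttpUtility.UrlEncode(blogUrl)
    + "&user_ip=" + HttpUtility.UrlEncode(authorIp)
    + "&user_agent=" + HttpUtility.UrlEncode(userAgent)
    + "&comment_content=" + HttpUtility.UrlEncode(commentText);

  HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
  request.Method = "POST";
  request.ContentType = "application/x-www-form-urlencoded";

  byte[] bytes = Encoding.UTF8.GetBytes(body);
  using (Stream requestStream = request.GetRequestStream())
  {
    requestStream.Write(bytes, 0, bytes.Length);
  }

  using (WebResponse response = request.GetResponse())
  using (StreamReader reader = new StreamReader(response.GetResponseStream()))
  {
    return reader.ReadToEnd().Trim() == "true";
  }
}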

In order to use Akismet you need to sign up for a free non-commercial API key with WordPress and hope that your blog engine supports the Akismet API.

There are already two Akismet API implementations for ASP.NET, but they are both licensed under the GPL which I won’t allow near Subtext (for more on open source licenses, see my series on the topic).

So I recently implemented an API for Akismet in C# to share with the DasBlog folks (despite the bitter public mudslinging between blog engines, there is nothing but hugs behind the scenes) as part of the Subtext project, thus it is BSD licensed.

You can download the assembly and source code and take a look.  It is also in the Subtext Subversion repository.