tech comments edit

For a project I’m working on, we have an automated build server running CruiseControl.NET hosted in a virtual machine.  We do the same thing for Subtext

Some of you may have multiple virtual servers running on the same machine.  Typically in such a setup (at least in mine), each virtual server won’t have its own public IP address, instead sharing the public IP of the host computer.

This makes it a tad bit difficult to manage the virtual servers since using Remote Desktop to connect to the public IP will connect to the host computer and not the virtual machine.  The same thing applies to multiple real computers behind a firewall.

One solution (and the one I use) is to set up each virtual server to run Terminal Services, but have each one listen on a different port.  Then set up port forwarding on your firewall to forward requests for the respective ports to the correct virtual machine.

Configuring the Server

The setting for the Terminal Services port lives in the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp

Open up Regedit, find this key, and look for the PortNumber value.

PortNumber Setting

Double click on the PortNumber value and enter the port number you wish to use. Unless you think in hex (pat yourself on the back if you do), you might want to click Decimal before entering your new port number.

Port Number Value Dialog

Or, you can use my creatively named VelocIT Terminal Services Port Changer application, which is available with source.  It’s a simple five-minute application that does one thing and one thing only: it changes the port number that Terminal Services listens on.

VelocIT Terminal Services Port Changer
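Incidentally, the core of such a tool boils down to a single registry write. Here’s a minimal sketch (assuming port 3900; note the value must be written as a DWORD):

using Microsoft.Win32;

// Set the Terminal Services listening port to 3900 (stored as a DWORD).
Registry.SetValue(
  @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp",
  "PortNumber", 3900, RegistryValueKind.DWord);

You’ll typically need to restart the machine (or the Terminal Services service) for the change to take effect.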

Remember, all the usual caveats apply about tinkering with the registry. You do so at your own risk.

Connecting via Remote Desktop to the Non-Standard Port

Now that you have the server all set up, you need to connect to it.  This is pretty easy.  Suppose you change the port for the virtual machine to listen in on port 3900.  You simply append 3900 to the server name (or IP) when connecting via Remote Desktop.
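For example, from a command prompt (buildserver is a placeholder name):

mstsc /v:buildserver:3900

The same server:port syntax works in the Computer field of the Remote Desktop Connection dialog.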

Remote Desktop

Keep In Mind

Keep in mind that the user you attempt to connect with must have the “log on interactively” right as well as permission to log on to the Terminal Services session.  For more on that, check out this extremely helpful article with its troubleshooting section.

That’s pretty easy, no?  Now you should have no problem managing your legions of virtual servers.


comments edit

Just upgraded my blog to the latest version of Subtext in the Subversion 1.9 branch, not that you needed to know that. I’d appreciate you letting me know via my Contact page if you run into problems leaving a comment and such.

Before I release 1.9.2 (long story why that ends with a 2 and not a 1), I need to update the Contact page so that spam filters also apply there.

comments edit

Weird how coding on Subtext relaxes me. For the past couple of days I’ve been feeling a bit under the weather and getting worse.  The weird part is that anytime I try to eat something, there’s a terrible aftertaste. And no, it’s not my breath.  I couldn’t finish my pizza tonight.  Pizza!

Anyways, I couldn’t sleep tonight so I figured hacking on some Subtext code might relax me.  I fixed some bugs and implemented FeedBurner support in Subtext, using the DasBlog code as a guide, though the way we implement feeds is much different.  It’ll come out in the next edition of Subtext.

Unfortunately, it may take me longer to release because although coding on Subtext feels good when I’m sick, QA testing doesn’t.

comments edit

Quick tip for you if you need to remotely connect to a server with VMWare Server installed in order to manage the virtual server. 

VMWare Server Console doesn’t work correctly if you Remote Desktop or Terminal in. You have to physically be at the machine or Remote Desktop into the Console session.
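With a reasonably recent Remote Desktop client, you can reach the console session remotely using the /console switch (yourserver is a placeholder):

mstsc /v:yourserver /console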

The symptoms I ran into were that I could not open a virtual machine, and when I tried to create a new one, I got an “Invalid Handle” error.

Technorati Tags: Tips

comments edit

Recently I wrote a .NET-based Akismet API component for Subtext.  In attempting to make as clean an interface as possible, I made the property that stores the commenter’s IP address of type IPAddress.

This sort of falls in line with the Framework Design Guidelines, which mention using the Uri class in your public interface rather than a string to represent a URL.  I figured this advice applied equally to IP addresses.

To obtain the user’s IP Address, I simply used the UserHostAddress property of the HttpRequest object like so.

HttpContext.Current.Request.UserHostAddress

The UserHostAddress property is simply a wrapper around the REMOTE_ADDR server variable which can be accessed like so.

HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]

For users behind a proxy (or router), this returns only one IP Address, the IP Address of the proxy (or router).  After some more digging, I learned that many large proxy servers will append their IP Address to a list maintained within another HTTP Header, HTTP_X_FORWARDED_FOR or HTTP_FORWARDED.

For example, if you make a request from a country outside of the U.S., your proxy server might add the header HTTP_X_FORWARDED_FOR and put in your real IP and append its own IP Address to the end. If your request then goes through yet another proxy server, it may append its IP Address to the end.  Note that not all proxy servers follow this convention, the notable exception being anonymizing proxies.

Thus to get the real IP address for the user, it makes sense to check the value of this first:

HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]

If that value is empty or null, then check the UserHostAddress property.

So what does this mean for my Akismet implementation?  I could simply change that property to be a string and return the entire list of IP addresses.  That’s probably the best choice, but I am not sure whether or not Akismet accepts multiple IPs.  Not only that, I’m really tired and lazy, and this change would require that I change the Subtext schema since we store the commenter’s IP in a field just large enough to hold a single IP address.

So unless someone smart slaps me upside the head and calls me crazy for this approach, I plan to look at the HTTP_X_FORWARDED_FOR header first and take the first IP address in the list if there are any.  Otherwise I will grab the value of UserHostAddress.  As far as I am concerned, it’s really not that important that I am 100% accurate in identifying the remote IP, I just need something consistent to pass to Akismet.
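In code, the plan looks something like this (a sketch; GetUserIpAddress is my name for it, and it would live wherever you keep your web helpers):

// A sketch of the fallback logic described above.
private static string GetUserIpAddress(HttpRequest request)
{
  // HTTP_X_FORWARDED_FOR is typically "client, proxy1, proxy2",
  // so the first entry is (supposedly) the original client.
  string forwardedFor = request.ServerVariables["HTTP_X_FORWARDED_FOR"];
  if (!String.IsNullOrEmpty(forwardedFor))
    return forwardedFor.Split(',')[0].Trim();

  // Otherwise fall back to REMOTE_ADDR via UserHostAddress.
  return request.UserHostAddress;
}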

code, tdd comments edit

UPDATE: I’ve since supplemented this with another approach.

Jeremy Miller asks the question, “How do you organize your NUnit test code?”  My answer? I don’t; I organize my MbUnit test code.

Bad jokes aside, I do understand that his question is more focused on the structure of unit testing code and not the structure of any particular unit testing framework.

I pretty much follow the same structure that Jeremy does in that I have a test fixture per class (sometimes more than one per class for special cases).  I experimented with having a test fixture per method, but gave up on that as it became a maintenance headache.  Too many files!

One convention I use is to prefix my unit test projects with “UnitTests”.  Thus the unit tests for Subtext are in the project UnitTests.Subtext.dll.  The main reason for this, besides the obvious fact that it’s a sensible name for a project that contains unit tests, is that for most projects, the unit test assembly shows up at the bottom of the Solution Explorer because of alphabetic ordering.

So then I co-found a company whose name starts with the letter V.  Doh!

UPDATE: I neglected to point out (as David Hayden did) that with VS.NET 2005 I can use Solution Folders to group tests. We actually use Solution Folders within Subtext. Unfortunately, much of my company’s work is still done in VS.NET 2003, which does not boast such a nice feature.

One thing I don’t do is separate my unit tests and integration tests into two separate assemblies.  Currently I don’t separate those tests at all, though I have plans to start. 

Even when I do start separating tests, one issue with having unit tests in two separate assemblies is that I don’t know how to produce NCover reports that merge the results of coverage from two separate assemblies.

One solution I proposed in the comments to Jeremy’s post is to use a single assembly for tests, but have UnitTests and Integration Tests live in two separate top level namespaces.  Thus in MbUnit or in TD.NET, you can simply run the tests for one namespace or another.

Example Namespaces: Tests.Unit and Tests.Integration

In the root of a unit test project, I tend to have a few helper classes such as UnitTestHelper, which contains static methods useful for unit tests. I also have a ReflectionHelper class, just in case I need to “cheat” a little. Any other classes I might find useful, such as my SimulatedHttpRequest class, typically go in the root as well.

comments edit

Tivo Icon

Ever-prolific Jon Galloway has released another tool on our tools site.  When we started the tools site, I talked some trash to spur some friendly competition between the two of us.  Let’s just say Jon is kicking my arse so hard my relatives in Korea can’t sit down.

His latest RegmonToRegfile tool works with yet another SysInternals tool, Regmon.

Winternals (maker of Sysinternals) released many fantastic tools for managing and spelunking your system.

So great, in fact, that Robb feels he owes them his child in gratitude.

Kudos to Microsoft for snatching up Mark Russinovich and Winternals Software.

Regmon is essentially a Tivo for your registry, allowing you to record and play back changes to the registry.

Regmon lacks the ability to export to a registry (.reg) file, which is where Jon’s tool comes into play.  It can parse Regmon log files and translate them to .reg files.

Here is a link to Jon’s blog post on this tool.


comments edit

Jeff Atwood writes a great rebuttal to Steve Yegge’s rant on Agile methodologies.  I won’t expound on it too much except to point out this quote which should be an instant classic, emphasis mine:

Steve talks about “staying lightweight” as if it’s the easiest thing in the world, like it’s some natural state of grace that developers and organizations are born into. Telling developers they should stay lightweight is akin to telling depressed people they should cheer up.

Heh heh.  Jeff moves from a Coding Horror to a Coding Hero.

Now while I agree much of it is religion, I like to think that the goal is to remove as much religion out of software development as possible.  A key step, as Jeff points out, is recognizing which aspects are religion.

…the only truly dangerous people are the religious nuts who don’t realize they are religious nuts.

It’s like alcoholism.  You first have to accept that you are an alcoholic.  But then, once you recognize that, you strive to make changes.

For example, Java vs .NET is a religious issue insofar as one attempts to make an absolute claim that one is superior to the other. 

However it is less a religious issue to say that I prefer .NET over Java for reasons X, Y, and Z based on my experience with both, or even to say that in situation X, Java is a preferred solution.

Likewise, just because double-blind tests are nearly impossible to conduct does not mean that we cannot increase the body of knowledge of software engineering. 

For the most part, we turn to the techniques of social scientists and economists by poring over historical data and looking at trends to extrapolate what information we can, with appropriate margins of error. 

Thus we can state, with a fair degree of certainty, that:

Design is a complex, iterative process. Initial design solutions are usually wrong and certainly not optimal.

That is fact 28 of Facts and Fallacies of Software Engineering by Robert L. Glass, who is by no means an agile zealot.  Yet this fact does show a weakness in waterfall methodologies that is addressed by agile methodologies, a differentiation that owes more to the scientific method than pure religion.

comments edit

Personal matters (good stuff) and work have been keeping me really busy lately, but every free moment I get, I plod along, coding a bit here and there, getting Subtext 1.9.1 “Shields Up” ready for action.

There were a couple of innovations I wanted to include in this version as well as a TimeZone handling fix, but recent comment spam shit storms have created a sense of urgency to get what I have done out the door ASAP.

In retrospect, as soon as I finished the Akismet support, I should have released.

I have a working build that I am going to test on my own site tonight.  If it works out fine, I will deploy a beta to SourceForge.  This will be the first Subtext release that we label Beta.  I think it will be just as stable as any other release, but there’s a significant schema change involved and I want to test it more before I announce a full release.

Please note, there is a significant schema change in which data gets moved around, so backup your database and all applicable warnings apply.  Upgrade at your own risk.  I am going to copy my database over and upgrade offline to test it out before deploying.

The “Shields Up” edition will contain Akismet support and CAPTCHA.  The Akismet support required adding comment “folders” so the user can report false positives and false negatives.

comments edit

Disk Defragmenter

For the most part, the Disk Defragmenter application (located at %SystemRoot%\system32\dfrg.msc) that comes with Windows XP does a decent enough job of defragmenting a hard drive for most users.

But if you’re a developer, you are not like most users, often dealing with very large files and installing and uninstalling applications like there’s no tomorrow.  For you, there are a couple of other free utilities you should have in your utility belt.

Recently I noticed my hard drive grinding a lot.  After defragmenting my drive, I clicked on the View Report button this time (in my usual hurry, I normally never do).

Disk Defrag Dialog

This brings up a little report dialog.

Defrag Report

And at the bottom, there is a list of files that Disk Defragmenter could not defragment.  In this case, I think the file was simply too large for the poor utility.  So I reached into my utility belt and whipped out Contig.

Contig

Contig is a command line utility from SysInternals that can report on the fragmentation of individual files and defrag an individual file.
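For example, to just analyze fragmentation without defragmenting anything, Contig takes an -a switch (if I recall the usage correctly):

contig -a *.tib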

I opened up a console window, changed directory to the Backup directory, and ran the command:

contig *.tib

This defragmented every file ending with the .tib extension (in this case, just one).  It took a good while to complete working against a 29 GB file, but successfully reduced the fragments from four to two, which made a huge difference.  I may try again to see if it can bring it down to a single fragment.

I ran Disk Defragmenter again and here are the results.

Disk Defragmenter

Keep in mind that the disk usage before this pass with the defragger was the usage after running Disk Defragmenter once.  After using contig and then defragging again, I received much better results.

PageDefrag

Another limitation of Disk Defragmenter is that it cannot defragment files open for exclusive access, such as the Page File.  Again, reaching into my utility belt I pull yet another tool from Sysinternals (those guys rock!), PageDefrag.

Running PageDefrag brings up a list of page files, event log files, registry files along with how many clusters and fragments make up those files.

Page Defrag

This utility allows you to specify which files to defrag and either defragment them on the next reboot, or have them defragmented at every boot.  As you can see in the screenshot, there was only one fragmented file, so the need for this tool is not great at the moment.  But it is good to have it there when I need it.

With these tools in hand, you are ready to be a defragmenting ninja.

comments edit

TimeZones

Right now, there is no easy way to convert a time from one arbitrary timezone to another arbitrary timezone in .NET.  Certainly you can convert from UTC to the local system time, or from the local system time to UTC. But how do you convert from PST to EST?

Well Scott Hanselman recently pointed me to some ingenious code in DasBlog originally written by Clemens Vasters that does this.  I recently submitted a patch to DasBlog so that this code properly handles daylight saving time, and I had planned to blog about it in more detail later.  Unfortunately, we recently found out that changes in Vista may break this particular approach.

It turns out that the Orcas release introduces a new TimeZone2 class.  This class will finally allow conversions between arbitrary timezones.
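I haven’t played with it yet, but based on the preview posts, a conversion should look something like this (a sketch; these names are pre-release and may well change):

// Pre-release API sketch -- names from the Orcas previews, subject to change.
TimeZone2 pacific = TimeZone2.FindSystemTimeZoneById("Pacific Standard Time");
TimeZone2 eastern = TimeZone2.FindSystemTimeZoneById("Eastern Standard Time");

DateTime pacificTime = new DateTime(2006, 10, 26, 9, 0, 0);
// Converts 9:00 AM Pacific to 12:00 PM Eastern, daylight savings included.
DateTime easternTime = TimeZone2.ConvertTime(pacificTime, pacific, eastern);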

Krzysztof Cwalina (who wins the award for Microsoft blogger with the highest consonant-to-vowel ratio in a first name) points out that many people are not thrilled with the “2” suffix and provides context on the naming choice.

Kathy Kam of the BCL team points out some other proposed names for the new TimeZone2 class and the problems with each.

I’m fine with TimeZone2 or TimeZoneRegion.


comments edit

Hello World

Jeff Atwood asks in a recent post whether writing your own blog software is a form of procrastination (no, blogging is).

I remember reading somewhere that someone equated rolling your own blog engine to the modern-day equivalent of the Hello World program.  I wish I could remember where I heard that so I could give proper credit. UPDATE: Kent Sharkey reminds me that I read it on his blog. It was a quote from Scott Wigart. Thanks for the memory refresh, Kent!

Obviously, as an Open Source project founder building a blog engine, I have a biased opinion on this topic (I can own up to that).  My feeling is that for most cases (not all) rolling your own blog engine is a waste of time given that there are several good open source blog engines such as Dasblog, SUB, and Subtext.

It isn’t so much that writing a rudimentary blog engine is hard.  It isn’t.  To get a basic blog engine up and running is quite easy.  The challenge lies in going beyond that basic engine.

The common complaint with these existing solutions (and motivation for rolling your own) is that they contain more features than a person needs.  Agreed.  There’s no way a blog engine designed for mass consumption is going to only have the features needed by any given individual.

However, there are a lot of features these blog engines support that you wouldn’t realize you want or need till you get your own engine up and running.  And in implementing these common features, a developer can spend a lot of time playing catch-up by reinventing the kitchen sink.  Who has that kind of time?

Why reinvent the sink, when the sink is there for the taking?

For example, let’s look at fighting comment spam.

Implementing comments on a blog is quite easy. But then you go live with your blog and suddenly you’re overwhelmed with insurance offers.  Implementing comments is easy; implementing them well takes more time.

If you are going to roll your own blog engine, at least “steal” the Subtext Akismet API library in our Subversion repository. Dasblog did.  However, even with that library, you still ought to build a UI for reporting false positives and false negatives back to Akismet, etc.  Again, not difficult, but it is time consuming and it has already been done before.

Some other features that modern blog engines provide that you might not have thought about (not all are supported by Subtext yet, but by at least one of the blogs I mentioned):

  • RFC3229 with Feeds
  • BlogML
    • So you can get your posts in there.
  • Email to Weblog
  • Gravatars
  • Multiple Blog Support (more useful than you think)
  • Timezone Handling (for servers in other timezones)
  • Windows Live Writer support
  • Metablog API
  • Trackbacks/Pingbacks
  • Search
  • Easy Installation and Upgrade
  • XHTML Compliance
  • Live Comment Preview

My point isn’t necessarily to dissuade developers from rolling their own blog engine.  It’s fun code to write, I admit.  My point is really this (actually two points):

1. If you plan to write your own blog engine, take a good hard look at the code for existing Open Source blog engines and ask yourself if your needs wouldn’t be better served by contributing to one of these projects.  They could use your help and it gets you a lot of features for free. Just don’t use the ones you don’t need.

Jerry Maguire

2. If you still want to write your own, at least take a look at the code contained in these projects and try to avail yourself of the gems contained therein.  It’ll help you keep your wheel reinventions to a minimum.

That’s all I’m trying to say.  Help us… help you.

comments edit

A friend of mine sent me an interesting report: Brad Greenspan, the founder of eUniverse (now Intermix Media), the company that created and owned MySpace.com, has issued an online report claiming that the sale of MySpace intentionally defrauded shareholders out of multiple billions of dollars because MySpace revenues were hidden from them.

Disclosure: Technically, I used to work for Intermix Media, as they owned my last employer, SkillJam, before SkillJam was sold to Fun Technologies.

The most surprising bit to me is that (according to the report)

Shareholders were not aware that Myspace’s revenue was growing at a 1200 percent annualized rate and increasing.

I wonder how much of this is true.  If it is true, what happens next?  Gotta love the smell of scandal in the morning. Wink

comments edit

This is a pretty sweet video demonstrating a system for sketching on a whiteboard using a mimio-like projection system.  The instructor draws objects, adds a gravity vector, and then animates his drawings to see the result.

Another interesting take on user interfaces for industrial design.

code comments edit

Keyvan Nayyeri has a great tip for how to control the display of a type in the various debugger windows using a DebuggerTypeProxy attribute.  His post includes screenshots with this in use.

This is an attribute you can apply to a class, assembly, or struct to specify another class to examine within a debugger window. 
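To illustrate, here’s a minimal sketch (the Customer and CustomerDebugView names are mine, not from Keyvan’s post):

using System.Diagnostics;

[DebuggerTypeProxy(typeof(CustomerDebugView))]
public class Customer
{
  public string FirstName;
  public string LastName;

  // The debugger windows display this proxy's public members
  // instead of the raw fields above.
  internal class CustomerDebugView
  {
    private Customer customer;

    public CustomerDebugView(Customer customer)
    {
      this.customer = customer;
    }

    public string DisplayName
    {
      get { return customer.LastName + ", " + customer.FirstName; }
    }
  }
}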

You still get access to the raw view of the original type, just in case some other developer plays a practical joke on you by specifying a DebuggerTypeProxy that displays the value of every field as being 42.

comments edit

Alex Papadimoulis, the man behind the ever entertaining (and depressing) TheDailyWTF, announces a new business venture to help connect employers offering good jobs with good developers interested in a career change.  At least that’s the hope.

The Hidden Network is the name of the venture and the plan is to pay bloggers $5.00 per thousand ad views (cha-ching!) for hosting pre-screened job postings in small ad blocks that look very similar to the standard Google AdSense ad.  In the interest of full disclosure, I will be taking part as a blogger hosting these job postings.

I’ve written before about how hiring is challenging and that blogs are a great means to connecting with and hiring good developers.  In fact, that’s how I recruited Jon Galloway to join my company, brought in Steve Harman (a Subtext Developer) as a contractor, and almost landed another well known blogger, until his own company grabbed his leg and dragged him kicking and screaming back into the fold giving him anything he wanted (I kid).

My hope is that somehow, my blog helps connect a good employer with a good developer. It’s worked for my company; it may work for yours.  Connecting good developers with good jobs is a win-win for all.  My fear is that the ads will turn into a bunch of phishing expeditions by headhunters looking to collect resumes.  It will be imperative that the Hidden Network work hard at filtering out all but the high quality job postings.

As Alex states in the comments of that post, blogs are the new trade publications, so paying a $5.00 CPM is quite sustainable and viable.  It will be interesting to see if his grand experiment works out.

comments edit

Great show on .NET Rocks today featuring Rob Conery, architect of the Commerce Starter Kit and SubSonic.

However, being the self-centered person that I am, the only thing I remember hearing is my last name being mispronounced.

For the record, my last name is “Haack” which is pronounced Hack, as in,

Yeah, I took a look at the new code Phil put into Subtext.  Talk about an ugly Haack!

On the show Rob pronounced it Hock which is only okay if you are British, which Rob is not.  I already gave Rob crap about it, so we’re cool. Wink

comments edit

UPDATE: I could not slip the subtle beg for an MSDN subscription I surreptitiously embedded in this post past my astute readers. Many thanks to James Avery for contributing an MSDN subscription to this grateful developer. Now that I have my MSDN subscription, I say this whole VPC licensing thing is a non-issue and quit whining about it. (I joke, I joke!).

In a recent post I declared that Virtual PC is a suitable answer to the lack of backwards compatibility support for Visual Studio.NET 2003.  In the comments to that post Ryan Smith asks a great question surrounding the licensing issues involved.

Is Microsoft going to let me use my OEM license key from an ancient machine so that I can run Windows XP in a virtual machine on Vista to test and debug in VS 2003?

I think as developers, we take for granted that we are going to have MSDN subscriptions (I used to but I don’t right now) and plenty of OS licenses for development purposes.  But suppose I sell my old machine and purchase a new machine with Vista installed.  How can I apply the suggested workaround of installing Virtual PC with Windows XP if I don’t have a license to XP?

Ryan wrote Microsoft with this question and received a response that indicated that Microsoft hasn’t quite figured this out. Does this mean that developers need to shell out another $189 or so in order to develop with Visual Studio.NET 2003 in a Virtual PC running Windows XP on Vista?

code, blogging comments edit

I recently wrote about a lightweight invisible CAPTCHA validator control I built as a defensive measure against comment spam.  I wanted the control to work in as many situations as possible, so it doesn’t rely on ViewState nor Session since some users of the control may want to turn those things off.

Of course this raises the question: how do I know the answer submitted in the form is the answer to the question I asked?  Remember, never trust your inputs; even form submissions can easily be tampered with.

Well, one way is to give the client the answer in some form that can’t be read and can’t be tampered with.  Encryption to the rescue!

Using a few new objects from the System.Security.Cryptography namespace in .NET 2.0, I quickly put together code that would encrypt the answer along with the current system time into a base 64 encoded string.  That string would then be placed in a hidden input field.

When the form is submitted, I made sure that the encrypted value contained the answer and that the date inside was not too old, thus defeating replay attacks.

The first change was to initialize the encryption algorithm via a static constructor.

The code can be hard to read in a browser, so I did include the source code in the download link at the end of this post.

static SymmetricAlgorithm encryptionAlgorithm 
    = InitializeEncryptionAlgorithm();

static SymmetricAlgorithm InitializeEncryptionAlgorithm()
{
  SymmetricAlgorithm rijaendel = RijndaelManaged.Create();
  rijaendel.GenerateKey();
  rijaendel.GenerateIV();
  return rijaendel;
}

With that in place, I added a couple static methods to the control.

// The counterpart to EncryptString below, used later in EvaluateIsValid.
public static string DecryptString(string encryptedText)
{
  byte[] encryptedBytes = Convert.FromBase64String(encryptedText);
  byte[] decrypted = encryptionAlgorithm.CreateDecryptor()
    .TransformFinalBlock(encryptedBytes, 0, encryptedBytes.Length);
  return Encoding.UTF8.GetString(decrypted);
}

public static string EncryptString(string clearText)
{
  byte[] clearTextBytes = Encoding.UTF8.GetBytes(clearText);
  byte[] encrypted = encryptionAlgorithm.CreateEncryptor()
    .TransformFinalBlock(clearTextBytes, 0
    , clearTextBytes.Length);
  return Convert.ToBase64String(encrypted);
}

In the PreRender method I simply took the answer, appended the date using a pipe character as a separator, encrypted the whole stew, and then slapped it in a hidden form field.

//Inside of OnPreRender
Page.ClientScript.RegisterHiddenField
    (this.HiddenEncryptedAnswerFieldName
    , EncryptAnswer(answer));

string EncryptAnswer(string answer)
{
  return EncryptString(answer 
    + "|" 
    + DateTime.Now.ToString("yyyy/MM/dd HH:mm"));
}

Now with all that in place, when the user submits the form, I can determine if the answer is valid by grabbing the value from the form field, calling decrypt on it, splitting it using the pipe character as a delimiter, and examining the result.

protected override bool EvaluateIsValid()
{
  string answer = GetClientSpecifiedAnswer();
    
  string encryptedAnswerFromForm = 
    Page.Request.Form[this.HiddenEncryptedAnswerFieldName];
    
  if(String.IsNullOrEmpty(encryptedAnswerFromForm))
    return false;
    
  string decryptedAnswer = DecryptString(encryptedAnswerFromForm);
    
  string[] answerParts = decryptedAnswer.Split('|');
  if(answerParts.Length < 2)
    return false;
    
  string expectedAnswer = answerParts[0];
  DateTime date = DateTime.ParseExact(answerParts[1]
    , "yyyy/MM/dd HH:mm", CultureInfo.InvariantCulture);
  if ((DateTime.Now - date).TotalMinutes > 30)
  {
    this.ErrorMessage = "Sorry, but this form has expired. " +
      "Please submit again.";
    return false;
  }

  return !String.IsNullOrEmpty(answer) 
    && answer == expectedAnswer;
}

// Gets the answer from the client, whether entered by 
// javascript or by the user.
private string GetClientSpecifiedAnswer()
{
  string answer = Page.Request.Form[this.HiddenAnswerFieldName];
  if(String.IsNullOrEmpty(answer))
    answer = Page.Request.Form[this.VisibleAnswerFieldName];
  return answer;
}

This technique could work particularly well for a visible CAPTCHA control too. The request for a CAPTCHA image is an asynchronous request, and the code that renders that image has to know which CAPTCHA image to render. Implementations I’ve seen simply store an image in the cache using a GUID as a key when rendering the control. Thus when the asynchronous request for the CAPTCHA image arrives, the CAPTCHA image rendering HttpHandler looks up the image using the GUID and renders that baby out.

Using encryption, the URL for the CAPTCHA image could embed the answer (aka the word to render).
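Something along these lines (a sketch; the handler name and query string parameter are made up):

// Hypothetical handler and parameter names. The encrypted answer rides
// along in the URL rather than sitting in the cache under a GUID.
// HttpUtility lives in System.Web.
string word = "orange"; // the word the CAPTCHA image will render
string captchaUrl = "CaptchaImage.ashx?spec="
  + HttpUtility.UrlEncode(EncryptString(word));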

If you are interested, you can download an updated binary and source code for the Invisible CAPTCHA control which now includes the symmetric encryption from here.