comments edit

Alex Papadimoulis, the man behind the ever-entertaining (and depressing) TheDailyWTF, announces a new business venture to help connect employers fielding good jobs with good developers interested in a career change.  At least that’s the hope.

The Hidden Network is the name of the venture and the plan is to pay bloggers $5.00 per thousand ad views (cha-ching!) for hosting pre-screened job postings in small ad blocks that look very similar to the standard Google AdSense ad.  In the interest of full disclosure, I will be taking part as a blogger hosting these job postings.

I’ve written before about how hiring is challenging and that blogs are a great means of connecting with and hiring good developers.  In fact, that’s how I recruited Jon Galloway to join my company, brought in Steve Harman (a Subtext developer) as a contractor, and almost landed another well known blogger, until his own company grabbed his leg and dragged him kicking and screaming back into the fold, giving him anything he wanted (I kid).

My hope is that somehow, my blog helps connect a good employer with a good developer. It’s worked for my company; it may work for yours.  Connecting good developers with good jobs is a win-win for everyone.  My fear is that the ads will turn into a bunch of phishing expeditions by headhunters looking to collect resumes.  It will be imperative that the Hidden Network work hard at filtering out all but the high quality job postings.

As Alex states in the comments of that post, blogs are the new trade publications, so a $5.00 CPM is quite sustainable and viable.  It will be interesting to see if his grand experiment works out.

comments edit

Great show on .NET Rocks today featuring Rob Conery, architect of the Commerce Starter Kit and SubSonic.

However, being the self-centered person that I am, the only thing I remember hearing is my last name being mispronounced.

For the record, my last name is “Haack” which is pronounced Hack, as in,

Yeah, I took a look at the new code Phil put into Subtext.  Talk about an ugly Haack!

On the show Rob pronounced it Hock, which is only okay if you are British, which Rob is not.  I already gave Rob crap about it, so we’re cool. ;)

comments edit

UPDATE: I could not slip the subtle beg for an MSDN subscription I surreptitiously embedded in this post past my astute readers. Many thanks to James Avery for contributing an MSDN subscription to this grateful developer. Now that I have my MSDN subscription, I say this whole VPC licensing thing is a non-issue and quit whining about it. (I joke, I joke!).

In a recent post I declared that Virtual PC is a suitable answer to the lack of backwards compatibility support for Visual Studio.NET 2003.  In the comments to that post Ryan Smith asks a great question surrounding the licensing issues involved.

Is Microsoft going to let me use my OEM license key from an ancient machine so that I can run Windows XP in a virtual machine on Vista to test and debug in VS 2003?

I think as developers, we take for granted that we are going to have MSDN subscriptions (I used to but I don’t right now) and plenty of OS licenses for development purposes.  But suppose I sell my old machine and purchase a new machine with Vista installed.  How can I apply the suggested workaround of installing Virtual PC with Windows XP if I don’t have a license to XP?

Ryan wrote Microsoft with this question and received a response that indicated that Microsoft hasn’t quite figured this out. Does this mean that developers need to shell out another $189 or so in order to develop with Visual Studio.NET 2003 in a Virtual PC running Windows XP on Vista?

code, blogging comments edit

I recently wrote about a lightweight invisible CAPTCHA validator control I built as a defensive measure against comment spam.  I wanted the control to work in as many situations as possible, so it relies on neither ViewState nor Session, since some users of the control may want to turn those off.

Of course, this raises a question: how do I know the answer submitted in the form is the answer to the question I asked?  Remember, never trust your inputs; even form submissions can easily be tampered with.

Well, one way is to give the client the answer in a form that can’t be read or tampered with.  Encryption to the rescue!

Using a few new objects from the System.Security.Cryptography namespace in .NET 2.0, I quickly put together code that encrypts the answer along with the current system time into a base64-encoded string.  That string is then placed in a hidden input field.

When the form is submitted, I made sure that the encrypted value contained the answer and that the date inside was not too old, thus defeating replay attacks.

The first change was to initialize the encryption algorithm via a static constructor.

The code can be hard to read in a browser, so I did include the source code in the download link at the end of this post.

static SymmetricAlgorithm encryptionAlgorithm 
    = InitializeEncryptionAlgorithm();

static SymmetricAlgorithm InitializeEncryptionAlgorithm()
{
  SymmetricAlgorithm rijndael = RijndaelManaged.Create();
  rijndael.GenerateKey();
  rijndael.GenerateIV();
  return rijndael;
}

With that in place, I added a couple static methods to the control.

public static string EncryptString(string clearText)
{
  byte[] clearTextBytes = Encoding.UTF8.GetBytes(clearText);
  byte[] encrypted = encryptionAlgorithm.CreateEncryptor()
    .TransformFinalBlock(clearTextBytes, 0
    , clearTextBytes.Length);
  return Convert.ToBase64String(encrypted);
}

In the OnPreRender method I simply took the answer, appended the date using a pipe character as a separator, encrypted the whole stew, and then slapped it into a hidden form field.

//Inside of OnPreRender
Page.ClientScript.RegisterHiddenField
    (this.HiddenEncryptedAnswerFieldName
    , EncryptAnswer(answer));

string EncryptAnswer(string answer)
{
  return EncryptString(answer 
    + "|" 
    + DateTime.Now.ToString("yyyy/MM/dd HH:mm"));
}

Now with all that in place, when the user submits the form, I can determine if the answer is valid by grabbing the value from the form field, calling decrypt on it, splitting it using the pipe character as a delimiter, and examining the result.

protected override bool EvaluateIsValid()
{
  string answer = GetClientSpecifiedAnswer();
    
  string encryptedAnswerFromForm = 
    Page.Request.Form[this.HiddenEncryptedAnswerFieldName];
    
  if(String.IsNullOrEmpty(encryptedAnswerFromForm))
    return false;
    
  string decryptedAnswer = DecryptString(encryptedAnswerFromForm);
    
  string[] answerParts = decryptedAnswer.Split('|');
  if(answerParts.Length < 2)
    return false;
    
  string expectedAnswer = answerParts[0];
  DateTime date = DateTime.ParseExact(answerParts[1]
    , "yyyy/MM/dd HH:mm", CultureInfo.InvariantCulture);
  //Use TotalMinutes; Minutes only returns the minutes component.
  if ((DateTime.Now - date).TotalMinutes > 30)
  {
    this.ErrorMessage = 
      "Sorry, but this form has expired. Please submit again.";
    return false;
  }

  return !String.IsNullOrEmpty(answer) 
    && answer == expectedAnswer;
}

// Gets the answer from the client, whether entered by 
// javascript or by the user.
private string GetClientSpecifiedAnswer()
{
  string answer = Page.Request.Form[this.HiddenAnswerFieldName];
  if(String.IsNullOrEmpty(answer))
    answer = Page.Request.Form[this.VisibleAnswerFieldName];
  return answer;
}

This technique could work particularly well for a visible CAPTCHA control as well. The request for a CAPTCHA image is an asynchronous request, and the code that renders that image has to know which CAPTCHA image to render. Implementations I’ve seen simply store an image in the cache using a GUID as a key when rendering the control. Thus when the asynchronous request to grab the CAPTCHA image arrives, the CAPTCHA image rendering HttpHandler looks up the image using the GUID and renders that baby out.

Using encryption, the URL for the CAPTCHA image could embed the answer (aka the word to render).
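
A hypothetical sketch of what that could look like (the handler name and query string key are made up; EncryptString and DecryptString are the methods shown earlier):

//When rendering the control, embed the encrypted word in the image URL.
string imageUrl = "CaptchaImage.ashx?spec=" 
  + HttpUtility.UrlEncode(EncryptString(captchaWord));

//Inside the hypothetical image-rendering HttpHandler, recover the
//word and render it. No cache lookup or GUID bookkeeping required.
string word = DecryptString(context.Request.QueryString["spec"]);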

If you are interested, you can download an updated binary and source code for the Invisible CAPTCHA control which now includes the symmetric encryption from here.

comments edit

UPDATE: I think a good measure of a blog is the intelligence and quality of the comments. The comments in response to this post make my blog look good (not all do).

As several commenters pointed out, the function returns a local DateTime adjusted from the specified UTC date. By calling ToUniversalTime() on the result, I get the behavior I am looking for. That’s why I ask you smart people before making an ass of myself on the bug report site.

Before I post this as a bug, can anyone tell me why this test fails when I think it should pass?

[Test]
public void ParseUsingAssumingUniversalReturnsDateTimeKindUtc()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture, 
    DateTimeStyles.AssumeUniversal);
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind, 
    "Expected AssumeUniversal would return a UTC date.");
}

What is going on here is I am calling the method DateTime.Parse, passing in DateTimeStyles.AssumeUniversal as an argument. My understanding is that it should indicate to the Parse method that the passed-in string denotes a Coordinated Universal Time (aka UTC).

But when I check the Kind property of the resulting DateTime instance, it returns DateTimeKind.Local rather than DateTimeKind.Utc.

The unit test demonstrates what I think should happen. Either this really is a bug, or I am wrong in my assumptions, in which case I would like to know, how are you supposed to parse a string representing a date/time in the UTC timezone?
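
Per the update above, the fix the commenters suggested is to convert the parse result yourself. A sketch of the passing version of the test:

[Test]
public void ParseAssumingUniversalThenConvertingReturnsDateTimeKindUtc()
{
  IFormatProvider culture = new CultureInfo("en-US", true);
  DateTime utcDate = DateTime.Parse("10/01/2006 19:30", culture, 
    DateTimeStyles.AssumeUniversal).ToUniversalTime();
  Assert.AreEqual(DateTimeKind.Utc, utcDate.Kind, 
    "Expected converting the result to universal time yields UTC.");
}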

comments edit

Atlas With The Weight Of The Codebase

I read this article recently that describes the mind-frying complexity of the Windows development process.  With Vista sporting around 50 million lines of code, it’s no wonder Vista suffers from delays.  Quick, what does line #37,920,117 say?

Microsoft has acknowledged the need to release more often (as in sometime this millennium), but that agility is difficult to achieve with the current codebase due to its immense complexity, as well as Microsoft’s (stubbornly?) heroic efforts to maintain backward compatibility.  The author of the article labels this the Curse of Backward Compatibility.

I don’t think anyone doubts that maintaining backwards compatibility can be a Herculean effort, because it goes beyond supporting legacy specifications (which is challenging enough).  Just look at how Microsoft supports old code that broke the rules.  Additionally, old code poses security threats, which requires even more code to patch.  Ideally, a lot of that code would be removed outright, but it is challenging to remove or rewrite any of it for fear of breaking too many applications.

Of course, there are very good business reasons for Microsoft to maintain this religious adherence to backwards compatibility (starts with an m, ends with a y, and has one in the middle).  The primary one is that they have a huge user base compared to Apple, which does not give Microsoft the luxury of a “do over” as Apple has done with OSX.

A different article (same magazine) points to virtualization technology as the answer.  This article suggests a virtualization layer that is core to the operating system.  I think we are already seeing hints of this in play with Microsoft’s answer to developers angry that Vista is not going to support Visual Studio.NET 2003.

The big technical challenge is with enabling scenarios like advanced debugging. Debuggers are incredibly invasive in a process, and so changes in how an OS handles memory layout can have big impacts on it. Vista did a lot of work in this release to tighten security and lock down process/memory usage - which is what is affecting both the VS debugger, as well as every other debugger out there. Since the VS debugger is particularly rich (multi-language, managed/native interop, COM + Jscript integration, etc) - it will need additional work to fully support all scenarios on Vista. That is also the reason we are releasing a special servicing release after VS 2005 SP1 specific to Vista - to make sure everything (and especially debugging and profiling) work in all scenarios. It is actually several man-months of work (we’ve had a team working on this for quite awhile). Note that the .NET 1.1 (and ASP.NET 1.1) is fully supported at runtime on Vista. VS 2003 will mostly work on Vista. What we are saying, though, is that there will be some scenarios where VS 2003 doesn’t work (or work well) on Vista - hence the reason it isn’t a supported scenario. Instead, we recommend using a VPC/VM image for VS 2003 development to ensure 100% compat.

This answer did not satisfy everyone (which answer does?), many seeing it as a copout since it pretty much states: to maintain backward compatibility, use Virtual PC.

Keep in mind that this particular scenario is not going to affect the average user.  Instead, it affects developers, who are notorious for being early adopters and, one would think, would be more amenable to adopting virtualization as an answer, because hey! It’s cool new technology!

Personally, I am satisfied by this answer because I have no plans to upgrade to Vista any time soon (my very own copout).  Sure, it’s not the answer I would’ve hoped for if I were planning an impending upgrade.  But given a choice between a more secure Vista released sooner, or a delay of several months to make sure that developers with advanced debugging needs on VS.NET 2003 are happy, I’m going to have to say go ahead and break with backward compatibility.  But at the same time, push out the .NET 2.0 Framework as a required update to Windows XP.

With Windows XP, Microsoft finally released a consumer operating system that was good enough.  Many users will not need to upgrade to Vista for a looong time.  I think it is probably a good time to start looking at cleaning up and modularizing that 50 million line rambling historical record they call a codebase.

If my DOS app circa 1986 stops working on Vista, so be it.  If I’m still running DOS apps, am I really upgrading to Vista?  Using a virtual operating system may not be the best answer we could hope for, but I think it’s good enough and should hopefully free Microsoft up to really take Windows to the next level.  It may cause some difficulties, but there’s no easy path to paying off the immense design debt that Microsoft has accrued with Windows.

comments edit

A few days back Jon Galloway and I were discussing a task he was working on to document a database for a client.  He had planned to use some code generation to initially populate a spreadsheet and would fill in the details by hand.  I suggested he store the data with the schema using SQL extended properties.

We looked around and found some stored procs for pulling properties out, but no useful applications for putting them in there in a nice, quick, and easy manner.

A few days later, the freaking guy releases this Database Dictionary Creator, a nice GUI tool to document your database, storing the documentation as part of your database schema.

Database Dictionary Entry Form

The tool allows you to add your own custom properties to track, which then get displayed in the data dictionary form grid as seen above. Audit and Source are custom properties. It is a way to tag your database schema.

You ask the guy to build a house with playing cards and he comes back with the Taj Mahal.

Check it out.

comments edit

As developers, I think we tend to take the definition of Version for granted.  What are the components of a version?  Well that’s easy, it is:

Major.Minor.Build.Revision

Where the Build and Revision numbers are optional.  At least that is the definition given by the MSDN documentation for the Version class.
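
A quick sketch illustrating that ordering with the Version class (the values are just an example):

Version version = new Version("1.2.3.4");
Console.WriteLine(version.Major);    // 1
Console.WriteLine(version.Minor);    // 2
Console.WriteLine(version.Build);    // 3 - the third component
Console.WriteLine(version.Revision); // 4 - the fourth component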

But look up Version in Wikipedia and you get a different answer.

The most common software versioning scheme is a scheme in which different major releases of the software each receive a unique numerical identifier. This is typically expressed as three numbers, separated by periods, such as version 2.4.13. One very commonly followed structure for these numbers is:

major.minor[.revision[.build]]

or

major.minor[.maintenance[.build]]

Notice that this scheme differs from the Microsoft scheme in that it places the build number at the very end, rather than the revision number.

Other versioning schemes, such as those of the Unicode Standard and Solaris/Linux, figure that three components are enough for a version: Major, Minor, and Update (for the Unicode Standard) or Micro (for Solaris/Linux).

According to the MSDN documentation, the build number represents a recompilation of the same source, so it seems to me that it belongs at the end of the version, as it is the least significant element.

In Subtext, we roughly view the version as follows, though it is not set in stone:

  • Major: Major update.  If a library assembly, probably not backwards compatible with older clients.  Most likely will include database schema changes and interface changes.
  • Minor: Minor change; may introduce new features, but backwards compatibility is mostly retained.  Likely will include schema changes.
  • Revision: Minor bug fixes, no significant new features implemented, though a few small improvements may be included.  May include a schema change.
  • Build: A recompilation of the code in progress towards a revision.  No schema changes.

Internally, we may have schema changes between build increments, but when we are prepared to release, a schema change between releases would require a revision (or higher) increment.

I know some developers like to embed the date and a counter in the build number.  For example, 20060927002 would represent compilation #2 on September 27, 2006.
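
A sketch of generating such a build number (the helper name is hypothetical, and the three-digit counter is a per-day sequence you would have to track yourself):

//Hypothetical helper combining the date with a per-day compile counter.
static string GetDateBasedBuildNumber(DateTime date, int compileOfTheDay)
{
  //For example, (2006-09-27, 2) yields "20060927002".
  return date.ToString("yyyyMMdd") + compileOfTheDay.ToString("000");
}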

What versioning schemes are you fans of and why?

comments edit

When Log4Net doesn’t work, it can be a very frustrating experience.  Unlike your typical application library, log4net doesn’t throw exceptions when it fails.  Well that is to be expected and makes a lot of sense since it is a logging library.  I wouldn’t want my application to fail because it had trouble logging a message.

Unfortunately, the downside of this is that problems with log4net aren’t immediately apparent.  99.9% of the time, when Log4Net doesn’t work, it is a configuration issue.  Here are a couple of troubleshooting tips that have helped me out.

Enable Internal Debugging

This tip is straight from the Log4Net FAQ, but not everyone notices it. To enable internal debugging, add the following app setting to your App.config (or Web.config for web applications) file.

<appSettings>
  <add key="log4net.Internal.Debug" value="true"/>
</appSettings>

This will write internal log4net messages to the console as well as the System.Diagnostics.Trace system.  You can easily output the log4net internal debug messages by adding a trace listener.  The following snippet is taken from the log4net FAQ and goes in your <configuration> section of your application config file.

<system.diagnostics>
  <trace autoflush="true">
    <listeners>
      <add 
        name="textWriterTraceListener" 
        type="System.Diagnostics.TextWriterTraceListener" 
        initializeData="C:\tmp\log4net.txt" />
    </listeners>
  </trace>
</system.diagnostics>

Passing Nulls For Value Types Into AdoNetAppender

Another common problem I’ve dealt with is logging using the AdoNetAppender. In particular, attempting to log a null value into an int parameter (or other value type), assuming your stored procedure allows null for that parameter.

The key here is to use the RawPropertyLayout for that parameter. Here is a snippet from a log4net.config file that does this.

<parameter>
  <parameterName value="@BlogId" />
  <dbType value="Int32" />
  <layout type="log4net.Layout.RawPropertyLayout">
    <key value="BlogId" />
  </layout>
</parameter>
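
For that parameter to receive a value, something has to put a BlogId property into the logging context before the log call. A sketch of one way to do that, assuming the usual ILog instance named log:

//Set the property in the logging context; the RawPropertyLayout above
//picks it up by its key when the appender fires. A null value here is
//exactly the case the raw layout handles gracefully.
log4net.ThreadContext.Properties["BlogId"] = blogId;
log.Info("Received a comment.");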

Hopefully this helps you with your log4net issues.

tags: Log4Net

comments edit

Duncan Mackenzie writes about the issue of Categories vs Tags in blogs and blog editors.  I tried to comment there with my thoughts, but received some weird javascript errors.

I’ve thought a lot about the same issues with Subtext. Originally my plan was to simply repurpose the existing category functionality by slapping a big tag sticker on its forehead, and from henceforth, a category was really a tag.  One big rename and bam!, I’m done.

But the API issue Duncan describes is a problem.  After more thinking about it, I now plan to make tags a first class citizen alongside categories.  In my mind, they serve different purposes.

I see categories as a structural element and navigational aid.  It is a way to group posts into large high-level groupings.  Use sparingly.

By contrast, I see tags as meta-data, use liberally.

One thought around the API issue is that there is a microformat for specifying tags (rel=”tag”) and Windows Live Writer has plugins for inserting tags into the body of a post. 

My current thinking is to pursue parsing tags from posted content and using that to tag content.

tags: Rel-Tag, Microformat, Categories, Tags

personal, asp.net comments edit

UPDATE: This code is now hosted in the Subkismet project on CodePlex.

(Image source: http://www.dpchallenge.com/image.php?IMAGE_ID=138743)

Not too long ago I wrote about using heuristics to fight comment spam.  A little later I pointed to the NoBot control as an independent implementation of the ideas I mentioned using Atlas.

I think that control is a great start, but it does suffer from a few minor issues that prevent me from using it immediately.

  1. It requires Atlas and Atlas is pretty heavyweight.
  2. Atlas is pre-release right now.
  3. We’re waiting on a bug fix in Atlas to be implemented.
  4. It is not accessible, as it doesn’t work if javascript is disabled.

Let me elaborate on the first point.  In order to get the NoBot control working, a developer needs to add a reference to two separate assemblies, Atlas and the Atlas Control Toolkit, as well as make a few changes to Web.config.  Some developers will simply want a control they can drop in their project and start using right away.

I wanted a control that meets the following requirements.

  1. Easy to use. Only one assembly to reference.
  2. Is invisible.
  3. Works when javascript is disabled.

The result is the InvisibleCaptcha control, which is a validation control (it inherits from BaseValidator), so it can be used just like any other validator, except that this validator is invisible and should not have the ControlToValidate property set.  The way it works is that it renders some javascript to perform a really simple calculation and write the answer into a hidden text field.

What!  Javascript?  What about accessibility!? Calm down now, I’ll get to that.

When the user submits the form, we take the submitted value from the hidden form field, combine it with a secret salt value, and then hash the whole thing together.  We then compare this value with the hash of the expected answer, which is stored base64-encoded in a hidden form field.
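
A sketch of that comparison (the salt value and names here are hypothetical; the real ones live in the control’s source):

//Hash the answer combined with a secret salt.
static string HashAnswer(string answer)
{
  const string salt = "some-secret-salt"; //hypothetical value
  byte[] bytes = Encoding.UTF8.GetBytes(answer + salt);
  byte[] hash = SHA256.Create().ComputeHash(bytes);
  return Convert.ToBase64String(hash);
}

//On submit, compare the hash of the client-supplied value against
//the expected hash stored in the hidden field.
bool isValid = HashAnswer(submittedAnswer) == expectedHashBase64;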

The whole idea is that most comment bots currently don’t have the ability to evaluate javascript and thus will not be able to submit the form correctly.  Users with javascript enabled browsers have nothing to worry about.

So what happens if javascript is disabled?

If javascript is disabled, then we render out the question as text alongside a visible text field, thus giving users reading your site via non-javascript browsers (think Lynx or those text-to-speech browsers for the blind) a chance to comment.

Accessible version of the Invisible CAPTCHA control

This should be sufficient to block a lot of comment spam.

Quick Aside: As Atwood tells me, the idea that CAPTCHA has to be really strong is a big fallacy.  His blog simply asks you to type in orange every time and it blocks 99.9% of his comment spam.

I agree with Jeff on this point when it comes to websites and blogs with small audiences. Websites and blogs tend to implement different CAPTCHA systems from one to another, and beating each one brings diminishing returns.

However, for a site with a huge audience like Yahoo! or Hotmail, I think strong CAPTCHA is absolutely necessary as it is a central place for spammers to target.  (By the way, remind me to write a bot to post comment spam on Jeff’s blog)

If you do not care for accessibility, you can turn off the rendered form so that only javascript enabled browsers can post comments by setting the Accessible property to false.

I developed this control as part of the Subtext.Web.Control.dll assembly which is part of the Subtext project, thus you can grab this assembly from our Subversion repository.

To make things easier, I am also providing a link to a zip file that contains the assembly as well as the source code for the control. You can choose to either reference the assembly in order to get started right away, or choose to add the source code file and the javascript file (make sure to mark it as an embedded resource) to your own project.

Please note that if you add this control to your own assembly, you will need to add the following assembly-level WebResource attribute in order to get the web resource handler working.

[assembly: WebResource("YourNameSpace.InvisibleCaptcha.js", 
    "text/javascript")]

You will also need to find the call to Page.ClientScript.GetWebResourceUrl inside InvisibleCaptcha.cs and change it to match the namespace specified in the WebResource attribute.

If you look at the code, you’ll notice I make use of several hidden input fields. I didn’t use ViewState for values the control absolutely needs to work because Subtext disables ViewState.  Likewise, I could have chosen to use ControlState, but that can also be disabled.  I took the most defensive route.

[Download InvisibleCaptcha here].

tags: CAPTCHA, Comment Spam, ASP.NET, Validator

comments edit

Akismet is all the rage among the kids these days for blocking comment spam.  Started by Wordpress founder Matt Mullenweg, Akismet is a RESTful web service used to filter comment spam.  Simply submit a comment to the service and it will give you a thumbs up or thumbs down on whether it thinks the comment is spam.

In order to use Akismet you need to sign up for a free non-commercial API key with WordPress and hope that your blog engine supports the Akismet API.
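
The wire protocol is simple enough to sketch by hand. A minimal comment-check call might look something like this (the parameter names follow the Akismet API documentation; error handling omitted):

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Web;

public static class AkismetSketch
{
  //Returns true if Akismet thinks the comment is spam.
  public static bool IsSpam(string apiKey, string blogUrl, 
    string userIp, string userAgent, string commentContent)
  {
    string url = "http://" + apiKey + ".rest.akismet.com/1.1/comment-check";
    string postData = "blog=" + HttpUtility.UrlEncode(blogUrl)
      + "&user_ip=" + HttpUtility.UrlEncode(userIp)
      + "&user_agent=" + HttpUtility.UrlEncode(userAgent)
      + "&comment_content=" + HttpUtility.UrlEncode(commentContent);

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";

    byte[] bytes = Encoding.UTF8.GetBytes(postData);
    using (Stream requestStream = request.GetRequestStream())
    {
      requestStream.Write(bytes, 0, bytes.Length);
    }

    using (StreamReader reader = new StreamReader(
      request.GetResponse().GetResponseStream()))
    {
      //The service responds with the literal string "true" or "false".
      return reader.ReadToEnd().Trim() == "true";
    }
  }
}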

There are already two Akismet API implementations for ASP.NET, but they are both licensed under the GPL which I won’t allow near Subtext (for more on open source licenses, see my series on the topic).

So I recently implemented an API for Akismet in C# as part of the Subtext project to share with the DasBlog folks (despite the bitter public mudslinging between blog engines, there is nothing but hugs behind the scenes), thus it is BSD licensed.

You can download the assembly and source code and take a look.  It is also in the Subtext Subversion repository.

comments edit

There’s nothing worse than waking up on game day and realizing you forgot to wash your soccer jersey from last game.

*SNIFF* BLEH!

Thank god for Febreeze!

UPDATE: The Febreeze worked! We won 8 to 1!

comments edit

I saw this story on the debugging section of Anecdota and thought it was funny, though I find it hard to believe.

Laptop warmer

In 1998, I made a C++ program to calculate pi to a billion digits. I coded it on my laptop (Pentium 2 I think) and then ran the program. The next day I got a new laptop but decided to keep the program running. It’s been over seven years now since I ran it, and this morning it finished calculating. The output:

“THE VALUE OF PI TO THE BILLIONTH DIGIT IS = ”

Mindblowing eh? I looked in the code of my program, and I found out that I forgot to output the value.

You would think he’d do a test run for smaller digits of PI, but I’ve done things like that.  You make a small test run. It works. You make a tiny tweak that shouldn’t affect anything and then start it running because you’re in a hurry.  Seven years later…

Of course, most (if not all) algorithms for calculating PI aren’t all or nothing.  Usually they start calculating digits immediately, so there ought to be immediate output as you calculate PI to further and further digits, unless this person decided to store all billion digits in a string before displaying it.

tags: C++, Bugs, PI

comments edit

Conceptus, a client of my company, recently launched not one, but two blogs using Subtext.

I emphasize two because I only really knew that their CEO wanted to start a blog.  Of course, once you have Subtext set up, it’s quite easy to start another blog.

This is our first (of hopefully many) commercial implementations of Subtext.  The best thing about this particular project was that our client was very kind in contributing some of the customization work we did back to the Subtext project.

For me, I loved that this project combined my passion for Subtext with my passion for feeding my family.

DISCLAIMER: I am not a medical professional so my brief description of the product is not medical advice. This is merely information I gleaned off their product website.  For medical advice, consult your doctor.

To give you more background, the client is named Conceptus and they’ve developed a non-surgical permanent birth control device and procedure that takes around 35 minutes (not including doctor waiting room time and a typical post procedure wait of 45 minutes).  Their procedure beats the pants off the typical alternative, tubal ligation (getting the tubes tied).

We worked with this client before under the direction of Shepard Associates to develop Conceptus’s consumer focused site and their doctor focused site, both built on top of DotNetNuke.

blogging comments edit

Way back when I announced the first Roadmap for Subtext, I stated that Subtext would remove the multiple blogs feature and only support a single blog.  Fortunately I was persuaded by many commenters to abandon that change and continue to support multiple blogs.  Instead, I set out to simplify the process of configuring multiple blogs.

Now I am really glad that I did so.  I currently have three blogs running off of a single installation of Subtext.

  1. This one https://haacked.com/
  2. My non-techie blog http://phil.haacked.com/
  3. My soccer team http://westsiderovers.com/

The benefit of this approach is that setting up a new blog is very easy. Rather than dealing with the rigmarole of setting up another IIS site and database, I can simply add a new DNS entry and point it to my existing IP address, add a host header in IIS, and then create the blog in my Host Admin.

Three easy steps to a new blog.  I better be careful or I may get too crazy with this.  A blog for every day of the week, anyone?  You know, to color coordinate with my outfit.

comments edit

UPDATE: In one comment, szeryf points out something I didn’t know that invalidates the need for the tool I wrote. This is why I post this stuff, so someone can tell me about a better way! Thanks szeryf! I’ve updated the post to point out the better technique.

Based on my recent spate of posts, you can probably guess that I am working on improving a particular build process. 

In this situation, I have a pre-build step to concatenate a bunch of files into a single file.  I tried to do this with a simple command like so:

FOR %%A in (*.sql) do CALL COPY /Y Output.sql + %%A Output.sql

Yeah, that would work, but it is so sloooooow.

Szeryf points out that I can simply pass *.sql to the COPY command and get the same result.

copy *.sql output.sql

This ends up running plenty fast as it doesn’t dumbly iterate over every file calling COPY once per file. Instead it lets COPY handle that internally and more efficiently. How did I not know about this?

So I had written a one minute app by simply scavenging the code from BatchEncode and making it concatenate text files instead.

USAGE: batchconcat SOURCEDIR EXTENSION OUTPUT [ENCODING]
     sourcedir: source directory path
     extension: examples... .sql, .txt
     output:    the resulting file.
     encoding:  optional: utf7, utf8, unicode, 
                bigendianunicode, ascii

     All paths must be fully qualified.
     USE AT YOUR OWN RISK! NO WARRANTY IS GIVEN NOR IMPLIED

This ended up being mighty fast!
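
For the curious, the heart of such a tool is only a few lines. A rough sketch (the argument handling and names are my reconstruction, not the actual BatchEncode-derived code):

using System;
using System.IO;
using System.Text;

class BatchConcat
{
  static void Main(string[] args)
  {
    string sourceDir = args[0];  //e.g. C:\scripts
    string extension = args[1];  //e.g. .sql
    string outputPath = args[2]; //e.g. C:\build\output.sql
    Encoding encoding = GetEncoding(args.Length > 3 ? args[3] : "utf8");

    using (StreamWriter writer 
      = new StreamWriter(outputPath, false, encoding))
    {
      //Append every matching file to the single output file.
      foreach (string file in Directory.GetFiles(sourceDir, "*" + extension))
      {
        writer.Write(File.ReadAllText(file));
        writer.WriteLine();
      }
    }
  }

  static Encoding GetEncoding(string name)
  {
    switch (name)
    {
      case "utf7": return Encoding.UTF7;
      case "unicode": return Encoding.Unicode;
      case "bigendianunicode": return Encoding.BigEndianUnicode;
      case "ascii": return Encoding.ASCII;
      default: return Encoding.UTF8;
    }
  }
}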

I figure someone out there might need to do this exact same thing in their build process and won’t mind using such crappy code.

Technorati Tags: Tips, TDD, Utilities

community comments edit

I am going to pull a Nostradamus here and predict that the Mix series of conferences (did I ever tell you how much I loved Mix06?) will end with Mix09, if they even reach that point.  Why do I make this prediction?  Because Microsoft has Mix07.com, Mix08.com, and Mix09.com registered, but not Mix10.com (although mix010.com is available at the moment).  ;)

I know, I’m being rather silly.  This evening I was on a Skype chat with Atwood and Galloway, asking when the next big conference is that we should all attend.  Atwood’s reply was Mix07, which is a bit of a long wait, but OK.  That got me wondering if they plan on more Mix conferences afterwards.

If they do, looks like Microsoft has it covered for the next three years.

open source comments edit

(Image from Wikipedia: http://en.wikipedia.org/wiki/Medici)

You’re a struggling young 15th century composer (because who lived to be old back then?) in Europe, trying to make ends meet while advancing the state-of-the-art in counterpoint.

Perhaps you’re also dabbling in tonality as the next big thing in music, going so far as to call it Music 2.0 because you know that people are tired of that cookie-cutter modal music everyone else is producing.

What do you do to scrape by?

Well if you were lucky, you’d catch the attention of the Medici family.  Fuggedabout the Sopranos, if you want a lesson in powerful Italian families, the Medicis make episodes of Cribs look like a documentary on poverty.  With immense wealth, power, and influence, they had a big hand in jump starting the Italian Renaissance.

Many of the great artists of the time, such as Michelangelo and Donatello (thus paving the way for the Teenage Mutant Ninja Turtles), honed their crafts under the patronage of the Medicis.  The Medicis were also patrons of the sciences, funding Galileo in much of his work.

I started thinking about all this when I read John Lam’s post, Open Source, The Microsoft Community and Funding.  Like the struggling 15th century composer, John is working on something that has a very small niche audience at the time.  However, it also has the potential to be the next big thing in .NET development, who knows?

A project like this is not necessarily something VCs line up to throw money at, because its commercial viability may lie far in the future or because it is ahead of its time and not well understood.  This is perhaps why John mentions in his comments that he is looking for a patron.

Yes, I am looking for a patron, and hopefully something will come out of my meetings here this week.

Another commenter then asks the question…

Doesn’t expecting to be paid for OSS work also belong in the “sense of entitlement” box? That someone chooses to develop and publish software in their spare time is the same as me choosing to go climbing in my spare time, and I doubt anyone will pay me to do that.

I wonder.  Did Galileo feel a sense of entitlement every time he had a bowl of pasta paid for by his patron while working on his equations?  What about Michelangelo?  Perhaps they would have if they were the only ones to enjoy their own work.  In their cases, their work was shared for many to benefit, unlike the rock climber.

In some respects, I see parallels with open source software in the recent direction of the music industry.  Many music critics feel the music industry is stuck in a rut with cookie-cutter music artists who all sound the same dominating the air waves.  The cost to produce a hit is so large, that the studios are unwilling to take gambles on something innovative (with notable exceptions of course!)

Not only that, the music industry is waging a losing battle against technology that makes it essentially free to copy and distribute its product.

Hmmm. What else is free to copy and distribute?  Oh, I know.  Open Source Software!  The key difference, obviously, is that OSS makes this distribution intentional, causing many to wonder whether these people are simply nuts (we are).  Free distribution is the whole point of OSS.

So what is in store for the music industry? Some have suggested that the music industry will die if it does not adapt.  One proposed means to fund musicians is to take a fresh look at the patronage system, refitted for the Internet Age.  MySpace comes to mind in that regard.  Perhaps some budding Medicis are online looking to start a new renaissance in music.

I’m not enough of a historian to understand the Medicis’ true motivations in funding art and science.  Did they do it out of pride in their city, to demonstrate to the world that Florence was the source of great art and science?  Was it pure showmanship?  Did someone lose a bet?  Or was there simply a desire to support the creation of beauty, whether it took the form of science or art?

Like I said, I have no idea, but I think the answer might shed light on whether the model of patronage would work today.

Recent discussions around who should contribute to Open Source projects tend to argue (myself included) that those who benefit from Open Source should consider contributing back to it.

Unfortunately, looking at it only that way frames OSS as a quid-pro-quo situation: you get what you give.  But many OSS project founders don’t see it that way.  I can’t speak for John, but I bet he gives a lot to his project without expecting an equivalent contribution from others.  What about the other side of the coin then?

Will we see the rise of the Medicis of Open Source Software, patrons with deep pockets who view interesting open source projects as a form of art or science worth supporting because they push their fields forward, whether or not it equivalently lines their own pockets with cash? Or should the only software produced be software that is commercially viable, much like music on the radio?

Some are calling upon Microsoft to take that role. If so, would that even be a good thing?  Quite possibly, if done well.

These are all questions I ask myself when I’m trying to procrastinate and start to get a bit too philosophical for my own good.  These are not intended to be leading questions trying to promote one view or another, but rather questions whose answers I am working through for myself.