comments edit

Many of you noticed that my blog was down. Thanks for the heads up. For some reason, it was pegging the CPU at 100% all of a sudden. Not sure why this was happening since nobody made any changes to the server. At least no changes they would fess up to ;).

I migrated the blog to a dedicated server courtesy of Rob Conery and still had the problem. Migrating allowed me to narrow down the problem without affecting anyone else’s site.

After much trial and error, I narrowed it down to the Recent Comments and Recent Posts controls in the footer of my skin. If you scroll down, you’ll see I’ve removed them.

So again, thanks to those who pointed it out, and sorry to anyone who was inconvenienced (like me!) while looking for information.

comments edit

UPDATE: We decided on the Three Lions Pub in Redmond at 7:30 PM

I’m going to be in Redmond next week and would love to get a geek dinner together around 7:30 PM. My flight gets in at 5:18 PM so I hope that’s doable. Anybody interested in joining us?

I think Scott Hanselman, Brad Wilson, Scott Koon, and Scott Densmore will grace us with their presence. If you’re interested in showing up, post a comment so we get a rough idea of numbers.

Topic for discussion: Forget Separation of Concerns, what does MS Dev Div have against Spaghetti Code? I like spaghetti!

Another potential topic of discussion: Render on POST - a fool’s HTTP response or a pragmatic approach? Choose a side and let the all-out bare knuckle bloody brawl begin.

Final topic: if you believe GET is the one true way to render and prefer redirect on POST, which one do you choose, 302 or 303?
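For anyone who wants to experiment before picking a side, here’s a rough sketch of redirect-on-POST in an ASP.NET code-behind. The handler and page names are hypothetical; the one real wrinkle is that Response.Redirect always issues a 302, so a 303 has to be crafted by hand.

```csharp
// Sketch: redirect-after-POST in an ASP.NET code-behind.
// SubmitButton_Click, SaveComment, and Confirmation.aspx are hypothetical names.
protected void SubmitButton_Click(object sender, EventArgs e)
{
    SaveComment(); // do the POST's actual work first

    // The common way: Response.Redirect always issues a 302 Found.
    // Response.Redirect("Confirmation.aspx");

    // For a 303 See Other, set the status line and Location header yourself.
    Response.StatusCode = 303;
    Response.StatusDescription = "See Other";
    Response.AddHeader("Location", "Confirmation.aspx");
    Response.End();
}
```

Either way, the browser follows up with a GET, so a refresh on the resulting page won’t re-submit the form.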

comments edit

Here’s the dirty little secret about being a software developer. No matter how good the code you write is, it’s crap to another developer.

It doesn’t matter if the code is so clean you could eat sushi off of it. Doesn’t matter if both John Carmack and Linus Torvalds bow down in respect every time the code is shown on the screen. Some developer out there will call it crap, and it’s usually the developer who inherits the code when you leave.

The reasons are many and petty:

  • Your code uses string concatenation in that one method rather than using a StringBuilder. So what if in this one situation, that was a conscious decision because on average that method only concatenates three or four strings together. The next guy doesn’t care.
  • You put your curly braces on the same line rather than its own line as God intended (or vice versa).
  • You used a switch statement when everyone (including the next developer) knows you’re supposed to replace that with the State or Strategy pattern, always! Didn’t you read Design Patterns? Never mind the fact that there’s only one switch statement and thus no code duplication.
  • You’re using Spring.NET for dependency injection, but the next guy loves Windsor. Only idiots choose Spring.NET (or vice versa, again).
  • Or perhaps you used dependency injection at all. What the hell is dependency injection? I don’t understand the code now! :(

While we strive for perfect code, it is unattainable on real projects because real code is weighed down by constraints such as time pressure. Unfortunately, the constraints themselves aren’t reflected in the code, just their effects. The next developer reading your code doesn’t know it was written with one hour left to deliver the project.

Although I admit, having been burned by misguided criticism before, it’s tempting to take a pre-emptive strike by trying to embed the constraints in the code via comments.

For example,

public string SomeMethod()
{
  /*
  At most, there will only be 4 to 5 foos, so string concatenation 
  is just fine in this situation. Here are links to five blog posts that 
  talk about the perf implications. Give me a break, it’s 
  3 AM, I’m hopped up on Jolt, this project is 3 months
  late, and I have no social life anymore. Cut me some slack!
  ...
  */
  string result = string.Empty;
  foreach(Foo foo in Foos)
  {
    result += foo;
  }
  return result;
}

Seems awful defensive, no? There’s nothing wrong with leaving a comment to highlight why a particular non-obvious design decision is made. In fact, that’s exactly what comments are for, rather than simply reiterating what the code does.

The problem, though, is that developers sometimes cut each other so little slack that you start writing a treatise in green (or whichever color you have comments set to in your IDE) to justify every line of code, because you have no idea what is going to be obvious to the next developer.

That’s why I was particularly pleased to receive an email the other day from a developer who inherited some code I wrote and said that the solutions were, and I quote, “really well written”.

Seriously? Am I being Punk’d? Ashton, where the hell are you hiding?

This is quite possibly the highest compliment you can receive from another developer. And I don’t think it’s because I’m such a great developer. I really think the person who deserves credit here is the one giving the compliment.

I mean, my reaction when I’ve inherited code was typically, why the hell did they write this this way!? Did they learn to code from the back of a Cracker Jack Box!? Who better to serve as the scapegoat than the developer who just left?

Fortunately I had enough tact to keep those thoughts to myself. In the future, I’ll work harder on the empathy side of things. When I inherit code, I’ll assume the developer wrote it in a 72 hour straight coding binge, his World of Warcraft character held hostage, bees all over his body, with only an hour to finish the code on a 386 before everything really starts to go south.

Given those circumstances, it’s no wonder the idiot didn’t use a using block around that IDisposable instance.
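For the record, the fix is a one-liner. The connection and connection string here are hypothetical; the point is the pattern:

```csharp
// The using block guarantees Dispose() runs even when an exception
// is thrown, so the connection is never leaked.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... do work with the connection ...
} // connection.Dispose() is called automatically here
```

Which is exactly the kind of thing that’s easy to skip on hour 71 of a coding binge.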

comments edit

In his post Goodbye CodeBetter and ALT.NET, Sam Gentile writes about his dissatisfaction with CodeBetter and the ALT.NET movement. I don’t know Sam personally, but I’ve read his blog for a long time and know him to be a well-reasoned, thoughtful person.

Sam, please don’t throw out the baby with the bathwater, to use an old cliche. I don’t think it’s necessary to equate CodeBetter with ALT.NET. Perhaps CodeBetter bloggers are very influential in the ALT.NET circles, but it’s important for ALT.NET to stand separately and on its own.

Sam mentions that ALT.NET is divisive.

ALT.NET is a divisive thing. No matter what they tell you, they are full of negative energy, they sneer at others that don’t buy into their view and sneer at the “enterprisey” folks. I know, I was there.

I think the divisive label can also be applied to the Agile Movement, which Sam was a part of. It divides people into two camps, those who agree and those who don’t. It’s divisive because it makes a stand, but hopefully without all the sneering and negative energy.

ALT.NET should be about considering alternatives, not being contrarian.

A lot of fuss has been made about the ALT.NET label on this particular movement. Personally, I think it’s darn near impossible to change the name of a movement once it sticks. The real work is in putting the meaning into the label so it reflects something positive.

For example, I don’t see ALT.NET as saying, “you must use alternatives to Microsoft technologies in all cases”. Otherwise the ALT.NET movement would really be the Ruby, Erlang, Python, Haskell, Java movement. ALT.NET is not about simply being contrarian.

I think the movement is really about opening people’s eyes to always be learning and considering better ALTernatives to the tools, methods, and practices they use now. As Dave Laribee wrote in his ALT.NET post:

What does it mean to be ALT.NET? In short it signifies:

  • You’re the type of developer who uses what works while keeping an eye out for a better way.
  • You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
  • You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
  • You know tools are great, but they only take you so far. It’s the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.)

These are all noble goals.

In his Goodbye post, Sam reflects on his ALT.NET involvement.

Now, onto this whole ALT.NET thing. When I was in the weeds at Algo, coding away, I too got caught up in the low-level issues and put out a stupid ALT.NET Moniker and List. I took a ton of crap on this from my friends all over the world, both inside and outside Microsoft. It wasn’t about disagreement, it was just blindly putting a list that was stupid, cross out everything Microsoft in one column and replace it with something else.

Reading his post, I don’t think the involvement itself is anything to be ashamed of. His post first lists the principles above, but then goes on to make two lists, one for ALT.NET and one for Not ALT.NET. Sam’s mistake was not in joining in the ALT.NET fun; it was in making the hot-or-not list appear to be a significant part of ALT.NET (whether he intended that or not).

As the last ALT.NET principle states, tools only take you so far. Not only that, they change rapidly. Tools that are “HOT” today will end up being “NOT” tomorrow. While picking the best tools for the job is important to developers, suggesting specific tools is an addendum to ALT.NET, not core to it.

I certainly have my list of tools I think are the best tools for the job, but I won’t go so far to say you must use these tools or you’re a Mort. Drawing lines among which tools you use is just plain silliness, reminiscent of elementary school lines drawn along who could afford designer jeans.

So Sam, if you’re reading, keep in mind that the Agile Manifesto wasn’t written in one shot. It took time to evolve and refine the message till it was something the signers could agree upon.

I think ALT.NET is in that stage. The message is still being defined and refined, and we need voices of reason involved. So while you might leave CodeBetter, consider staying involved in ALT.NET. After all, the goal is to sift out the real gold nuggets and move them into the mainstream.

comments edit

I think Miguel de Icaza nails it regarding some of the FUD being written about Microsoft’s latest move to make the source code to the .NET Framework available under the Microsoft Reference License (Ms-RL).

In fact, his post inspired me to try my hand at creating a comic. I have no comic art skills (nor comic writing skills), so please forgive me for my lack of talent (click for full size)…

Microsoft opens the code

I know some of the people involved who made this happen and I find it hard to believe that there were nefarious intentions involved. You have to understand that while Bill Gates and Steve Ballmer are known for playing hardball, they aren’t necessarily personally involved in every initiative at Microsoft (as far as I know).

Some things start from the grassroots with motives as simple as trying to give developers a better experience than they’ve had before.

Before: the original code, complete with helpful comments, original variable names, etc… was closed. You could use Reflector (and possibly violate EULAs in the process), but it wasn’t as nice as having the actual code.

After: The source is available to be seen. This is certainly not more closed than before. It is clearly better because you now have more choice. You can choose to view the code, or choose not to. Before, you only had one choice - no lookie lookie here!

But It’s Not Open Source!

Many pundits have pointed out that this is not Open Source. That is correct and as far as I can tell, nobody at Microsoft (at least in an official position) is claiming that.

The Ms-RL is not an open source license, so there is reason to be cautious should you be contributing to the Mono project, or plan to write a component that is similar to something within the framework. As Miguel wrote in his post, these precautions have been in place within the Open Source community for a very long time.

So yes, it’s not open source. But it’s a step in the right direction. As I’ve written before, we’re seeing steady progression within Microsoft regarding Open Source, albeit with the occasional setback.

My hope, when I start at Microsoft, is to be involved with that progress in one form or another as I see it as essential and beneficial to Microsoft. But I will be patient.

Should You Look At The Code?

So should you look at the source code? Frans Bouma says no!

Take for example the new ReaderWriterLockSlim class introduced in .NET 3.5. It’s in the System.Threading namespace which will be released in the pack of sourcecode-you-can-look-at. This class is a replacement for the flawed ReaderWriterLock in the current versions of .NET. This new lock is based on a patent, which (I’m told) is developed by Jeffrey Richter and sold to MS. This new class has its weaknesses as well (nothing is perfect). If you want to bend this class to meet your particular locking needs by writing a new one based on the ideas in that class’ sourcecode, you’re liable for a lawsuit as your code is a derivative work based on a patented class which is available in sourcecode form.

However, I think the advice in Miguel’s post addresses this to some degree.

If you have a vague recollection of the internals of a Unix program, this does not absolutely mean you can’t write an imitation of it, but do try to organize the imitation internally along different lines, because this is likely to make the details of the Unix version irrelevant and dissimilar to your results.

My advice would be to use your head and not veer towards one extreme or another. If you’re planning to ship a ReaderWriterLockSlim class, then I probably wouldn’t look at their implementation.
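Using the class, as opposed to reimplementing it, carries no such risk, since a consumer only touches the public API. For context, consuming it looks roughly like this (a sketch; the cache is a hypothetical example, but the lock members are the real .NET 3.5 System.Threading API):

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical cache guarded by a ReaderWriterLockSlim: many threads
// may read concurrently, while writes take the lock exclusively.
public static class CacheSketch
{
    static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
    static readonly Dictionary<string, string> cache = new Dictionary<string, string>();

    public static string GetValue(string key)
    {
        cacheLock.EnterReadLock(); // many readers may hold this at once
        try
        {
            string value;
            return cache.TryGetValue(key, out value) ? value : null;
        }
        finally
        {
            cacheLock.ExitReadLock();
        }
    }

    public static void SetValue(string key, string value)
    {
        cacheLock.EnterWriteLock(); // exclusive; blocks readers and writers
        try
        {
            cache[key] = value;
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
```

The legal worry is about studying the internals behind those Enter/Exit calls, not about calling them.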

But that shouldn’t stop you from looking at code that you have no plans to rewrite or copy.

And what do you do if you happen to look at the ReaderWriterLockSlim class by accident when you were planning to write one for your internal data entry app? Either have another member of your team write it, or follow the above advice and implement it along different lines.

For example, Unix utilities were generally optimized to minimize memory use; if you go for speed instead, your program will be very different …

Or, on the contrary, emphasize simplicity instead of speed. For some applications, the speed of today’s computers makes simpler algorithms adequate.

Or go for generality. For example, Unix programs often have static tables or fixed-size strings, which make for arbitrary limits; use dynamic allocation instead.

Just don’t copy the existing implementation.

For many developers, their code is never distributed because it is completely internal, or runs on a web server. In that case, I think the risk is very low that anyone is going to prove you infringed on a patent because you happened to look at a piece of code, unless the code is a very visible UI element.

Please don’t misunderstand me on this point. I’m not recommending you violate any software patents (even though I think most if not all software patents are dubious), I’m just saying the risk of patent taint for many developers who look at the .NET source code is not as grave as many are making it out to be. When in doubt, you’d do well to follow the advice in Miguel’s post.

UPDATE: Upon further reflection, I realized there is one particular risk with what I’ve just said.

In the case of ReaderWriterLockSlim, I believe the particular algorithm for high performance is patented. But what if the idea of a reader-writer lock in general (one that allows simultaneous reads unless blocking for a write) were patented?

Then you could get in trouble for implementing a reader-writer lock even if you never look at the source code. Patent infringement is a whole different beast than copyright infringement. This scenario is not so far-fetched; it’s something Bill Gates has warned against in the past, and it has come to pass many times since.

Of course, this risk is present whether or not Microsoft makes the source available. By using Reflector, for example, you’d have the same risk of being exposed to patented techniques.

I should point out I’m not a lawyer so follow any of this advice at your own risk.

Having said that, a follow-up post on Frans’s blog proposes a solution I think Microsoft should jump on to clear things up. It comes from the JRL (Java Research License).

The JRL is not a tainting license and includes an express ‘residual knowledge’ clause which says you’re not contaminated by things you happen to remember after examining the licensed technology. The JRL allows you to use the source code for the purpose of JRL-related activities but does not prohibit you from working on an independent implementation of the technology afterwards.

It’d be nice if Microsoft added a similar clause to the Ms-RL so much of this FUD can just go away. Or even better, take the next step and look at putting this code (at least some of it) under the Ms-PL.

Disclaimer: Starting on October 15, I will be a Microsoft Employee, but the opinions expressed in this post are mine and mine only. I do not speak for Microsoft on these matters.

I’m also the leader of a couple of OSS projects, so I will be very careful about separating what I learn on the job from what I contribute to Subtext et al. But I’ll be a PM, so I hear I won’t be looking at much code anyway. ;)

comments edit

I just received a few advance copies of our new book, The ASP.NET 2.0 Anthology, and am giving away three of them to the first three people who leave a comment on this post.

But there’s a catch!

You have to have a blog and promise to write a review on your blog. This is on the honor system so I’ll send you the book and you can then review it.

In your comment, leave your email address in the email field (it’s not visible to anyone else) and I’ll follow up to get your mailing address. Also let me know if you want it signed or not. Not sure why you’d want that, but you never know.

comments edit

One weakness with many blog engines, Subtext included, is that it is difficult to change the tags and categories for multiple entries at a time. In general, most blog engines streamline the workflow for tagging and categorizing a single blog post.

Fortunately, Marco De Sanctis, a friend of Simo (a core Subtext Developer) wrote a nice application that you can use to bulk categorize and tag multiple posts. He developed it using Subtext as a test-bed so it handles the fact that we use the rel-tag microformat within the content as our tagging mechanism. Sweeeeet!


Many thanks to Simo for blogging about this and to Marco for writing this.

code, tdd comments edit

It is a sad fact of life that, in this day and age, arguments are not won with sound logic and reasoning. Instead, applying the principle of framing an argument is much more effective at swaying public opinion.

So the next time you try to make headway introducing Test Driven Development (or even simply introducing writing automated unit tests at all) into an organization and are rebuffed with…

Don’t bring your fancy schmancy flavor of the week agile manifesto infested “methodology” here kiddo. I’ve been writing software my way for a loooong time…

You can reply with…

I’m sorry, but I’m not a fan of Bug Driven Development. I think Test Driven Development is not without its challenges, but it’s a better alternative. Either you’re with us, or against us. Are you a bug lover? Bug Driven Development gives comfort to the bugs.

UPDATE: this is an example of my dry humor. I don’t believe that “Framing” is a good way to win an argument, and I would never actually say or recommend saying anything similar to this. It’s meant as a bit of a joke, but with a point.

A team that is not focused on automated testing of some sort throughout the lifecycle of the project is effectively embracing Bug Driven Development. Bugs are going to drive the development cycle at the end of the project.

Don’t take my word for it, though; look at the research done by others. In Facts and Fallacies of Software Engineering, Robert Glass points out…

Fact 31. Error removal is the most time-consuming phase of the life cycle.

In Rapid Development, Steve McConnell relates…

Shortcutting 1 day of QA activity early in the project is likely to cost you from 3 to 10 days of activity downstream.

In other words, if you don’t control the bugs, the bugs control your schedule.

code, tdd comments edit

This is a simple little demonstration of how to write unit tests to test out a specific role based permission issue using NUnit/MbUnit and Rhino Mocks.

In Subtext, we have a class named FileBrowserConnector that really should only ever be constructed by a member of the Admins role. Because this class can write to the file system, we want to take extra precautions other than simply restricting access to the URL in which this object is created.

Here are two tests I wrote to begin with.

[Test]
[ExpectedException(typeof(SecurityException))]
public void NonAdminCannotCreateFileConnector()
{
  new FileBrowserConnector();
}

[Test]
public void AdminCanCreateFileConnector()
{
  MockRepository mocks = new MockRepository();

  IPrincipal principal;
  using (mocks.Record())
  {
    IIdentity identity = mocks.CreateMock<IIdentity>();
    SetupResult.For(identity.IsAuthenticated).Return(true);
    principal = mocks.CreateMock<IPrincipal>();
    SetupResult.For(principal.Identity).Return(identity);
    SetupResult.For(principal.IsInRole("Admins")).Return(true);
  }

  using (mocks.Playback())
  {
    IPrincipal oldPrincipal = Thread.CurrentPrincipal;
    try
    {
      Thread.CurrentPrincipal = principal;
      FileBrowserConnector connector = new FileBrowserConnector();
      Assert.IsNotNull(connector, "Could not create the connector.");
    }
    finally
    {
      Thread.CurrentPrincipal = oldPrincipal;
    }
  }
}

The first test is really straightforward. It simply tries to instantiate the FileBrowserConnector class.

The second test is a bit more involved, but the concept is simple. I’m using the Rhino Mocks mocking framework to dynamically construct instances that implement the IIdentity and IPrincipal interfaces.

The following line…

SetupResult.For(principal.IsInRole("Admins")).Return(true);

Tells the dynamic principal mock to return true when the IsInRole method is called with the argument “Admins”. We then set Thread.CurrentPrincipal to this constructed principal and try to create an instance of FileBrowserConnector.

Here are the results of my first test run, trimmed down a bit.

Found 2 tests
[failure] FileBrowserConnectorTests.NonAdminCannotCreateFileConnector
Exception of type 'MbUnit.Core.Exceptions.ExceptionNotThrownException' 
was thrown. 

[success] FileBrowserConnectorTests.AdminCanCreateFileConnector
[reports] generating HTML report
TestResults: file:///D:/AppData/MbUnit/Reports/UnitTests.Subtext.Tests.html

1 passed, 1 failed, 0 skipped, took 4.37 seconds.

As expected, one test passed and one failed. Now I can go ahead and enforce security on the FileBrowserConnector class.

[PrincipalPermission(SecurityAction.Demand, Role = "Admins")]
public class FileBrowserConnector: Page
{
  //... implementation ...
}

That’s all there is to it. You might be wondering if this test is even needed because all I’m really testing is that the PrincipalPermission attribute does indeed work.

This test is still important to prevent regressions. You don’t want someone coming along and removing that attribute, whether by accident or out of ignorance, without anyone noticing.

In codebases that I’ve worked with, I’ve seen a tendency to ignore or forget to write test cases for security requirements. This demo hopefully provides a starting point for me and others to make sure that security requirements get good test coverage.

I should probably write yet another test to make sure a principal in a different role cannot create an instance of this class.
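That extra test might look something like this, following the same pattern as the tests above (a sketch; an authenticated principal whose IsInRole("Admins") check returns false):

```csharp
[Test]
[ExpectedException(typeof(SecurityException))]
public void AuthenticatedNonAdminCannotCreateFileConnector()
{
  MockRepository mocks = new MockRepository();

  IPrincipal principal;
  using (mocks.Record())
  {
    IIdentity identity = mocks.CreateMock<IIdentity>();
    SetupResult.For(identity.IsAuthenticated).Return(true);
    principal = mocks.CreateMock<IPrincipal>();
    SetupResult.For(principal.Identity).Return(identity);
    // Authenticated, but not in the Admins role.
    SetupResult.For(principal.IsInRole("Admins")).Return(false);
  }

  using (mocks.Playback())
  {
    IPrincipal oldPrincipal = Thread.CurrentPrincipal;
    try
    {
      Thread.CurrentPrincipal = principal;
      new FileBrowserConnector(); // should throw SecurityException
    }
    finally
    {
      Thread.CurrentPrincipal = oldPrincipal;
    }
  }
}
```

This covers the gap between "anonymous user" and "admin": a logged-in user in the wrong role.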

code, tdd comments edit

This is a quick follow-up to my last post. That seemed like such a common test situation I figured I’d write a quick generic method for encapsulating those two tests.

I’ll start with usage.

[Test]
public void FileBrowserSecureCreationTests()
{
  AssertSecureCreation<FileBrowserConnector>(new string[] {"Admins"});
}

And here’s the method.

/// <summary>
/// Helper method. Makes sure you can create an instance
/// of a type only if you have one of the correct roles.
/// </summary>
/// <typeparam name="T">The type to instantiate.</typeparam>
/// <param name="allowedRoles">Roles allowed to create the instance.</param>
/// <param name="constructorArguments">Arguments passed to the constructor.</param>
public static void AssertSecureCreation<T>(string[] allowedRoles
  , params object[] constructorArguments)
{
  try   
  {     
    Activator.CreateInstance(typeof (T), constructorArguments);
    Assert.Fail("Was able to create the instance with no security.");
  }
  catch(TargetInvocationException e)
  {
    Assert.IsInstanceOfType(typeof(SecurityException)
      , e.InnerException
      , "Expected a security exception, got something else.");
  }

  MockRepository mocks = new MockRepository();

  IPrincipal principal;
  using (mocks.Record())
  {
    IIdentity identity = mocks.CreateMock<IIdentity>();
    SetupResult.For(identity.IsAuthenticated).Return(true);
    principal = mocks.CreateMock<IPrincipal>();
    SetupResult.For(principal.Identity).Return(identity);
    Array.ForEach(allowedRoles, delegate(string role) 
    {
      SetupResult.For(principal.IsInRole(role)).Return(true);
    });
  }

  using (mocks.Playback())
  {
    IPrincipal oldPrincipal = Thread.CurrentPrincipal;
    try
    {       
      Thread.CurrentPrincipal = principal;       
      Activator.CreateInstance(typeof(T), constructorArguments);
      //Test passes if no exception is thrown.
    }     
    finally
    {       
      Thread.CurrentPrincipal = oldPrincipal;     
    }   
  } 
}

There are definite improvements we can make, but this is a nice quick way to test the basic permission level for a class.
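One such improvement, sketched below under the same assumptions as the helper above: a companion method asserting that a principal in some other role still cannot create the instance.

```csharp
/// <summary>
/// Sketch: asserts that a principal in the given (wrong) role
/// cannot create an instance of the type.
/// </summary>
public static void AssertCannotCreateInRole<T>(string deniedRole
  , params object[] constructorArguments)
{
  MockRepository mocks = new MockRepository();

  IPrincipal principal;
  using (mocks.Record())
  {
    IIdentity identity = mocks.CreateMock<IIdentity>();
    SetupResult.For(identity.IsAuthenticated).Return(true);
    principal = mocks.CreateMock<IPrincipal>();
    SetupResult.For(principal.Identity).Return(identity);
    // The principal is in *a* role, just not an allowed one.
    SetupResult.For(principal.IsInRole(deniedRole)).Return(true);
  }

  using (mocks.Playback())
  {
    IPrincipal oldPrincipal = Thread.CurrentPrincipal;
    try
    {
      Thread.CurrentPrincipal = principal;
      try
      {
        Activator.CreateInstance(typeof(T), constructorArguments);
        Assert.Fail("Created the instance despite the wrong role.");
      }
      catch (TargetInvocationException e)
      {
        Assert.IsInstanceOfType(typeof(SecurityException)
          , e.InnerException
          , "Expected a security exception, got something else.");
      }
    }
    finally
    {
      Thread.CurrentPrincipal = oldPrincipal;
    }
  }
}
```

Usage would mirror the positive case, e.g. AssertCannotCreateInRole&lt;FileBrowserConnector&gt;("Editors"), where "Editors" is a hypothetical non-admin role.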

personal comments edit

UPDATE: We released Subtext 2.0 which also includes the fix for this vulnerability among many other bug fixes.

A Subtext user reported a security vulnerability due to a flaw in our integration with the FCKEditor control which allows someone to upload files into the images directory without being authenticated.

As far as we know, nobody has been seriously affected, but please update your installation as soon as possible. Our apologies for the inconvenience.

The fix should be relatively quick and painless to apply.

The Fix

If you’re running Subtext 1.9.* we have a fix available consisting of a single assembly, Subtext.Providers.BlogEntryEditor.FCKeditor.dll. After you download it (Subtext1.9.5-PATCH.zip, 7.72KB), unzip the assembly (I recommend backing up your old one just in case) and copy it into your bin directory.

Alternative Workaround

If you’re running a customized version and the above patch causes problems, you can work around this issue by backing up and then temporarily removing the following directory in your installation.

Providers\BlogEntryEditor\FCKeditor\editor\filemanager

Notes

The Subtext team takes security very seriously and we regret that this flaw made it into our system. We appreciate that a user discreetly brought it to our attention, and we worked quickly to create and test a patch. I went ahead and updated the release on SourceForge (if you’ve downloaded Subtext-1.9.5b then you’re safe) so that no new downloads are affected.

The code has also been fixed in Subversion in case you’re running a custom-built version of Subtext.

I will follow up with a post later describing the issue in more detail and what we plan to do to mitigate such risks in the future. I’ll also write a post outlining general guidelines for reporting and handling security issues in an open source project based on guidance provided by the Karl Fogel book, Producing Open Source Software.

Again, I am sorry for any troubles and inconvenience this may have caused. If you know any Subtext users, please let them know. I’ll be updating the website momentarily.

Download

Again, here is the patch location.

comments edit

In his book, Producing Open Source Software, Karl Fogel gives sage advice on running an open source project. The section on how to deal with a security vulnerability was particularly interesting to me last night.

Upon learning of a potential security hole, Karl recommends the following:

  1. Don’t talk about the bug publicly until a fix is available.
  2. Make sure to have a private mailing list set up with a small group of trusted committers to which users can send security reports.
  3. Create the fix quickly. Time is of the essence.
  4. Don’t commit the fix into your source control lest someone scanning for such vulnerabilities find out about it. Wait till after the fix is released.
  5. Give well known administrators (and thus likely targets) using the software a heads up before announcing the flaw and the fix.
  6. Distribute the fix publicly.

There’s more elaboration in the book, but I think the above list distills the key points. Karl’s advice is born from his experience working on CVS and leading the Subversion project and makes a lot of sense.

But for a project built on Java, .NET, or a scripting language, there is an interesting dilemma. The security fix itself announces the vulnerability.

When the Subversion team releases a patch, it is generally compiled to native machine code, which is effectively opaque to the world. Sure, with time and effort, a native executable can be decompiled, but the barrier to discovering the actual exploit by examining the binary is high. It buys consumers time to patch their installations before exploits become rampant.

With a language like C#, Java, or Ruby, the bar to looking at the code is extremely low. Such languages can raise the bar slightly by using obfuscators, but that is really not common for an Open Source project and creates very little delay for the determined attacker.

So no matter how well you keep the flaw private until you’re ready to announce the fix, the announcement and publication of the fix itself potentially points attackers to the flaw.

This is one situation in which the increased transparency of such languages can cause a problem. Consumers of projects built on these languages have to be extra vigilant about applying patches quickly, while developers of such code must be extra vigilant in threat modeling and code review to avoid security vulnerabilities in the first place. Then again, this doesn’t mean that developers of code compiled to a native binary should be any less vigilant about security.

If you have a better way of distributing security patches for VM-based/Scripting language projects than this, please do tell.

comments edit

Remember the book I mentioned that I was writing along with a few colleagues? Well, it is finally available for pre-order on Amazon.com!

If you love me, you’ll buy five copies each. No. Ten copies!

Or, you could wait for the reviews and buy the book on its own merits, which I hope it warrants. But what’s the fun in that?

All kidding aside, this was a fun and tiring collaborative effort with Jeff “Coding Horror” Atwood, Jon Galloway, K. Scott Allen, and Wyatt Barnett. The book aggregates our collective wisdom on the topic of building web applications with ASP.NET 2.0 as a series of tips, tricks, and hacks.

The target audience for the book is the intermediate developer looking to raise his or her skills to the next level, so some of the material quickly rehashes the basics to set the stage for more interesting tips and tricks. The goal of the book is to be a survival guide for ASP.NET developers. We’ve been bitten and had the poison sucked out of our veins so you can avoid the vipers in the wild.

Technorati tags: Books, ASP.NET

personal, code, asp.net mvc comments edit

It was only two and a half months ago when I wrote about receiving my Microsoft MVP award. I was quite honored to receive this award.

In a follow-up comment to that post, rich with unintentional foreshadowing, I mentioned the following…

However, I would like to hit up that MVP conference in Redmond before doing anything to cause my MVP status to be dropped.

Unfortunately, I will not be retaining my MVP status long enough for the MVP conference. I have committed an action that has forced Microsoft’s hand in this matter and they must remove my MVP status.

To understand why this is the case, I must refer you to the Microsoft MVP FAQ which states the following in the fifth question…

Q5: Do MVPs represent Microsoft?

A5: No. MVPs are not Microsoft employees, nor do they speak on Microsoft’s behalf. MVPs are third-party individuals who have received an award from Microsoft that recognizes their exceptional achievements in technical communities.

Starting on October 15, 2007, I will join the ranks of Microsoft as an employee, thus putting myself in violation of this rule.

Don’t worry about me dear friend. I will cope well with this loss of status. I don’t hold Microsoft to blame.

Well, that’s not true. I do hold them to blame. While in Redmond recently, Scott Guthrie (aka ScottGu) showed me a rough prototype of a cool MVC framework they are working on for a future version of ASP.NET. When I saw it, I told Scott,

I want to work on that. How can I work on that?

So yes, I do blame Microsoft. I blame Microsoft for showing me something to which I absolutely could not resist contributing. I will be starting soon as a Senior Program Manager in the ASP.NET team.

I will continue to work from Los Angeles while we work on selling our house, which unfortunately is bad timing as housing prices have taken a bit of a dive around here. Once we have things settled over here, we’ll pack our things and move up to Seattle.

I’ll be in Seattle the week of October 15 for New Employee Orientation and to meet the rest of the team, so hopefully we can have another geek dinner/drink (I’m looking at you Brad, Scott, Peli, et al.).

On the other side of the coin, work has been really fun lately at Koders, especially with the release of Pro Edition and the rails work I’ve been doing lately, so leaving is not easy, despite my short tenure. It’s a great company to work for and I wish them continued success.

My last day is this Wednesday and I will be taking a short break in between jobs to spend time with the family, travel, and get the house ready to sell.

As for Subtext, I will continue to contribute my spare moments, leading the charge towards making it a fantastic blogging platform. When you think about it, joining the ASP.NET team is really just a clever ploy to make Subtext even better by influencing the underlying platform in a direction that makes it a joy to write code and tests for. Yeah, I said tests. Of course, my goal is to make every app built on ASP.NET, not just Subtext, better (and more testable, as a contributing factor to being better) through the work that we do.

Wish me luck in that endeavor.

asp.net comments edit

UPDATE: K. Scott Allen got to the root of the problem. It turns out it was an issue of precedence. Compiler options are not additive. Specifying options in @Page override those in web.config. Read his post to find out more.

Conditional compilation constants are pretty useful for targeting your application for a particular platform, environment, etc… For example, to have code that only executes in debug mode, you can define a conditional constant named DEBUG and then do this…

#if DEBUG
//This code only runs when the app is compiled for debug
Log.EverythingAboutTheMachine();
#endif

One thing that wasn’t common knowledge, at least to me, is that these constants work equally well in ASPX and ASCX files. For example:

<!-- Note the space between % and # -->
<% #if DEBUG %>
<h1>DEBUG Mode!!!</h1>
<% #endif %>

The question is, where do you define these conditional constants for ASP.NET? The answer is: it depends on whether you’re using a Website project or a Web Application project.

For a Web Site project, one option is to define it at the Page level like so…

<%@ Page CompilerOptions="/d:QUUX" %>

The nice thing about this approach is that the conditional compilation works both in the ASPX file and in the CodeFile for ASP.NET Website projects.

According to this post by K. Scott Allen, you can also define conditional compilation constants in the Web.config file using the <system.codedom /> element (a direct child of the <configuration /> element), but this didn’t work for me in either Website or Web Application projects.

<system.codedom>
  <compilers>
    <compiler
      language="c#;cs;csharp" extension=".cs"
      compilerOptions="/d:MY_CONSTANT"
      type="Microsoft.CSharp.CSharpCodeProvider, 
        System, Version=2.0.0.0, Culture=neutral, 
        PublicKeyToken=b77a5c561934e089" />
  </compilers>
</system.codedom>

At heart, Web Application Projects are no different from Class Library projects, so you can set conditional compilation constants from the project properties dialog in Visual Studio.

Unfortunately, these only seem to work in the code behind and not within ASPX files.

Here’s a grid based on my experiments showing when and where setting conditional compilation constants works in ASP.NET.

|                           | Web.config | Project Properties | Page Directive |
|---------------------------|------------|--------------------|----------------|
| Website Code File         | No         | n/a                | Yes            |
| Web Application Code File | No         | Yes                | No             |
| ASPX, ASCX File           | No         | No                 | Yes            |

In order to create this grid, I created a solution that includes both a Web Application project and a Website project and ran through all nine permutations. You can download the solution here if you’re interested.

It’s a bit confusing, but hopefully the above table clears things up slightly. As for setting the conditional constants in Web.config, I’m quite surprised that it didn’t work for me (yes, I set it to full trust) and assume that I must’ve made a mistake somewhere. Hopefully someone will download this solution and show me why it doesn’t work.
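Given that compiler options are not additive and the @Page directive overrides Web.config, one workaround is to list every constant you need in the directive itself, since multiple /d switches are accepted. This is a hedged suggestion; QUUX and MY_CONSTANT are just the sample names from earlier, and I haven’t tested every combination:

```aspx
<%@ Page Language="C#" CompilerOptions="/d:QUUX /d:MY_CONSTANT" %>
```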

comments edit

Here’s a little plug for something we’ve been working hard at over at Koders. Everyone knows that if you want to find open source code, you go to http://www.koders.com/ (it recently got a minor new facelift so check it out). That’s my area of responsibility here. However, after many many months of hard work, we released Koders Pro Edition 1.0 this week. I helped a bit with this, but it’s mostly due to the hard work of the rest of the team that this is out there, especially Ben, the product manager for Pro.

Pro Edition is the yin to the Koders.com yang: it’s great for searching and sharing your own and your team’s internal code.

This should not be confused with desktop code search, although it can certainly be used in that manner. Rather, it’s more similar to the Google Search Appliance: something you install on a server and point at your source control or file system, so your whole team can quickly search and find your internal code.

While the focus of Pro Edition is on indexing your internal code, it doesn’t preclude you from indexing public open source code. After all, Pro Edition is cut from the same cloth (though scaled down) as the indexer we use for http://www.koders.com/, so you’re getting a lot of power under the hood.

Pro Edition allows private and public code to be intermingled if you so desire. For example, suppose your company has a limited set of open source projects you’d like to be able to search. Because Pro Edition supports indexing any CVS or Subversion repository (the two source control systems most widely used by open source projects), there’s nothing stopping you from pointing your local Pro Edition at an open source code repository and indexing that code along with your internal code.

Doing this would allow you to create a private searchable index of “approved” open source code. If this sounds interesting to you, try out the free trial.

code, tech, blogging comments edit

I was thinking about alternative ways to block comment spam the other day and it occurred to me that there’s potentially a simpler solution than the Invisible Captcha approach I wrote about.

The Invisible Captcha control plays upon the fact that most comment spam bots don’t evaluate JavaScript. However, bots have another behavioral trait that can be exploited, owing to their inability to support yet another browser facility.

You see, comment spam bots love form fields. When they encounter a form field, they go into a berserker frenzy (+2 to strength, +2 hp per level, etc…) trying to fill out each and every field. It’s like watching someone toss meat to piranhas.

At the same time, spam bots tend to ignore CSS. For example, if you use CSS to hide a form field (especially via CSS in a separate file), they have a really hard time knowing that the field is not supposed to be visible.

To exploit this, you can create a honeypot form field that should be left blank and then use CSS to hide it from human users, but not from bots. When the form is submitted, you check to make sure the value of that form field is blank. For example, I’ll use the form field named body as the honeypot. Assume that the actual body is in another form field named the-real-body or something like that:

<div id="honeypot">
If you see this, leave this form field blank 
and invest in CSS support.
<input type="text" name="body" value="" />
</div>
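If you go the external-stylesheet route mentioned earlier, the hiding rule might look like this (assuming the wrapper div has an id of honeypot; bots that ignore CSS will never know the field is invisible):

```css
/* In a separate .css file, where bots are even less likely to look */
#honeypot { display: none; }
```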

Now in your code, you can just check to make sure that the honeypot field is blank…

if(!String.IsNullOrEmpty(Request.Form["body"]))
  IgnoreComment();

I think the best thing to do in this case is to act like you’ve accepted the comment, but really just ignore it.
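A hedged sketch of that silent-ignore approach (SaveComment and comment-posted.aspx are made-up names for illustration):

```csharp
// If the honeypot field came back non-empty, a bot filled it in.
if (!String.IsNullOrEmpty(Request.Form["body"]))
{
    // Don't save the comment, but show the same "thanks" page a
    // real commenter would see, so the bot thinks it succeeded.
    Response.Redirect("comment-posted.aspx");
    return;
}

// Otherwise it's (probably) a human; save the comment for real.
SaveComment();
```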

I did a Google search and discovered I’m not the first to come up with this idea. It turns out that Ned Batchelder wrote about honeypots as a comment spam fighting vehicle a while ago. Fortunately I found that post after I wrote the following code.

For you ASP.NET junkies, I wrote a Validator control that encapsulates this honeypot behavior. Just add it to your page like this…

<sbk:HoneypotCaptcha ID="body" ErrorMessage="Doh! You are a bot!"
  runat="server" />

This control renders a text box and when you call Page.Validate, validation fails if the textbox is not empty.

By default the control hides itself by setting its style attribute to display:none. You can override this behavior by setting the UseInlineStyleToHide property to false, which makes you responsible for hiding the control some other way (for example, with CSS defined elsewhere). This also provides a handy way to test the validator.
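For the curious, here is a rough sketch of how such a validator might be built on top of ASP.NET’s BaseValidator. This is a hypothetical reconstruction, not the actual Subkismet code, and HoneypotSketch is a made-up name:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical sketch: render a text input, hide it inline by default,
// and report valid only if the field comes back empty.
public class HoneypotSketch : BaseValidator
{
    private bool useInlineStyleToHide = true;

    public bool UseInlineStyleToHide
    {
        get { return useInlineStyleToHide; }
        set { useInlineStyleToHide = value; }
    }

    protected override bool ControlPropertiesValid()
    {
        // No ControlToValidate needed; the control renders its own field.
        return true;
    }

    protected override void Render(HtmlTextWriter writer)
    {
        if (UseInlineStyleToHide)
            writer.AddAttribute(HtmlTextWriterAttribute.Style, "display:none");
        writer.AddAttribute(HtmlTextWriterAttribute.Type, "text");
        writer.AddAttribute(HtmlTextWriterAttribute.Name, UniqueID);
        writer.RenderBeginTag(HtmlTextWriterTag.Input);
        writer.RenderEndTag();
    }

    protected override bool EvaluateIsValid()
    {
        // A human never sees the field, so any value means a bot filled it in.
        return String.IsNullOrEmpty(Page.Request.Form[UniqueID]);
    }
}
```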

To get your hands on this validator code and see a demo, download the latest Subkismet source from CodePlex. You’ll have to get the code from source control because this is not yet part of any release.

comments edit

Today my wife and I celebrate our fifth anniversary of being legally married. If you’ve read my blog long enough, you might have seen this post which suggests we were married June 14, not September 12.

It’s all pretty simple, you see. We had our wedding ceremony on June 14 2003, but were secretly legally married on September 12, 2002.

Ok perhaps the term secret marriage is a bit too strong. But it sounds cool, doesn’t it? The story is that at the time, my wife wanted to take a long trip back to Japan before our planned wedding. Unfortunately, with the tightening up of immigration following September 11, we were concerned she’d have trouble coming back. So we got legally married at the Beverly Hills Courthouse to make sure she could return.

Recently we decided to follow the DRY principle (Don’t Repeat Yourself) and only really celebrate our legal anniversary, as it keeps it simple for me.

Hard enough for a guy to remember one anniversary much less two!

So to Akumi (yes, she actually reads my blog), Happy Anniversary. I love you very much! And I was going to post that other silly pic of you we found, but I want to live to see the next five years of our life together.

comments edit

Last night I nearly lost a dear friend of mine. Now this is the sort of story most men, myself included, would understandably want to keep to themselves. Although this deviates from my normal content, I feel a duty to tell all in this age of transparency, because while I was in the middle of the ordeal, I turned to Google for help and didn’t find the information I needed. I write this in the hope it helps some unfortunate guy in the future.

The story begins late last night around 1:15 AM as I turned in to bed for the night. Tossing and turning, I started to feel a pain in my lower abdomen and right testicle. I could feel that my right testicle was swollen and a bit harder than one would expect. It felt like an impossibly bad case of blue balls. The worst case imaginable.

Since I hate dealing with hospitals and such, I tried to sleep it off telling myself it would be fine in the morning, as if somehow the Nut-helper Fairy would come in the middle of the night and make it all better.

Suffice it to say, when your genital region is in pain, it’s pretty damn difficult to get a good night’s sleep. You shouldn’t screw around (forgive the pun) when you have a pain in that region. So I got up, Googled it, found nothing but scary stories about testicular cancer and painful hernias, and decided then I should go see a doctor. I told my wife I had to go and I proceeded to walk over to the neighborhood Emergency Room at 2:30 AM.

During triage I explained that the pain was around a 7 on a scale of 1 to 10, it was a dull intense pain, not sharp, and it was constant, not coming in waves, centered around my testicle and my lower abdomen area.

After I was moved to a gurney, the doctor began black box testing on me. It’s not unlike debugging a bug in code for which you don’t have any means to step through a debugger. He’d prod around narrowing down the possible diagnoses. However, unlike debugging code, this process was excruciatingly painful.

After manhandling my right nut for a while, the doctor diagnosed me with Testicular Torsion. Wikipedia defines it thusly…

In testicular torsion the spermatic cord that provides the blood supply to a testicle is twisted, cutting off the blood supply, often causing orchalgia. Prolonged testicular torsion will result in the death of the testicle and surrounding tissues.

I define it as ow! ow! that fucking hurts!

So the doctor leaves to order an ultrasound and returns not long after to “try one more thing.” He then proceeds to grab the nut and twist it around, asking me to let him know when the pain subsides.

Riiiiight.

A man with a latex glove is twisting my nut and asking me when it doesn’t hurt? It doesn’t hurt when you’re not twisting it! 

Exactly how I wanted to spend my Monday morning.

Amazingly enough though, the pain subsided quickly after he stopped. I didn’t realize at the time that he was twisting it back. I thought he was just being sadistic.

The male nurse on duty quipped afterwards…

Probably the first time that someone twisting your testicle made you feel better, eh?

No, twisting my testicle normally elicits feelings of euphoria and joy. Of course it’s the first time! And by Zeus’s eye I hope it’s the last.

Afterwards I was pushed on a gurney into the ultrasound room by a big burly Russian dude who proceeded to ultrasound my testicular nether regions. At this point, there’s really no point in having any shame or bashfulness. I just tried to make small talk as he showed me the screen displaying blood flowing nicely.

As I was being discharged, the doctor told me it was a good thing I went in. Left untreated for around six hours, I could have lost the testicle. I later looked it up and this is what Wikipedia has to say on the subject (emphasis mine).

Testicular torsion is a medical emergency that needs immediate treatment. If treated within 6 hours, there is nearly a 100% chance of saving the testicle. Within 12 hours this rate decreases to 70%, within 24 hours is 20%, and after 24 hours the rate approaches 0. (eMedicineHealth) Once the testicle is dead it must be removed to prevent gangrenous infection.

Yeah, I’m going to be having nightmares too. In any case, it seems that all is well. I still have a slight bit of discomfort not unlike the feeling in your gut long after someone kicks you in the groin and I’ve been walking around a bit gingerly, worried any sudden movement might cause a relapse.

The moral of this story is when you have an intense pain in the balls, don’t be a tough guy about it. Go to the emergency room and be safe about it. No use trying to be stoic and losing a nut over it.

My next step now is to make an appointment with a Urologist so I can have yet another doctor see me in all my glory and make sure it’s all good.

To the doctor at the local neighborhood Emergency Room, I owe you a big one. Because of him, the next time someone asks me, “Hey! How’s it hanging” I can answer, “Pointed in the right direction.”

comments edit

Not too long ago I wrote a blog post on some of the benefits of Duck Typing for C# developers. In that post I wrote up a simplified code sample demonstrating how you can cast the HttpContext to an interface you create called IHttpContext, for lack of a better name.

Well, I couldn’t just sit still on that one, so I used Reflector and a lot of patience and created a set of interfaces to match the Http intrinsic classes. Here is the full list of interfaces I created, along with the concrete existing class (all in the System.Web namespace except where otherwise stated) that can be cast to each interface (the ones in bold are the most commonly used).

  • ICache - Cache
  • IHttpApplication - HttpApplication
  • IHttpApplicationState - HttpApplicationState
  • IHttpCachePolicy - HttpCachePolicy
  • IHttpClientCertificate - HttpClientCertificate
  • IHttpContext - HttpContext
  • IHttpFileCollection - HttpFileCollection
  • IHttpModuleCollection - HttpModuleCollection
  • IHttpRequest - HttpRequest
  • IHttpResponse - HttpResponse
  • IHttpServerUtility - HttpServerUtility
  • IHttpSession - System.Web.SessionState.HttpSessionState
  • ITraceContext - TraceContext

As an aside, you might wonder why I chose the name IHttpSession instead of IHttpSessionState for the class HttpSessionState. It turns out that there already is an IHttpSessionState interface, but HttpSessionState doesn’t inherit from that interface. Go figure. Now that’s a juicy tidbit you can whip out at your next conference cocktail party.

Note that I focused on classes that don’t have public constructors and are sealed. I didn’t want to follow the entire object graph!

I also wrote a simple WebContext class with some helper methods. For example, to get the current HttpContext duck typed as IHttpContext, you simply call…

IHttpContext context = WebContext.Current;

I also added a bunch of Cast methods specifically for casting http intrinsic types. Here’s some demo code to show this in action. Assume this code is running in the code behind of your standard ASPX page.

public void HelloWorld(IHttpResponse response)
{
  response.Write("<p>Who's the baddest!</p>");
}

protected void Page_Load(object sender, EventArgs e)
{
  //Grab it from the http context.
  HelloWorld(WebContext.Current.Response);
  
  //Or cast the actual Response object to IHttpResponse
  HelloWorld(WebContext.Cast(Response));
}

The goal of this library is to make it very easy to refactor existing code to use these interfaces (should you so desire), which will make your code less tied to the System.Web classes and more mockable.

Why would you want such a thing? Making classes mockable makes them easier to test, which is a worthy goal in its own right. Not only that, it gives you, the developer, control over dependencies rather than leaving your code tightly coupled to the System.Web classes. One situation I’ve run into is wanting to write a command line tool to administer Subtext on my machine. Being able to substitute my own implementation of IHttpContext will make that easier.
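To make the testing payoff concrete, here’s a minimal sketch. The IHttpResponse here is trimmed to a single member for brevity (the real interface in the download mirrors HttpResponse), and FakeResponse and Greeter are made-up names:

```csharp
using System.Text;

// A trimmed-down stand-in for the real IHttpResponse interface.
public interface IHttpResponse
{
    void Write(string s);
}

// A hand-rolled fake that records output instead of sending it to a browser.
public class FakeResponse : IHttpResponse
{
    public StringBuilder Output = new StringBuilder();
    public void Write(string s) { Output.Append(s); }
}

public static class Greeter
{
    // The method only knows about the interface, so it doesn't care
    // whether a real HttpResponse or a fake is behind it.
    public static void HelloWorld(IHttpResponse response)
    {
        response.Write("<p>Who's the baddest!</p>");
    }
}
```

A unit test can then pass in a FakeResponse and assert on Output without spinning up a web server.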

UPDATE: The stack overflow problem mentioned below has since been fixed within the Duck Typing library.

One other note as you look at the code. You might notice I’ve had to create extra interfaces (commented with a //Hack). This works around a bug I found with the Duck Casting library reproduced with this code…

public class Foo
{
  public Foo ChildFoo
  {
    get { return new Foo();}
  }
}

public interface IFoo
{
  //Note this interface references itself
  IFoo ChildFoo { get;}
}

public static class FooTester
{
  public static void StackOverflowTest()
  {
    Foo foo = new Foo();
    IFoo fooMock = DuckTyping.Cast<IFoo>(foo);
    Console.WriteLine(fooMock);
  }
}

Calling FooTester.StackOverflowTest will cause a stack overflow exception. The fix is to do the following.

public interface IFoo2 : IFoo {}

public interface IFoo
{
  IFoo2 ChildFoo { get; }
}

In any case, I hope some of you find this useful. Let me know if you find any bugs or mistakes. No warranties are implied. Download the code from here which includes the HttpInterfaces class library with all the interfaces, a Web Project with a couple of tests, and a unit test library with more unit tests.