asp.net, subtext, code

By now, you’re probably aware of a serious ASP.NET vulnerability going around. The ASP.NET team has been working around the clock to address this. Quite literally: last weekend I came into the office twice (to work on something unrelated) and both times found people working to address the exploit.

Recently, Scott Guthrie posted a follow-up blog post with an additional recommended mitigation you should apply to your servers. I’ve seen a lot of questions about these mitigations, as well as a lot of bad advice. The best advice I’ve seen is this - if you’re running an ASP.NET application, follow the advice in Scott’s blog to the letter. Better to assume your site is vulnerable than to second-guess the mitigation.

In the follow-up post, Scott recommends installing the handy dandy UrlScan IIS Module and applying a specific configuration setting. I’ve used UrlScan in the past and have found it extremely useful in dealing with DOS attacks.

However, when I installed UrlScan, my blog broke. Specifically, all the styles were gone and many images were broken. It took me a while to notice because of my blog cache. It wasn’t till someone commented that my new site design was a tad bland that I hit CTRL+F5 to hard refresh my browser and see the changes.

I looked at the URLs for my CSS and I knew they existed physically on disk, but when I tried to visit them directly, I received a 404 error with some message in the URL about being blocked by UrlScan.

I opened up the UrlScan.ini file, located at:

%windir%\system32\inetsrv\urlscan\UrlScan.ini

And started scanning it. One of the entries that caught my eye was this one.

AllowDotInPath=0         ; If 1, allow dots that are not file
                         ; extensions. The default is 0. Note that
                         ; setting this property to 1 will make checks
                         ; based on extensions unreliable and is
                         ; therefore not recommended other than for
                         ; testing.

That’s when I had a hunch. I started digging around and remembered that I have a custom skin in my blog named “haacked-3.0”. I viewed source and noticed my CSS files and many images were in a URL that looked like:

https://haacked.com/skins/haacked-3.0/style/foo.css

Aha! Notice the dot in the URL segment there?

What I should have done next was go rename my skin. Unfortunately, I have many blog posts with a dot in the slug (and thus in the blog post URL). So I changed that setting to 1 and restarted my web server. There’s a small risk of making my site slightly less secure by doing so, but I’m willing to take that risk as I can’t easily go through and fix every blog post that has a dot in the URL right now.
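
For reference, here’s a sketch of the change I made in UrlScan.ini (the setting lives in the [Options] section; treat this as illustrative rather than a complete file):

[Options]
; Changed from the default of 0 so that URL segments may contain dots.
AllowDotInPath=1

Keep in mind that UrlScan only picks up changes to UrlScan.ini after you restart IIS (an iisreset will do it).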

So if you’ve run into the same problem, it may be that you have dots in your URL that UrlScan is blocking. The best and recommended solution is to remove the dots from the URL if you are able to.

asp.net, asp.net mvc, code

I was drawn to an interesting question on StackOverflow recently about how to override a request for a non-existent .svc request using routing.

One useful feature of ASP.NET routing is that requests for files that physically exist on disk are ignored. Thus requests for static files and for existing .aspx and .svc files don’t run through the routing system.

In this particular scenario, the developer wanted to replace an existing .svc service with a call to an ASP.NET MVC controller. So he deleted the .svc file and added the following route:

routes.MapRoute(
  "UpdateItemApi",
  "Services/api.svc/UpdateItem",
  new { controller = "LegacyApi", action = "UpdateItem" }
);

Since api.svc is not a physical file on disk, at first glance, this should work just fine. But I tried it out myself with a brand new project, and sure enough, it doesn’t work.

Baffling!

So I started digging into it. First, I looked in the Event Viewer and saw the following exception.

System.ServiceModel.EndpointNotFoundException: The service '/Services/api.svc' does not exist.

Ok, so there’s probably something special about the .svc file extension. So I opened up the machine-level web.config file, located here on my machine:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config

And I found this interesting entry within the buildProviders section.

<add extension=".svc" 
  type="System.ServiceModel.Activation.ServiceBuildProvider, 
  System.ServiceModel.Activation,
  Version=4.0.0.0, Culture=neutral, 
  PublicKeyToken=31bf3856ad364e35" 
/>

Ah! There’s a default build provider registered for the .svc extension. And as we all know, build providers allow for runtime compilation of requests for ASP.NET files and occur very early in response to a request.

The fix I came up with was to simply remove this registration within my application’s web.config file.

  <system.web>
    <compilation debug="true" targetFramework="4.0">
      <buildProviders>
        <remove extension=".svc"/>            
      </buildProviders>
    ...

With that change in place, my route with the .svc extension worked. Of course, if I have other .svc services that should continue to work, I’ve pretty much disabled all of them by doing this. However, if those services are in a common subfolder (for example, a folder named services), we may be able to get around this by re-adding the build provider in a web.config file within that common subfolder.

In any case, I thought the question was interesting as it demonstrated the delicate interplay between routing and build providers.

personal, humor

Our eye in the sky reports two angry evil (but devilishly good looking) cyborg units, XSP 2000 and TRS-80, are fast approaching Black Rock City. They are considered very armed and dangerous. In fact, they are mostly armed and not much else.


These cyborgs do not come in peace. I repeat, they are to be considered hostiles. However, we’ve received a secret communiqué that reveals a weakness built into these cyborg models. Due to a lack of TDD during development, a bug in their FOF (friend or foe) system causes them to view anyone offering a frosty beverage as a friend, not foe.

Any attempts to engage with these hostiles will result in calamity unless you offer them an ice cold beverage. For the sake of your beloved city, I suggest stocking up.

Intelligence confirms they are headed towards their evil cyborg camp at 8:15 and Kyoto on the Playa and are predicted to arrive on Tuesday morning. If we band together, we may be able to save our fair city by, once again, offering frosty alcoholic beverages in order to confuse their FOF system.

You’ve been duly warned.

This blog (and associated Twitter account) will go quiet for at least a week as communication systems are nonexistent within the Black Rock City area.

code

On Twitter yesterday I made the following comment:

We’re not here to write software, we’re here to ship products and deliver value. Writing code is just a fulfilling means to that end :)

All I see now is blonde, brunette, redhead.

For the most part, I received a lot of tweets in agreement, but there were a few who disagreed with me:

While I agree in principle, the stated sentiment “justifies” the pervasive lack of quality in development

Doctors with this mentality don’t investigate root causes, because patients don’t define that as valuable

That’s BS. If you live only, or even primarily, for end results you’re probably zombie. We’re here to write code AND deliver value.

I have no problem with people disagreeing with me. Eventually they’ll learn I’m always right. ;) In this particular case, I think an important piece of context was missing.

What’s that you say? Context missing in a 140 character limited tweet? That could never happen, right? Sure, you keep telling yourself that while I pop a beer over here with Santa Claus.

The tweet was a rephrasing of something I told a Program Manager candidate during a phone interview. It just so happens that the role of a program manager at Microsoft is not focused on writing code like developers. But that wasn’t the point I was making. I’ve been a developer in the past (and I still play at being a developer in my own time) and I still think this applies.

What I really meant to say was that we’re not paid to write code. I absolutely love writing code, but in general, it’s not what I’m paid to do and I don’t believe it ever was what I was paid to do even when I was a consultant.

For example, suppose a customer calls me up and says,

“Hey man, I need software that allows me to write my next book. I want to be able to print the book and save the book to disk. Can you do that for me?”

I’m not going to be halfway through writing my first unit test in Visual Studio by the end of that phone call. Hell no! I’ll step away from the IDE and hop over to Best Buy to purchase a copy of Microsoft Word. I’ll then promptly sell it to the customer with a nice markup for my troubles and go sip Pina Coladas on the beach the rest of the day. Because that’s what I do. I sip on Pina Coladas.

At the end of the day, I get paid to provide products to my customers that meet their needs and provide them real value, whether by writing code from scratch or finding something else that already does what they need.

Yeah, that’s a bit of a cheeky example, so let’s look at another one. Suppose a customer really needs a custom software product. I could write the cleanest, most well-crafted code the world has ever seen (what a guy like me might produce during a prototype session on an off night), but if it doesn’t ship, I don’t get paid. The customer doesn’t care how much time I spent writing that code. They’re not going to pay me until I deliver.

Justifying lack of quality

Now, I don’t think, as one Twitterer suggested, that this “justifies a pervasive lack of quality in development” by any means.

Quality in development is important, but it has to be scaled appropriately. Hear that? That’s the sound of a bunch of eggs lofted at my house in angry disagreement. But hear me out before chucking.

A lot of people will suggest that all software should be written with the utmost of quality. But the reality is that we all scale the quality of our code to the needs of the product. If that weren’t true, we’d all use Cleanroom Software Engineering processes like those employed by the Space Shuttle developers.

So why don’t we use these same processes? Because there are factors more important than quality in building a product. While even the Space Shuttle coders have to deal with changing requirements from time to time, in general, the laws of physics don’t change much, last I checked. And certainly, their requirements don’t undergo the level of churn faced by developers trying to satisfy business needs under a rapidly changing business climate. Hence the rise of agile methodologies, which recognize the need to embrace change.

Writing software that meets changing business needs and provides value is more important than writing zero-defect code. While this might seem like I’m giving quality short shrift, another way to look at it is that I’m taking a higher view of what defines quality in the first place. Quality isn’t just the defect count of the code. It’s also how well the code meets the business needs that defines the “quality” of an overall product.

The debunking of the “Betamax is better than VHS” myth is a great example of this idea. While Betamax might have been technically superior to VHS in some ways, when you looked at the “whole product”, it didn’t satisfy customer needs as well as VHS did.

Nate Kohari had an interesting insight on how important delivering value is when he writes about the lessons learned building Agile Zen, a product I think is of wonderful quality.

It also completely changed the way that I look at software. I’ve tried to express this to others since, but I think you just have to experience it firsthand in order to really understand. It’s a unique experience to build a product of your own, from scratch, with no paycheck or deferred responsibility or venture capital to save you — you either create real value for your customers, or you fail. And I don’t like to fail.

Update: Dare Obasanjo wrote a timely blog post that dovetails nicely with the point I’m making. He writes that Google Wave and REST vs SOAP provide a cautionary tale for those who focus too much on solving hard technical problems and miss solving their customers’ actual problems. Sometimes, when we think we’re paid to code, we write way too much code. Sometimes, less code solves the actual problems we’re concerned with just fine.

Code is a part of the whole

The Betamax vs VHS point leads into another point I had in mind when I made the original statement. As narcissistic developers (c’mon admit it. You are all narcissists!), we tend to see the code as being the only thing that matters. But the truth is, it’s one part of the whole that makes a product.

There are many other components that go into a product. A lot of time is spent identifying future business needs to look for areas where software can provide value. After all, there’s no point in writing the code if nobody wants to use it or it doesn’t provide any value.

Not to mention, at Microsoft, we put a lot of effort into localization and globalization, ensuring that the software is translated into multiple languages. On top of this, we have writers who produce documentation, legal teams who work on licenses, marketing teams who market the product, and the list goes on. A lot goes into a product beyond just the code. There are also a lot of factors outside the product that determine its success, such as community ecosystem, availability of add-ons, etc.

I love to code

Now don’t go running to tell on me to my momma.

“Your son is talking trash about writing code!”

It’d break her heart and it’d be completely untrue. I love to code! There, I said it. In fact, I love it so much, I tried to marry it, but then got a much better offer from a very lovely woman. But I digress.

Yes, I love coding so much I often do it for free in my spare time.

I wasn’t trying to make a point that writing code isn’t important and doesn’t provide value. It absolutely does. In fact, I firmly believe that writing code is a huge part of providing that value or we wouldn’t be doing it in the first place. This importance is why we spend so much time and effort trying to elevate the craft and debating the finer points of how to write good software. It’s an essential ingredient to building great software products.

The point I was making is simply that while writing code is a huge factor in providing value, it’s not the part we get paid for. Customers pay to receive value. And they only get that value when the code is in their hands.

Tags: software development

code

In my last blog post, I covered some challenges with versioning methods that differ only by optional parameters. If you haven’t read it, go read it. If I do say so myself, it’s kind of interesting. ;) In this post, I want to cover another very subtle versioning issue with using optional parameters.

At the very end of that last post, I made the following comment.

By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before.

However, this can lead to subtle bugs. Let’s walk through a scenario. Imagine that some class library has the following method in version 1.0.

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

And you have a client application which calls this method like so:

ClassName.Foo("one", "two");

That’s just fine, right? You don’t need to supply a value for the argument s3 because it’s optional. Everything is hunky dory!

But now, the class library author decides to release version 2 of the library and adds the following overload.

public static void Foo(string s1, string s3 = "v2") {
    Console.WriteLine("version 2");
}

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

Notice that they’ve added an overload that only has two parameters. It differs from the existing method by having one fewer required parameter, which is allowed.

As I mentioned before, you’re always allowed to add overloads and maintain binary compatibility. So if you upgrade the class library without recompiling your client application, you’ll still get the following output when you run the application.

version 1

But what happens when you recompile your client application against version 2 of the class library and run it again with no source code changes? The output becomes:

version 2

Wow, that’s pretty subtle. When you recompile, the compiler re-runs overload resolution, and the new two-parameter overload is now an exact match for the two arguments at the call site, so it wins over the three-parameter overload that needs its optional s3 parameter filled in.

It may not seem so bad in this contrived example, but let’s contemplate a real world scenario. Let’s suppose there’s a very commonly used utility method in the .NET Framework that follows this pattern in .NET 4. And in the next version of the framework, a new overload is added with one less required parameter.

Suddenly, when you recompile your application, every call to the original method now resolves to the new one.

Now, I’m not one to be alarmist. Realistically, this is probably very unlikely in the .NET Framework because of stringent backwards compatibility requirements. Very likely, if such a method overload was introduced, calling it would be backwards compatible with calling the original.

But the same discipline might not apply to every library that you depend on today. It’s not hard to imagine that such a subtle versioning issue might crop up in a commonly used 3rd party open source library and it would be very hard for you to even know it exists without testing your application very thoroughly.

The moral of the story is, you do write unit tests, dontcha? Well dontcha?! If not, now’s a good time to start.
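
To make that concrete, here’s a minimal sketch of the kind of pinning test that would catch this regression at recompile time (assuming MSTest, the contrived Foo methods above, and a StringWriter from System.IO to capture console output):

[TestMethod]
public void FooWithTwoArguments_CallsVersionOneOverload() {
  // Capture console output so we can assert which overload actually ran.
  var writer = new StringWriter();
  Console.SetOut(writer);

  ClassName.Foo("one", "two");

  Assert.AreEqual("version 1", writer.ToString().Trim());
}

Against version 1 of the library this passes; recompile against version 2 and it fails, flagging the silent change in overload resolution.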

code

One nice new feature introduced in C# 4 is support for named and optional arguments. While these two features are often discussed together, they really are orthogonal concepts.

Let’s look at a quick example of these two concepts at work. Suppose we have a class with one method having the following signature.

  // v1
  public static void Redirect(string url, string protocol = "http");

This hypothetical library contains a single method that takes in two parameters, a required string url and an optional string protocol.

The following shows the six possible ways this method can be called.

HttpHelpers.Redirect("https://haacked.com/");
HttpHelpers.Redirect(url: "https://haacked.com/");
HttpHelpers.Redirect("https://haacked.com/", "https");
HttpHelpers.Redirect("https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(url: "https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(protocol: "https", url: "https://haacked.com/");

Notice that whether or not a parameter is optional, you can choose to refer to the parameter by name or not. In the last case, notice that the parameters are specified out of order. In this case, using named parameters is required.

The Next Version

One apparent benefit of using optional parameters is that you can reduce the number of overloads your API has. However, relying on optional parameters does have its quirks you need to be aware of when it comes to versioning.

Let’s suppose we’re ready to make version two of our awesome HttpHelpers library and we add an optional parameter to the existing method.

// v2
public static void Redirect(string url, string protocol = "http", bool permanent = false);

What happens when we try to execute the client without recompiling the client application?

We get the following exception message.

Unhandled Exception: System.MissingMethodException: Method not found: 'Void HttpLib.HttpHelpers.Redirect(System.String, System.String)'....

Whoops! By changing the method signature, we caused a runtime breaking change to occur. That’s not good.

Let’s try to avoid a runtime breaking change by adding an overload instead of changing the existing method.

// v2.1
public static void Redirect(string url, string protocol = "http");
public static void Redirect(string url, string protocol = "http", bool permanent = false);

Now, when we run our client application, it works fine. It’s still calling the two parameter version of the method. Adding overloads is never a runtime breaking change.

But let’s suppose we’re now ready to update the client application and we attempt to recompile it. Uh oh!

The call is ambiguous between the following methods or properties: 'HttpLib.HttpHelpers.Redirect(string, string)' and 'HttpLib.HttpHelpers.Redirect(string, string, bool)'

While adding an overload is not a runtime breaking change, it can result in a compile time breaking change. Doh!

Talk about a catch-22! If we add an overload, we break in one way. If we instead add an argument to the existing method, we’re broken in another way.

Why Is This Happening?

When I first heard about optional parameter support, I falsely assumed it was implemented as a feature of the CLR which might allow dynamic dispatch to the method. This was perhaps very naive of me.

My co-worker Levi (no blog still) broke it down for me as follows. Keep in mind, he’s glossing over a lot of details, but at a high level, this is roughly what’s going on.

When optional parameters are in use, the C# compiler follows a simple algorithm to determine which overload of a method you actually meant to call. It considers as a candidate *every* overload of the method, then one by one it eliminates overloads that can’t possibly work for the particular parameters you’re passing in.

Consider these overloads:

public static void Blah(int i);
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

Suppose you make the following method call: Blah(0).

The last candidate is eliminated since the parameter types are incorrect, which leaves us with the first two.

public static void Blah(int i); // Candidate
public static void Blah(int i, int j = 5); // Candidate
public static void Blah(string i = "Hello");  // Eliminated

At this point, the compiler needs to perform a conflict resolution. The conflict resolution is very simple: if one of the candidates has the same number of parameters as the call site, it wins. Otherwise the compiler bombs with an error.

In the case of Blah(0), the first overload is chosen since the number of parameters is exactly one.

public static void Blah(int i); //WINNER!!!
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

This allows you to take an existing method that doesn’t have optional parameters and add overloads that have optional parameters without breaking anybody (except in Visual Basic which has a slightly different algorithm).

But what happens if you need to version an API that already has optional parameters?  Consider this example:

public static void Helper(int i = 2, int j = 3);            // v1
public static void Helper(int i = 2, int j = 3, int k = 4); // added in v2

And say that the call site is Helper(j: 10). Both candidates still exist after the elimination process, but since neither candidate has exactly one parameter, the compiler will not prefer one over the other. This leads to the compilation error we saw earlier about the call being ambiguous.

Conclusion

The reason that optional parameters were introduced to C# 4 in the first place was to support COM interop. That’s it. And now, we’re learning about the full implications of this fact.

If you have a method with optional parameters, you can never add an overload with additional optional parameters out of fear of causing a compile-time breaking change. And you can never remove an existing overload, as this has always been a runtime breaking change.

You pretty much need to treat it like an interface. Your only recourse in this case is to write a new method with a new name.
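
For example, a hypothetical v2 of the HttpHelpers library from earlier could expose the new behavior under a new name rather than a new overload (RedirectPermanent is a name I’ve made up for illustration):

// v2: a new method name instead of a new overload
public static void Redirect(string url, string protocol = "http");
public static void RedirectPermanent(string url, string protocol = "http");

Existing call sites keep binding to Redirect exactly as before, and the new behavior gets its own unambiguous entry point.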

So be aware of this if you plan to use optional arguments in your APIs.

UPDATE: By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before. However, this can lead to other subtle versioning issues as my follow-up post describes.

code

UPDATE: A reader named Matthias pointed out there is a flaw in my code. Thanks Matthias! I’ve corrected it in my GitHub Repository. The code would break if your attribute had an array property or constructor argument.

I’ve been working on a lovely little prototype recently but ran into a problem where my code receives a collection of attributes and needs to change them in some way and then pass the changed collection along to another method that consumes the collection.


I want to avoid changing the attributes directly, because when you use reflection to retrieve attributes, those attributes may be cached by the framework. So changing an attribute is not a safe operation, as you may be changing the attribute for everyone else who retrieves it.

What I really wanted to do is create a copy of all these attributes, and pass the collection of copied attributes along. But how do I do that?

CustomAttributeData

Brad Wilson and David Ebbo to the rescue! In a game of geek telephone, David told Brad a while back, who then recently told me, about a little class in the framework called CustomAttributeData.

This class takes advantage of a feature of the framework known as a Reflection-Only context. This allows you to examine an assembly without instantiating any of its types. This is useful, for example, if you need to examine an assembly compiled against a different version of the framework or a different platform.

Copying an Attribute

As you’ll find out, it’s also useful when you need to copy an attribute. This might raise the question in your head, “if you have an existing attribute instance, why can’t you just copy it?” The problem is that a given attribute might not have a default constructor. So then you’re left with the challenge of figuring out how to populate the parameters of a constructor from an existing instance of an attribute. Let’s look at a sample attribute.

[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public class SomethingAttribute : Attribute {
  public SomethingAttribute(string readOnlyProperty) {
    ReadOnlyProperty = readOnlyProperty;
  }
  public string ReadOnlyProperty { get; private set; }
  public string NamedProperty { get; set; }
  public string NamedField;
}

And here’s an example of this attribute applied to a class a couple of times.

[Something("ROVal1", NamedProperty = "NVal1", NamedField = "Val1")]
[Something("ROVal2", NamedProperty = "NVal2", NamedField = "Val2")]
public class Character {
}

Given an instance of this attribute, I might be able to figure out how the constructor argument should be populated by assuming a convention of using the property with the same name as the argument. But what if the attribute had a constructor argument that had no corresponding property? Keep in mind, I want this to work with arbitrary attributes, not just ones that I wrote.

CustomAttributeData saves the day!

This is where CustomAttributeData comes into play. An instance of this class tells you everything you need to know about the attribute and how to construct it. It provides access to the constructor, the constructor parameters, and the named parameters used to declare the attribute.

Let’s look at a method that will create an attribute instance given an instance of CustomAttributeData.

public static Attribute CreateAttribute(this CustomAttributeData data) {
  var arguments = from arg in data.ConstructorArguments
                  select arg.Value;

  var attribute = data.Constructor.Invoke(arguments.ToArray()) as Attribute;

  foreach (var namedArgument in data.NamedArguments) {
    var propertyInfo = namedArgument.MemberInfo as PropertyInfo;
    if (propertyInfo != null) {
      propertyInfo.SetValue(attribute, namedArgument.TypedValue.Value, null);
    }
    else {
      var fieldInfo = namedArgument.MemberInfo as FieldInfo;
      if (fieldInfo != null) {
        fieldInfo.SetValue(attribute, namedArgument.TypedValue.Value);
      }
    }
  }

  return attribute;
}

The code sample demonstrates how we use the information within the CustomAttributeData instance to figure out how to create an instance of the attribute described by the data.

So how did we get the CustomAttributeData instance in the first place? That’s pretty easy: we call the CustomAttributeData.GetCustomAttributes() method. With these pieces in hand, it’s pretty straightforward to copy the attributes on a type or member. Here’s a set of extension methods I wrote to do just that.

NOTE: The following code does not handle array properties and constructor arguments correctly. Check out my repository for the correct code.

public static IEnumerable<Attribute> GetCustomAttributesCopy(this Type type) {
  return CustomAttributeData.GetCustomAttributes(type).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(this Assembly assembly) {
  return CustomAttributeData.GetCustomAttributes(assembly).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(this MemberInfo memberInfo) {
  return CustomAttributeData.GetCustomAttributes(memberInfo).CreateAttributes();
}

public static IEnumerable<Attribute> CreateAttributes(this IEnumerable<CustomAttributeData> attributesData) {
  return from attributeData in attributesData
         select attributeData.CreateAttribute();
}

And here’s a bit of code I wrote in a console application to demonstrate the usage.

foreach (var instance in typeof(Character).GetCustomAttributesCopy()) {
  var somethingAttribute = instance as SomethingAttribute;
  Console.WriteLine("ReadOnlyProperty: " + somethingAttribute.ReadOnlyProperty);
  Console.WriteLine("NamedProperty: " + somethingAttribute.NamedProperty);
  Console.WriteLine("NamedField: " + somethingAttribute.NamedField);
}

And there you have it, I can grab the attributes from a type and produce a copy of those attributes.

With this out of the way, I can hopefully continue with my original prototype which led me down this rabbit hole in the first place. It always seems to happen this way, where I start a blog post, only to start writing a blog post to support that blog post, and then a blog post to support that one. Much like a dream within a dream within a dream. ;)

asp.net, asp.net mvc, code

In ASP.NET MVC 3 Preview 1, we introduced some syntactic sugar for creating and accessing view data using new dynamic properties.

Sugar, it’s not just for breakfast.

Within a controller action, the ViewModel property of Controller allows setting and accessing view data via property accessors that are resolved dynamically at runtime. From within a view, the View property provides the same thing (see the addendum at the bottom of this post for why these property names do not match).

Disclaimer

This blog post talks about ASP.NET MVC 3 Preview 1, which is a pre-release version. Specific technical details may change before the final release of MVC 3. This release is designed to elicit feedback on features with enough time to make meaningful changes before MVC 3 ships, so please comment on this blog post if you have comments.

Let’s take a look at the old way and the new way of doing this:

The old way

The following is some controller code that adds a string to the view data.

public ActionResult Index() {
  ViewData["Message"] = "Some Message";
  return View();
}

The following is code within a view that accesses the view data we supplied in the controller action.

<h1><%: ViewData["Message"] %></h1>

The new way

This time around, we use the ViewModel property which is typed as dynamic. We use it like we would any property.

public ActionResult Index() {
  ViewModel.Message = "Some Message";
  return View();
}

And we reference it in a Razor view. Note that this works in a WebForms view too.

<h1>@View.Message</h1>

Note that View.Message is equivalent to View["Message"].

Going beyond properties

However, what might not be clear to everyone is that you can also store and call methods using the same approach. Just for fun, I wrote an example of doing this.

In the controller, I defined a lambda expression that takes in an index and two strings. It returns the first string if the index is even, and the second string if the index is odd. It’s very simple.

The next thing I do is assign that lambda to the Cycle property of ViewModel, which is created on the spot since ViewModel is dynamic.

public ActionResult Index() {
  ViewModel.Message = "Welcome to ASP.NET MVC!";

  Func<int, string, string, string> cycleMethod = 
    (index, even, odd) => index % 2 == 0 ? even : odd;
  ViewModel.Cycle = cycleMethod;

  return View();
}

Now, I can dynamically call that method from my view.

<table>
@for (var i = 0; i < 10; i++) {
    <tr class="@View.Cycle(i, "even-css", "odd-css")">
        <td>@i</td>
    </tr>
}
</table>

As a fan of dynamic languages, I find this technique to be pretty slick. :)

The point of this blog post was to show that this is possible, but it raises the question, “why would anyone want to do this over writing a custom helper method?”

Very good question! Right now, it’s mostly a curiosity to me, but I can imagine cases where this might come in handy. However, if you re-use such view functionality or really need Intellisense, I’d highly recommend making it a helper method, as sketched below. I think this approach works well for rapid prototyping and maybe for one-time-use helper functions.
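
For comparison, here’s a rough sketch of what the helper method version might look like (the class and method names are my own invention):

public static class CycleExtensions {
  // Returns the first value for even indices and the second for odd ones.
  public static string Cycle(this HtmlHelper html, int index, string even, string odd) {
    return index % 2 == 0 ? even : odd;
  }
}

You’d then call it from the view as @Html.Cycle(i, "even-css", "odd-css") and get Intellisense in the bargain.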

Perhaps you’ll find even better uses I didn’t think of at all.

Addendum: The Property name mismatch

Earlier in this post I mentioned the mismatch between property names, ViewModel vs View. I also talked about this in a video I recorded for MvcConf on MVC 3 Preview 1. Originally, we wanted to pick a nice terse name for this property so when referencing it in the view, there is minimal noise. We liked the property View for this purpose and implemented it for our view page first.

But when we went to port this property over to the Controller, we realized it wouldn’t work. Anyone care to guess why? Yep, that’s right. Controller already has a method named View so it can’t also have a property named the same thing. So we called it ViewModel for the time being and figured we’d change it once we came up with a better name.

So far, we haven’t come up with a better name that’s both short and descriptive. And before you suggest it, the acronym of “View Data” is not an option.

If you have a better name, do suggest it. :)

Addendum 2: Unit Testing

Someone on Twitter asked me how you would unit test this action method. Here’s an example of a unit test that shows you can simply call this dynamic method directly from within a unit test (see the act section below).

[TestMethod]
public void CanCallCycle() {
  // arrange
  var controller = new HomeController();
  controller.Index();

  // act
  string even = controller.ViewModel.Cycle(0, "even", "odd");

  // assert
  Assert.AreEqual("even", even);
}

Tags: aspnetmvc, dynamic, viewdata

asp.net, code, asp.net mvc

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Feels like just yesterday that we released ASP.NET MVC 2 to the world and here I am already talking about an early preview. In a way, we’re right on schedule. It was almost exactly a year ago that we released Preview 1 of ASP.NET MVC 2.

Today I’m happy to announce that ASP.NET MVC 3 Preview 1 is available for download. Give it a try and let us know what you think. Some key notes before you give it a whirl:

  • ASP.NET MVC 3 Preview 1 tooling requires Visual Studio 2010
  • ASP.NET MVC 3 Preview 1 runtime requires the ASP.NET 4 runtime

As usual, to find out what’s in this release, check out the release notes. Also at the recent MVCConf, a virtual conference about ASP.NET MVC, I recorded a talk that provided a sneak peek at ASP.NET MVC 3 Preview 1. The audio quality isn’t great, but I do demo some of the key new features so be sure to check it out.

So what’s in this release that I’m excited about? Here’s a small sampling:

  • Razor View Engine, which ScottGu wrote about recently. Note that for Preview 1, we only support the C# version (CSHTML). In later previews, we will add support for the VB.NET version (VBHTML). Also, Intellisense support for Razor syntax in Visual Studio 2010 will be released later.
  • Dependency Injection hooks using a service locator interface. Brad Wilson should have a few blog posts on this over the next few days.
  • Support for .NET 4 Data Annotation and Validation attributes.
  • Add View dialog support for multiple view engines including custom view engines.
  • Global Action Filters

In the next few days you should see more details about each of these areas start to show up in various blog posts. I’ll try to keep this blog post updated with relevant blog posts so you can find them all. Enjoy!

asp.net mvc, asp.net, code

I wanted to confirm something about how to upload a file or set of files with ASP.NET MVC and the first search result for the phrase “uploading a file with asp.net mvc” is Scott Hanselman’s blog post on the topic.

His blog post is very thorough and helps provide a great understanding of what’s happening under the hood. The only complaint I have is that the code could be much simpler thanks to improvements we’ve made in ASP.NET MVC 2. I write this blog post in the quixotic hope of knocking his post from the #1 spot.

Uploading a single file

Let’s start with the view. Here’s a form that will post back to the current action.

<form action="" method="post" enctype="multipart/form-data">
  
  <label for="file">Filename:</label>
  <input type="file" name="file" id="file" />

  <input type="submit" />
</form>

Here’s the action method that this view will post to, which saves the file into a directory named “uploads” in the App_Data folder.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file) {
            
  if (file.ContentLength > 0) {
    var fileName = Path.GetFileName(file.FileName);
    var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
    file.SaveAs(path);
  }
            
  return RedirectToAction("Index");
}

Notice that the argument to the action method is an instance of HttpPostedFileBase. ASP.NET MVC 2 introduces a new value providers feature which I’ve covered before.

Whereas model binders are used to bind incoming data to an object model, value providers provide an abstraction for the incoming data itself.

In this case, there’s a default value provider called the HttpFileCollectionValueProvider which supplies the uploaded files to the model binder. Also notice that the argument name, file, is the same as the name of the file input. This is important for the model binder to match up the uploaded file to the action method argument.

Uploading multiple files

In this scenario, we want to upload a set of files. We can simply have multiple file inputs all with the same name.

<form action="" method="post" enctype="multipart/form-data">
    
  <label for="file1">Filename:</label>
  <input type="file" name="files" id="file1" />
  
  <label for="file2">Filename:</label>
  <input type="file" name="files" id="file2" />

  <input type="submit"  />
</form>

Now, we just tweak our controller action to accept an IEnumerable of HttpPostedFileBase instances. Once again, notice that the argument name matches the name of the file inputs.

[HttpPost]
public ActionResult Index(IEnumerable<HttpPostedFileBase> files) {
  foreach (var file in files) {
    if (file.ContentLength > 0) {
      var fileName = Path.GetFileName(file.FileName);
      var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
      file.SaveAs(path);
    }
  }
  return RedirectToAction("Index");
}

Yes, it’s that easy. :)

Tags: aspnetmvc, upload

asp.net, asp.net mvc, razor

UPDATE: Check out my Razor View Syntax Quick Reference for a nice quick reference to Razor.

There’s an old saying, “Good things come to those who wait.” I remember when I first joined the ASP.NET MVC project, I (and many customers) wanted to include a new streamlined custom view engine. Unfortunately, at the time it wasn’t in the cards since we had higher priority features to implement.

Well, the time for a new view engine has finally come, as announced by Scott Guthrie in this very detailed blog post.

Photo by "clix"
http://www.sxc.hu/photo/955098

While I’m very excited about the new streamlined syntax, there’s a lot under the hood I’m also excited about.

Andrew Nurse, who writes the parser for the Razor syntax, provides more under-the-hood details in this blog post. Our plan for the next version of ASP.NET MVC is to make this the new default view engine, but for backwards compatibility we’ll keep the existing WebForm based view engine.

As part of that work, we’re also focusing on making sure ASP.NET MVC tooling supports any view engine. In ScottGu’s blog post, if you look carefully, you’ll see Spark listed in the view engines drop down in the Add View dialog. We’ll make sure it’s trivially easy to add Spark, Haml, whatever, to an ASP.NET MVC project. :)

Going back to Razor, one benefit that I look forward to is that unlike an ASPX page, it’s possible to fully compile a CSHTML page without requiring the ASP.NET pipeline. So while you can allow views to be compiled via the ASP.NET runtime, it may be possible to fully compile a site using T4 for example. A lot of cool options are opened up by a cleanly implemented parser.

In the past several months, our team has been working with other teams around the company to take a more holistic view of the challenges developing web applications. ScottGu recently blogged about the results of some of this work:

  • SQLCE 4 – Medium trust x-copy deployable database for ASP.NET.
  • IIS Express – A replacement for Cassini that does the right thing.

The good news is there’s a lot more coming! In some cases, we had to knock some heads together (our heads and the heads of other teams) to drive focus on what developers really want and need rather than too much pie-in-the-sky architectural astronomy.

I look forward to talking more about what I’ve been working on when the time is right. :)

subtext, personal

My son and I returned last night from a week-long vacation to visit my parents in Anchorage, Alaska. Apparently, having the boys out of the house was quite the vacation for my wife as well. :)

We had a great time watching the World Cup and going on outings to the zoo as well as hiking.


Well, at least one of us was hiking while another was just enjoying the ride. We hiked up a trail to Flattop which has spectacular views of Anchorage. Unfortunately, we didn’t make it all the way to the top as the trail became a bit too much while carrying a toddler who was more interested in watching Go, Diego, Go episodes on his iPod.


Funny how all that “hiking” works up an appetite.


Also, while in Alaska I gave a talk on ASP.NET MVC 2 to the local .NET User Group. It was their second meeting ever and somehow, in the delirium of perpetual sunlight, I spent two hours talking! It was slated to be a one-hour talk.


I didn’t see a hint of resentfulness in the group though as they peppered me with great questions after the talk. Apparently, some of them are fans of .NET. ;)

The other thing I was able to do while in Alaska was finish up a bug fix release of Subtext in the wake of our big 2.5 release. There were some high priority bugs in that release. Simone has the details and breakdown on the Subtext 2.5.1 release.

code

In my last blog post, I wrote about the proper way to check for empty enumerations and proposed an IsNullOrEmpty method for collections which sparked a lot of discussion.

This post covers a similar issue, but from a different angle. A very long time ago, I wrote about my love for the null coalescing operator. However, over time, I’ve found it to be not quite as useful as it could be when dealing with strings. For example, here’s the code I might want to write:

public static void DoSomething(string argument) {
  var theArgument = argument ?? "defaultValue";
  Console.WriteLine(theArgument);
}

But here’s the code I actually end up writing:

public static void DoSomething(string argument) {
  var theArgument = argument;
  if(String.IsNullOrWhiteSpace(theArgument)) {
    theArgument = "defaultValue";
  }
  Console.WriteLine(theArgument);
}

The issue here is that I want to treat an argument that consists only of whitespace as if the argument is null and replace the value with my default value. This is something the null coalescing operator won’t help me with.

This led me to jokingly propose a null or empty coalescing operator on Twitter with the syntax ???. This would allow me to write something like:

var s = argument ??? "default";

Of course, that doesn’t go far enough because wouldn’t I also need a null or whitespace coalescing operator???? ;)

Perhaps a better approach than the PERLification of C# is to write an extension method that normalizes strings in such a way that you can use the tried and true (and existing!) null coalescing operator.

Thus I present to you the AsNullIfEmpty and AsNullIfWhiteSpace methods!

Here’s my previous example refactored to use these methods.

public static void DoSomething(string argument) {
  var theArgument = argument.AsNullIfWhiteSpace() ?? "defaultValue";

  Console.WriteLine(theArgument);
}

You can also take the same approach with collections.

public static void DoSomething(IEnumerable<string> argument) {
  var theArgument = argument.AsNullIfEmpty() ?? new string[]{"default"};

  Console.WriteLine(theArgument.Count());
}

The following is the code for these simple methods.

public static class EnumerationExtensions {
  public static string AsNullIfEmpty(this string items) {
    if (String.IsNullOrEmpty(items)) {
      return null;
    }
    return items;
  }

  public static string AsNullIfWhiteSpace(this string items) {
    if (String.IsNullOrWhiteSpace(items)) {
      return null;
    }
    return items;
  }
        
  public static IEnumerable<T> AsNullIfEmpty<T>(this IEnumerable<T> items) {
    if (items == null || !items.Any()) {
      return null;
    }
    return items;
  }
}

Another approach that some commenters to my last post recommended is to write a Coalesce method. That’s also a pretty straightforward approach which I leave as an exercise to the reader. :)
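
For the curious, here’s one minimal sketch of such a Coalesce method (my own hypothetical version; it treats whitespace as empty and relies on System.Linq, and I’m assuming it lives in a static helper class, say StringHelpers):

public static string Coalesce(params string[] values) {
  // Returns the first value with actual content, or null if none qualify.
  return values.FirstOrDefault(value => !String.IsNullOrWhiteSpace(value));
}

With that in place, var s = StringHelpers.Coalesce(argument, "defaultValue"); reads almost as cleanly as the operator I was wishing for.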

code

While spelunking in some code recently I saw a method that looked something like this:

public void Foo<T>(IEnumerable<T> items) {
  if(items == null || items.Count() == 0) {
    // Warn about emptiness
  }
}

This method accepts a generic enumeration and then proceeds to check if the enumeration is null or empty. Do you see the potential problem with this code? I’ll give you a hint: it’s this line:

items.Count() == 0

What’s the problem? Well that line right there has the potential to be vastly inefficient.

If the caller of the Foo method passes in an enumeration that doesn’t implement ICollection<T> (for example, an IQueryable as a result from an Entity Framework or Linq to SQL query) then the Count method has to iterate over the entire enumeration just to evaluate this expression.

In cases where the enumeration that’s passed in to this method does implement ICollection<T>, this code is fine. The Count method has an optimization in this case where it will simply check the Count property of the collection.

If we translated this code to English, it’s asking the question “Is the count of this enumeration equal to zero?” But that’s not really the question we’re interested in. What we really want to know is the answer to the question “Are there any elements in this enumeration?”

When you think of it that way, the solution here becomes obvious. Use the Any extension method from the System.Linq namespace!

public void Foo<T>(IEnumerable<T> items) {
  if(items == null || !items.Any()) {
    // Warn about emptiness
  }
}

The beauty of this method is that it only needs to call MoveNext on the underlying enumerator once! You could have an infinitely large enumeration, but Any will return a result immediately.
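
To see why, note that the parameterless Any is conceptually equivalent to this simplified sketch (the real framework method also validates its argument):

public static bool Any<T>(this IEnumerable<T> items) {
  using (var enumerator = items.GetEnumerator()) {
    // A single MoveNext answers the question: is there at least one element?
    return enumerator.MoveNext();
  }
}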

Even better, since this pattern comes up all the time, consider writing your own simple extension method.

public static bool IsNullOrEmpty<T>(this IEnumerable<T> items) {
    return items == null || !items.Any();
}

Now, with this extension method, our original method becomes even simpler.

public void Foo<T>(IEnumerable<T> items) {
  if(items.IsNullOrEmpty()) {
    // Warn about emptiness
  }
}

With this extension method in your toolbelt, you’ll never inefficiently check an enumeration for emptiness again.

subtext

Deploying a Subtext skin used to be one of the biggest annoyances with Subtext prior to version 2.5. The main problem was that you couldn’t simply copy a skin folder into the Skins directory and just have it work because the configuration for a given skin is centrally located in the Skins.config file.

In other words, a skin wasn’t self contained in a single folder. With Subtext 2.5, this has changed. Skins are fully self contained and there is no longer a need for a central configuration file for skins.

What this means for you is that it is now way easier to share skins. When you get a skin folder, you just drop it into the /skins directory and you’re done!

In most cases, there’s no need for any configuration file whatsoever. If your skin contains a CSS stylesheet named style.css, that stylesheet is automatically picked up. Also, with Subtext 2.5, you can provide a thumbnail for your skin by adding a file named SkinIcon.png into your skin folder. That’ll show up in the improved Skin picker.
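
So a hypothetical self-contained skin (I’ll call it MySkin) might be laid out like this:

/Skins
    /MySkin
        PageTemplate.ascx   <-- the skin’s template
        style.css           <-- picked up automatically
        SkinIcon.png        <-- thumbnail shown in the skin picker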

When To Use A Skin.config File

Each skin can have its own manifest file named Skin.config. This file is useful when you have multiple CSS and JavaScript files you’d like to include other than style.css (though even in this case it’s not absolutely necessary, as you can reference the stylesheets in PageTemplate.ascx directly).

The other benefit of using the skin.config file to reference your stylesheets and script files is you can take advantage of our ability to merge these files together at runtime using the StyleMergeMode and ScriptMergeMode attributes.

Also, in some cases, a skin can have multiple themes differentiated by stylesheet as described in this blog post. A skin.config file can be used to specify these skin themes and their associated CSS file.

Creating a Skin.config file

Creating a skin.config file shouldn’t be too difficult. If you already have a Skins.User.config file, it’s a matter of copying the section of that file that pertains to your skin into a skin.config file within your skin folder and removing some extraneous nodes.

Here’s an example of a new skin.config file for my personal skin.

<?xml version="1.0" encoding="utf-8" ?>
<SkinTemplates>
    <SkinTemplate Name="Haacked-3.0">
        <Scripts>
            <Script Src="~/scripts/lightbox.js" />
            <Script Src="~/scripts/XFNHighlighter.js" />
        </Scripts>
        <Styles>
            <Style href="~/css/lightbox.css" />
            <Style href="~/skins/_System/csharp.css" />
            <Style href="~/skins/_System/commonstyle.css" />
            <Style href="~/skins/_System/commonlayout.css" />
            <Style href="~/scripts/XFNHighlighter.css" />
            <Style href="IEPatches.css" conditional="if IE" />
        </Styles>
    </SkinTemplate>
</SkinTemplates>

If you compare it to the old format, you’ll notice the <Skins> element is gone and there’s no need to specify the TemplateFolder since it’s assumed the folder containing this file is the template folder.

Hopefully soon, we’ll provide more comprehensive documentation on our wiki so you don’t have to go hunting around my blog for information on how to skin your blog. My advice is to copy an existing skin and just tweak it.


Wow, has it already been over a year since the last major version of Subtext? Apparently so.

Today I’m excited to announce the release of Subtext 2.5. Most of the focus on this release has been under the hood, but there are some great new features you’ll enjoy outside of the hood.

Major new features

  • New Admin Dashboard: When you log in to the admin section of your blog after upgrading, you’ll notice a fancy schmancy new dashboard that summarizes the information you care about in a single page. The other thing you’ll notice in the screenshot is that the admin section received a face lift, with a new, more polished look and feel and many usability improvements.
  • Improved Search: We’ve implemented a set of great search improvements. The biggest change is the work that Simone Chiaretta did integrating Lucene.NET, a .NET search engine, as our built-in search engine. Be sure to check out his tutorial on Lucene.NET. Also, when clicking through to Subtext from a search engine result, we’ll show related blog posts. Subtext also implements the OpenSearch API.

Core Changes

We’ve put in huge amounts of effort into code refactoring, bulking up our unit test coverage, bug fixes, and performance improvements. Here’s a sampling of some of the larger changes.

  • Routing: We’ve replaced the custom regex based URL handling with ASP.NET Routing using custom routes based on the page routing work in ASP.NET 4. This took a lot of work, but will lead to better control over URLs in the long run.
  • Dependency Injection: Subtext now uses Ninject, an open source Dependency Injection container, for its Inversion of Control (IoC) needs. This improves the extensibility of Subtext.
  • Code Reorganization and Reduced Assemblies: A lot of work went into better organizing the code into a more sane and understandable structure. We also reduced the overall number of assemblies in an attempt to improve application startup times.
  • Performance Optimizations: We made a boatload of code-focused performance improvements as well as caching improvements to reduce the number of SQL queries per request.
  • Skinning Improvements: This topic deserves its own blog post, but to summarize, skins are now fully self contained within a folder. Prior to this version, adding a new skin required adding a skin folder to the /Skins directory and then modifying a central configuration file. We’ve removed that second step by having each skin contain its own manifest, if needed. Most skins don’t need the manifest if they follow a set of skin conventions. For a list of breaking changes, check out our wiki.

Upgrading

Because of all the changes and restructuring of files and directories, upgrading is not as straightforward as it has been in the past.

To help with all the necessary changes, we’ve written a tool that will attempt to upgrade your existing Subtext blog.

I’ve recorded a screencast that walks through how to upgrade a blog to Subtext 2.5 using this new tool.

Installation

Installation should be as easy and straightforward as always, especially if you install it using the Web Platform Installer (Note, it may take up to a week for the new version to show up in Web PI). If you’re deploying to a host that supports SQLExpress, we’ve included a freshly installed database in the App_Data folder.

To install, download the zip file here and follow the usual Subtext installation instructions.

More information

We’ll be updating our project website with more information about this release in the next few weeks and I’ll probably post a blog post here and there.

I’d like to thank the entire Subtext team for all their contributions. This release probably contains the most diversity of patches and commits of all our releases with lots of new people pitching in to help.

personal

I saw a recent Twitter thread discussing the arrogance of Steve Jobs. One person (ok, it was my buddy Rob) postulated that it was this very arrogance that led Apple to their successes.

I suppose it’s quite possible that it was a factor, but I tend to think Steve Jobs’s vision and drive were much bigger factors.

This idea is a reflection of a pervasive belief out there that arrogance is excusable, perhaps even acceptable and admirable, in successful people and institutions. In contrast, I think we’d all agree that arrogance is universally detestable in unsuccessful people.

But is arrogance necessary for success? I certainly don’t think so. I think there’s an alternative characteristic that can lead to just as much success.

Joy.


My example here is the most successful national soccer team ever, Brazil. They’ve won more World Cups than any other team, and yet the one word you’d be hard-pressed to find anyone using to describe them is “Arrogant.” (Yes, I know that many from Argentina would disagree, but this is the perception out there.) ;)

Instead, the word often associated with them is “Joy.” When Brazil plays, their joy for the beautiful game is so infectious that you can’t help but share in it when they win. Heck, even as you’re grumbling about your own team losing to them, it’s hard not to join in the Samba spirit (again, unless you’re from Argentina).

This is a team that has been incredibly successful over the years and arrogance was unnecessary.

I think there are probably many examples in the technology and business worlds where incredible success and visionary leadership came from joy in the work rather than arrogance. Have any examples for me? Leave them in the comments.

The World Cup starts in 6 days! I’ll try not to make all my posts soccer-themed if I can help it. :)

asp.net mvc, personal, open source comments suggest edit

The June issue (also in PDF) of the online PragPub magazine, published by the Pragmatic Bookshelf, has two articles on ASP.NET MVC.

The first is called Agile Microsoft and is an introduction to ASP.NET MVC geared towards those who’ve never seen it. It’s nice seeing ASP.NET MVC featured in this magazine, which, in its own words, tends to cater to a non-Microsoft crowd:

To some developers, Microsoft’s technologies are a given, the river they swim in. To others, not using Microsoft’s tools is the default. PragPub being an open source- and Agile-friendly kind of magazine, we tend to connect with the latter group.

So when we get an article titled “Agile Microsoft,” we are naturally intrigued. And we think you’ll also be intrigued by Jonathan McCracken’s introduction to ASP.NET MVC, a framework that some have called “Rails for .NET.”

The second article is an interview with me entitled “Why ASP.NET MVC?” in which I ramble on about how wandering the halls of Microsoft, if it doesn’t get you tossed out by security, might land you a great job, as well as about ASP.NET MVC, TDD, and Open Source Software.

Something I found interesting was that my interviewer was Michael Swaine, who co-wrote Fire in the Valley, the book the movie Pirates of Silicon Valley (no Johnny Depp in this one) was based on. The movie (and book) covers the rise of the computer industry and the stories of Steve Jobs and Bill Gates, among others.

code comments suggest edit

A while ago I was talking with my manager at the time about traits that we value in a Program Manager. He related an anecdote about an interview he gave where it became clear that the candidate did not deal well with ambiguity.

This is an important trait for nearly every job, but especially for PMs, as projects can often change on a dime and it’s important to understand how to make progress amid ambiguity and eventually drive toward resolving it.

Lately, I’ve been asking myself the question, doesn’t this apply just as much to software?

One of the most frustrating aspects of software today is that it doesn’t deal well with ambiguity. You could take the most well-crafted, robust piece of software, and a cosmic ray could flip one bit in memory and potentially take the whole thing down.

The most common case of this fragility that we experience is in the form of breaking changes. Pretty much all applications have dependencies on other libraries or frameworks. One little breaking change in such a library or framework, and upgrading that dependency can quickly take down your application.

Someday, I’d love to see software that really did deal well with ambiguity.

For example, let’s imagine a situation where a call to a method whose signature has changed wouldn’t result in a failure but would be resolved automatically.

In the .NET world, we have something close with the concept of assembly binding redirection, which allows you to redirect calls compiled against one version of an assembly to another version. This is great as long as none of the signatures of existing methods have changed. I can imagine taking this further and allowing application developers to apply redirection to method calls to account for such changes. In many cases, the changed method itself could indicate how to perform this redirection. In the simplest case, you simply keep the old method and have it call the new method, as in the sketch below.
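
To make that simplest case concrete, here’s a sketch of the keep-the-old-method approach; the FeedService type and its methods are invented for illustration.

```csharp
using System;

public class FeedService
{
    // The old signature survives as a thin shim that forwards to the new
    // method, so callers built against the old version keep working.
    [Obsolete("Use GetFeed(string name, int pageSize) instead.")]
    public string GetFeed(string name)
    {
        return GetFeed(name, 25); // forward with a sensible default
    }

    // The new signature with the extra parameter.
    public string GetFeed(string name, int pageSize)
    {
        return string.Format("{0} ({1} items)", name, pageSize);
    }
}
```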

More challenging is the case where the semantics of the call itself have changed. Perhaps the signature hasn’t changed, but the behavior has changed in subtle ways that could break existing applications.

In the near future, I think it would be interesting to look at ways that software that introduces such breaks could also provide hints at how to resolve them. Perhaps code contracts or other preconditions could look at how the method is called and, in cases where the call would break, attempt to resolve it.
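
Code Contracts in .NET 4 already let a method declare its assumptions in a machine-readable form; the speculative part is tooling that could act on those declarations to resolve breaks. Here’s a minimal sketch of the declaration side, with hypothetical type and method names.

```csharp
using System.Diagnostics.Contracts;

public class PostRepository
{
    // The precondition and postcondition record assumptions that tools
    // (the static checker, or future break-resolving tooling) can inspect.
    public string[] GetRecentPosts(int count)
    {
        Contract.Requires(count > 0);
        Contract.Ensures(Contract.Result<string[]>() != null);

        return new string[count]; // stand-in for a real query
    }
}
```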

Perhaps further in the future, a promising approach would be to move away from programming with objects and functions and toward building software out of autonomous software agents that communicate with each other via messages as the primary building block of programs.

In theory, autonomous agents are aware of their environment and flexible enough to deal with fuzzy situations and make decisions without human interaction. In other words, they know how to deal with some level of ambiguity.

I imagine that even in those cases, situations would arise that the software couldn’t handle without human involvement, but hey, that happens today even with humans. I occasionally run into situations I’m not sure how to resolve and I enlist the help of my manager and co-workers to get to a resolution. Over time, agents should be able to employ similar techniques of enlisting other agents in making such decisions.

Thus when an agent is upgraded, ideally the entire system continues to function without coming to a screeching halt. Perhaps there’s a brief period where the system’s performance is slightly degraded as all the agents learn about the newly upgraded agent and verify their assumptions, etc. But overall, the system deals with the changes and moves on.

A boy can dream, eh? In the meantime, if reducing the tax of backwards compatibility is the goal, there are other avenues to look at. For example, you could apply isolation using virtualization so that an application always runs in the environment it was designed for, thus removing any need for dealing with ambiguity (apart from killer cosmic rays).

In any case, I’m excited to see what new approaches will appear over the next few decades in this area that I can’t even begin to anticipate.

personal comments suggest edit

The last time I wrote about one of my hiking adventures, it started off great, but really didn’t end well. But I survived, so on that scale, yes it did end well! It’s a matter of perspective.

On Saturday, I went on my first hike of the spring to Lake Serene and Bridal Veil Falls. This hike is really two hikes in one. The main destination is Lake Serene, but there’s an absolutely wonderful half-mile (one mile round trip) side trip to Bridal Veil Falls on the way to the lake.

The trail starts in the small town of Index in the county of HomeController (sorry, I couldn’t resist). The entire trail is lush with greenery, as you would expect in the Pacific Northwest. All along the trail are many little waterfalls and river crossings like the one seen here.

IMG_0346

The early part of the hike is dominated by moss covered trees and bushes. Higher up there’s less moss, but just as many trees.

IMG_0359

Nearly two miles in, there’s a juncture with a sign pointing to the right towards Bridal Veil Falls. There’s a juncture before this one without a sign; don’t take that one, take this one.

The roar of the falls served as our guide through the series of switchbacks leading us to a grand view.

IMG_0360

There are two viewing platforms, both with great views of the falls.

The trail is well marked and easy to follow, though it wasn’t without its occasional obstacle. Nothing a strong man like myself can’t handle, though.

IMG_0372

As we got closer to the lake, we caught glimpses of snow-capped mountains jutting into the sky. Near the end of the trail, we rounded a bend and were greeted with the sight of a calm lake nestled in a snow-covered valley surrounded by jagged peaks. The lake lives up to its name.

IMG_0376

We followed a little trail along the lake to a large rock face where we were able to get a better view of the lake. There just happened to be a large group of friends already there, breaking the sense of solitude.

IMG_0386

But who are we to blame them? The view was beautiful despite the clouds and rain rolling in right as we arrived.

IMG_0392

If you live in the Seattle or Bellevue area, I highly recommend this hike. One member of the large group told us that he had done the same hike a month earlier, when the lake was still frozen over, and they sat back and enjoyed the dramatic sight of constant avalanches on the other side of the lake. We didn’t get the opportunity to witness any.