nuget, code, open source

When it comes to running an open source project, the book Producing Open Source Software - How to Run a Successful Free Software Project by Karl Fogel (free pdf available) is my bible (see my review and summary of the book).

The book is based on Karl Fogel’s experiences as the leader of the Subversion project and has heavily influenced how I run the projects I’m involved in. Lately though, I’ve noticed one problem with some of his advice. It’s so Subversion-y.

Take a look at this snippet on Committers.

As the only formally distinct class of people found in all open source projects, committers deserve special attention here. Committers are an unavoidable concession to discrimination in a system which is otherwise as non-discriminatory as possible. But “discrimination” is not meant as a pejorative here. The function committers perform is utterly necessary, and I do not think a project could succeed without it.

A Committer in this sense is someone who has direct commit access to the source code repository. This makes sense in a world where your source control is completely centralized as it would be with a Subversion repository. But what about a world in which you’re using a completely decentralized version control like Git or Mercurial? What does it mean to be a “committer” when anyone can clone the repository, commit to their local copy, and then send a pull request?

In the book, Mercurial: The Definitive Guide, Bryan O’Sullivan discusses different collaboration models. The one the Linux kernel uses for example is such that Linus Torvalds maintains the “master” repository and only pulls from his “trusted lieutenants”.

At first glance, it might seem reasonable that a project could allow anyone to send a pull request to the main repository and thus focus the “discrimination” that Karl mentions on the technical merits of each pull request rather than the history of a person’s involvement in the project.

On one level, that seems even more merit-based and egalitarian, but you have to wonder whether it scales. Judging by the Linux kernel model, it clearly does not. As Karl points out,

Quality control requires, well, control. There are always many people who feel competent to make changes to a program, and some smaller number who actually are. The project cannot rely on people’s own judgement; it must impose standards and grant commit access only to those who meet them.

Many projects make a distinction between who may contribute a bug fix as opposed to who may contribute a feature. Such projects may require anyone contributing a feature or a non-trivial bug fix to sign a Contributor License Agreement. This agreement becomes the gate to being a contributor, which leaves me with the question, do we go through the process of getting this paperwork done for anyone who asks? Or do we have a bar to meet before we even consider this?

On one hand, if someone has a great feature idea, wouldn’t it be nice if we could just pull in their work without making them jump through hoops? On the other hand, if we have a hundred people go through this paperwork process, but only one actually ends up contributing anything, what a waste of our time. I would love to hear your thoughts on this.

NuGet, a package manager project I work on, is currently following the latter approach as described in our guide to becoming a core contributor, but we’re open to refinements and improvements. I should point out that a hosted Mercurial solution does support the centralized committer model where we provide direct commit access. It just so happens that while some developers in the NuGet project have direct commit access, most don’t and shouldn’t make use of it per project policy, as we’re still following a distributed model. We’re not letting the technical abilities/limitations of our source control system or project hosting define our collaboration model.

I know I’m late to the game when it comes to distributed source control, but it’s really striking to me how it’s turned the concept of committers on its head. In the centralized source control world, being a contributor was enforced via a technical gate, either you had commit access or you didn’t. With distributed version control it’s become more a matter of social contract and project policies.

asp.net mvc, code, open source, nuget

NuGet (recently renamed from NuPack) is a free open source developer focused package manager intent on simplifying the process of incorporating third party libraries into a .NET application during development.

After several months of work, the Outercurve Foundation (formerly CodePlex Foundation) today announced the acceptance of the NuGet project to the ASP.NET Open Source Gallery. This is another contribution to the foundation by the Web Platform and Tools (WPT) team at Microsoft.

Also be sure to read Scott Guthrie’s announcement post and Scott Hanselman’s NuGet walkthrough. There’s also a video interview with me on Web Camps TV where I talk about NuGet.

Just to warn you, the rest of this blog post is full of blah blah blah about NuGet, so if you’re a person of action, feel free to skip ahead and go try it out yourself.

Now back to my blabbing. I have to tell you, I’m really excited to finally be able to talk about this in public as we’ve been incubating this for several months now. During that time, we collaborated with various influential members of the .NET open source community including the Nu team in order to gather feedback on delivering the right project.

What Does NuGet Solve?

The .NET open source community has churned out a huge catalog of useful libraries. But what has been lacking is a widely available, easy-to-use way of discovering and incorporating these libraries into a project.

Take ELMAH, for example. For the most part, this is a very simple library to use. Even so, it may take the following steps to get started:

  1. You first need to discover ELMAH somehow.
  2. The download page for ELMAH includes multiple zip files. You need to make sure you choose the correct one.
  3. After downloading the zip file, don’t forget to unblock it.
  4. If you’re really careful, you’ll verify the hash of the downloaded file against the hash provided by the download page.
  5. The package needs to be unzipped, typically into a lib folder within the solution.
  6. You’ll then add an assembly reference to the assembly from within the Visual Studio solution explorer.
  7. Finally, you need to figure out the correct configuration settings and apply them to the web.config file.

That’s a lot of steps for a simple library, and it doesn’t even take into account what you might do if the library itself depends on multiple other libraries.

NuGet automates all of these common and tedious tasks, allowing you to spend more time using the library than getting it set up in your project.

NuGet Guiding Principles

I remember several months ago, hot on the heels of shipping ASP.NET MVC 2, I was in a meeting with Scott Guthrie (aka “The Gu”) reviewing plans for ASP.NET MVC 3 when he laid down the gauntlet and said it was time to ship a package manager for .NET developers. The truth was, it was long overdue.

I set about doing some research looking at existing package management systems on other platforms for inspiration such as Ruby Gems, Apt-Get, and Maven. Package Management is well trodden ground and we have a lot to learn from what’s come before.

After this research, I came up with a set of guiding principles for the design of NuGet that I felt specifically addressed the needs of .NET developers.

  1. Works with your source code. This is an important principle which serves to meet two goals: the changes that NuGet makes can be committed to source control, and the changes that NuGet makes can be x-copy deployed. This allows you to install a set of packages and commit the changes so that when your co-worker gets latest, her development environment is in the same state as yours. This is why NuGet packages do not install assemblies into the GAC as that would make it difficult to meet these two goals. NuGet doesn’t touch anything outside of your solution folder. It doesn’t install programs onto your computer. It doesn’t install extensions into Visual Studio. It leaves those tasks to other package managers such as the Visual Studio Extension Manager and the Web Platform Installer.
  2. Works against a well-known central feed. As part of this project, we plan to host a central feed that contains (or points to) NuGet packages. Package authors will be able to create an account and start adding packages to the feed. The NuGet client tools will know about this feed by default.
  3. No central approval process for adding packages. When you upload a package to the NuGet Package Gallery (which doesn’t exist yet), you won’t have to wait around for days or weeks waiting for someone to review it and approve it. Instead, we’ll rely on the community to moderate and police itself when it comes to the feed. This is in the spirit of how CodePlex.com and RubyGems.org work.
  4. Anyone can host a feed. While we will host a central feed, we wanted to make sure that anyone who wants to can also host a feed. I would imagine that some companies might want to host an internal feed of approved open source libraries, for example. Or you may want to host a feed containing your curated list of the best open source libraries. Who knows! The important part is that the NuGet tools are not hard-coded to a single feed but support pointing them to multiple feeds.
  5. Command Line and GUI based user interfaces. It was important to us to support the productivity of a command line based console interface. Thus NuGet ships with the PowerShell based Package Manager Console which I believe will appeal to power users. Likewise, NuGet also includes an easy to use GUI dialog for adding packages.

NuGet’s Primary Goal

In my mind, the primary goal of NuGet is to help foster a vibrant open source community on the .NET platform by providing a means for .NET developers to easily share and make use of open source libraries.

As an open source developer myself, this goal is something that is near and dear to my heart. It also reflects the evolution of open source in DevDiv (the division I work in) as this is a product that will ship with other Microsoft products, but also accepts contributions. Given the primary goal that I stated, it only makes sense that NuGet itself would be released as a truly open source product.

There’s one feature in particular I want to call out because it’s especially helpful to me as an open source developer. I run an open source blog engine called Subtext that makes use of around ten to fifteen other open source libraries. Before every release, I go through the painful process of checking each of these libraries for new updates and incorporating them into our codebase.

With NuGet, this is one simple command: List-Package -Updates. The dialog also displays which packages have updates available. Nice!

And keep in mind, while the focus is on open source, NuGet works just fine with any kind of package. So you can create a network share at work, put all your internal packages in there, and tell your co-workers to point NuGet to that directory. No need to set up a NuGet server.

Get Involved!

So in the fashion of all true open source projects, this is the part where I beg for your help. ;)

It is still early in the development cycle for NuGet. For example, the Add Package Dialog is really just a prototype intended to be rewritten from scratch. We kept it in the codebase so people can try out the user interface workflow and provide feedback.

We have yet to release our first official preview (though it’s coming soon). What we have today is closer in spirit to a nightly build (we’re working on getting a Continuous Integration (CI) server in place).

So go over to the NuGet website on CodePlex and check out our guide to contributing to NuGet. I’ve been working hard to try and get documentation in place, but I could sure use some help.

With your help, I hope that NuGet becomes a wildly successful example of how building products in collaboration with the open source community benefits our business and the community.

Tags: NuGet, Package Manager, Open Source

asp.net mvc, asp.net, code

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Wow! It’s been a busy two months and change since we released Preview 1 of ASP.NET MVC 3. Today I’m happy (and frankly, relieved) to announce the Beta release of ASP.NET MVC 3. Be sure to read Scott Guthrie’s announcement as well.

(Image credit: ICanHazCheezburger, http://icanhascheezburger.com/tag/onward/)

Yes, you heard me right, we’re jumping straight to Beta with this release! To try it out…

As always, be sure to read the release notes (also available as a Word doc if you prefer that sort of thing) for all the juicy details about what’s new in ASP.NET MVC 3.

A big part of this release focuses on polishing and improving features started in Preview 1. We’ve made a lot of improvements (and changes) to our support for Dependency Injection allowing you to control how ASP.NET MVC creates your controllers and views as well as services that it needs.

One big change in this release is that client validation is now built on top of jQuery Validation in an unobtrusive manner. In ASP.NET MVC 3, jQuery Validation is the default client validation script. It’s pretty slick so give it a try and let us know what you think.

Likewise, our Ajax features such as the Ajax.ActionLink etc. are now built on top of jQuery. There’s a way to switch back to the old behavior if you need to, but moving forward, we’ll be leveraging jQuery for this sort of thing.

Where’s the Razor Syntax Highlighting and Intellisense?

This is probably a good point to stop and provide a little bit of bad news. One of the most frequently asked questions I hear is when we’re going to get syntax highlighting. Unfortunately, it’s not ready for this release, but the Razor editor team is hard at work on it and we will see it in a future release.

I know it’s a bummer (believe me, I’m bummed about it) but I think it’ll make it that much sweeter when the feature arrives and you get to try it out the first time! See, I’m always looking for that silver lining. ;)

What’s this NuPack Thing?

That’s been the other major project I’ve been working on which has been keeping me very busy. I’ll be posting a follow-up blog post that talks about that.

What’s Next?

The plan is to have our next release be a Release Candidate. I’ve updated the Roadmap to provide an idea of some of the features that will be coming in the RC. For the most part, we try not to add too many features between Beta and RC preferring to focus on bug fixing and polish.

asp.net, subtext, code

By now, you’re probably aware of a serious ASP.NET Vulnerability going around. The ASP.NET team has been working around the clock to address this. Quite literally: last weekend I came into the office twice (to work on something unrelated) and found people there working to address the exploit.

Recently, Scott Guthrie posted a follow-up blog post with an additional recommended mitigation you should apply to your servers. I’ve seen a lot of questions about these mitigations, as well as a lot of bad advice. The best advice I’ve seen is this - if you’re running an ASP.NET application, follow the advice in Scott’s blog to the letter. Better to assume your site is vulnerable than to second-guess the mitigation.

In the follow-up post, Scott recommends installing the handy dandy UrlScan IIS Module and applying a specific configuration setting. I’ve used UrlScan in the past and have found it extremely useful in dealing with DOS attacks.

However, when I installed UrlScan, my blog broke. Specifically, all the styles were gone and many images were broken. It took me a while to notice because of my blog cache. It wasn’t until someone commented that my new site design was a tad bland that I hit CTRL+F5 to hard refresh my browser and see the changes.

I looked at the URLs for my CSS and I knew they existed physically on disk, but when I tried to visit them directly, I received a 404 error with some message in the URL about being blocked by UrlScan.

I opened up the UrlScan.ini file located at:

%windir%\system32\inetsrv\urlscan\UrlScan.ini

And started scanning it. One of the entries that caught my eye was this one.

AllowDotInPath=0         ; If 1, allow dots that are not file
                         ; extensions. The default is 0. Note that
                         ; setting this property to 1 will make checks
                         ; based on extensions unreliable and is
                         ; therefore not recommended other than for
                         ; testing.

That’s when I had a hunch. I started digging around and remembered that I have a custom skin in my blog named “haacked-3.0”. I viewed source and noticed my CSS files and many images were served from URLs that looked like:

https://haacked.com/skins/haacked-3.0/style/foo.css

Aha! Notice the dot in the URL segment there?

What I should have done next was go and rename my skin. Unfortunately, I have many blog posts with a dot in the slug (and thus in the blog post URL). So I changed that setting to be 1 and restarted my web server. There’s a small risk of making my site slightly less secure by doing so, but I’m willing to take that risk as I can’t easily go through and fix every blog post that has a dot in the URL right now.

So if you’ve run into the same problem, it may be that you have dots in your URL that UrlScan is blocking. The best and recommended solution is to remove the dots from the URL if you are able to.

asp.net, asp.net mvc, code

I was drawn to an interesting question on StackOverflow recently about how to override a request for a non-existent .svc request using routing.

One useful feature of routing in ASP.NET is that, by default, requests for files that exist on disk are ignored. Thus requests for static files and for .aspx and .svc files don’t run through the routing system.
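
For the curious, that default comes from a setting on the route collection. Here’s a minimal sketch showing the relevant property (the class name RouteConfigSketch is just for illustration):

using System.Web.Routing;

public static class RouteConfigSketch {
  public static void RegisterRoutes(RouteCollection routes) {
    // False by default: a request that maps to a physical file on disk
    // bypasses the route table entirely. Setting this to true would force
    // even those requests through routing.
    routes.RouteExistingFiles = false;
  }
}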

In this particular scenario, the developer wanted to replace an existing .svc service with a call to an ASP.NET MVC controller. So he deletes the .svc file and adds the following route:

routes.MapRoute(
  "UpdateItemApi",
  "Services/api.svc/UpdateItem",
  new { controller = "LegacyApi", action = "UpdateItem" }
);

Since api.svc is not a physical file on disk, at first glance, this should work just fine. But I tried it out myself with a brand new project, and sure enough, it doesn’t work.

Baffling!

So I started digging into it. First, I looked in event viewer and saw the following exception.

System.ServiceModel.EndpointNotFoundException: The service '/Services/api.svc' does not exist.

Ok, so there’s probably something special about the .svc file extension. So I opened up the machine web.config file located here on my machine:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config

And I found this interesting entry within the buildProviders section.

<add extension=".svc" 
  type="System.ServiceModel.Activation.ServiceBuildProvider, 
  System.ServiceModel.Activation,
  Version=4.0.0.0, Culture=neutral, 
  PublicKeyToken=31bf3856ad364e35" 
/>

Ah! There’s a default build provider registered for the .svc extension. And as we all know, build providers allow for runtime compilation of requests for ASP.NET files and occur very early in response to a request.

The fix I came up with was to simply remove this registration within my application’s web.config file.

  <system.web>
    <compilation debug="true" targetFramework="4.0">
      <buildProviders>
        <remove extension=".svc"/>            
      </buildProviders>
    ...

Doing that now allowed my route with the .svc extension to work. Of course, if I have other .svc services that should continue to work, I’ve pretty much disabled all of them by doing this. However, if those services are in a common subfolder (for example, a folder named services), we may be able to get around this by adding the build provider in a web.config file within that common subfolder.

In any case, I thought the question was interesting as it demonstrated the delicate interplay between routing and build providers.

personal, humor

Our eye in the sky reports two angry evil (but devilishly good looking) cyborg units, XSP 2000 and TRS-80, are fast approaching Black Rock City. They are considered very armed and dangerous. In fact, they are mostly armed and not much else.


These cyborgs do not come in peace. I repeat, they are to be considered hostiles. However, we’ve received a secret communiqué that reveals a weakness built into these cyborg models. Due to a lack of TDD during development, a bug in their FOF system (friend or foe) causes them to view anyone offering a frosty beverage to be a friend, not foe.

Any attempts to engage with these hostiles will result in calamity unless you offer them an ice cold beverage. For the sake of your beloved city, I suggest stocking up.

Intelligence confirms they are headed towards their evil cyborg camp at 8:15 and Kyoto on the Playa and are predicted to arrive on Tuesday morning. If we band together, we may be able to save our fair city by, once again, offering frosty alcoholic beverages in order to confuse their FOF system.

You’ve been duly warned.

This blog (and associated Twitter account) will go quiet for at least a week as communication systems are nonexistent within the Black Rock City area.

code

On Twitter yesterday I made the following comment:

We’re not here to write software, we’re here to ship products and deliver value. Writing code is just a fulfilling means to that end :)

All I see now is blonde, brunette, redhead.

For the most part, I received a lot of tweets in agreement, but there were a few who disagreed with me:

While I agree in principle, the stated sentiment “justifies” the pervasive lack of quality in development

Doctors with this mentality don’t investigate root causes, because patients don’t define that as valuable

That’s BS. If you live only, or even primarily, for end results you’re probably zombie. We’re here to write code AND deliver value.

I have no problem with people disagreeing with me. Eventually they’ll learn I’m always right. ;) In this particular case, I think an important piece of context was missing.

What’s that you say? Context missing in a 140 character limited tweet? That could never happen, right? Sure, you keep telling yourself that while I pop a beer over here with Santa Claus.

The tweet was a rephrasing of something I told a Program Manager candidate during a phone interview. It just so happens that the role of a program manager at Microsoft is not focused on writing code like developers. But that wasn’t the point I was making. I’ve been a developer in the past (and I still play at being a developer in my own time) and I still think this applies.

What I really meant to say was that we’re not paid to write code. I absolutely love writing code, but in general, it’s not what I’m paid to do and I don’t believe it ever was what I was paid to do even when I was a consultant.

For example, suppose a customer calls me up and says,

“Hey man, I need software that allows me to write my next book. I want to be able to print the book and save the book to disk. Can you do that for me?”

I’m not going to be half way through writing my first unit test in Visual Studio by the end of that phone call. Hell no! I’ll step away from the IDE and hop over to Best Buy to purchase a copy of Microsoft Word. I’ll then promptly sell it to the customer with a nice markup for my troubles and go and sip Pina Coladas on the beach the rest of the day. Because that’s what I do. I sip on Pina Coladas.

At the end of the day, I get paid to provide products to my customers that meet their needs and provide them real value, whether by writing code from scratch or finding something else that already does what they need.

Yeah, that’s a bit of a cheeky example, so let’s look at another one. Suppose a customer really needs a custom software product. I could write the cleanest, most well-crafted code the world has ever seen (what a guy like me might produce during a prototype session on an off night), but if it doesn’t ship, I don’t get paid. The customer doesn’t care how much time I spent writing that code. They’re not going to pay me until I deliver.

Justifying lack of quality

Now, I don’t think, as one Twitterer suggested, that this “justifies a pervasive lack of quality in development” by any means.

Quality in development is important, but it has to be scaled appropriately. Hear that? That’s the sound of a bunch of eggs lofted at my house in angry disagreement. But hear me out before chucking.

A lot of people will suggest that all software should be written with the utmost of quality. But the reality is that we all scale the quality of our code to the needs of the product. If that weren’t true, we’d all use Cleanroom Software Engineering processes like those employed by the Space Shuttle developers.

So why don’t we use these same processes? Because there are factors more important than quality in building a product. While even the Space Shuttle coders have to deal with changing requirements from time to time, in general, the laws of physics don’t change much over time last I checked. And certainly, their requirements don’t undergo the level of churn that developers trying to satisfy business needs under a rapidly changing business climate would face. Hence the rise of agile methodologies which recognize the need to embrace change.

Writing software that meets changing business needs and provides value is more important than writing zero defect code. While this might seem like I’m giving quality short shrift, another way to look at it is that I’m taking a higher view of what defines quality in the first place. Quality isn’t just the defect count of the code. It’s also how well the code meets the business needs that defines the “quality” of an overall product.

The debunking of the Betamax is better than VHS myth is a great example of this idea. While Betamax might have been technically superior to VHS in some ways, when you looked at the “whole product”, it didn’t satisfy customer needs as well as VHS did.

Nate Kohari had an interesting insight on how important delivering value is when he writes about the lessons learned building Agile Zen, a product I think is of wonderful quality.

It also completely changed the way that I look at software. I’ve tried to express this to others since, but I think you just have to experience it firsthand in order to really understand. It’s a unique experience to build a product of your own, from scratch, with no paycheck or deferred responsibility or venture capital to save you — you either create real value for your customers, or you fail. And I don’t like to fail.

Update: Dare Obasanjo wrote a timely blog post that dovetails nicely with the point I’m making. He writes that Google Wave and REST vs SOAP provide a cautionary tale for those who focus too much on solving hard technical problems and miss solving their customers’ actual problems. Sometimes, when we think we’re paid to code, we write way too much code. Sometimes, less code solves the actual problems we’re concerned with just fine.

Code is a part of the whole

The Betamax vs VHS point leads into another point I had in mind when I made the original statement. As narcissistic developers (c’mon admit it. You are all narcissists!), we tend to see the code as being the only thing that matters. But the truth is, it’s one part of the whole that makes a product.

There are many other components that go into a product. A lot of time is spent identifying future business needs to look for areas where software can provide value. After all, there’s no point in writing the code if nobody wants to use it or it doesn’t provide any value.

Not to mention, at Microsoft, we put a lot of effort into localization and globalization, ensuring that the software is translated into multiple languages. On top of this, we have writers who produce documentation, legal teams who work on licenses, marketing teams who market the product, and the list goes on. A lot goes into a product beyond just the code. There are also a lot of factors outside the product that determine its success, such as community ecosystem, availability of add-ons, etc.

I love to code

Now don’t go running to tell on me to my momma.

“Your son is talking trash about writing code!”

It’d break her heart and it’d be completely untrue. I love to code! There, I said it. In fact, I love it so much, I tried to marry it, but then got a much better offer from a very lovely woman. But I digress.

Yes, I love coding so much I often do it for free in my spare time.

I wasn’t trying to make a point that writing code isn’t important and doesn’t provide value. It absolutely does. In fact, I firmly believe that writing code is a huge part of providing that value or we wouldn’t be doing it in the first place. This importance is why we spend so much time and effort trying to elevate the craft and debating the finer points of how to write good software. It’s an essential ingredient to building great software products.

The mere point I was making is simply that while writing code is a huge factor in providing value, it’s not the part we get paid for. Customers pay to receive value. And they only get that value when the code is in their hands.

Tags: software development

code

In my last blog post, I covered some challenges with versioning methods that differ only by optional parameters. If you haven’t read it, go read it. If I do say so myself, it’s kind of interesting. ;) In this post, I want to cover another very subtle versioning issue with using optional parameters.

At the very end of that last post, I made the following comment.

By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before.

However, this can lead to subtle bugs. Let’s walk through a scenario. Imagine that some class library has the following method in version 1.0.

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

And you have a client application which calls this method like so:

ClassName.Foo("one", "two");

That’s just fine, right? You don’t need to supply a value for the argument s3 because it’s optional. Everything is hunky dory!

But now, the class library author decides to release version 2 of the library and adds the following overload.

public static void Foo(string s1, string s3 = "v2") {
    Console.WriteLine("version 2");
}

public static void Foo(string s1, string s2, string s3 = "v1") {
    Console.WriteLine("version 1");
}

Notice that they’ve added an overload that only has two parameters. It differs from the existing method by one required parameter, which is allowed.

As I mentioned before, you’re always allowed to add overloads and maintain binary compatibility. So if you upgrade the class library but don’t recompile your client application, you’ll still get the following output when you run the application.

version 1

But what happens when you recompile your client application against version 2 of the class library and run it again with no source code changes? The output becomes:

version 2

Wow, that’s pretty subtle.

It may not seem so bad in this contrived example, but let’s contemplate a real world scenario. Let’s suppose there’s a very commonly used utility method in the .NET Framework that follows this pattern in .NET 4. And in the next version of the framework, a new overload is added with one less required parameter.

Suddenly, when you recompile your application, every call to the original method is now calling the new one.

Now, I’m not one to be alarmist. Realistically, this is probably very unlikely in the .NET Framework because of stringent backwards compatibility requirements. Very likely, if such a method overload was introduced, calling it would be backwards compatible with calling the original.

But the same discipline might not apply to every library that you depend on today. It’s not hard to imagine that such a subtle versioning issue might crop up in a commonly used 3rd party open source library and it would be very hard for you to even know it exists without testing your application very thoroughly.

The moral of the story is, you do write unit tests dontcha? Well dontcha?! If not, now’s a good time to start.

code

One nice new feature introduced in C# 4 is support for named and optional arguments. While these two features are often discussed together, they really are orthogonal concepts.

Let’s look at a quick example of these two concepts at work. Suppose we have a class with one method having the following signature.

  // v1
  public static void Redirect(string url, string protocol = "http");

This hypothetical library contains a single method that takes in two parameters, a required string url and an optional string protocol.

The following shows the six possible ways this method can be called.

HttpHelpers.Redirect("https://haacked.com/");
HttpHelpers.Redirect(url: "https://haacked.com/");
HttpHelpers.Redirect("https://haacked.com/", "https");
HttpHelpers.Redirect("https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(url: "https://haacked.com/", protocol: "https");
HttpHelpers.Redirect(protocol: "https", url: "https://haacked.com/");

Notice that whether or not a parameter is optional, you can choose to refer to the parameter by name or not. In the last case, notice that the parameters are specified out of order. In this case, using named parameters is required.

The Next Version

One apparent benefit of using optional parameters is that you can reduce the number of overloads your API has. However, relying on optional parameters does have its quirks you need to be aware of when it comes to versioning.

Let’s suppose we’re ready to make version two of our awesome HttpHelpers library and we add an optional parameter to the existing method.

// v2
public static void Redirect(string url, string protocol = "http", bool permanent = false);

What happens when we try to execute the client application without recompiling it?

We get the following exception message.

Unhandled Exception: System.MissingMethodException: Method not found: 'Void HttpLib.HttpHelpers.Redirect(System.String, System.String)'....

Whoops! By changing the method signature, we caused a runtime breaking change to occur. That’s not good.

Let’s try to avoid a runtime breaking change by adding an overload instead of changing the existing method.

// v2.1
public static void Redirect(string url, string protocol = "http");
public static void Redirect(string url, string protocol = "http", bool permanent = false);

Now, when we run our client application, it works fine. It’s still calling the two parameter version of the method. Adding overloads is never a runtime breaking change.

But let’s suppose we’re now ready to update the client application and we attempt to recompile it. Uh oh!

The call is ambiguous between the following methods or properties: 'HttpLib.HttpHelpers.Redirect(string, string)' and 'HttpLib.HttpHelpers.Redirect(string, string, bool)'

While adding an overload is not a runtime breaking change, it can result in a compile time breaking change. Doh!

Talk about a catch-22! If we add an overload, we break in one way. If we instead add an argument to the existing method, we’re broken in another way.

Why Is This Happening?

When I first heard about optional parameter support, I falsely assumed it was implemented as a feature of the CLR which might allow dynamic dispatch to the method. This was perhaps very naive of me.

My co-worker Levi (no blog still) broke it down for me as follows. Keep in mind, he’s glossing over a lot of details, but at a high level, this is roughly what’s going on.

When optional parameters are in use, the C# compiler follows a simple algorithm to determine which overload of a method you actually meant to call. It considers as a candidate *every* overload of the method, then one by one it eliminates overloads that can’t possibly work for the particular parameters you’re passing in.

Consider these overloads:

public static void Blah(int i);
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

Suppose you make the following method call: Blah(0).

The last candidate is eliminated since the parameter types are incorrect, which leaves us with the first two.

public static void Blah(int i); // Candidate
public static void Blah(int i, int j = 5); // Candidate
public static void Blah(string i = "Hello");  // Eliminated

At this point, the compiler needs to perform a conflict resolution. The conflict resolution is very simple: if one of the candidates has the same number of parameters as the call site, it wins. Otherwise the compiler bombs with an error.

In the case of Blah(0), the first overload is chosen since the number of parameters is exactly one.

public static void Blah(int i); //WINNER!!!
public static void Blah(int i, int j = 5);
public static void Blah(string i = "Hello"); 

This allows you to take an existing method that doesn’t have optional parameters and add overloads that have optional parameters without breaking anybody (except in Visual Basic which has a slightly different algorithm).

But what happens if you need to version an API that already has optional parameters?  Consider this example:

public static void Helper(int i = 2, int j = 3);            // v1
public static void Helper(int i = 2, int j = 3, int k = 4); // added in v2

And say that the call site is Helper(j: 10). Both candidates still exist after the elimination process, but since neither candidate has exactly one argument, the compiler will not prefer one over another. This leads to the compilation error we saw earlier about the call being ambiguous.
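
To make that concrete, here’s a small sketch, with made-up call sites, of how the resolution rule described above plays out against those two overloads (the class and method names are just for illustration):

public static class HelperSketch {
  public static void Helper(int i = 2, int j = 3) { }            // v1
  public static void Helper(int i = 2, int j = 3, int k = 4) { } // added in v2

  public static void Demo() {
    Helper(1, 2);    // compiles: v1 matches the argument count exactly, so it wins
    Helper(1, 2, 3); // compiles: only v2 can accept three arguments
    // Helper(j: 10) fails to compile: neither overload matches the argument
    // count exactly, so the call is ambiguous.
  }
}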

Conclusion

The reason that optional parameters were introduced to C# 4 in the first place was to support COM interop. That’s it. And now, we’re learning about the full implications of this fact.

If you have a method with optional parameters, you can never add an overload with additional optional parameters out of fear of causing a compile-time breaking change. And you can never remove an existing overload, as this has always been a runtime breaking change.

You pretty much need to treat it like an interface. Your only recourse in this case is to write a new method with a new name.
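
For example, going back to the Redirect scenario, a sketch of that recourse might look like the following. RedirectPermanent is a name I made up for illustration, not an API from any real library:

// v1 ships with an optional parameter and stays exactly as it is.
public static void Redirect(string url, string protocol = "http") { /* unchanged */ }

// v2 exposes the new behavior under a brand new name instead of adding
// another overload, avoiding both the runtime and compile-time breaks.
public static void RedirectPermanent(string url, string protocol = "http") { /* new in v2 */ }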

So be aware of this if you plan to use optional arguments in your APIs.

UPDATE: By the way, you can add overloads that have additional required parameters. So in this way, you are in the same boat as before. However, this can lead to other subtle versioning issues as my follow-up post describes.

code

UPDATE: A reader named Matthias pointed out there is a flaw in my code. Thanks Matthias! I’ve corrected it in my GitHub Repository. The code would break if your attribute had an array property or constructor argument.

I’ve been working on a lovely little prototype recently but ran into a problem where my code receives a collection of attributes and needs to change them in some way and then pass the changed collection along to another method that consumes the collection.


I want to avoid changing the attributes directly, because when you use reflection to retrieve attributes, those attributes may be cached by the framework. So changing an attribute is not a safe operation, as you may be changing the attribute for everyone else who tries to retrieve it.

What I really wanted to do is create a copy of all these attributes, and pass the collection of copied attributes along. But how do I do that?

CustomAttributeData

Brad Wilson and David Ebbo to the rescue! In a game of geek telephone, David told Brad a while back, who then recently told me, about a little class in the framework called CustomAttributeData.

This class takes advantage of a feature of the framework known as a Reflection-Only context. This allows you to examine an assembly without instantiating any of its types. This is useful, for example, if you need to examine an assembly compiled against a different version of the framework or a different platform.
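
As a rough sketch of what that looks like (the assembly path here is made up for illustration), you can load an assembly in the reflection-only context and read its attribute metadata without ever executing its code:

using System;
using System.Reflection;

public static class ReflectionOnlyDemo {
  public static void Run() {
    // The assembly is loaded for inspection only; none of its code runs
    // and none of its types are instantiated.
    var assembly = Assembly.ReflectionOnlyLoadFrom(@"C:\path\to\SomeLibrary.dll");

    foreach (var attributeData in CustomAttributeData.GetCustomAttributes(assembly)) {
      Console.WriteLine(attributeData); // prints each attribute in a C#-like syntax
    }
  }
}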

Copying an Attribute

As you’ll find out, it’s also useful when you need to copy an attribute. This might raise the question in your head, “if you have an existing attribute instance, why can’t you just copy it?” The problem is that a given attribute might not have a default constructor. So then you’re left with the challenge of figuring out how to populate the parameters of a constructor from an existing instance of an attribute. Let’s look at a sample attribute.

[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public class SomethingAttribute : Attribute {
  public SomethingAttribute(string readOnlyProperty) {
      ReadOnlyProperty = readOnlyProperty;
  }
  public string ReadOnlyProperty { get; private set; }
  public string NamedProperty { get; set; }
  public string NamedField;
}

And here’s an example of this attribute applied to a class a couple of times.

[Something("ROVal1", NamedProperty = "NVal1", NamedField = "Val1")]
[Something("ROVal2", NamedProperty = "NVal2", NamedField = "Val2")]
public class Character {
}

Given an instance of this attribute, I might be able to figure out how the constructor argument should be populated by assuming a convention of using the property with the same name as the argument. But what if the attribute had a constructor argument that had no corresponding property? Keep in mind, I want this to work with arbitrary attributes, not just ones that I wrote.

CustomAttributeData saves the day!

This is where CustomAttributeData comes into play. An instance of this class tells you everything you need to know about the attribute and how to construct it. It provides access to the constructor, the constructor parameters, and the named parameters used to declare the attribute.

Let’s look at a method that will create an attribute instance given an instance of CustomAttributeData.

public static Attribute CreateAttribute(this CustomAttributeData data){
  var arguments = from arg in data.ConstructorArguments
                    select arg.Value;

  var attribute = data.Constructor.Invoke(arguments.ToArray()) as Attribute;

  foreach (var namedArgument in data.NamedArguments) {
    var propertyInfo = namedArgument.MemberInfo as PropertyInfo;
    if (propertyInfo != null) {
      propertyInfo.SetValue(attribute, namedArgument.TypedValue.Value, null);
    }
    else {
      var fieldInfo = namedArgument.MemberInfo as FieldInfo;
      if (fieldInfo != null) {
        fieldInfo.SetValue(attribute, namedArgument.TypedValue.Value);
      }
    }
  }

  return attribute;
}

The code sample demonstrates how we use the information within the CustomAttributeData instance to figure out how to create an instance of the attribute described by the data.

So how did we get the CustomAttributeData instance in the first place? That’s pretty easy: we call the CustomAttributeData.GetCustomAttributes() method. With these pieces in hand, it’s pretty straightforward now to copy the attributes on a type or member. Here’s a set of extension methods I wrote to do just that.

NOTE: The following code does not handle array properties and constructor arguments correctly. Check out my repository for the correct code.

public static IEnumerable<Attribute> GetCustomAttributesCopy(this Type type) {
  return CustomAttributeData.GetCustomAttributes(type).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(this Assembly assembly) {
  return CustomAttributeData.GetCustomAttributes(assembly).CreateAttributes();
}

public static IEnumerable<Attribute> GetCustomAttributesCopy(this MemberInfo memberInfo) {
  return CustomAttributeData.GetCustomAttributes(memberInfo).CreateAttributes();
}

public static IEnumerable<Attribute> CreateAttributes(this IEnumerable<CustomAttributeData> attributesData) {
  return from attributeData in attributesData
          select attributeData.CreateAttribute();
}

And here’s a bit of code I wrote in a console application to demonstrate the usage.

foreach (var instance in typeof(Character).GetCustomAttributesCopy()) {
  var somethingAttribute = instance as SomethingAttribute;
  Console.WriteLine("ReadOnlyProperty: " + somethingAttribute.ReadOnlyProperty);
  Console.WriteLine("NamedProperty: " + somethingAttribute.NamedProperty);
  Console.WriteLine("NamedField: " + somethingAttribute.NamedField);
}

And there you have it, I can grab the attributes from a type and produce a copy of those attributes.

With this out of the way, I can hopefully continue with my original prototype which led me down this rabbit hole in the first place. It always seems to happen this way, where I start a blog post, only to start writing a blog post to support that blog post, and then a blog post to support that one. Much like a dream within a dream within a dream. ;)

asp.net, asp.net mvc, code

In ASP.NET MVC 3 Preview 1, we introduced some syntactic sugar for creating and accessing view data using new dynamic properties.

Sugar, it’s not just for breakfast.

Within a controller action, the ViewModel property of Controller allows setting and accessing view data via property accessors that are resolved dynamically at runtime. From within a view, the View property provides the same thing (see the addendum at the bottom of this post for why these property names do not match).

Disclaimer

This blog post talks about ASP.NET MVC 3 Preview 1, which is a pre-release version. Specific technical details may change before the final release of MVC 3. This release is designed to elicit feedback on features with enough time to make meaningful changes before MVC 3 ships, so please comment on this blog post if you have comments.

Let’s take a look at the old way and the new way of doing this:

The old way

The following is some controller code that adds a string to the view data.

public ActionResult Index() {
  ViewData["Message"] = "Some Message";
  return View();
}

The following is code within a view that accesses the view data we supplied in the controller action.

<h1><%: ViewData["Message"] %></h1>

The new way

This time around, we use the ViewModel property which is typed as dynamic. We use it like we would any property.

public ActionResult Index() {
  ViewModel.Message = "Some Message";
  return View();
}

And we reference it in a Razor view. Note that this works in a WebForms view too.

<h1>@View.Message</h1>

Note that View.Message is equivalent to View["Message"].

Going beyond properties

However, what might not be clear to everyone is that you can also store and call methods using the same approach. Just for fun, I wrote an example of doing this.

In the controller, I defined a lambda expression that takes in an index and two strings. It returns the first string if the index is even, and the second string if the index is odd. It’s very simple.

The next thing I do is assign that lambda to the Cycle property of ViewModel, which is created on the spot since ViewModel is dynamic.

public ActionResult Index() {
  ViewModel.Message = "Welcome to ASP.NET MVC!";

  Func<int, string, string, string> cycleMethod = 
    (index, even, odd) => index % 2 == 0 ? even : odd;
  ViewModel.Cycle = cycleMethod;

  return View();
}

Now, I can dynamically call that method from my view.

<table>
@for (var i = 0; i < 10; i++) {
    <tr class="@View.Cycle(i, "even-css", "odd-css")">
        <td>@i</td>
    </tr>
}
</table>

As a fan of dynamic languages, I find this technique to be pretty slick. :)

The point of this blog post was to show that this is possible, but it raises the question, “why would anyone want to do this over writing a custom helper method?”

Very good question! Right now, it’s mostly a curiosity to me, but I can imagine cases where this might come in handy. However, if you re-use such view functionality or really need Intellisense, I’d highly recommend making it a helper method. I think this approach works well for rapid prototyping and maybe for one time use helper functions.
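
For comparison, here’s a rough sketch of what the same Cycle logic might look like as a conventional helper extension method. This is my own illustration (the CycleHelpers class name is made up), not code from the framework:

using System.Web.Mvc;

public static class CycleHelpers {
  // Returns the first CSS class for even indexes and the second for odd ones,
  // mirroring the lambda assigned to ViewModel.Cycle in the action above.
  public static string Cycle(this HtmlHelper html, int index, string even, string odd) {
    return index % 2 == 0 ? even : odd;
  }
}

The view would then call @Html.Cycle(i, "even-css", "odd-css") and pick up Intellisense and compile-time checking in the bargain.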

Perhaps you’ll find even better uses I didn’t think of at all.

Addendum: The Property name mismatch

Earlier in this post I mentioned the mismatch between property names, ViewModel vs View. I also talked about this in a video I recorded for MvcConf on MVC 3 Preview 1. Originally, we wanted to pick a nice terse name for this property so when referencing it in the view, there is minimal noise. We liked the property View for this purpose and implemented it for our view page first.

But when we went to port this property over to the Controller, we realized it wouldn’t work. Anyone care to guess why? Yep, that’s right. Controller already has a method named View so it can’t also have a property named the same thing. So we called it ViewModel for the time being and figured we’d change it once we came up with a better name.

So far, we haven’t come up with a better name that’s both short and descriptive. And before you suggest it, the acronym of “View Data” is not an option.

If you have a better name, do suggest it. :)

Addendum 2: Unit Testing

Someone on Twitter asked me how you would unit test this action method. Here’s an example of a unit test that shows you can simply call this dynamic method directly from within a unit test (see the act section below).

[TestMethod]
public void CanCallCycle() {
  // arrange
  var controller = new HomeController();
  controller.Index();

  // act
  string even = controller.ViewModel.Cycle(0, "even", "odd");

  // assert
  Assert.AreEqual("even", even);
}

Tags: aspnetmvc, dynamic, viewdata

asp.net, code, asp.net mvc

UPDATE: This post is out of date. We recently released the Release Candidate for ASP.NET MVC 3.

Feels like just yesterday that we released ASP.NET MVC 2 to the world and here I am already talking about an early preview. In a way, we’re right on schedule. It was almost exactly a year ago that we released Preview 1 of ASP.NET MVC 2.

Today I’m happy to announce that ASP.NET MVC 3 Preview 1 is available for download. Give it a try and let us know what you think. Some key notes before you give it a whirl:

  • ASP.NET MVC 3 Preview 1 tooling requires Visual Studio 2010
  • ASP.NET MVC 3 Preview 1 runtime requires the ASP.NET 4 runtime

As usual, to find out what’s in this release, check out the release notes. Also at the recent MVCConf, a virtual conference about ASP.NET MVC, I recorded a talk that provided a sneak peek at ASP.NET MVC 3 Preview 1. The audio quality isn’t great, but I do demo some of the key new features so be sure to check it out.

So what’s in this release that I’m excited about? Here’s a small sampling:

  • Razor View Engine which ScottGu wrote about recently. Note that for Preview 1, we only support the C# version (CSHTML). In later previews, we will add support for the VB.NET version (VBHTML). Also, Intellisense support for Razor syntax in Visual Studio 2010 will be released later.
  • Dependency Injection hooks using a service locator interface. Brad Wilson should have a few blog posts on this over the next few days.
  • Support for .NET 4 Data Annotation and Validation attributes.
  • Add View dialog support for multiple view engines including custom view engines.
  • Global Action Filters

In the next few days you should see more details about each of these areas start to show up in various blog posts. I’ll try to keep this blog post updated with relevant blog posts so you can find them all. Enjoy!

asp.net mvc, asp.net, code

I wanted to confirm something about how to upload a file or set of files with ASP.NET MVC and the first search result for the phrase “uploading a file with asp.net mvc” is Scott Hanselman’s blog post on the topic.

His blog post is very thorough and helps provide a great understanding of what’s happening under the hood. The only complaint I have is that the code could be much simpler since we’ve made improvements in ASP.NET MVC 2. I write this blog post in the quixotic hopes of knocking his post from the #1 spot.

Uploading a single file

Let’s start with the view. Here’s a form that will post back to the current action.

<form action="" method="post" enctype="multipart/form-data">
  
  <label for="file">Filename:</label>
  <input type="file" name="file" id="file" />

  <input type="submit" />
</form>

Here’s the action method that this view will post to which saves the file into a directory in the App_Data folder named “uploads”.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file) {
            
  if (file.ContentLength > 0) {
    var fileName = Path.GetFileName(file.FileName);
    var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
    file.SaveAs(path);
  }
            
  return RedirectToAction("Index");
}

Notice that the argument to the action method is an instance of HttpPostedFileBase. ASP.NET MVC 2 introduces a new value providers feature which I’ve covered before.

Whereas model binders are used to bind incoming data to an object model, value providers provide an abstraction for the incoming data itself.

In this case, there’s a default value provider called the HttpFileCollectionValueProvider which supplies the uploaded files to the model binder. Also notice that the argument name, file, is the same name as the name of the file input. This is important for the model binder to match up the uploaded file to the action method argument.

Uploading multiple files

In this scenario, we want to upload a set of files. We can simply have multiple file inputs all with the same name.

<form action="" method="post" enctype="multipart/form-data">
    
  <label for="file1">Filename:</label>
  <input type="file" name="files" id="file1" />
  
  <label for="file2">Filename:</label>
  <input type="file" name="files" id="file2" />

  <input type="submit"  />
</form>

Now, we just tweak our controller action to accept an IEnumerable of HttpPostedFileBase instances. Once again, notice that the argument name matches the name of the file inputs.

[HttpPost]
public ActionResult Index(IEnumerable<HttpPostedFileBase> files) {
  foreach (var file in files) {
    if (file.ContentLength > 0) {
      var fileName = Path.GetFileName(file.FileName);
      var path = Path.Combine(Server.MapPath("~/App_Data/uploads"), fileName);
      file.SaveAs(path);
    }
  }
  return RedirectToAction("Index");
}

Yes, it’s that easy. :)

Tags: aspnetmvc, upload

asp.net, asp.net mvc, razor

UPDATE: Check out my Razor View Syntax Quick Reference for a nice quick reference to Razor.

There’s an old saying, “Good things come to those who wait.” I remember when I first joined the ASP.NET MVC project, I (and many customers) wanted to include a new streamlined custom view engine. Unfortunately at the time, it wasn’t in the cards since we had higher priority features to implement.

Well the time for a new view engine has finally come as announced by Scott Guthrie in this very detailed blog post.

Photo by "clix"
http://www.sxc.hu/photo/955098

While I’m very excited about the new streamlined syntax, there’s a lot under the hood I’m also excited about.

Andrew Nurse, who writes the parser for the Razor syntax, provides more under-the-hood details in this blog post. Our plan for the next version of ASP.NET MVC is to make this the new default view engine, but for backwards compatibility we’ll keep the existing WebForm based view engine.

As part of that work, we’re also focusing on making sure ASP.NET MVC tooling supports any view engine. In ScottGu’s blog post, if you look carefully, you’ll see Spark listed in the view engines drop down in the Add View dialog. We’ll make sure it’s trivially easy to add Spark, Haml, whatever, to an ASP.NET MVC project. :)

Going back to Razor, one benefit that I look forward to is that unlike an ASPX page, it’s possible to fully compile a CSHTML page without requiring the ASP.NET pipeline. So while you can allow views to be compiled via the ASP.NET runtime, it may be possible to fully compile a site using T4 for example. A lot of cool options are opened up by a cleanly implemented parser.

In the past several months, our team has been working with other teams around the company to take a more holistic view of the challenges of developing web applications. ScottGu recently blogged about the results of some of this work:

  • SQLCE 4 – Medium trust x-copy deployable database for ASP.NET.
  • IIS Express – A replacement for Cassini that does the right thing.

The good news is there’s a lot more coming! In some cases, we had to knock some heads together (our heads and the heads of other teams) to drive focus on what developers really want and need rather than too much pie in the sky architectural astronomy.

I look forward to talking more about what I’ve been working on when the time is right. :)

subtext, personal 0 comments suggest edit

My son and I returned last night from a week-long vacation visiting my parents in Anchorage, Alaska. Apparently, having the boys out of the house was quite the vacation for my wife as well. :)

We had a great time watching the World Cup and going on outings to the zoo as well as hiking.

cody-phil-hiking

Well, at least one of us was hiking while another was just enjoying the ride. We hiked up a trail to Flattop, which has spectacular views of Anchorage. Unfortunately, we didn’t make it all the way to the top, as the trail became a bit too much for someone carrying a toddler who was more interested in watching Go, Diego, Go episodes on his iPod.

hiking-trip

Funny how all that “hiking” works up an appetite.

cody-burger

Also, while in Alaska I gave a talk on ASP.NET MVC 2 to the local .NET User Group. It was their second meeting ever and somehow, in the delirium of perpetual sunlight, I spent two hours talking! It was slated to be a one-hour talk.

DotNetLicense

I didn’t see a hint of resentment in the group, though, as they peppered me with great questions after the talk. Apparently, some of them are fans of .NET. ;)

The other thing I was able to do while in Alaska was finish up a bug fix release of Subtext in the wake of our big 2.5 release. There were some high priority bugs in that release. Simone has the details and breakdown on the Subtext 2.5.1 release.

code 0 comments suggest edit

In my last blog post, I wrote about the proper way to check for empty enumerations and proposed an IsNullOrEmpty method for collections which sparked a lot of discussion.

This post covers a similar issue, but from a different angle. A very long time ago, I wrote about my love for the null coalescing operator. However, over time, I’ve found it to be not quite as useful as it could be when dealing with strings. For example, here’s the code I might want to write:

public static void DoSomething(string argument) {
  var theArgument = argument ?? "defaultValue";
  Console.WriteLine(theArgument);
}

But here’s the code I actually end up writing:

public static void DoSomething(string argument) {
  var theArgument = argument;
  if(String.IsNullOrWhiteSpace(theArgument)) {
    theArgument = "defaultValue";
  }
  Console.WriteLine(theArgument);
}

The issue here is that I want to treat an argument that consists only of whitespace as if the argument is null and replace the value with my default value. This is something the null coalescing operator won’t help me with.

This led me to jokingly propose a null or empty coalescing operator on Twitter with the syntax ???. This would allow me to write something like:

var s = argument ??? "default";

Of course, that doesn’t go far enough because wouldn’t I also need a null or whitespace coalescing operator???? ;)

Perhaps a better approach than the PERLification of C# is to write an extension method that normalizes strings in such a way that you can use the tried and true (and existing!) null coalescing operator.

Thus I present to you the AsNullIfEmpty and AsNullIfWhiteSpace methods!

Here’s my previous example refactored to use these methods.

public static void DoSomething(string argument) {
  var theArgument = argument.AsNullIfWhiteSpace() ?? "defaultValue";

  Console.WriteLine(theArgument);
}

You can also take the same approach with collections.

public static void DoSomething(IEnumerable<string> argument) {
  var theArgument = argument.AsNullIfEmpty() ?? new string[]{"default"};

  Console.WriteLine(theArgument.Count());
}

The following is the code for these simple methods.

public static class EnumerationExtensions {
  public static string AsNullIfEmpty(this string items) {
    if (String.IsNullOrEmpty(items)) {
      return null;
    }
    return items;
  }

  public static string AsNullIfWhiteSpace(this string items) {
    if (String.IsNullOrWhiteSpace(items)) {
      return null;
    }
    return items;
  }
        
  public static IEnumerable<T> AsNullIfEmpty<T>(this IEnumerable<T> items) {
    if (items == null || !items.Any()) {
      return null;
    }
    return items;
  }
}

Another approach that some commenters to my last post recommended is to write a Coalesce method. That’s also a pretty straightforward approach which I leave as an exercise to the reader. :)
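
If you’d like a head start on that exercise, here’s one possible shape for it (just a sketch; the name and signature are only my guess at what the commenters had in mind), returning the first value in a chain that isn’t null or whitespace:

using System;
using System.Linq;

public static class StringCoalesceExtensions {
  // Returns the first value (starting with the string itself) that is
  // neither null nor whitespace; returns null if none qualify.
  public static string Coalesce(this string value, params string[] fallbacks) {
    if (!String.IsNullOrWhiteSpace(value)) {
      return value;
    }
    return fallbacks == null
      ? null
      : fallbacks.FirstOrDefault(s => !String.IsNullOrWhiteSpace(s));
  }
}

With that in place, the earlier example becomes var theArgument = argument.Coalesce("defaultValue");, which reads about as nicely as the ?? version.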

code 0 comments suggest edit

While spelunking in some code recently I saw a method that looked something like this:

public void Foo<T>(IEnumerable<T> items) {
  if(items == null || items.Count() == 0) {
    // Warn about emptiness
  }
}

This method accepts a generic enumeration and then proceeds to check if the enumeration is null or empty. Do you see the potential problem with this code? I’ll give you a hint: it’s this line:

items.Count() == 0

What’s the problem? Well that line right there has the potential to be vastly inefficient.

If the caller of the Foo method passes in an enumeration that doesn’t implement ICollection<T> (for example, an IQueryable<T> returned from an Entity Framework or LINQ to SQL query), then the Count method has to iterate over the entire enumeration just to evaluate this expression.

In cases where the enumeration that’s passed in to this method does implement ICollection<T>, this code is fine. The Count method has an optimization in this case where it will simply check the Count property of the collection.

If we translated this code into English, it’s asking the question “Is the count of this enumeration equal to zero?” But that’s not really the question we’re interested in. What we really want to know is the answer to the question “Are there any elements in this enumeration?”

When you think of it that way, the solution here becomes obvious. Use the Any extension method from the System.Linq namespace!

public void Foo<T>(IEnumerable<T> items) {
  if(items == null || !items.Any()) {
    // Warn about emptiness
  }
}

The beauty of this approach is that Any only needs to call MoveNext on the underlying enumerator once! You could have an infinitely large enumeration, but Any will return a result immediately.
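
To make that concrete, here’s a deliberately silly illustration (the Naturals method is mine, purely for demonstration; it needs using System.Collections.Generic and System.Linq):

// An infinite sequence: 0, 1, 2, 3, ...
static IEnumerable<int> Naturals() {
  var i = 0;
  while (true) {
    yield return i++;
  }
}

// Any() asks the enumerator for at most one element, so this returns true immediately.
bool hasItems = Naturals().Any();

// Count() would try to walk the entire (infinite) sequence and never return.
// int count = Naturals().Count();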

Even better, since this pattern comes up all the time, consider writing your own simple extension method.

public static bool IsNullOrEmpty<T>(this IEnumerable<T> items) {
    return items == null || !items.Any();
}

Now, with this extension method, our original method becomes even simpler.

public void Foo<T>(IEnumerable<T> items) {
  if(items.IsNullOrEmpty()) {
    // Warn about emptiness
  }
}

With this extension method in your toolbelt, you’ll never inefficiently check an enumeration for emptiness again.

subtext 0 comments suggest edit

Deploying a Subtext skin used to be one of the biggest annoyances with Subtext prior to version 2.5. The main problem was that you couldn’t simply copy a skin folder into the Skins directory and just have it work because the configuration for a given skin is centrally located in the Skins.config file.

elephant-skin

In other words, a skin wasn’t self-contained in a single folder. With Subtext 2.5, this has changed. Skins are fully self-contained and there is no longer a need for a central configuration file for skins.

What this means for you is that it is now way easier to share skins. When you get a skin folder, you just drop it into the /skins directory and you’re done!

In most cases, there’s no need for any configuration file whatsoever. If your skin contains a CSS stylesheet named style.css, that stylesheet is automatically picked up. Also, with Subtext 2.5, you can provide a thumbnail for your skin by adding a file named SkinIcon.png into your skin folder. That’ll show up in the improved Skin picker.

When To Use A Skin.config File

Each skin can have its own manifest file named Skin.config. This file is useful when you have multiple CSS and JavaScript files you’d like to include other than style.css (though even in this case it’s not absolutely necessary, as you can reference the stylesheets in PageTemplate.ascx directly).

The other benefit of using the skin.config file to reference your stylesheets and script files is that you can take advantage of our ability to merge these files together at runtime using the StyleMergeMode and ScriptMergeMode attributes.

Also, in some cases, a skin can have multiple themes differentiated by stylesheet as described in this blog post. A skin.config file can be used to specify these skin themes and their associated CSS file.

Creating a Skin.config file

Creating a skin.config file shouldn’t be too difficult. If you already have a Skins.User.config file, it’s a matter of copying the section of that file that pertains to your skin into a skin.config file within your skin folder and removing some extraneous nodes.

Here’s an example of a new skin.config file for my personal skin.

<?xml version="1.0" encoding="utf-8" ?>
<SkinTemplates>
    <SkinTemplate Name="Haacked-3.0">
        <Scripts>
            <Script Src="~/scripts/lightbox.js" />
            <Script Src="~/scripts/XFNHighlighter.js" />
        </Scripts>
        <Styles>
            <Style href="~/css/lightbox.css" />
            <Style href="~/skins/_System/csharp.css" />
            <Style href="~/skins/_System/commonstyle.css" />
            <Style href="~/skins/_System/commonlayout.css" />
            <Style href="~/scripts/XFNHighlighter.css" />
            <Style href="IEPatches.css" conditional="if IE" />
        </Styles>
    </SkinTemplate>
</SkinTemplates>

If you compare it to the old format, you’ll notice the <Skins> element is gone and there’s no need to specify the TemplateFolder since it’s assumed the folder containing this file is the template folder.

Hopefully soon, we’ll provide more comprehensive documentation on our wiki so you don’t have to go hunting around my blog for information on how to skin your blog. My advice is to copy an existing skin and just tweak it.

0 comments suggest edit

Wow, has it already been over a year since the last major version of Subtext? Apparently so.

Today I’m excited to announce the release of Subtext 2.5. Most of the focus on this release has been under the hood, but there are some great new features you’ll enjoy outside of the hood.

Major new features

  • New Admin Dashboard: When you log in to the admin section of your blog after upgrading, you’ll notice a fancy schmancy new dashboard (subtext-dashboard) that summarizes the information you care about in a single page. The other thing you’ll notice in the screenshot is that the admin section received a face lift with a new, more polished look and feel and many usability improvements.
  • Improved Search: We’ve implemented a set of great search improvements. The biggest change is the work that Simone Chiaretta did integrating Lucene.NET, a .NET search engine, as our built-in search engine. Be sure to check out his tutorial on Lucene.NET. Also, when visitors click through to your blog from a search engine result, we’ll show them related blog posts. Subtext also implements the OpenSearch API.

Core Changes

We’ve put in huge amounts of effort into code refactoring, bulking up our unit test coverage, bug fixes, and performance improvements. Here’s a sampling of some of the larger changes.

  • Routing: We’ve replaced the custom regex-based URL handling with ASP.NET Routing, using custom routes based on the page routing work in ASP.NET 4. This took a lot of work, but will lead to better control over URLs in the long run.
  • Dependency Injection: Subtext now uses Ninject, an open source Dependency Injection container, for its Inversion of Control (IoC) needs. This improves the extensibility of Subtext.
  • Code Reorganization and Reduced Assemblies: A lot of work went into better organizing the code into a more sane and understandable structure. We also reduced the overall number of assemblies in an attempt to improve application startup times.
  • Performance Optimizations: We made a boatload of code-focused performance improvements as well as caching improvements to reduce the number of SQL queries per request.
  • Skinning Improvements: This topic deserves its own blog post, but to summarize, skins are now fully self-contained within a folder. Prior to this version, adding a new skin required adding a skin folder to the /Skins directory and then modifying a central configuration file. We’ve removed that second step by having each skin contain its own manifest, if needed. Most skins don’t need the manifest if they follow a set of skin conventions. For a list of breaking changes, check out our wiki.

Upgrading

Because of all the changes and restructuring of files and directories, upgrading is not as straightforward as it has been in the past.

To help with all the necessary changes, we’ve written a tool that will attempt to upgrade your existing Subtext blog.

I’ve recorded a screencast that walks through how to upgrade a blog to Subtext 2.5 using this new tool.

Installation

Installation should be as easy and straightforward as always, especially if you install it using the Web Platform Installer (note: it may take up to a week for the new version to show up in Web PI). If you’re deploying to a host that supports SQLExpress, we’ve included a freshly installed database in the App_Data folder.

To install, download the zip file here and follow the usual Subtext installation instructions.

More information

We’ll be updating our project website with more information about this release in the next few weeks and I’ll probably post a blog post here and there.

I’d like to thank the entire Subtext team for all their contributions. This release probably contains the most diversity of patches and commits of all our releases with lots of new people pitching in to help.

personal 0 comments suggest edit

I saw a recent Twitter thread discussing the arrogance of Steve Jobs. One person (ok, it was my buddy Rob) postulated that it was this very arrogance that led Apple to their successes.

I suppose it’s quite possible that it was a factor, but I tend to think Steve Jobs’s vision and drive were much bigger factors.

This idea reflects a pervasive belief out there that arrogance is excusable, perhaps even acceptable and admirable, in successful people and institutions. In contrast, I think we’d all agree that arrogance is universally detestable in unsuccessful people.

But is arrogance necessary for success? I certainly don’t think so. I think there’s an alternative characteristic that can lead to just as much success.

Joy.

pele (pic)

My example here is the most successful national soccer team ever, Brazil. They’ve won more World Cups than any other team, and yet the one word you’d be hard-pressed to find anyone using to describe them is “arrogant.” (Yes, I know that many from Argentina would disagree, but this is the perception out there.) ;)

Instead, the word often associated with them is “Joy.” When Brazil plays, their joy for the beautiful game is so infectious you can’t help but share in the joy when they win. Heck, even as you’re grumbling about your own team losing to them, it’s hard not to join in the Samba spirit (again, unless you’re from Argentina).

This is a team that has been incredibly successful over the years, and arrogance was never necessary.

I think there are probably many examples we could point to in the technology and business world where incredible success and visionary leadership came from joy in the work rather than from arrogance. Have any examples for me? Leave them in the comments.

The World Cup starts in 6 days! I’ll try not to make all my posts soccer-themed if I can help it. :)