asp.net, asp.net mvc, nuget, code comments edit

At the risk of getting punched in the face by my friend Miguel, I’m not afraid to admit I’m a fan of responsible use of dependency injection. However, for many folks, attempting to use DI runs into a roadblock when it comes to ASP.NET HttpModules.

In the past, I typically used “Poor man’s DI” for this. I wasn’t raised in an affluent family, so I guess I don’t have as much of a problem with this approach as others do.

However, when the opportunity for something better comes along, I’ll take it, Daddy Warbucks. I was refactoring some code in Subtext when it occurred to me that the new ability to register HttpModules dynamically using the PreApplicationStartMethodAttribute could come in very handy.

Unfortunately, the API only allows registering a module by type, which means the module requires a default constructor. However, as with many problems in computer science, the solution is another layer of indirection.

In this case, I wrote a container HttpModule that itself calls into the DependencyResolver feature of ASP.NET MVC 3 in order to find and initialize the http modules registered via your IoC/DI container. The approach I took is very similar to one that Mauricio Scheffer blogged about a while ago.

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Mvc;
using HttpModuleMagic;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(ContainerHttpModule), "Start")]
namespace HttpModuleMagic
{
  public class ContainerHttpModule : IHttpModule
  {
    public static void Start()
    {
      DynamicModuleUtility.RegisterModule(typeof(ContainerHttpModule));
    }

    // Resolve the modules lazily, on first use, so the DI container
    // has been configured by the time we ask it for the modules.
    Lazy<IEnumerable<IHttpModule>> _modules
      = new Lazy<IEnumerable<IHttpModule>>(RetrieveModules);

    private static IEnumerable<IHttpModule> RetrieveModules()
    {
      return DependencyResolver.Current.GetServices<IHttpModule>();
    }

    public void Dispose()
    {
      var modules = _modules.Value;
      foreach (var module in modules)
      {
        var disposableModule = module as IDisposable;
        if (disposableModule != null)
        {
          disposableModule.Dispose();
        }
      }
    }

    public void Init(HttpApplication context)
    {
      var modules = _modules.Value;
      foreach (var module in modules)
      {
        module.Init(context);
      }
    }
  }
}

The code is pretty straightforward, though there’s a lot going on here. At the top of the file we use the PreApplicationStartMethodAttribute, which allows the http module to register itself! Just reference the assembly containing this code and you’re all set to go. No mucking around with web.config!

Note that this code does require that your application has the following two assemblies in bin:

  1. System.Web.Mvc.dll 3.0
  2. Microsoft.Web.Infrastructure.dll 1.0

The nice part is that after referencing this assembly, I can simply register the http modules using my favorite DI container and I’m good to go. For example, I installed the Ninject.Mvc3 package and added the following Subtext http module bindings:

kernel.Bind<IHttpModule>().To<BlogRequestModule>();
kernel.Bind<IHttpModule>().To<FormToBasicAuthenticationModule>();
kernel.Bind<IHttpModule>().To<AuthenticationModule>();
kernel.Bind<IHttpModule>().To<InstallationCheckModule>();
kernel.Bind<IHttpModule>().To<CompressionModule>();

There is one caveat I should point out. You’ll notice that when the container http module is disposed, Dispose is called on each of the registered http modules.

This could be problematic if you happen to register them in singleton scope. In my case, all of my modules are stateless and the Dispose method is a no-op, which in general is a good idea unless you absolutely need to hold onto state.

If your modules do hold onto state and need to be disposed of, you’ll have to be careful to scope your http modules appropriately. It’s possible for multiple instances of your http module to be created in an ASP.NET application.

DI for a Single Http Module

Just in case your DI container doesn’t support registering multiple instances of a type (in other words, it doesn’t support the DependencyResolver.GetServices call), or it can’t handle the scoping properly and your http module holds onto state that needs to be disposed at the right time, I wrote another class for registering an individual module while still allowing your DI container to hook into the creation of that one module.

In this case, you won’t be using DI to register the set of http modules. But you will be using it to create instances of the modules that you register.

Here’s the class.

using System;
using System.Web;
using System.Web.Mvc;

namespace HttpModuleMagic
{
  public class ContainerHttpModule<TModule> 
    : IHttpModule where TModule : IHttpModule
  {
    Lazy<IHttpModule> _module = new Lazy<IHttpModule>(RetrieveModule);

    private static IHttpModule RetrieveModule()
    {
      // Resolve the specific module type, not IHttpModule in general.
      return DependencyResolver.Current.GetService<TModule>();
    }

    public void Dispose()
    {
      _module.Value.Dispose();
    }

    public void Init(HttpApplication context)
    {
      _module.Value.Init(context);
    }
  }
}

This module is much like the other container one, but it only wraps a single http module. You would register it like so:

DynamicModuleUtility.RegisterModule(typeof(ContainerHttpModule<MyHttpModule>));

In this case, you’d need to set up your own PreApplicationStartMethod attribute or use WebActivator.
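
For example, a minimal self-registration class might look like the following sketch. ModuleRegistration and MyHttpModule are hypothetical names of your own, and the commented-out variant assumes you’ve installed the WebActivator package:

using System.Web;
using HttpModuleMagic;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(ModuleRegistration), "Start")]
// Or, with the WebActivator package installed:
// [assembly: WebActivator.PreApplicationStartMethod(typeof(ModuleRegistration), "Start")]

public static class ModuleRegistration
{
  public static void Start()
  {
    // MyHttpModule is a placeholder for your own IHttpModule implementation.
    DynamicModuleUtility.RegisterModule(typeof(ContainerHttpModule<MyHttpModule>));
  }
}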

And of course, I created a little NuGet package for this.

Install-Package HttpModuleMagic

Note that this requires that you install it into an application with the ASP.NET MVC 3 assemblies.

personal comments edit

I’m reading through the archives of a blog where the author posts something random every Friday (yesterday was Thursday, and tomorrow is Saturday). His Friday posts are completely unrelated to the main theme and content of his blog.

I like that idea a lot. I don’t blog as much as I used to mostly because I feel the need to spend so much time on each blog post. A lot of the posts I write take a bit of research and experimentation before I’m ready to post them.

But a random thought? I can pull one of those out of my ascot any day of the week, and twice on Friday. But I’ll only do it once.

And yes, thanks for asking, but the thought has occurred to me that I already have another medium where I post random thoughts 7 days a week, Twitter (I’m @haacked on Twitter).

But my twist is that every Friday, I’ll post something random, funny, amusing, or whatever to this blog, and while I’ll use more than 140 characters, I’ll always end the post with something I appreciated either during the week or in general.

This Friday’s random thought is about starting a random thought Friday blog series and whether this will end up being a one-post series like so many others. So there, I’ve done that part.

And the thing I appreciate this past week is how nice it was to take a day off and spend it with my wife. Oh, and Instagram. I appreciate Instagram a lot. Perhaps too much. I’ll try easing off the Tilt-Shift from now on.

my-lovely-wife

Here’s where I violate my wife’s privacy and post a picture of her on a ferry to Bainbridge Island on my blog. We had a really nice outing on Tuesday.

asp.net, asp.net mvc, code comments edit

When you build an ASP.NET MVC 3 application and are ready to deploy it to your hosting provider, there are a set of assemblies you’ll need to include with your application for it to run properly, unless they are already installed in the Global Assembly Cache (GAC) on the server.

In previous versions of ASP.NET MVC, this set of assemblies was rather small. In fact, it was only one assembly, System.Web.Mvc.dll, though in the case of ASP.NET MVC 1.0, if you didn’t have SP1 of .NET 3.5 installed, you would have also needed to deploy System.Web.Abstractions.dll and System.Web.Routing.dll.

But ASP.NET MVC 3 makes use of technology shared with the new ASP.NET Web Pages product such as Razor. If you’re not familiar with ASP.NET Web Pages and how it fits in with Web Matrix and ASP.NET MVC, read David Ebbo’s blog post, How WebMatrix, Razor, ASP.NET Web Pages, and MVC fit together.

If your server doesn’t have ASP.NET MVC 3 installed, you’ll need to make sure the following set of assemblies are deployed in the bin folder of your web application:

  • Microsoft.Web.Infrastructure.dll
  • System.Web.Helpers.dll
  • System.Web.Mvc.dll
  • System.Web.Razor.dll
  • System.Web.WebPages.Deployment.dll
  • System.Web.WebPages.dll
  • System.Web.WebPages.Razor.dll

In this case, it’s not as simple as looking at your list of assembly references and setting Copy Local to True as I’ve instructed in the past.

As you can see in the following screenshot, not every assembly is referenced. Not all of these assemblies are meant to be programmed against, so it’s not necessary to actually reference each of these assemblies. They just need to be available on the machine, either from the GAC or in the bin folder.

referenced-assemblies

But the Visual Web Developer team has you covered. They added a feature specifically for adding these deployable assemblies. Right click on the project and select Add Deployable Assemblies and you’ll see the following dialog.

add-deployable-assemblies

When building an ASP.NET MVC application, you only need to check the first option. Ignore the fact that the second one says “Razor”. “ASP.NET Web Pages with Razor syntax” was the official full name of the product we simply call ASP.NET Web Pages now. Yeah, it’s confusing.

Note that there’s also an option for SQL Server Compact, but that’s not strictly necessary if you’ve installed SQL Server Compact via NuGet.

So what happens when you click “OK”?

bin-deployable-assemlies

A special folder named _bin_deployableAssemblies is created and the necessary assemblies are copied into this folder. Web projects have a built-in build task that copies any assemblies in this folder into the bin folder when the project is compiled.

Note that this dialog did not add any assembly references to these assemblies. That ensures that the types in these assemblies don’t pollute Intellisense, while still being available to your deployed application. If you actually need to use a type in one of these assemblies, you’re free to reference them.

So here’s the kicker. If you’re building a web application and you need an assembly deployed but don’t want it referenced and don’t want it checked into the bin directory, you can simply add this folder yourself and put your own assemblies in it.

If you’ve ever run into a problem where an ASP.NET MVC site you developed locally doesn’t work when you deploy it, this dialog may be just the ticket to fix it.

open source, nuget comments edit

Most developers I know are pretty anal about the formatting of their source code. I used to think I was pretty obsessive compulsive about it, but then I joined Microsoft and faced a whole new level of OCD (Obsessive Compulsive Disorder). For example, many require all using statements to be sorted and unused statements to be removed, which was something I never cared much about in the past.

There’s no shortcut that I know of for removing unused using statements; simply right click in the editor and select Organize Usings > Remove and Sort in the context menu.

SubtextSolution - Microsoft Visual Studio (Administrator)

In Visual Studio, you can specify how you want code formatted by launching the Options dialog via Tools > Options and then selecting the Text Editor node. Look under the language you care about and you’ll find multiple formatting options, providing hours of fun fodder for religious debates.

Options

Once you have the settings just the way you want them, you can select Edit > Advanced > Format Document (or simply use the shortcut CTRL + K, CTRL + D) to format the document according to your conventions.

The problem with this approach is it’s pretty darn manual. You’ll have to remember to do it all the time, which if you really have OCD, is probably not much of a problem.

However, for those who keep forgetting these two steps and would like to avoid facing the wrath of nitpicky code reviewers (try submitting a patch to NuGet to experience the fun), you can install the Power Commands for Visual Studio via the Visual Studio Extension Manager. It provides an option to both format the document and sort and remove using statements every time you save the document.

I’m actually not a fan of having using statements removed on every save because I save often and it tends to remove namespaces containing extension methods that I will need, but haven’t yet used, such as System.Linq.

Formatting Every Document

Also, if you have a large solution with many collaborators, the source code can start to drift away from your OCD ideals over time. That’s why it would be nice to have a way of applying formatting to every document in your solution.

One approach is to purchase ReSharper, which I’m pretty sure can reformat an entire solution and adds a lot more knobs you can tweak for the formatting.

But for you cheap bastards, there are a couple of free approaches you can take. One approach is to write a macro, like Brian Schmitt did. His doesn’t sort and remove using statements, but it’s a one line addition to add that.

Of course, the approach I was interested in trying was to use Powershell to do it within the NuGet Package Manager Console. A couple nights ago I was chatting with my co-worker and hacker extraordinaire, David Fowler, way too late at night about doing this and we decided to have a race to see who could implement it first.

I knew I had no chance unless I cheated, so I wrote this monstrosity (I won’t even post it here, I’m so ashamed). David calls it “PM code”, which in this case was well deserved as it was simply a proof of concept, but also because it’s wrong: it doesn’t traverse the files recursively. But hey, I was first! And at least I gave him the code needed to actually format the document.

It was very late and I went to sleep knowing in the morning, I’d see something elegant from David. I was not disappointed as he posted this gist.

He wrote a generic command named Recurse-Project that recursively traverses every item in every project within a solution and calls an action against each item.

That allowed him to easily write Format-Document which leverages Recurse-Project and automates calling into Visual Studio’s Format Document command.

function Format-Document {
  param(
    [parameter(ValueFromPipelineByPropertyName = $true)]
    [string[]]$ProjectName
  )
  Process {
    $ProjectName | %{ 
      Recurse-Project -ProjectName $_ -Action {
        param($item)
        if($item.Type -eq 'Folder' -or !$item.Language) {
          return
        }
    
        # The GUID is the vsViewKindCode constant: open the item in the code editor view.
        $win = $item.ProjectItem.Open('{7651A701-06E5-11D1-8EBD-00A0C90F26EA}')
        if ($win) {
          Write-Host "Processing `"$($item.ProjectItem.Name)`""
          [System.Threading.Thread]::Sleep(100)
          $win.Activate()
          $item.ProjectItem.Document.DTE.ExecuteCommand('Edit.FormatDocument')
          $item.ProjectItem.Document.DTE.ExecuteCommand('Edit.RemoveAndSort')
          $win.Close(1)
        }
      }
    }
  }
}
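
Once the function is available in your console (more on that below), you can run it against every project in the solution. The exact invocation below is my assumption based on the ProjectName pipeline binding above, but something like this should do it:

Get-Project -All | Format-Document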

Adding Commands to NuGet Powershell Profile

Great! He did the work for me. So what’s the best way to make use of his command? I could add it to a NuGet package, but that would require installing the package into a project any time I wanted to use the command. That’s not very usable. NuGet doesn’t yet support installing PS scripts at the machine level, though it’s something we’re considering.

To get this command available on my machine so I can run it no matter which solution is open, I need to set up my NuGet-specific Powershell profile as documented here.

The NuGet Powershell profile script is located at:

%UserProfile%\Documents\WindowsPowerShell\NuGet_profile.ps1

The easiest way to find the profile file is to type $profile within the NuGet Package Manager Console. The profile file doesn’t necessarily exist by default, but it’s easy enough to create it. The following screenshot shows a session where I did just that.

nuget-ps-profile

The mkdir -Force (Split-Path $profile) command creates the WindowsPowerShell directory if it doesn’t already exist.

Then simply attempting to open the script in Notepad prompts you to create the file if it doesn’t already exist. Within the profile file, you can change PowerShell settings or add new commands you might find useful.
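
In other words, the session boils down to something like the following (the path shown is hypothetical):

PM> $profile
C:\Users\You\Documents\WindowsPowerShell\NuGet_profile.ps1
PM> mkdir -Force (Split-Path $profile)
PM> notepad $profile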

For example, you can cut and paste the code in David’s gist and put it in here. Just make sure to omit the first example line in the gist which simply prints all project items to the console.

When you close and re-open Visual Studio, the Format-Document command will be available in the NuGet Package Manager Console. When you run the command, it will open each file and run the format command on it. It’s rather fun to watch as it feels like a ghost has taken over Visual Studio.

The script has a Thread.Sleep call for 100ms to work around a timing issue when automating Visual Studio. It can take a brief moment after you open the document before you can activate it. It doesn’t hurt anything to choose a lower number. It only means you may get the occasional error when formatting a document, but the script will simply move to the next document.

The following screenshot shows the script in action.

formatting-documents

With this in place, you can now indulge your OCD and run the Format-Document command to clean up your entire solution. I just ran it against Subtext and now can become the whitespace Nazi I’ve always wanted to be.

open source, personal comments edit

Almost two years ago, I announced the launch of http://letmebingthatforyou.com/, a blatant and obvious rip-off of the Let me Google that for you website.

The initial site was created by Maarten Balliauw and Juliën Hanssens in response to a call for help I made. It was just something we did for fun. I’ve been maintaining the site privately always intending to spend some time to refresh the code and open source it.

Just recently, I upgraded the site to ASP.NET MVC 3, refactored a bunch of code, and moved the site to AppHarbor.

Why AppHarbor?

I’ve heard such good things about how easy it is to deploy to AppHarbor that I wanted to try it out firsthand, and this little project seemed like a perfect fit.

I had been working on the code in a private Mercurial repository, so it was trivial to push it to BitBucket. From there, integrating the BitBucket account with AppHarbor is straightforward.

So now, my deployment workflow is really easy when working on this simple project:

  1. Make some changes and commit them into my local HG (Mercurial) repository. I have my local repository syncing to all my machines using Live Mesh.
  2. At some point, when I’m ready to publish the changes, I run the hg push command on my repository.
  3. That’s it! AppHarbor builds my project and if all the unit tests pass, it deploys it live.

I’m not planning to spend a lot of time on Let Me Bing That For You. It’s just a fun little side project that allows me to play around with ASP.NET MVC 3, jQuery, etc. If you want to look at the source, or contribute a patch, check it out on BitBucket.

nuget, open source comments edit

It’s a common refrain you hear when it comes to documentation for open source projects. It typically sucks! In part, because nobody wants to work on docs. But also in part because good documentation is challenging to write.

What is good documentation in the first place? The following is a list of some qualities that make for great documentation. This list is by no means complete. Good docs are…

  • Written for the right audience
  • Comprehensive and accurate
  • Easily browsable and searchable
  • Written in a clear and concise language
  • Laid out in a readable format
  • Versioned with the source code

While it’s challenging to write and maintain great documentation, my co-worker Matthew was up to the challenge of building a simple Markdown-based system to help us manage our documentation. Read about our new docs site in his blog post, Introducing NuGet Docs: Community Driven Documentation.

Our goal in the long run is to have a great set of docs for NuGet with help from the community. So if you’re interested in helping out, please visit our NuGet Docs project page and let us know. The docs live in their own Mercurial repository, so we can give a lot more people write access directly.

So please, if you’re looking for a low-commitment, easy way to dip a toe into the open source waters, in general or with NuGet specifically, consider helping us with our docs. It’s a great way to get started with OSS. It’s how I got my start a long time ago, by contributing docs to RSS Bandit.

asp.net mvc, asp.net comments edit

In April we announced the release of ASP.NET MVC 3 Tools Update which added Scaffolding, HTML 5 project templates, Modernizr, and EF Code First Magic Unicorn Edition.

Today, just shy of one month later, I’m happy to announce that this release is now available in nine other languages via the Web Platform Installer (Web PI).

We’ve also included release notes translated into the nine languages as well.

The best way to install the language specific version of ASP.NET MVC 3 is via the Web Platform installer because it will chain in the full installer. If you install the language specific version directly from the Download Details page, you’ll need to run two installers, the full installer and then the language pack installer.

asp.net, asp.net mvc comments edit

ASP.NET MVC project templates include support for precompiling views, which is useful for finding syntax errors within your views at build time rather than at runtime.

In case you missed the memo, the following steps outline how to enable this feature.

  • Right click on your ASP.NET MVC project in the Solution Explorer
  • Select Unload Project in the context menu. Your project will show up as unavailable.
    unavailable-project
  • Right click on the project again and select Edit ProjectName.csproj.

This will bring up the project file within Visual Studio. Search for the entry <MvcBuildViews> and set the value to true. Then right click on the project again and select Reload Project.
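
After the change, the relevant property should look like the following (in the default project templates it typically lives in the main PropertyGroup, initially set to false; the snippet here is trimmed down to just this one property):

<PropertyGroup>
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>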

Compiling in a build environment

If you search for MvcBuildViews on the web, you’ll notice a lot of people having problems when attempting to build their projects in a build environment. For example, this StackOverflow question describes an issue when compiling MVC on a TFS Build. I had an issue when trying to deploy an ASP.NET MVC 3 application to AppHarbor.

It turns out we had a bug in our project templates in earlier versions of ASP.NET MVC that we fixed in ASP.NET MVC 3 Tools Update.

But if you created your project using an older version of ASP.NET MVC including ASP.NET MVC 3 RTM (the one before the Tools Update), your csproj/vbproj file will still have this bug.

To fix this, look for the following element within your project file:

<Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)\..\$(ProjectName)" />
</Target>

And replace it with the following.

<Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
  <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
</Target>

After I did that, I was able to deploy my application to AppHarbor without any problems.

Going back to the StackOverflow question I mentioned earlier, notice that the accepted answer is not the best answer. Jim Lamb provided a better answer and is the one who provided the solution that we use in ASP.NET MVC 3 Tools Update. Thanks Jim!

nuget, code, open source comments edit

Not too long ago, I posted a survey on my blog asking a set of questions meant to gather information that would help the NuGet team make a decision about a rather deep change.

You can see the results of the survey here.

If there’s one question that got to the heart of the matter, it’s this one.

survey-question-result

We’re considering a feature that would only allow a single package version per solution. As you can see by the response to the question, that would fit what most people need just fine, though there are a small number of folks that might run into problems with this behavior.

One variant of this idea would allow multiple package versions if the package doesn’t contain any assemblies (for example, a JavaScript package like jQuery).

Thanks again for filling out the survey. We think we have a pretty good idea of how to proceed at this point, but there’s always room for more feedback. If you want to provide more feedback on this proposed change, please review the spec here and post your thoughts in our discussion forum in the thread dedicated to this change.

The spec describes what pain point we’re trying to solve and shows a few examples of how the behavior change would affect common scenarios, so it’s worth taking a look at.

comments edit

On a personal level, NuGet has been an immensely satisfying project to work on. I’ve always enjoyed working on open source projects with an active community in my spare time, but being able to do it as part of my day job is really fulfilling.

And I don’t think I’m alone in this, as evidenced by this tweet from my co-worker Matt Osborn, who was contributing to NuGet on his own time from the early days.

Matthew M. Osborn (osbornm) on Twitter - Google Chrome

A big part of the satisfaction comes from being able to collaborate with members of the community, aka you, in a deeper manner than before, which includes accepting contributions.

If you go to the OSS community site Ohloh.net, you can see a list of contributors to NuGet. As you might expect, the top five contributors are Microsofties who work on NuGet as part of their day job. But three of the top ten are external contributors from the community.

NuGet Contributors - Ohloh - Google Chrome

It looks like 21 of the 36 contributors are external. Take these numbers with a slight grain of salt: we use a distributed version control system, and it appears some developers are counted twice because they used a different email address on a different computer.

Note to those developers! Create an account on Ohloh.net and claim those check-ins! Ohloh will provide a merged view of your contributions.

Contributions come in all sizes. We’ve had folks come in and “scratch an itch” with single commits adding things like support for WiX or the .NET Micro Framework. Such commits form a key pillar of open source software as Linus Torvalds stated when discussing a Microsoft patch to Linux:

I agree that it’s driven by selfish reasons, but that’s how all open source code gets written! We all “scratch our own itches”.

Other contributions took a lot of work among multiple community members, such as the work to fix proxy issues within NuGet. We didn’t have the ability to test the wide range of proxy servers people had in the wild. Fortunately, several folks in our forums worked on this and tested daily builds till we got it working in Package Explorer. This will soon be rolled into NuGet proper. Thanks!

As with most open source projects, commits do not tell the full story of a community’s contributions to a project. In some cases, these folks were involved in a lot of design and verification work that ended up being perhaps one commit.

Our discussion boards are full of active participants telling us we’re doing it wrong, or doing it right, or what we need to do. And that’s great! The commitment of their time to help us shape a better project is greatly appreciated. Even those who come in and criticize the product are making a noteworthy contribution as they’ve taken the time to give us food for thought. As they say, indifference is worse than hate and we’ve found a lot of folks who are not indifferent.

Getting Results!

I think all this community contribution to NuGet is a big factor in the success of NuGet. With your help (and a few recent tweaks to their popularity algorithm), we’ve become the #1 most popular extension on the Visual Studio Extension Gallery website.

Visual Studio Gallery - Google Chrome

If you enjoy using NuGet and have a moment, consider going to the site and rating NuGet.

Moving Forward

As well as I think NuGet is doing, I’m by no means satisfied. In fact, I’m probably one of the most critical people when it comes to where NuGet is today as compared to where I’d like NuGet to be.

My team is a very small team. If we’re going to make even more progress than we have, we’re going to need to cultivate contributors, both drive-by and consistent. That seems to me like the best way to scale out our development.

If you have tips on how to do that best, do let me know! In the meanwhile, I’ll brainstorm some ideas on how we can encourage more people to participate in the development of NuGet.

nuget, code, open source comments edit

When installing a package into a project, NuGet creates a packages.config file within the project (if it doesn’t already exist) which is an exact record of the packages that are installed in the project. At the same time, a folder for the package is created within the solution level packages folder containing the package and its contents.
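
For example, after installing a couple of packages, a project’s packages.config might look something like this (the package ids and versions here are purely illustrative):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Ninject" version="2.2.1.4" />
  <package id="elmah" version="1.2.0.1" />
</packages>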

Currently, it’s expected that the packages folder is committed to source control. The reason for this is that certain files from the package, such as assemblies, are referenced from their location in the packages folder. The benefit of this approach is that a package that is installed into multiple projects does not create multiple copies of the package nor the assembly. Instead, all of the projects reference the same assembly in one location.

If you commit the entire solution and packages folder into source control, and another user gets latest from source control, they are in the same state you are in. If you omitted the packages folder, the project would fail to build because the referenced assembly would be missing.

However

This approach doesn’t work for everyone. We’ve heard from many folks that they don’t want their packages folder to be checked into their source control.

Fortunately, you can enable this workflow today by following David Ebbo’s approach described in his blog post, Using NuGet without committing Packages.
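
His approach boils down to adding NuGet.exe to your repository and wiring up a pre-build event that restores the packages listed in packages.config, along these lines (the paths are assumptions that depend on your repository layout):

"$(SolutionDir)Tools\NuGet.exe" install "$(ProjectDir)packages.config" -o "$(SolutionDir)packages"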

But in NuGet 1.4 we’re planning to make it integrated into NuGet. We will be adding a new feature to restore any missing packages and the packages folder based on the packages.config file in each project when you attempt to build the project. This ensures that your application will compile even if the packages folder is missing at the time, which might be the case if you don’t commit it to source control.

Requirements

We have certain requirements we plan to meet with this feature. Primarily, it has to work in a Continuous Integration (CI) server scenario. So it must work within Visual Studio when you build, and also outside of Visual Studio when you use msbuild.exe to compile the solution.

For more details, please refer to:

If you have feedback on the design of this feature, please provide it in the discussion thread. Also, do keep in mind that this next release is our first iteration to address this scenario. We think we’ll hit the primary use cases, but we may not get everything. But don’t worry, we’ll continue to release often and address scenarios that we didn’t anticipate.

Thanks for your support!

nuget, code, open source comments edit

In continuing our efforts to release early, release often, I’m happy to announce the release of NuGet 1.3!

Upgrade! If you go into Visual Studio and select Tools > Extension Manager, click on the Update tab and you’ll see that this new version of NuGet is available as an update. Click the Upgrade button and you’re all set. It only takes a minute and it really is that easy to upgrade.

Extension Manager

As always, there’s a new version of NuGet.exe corresponding with this release as well as a new version of the Package Explorer. If you have a fairly recent version of NuGet.exe, you can upgrade it by simply running the following command:

NuGet.exe u

Expect a new version of Package Explorer to be released soon as well. It is a click once application so all you need to do is open it and it will prompt you to upgrade it when an upgrade is available.

There are a lot of cool improvements and bug fixes in this release, as you can see in the release announcement. One of my favorite features is the ability to quickly create a package from a project file (csproj, vbproj) and push the package with debugging symbols to the server. David Ebbo wrote a great blog post about this feature, and Scott Hanselman and I demonstrated it 20 minutes into our NuGet in Depth talk at Mix 11.

asp.net, asp.net mvc, code comments edit

Say you want to apply an action filter to every action except one. How would you go about it? For example, suppose you want to apply an authorization filter to every action except the action that lets the user login. Seems like a pretty good idea, right?

Currently, it takes a bit of work to do this. If you add a filter to the GlobalFilters.Filters collection, it applies to every action, which in the previous scenario would mean you already need to be authorized to log in. Now that is security you can trust!

security

You can also manually add the filter attribute to every controller and/or action method except one. This solution is a potential bug magnet since you would need to remember to apply this attribute every time you add a new controller. Update: There’s yet another approach you can try, which is to write a custom authorize attribute as described in this blog post on Securing your ASP.NET MVC 3 Application.

Fortunately, ASP.NET MVC 3 introduced a new feature called filter providers which allow you to write a class that will be used as a source of action filters. For more details about what filter providers are, I highly recommend reading Brad Wilson’s blog post on filters.

In this case, what I need to write is a conditional action filter. I actually started writing one during my ASP.NET MVC 3 presentation at this past Mix 11 but never finished the demo. It was one of the many mistakes that inspired my blog post on presentation tips.

In this blog post, I’ll finish what I started and walk through an implementation of a conditional filter provider that lets us apply filters to action methods based on any criteria we can think of.

Here’s the approach I took. First, I wrote a custom filter provider.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class ConditionalFilterProvider : IFilterProvider {
  private readonly 
    IEnumerable<Func<ControllerContext, ActionDescriptor, object>> _conditions;

  public ConditionalFilterProvider(
    IEnumerable<Func<ControllerContext, ActionDescriptor, object>> conditions)
  {
    _conditions = conditions;
  }

  public IEnumerable<Filter> GetFilters(
      ControllerContext controllerContext, 
      ActionDescriptor actionDescriptor) {
    return from condition in _conditions
           select condition(controllerContext, actionDescriptor) into filter
           where filter != null
           select new Filter(filter, FilterScope.Global, null);
  }
}

The code here is fairly straightforward despite all the angle brackets. We implement the IFilterProvider interface, but only return the filters that meet our set of criteria, with each criterion represented as a Func. Each Func gets passed two pieces of information, the current ControllerContext and an ActionDescriptor. Through the ActionDescriptor, we can get access to the ControllerDescriptor.

The ActionDescriptor and ControllerDescriptor are abstractions of actions and controllers that don’t assume the controller is a type and the action is a method. That’s why the conditions are written against these abstractions.

So now, to use this provider, I simply need to instantiate it and add it to the global filter provider collection (or register it via my Dependency Injection container like Brad described in his blog post).

Here’s an example of creating a conditional filter provider with two conditions. The first adds an instance of MyFilter to every controller except HomeController. The second adds SomeFilter to any action that starts with “About”. These scenarios are a bit contrived, but I bet you can think of a lot more interesting and powerful uses for this.

IEnumerable<Func<ControllerContext, ActionDescriptor, object>> conditions = 
    new Func<ControllerContext, ActionDescriptor, object>[] { 
    
    (c, a) => c.Controller.GetType() != typeof(HomeController) ? 
      new MyFilter() : null,
    (c, a) => a.ActionName.StartsWith("About") ? new SomeFilter() : null
};

var provider = new ConditionalFilterProvider(conditions);
FilterProviders.Providers.Add(provider);

Once we create the filter provider, we add it to the filter provider collection. Again, you can also do this via dependency injection instead of adding it to this static collection.

I’ve posted the conditional filter provider as a package in my personal NuGet repository I use for my own little samples located at http://nuget.haacked.com/nuget/. Feel free to add that URL as a package source and Install-Package ConditionalFilterProvider in order to get the source.
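
From the Package Manager Console, that looks like this (alternatively, register the feed under Tools > Options so you can omit the -Source flag):

PM> Install-Package ConditionalFilterProvider -Source http://nuget.haacked.com/nuget/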

Tags: aspnetmvc, asp.net, filter, filter providers

code, open source comments edit

Eric S. Raymond, in his famous essay The Cathedral and the Bazaar, states:

Release early. Release often. And listen to your customers.

This advice came from Eric’s experience of managing an open source project as well as his observations of how the Linux kernel was developed.

But why? Why release often? Do I really have to listen to my customers? They whine all the time! To question this advice is sacrilege to those who have this philosophy so deeply ingrained. It’s obvious!

Or is it?

When I was asked this in earnest, it took me a moment to answer. It’s one of those aphorisms you know is true, but perhaps you’ve never had to explain it before. It’s hard to answer not because there isn’t a good answer, but because it’s difficult to know where to begin.

It’s healthy to challenge conventional wisdom from time to time to help avoid the cargo cult mentality and remind oneself all the reasons good advice is, well, good advice.

One great approach is to take a step back and imagine explaining this to someone who isn’t ingrained in software development, such as a business or marketing person. Why is releasing early and often a good thing? It helps to clarify that releasing early doesn’t mean waking up at 3:00 AM to release, though that may happen from time to time.

In this blog post, I’ll look into this question as well as a couple of other related questions that came to mind as I thought about this topic:

  • If releasing often is a good thing, why not release even more often?
  • What factors affect how often is often enough?
  • What are common objections to releasing often?
  • Why does answering a question always leave you with more questions?

I’ll try answering these questions as best as I can based on my own experiences and research.

Why is it a good thing?

What are the benefits of releasing early and often? As I thought through this question and looked at the various responses that I received from asking the Twitterista for their opinions, three key themes kept recurring. These themes became the summary of my TL;DR version of the answer:

  1. It results in a better product.
  2. It results in happier customers.
  3. It fosters happier developers.

So how does it accomplish these three things? Let’s take a look.

It provides a rapid feedback loop

Steve Smith had this great observation (emphasis mine):

The shorter the feedback loop, the faster value is added. Try driving while only looking at the road every 10 secs, vs. constantly.

Driving like that is a sure formula for receiving a Darwin Award.

Every release is an opportunity to stop theorizing about what customers want and actually put your hypotheses to the test by getting your product in their hands. The results are often surprising. After all, who expected Twitter to be so big?

Matt Mullenweg, the creator of WordPress, put it best in his blog post, 1.0 is the loneliest number:

Usage is like oxygen for ideas. You can never fully anticipate how an audience is going to react to something you’ve created until it’s out there. That means every moment you’re working on something without it being in the public it’s actually dying, deprived of the oxygen of the real world.

This has played out time and time again in my experience. It happened recently with NuGet when we released a bug that caused sluggishness in certain TFS scenarios, something very difficult to discover without putting the product in real customers’ hands to use in real scenarios. Thankfully we didn’t have to wait a year to release a proper fix.

As Miguel De Icaza points out,

Early years of the project, you get to course correct faster, keep up with demand/needs

It’s not just customer demands that require course corrections. At times, changing market conditions and other external factors may require you to quickly adjust your planned feature set and come out with a release in response. Short iterations give you more agility to respond to such events, keeping your product relevant.

It gets features and bug fixes into customers’ hands faster

This point is closely related to the last one, but worth calling out. On the Chromium blog (Chromium is the open source project that makes up the core of the Google Chrome browser), they point out the following in their post, also titled Release Early, Release Often (emphasis mine):

The first goal is fairly straightforward, given our pace of development. We have new features coming out all the time and do not want users to have to wait months before they can use them. While pace is important to us, we are all committed to maintaining high quality releases — if a feature is not ready, it will not ship in a stable release.

Well why not make users wait a few months? As Nate Kohari points out,

Nothing is real until it’s providing value (or not) to your users. Having completed work that isn’t released is wasteful.

The longer a feature sits implemented but unused in real scenarios, the more the context for the feature is lost. By the time it’s in customers’ hands, the original reason for the feature may be lost in the smoky mists of memory. And as feedback on the feature comes in, it takes time for the team to re-acquaint itself with the code and the reasons the code was written the way it was. All of that ramp-up time is wasteful.

Likewise, the faster the cycle, the shorter the time the team has to live with a known bug out in the product. Sometimes products ship with bugs that aren’t serious enough for an emergency patch, but annoying enough that customers are unhappy having to live with them till the next release. This makes developers unhappy as well, since they are the ones who hear about it from the customers. A short release cycle means nobody has to live with these sorts of bugs for long.

It reduces pressure on the development team to “make” a release

This point is also taken from the Chromium blog post. You can probably tell that post really resonated with me.

As a project gets closer and closer to the end of the release cycle, the team starts to make hard decisions regarding which bugs or features will get implemented or get punted. The pressure builds as the team realizes, if they don’t get the fix in this release, customers will have to wait a long time to get it in the next. As the Chromium blog post states:

Under the old model, when we faced a deadline with an incomplete feature, we had three options, all undesirable: (1) Engineers had to rush or work overtime to complete the feature by the deadline, (2) We delayed the release to complete that feature (which affected other un-related features), or (3) The feature was disabled and had to wait approximately 3 months for the next release. With the new schedule, if a given feature is not complete, it will simply ride on the next release train when it’s ready. Since those trains come quickly and regularly (every six weeks), there is less stress.

The importance of this point can’t be overstated. Releasing often is not only good for the customers, it’s good for the development team.

It makes the developers excited!

This point was one of the original observations that Eric Raymond made in his essay,

So, if rapid releases and leveraging the Internet medium to the hilt were not accidents but integral parts of Linus’s engineering-genius insight into the minimum-effort path, what was he maximizing? What was he cranking out of the machinery?

Put that way, the question answers itself. Linus was keeping his hacker/users constantly stimulated and rewarded – stimulated by the prospect of having an ego-satisfying piece of the action, rewarded by the sight of constant (even daily) improvement in their work.

Contrary to popular depictions, developers love people! And we especially love happy people, which makes us very excited to see features and bug fixes get into customers’ hands, because it makes them happy.

It’s demoralizing to implement a great feature or key bug fix and then watch it sit and stagnate with nobody using it.

This is especially important when you’re trying to harness the energy of a community of open source contributors within your project. It’s important to keep their attention and interest in the project high, or you will lose them. And nothing makes contributors more excited than seeing their hard work be released into the public for the world to use and recognize.

Yes, appeal to your contributors egos! Let them share in the glory now, and not months from now! Let them receive the recognition they deserve sooner rather than later!

It makes the schedule more predictable and easier to scope

Quick! How many piano tuners are there in Chicago? At first glance, this seems like a very difficult question. But if you break it down into smaller pieces, you can come up with a pretty good estimate for each small piece, which leads to a decent overall estimate.
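
For instance, a rough chain of guesses might go like this: Chicago has about 3 million people; at roughly 2.5 people per household, that’s about 1.2 million households; if 1 in 20 households has a piano that gets tuned once a year, that’s around 60,000 tunings a year; and a tuner who does 4 tunings a day, 250 days a year, handles about 1,000 of them. That suggests somewhere around 60 piano tuners. Every guess is crude, but the errors tend to cancel out.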

This type of problem is known as a Fermi problem, named after the physicist Enrico Fermi, who was renowned for his estimation abilities. The story goes that he estimated the strength of an atomic bomb test by dropping ripped-up pieces of paper and measuring how far the blast carried them.

Breaking down a long product schedule into short iterations is similar to attacking a Fermi Problem. It’s much easier to scope and estimate a short iteration than it is a large one.

Again, going back to the Chromium blog post,

The second goal is about implementing good project management practice. Predictable fixed duration development periods allow us to determine how much work we can do in a fixed amount of time, and makes schedule communication simple. We basically wanted to operate more like trains leaving Grand Central Station (regularly scheduled and always on time), and less like taxis leaving the Bronx (ad hoc and unpredictable).

Keeps your users excited and happy

Ultimately, all the previous points I made lead to happy customers. When a customer reports a bug, and a fix comes out soon afterwards, the customer is happy. When a customer sees new features continually released that make their lives better, they are happy. When your product does what they want because of the tight feedback cycle, the customers are ultimately much happier with the product.

And this doesn’t just benefit your current customers. Potential new customers will be attracted to the buzz that frequent releases generate. As Atley Hunter points out,

Offering software consistently and frequently helps to foster both market buzz and continued interest from your install base.

Continual releases are the sign of an active and vibrant product and product community. This is great for marketing your product to new audiences.

So Why Not Release All The Time?

If releasing often is such a good thing, why not release all the time? Isn’t releasing more often better than less often?

Releasing every second of the day might not be possible since it does take time to implement features, but it’s not unheard of to release features as soon as they are done. This is the idea behind a technique called continuous deployment, which is particularly well suited to web sites.

When I worked at Koders (now part of BlackDuck software), we pushed a release every two weeks. We wanted to move towards a weekly release, but it took a couple of days to build our source code index. Our plan was to make the index build incrementally so we could deploy features more often and hopefully reach the ultimate goal of releasing as soon as a feature was completely done.

I think this is a great idea, but it’s not always attainable without significant changes in how a product is developed. For example, with NuGet, we have a continuous integration server that produces a build with every check-in. In a manner of speaking, we do have continuous deployment because anyone can go and try out these builds.

But I wouldn’t apply the “continuous deployment” label to what we do because these CI builds are not release quality. To get to that point would require changing our development process so that every check-in to our main branch represented a completely end-to-end tested feature that we’re ready to ship.

At this point, I’m not even sure that continuous deployment is right for every product, though I’m still learning and open to new and better ideas. To understand why I feel this way, let me transition to my next question.

What factors affect how often is often enough?

I think there are several key factors that determine how often a product should be released.

Adoption Cost to the Customers

Some products have a higher adoption cost when a new release is produced. For example, websites have a very low adoption cost. When StackOverflow.com produces a release daily or even more than once a day, the cost to me is very little as long as the changes aren’t too drastic.

I simply visit the site and if I notice the new feature, I start taking advantage of it.

A product like Google Chrome has a slightly higher adoption cost, but not much. I’m unlikely to have critical infrastructure completely dependent on their browser. The browser updates itself and I simply take advantage of the new features.

But a product like a web framework has a much higher adoption cost. There’s a steeper learning curve for a new release of a programming framework than there is for a browser update. Also, authors will want time to be able to write their training materials, courses, books before they become obsolete by the next version.

And folks running sites on these frameworks want time to upgrade to the next version without having that version become obsolete immediately. Major releases of frameworks allow the entire ecosystem to congeal around these major release points. Imagine if ASP.NET MVC had 24 official releases in two years. How much harder would it be to hire developers to support your ASP.NET MVC v18 application when they all want to be on the latest and greatest, because we all know v24 is the cat’s pajamas?

I believe this is why you see Ruby on Rails releasing every two years and ASP.NET MVC releasing yearly. Note that both of these products still release previews early and often, but the final “blessed” builds come out relatively infrequently.

Maturity of the product

The other factor that might affect release cadence is the maturity of the product. When a product is playing catch-up, it has to release more often or risk falling further and further behind.

As a product matures, all the low-hanging fruit gets taken and sometimes a longer release cycle is necessary as they tackle deeper features which require heavier investments of time. Keep in mind, this is me theorizing here. I don’t have hard numbers to base this on, but it’s based on my observations.

Customer Costs

Sometimes, customers’ tolerance for change affects release cadence. For example, I don’t think customers would tolerate a new iPhone hardware device every month because they’d constantly feel left behind. A year is enough time for many (but not all) consumers to feel ready to upgrade to the next version.

Deployment Costs

One last consideration might be the costs to deploy the product. For hardware, this can be a big factor when the design of the product changes drastically from one version to the next.

Suddenly new supply chains may need to be set in place. Factories may need to be retrofitted to support the new or changing components. The products have to physically be shipped to the stores.

All these things can affect how often a new product can be shipped.

Common objections to Releasing Often

There are three main objections to this model that I’ve heard or can think of. The first is that it forces end users to update their software more often. I’ve addressed this point already by looking at customer adoption cost as a gating factor in how often a product should be released. Many products can be updated quietly without requiring users to take any action. If the new features are designed well, customers will naturally discover and learn them without too much fuss. In this model, avoid having these regular releases move everyone’s cheese by re-arranging the UI and that sort of thing.

Another concern raised is that this leads to more frequent lower quality releases rather than less frequent releases with higher polish and quality. After all, releases always contain overhead and by having more releases, you’re multiplying this overhead over multiple releases.

This is definitely a concern, but one that’s easily mitigated. As my co-worker Drew Miller points out, long release cycles mask waste, and that waste is far greater than the cost of more frequent release overhead:

  • The more often you release, the better you are at releasing; release overhead decreases over time. With long release cycles, the pain of release inefficiency is easy to ignore and justify and the urgency and incentive to trim that waste is very low.
  • The sense of urgency that frequent releases create drives velocity, more than offsetting the cost of release overhead.
  • The rapid feedback with frequent releases reduces the waste we always have for course correction within a long release cycle. A great example of this is the introduction of ActionResult to ASP.NET MVC 1.0 between Preview 2 and 3. That was costly, but would’ve been more costly if we had made that change much later.
  • The slow start of a long release cycle alone is usually more wasteful than the total cost of release overhead over the same period.
  • Long release cycles may have milestone overhead that can be as great (or greater) than release overhead.

Release as Often as Possible and Prudent

There’s probably a lot more that can be written on this topic, but I’m quickly approaching the TL;DR event horizon (if I haven’t passed it already). I’m excited to continue to learn more about effective release strategies so I look forward to your thoughtful comments on this topic.

At the end, my goal was to make it clear why releasing early and often is a good thing. I don’t currently believe there’s an empirical answer to the question, how often should you release? Rather, my answer right now is to suggest as often as possible and prudent.

If you release often, but find that your releases tend to be of a low quality, then perhaps it’s time to take the dial back a bit. If your releases are of a very high quality, perhaps it’s worth looking at any waste that goes into each release and trying to eliminate it so you can release even more often if doing so would appeal to your customers.

For more reading, I recommend:

code, open source, nuget comments edit

The Magic 8-ball is a toy usually good for maybe one or two laughs before it quickly gets boring. Even so, some have been known to make all their important life/strategic decisions using it, or an equivalent mechanism.

The way the toy works is you ask it a question, shake it, and the answer to your question appears in a little viewport. What you’re seeing is one side of an icosahedron (20-sided polyhedron, or for you D&D folks, a d20). On each face of the d20 is a potential answer to your yes or no question.

magic-8-ball

I thought it would be fun to write a NuGet package that emulates this toy as one of my demos for the NuGet talk at Mix11. Yes, I am odd when it comes to defining what I think is fun. When you install the package, it adds a new command to the Package Manager Console.

The command I wrote didn’t have twenty possible answers, because I was lazy, but it followed the same general format. This command also includes support for tab expansions, which feel a lot like Intellisense.

The following screenshot shows an example of this new command, Get-Answer, in use. Note that when you hit tab after typing the command, you can see a tab expansion suggesting a set of questions. It’s important to note that unlike Intellisense, you are free to ignore the tab expansion here and type in any question you want.

magic-eight-ball

In this blog post, I will walk through how I wrote and packaged that command. I must warn you, I’m no PowerShell expert. I wrote this as a learning experience with the help of some PowerShell experts.

The first thing to do is write an init.ps1 file. As described in the NuGet documentation for creating a package on CodePlex:

Init.ps1 runs the first time a package is installed in a solution. If the same package is installed into additional projects in the solution, the script is not run during those installations. The script also runs every time the solution is opened. For example, if you install a package, close Visual Studio, and then start Visual Studio and open the solution, the Init.ps1 script runs again.

This script is useful for packages that need to add commands to the console, because it runs each time the solution is opened and re-registers those commands. Here’s what my init.ps1 file looks like:

param($installPath, $toolsPath, $package)

Import-Module (Join-Path $toolsPath MagicEightBall.psm1)

The first line declares the set of parameters to the script. These are the parameters that NuGet will pass into the init.ps1 script (note that install.ps1, a different script that can be included in NuGet packages, receives a fourth $project parameter).

  • $installPath is the path to your package install
  • $toolsPath is the path to the tools directory under the package
  • $package is a reference to your package

The second line of the script imports a PowerShell module. In this case, we specify a script named MagicEightBall.psm1 by its full path. We could write the entire script here in init.ps1, but I’ve been told it’s good form to write scripts as modules and then import them via init.ps1, and I have no reason not to believe my source. I suppose init.ps1 could also import multiple modules rather than just one.
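
For example, a hypothetical init.ps1 that imports two modules might look like the following (MoreCommands.psm1 is a made-up name, just to show the pattern):

param($installPath, $toolsPath, $package)

# Import each module that ships in the package's tools folder
Import-Module (Join-Path $toolsPath MagicEightBall.psm1)
Import-Module (Join-Path $toolsPath MoreCommands.psm1) # hypothetical second module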

Let’s look at the code for MagicEightBall.psm1. It’s pretty brief!

$answers =  "As I see it, yes", 
            "Reply hazy, try again", 
            "Outlook not so good"

function Get-Answer($question) {
    $answers | Get-Random
}

Register-TabExpansion 'Get-Answer' @{
    'question' = { 
        "Is this my lucky day?",
        "Will it rain tonight?",
        "Do I watch too much TV?"
    }
}

Export-ModuleMember Get-Answer

The first line of code simply declares an array of answers. The real Magic Eight Ball has 20 in all, so feel free to add them all there.

I then define a function named Get-Answer. The implementation demonstrates one of the things I like about PowerShell: I can simply pipe the array to the Get-Random cmdlet, which returns a random answer from it.

Skipping to the end, the last line of code calls Export-ModuleMember on this function, which makes it available in the Package Manager Console.

So what about that middle bit of code that calls Register-TabExpansion? Glad you asked. That function provides the Intellisense-like behavior for our function by registering a tab expansion.

It takes two parameters: the first is the name of the function, in this case Get-Answer. The second is a dictionary where the keys are the names of the function’s parameters, and the values are script blocks returning an array of expansion options for each parameter. Since our function only has one parameter, named question, we add 'question' as the key to the dictionary and supply an array of potential questions as the value.
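
To make the shape of that dictionary concrete, here’s a hypothetical registration for a made-up Send-Greeting function with two parameters, each getting its own set of expansions:

Register-TabExpansion 'Send-Greeting' @{
    'name'     = { "Alice", "Bob", "Charlie" }
    'greeting' = { "Hello", "Howdy", "Good day" }
}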

With these two files in place, I simply opened up Package Explorer and selected File > New from the menu to start a new package, then dragged both of the script files into the Package contents window. NuGet recognized the files as PowerShell scripts and offered to put them in the Tools folder.

I then selected Edit > Edit Package Metadata from the menu to enter the NuSpec metadata for the package and clicked OK at the bottom.

magic-eight-ball-pkg

With all that done, I selected the File > Save As… menu to save the package on disk so I could test it out. Once I was done testing, I selected File > Publish to publish the package to the real NuGet feed.

It’s really that simple to write a package that adds a command to the Package Manager console complete with tab expansions.

In a future blog post, I’ll write about how I wrote MoodSwings, a package that can automate Visual Studio from within the Package Manager Console. If you have the NuGet Package Manager Console open, you can try out this package by running the command:

Install-Package MagicEightBall
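
Once it’s installed, a quick session might look something like this (your answers will vary, of course):

PM> Get-Answer "Will this demo work?"
Reply hazy, try again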

code, tech, personal comments edit

One aspect of my job that I love is being able to go in front of other developers, my peers, and give presentations on the technologies that my team and I build. I’m very fortunate to be able to do so, especially given the intense stage fright I used to have.

phil-mvc-talk

But over time, through giving multiple presentations, the stage fright has subsided to mere abject horror levels. Even so, I’m still nowhere near the level of much more polished and experienced speakers such as my cohort, Scott Hanselman.

Always looking for the silver lining, I’ve found that my lack of raw talent in this area has one great benefit: I make a lot of mistakes. A crap ton of them. But as Byron Pulsifer says, every mistake is an “opportunity to learn”, which means I’m still cramming for that final exam.

At this past Mix 11, I made several mistakes (ahem, learning opportunities) in my first talk that I was able to capitalize on by the time my second talk came around.

I thought it might be helpful for my future self (and perhaps other budding presenters) if I jotted down some of the common mistakes I’ve made and how I attempt to mitigate them.

Have a Backup For Everything!

An alternative title for this point could be “worry more!” I tend to be a complete optimist when it comes to preparing for a talk. I assume things will just work and that it’ll all generally work itself out, an attitude that drives Mr. Hanselman crazy when we give a talk together. It’s also a setup for disaster when it comes to giving a talk.

During my talk, there were several occasions where I fat-fingered the code I was attempting to write on stage in front of a live audience. For most of my demos, I had snippets prepared in advance. But there were a couple of cases where I thought the code was simple enough that I could hack it out live.

Bad mistake!

You never know when nervousness combined with navigating a method that takes a Func-y lambda expression of a generic type can get you so lost in angle brackets you think you’re writing XML. I had to delete the method I was writing and start from scratch because I didn’t create a snippet for it, which was my backup for other code samples. This did not create a smooth experience for people attending the talk.

Another example of having a backup in place is to always have a finished version of your demo you can switch to and explain in case things get out of control with your fat fingers.

For every demo you give, think about how it could go wrong and what your backup plan will be when it does go wrong.

Minimize Dependencies Not Under Your Control

In my ASP.NET MVC 3 talk at Mix, I tried to publish to the web an application I had built during the session. This was meant to be the finale for the talk and would allow the attendees to visit the site and give it a spin.

It’s a risky move for sure, made all the more risky in that I was publishing over a wireless network that could be a bit creaky at times.

Prior to the talk, I successfully published multiple times in preparation. But I hadn’t set up a backup site (see previous mistake). Sure enough, when the time came to do it live with a room full of people watching, the publishing failed. The network inside the room was different than the one outside the room.

If I had a backup in place, I could have apologized for the failure and sent the attendees to visit the backup site in order for them to play with the finished demo. Instead, I sat there, mouth agape, promising attendees that it worked just before the talk. I swear!

Your audience will forgive the occasional demo failure that’s not in your control as long as the failure doesn’t distract from the overall flow of the presentation too much and as long as you can continue and still drive home the point you were trying to make.

Mock Your Dependencies

This tip is closely related to and follows up on the last one. While at Mix, I learned how big keynotes, such as Mix’s own, are produced. These folks are Paranoid with a capital “P”! I listened intently as they described the level of fail-safes they put in place for a conference keynote.

For example, they often re-create local instances of all aspects of the Internet and networking they might need on their machine through the use of local web servers, HOST files, local fake instances of web services, etc.

Not only that, but there is typically a backup person shadowing what the presenter is doing on another machine, following along the demo script carefully. If something goes wrong with the presenter’s demo, they are able to flip a KVM switch so that the main presenter is now displaying and controlling the backup machine, while the shadow presenter takes over the presenter’s original machine and can hopefully fix it and continue shadowing. Update: Scott Hanselman posted a video of behind-the-scenes footage from Mix11 where he and Jonathan Carter discuss keynote preparations and how the mirroring works.

It’s generally a single get-out-of-jail-free card for a keynote presenter.

I’m not suggesting you go that far for a standard break-out session. But faking some of your tricky dependencies (and having backups) is a very smart option.

Sometimes, a little smoke and mirrors is a good backup

In our NuGet talk the next day, Scott and I prepared a demo in which I would create a website to serve up NuGet packages, and he would then visit the site to install a package.

We realized that publishing the site on stage was too risky and was tangential to the point of our talk, so we did something very simple. I created the site online in advance at a known location, http://nuget.haacked.com/. This site would be an exact duplicate of the one I would create on stage.

During the presentation, I built the site on my local machine and casually mentioned that I had made the site available to him at that URL. We switched to his computer, he added that URL to his list of package sources, and installed the package.

The point here is that while we didn’t technically lie, we also didn’t tell the full story because it wasn’t relevant to our demo. A few people asked me afterwards how we did that, and this is how.

I would advise against using smoke and mirrors for your primary demo though! Your audience is very smart, and they probably wouldn’t like it if the key technology you’re demoing turned out to be fake.

Prepare and Practice, Practice, Practice

This goes without saying, but is sometimes easier said than done. I highly recommend at least one end-to-end walkthrough of your talk and practice each demo multiple times.

Personally, I don’t try to memorize or plan out exactly what I will say in between demos (except for the first few minutes of the talk). But I do think it’s important to memorize and nail the demos and have a rough idea of the key points that I plan to say in between demos.

The following screenshot depicts a page of chicken scratch from Scott Hanselman’s notebook where we planned out the general outline of our talk.

IMG_1165

I took these notes, typed them up into an orderly outline, and printed out a simple script that we referred to during the talk to make sure we were keeping the right pace. Scott also makes a point of marking certain milestones in the outline. For example, we knew that around the 45-minute mark, we had better be at the AddMvcToWebForms demo or we were falling behind.

Writing the script is my way of preparing, since I end up running through each demo multiple times in the process. But that’s definitely not enough.

For my first talk, I never had the opportunity to do a full dry-run. I can make a lot of excuses about being busy leading up to the conference, but in truth, there is no excuse for not practicing the talk end to end at least once.

When you do a dry run, you’ll find so many issues you’ll want to streamline or fix for the actual talk. Trust me, it’s a lot better to find them during a practice run than during a live talk.

Don’t change anything before the talk

Around the 24:40 mark in our joint NuGet in Depth session, you can see me searching for a menu option in the Solution Explorer. I’m looking for the “Open CMD Prompt Here” menu, but I can’t find it.

It turns out this is a feature of the Power Commands for Visual Studio 2010 VSIX extension, an extension I had just uninstalled at the suggestion of my speaking partner, Mr. Hanselman. Just prior to our talk, he suggested I disable some Visual Studio extensions to “clean things up.”

I had practiced my demos with that extension enabled, so it threw me off a bit during the talk (well played, Mr. Hanselman!). The point of this story is that you should practice your demo in the same state you plan to give it in, and don’t change a single thing on your machine before giving the actual talk.

I know it’s tempting to install that last Windows Update just before a talk because it keeps annoying you with its prompting, and what could go wrong, right? But resist that temptation. Wait till after your talk to make changes to your machine.

Conclusion

This post isn’t meant to be an exhaustive list of presentation tips. These are merely tips I learned recently based on mistakes I’ve made that I hope and plan to never repeat.

For more great tips, check out Scott Hanselman’s Tips for a Successful MSFT Presentation and Venkatarangan’s Tips for doing effective Presentations.

Tags: mix11, mix, presentations, tips

code, open source, asp.net mvc, asp.net, nuget comments edit

Another Spring approaches and once again, another Mix is over. This year, my team announced the release of the ASP.NET MVC 3 Tools Update at Mix, which I blogged about recently.

Working on this release as well as NuGet has kept me intensely busy since we released ASP.NET MVC 3 RTM this past January. Hopefully now, my team and I can take a moment to breathe as we start making big plans for ASP.NET MVC 4. It’s interesting to me that the version number for ASP.NET MVC is quickly catching up to the version of ASP.NET proper. Smile

Once again, Mix has continued to be one of my favorite conferences due to the eclectic mix of folks who attend.

trouble-inc

The previous photo was taken from Joey De Villa’s Blog post.

Me-and-elvis

It’s not just a conference where you’ll run into Scott Guthrie and Hanselman, but you’ll also run into Douglas Crockford, Miguel De Icaza or even Elvis!

I was involved with two talks at Mix which are now available on Channel9 and embedded here.

ASP.NET MVC 3: The Time Is Now

In this talk, I cover the new features of ASP.NET MVC 3 and the ASP.NET MVC 3 Tools Update while building an application that allows me to ask the audience survey questions. The application is hosted at http://mix11.haacked.com/.

Errata: I ran into a few problems during this talk, which I will cover in a follow-up blog post about speaking tips I learned due to mistakes I’ve made.

If you attended the talk (or watched it), you saw the publish demo fail at the end. I learned afterwards that the failure was due to a proxy issue in the room’s network, one that I didn’t have in my hotel room or the main conference area.

I plan to follow up on various topics I covered in the talk with blog posts. For example, I wrote a helper method during the talk that allows you to pass in a Razor snippet as a template for a loop. That’s now covered in this blog post, A Better Razor Foreach Loop.

NuGet in Depth: Empowering Open Source on the .NET Platform

In this talk, Scott and I perform what we call our “HaaHa” show, a name derived from a combination of our last names, Phil Haack and Scott Hanselman, but pronounced like our aliases, PhilHa and ScottHa.

We spent the entire talk attempting to one-up each other with demos of NuGet. Each demo built on the last and showed more and more what you can do with NuGet.

Errata: During the demo, there was one point where I expected a License Agreement to pop up, but it didn’t. I gave a misleading explanation for why that happened. We should have seen the pop-up because we do not install SqlServerCompact by default.

Turns out I ran into a potential edge-case bug in NuGet. Usually, when I create a project, I make sure to create a folder for the solution so that the solution is isolated in its own folder. For some reason, I didn’t have that option checked, and the solution was created in my temp directory. Thus the packages folder was shared with every project I’d ever created in that folder, which made NuGet think that SqlServerCompact was already installed.

If you’ve never accepted that agreement, it will pop up.

The second mistake I made was in describing install.ps1, which in fact runs every time the package is installed into a project, not once per solution. To get the correct definition, read our documentation page on Creating a Package.

Another minor mistake I made was in describing the Magic 8-Ball: I said it had a dodecahedron inside. I meant to say icosahedron, which is a twenty-sided polyhedron.

During the talk, we randomly start talking about a ringtone. That was due to someone’s phone going off in the audience. You can’t hear it in the recording. Smile

Oh, I just pushed MoodSwings to the main feed so you can try it out.

Summary

This was the first time I stayed till the following day of a conference rather than hopping on a cab to the airport immediately after my last talk.

I highly recommend that approach. It was nice to have time to relax after my last talk. A few of us went to ride the rollercoaster at NY NY, walk around the strip, and take in a show, JabbaWockeez.

IMG_1183 IMG_1188

Tags: aspnetmvc, nuget-gallery, mix11, mix, nuget

razor, code, asp.net mvc comments edit

Yesterday, during my ASP.NET MVC 3 talk at Mix 11, I wrote a useful helper method demonstrating an advanced feature of Razor, Razor Templated Delegates.

There are many situations where I want to quickly iterate through a bunch of items in a view, and I prefer using the foreach statement. But sometimes, I need to also know the current index. So I wrote an extension method to IEnumerable<T> that accepts Razor syntax as an argument and calls that template for each item in the enumeration.

using System;
using System.Collections.Generic;
using System.Web.WebPages; // HelperResult lives here

public static class HaackHelpers {
  public static HelperResult Each<TItem>(
      this IEnumerable<TItem> items,
      Func<IndexedItem<TItem>, HelperResult> template) {
    return new HelperResult(writer => {
      int index = 0;

      foreach (var item in items) {
        var result = template(new IndexedItem<TItem>(index++, item));
        result.WriteTo(writer);
      }
    });
  }
}

This method calls the template for each item in the enumeration, but instead of passing in the item itself, we wrap it in a new class, IndexedItem<T>.

public class IndexedItem<TModel> {
  public IndexedItem(int index, TModel item) {
    Index = index;
    Item = item;
  }

  public int Index { get; private set; }
  public TModel Item { get; private set; }
}

And here’s an example of its usage within a view. Notice that we pass Razor markup in as an argument to the method; that template is then called for each item. We have access to the item itself and the current index.


@model IEnumerable<Question>

<ol>
@Model.Each(@<li>Item @item.Index of @(Model.Count() - 1): @item.Item.Title</li>)
</ol>

If you want to try it out, I put the code in a package in my personal NuGet feed for my code samples. Just connect NuGet to http://nuget.haacked.com/nuget/ and Install-Package RazorForEach. The package installs this code as source files in App_Code.
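
If you’d rather not register the feed as a package source first, the console’s -Source flag should let you do a one-off install directly from the feed, something like:

PM> Install-Package RazorForEach -Source http://nuget.haacked.com/nuget/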

UPDATE: I updated the code and package to be more efficient (4/16/2011).

asp.net, asp.net mvc, code comments edit

I’m at Mix11 all week and this past Monday, I attended the Open Source Fest where multiple tables were set up for open source project owners to show off their projects.

One of my favorite projects is also a NuGet package named Glimpse Web Debugger. It adds a FireBug-like experience for grabbing server-side diagnostics from an ASP.NET MVC application while looking at it in your browser. It provides a browser plug-in-like experience without the plug-in.

One of the features of their plug-in is a route debugger inspired by my route debugger. Over time, as Glimpse catches on, I’ll probably be able to simply retire mine.

But in the meanwhile, inspired by their route debugger, I’ve updated my route debugger so that it acts like tracing and puts the debug information at the bottom of the page (click to enlarge).

About Us - Windows Internet Explorer

Note that this new feature requires that you’re running against .NET 4 and that you have the Microsoft.Web.Infrastructure assembly available (which you would in an ASP.NET MVC 3 application).

The RouteDebugger NuGet package includes the older version of RouteDebug.dll for those still running against .NET 3.5.

This takes advantage of a new feature included in the Microsoft.Web.Infrastructure assembly that allows you to register an HttpModule dynamically. That allows me to easily append this route debug information to the end of every request.

By the way, RouteDebugger is now part of the RouteMagic project if you want to see the source code.

To try it out, Install-Package RouteDebugger.

comments edit

Today at Mix, Scott Guthrie announced an update to the ASP.NET MVC 3 we’re calling the ASP.NET MVC 3 Tools Update. You can install it via Web PI or download the installer by going to the download details page. Check out the release notes as well for more details.

Notice the emphasis on calling it a Tools Update? The reason for that is simple. This only updates the tooling for ASP.NET MVC 3 and not the runtime. There are no changes to System.Web.Mvc.dll or any of its other assemblies that ship as part of the ASP.NET MVC 3 Framework. Instead, given that we just released ASP.NET MVC 3 this past January, we focused on improvements to the tools and project templates that we wanted to ship in time for Mix.

To drive this point home, here’s a screenshot of the Programs and Features dialog with ASP.NET MVC 3 RTM installed.

BEFORE

mvc3-installed

And here’s one with the Tools Update installed.

AFTER

mvc3-update-installed

Did you see what changed? If not, I’ll help you. Smile

mvc3-update-installed-highlighted

What’s new in this release?

We’ve added a lot of improvements to the tooling experience in this release. For more details, check out the release notes.

  • New Intranet Project Template that enables Windows Authentication and does not include the AccountController.
  • HTML 5 checkbox to enable HTML 5 versions of project templates.
  • Add Controller Dialog now supports full automatic scaffolding of Create, Read, Update, and Delete controller actions and corresponding views. By default, this scaffolds data access code using EF Code First.
  • Add Controller Dialog supports extensible scaffolds via NuGet packages such as MvcScaffolding. This allows plugging in custom scaffolds into the dialog which would allow you to create scaffolds for other data access technologies such as NHibernate or even JET with ODBCDirect if you’re so inclined!
  • JavaScript libraries within project templates are updatable via NuGet! (We included them as pre-installed NuGet packages.)
  • Includes Modernizr 1.7. This provides compatibility support for HTML 5 and CSS 3 in down-level browsers.
  • Includes EF Code First 4.1 as a pre-installed NuGet package.

We’ve also made several other small changes and fixed several bugs in the MVC tooling for Visual Studio:

  • We did some major cleanup to the AccountController in the Internet project template
  • We now have more “sticky” options that remember their settings even when you restart Visual Studio
  • We have much smarter model type filtering logic in the Add View and Add Controller dialogs

NuGet 1.2 Included

Around 12 days ago, we released NuGet 1.2. If you don’t already have NuGet 1.2 installed, ASP.NET MVC 3 Tools Update will install it for you. In fact, it requires it because of the pre-installed NuGet packages feature I mentioned earlier. When you create a new ASP.NET MVC 3 Tools Update project, the script libraries such as jQuery are installed as NuGet packages so that it’s easy to update them after the fact.
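
For example, updating the bundled jQuery to a newer version should be a one-liner in the Package Manager Console:

PM> Update-Package jQuery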

Give it a spin and let us know what you think!