comments edit

Now that 37signals has put their book Getting Real online for free, I’ve finally gotten around to starting it. And so far, I *love* it. I think there are a lot of great lessons, reminders, and ideas in here that will help me make the products I work on that much better.

I have a premonition, even before writing this, that a lot of people will tell me the book is crap because they don’t believe in functional specifications, and even Joel believes in functional specifications. As the authors point out in their Caveats, disclaimers, and other preemptive strikes page, their techniques don’t apply to everyone (though probably to more people than one would think). Also, their ideas are not an all-or-nothing affair. If you gotta have your functional specs, then by all means keep writing them. Don’t throw out the baby with the bathwater.

Open Source projects are probably in the best position to make use of their guidelines, though I am sure many more projects would be made much better by getting real. Subtext will most certainly gain from many of these approaches. My hope is to use Subtext as a testing ground for the principles in Getting Real, and then use its success to show my clients that they don’t need to be so rigid about product development. We’ll see.

As you might guess, many of my upcoming blog posts will focus on some of these topics.

comments edit

Well, if you’re the punk malcontent who found my wallet yesterday, you went on a shopping spree…at Rite Aid, Vons, and Ralphs (a convenience store and two grocery stores).

Really, if you’re going to try and spend my hard-earned credit, at least go buy something decent like a stereo system.  There’s a Best Buy just down the street that isn’t much further away than the local Vons.

I mean who spends $103 at a Rite-Aid?!  Who!?  I really mean it. Who? I’d like to know because I want my wallet back.  And to the store clerks there, doesn’t it even slightly ring the suspicion bell in your head when someone tries to buy $103 worth of anything at Rite-Aid?  I didn’t even know you could buy that much stuff at a Rite-Aid. I figured buying out everything in the store would lighten your wallet by about $70, tops.  Maybe he bought the cash register too.

Did you even look at the guy and compare his picture to the ID, my ID, he presented?  There aren’t a lot of hapas running around L.A., so I doubt he looked that much like me, assuming it was a “he”.

Anyways, yesterday on a short walk, I dropped my wallet.  A really comical mistake, like you see in the cartoons. I put my wallet in my side cargo-short pocket, which, unbeknownst to me, had a huge gaping hole in it.  I returned to the area literally 10 to 15 minutes later, and it was gone.  And then the charges started showing up in my account as I frantically called to close my cards.

Anyways, that’s why I rushed out a new release of Subtext last night.  I figure follow something bad with something good.  Oh, and I bought a lottery ticket.  I know, the chances are pretty much nil. But what were the chances I’d put a wallet in a pocket with no bottom?  This is a chance for the world to get back to even terms with me. Wink

comments edit

As I mentioned in my last post, someone reported a bug with deleting posts in the admin tool.  That post also describes a quick and dirty workaround.

However, I fixed the bug and updated our current 1.9.2 release at SourceForge.  The existing URL to download the release is still valid.  The full version number for this release is 1.9.2.23.

To find out which version of Subtext you are running, just log into your admin and look in the bottom left corner.  You should see the version there.

If you downloaded Subtext 1.9.2 before I applied the update, then you will probably see version 1.9.2.19 in there.  If so, you have two options.  You can either apply the quick and dirty patch mentioned in my last post, or you can download the latest build and update the Subtext.Framework.dll file in the bin directory.

This latest update also includes some CSS fixes by Robb Allen for the KeyWest skin in IE 7.  So if you are using that skin, you should download and apply this update.

comments edit

Someone reported that they cannot delete posts in the just-released Subtext 1.9.2. I am mortified that we do not have a unit test for this function!  In our defense, we did start with 0% code coverage in unit tests and have now reached 37.9% and rising!

I have a quick fix if this problem affects you. I am also currently building a more permanent fix which I will release soon.

Run the following query in Query Analyzer (don’t forget to hit CTRL+SHIFT+M to replace the template parameters before executing this).

ALTER PROC 
    [<dbUser,varchar,dbo>].[subtext_DeletePost]
(
    @ID int
    , @BlogID int = NULL
)
AS

DELETE FROM [<dbUser,varchar,dbo>].[subtext_Links] 
WHERE PostID = @ID

DELETE FROM [<dbUser,varchar,dbo>].[subtext_EntryViewCount] 
WHERE EntryID = @ID

DELETE FROM [<dbUser,varchar,dbo>].[subtext_Referrals] 
WHERE EntryID = @ID

DELETE FROM [<dbUser,varchar,dbo>].[subtext_Feedback] 
WHERE EntryId = @ID

DELETE FROM [<dbUser,varchar,dbo>].[subtext_Content] 
WHERE [ID] = @ID

GO
SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON 
GO

GRANT  EXECUTE  ON 
 [<dbUser,varchar,dbo>].[subtext_DeletePost]  
TO [public]
GO

Sorry to those this affects. Like I said, we’ll have a bug fix out soon.

comments edit

Making the world safe for trackbacks again!

UPDATE: A bug was reported that blog posts could not be deleted. We have updated the release files with a fixed version. There’s also a quick and dirty workaround.  You only need to apply the fix if you downloaded and installed Subtext before this update message was posted.  See here for details.

Well it took me a little longer than anticipated, but I finally teased out the remaining show stopper bugs and put the finishing touches on Subtext 1.9.2. If you plan on upgrading to Subtext 1.9.2, please consider reading this entire post carefully. If not, at least remember to backup your database and site first before upgrading!

What happened to 1.9.1? Long story for another time. We are skipping from 1.9 to 1.9.2. Here is a list of releases so far, just in case you were curious. UPDATE: I had only listed the versions that required an automatic upgrade. Here is the complete list, with the ones in bold requiring an automatic database update:

  • Subtext 1.0
  • Subtext 1.5
  • Subtext 1.5.1
  • Subtext 1.5.2 
  • Subtext 1.9
  • Subtext 1.9.2

As you can see, our version numbering has been a bit less than consistent. But starting with 2.0, we should stay a bit more consistent. With the launch of IE 7, don’t be surprised if we come out with a 1.9.3 version that just includes fixes to the skins. Volunteers for that effort are welcome.

New Features

The reason we call this the “Shields Up” edition is that the focus has been on dealing with comment spam. I had a few interesting ideas I could not implement in time, but I did implement three key features that have made a huge difference in regards to comment spam, and may render my other ideas unnecessary.

Akismet Support

Subtext now has fully integrated support for Akismet spam filtering. Not too long ago, I released an Akismet API so that others can use it for spam filtering. Members of the DasBlog team implemented it for DasBlog (in their source control tree; not yet released), and Scott Hanselman has nothing but praise for it, saying it has completely eliminated comment spam for him.

Now let me explain what I mean by fully integrated support. When you enable Akismet and are looking at approved comments in the feedback section, if you notice something that should have been filtered as spam, you can check it and click the Spam button. That reports a false negative to Akismet, indicating that it failed to mark the item as spam, and then moves the item to the new Trash folder.

Likewise, if you notice an item that is flagged as spam, but should not have been, you can check the item and click Approve which reports a false positive to Akismet. By doing so, you will be training Akismet to become more adept at filtering spam.
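To sketch that workflow in code (the type and method names below are purely hypothetical, for illustration only; see the Akismet API I released for the real surface):

```csharp
// Hypothetical API shape, for illustration only.
AkismetClient akismet = new AkismetClient(apiKey, blogUrl);

Comment comment = GetCheckedComment();

if (comment.FlaggedAsSpam)
{
    // It was actually legitimate: report a false positive
    // to Akismet and approve the comment.
    akismet.SubmitHam(comment);
}
else
{
    // It slipped through: report a false negative
    // and move the comment to the Trash folder.
    akismet.SubmitSpam(comment);
}
```

Either way, the point is that the admin buttons double as training signals for the filter.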

Note, to enable Akismet, you must sign up for an Akismet API key and supply that key to Subtext via the admin section. In order to get this key, you have to register for a WordPress.com user account, whether or not you plan to use a WordPress blog.

Another note, Akismet may not work in Medium Trust scenarios. So if you host your site with a hosting provider such as GoDaddy, WebHost4Life, etc… who run their sites in Medium trust, Akismet might not work for you. I wrote about the problem here.

In the comments to the aforementioned post, Scott Watermasysk points out a promising approach. Have your hosting provider set up a proxy which can be used to make requests. To that end, I did add some Web.config appSettings for enabling proxy support: ProxyHost, ProxyPort, ProxyUsername, ProxyPassword.

If specified, all web requests from Subtext will use this proxy.
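For example, the relevant appSettings section might look like this (the key names are the ones listed above; the host, port, and credential values are placeholders):

```xml
<appSettings>
  <!-- Route all outgoing web requests (e.g. Akismet verification)
       through the hosting provider's proxy. -->
  <add key="ProxyHost" value="proxy.example.com" />
  <add key="ProxyPort" value="8080" />
  <add key="ProxyUsername" value="someuser" />
  <add key="ProxyPassword" value="somepassword" />
</appSettings>
```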

Invisible Captcha

Not too long ago, I released a lightweight invisible CAPTCHA validator control. As far as I can tell, it has worked pretty well at keeping out automated comment spam. But as I also warned, it did nothing to stop Trackback/Pingback spam. Hence the need for Akismet support.

Invisible CAPTCHA is probably not necessary given the Akismet support, but since Akismet is not enabled by default and Invisible CAPTCHA is, it will provide some relief until you get your Akismet API key, if you so choose.

If Invisible CAPTCHA causes problems for you for some reason, you can turn it off for the entire site (no per-blog setting) via Web.config.
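The switch lives in appSettings; something along these lines (the key name below is illustrative, not necessarily the actual one — consult the Web.config that ships with the release):

```xml
<appSettings>
  <!-- Illustrative key name; check the shipped Web.config
       for the actual setting. -->
  <add key="EnableInvisibleCaptcha" value="false" />
</appSettings>
```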

Visible CAPTCHA

Some of you twisted my arm, so I am complying. I love my users so I gotta keep them happy. We now have support for the standard visible CAPTCHA you know and love.

Time Zone Fix

This topic gets its own special treatment because it is both a bug fix and a new feature. Not too long ago I mentioned some work I did for DasBlog regarding daylight savings time.

Long story short, there was no way in the .NET framework to convert from one arbitrary timezone to another. The way this manifested in Subtext is that users who had their blog hosted in another timezone always had incorrect timestamps on their blog posts.  This fix resolves that issue.

Previously, Subtext only stored a timezone offset for each timezone. For example, the offset for Pacific Standard Time (PST) is -8. However, the offset alone is not accurate enough. For example, at the time of this blog post, the actual offset is -7 because of Daylight Savings Time.

Likewise, choosing -7 for a timezone is not accurate because Arizona does not observe daylight savings, but Mountain Time does. The only way to really get this accurate is to store the actual timezone and not just the offset. So that is what Subtext now does.

We now list every timezone (at least every one I could extract from the Windows XP registry). There is no natural integer identifier for a timezone, so based on a suggestion by someone at Microsoft, I used the hash code of the timezone’s Standard Name. This is the only guaranteed consistent unique identifier for a timezone. So when you upgrade to 1.9.2, we try and update your offset to the most likely timezone id. Obviously, we can’t be perfect about this, so after you upgrade, you probably want to login and configure your blog with the correct timezone if we chose unwisely.
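As a rough sketch of the idea (this is illustrative, not the actual Subtext code), deriving an integer id from a time zone’s standard name looks something like this with the .NET 2.0 TimeZone class:

```csharp
using System;

class TimeZoneIdSketch
{
    static void Main()
    {
        // The standard name is e.g. "Pacific Standard Time",
        // regardless of whether daylight savings is in effect.
        string standardName = TimeZone.CurrentTimeZone.StandardName;

        // Subtext-style integer id derived from the name's hash code.
        int timeZoneId = standardName.GetHashCode();

        Console.WriteLine("{0} => {1}", standardName, timeZoneId);
    }
}
```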

Other Release Note Items

  • Feature [1565237] Support referencing External CSS
  • Feature [1577073] Flag All, Destroy All options
  • Fixed [1584075] Disabling trackbacks didn’t disable trackbacks properly.

Upgrade Instructions and Warnings

If you are upgrading from Subtext 1.5 or below, then please read this important note on upgrading. Subtext 1.9.2 runs on ASP.NET 2.0, so upgrading from 1.5 and below (which ran on ASP.NET 1.1) takes a few more steps than a normal upgrade.  As always, don’t forget to merge your web.config customizations into the new web.config file.

Also, for all people upgrading to 1.9.2: the Subtext 1.9.2 upgrade process performs a major database schema change, moving all comments and trackbacks into a new subtext_Feedback table. We’ve tested this over and over again, working out all the kinks we could find, but we can’t guarantee that it will be 100% perfect. Thus, back up your database before upgrading!

Also, as I mentioned in the previous section, after you upgrade, please check the timezone setting in the admin section.

Lastly, in order to improve the commenting experience, we’ve added a tiny dash of Ajax, using the MagicAjax control, for leaving comments on a blog. However, this does not work well with the excellent ReverseDOS, because ReverseDOS acts before the request is passed on to our code. If you plan on using Akismet, it is recommended that you turn off ReverseDOS support in the Web.config. In fact, the web.config file that comes with 1.9.2 disables ReverseDOS. I’ve been in contact with the ReverseDOS creator, Michael Campbell, about these issues, and he plans to work with us to address them. But like all things, life comes first.

Even though Akismet does a great job with comment spam, I still think ReverseDOS is worthwhile and a nice complement to Akismet. However, we need tighter control of when ReverseDOS is triggered in the request pipeline in order to integrate it into Subtext’s existing spam filtering functionality.

Download

Ok, enough talk already, where do I download this sucker?  The download is hosted by SourceForge here: [DOWNLOAD]

comments edit

Dan Appleman takes the .NET-focused custom search engine idea one step further by grabbing a great domain name for his search engine.  Now why didn’t I think of that!  Sometimes that’s all it takes between a search engine that will get used a lot (his) and one that won’t (mine).  I’d be happy to throw my support over to his, though I’m keeping my baby.  The one key difference is I plan to leave my search engine open for others to contribute sites.  Perhaps the two will complement each other.

And no, the domain name is not the only reason. This guy *is* Dan Appleman after all! He wrote the Visual Basic Programmer’s Guide to the Win32 API which allowed me to feel like a real programmer despite using VB back in the day.  For this, I owe him a debt of gratitude as I was thus equipped to stand up to the C++ snobs.

Now we just need someone to help spruce up the logo.

Tags: Custom Search, Search, .NET, Dan Appleman

comments edit

Having trouble sleeping lately. Thought I’d start an intermittent blog series about the questions that keep me up at night.

For example, this question popped in my head tonight and would not let me rest.

What the hell is a Karma Chameleon?

Ponder amongst yourselves.  I bid you adieu.

comments edit

UPDATE: A bug was reported that blog posts could not be deleted. We have updated the release files with a fixed version.  There’s also a quick and dirty workaround.

Reading over my last blog post, I realized I can be a bit long winded when describing the latest release of Subtext. Can you blame me? I pour my blood, sweat, and tears (minus the blood) into this project.  Doesn’t that give me some leeway to be a total blowhard? Wink

This post is for those who just want the meat. Subtext 1.9.2 has the goal of making the world safe for trackbacks and comments.  It adds significant comment spam blocking support.  Here are the key take-aways for upgrading.

  • As always, backup your database and site first before upgrading.  We implemented a major schema change which requires that the upgrade process move some data to a new table.
  • If you are upgrading from Subtext 1.5 or earlier, read this.
  • Instructions for upgrading.
  • Instructions for a clean installation. This is easier than upgrading.
  • When upgrading, make sure to merge your customizations to web.config into the new web.config.
  • If you use Akismet, make sure not to use ReverseDOS until we resolve some issues.
  • After upgrading, login to the admin and select the correct timezone that you are located in.

Download it here.

comments edit

In response to a question about integrating my custom search engine with the browser, Oran left a comment with a link to a post on how to implement searching your FogBugz database in your browser via the OpenSearch provider, which is supported by Firefox 2.0 and IE7.

So I went ahead and used this as a guide to implementing OpenSearch for my custom search engine on my blog.  When you visit my blog, you should notice that the search icon in the top left corner of your browser is highlighted (screenshot from Firefox 2).

Click on the down arrow and you will see my own search engine Haack Attack in the list of search providers.

Now you can search using Haack Attack via your browser.  Implementing this required two easy steps.  First, I created an OpenSearch.xml file and dropped it in the root of my website. Here is my file with some of the gunk removed from the URL.

<?xml version="1.0" encoding="UTF-8" ?>
<OpenSearchDescription 
    xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Haack Attack</ShortName>
  <Tags>Software Development C# ASP.NET</Tags>
  <Description>
    Search the web for relevant 
    .NET and software development content.
  </Description>
  <Url type="text/html"
    template="http://www.google.com/custom?StuffOmitted&amp;q={searchTerms}" />
</OpenSearchDescription>

Remember to use &amp; for query string ampersands, as this is an XML file.  Also, if you are using your own Google Custom Search engine, the actual template value looks something like:

http://www.google.com/custom?cx=016071428520527893278%3A3kvxtxmsfga&q={searchTerms}&sa=Search&cof=GFNT%3A%23000000%3BGALT%3A%23000066%3BLH%3A23%3BCX%3AHaack%2520Attack%2520The%2520Web%3BVLC%3A%23663399%3BLW%3A100%3BDIV%3A%23336699%3BFORID%3A0%3BT%3A%23000000%3BALC%3A%23660000%3BLC%3A%23660000%3BS%3Ahttp%3A%2F%2Fhaacked%2Ecom%2F%3BL%3Ahttp%3A%2F%2Fhaacked%2Ecom%2Fskins%2Fhaacked%2Fimages%2FHeader%2Ejpg%3BGIMP%3A%23000000%3BLP%3A1%3BBGC%3A%23FFFFFF%3BAH%3Aleft&client=pub-7694059317326582

So be sure to change it as appropriate for your own search engine.

The second step is to add auto-discovery. I added the following <link /> tag to my blog.  You would obviously want to customize the title and href for your own needs.

<link title="Haack Attack" 
  type="application/opensearchdescription+xml" 
  rel="search" 
  href="http://haacked.com/OpenSearch.xml">
</link>

So give it a try and let me know what you think. Be sure to add sites you think are relevant to this search engine.

comments edit

Last night I went to the “World Famous Viper Room”.  Gotta respect that their website makes sure to mention the World Famous part.  I suppose you’d have to have a real club inferiority complex to promote your club as The In This Neighborhood Sorta Famous Viper Room.

Anyways, the purpose of my visit was to see my soccer (sorry, football) teammate and team captain Pete perform in the acoustic lounge, an intimate (read tiny) lounge downstairs from the main room.

Pete’s the one from Glasgow with a heavy Scottish brogue.  We can hardly understand him most of the time, though I’m getting better at it.  Usually I just nod my head in agreement.

After our last game we asked Pete if he’d get us on the guest list.  “Not after that performance I won’t.” was his reply…I think.  He was referring to the 12-to-1 drubbing we received at the hands of Hollywood United.  This is the team fielding 9 internationally capped players and two World Cup players, one of whom was on the winning squad of the 1998 champions.

In contrast, we are fielding one internationally capped player who played for the Cayman Islands.  We should be making some acquisitions for the next season that should help.  Our goal for the next season is to keep them in the single digits.  Incremental improvements, ya know?

comments edit

Google just launched a neat build-your-own-search-engine feature.  You can choose sites that should be emphasized in the search results, or even restrict search results to that list.

It offers a lot of customization, but I ran into a problem in which it doesn’t seem to be saving the images for my profile for some reason.

One particularly neat feature is the ability to allow others to collaborate on the search engine.  For example, check out my search engine called Haack Attack the Web.  I entered a few .NET and software development focused websites to the list of sites to search, restricting results to just those sites.

Feel free to add your favorite websites to the search engine.  I think I’ll find this a useful first stop for finding .NET content.

The next step is to integrate this into the search features for Firefox and IE7.  Anyone want to help?

code, tech comments edit

Jeff Atwood writes a great post about The Last Responsible Moment. Take a second to read it and come back here. I’ll wait.

In the comments, someone named Steve makes this comment:

This is madness. Today’s minds have been overly affected by short attention span music videos, video games, film edits that skip around every .4 seconds, etc.

People are no longer able to focus and hold a thought, hence their “requirements” never settle down, hence “agile development”, extreme coding, etc.

I wonder what methodology the space shuttle folks use.

You shouldn’t humor this stuff, it’s a serious disease.

Ahhhh yes. The Space Shuttle. The paragon of software development. Never mind the fact that 99.999% of developers are not developing software for the Space Shuttle; we should all write code like the Shuttle developers. Let’s delve into that idea a bit, shall we?

Invoking the Space Shuttle is common when arguing against iterative or agile methodologies for building software. Do people really think, hey, if agile won’t work for the Space Shuttle, how the hell is it going to work for my client’s widget tracking application? Gee, your widget app is doomed.

The Space Shuttle is a different beast entirely from what most developers deal with day in and day out. This is not to say that there aren’t lessons to be learned from how the Shuttle software is built, there certainly are. Good lessons. No, great lessons! But in order to make good use of the lessons, you must understand how your client is very different from the client to the Shuttle developers.

One reason that the requirements for the Space Shuttle can be formally specified beforehand and up front is that the requirements have very little need to change once the project is underway. When was the last time the laws of gravity changed? The Shuttle code is mostly an autonomous system, which means the “users” of the code are the Shuttle itself as well as the various electronic and mechanical systems it must coordinate.  These are well-specified systems that do not need to change often, if at all, in the course of a project.

Contrast this to business processes which are constantly evolving and heavily people centric. Many times, the users of a system aren’t even sure about how to solve the business problem they are trying to solve with software.  This is partly why they don’t exactly know what they want until they have the solution in hand. We can wave activity diagrams, list of requirements, user stories and use cases in front of them all day long, but these are rough approximations of what the final system will do and look like. It’s showing the user a pencil sketch of a stick figure and hoping they see the Mona Lisa.

Later in the comments, the same guy Steve responds with what we should do with users to focus them.

1) what do you want it to do?
2) understand the business as much as you can
3) draw a line in the sand for that which you can safely deliver pretty soon
4) build a system that is extensible, something that can be added on to fairly easily, because changes are coming (that is agile-ness)
5) charge for changes

And I agree. One of the common misperceptions of agile approaches is that you never draw a line in the sand. This is flat out wrong. You do draw a line in the sand, but you do it every iteration.

Unlike the BDUF waterfall approach, which requires that you force the user to spew requirements until he or she is blue in the face, with iterative approaches you gather a list of requirements and prioritize them across iterations.  This goes a long way toward avoiding poor requirements due to design fatigue. The user can change requirements for later iterations, but once an iteration has commenced, the line in the sand is drawn for that iteration.

To me, this sounds like the last responsible moment for deciding on a set of requirements to implement. You don’t have to decide on the entire system at the beginning. You get some leeway to trade requirements for later iterations. You are only forced to decide for the current iteration.

 

comments edit

When I was a bright-eyed, bushy-tailed senior in college, I remember wading through pages and pages of job ads on Jobtrak (which has since been absorbed into Monster.com).

Most of the ads held my attention in the same way reading a phone book does. The bulk of them had something like the following format.

Responsibilities:

Design and develop data-driven internet based applications that meet functional and technical specifications. Use [technology X] and [technology Y] following the [methodology du jour] set of best practices. Create documentation, review code, and perform testing.

Required Skills and Experience:

Must have X years in language [now outdated language]. Must be proficient in BLAH, BLAH, and BLAH. Ability to work in a team environment. Able to work in a fast-paced [meaning we’ll work your ass off] environment.

I know what you’re thinking. Where do I sign up!

Yaaaaawn. Keep in mind, this was in 1997 just as the job market was starting to reach the stratosphere. Competition was tight back then. Do a search on Dice.com right now and you’ll still see a lot of the same.

I’m sorry, but your job posting is not the place to spew forth a list of Must have this and Must have that and a list of responsibilities so plain and vanilla that…that… I just don’t have a good analogy for it. Sorry.

These types of ads attempt to filter out candidates who do not meet some laundry list of requirements. But that is not the goal of a good job ad. A good job ad should not explain what hoops the candidate must jump through to join your company; it should explain why the candidate should even want to jump through those hoops in the first place.

This of course assumes you are attempting to hire some star developers away from their current jobs rather than resume padders who have spent most of their careers in training classes so they can place your laundry list of technology TLAs underneath their name on their resume.

Certainly, a job posting should explain briefly the type of work and experience desired to fill the role. No point in having a person who only has experience in sales and marketing applying for your senior DBA position (true story). But first and foremost, you want to catch the attention of a great candidate. Boring job ads that read like the one above do not capture the imagination of good developers.

Back to my story. As I said, most of the ads fit this mold, but there were a few here and there that popped off the screen. I wish I had saved the one that really caught my attention. It was from a small company named Sequoia Softworks (which is still around, but now named Solien). My memory of it is vague, so I’ll just make something up that resembles the spirit of the ad. All I remember is that it started off by asking questions.

Are you a fast learner and good problem solver? Are you interested in solving interesting business problems using the latest web technologies and building dynamic data driven web applications?

We’re a small web development company in Seal Beach (right by the beach in fact!) looking for bright developers. We have a fun casual environment (we sometimes wear shorts to work) with some seriously interesting software projects.

Experience in Perl, VBScript, and SQL is helpful, but being a quick learner is even more important. If you’ve got what it takes, give us a call.

I ended up working there for six years, moving up the ranks to end up as the Manager of Software Development (never once writing a line of Perl).  They did several things right in their ad, as I recall.

Challenge the reader and demand an answer!

Do you have two years experience in C#? is not a challenge to the reader. This is not a question that captures my attention nor draws me in demanding an answer.

Do you know C# like Paris Hilton knows manufacturing fame? Now that is a challenge to my intellect! Hell yeah, I know C# like nobody’s business. That kind of question demands an answer.  And a good candidate is more likely to drive over to your headquarters and give it to you.

Appeal to vanity

Not every appeal to vanity is a bad thing. It doesn’t always amount to sucking up. This point is closely related to the last point in that an appeal to vanity is also a challenge to a candidate to show just how great they are. Asking someone if they are a good problem solver, for example, conjures up a desire to prove it.

Show some personality

Sure, many corporations seem like soulless cubicle farms in which workers are seen as mindless drones. But surely not your company, right? So why does your job posting have a tombstone all over it?

Who wants to be another cog in a machine performing mundane tasks for god knows what reason? Your ad should express a bit of your company’s personality and culture.  It should also indicate to the reader that people who come to work for you are going to work with people. Interesting people. And they are going to work on interesting projects.

I write all this because of an article I read about business schools. It was a throw-away quote in which some employer mentioned how his new employee fresh out of business school helped rewrite some job postings and they were able to quickly fill some positions with high quality candidates they had been struggling to fill.

A well written job posting makes a difference.

I mentioned before that I am participating in the HiddenNetwork job board network because I really believe in the power of blogs to connect good people with good jobs. It’s in its nascent stages so I really don’t know how well it is fulfilling that mission yet, but I believe that it will do well.

If you do post a job via my blog (yes, I get a little something something if you do), be sure to make it a good posting that really captures the imagination of good developers (as all my readers are!  See. Appeal to vanity.). It’s even more of a challenge given how few words you have at your disposal for these ads.

comments edit

In a recent post I ranted about how ASP.NET denies WebPermission in Medium Trust. I also mentioned that there may be some legitimate reasons to deny this permission based on this hosting guide.

Then Cathal (thanks!) emailed me and pointed out that the originUrl does not take wildcards, it takes a regular expression.

So I updated the <trust /> element of web.config like so:

<trust level="Medium" originUrl=".*" />

Lo and Behold, it works! Akismet works. Trackbacks work. All in Medium Trust.

Of course, a hosting provider can easily override this as Scott Guthrie points out in my comments. I need to stop blogging while sleep deprived. I have a tendency to say stupid things.

Now a smart hosting company can probably create a custom medium trust policy in order to make sure this doesn’t work, but as far as I can tell, this completely works around the whole idea of denying WebPermission in Medium Trust.

If I can simply add a regular expression to allow all web requests, what’s the point of denying WebPermission?
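The distinction matters because ASP.NET treats the originUrl value as a regular expression pattern, not a filesystem-style wildcard. A quick sketch using plain System.Text.RegularExpressions (standing in for what the runtime does with the attribute; the Akismet URL is illustrative) shows why ".*" works where "*" fails:

```csharp
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // ".*" is a valid regex that matches any URL, including this
        // (illustrative) Akismet endpoint.
        Console.WriteLine(
            Regex.IsMatch("http://rest.akismet.com/1.1/comment-check", ".*"));

        // A filesystem-style wildcard on its own is not a valid regex at all.
        try
        {
            Regex.IsMatch("http://rest.akismet.com/1.1/comment-check", "*");
        }
        catch (ArgumentException)
        {
            Console.WriteLine("'*' is not a valid regular expression");
        }
    }
}
```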

comments edit

Tyler, an old friend and an outstanding contractor for VelocIT, recently wrote a post suggesting that one would get better performance by passing an array of objects to the Microsoft Data Access Application Block methods rather than an array of SqlParameter instances. He cited this article.

The article suggests that instead of this:

public void GetWithSqlParams(SystemUser aUser)
{
  SqlParameter[] parameters = new SqlParameter[]
  {
    new SqlParameter("@id", aUser.Id)
    , new SqlParameter("@name", aUser.Name)
    , new SqlParameter("@email", aUser.Email)
    , new SqlParameter("@lastLogin", aUser.LastLogin)
    , new SqlParameter("@lastLogout", aUser.LastLogout)
  };

  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , parameters);
}

you should do something like this, for performance reasons:

public void GetWithObjects(SystemUser aUser)
{
  SqlHelper.ExecuteNonQuery(Settings.ConnectionString
    , CommandType.StoredProcedure
    , "User_Update"
    , aUser.Id
    , aUser.Name
    , aUser.Email
    , aUser.LastLogin
    , aUser.LastLogout);
}

Naturally, when given such advice, I fall back to the first rule of good performance from the perf guru himself, Rico Mariani. Rule #1 Is to Measure. So I mentioned to Tyler that I’d love to see metrics on both approaches. He posted the result on his blog.
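Measuring, in this case, doesn't require anything fancy. A minimal sketch of such a harness (DoWork is a hypothetical stand-in for either ExecuteNonQuery overload):

```csharp
using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        const int iterations = 5000;
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            DoWork();
        }
        watch.Stop();
        Console.WriteLine($"{iterations} calls took {watch.ElapsedMilliseconds} ms");
    }

    // Hypothetical stand-in for the code under test, e.g. a call to
    // SqlHelper.ExecuteNonQuery with either parameter style.
    static void DoWork() { }
}
```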

Calling the methods included in a previous post, 5000 times each,

With parameters took 1203.125 milliseconds. With objects took 1250 milliseconds. Objects took 46.875 milliseconds longer.

20000 times each:

With parameters took 4859.375 milliseconds.
With objects took 5015.625 milliseconds. Objects took 156.25 milliseconds longer.

The results show that the performance difference is negligible. However, even before seeing the performance results, I would agree with the article to choose the second approach, but for different reasons. It results in a lot less code. As I have said before, Less code is better code.

I tend to optimize for productivity all the time, but optimize for performance only after carefully measuring for bottlenecks.

There’s also a basic economics question hidden in this story. The first approach does eke out slightly better performance, but at what cost? That’s a lot more code to write just to shave 47 milliseconds off 5000 method calls. Is it really worth it?

This particular example may not be the best illustration of wasting time optimizing at the expense of productivity, because the first approach does have one redeeming factor worth mentioning.

By explicitly specifying the parameters, the parameters can be listed in any order, whereas the second approach requires that the parameters be listed in the same order in which they are declared in the stored procedure. Based on that, some may find the first approach preferable. Me, I prefer the second approach because it is cleaner and easier to read, and I don’t see keeping the order intact as much more work than getting the parameter names correct.

But that’s just me.
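As an aside, the ordering hazard is easy to demonstrate. In this sketch, Execute is a hypothetical stand-in for the params-object overload; swapped arguments compile and run without any complaint:

```csharp
using System;

class SystemUser
{
    public string Name = "Alice";
    public string Email = "alice@example.com";
}

class Program
{
    // Hypothetical stand-in for an overload taking params object[].
    // Nothing here can tell whether the values arrived in the right order.
    static void Execute(params object[] parameterValues)
    {
        Console.WriteLine(string.Join(", ", parameterValues));
    }

    static void Main()
    {
        var user = new SystemUser();
        Execute(user.Name, user.Email);  // intended order: Alice, alice@example.com
        Execute(user.Email, user.Name);  // silently wrong: alice@example.com, Alice
    }
}
```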

comments edit

Source: http://macibolt.hu/pag/goldilock.html

This is a bit of a rant born out of some frustrations I have with ASP.NET. When setting the trust level of an ASP.NET site, you have the following options:

Full, High, Medium, Low, Minimal

It turns out that many web hosting companies have chosen to congregate around Medium trust as a sweet spot in terms of tightened security while still allowing decent functionality. Only natural as it is the one in the middle.

For the most part, I am sure there are very good reasons for which permissions make it into Medium trust and which ones are not allowed. But some decisions seem rather arbitrary. For example, WebPermission. Why couldn’t that be a part of the default Medium trust configuration? I mean really? Why not? (Ok, there are really good reasons, but remember, this is a rant, not careful analysis. Bear with me. Let me get it off my chest.)

Web applications have very good reason to make web requests (ever hear of something called a web service? They may take off someday), and how damaging is that really going to be to a hosting environment? I mean, put a throttle on it if you are that concerned, but don’t rule it out entirely!

I really do want to be a good ASP.NET citizen and support Medium Trust with applications such as Subtext, but what a huge pain it is when some of the best features do not work under Medium Trust. For example, Akismet.

Akismet makes a web request in order to check incoming comments for spam. I tried messing around with wildcards for the originUrl attribute of the <trust /> element, but they don’t work. In fact, I only found a single blog post that said it would work, but no documentation that backed that claim up.

Instead, you need access to the machine.config file (as the previously linked blog post describes), which no self-respecting host is going to just give you willy-nilly. Nope. In order to get Akismet to work under Medium Trust, I have to tell Subtext users that they must beg, canoodle, and plead with their host provider to update the machine.config file to allow unrestricted access to the WebPermission. Good luck with that.

If they don’t give unrestricted access, then they need to add an originUrl entry for each URL you wish to request. Hopefully machine.config entries do allow wildcards, because the URL for an Akismet request includes the Akismet API code. Otherwise, running an Akismet-enabled multiple user blog in a custom Medium Trust environment would be a royal pain.
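For what it’s worth, here is roughly what a host would add to a custom trust policy file, assuming the standard web_mediumtrust.config policy format (the Akismet URL pattern below is purely illustrative):

```xml
<!-- Grants WebPermission only for specific URL patterns.
     $OriginHost$ is replaced with the originUrl attribute value
     from the <trust /> element. -->
<IPermission class="WebPermission" version="1">
  <ConnectAccess>
    <URI uri="$OriginHost$" />
    <URI uri="http://.*\.rest\.akismet\.com/.*" />
  </ConnectAccess>
</IPermission>
```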

Hopefully you can see the reason behind all my bitching and moaning. A major goal for Subtext is to provide a really simple and easy installation experience. At least as easy as possible for installing a web application.  Having an installation step that requires groveling does not make for a good user experience.  But then again, security and usability have always created tension between them.

Scott Watermasysk points out a great guide to enabling WebPermission in Medium Trust for hosters. So if you’re going to be groveling, at least you have a helpful guide to give them. The guide also points out the security risks involved with Medium Trust.

Related Posts: