Deals Well With Ambiguity


A while ago I was talking with my manager at the time about traits that we value in a Program Manager. He related an anecdote about an interview he gave where it became clear that the candidate did not deal well with ambiguity.

This is an important trait for nearly every job, but especially for PMs, as projects can change on a dime and it’s important to understand how to make progress amidst ambiguity and eventually drive toward resolving it.

Lately, I’ve been asking myself a question: doesn’t this apply just as much to software?

One of the most frustrating aspects of software today is that it doesn’t deal well with ambiguity. You could take the most well-crafted, robust piece of software, and a cosmic ray could flip one bit in memory and potentially take the whole thing down.

The most common form of this fragility we experience is breaking changes. Pretty much all applications have dependencies on other libraries or frameworks. One little breaking change in such a library or framework, and upgrading that dependency will quickly take down your application.

Someday, I’d love to see software that really did deal well with ambiguity.

For example, let’s imagine a situation where a call to a method whose signature has changed wouldn’t result in a failure but would be resolved automatically.

In the .NET world, we have something close with the concept of assembly binding redirection, which allows you to redirect calls compiled against one version of an assembly to another. This is great if none of the signatures of existing methods have changed. I can imagine taking this further and allowing application developers to apply redirection to method calls to account for such changes. In many cases, the method that changed could itself indicate how to perform this redirection. In the simplest case, you simply keep the old method and have it call the new method.
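To make that simplest case concrete, here's a minimal C# sketch of such a forwarding shim (the `DocumentStore` and `SaveDocument` names and the extra `overwrite` parameter are invented for illustration): the old signature stays around and simply delegates to the new one, so callers compiled against the previous version keep working.

```csharp
using System;

public class DocumentStore
{
    // Old signature, kept so existing callers continue to compile and run.
    // It simply forwards to the new method with a sensible default.
    [Obsolete("Use SaveDocument(string path, bool overwrite) instead.")]
    public void SaveDocument(string path)
    {
        SaveDocument(path, overwrite: false);
    }

    // New signature introduced in the later version.
    public void SaveDocument(string path, bool overwrite)
    {
        // ... actual save logic ...
    }
}
```

A binding redirect resolves the version mismatch at load time; a shim like this resolves the signature mismatch at the call site.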

More challenging is the case where the semantics of the call itself have changed. Perhaps the signature hasn’t changed, but the behavior has changed in subtle ways that could break existing applications.

In the near future, I think it would be interesting to look at ways that software that introduces such breaks could also provide hints at how to resolve them. Perhaps code contracts or other preconditions could look at how the method is called and, in cases where it would break, attempt to resolve it.
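As a purely hypothetical sketch of what such a hint could look like, imagine a method whose new version no longer accepts null. Instead of failing outright, a compatibility precondition could recognize the old calling convention and resolve it the way the previous version would have. (The `SettingsReader` type, the "default" fallback, and the warning message are all invented for illustration.)

```csharp
using System.Diagnostics;

public class Settings { /* ... */ }

public class SettingsReader
{
    // New contract: callers are expected to pass a non-null section name.
    public Settings GetSettings(string sectionName)
    {
        // Compatibility precondition: older callers passed null to mean
        // "the default section". Rather than throwing, resolve the
        // ambiguity the way the old version did, and note the mismatch.
        if (sectionName == null)
        {
            Trace.TraceWarning(
                "GetSettings(null) is deprecated; assuming the default section.");
            sectionName = "default";
        }

        return LoadSection(sectionName);
    }

    private Settings LoadSection(string sectionName)
    {
        // ... actual lookup goes here ...
        return new Settings();
    }
}
```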

Perhaps further in the future, a promising approach would be to move away from programming with objects and functions and toward building software out of autonomous agents that communicate with each other via messages as the primary building block of programs.

In theory, autonomous agents are aware of their environment and flexible enough to deal with fuzzy situations and make decisions without human interaction. In other words, they know how to deal with some level of ambiguity.

I imagine that even in those cases, situations would arise that the software couldn’t handle without human involvement, but hey, that happens today even with humans. I occasionally run into situations I’m not sure how to resolve and I enlist the help of my manager and co-workers to get to a resolution. Over time, agents should be able to employ similar techniques of enlisting other agents in making such decisions.

Thus when an agent is upgraded, ideally the entire system continues to function without coming to a screeching halt. Perhaps there’s a brief period where the system’s performance is slightly degraded as all the agents learn about the newly upgraded agent and verify their assumptions, etc. But overall, the system deals with the changes and moves on.
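As a rough sketch of that kind of tolerance (every name here is invented), an agent could treat a message from a newly upgraded peer as something to degrade around rather than crash on: read only the fields it understands, ignore the rest, and enlist another agent when it can't handle a message type at all.

```csharp
using System;
using System.Collections.Generic;

// A hypothetical message: a type name plus a loose bag of fields, so an
// upgraded sender can add new fields without breaking older receivers.
public class AgentMessage
{
    public string Type { get; set; }
    public readonly Dictionary<string, string> Fields =
        new Dictionary<string, string>();
}

public class OrderAgent
{
    private readonly Action<AgentMessage> _enlistPeer;

    public OrderAgent(Action<AgentMessage> enlistPeer)
    {
        _enlistPeer = enlistPeer;
    }

    public void Handle(AgentMessage message)
    {
        switch (message.Type)
        {
            case "PlaceOrder":
                // Read only the fields this agent knows about; any new
                // fields added by an upgraded peer are simply ignored.
                string product;
                if (!message.Fields.TryGetValue("product", out product))
                {
                    product = "unknown";
                }
                Console.WriteLine("Placing order for " + product + ".");
                break;

            default:
                // Unknown message type: rather than failing, enlist a
                // peer agent for help, much like asking a co-worker.
                _enlistPeer(message);
                break;
        }
    }
}
```

The point is not the switch statement; it's the posture of degrading or delegating instead of throwing when the input doesn't match expectations.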

A boy can dream, eh? In the meantime, if reducing the tax of backwards compatibility is the goal, there are other avenues to look at. For example, you could apply isolation using virtualization so that an application always runs in the environment it was designed for, thus removing any need for dealing with ambiguity (apart from killer cosmic rays).

In any case, I’m excited to see what new approaches will appear over the next few decades in this area that I can’t even begin to anticipate.


Comments


16 responses

  1. Jonas Follesø May 25th, 2010

    Isn't the idea of autonomous pieces of software communicating with messages, where each piece can be upgraded independently, the failed promise of SOA architectures? I.e. as long as you comply with the contract (messages) you should be fine when upgrading to a new version?
    I definitely get your point, but it's really hard to envision a technology or framework to solve this. I tend to think about it as the move from "CRUD applications" to "smart applications". Today we build so much software that is mere forms over database tables, and it's almost like we're not even trying to build smarter software capable of learning and making more and more decisions on its own. Has AI research had any impact on day-to-day software projects (I know the Xbox Natal project uses machine learning to teach Natal about movement), but for "business apps"...?
    Anyway, interesting food for thought. :)

  2. Filini May 25th, 2010

    I'm not really a genius of innovation, but I find it hard to imagine connected systems/agents that "adapt" to big breaking changes.
    The key point is your "even in those cases, situations would arise that the software couldn’t handle without human involvement".
    But, it would be nice :)

  3. Jesse May 25th, 2010

    As much as I would like to see AI, the core principle of a computer is logic. Without allowing the computer itself to learn in some capacity over time, dealing with ambiguity couldn't even begin -- there's some level of feeling involved (completely irrational, illogical ... emotion!)
    So I would tweak your idea and ask not (what your country can do for you) why can't it deal with ambiguity ... how do we make a computer think, learn and have some kind of emotions all by itself?
    -insert picture of Commander Data here-

  4. LosManos May 25th, 2010

    hejdig.
    It is easy for us computer guys to use computers for everything, but just as a carpenter shouldn't use a hammer for everything, neither should we.
    I am not sure making an adaptable application is always a good idea. One of the strengths of computers is that they don't tolerate straying from the right path.
    If I want a perfect machine I choose a computer. 1s and 0s. Nothing in between.
    Instead of crippling a computer and its user with uncertain outcomes, make the application solid but usable in a manner that allows ambiguity on another level (i.e. the user level).
    (i still think your article mentions a good idea)
    /OF

  5. Andy Mortimer May 25th, 2010

    There is a flip side to having systems that can cope with ambiguity: we have to cope with the fact that the system's output is no longer deterministic.
    To go back to your original example of the project manager, given an ambiguous situation, a good project manager will resolve it and move the project forward somehow, but a different good project manager might resolve the situation in a different way. Even the same project manager, on a different day or in a different mood, might take different approaches.
    Until we, as programmers and as users, are prepared to deal with the software making its best guess and sometimes doing something other than what we expected, we will remain stuck in this fragile world where the tiniest change throws the whole system off.
    We thought the transition to controlled indeterminacy in multi-threaded code was difficult. This change to total non-determinacy will hit us even harder, I suspect.
    - Andy

  6. Ray May 25th, 2010

    Very interesting. This is the kind of stuff that gets Terminators to come back in time to take you out!

  7. haacked May 25th, 2010

    @Jonas SOA and autonomous agents share the idea of autonomy, but are otherwise very different. Autonomous agents are flexible, something lacking in SOA which by definition requires an explicit contract.
    In SOA, boundaries are explicit, but not necessarily so with agents. For example, you can have a set of agents from a hive where they share a set of data as a collective, but are each independently autonomous.
    @Jesse I think there's a subtle distinction between autonomous agents and AI. Agents don't require "intelligence" but rather flexibility. Sometimes, the behavior that emerges appears intelligent. For example, some satellites already employ these concepts to deal with unexpected situations. I'll write a follow-up to describe examples.
    @LosManos yes, this would require a new way of programming where we're more concerned about setting a *range* of valid outcomes and not so concerned about how the software reaches the outcome. I'll cover that in my follow-up.
    @Andy indeed. So if we ask the software to deal with ambiguity, we must also be prepared to deal with ambiguity well. Interesting point. :)

  8. EricH May 25th, 2010

    Maybe the main issue is that we strive for shipping a perfect snapshot of software rather than a system designed for change? (ie. we're building with wood and steel beams rather than mouldable materials such as clay).
    "a promising approach would move away from programming with objects and functions and look at building software using autonomous software agents that communicate with each other via messages as the primary building block of programs."
    Why does this have to be in the absence of objects? Smalltalk has largely promoted messages, and Alan Kay has apologized for the confusion over classes time and time again. Message-based systems exist alongside objects; the two are not mutually exclusive.
    For what you are envisioning, I think late binding will sufficiently satisfy this need. Besides, look at the most popular and successful system in existence, i.e. the web: all it is, is one large late-bound system. Its late-boundness is a huge part of its success. It's a great example of the system you are describing, i.e. a system designed for change and to evolve.
    A lot of comments mention AI. AI is not necessary; there is no need for "intelligent"/"smart" machines which can understand and resolve. Don't get me wrong, the idea is great, but for 80% of what we need, intelligence is not needed. Late binding can facilitate all of this, along with messages, as you say.
    I truly encourage you to watch Alan Kay's 1997 OOPSLA keynote. It is a true eye-opening experience and an amazing talk. I would recommend it to anyone interested in computers... there are so many gems.
    Video:
    video.google.com/videoplay
    Transcript:
    blog.moryton.net/...
    Thanks for the great posts and great work!

  9. Mike Murray May 25th, 2010

    I think EricH is on the right track. What you are describing sounds like the original intention of "object-oriented" programming, as first coined by Alan Kay. Some of this could be done today in C#, but not very easily. C# (along with Java, C++, and the other popular programming languages we somewhat misapply the term "object-oriented" to) was not really built on top of the same goals and philosophical assumptions that would easily enable the flexibility you're looking for; but again, some of it can be done to a degree. Having never used it myself, it seems Smalltalk was built on philosophies and goals more in line with what you seek.
    I'm currently in the process of reading Object Thinking by David West (published by Microsoft Press), which discusses this in great detail.

  10. fschwiet May 25th, 2010

    Aren't a lot of the issues of flexibility addressed with unit testing? When you have good code coverage, you will see and be able to fix breaking changes from related components. If you have a class and you want to change the semantics of the interface, while supporting users of the old one, you can just support a 2nd interface? Assuming the consumers load the class via IoC they'll get the interface they know...
    Maybe you want this type of adaptability to be automatic? Good luck with that. I think more progress would be made getting more .NET developers up to speed on unit testing.

  11. haacked May 26th, 2010

    @fschwiet unit testing is an orthogonal concern. Suppose my mom upgrades Windows and Windows introduces a breaking change. How are her *existing* apps going to continue working? What if they could adapt automatically?
    Unit testing doesn't address this scenario. It only addresses those who write such apps to be able to update their apps to accommodate the change. But the end consumer is not necessarily going to update *every* app on their system.

  12. scottt732 May 27th, 2010

    "The system originally went online on August 4th 1997. Human decisions were removed from strategic defense. Skynet began to learn at a geometric rate. It originally became self aware on August 29th 1997 2:14 am Eastern Time."

  13. GazNewt June 3rd, 2010

    It's a rabbit!

  14. Alois Kraus June 12th, 2010

    To make dealing with breaking API changes easier, at least for .NET programming, there is a new (free) kid on the block:
    ApiChange:
    geekswithblogs.net/.../140207.aspx
    The project is hosted on CodePlex:
    http://apichange.codeplex.com/
    Yours,
    Alois Kraus

  15. foobar October 2nd, 2010

    MEF recomposition -> upgraded software w/no downtime?

  16. Fantasy Writing Guy May 5th, 2011

    From the perspective of a speculative fiction writer, the computers that are prevalent in works of science fiction are able to handle ambiguity well. That's what allows them to really take on the role of characters in a story. The computers that exist today are (for the most part) so darn literal that they are impossible to interact with in a meaningful way.