Fighting Confirmation Bias


This month’s Scientific American has an interesting commentary by Scott Lilienfeld entitled Fudge Factor that discusses the fine line between academic misconduct and errors caused by confirmation bias.

For a great description of confirmation bias, read the You Are Not So Smart post on the topic.

The Misconception: Your opinions are the result of years of rational, objective analysis.

The Truth: Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.

The fudge factor article talks about some of the circumstances that contribute to confirmation bias in the sciences.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

Obviously this doesn’t just apply to scientists. I’m sure we all know developers who are equally prone to confirmation bias, present company excluded of course. ;) Pretty much everybody is susceptible. We all probably witnessed an impressive (in magnitude) display of confirmation bias in the recent elections.

However, there’s another contributing factor that the article doesn’t touch upon that I think is worth calling out, our education system. I remember when I was in high school and college, I had a lot of “lab” classes for the various sciences. We’d conduct experiments, take measurements, and plot the measurements on a graph. However, we already knew what the results were supposed to look like. So if a measurement was way off the expected graph, there was a tendency to retake the measurement.

“Whoops, I must’ve nudged the apparatus when I took that measurement, let’s try it again.”

As the article points out (emphasis mine)…

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias.

So how can schools do a better job of teaching scientific methods? I think one interesting thing a teacher can do is have students conduct an experiment where the students think they know what the expected results should be beforehand, but where the actual results will not match up.

I think this would be interesting as an experiment in its own right. I’d be curious to see how many students turn in results which match their expectations rather than what matched their actual observations. That could provide a powerful teaching opportunity about scientific methods and confirmation bias.




12 responses

  1. Chad Myers November 5th, 2010

    Although these days, not even the scientific method will avail you, especially if the subject is inherently subjective or so complex that objectivity becomes problematic.
    In my experience, the best way is to listen to those that tell you you're wrong (even when it hurts) and try to prove them wrong.
    I've proven myself into and out of many biases by arguing (in the good sense of the word). Trying to prove someone wrong, for me, provides a useful motivation for doing the research and legwork. Frequently during that process I realize that I'm the wrong one and come around to the correct conclusion. Many times, I find out both my counterpart and I were wrong and come up with some other solution entirely.
    Comfort, as your post alludes to, is the great enemy. Scientific method, rational argumentation, apologetics (defending your ideas) are all means of getting us out of our ruts and into "thinking mode."

  2. Ken November 5th, 2010

    There were a couple of labs in high school and college that I remember repeating the step/experiment because the results didn't match what I/we expected. Sometimes I/we still couldn't get the results we expected so we turned in what we had. Typically we got docked some points for having the wrong answer.

  3. Mark November 5th, 2010

    I'm not so sure about your apparatus here in the U.S., but I went to school in Northern Canada. It was quite possible to do an experiment 3 different times and get 3 different results based on faulty/outdated equipment. Confirmation bias saved us. ;)
    There was also another factor to alter your thinking about the experiment - peer pressure. Many times we would do an experiment with clearly measurable results, but your partner would make you second-guess yourself or introduce confirmation bias. It adds another dimension to steering away from the scientific method.
    Great post, Phil.

  4. Paul November 6th, 2010

    I remember doing an experiment in high school physics (33 years ago) where we measured the position of a falling object. We then had to fit a curve to the graph of position vs. time. Everyone in the class dutifully found that the position was proportional to the square of the time. In my report I said that the power of time could have been anywhere between 1.8 and 3 due to the estimated errors in the measurements. The teacher marked this as wrong with the comment that the position is proportional to the square of the time. Having been penalised for following the scientific method, I never did it again and always returned the 'correct' answer as the conclusion of an experimental report.
    I did very well in science subjects at school because I knew the answers that were required (and kept the correct answers to myself).

  5. Richard Nienaber November 6th, 2010

    This post reminded me of the 'Oil drop experiment'. Wikipedia has a section on it.

  6. Kelly November 6th, 2010

    While I agree that science educators need to include more open-inquiry labs (which they are these days, actually), in general it is far more likely that the student did actually bump the apparatus to cause the strange measurement than that they stumbled upon a genuinely strange result. Further, in science as a field, if you are going to report something that bucks the traditional understanding, you had better repeat and repeat and confirm and confirm what you are going to report. You are going to be tested and questioned by EVERYONE. Therefore, I do not think it's a problem to teach scientists to be wary of strange results and be sure of what they think they've seen.

  7. Josh Carroll November 7th, 2010

    Years ago, my father told me about an article he read where a student (now grown up) was recounting the best teacher he ever had. The story* goes something like this:

    An elementary science teacher brought in a set of skeletal animal remains to class one day for the class to study. He spent two weeks showing the bones and talking about the prehistoric animal to which they belonged. He was very detailed about the animal from diet, habitation, and environment.
    Eventually we were tested at the end of the two weeks, on what we had learned. Much to our dismay we all failed the test. The reasoning... They were cat bones! Our teacher had made the whole thing up. He stated that we all failed because not once did we ever attempt to question the information he presented. No one ever verified what he said, but blindly accepted it as truth.
    Of course there was a small witch trial for the teacher and the parents were ready to crucify him over damaging their poor little innocent babies in such a cruel way. Being that this story took place decades ago in a much less politically correct environment, the school board upheld his decision.
    The author went on to say that the entire class questioned everything from then on out. It was the greatest and most valuable lesson he ever learned.

    In the end I think we should always question answers, even our own.
    *I don't have the original source, so this was all drawn from memory.

  8. JackJackson November 7th, 2010

    ... You might think MVC should have a Model View and a Controller.
    But in reality it's a Mattress, Vacuum and a Carrot.
    Loose binding... Make of it what you will!

  9. Peter.O November 7th, 2010

    I always felt that computer scientists are very odd scientists in that they hardly concern themselves with issues in the other vistas of science that are not directly relevant to software and hardware.
    This article is definitely an exception. There's hope that someday, we in CS will mature into a discipline that also pays attention to the philosophy of science.
    And when you hear some folks wondering what the point of this blog post is, they probably can't directly relate it to much of compiling code ... lots of code. People, it's sufficient for it to be just a scientific discourse, ok?!

  10. R Rogers November 7th, 2010

    I remember a story my Dad used to tell about following the crowd while in the boy scouts: All the father/son scout teams were given the same objective, with the same clues, using compass, etc. At the end of the day they were left feeling pretty uncomfortable, since everyone had reached the same destination, and it was a long distance from theirs. However, in the end, they were the only team that got the challenge right.
    In an age where we are all taught to question authority and the status quo, it's ironic that this has really only meant adhering to a new status quo, without ever asking why.
    I find it rare to find an 'expert' that actually is.

  11. Jamie Dixon November 8th, 2010

    Phil, thanks for posting this information. Very interesting to read.
    One thing that's not usually covered in this topic is the benefits of bias.
    One of my mentors tells the story of when he ran a research and development company. He had a group of around 15 people who would meet weekly to discuss developments they'd made, and each week he'd tell them how he'd found a way to do x and challenge them to figure it out too.
    Once they'd figured out how to do x, they'd explain how they got the same result. The important thing was that he hadn't figured out how to do x at all; he just wanted them to think it was possible, and thus the 15 of them together would figure it out and teach him.
    Sometimes when the bias is based on the plausibility and possibility of something being achievable, it becomes so.
    A bit of food for thought.

  12. hcoverlambda November 9th, 2010

    Reminds me of a couple of examples in physics.
    First, Rutherford's work on the atom c. 1910. He had an apparatus that would fire alpha particles (from either a radium or polonium source, can't remember) through a thin gold foil onto a scintillation screen. Rutherford was a great experimentalist and he had the insight to know what variations of the setup to try. So he had Ernest Marsden, an undergrad in his lab, put a scintillation screen *behind* the gold foil. Amazingly, flashes were observed there, and this led to the hypothesis that the atom has a massive nucleus concentrated in a small space (which then led to Bohr's early work in 1913 on quantum theory, etc.). The amazing thing is that Rutherford did not expect this setup to yield any results but wanted to eliminate other setups, making sure he was covering all his bases. So he had a bias but chose to explore all possibilities despite it, and that yielded results that led to the quantum revolution and nuclear physics (and a TON of other stuff).
    Another example is Michelson and his experiments to prove the existence of the aether (i.e., the Michelson-Morley experiment, et al.). This example goes along with what Chad was saying about how it's good to try to prove what you believe. In the end the experiments did not successfully prove that there was an aether. Unfortunately, Michelson allowed his confirmation bias to prevail and did not allow the results of the experiment to shape his beliefs. Einstein's work on relativity showed that whether the aether existed or not was irrelevant. Throughout all this, Michelson never budged on his belief in the aether and how it could unify physics.
    There are many other examples, like Einstein and quantum mechanics. In that case Einstein's bias towards a classical world picture (with explanations that can be understood classically) served him well in his development of relativity but led him to largely disregard quantum mechanics (ironically, his 1905 work on the photoelectric effect was part of the foundation of QM). As we've seen throughout the last century, relativity and QM have both proven themselves to be solid theories. So can confirmation bias be a good thing sometimes? I guess if your bias yields good results. :)