This month’s Scientific American has an interesting commentary by Scott Lilienfeld entitled “Fudge Factor” that discusses the fine line between academic misconduct and errors caused by confirmation bias.

For a great description of confirmation bias, read YouAreNotSoSmart.com’s post on the topic.

The Misconception: Your opinions are the result of years of rational, objective analysis.

The Truth: Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.

The “Fudge Factor” article talks about some of the circumstances that contribute to confirmation bias in the sciences.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers.

Obviously this doesn’t just apply to scientists. I’m sure we all know developers who are equally prone to confirmation bias, present company excluded of course. ;) Pretty much everybody is susceptible. We all probably witnessed an impressive (in magnitude) display of confirmation bias in the recent elections.

However, there’s another contributing factor that the article doesn’t touch upon that I think is worth calling out: our education system. When I was in high school and college, I had a lot of “lab” classes for the various sciences. We’d conduct experiments, take measurements, and plot the measurements on a graph. But we already knew what the results were supposed to look like, so if a measurement was way off the expected graph, there was a tendency to retake the measurement.

“Whoops, I must’ve nudged the apparatus when I took that measurement, let’s try it again.”

As the article points out (emphasis mine)…

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias.

So how can schools do a better job of teaching scientific methods? One interesting thing a teacher could do is have students conduct an experiment where they think they know what the results should be beforehand, but where the actual results won’t match up.

I think this would be interesting as an experiment in its own right. I’d be curious to see how many students turn in results that match their expectations rather than their actual observations. That could provide a powerful teaching opportunity about scientific methods and confirmation bias.