The scientist's siren song

Cognitive biases aren't confined to laypeople, but researchers have several strategies for countering the foibles of the human mind.

 
 

Have you ever experienced that odd sensation where a new thing, be it a word, number, song, or comic book character, seems to crop up everywhere after you notice it for the first time? It can feel like a coincidence, or even like a supreme force is trying to tell you something. It's positively befuddling.

You’ve likely experienced the Baader-Meinhof phenomenon, otherwise known as the frequency illusion, a type of cognitive bias. It’s undoubtedly a very weird feeling, but it's less a supernatural force and more a feature of a brain that is paradoxically both lazy and efficient.

The phenomenon has two steps: first, selective attention grabs hold of the novel stimulus, alerting you that ‘befuddle’ is indeed a zany new word. Second, confirmation bias comes into play, telling you that each sighting is further proof that this new word is everywhere, while ignoring the obvious evidence to the contrary — namely that the word has been there all along and you just hadn’t noticed it. This all sounds like a bit of fun until one thinks about the implications for science; namely, that scientists have been befuddling themselves with similar patterns of thought since time immemorial.

We all like to think that we are rational people. However, our choices and feelings are often guided by processes of the unconscious mind. These processes are not directed by robust systems of deduction and inference; they are concerned more with what is quick, simple, and often merely good enough. Such simple mental rules or shortcuts — which psychologists call heuristics — are one type of process that gives rise to cognitive biases.

Heuristics allow us to respond to a problem or situation without the laboured analysis that a novel situation might otherwise demand, resulting in a faster response and a lighter cognitive load. When our ancestors roamed the plains, it made sense to react quickly to an ambiguous situation as if it were a threat: investing time and energy in removing doubt is slow, and by then it might be too late. Employing heuristics was evolutionarily advantageous.

However, heuristics do not always shape our thinking for the better. Stereotyping, for example, is a mechanism that allows us to make judgements about someone or something because we perceive them to be a member of a group, without actually knowing anything about them at all. Stereotyping simplifies our view of the world and allows information to be more easily categorised and recalled, but has the unintended consequence of creating prejudice and discrimination.

 
Scientists are just as likely as the rest of us to take a logical shortcut and see a pattern where none truly exists, or to unconsciously convince themselves of something because it aligns with their expectations. Igor Ovsyannykov/Unsplash (CC0)

 

Perhaps the best-known example of a cognitive bias that arises from these heuristics is the aforementioned confirmation bias: the tendency to seek out information that confirms our preconceptions, and to disregard information that does not. We instinctively fall victim to confirmation bias because it lightens our cognitive load compared with examining every new piece of information we come across and adjusting our understanding accordingly. However, forming a view of the world based upon our preexisting expectations will not always lead to a view that accurately reflects reality.

Scientists, even with the best of intentions, are just as likely as the rest of us to take a logical shortcut, to see a pattern where none truly exists, or to unconsciously convince themselves of something because it aligns with their expectations.

This is why we do science the way we do. The practice of science, from experimental design to statistical analysis, is intended to mitigate the effects of biases inherent in the scientific endeavour. These biases can arise from sampling, instrumentation, statistical analyses, the environment, experimental execution, and, crucially, the experimenters themselves. Replicating a study reduces doubt about whether a result reflects something real or mere chance and circumstance, and generally this works well.

Of course, any scientist worth their salt will tell you that science is a process that self-corrects. The ability to separate good research from bad is ostensibly built into the very foundations of science, from the beginnings of the scientific method all the way to the rigorous peer review process. We know that this method is successful because we can see the fruits of this endeavour all around us. I'm writing this on a computer — a black box of voodoo sorcery as far as I'm concerned — while sitting in a chair suspended 50 metres or more in the air by a mix of sand, water, and a network of steel tubes. Many of the pedestrian activities we take for granted today would likely not have come to pass had scientists’ work not adhered to the stringency of the scientific method, the X-Acto knife of the post-Enlightenment modern world.

Sketches of the "canals" on Mars made by Percival Lowell. Яков Перельман/Wikimedia Commons (public domain)

In the late 19th century, the Italian astronomer Giovanni Schiaparelli described and illustrated a series of mysterious straight lines on the surface of Mars, which he called canali (meaning channels). Through mistranslation, the canali came to be known as ‘canals’ in the popular lexicon. The American astronomer Percival Lowell took the idea of canals of unknown origin further, ascribing them to an intelligent race desperate to transport Martian water across the surface of the desert planet. In actual fact, Schiaparelli was observing an optical illusion, unknowingly connecting darkish spots with lines running from one to another, the apparition made murkier still by the distortion of Earth’s atmosphere. But once word was out, it was too late. Lowell and Charles E. Burton both continued to observe and document the ephemeral canals, all thanks to the mind’s ability to see order in noise and accept a lacklustre explanation without question.

Some years later, Percival Lowell documented similar lines and a central dark spot on the surface of Venus. These turned out to be shadows of blood vessels in the retina of his own eye.

Some might think that these are issues of the distant past, but the problem persists, whether through self-deception or deception of others. In the 1980s, the study of cold fusion enjoyed a great deal of attention before running into the small issue of failed replications. Blindly convinced of the legitimacy of their experiments, cold fusion’s main proponents, Martin Fleischmann and Stanley Pons, refused to concede any error. As a result, they were effectively sidelined after their findings were wholly discredited. The study of cold fusion is now more of a hobby than a genuine scientific pursuit.

Yoshitaka Fujii, an anaesthesiologist, falsified data in his research for almost 20 years. Once the fraud was discovered, a massive investigation was undertaken, and in 2012 the tally of papers requiring retraction stood at a staggering 183. He still holds the record for the most retractions by a single author.

More recently, it was revealed that the stem cell biologist Haruko Obokata, who at the time headed her own lab and was being hyped as a possible future Nobel prize recipient, had falsified key elements of her work on generating a new type of pluripotent stem cell. Her career ended as spectacularly as it did expeditiously.

This woodcut etching (circa 1886) depicts the sirens calling to Odysseus and his crew. Wikimedia Commons (public domain)

Even worse, entire fields of study have been undergoing a crisis of confidence. In attempting to replicate 100 psychological studies, the Open Science Collaboration found that only around one-third of the replications produced statistically significant results, and that the mean effect size was only about half that of the original studies.

The above are examples where 'bad science', whether deliberately fraudulent or naively pursued with the best of intentions, has been identified as such, and the science has indeed self-corrected. But they also highlight a deeply rooted problem, not with the scientific method, but with the nature of academia and the brain itself. If ‘new’ science truly is subject to interrogation by intensive peer review, one has to wonder how it is that these papers were approved for publication in the first place.

Perhaps unsurprisingly, the peer review system itself is not immune to cognitive bias. Psychologist Michael Mahoney explored this issue back in 1977 by asking 75 journal reviewers to referee manuscripts describing identical experimental procedures but differing in reported results (positive, negative, mixed, or no results), with the aim of assessing how many of the manuscripts were accepted or rejected, and why. Not only was there a lack of consensus between reviewers as to whether any one version of the manuscript was to be accepted, but the reviewers were significantly less likely to accept a manuscript that disagreed with their own theoretical perspective. That is, papers were accepted if they confirmed a reviewer’s preconceptions and rejected if not, illustrating that confirmation bias plays a role even in the peer review process.

The reality is that scientists are indeed human and are subject to the same types of bias as the rest of us, and with the same frequency and severity. Ironically, the labs of universities and pharmaceutical companies are contaminated not by wayward bacteria or a misplaced petri dish, but by the hopes and dreams of the very people trying to uncover something new about the world. Confirmation bias is to scientists as the sirens were to Odysseus. Tempting though it may be, those who follow the siren's song will soon find themselves crashing on the rocks.

The systemic issues with the current publishing process are more troubling than the occasional rogue like Yoshitaka Fujii. There exists in science a culture of ‘publish or perish’: scientists who don’t publish new work will soon be out of a job. There are two very disconcerting outcomes here. First, confirmation bias makes it unconsciously enticing to err on the side of interpreting results as positive. Second, there is little accolade in replicating experiments: successful replications do not a prosperous career make, but new discoveries certainly do.

Another way in which scientists can tip the balance in favour of success is through ‘p-hacking’, where statistical analyses are manipulated or selectively reported to produce a positive result. In very basic terms, the p-value quantifies the strength of evidence against the null hypothesis (the hypothesis that there is no difference between experimental groups). A p-value of 0.05 or less is conventionally labelled ‘statistically significant’. Contrary to a common misreading, this does not mean the experimental hypothesis is likely true; it means that results at least this extreme would arise less than 5% of the time if the null hypothesis were true. P-hacking can take many forms: experimenters can stop collecting data as soon as the p-value dips below 0.05, try several statistical tests and report only the favourable one, exclude inconvenient participants, and so on.
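The damage done by one common form of p-hacking, optional stopping, is easy to demonstrate in simulation. The sketch below (plain Python, with a simple two-sided z-test and arbitrary illustrative parameters of my choosing) draws data from a world where the null hypothesis is true, then compares an honest single test at the planned sample size against an experimenter who peeks at the p-value after every batch of observations and stops as soon as it dips below 0.05:

```python
import math
import random

random.seed(0)

def p_value(xs):
    """Two-sided z-test against a mean of 0 with known sd 1 (the null is true by construction)."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

def significant(peek, max_n=100, batch=10):
    """Run one null experiment; if `peek`, test after every batch and stop early on p < 0.05."""
    xs = []
    while len(xs) < max_n:
        xs.extend(random.gauss(0, 1) for _ in range(batch))
        if peek and p_value(xs) < 0.05:
            return True               # declare 'significance' and stop collecting data
    return p_value(xs) < 0.05         # honest: one test at the planned sample size

trials = 2000
honest = sum(significant(peek=False) for _ in range(trials)) / trials
hacked = sum(significant(peek=True) for _ in range(trials)) / trials
print(f"false-positive rate, one planned test:  {honest:.1%}")   # close to the nominal 5%
print(f"false-positive rate, optional stopping: {hacked:.1%}")   # substantially higher
```

Even though there is no real effect, peeking after every batch pushes the false-positive rate well above the nominal 5%, and the more often one peeks, the worse it gets. Pre-registered sample sizes and analysis plans close off exactly this degree of freedom.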

 
Addressing cognitive bias in science will likely require changes to the scientific journal publishing system. Here, bound collections of scientific journals line the shelves of a university library. Selena N.B.H./Flickr (CC BY 2.0)

 

So the scientific community has found itself at somewhat of an impasse, aware of critical problems with the quality of research yet unsure of how to fix them. Recently, a team of researchers published a paper entitled “A manifesto for reproducible science” in Nature Human Behaviour. It’s an eight-page document detailing methods of fixing the reproducibility crisis, including multiple ways to combat confirmation bias. The authors suggest individually ‘blinding’ participants, data collectors, and data analysts, such that the aims of the study are not known and the data cannot be manipulated to favour a particular outcome. They also suggest promoting study pre-registration in order to curb outcome switching, where researchers change the outcomes of interest in the study depending on the observed results.

Study pre-registration also tackles publication bias by making all research discoverable regardless of whether it’s ultimately published or not, and reduces p-hacking by forcing researchers to declare their experimental methodology and analysis techniques prior to the research taking place.

Over the past 10 years or so, the scientific community has become more cognisant of the issues surrounding cognitive biases in science and is slowly but surely putting in place checks and balances to mitigate their effects. Pre-registering study designs is now fairly standard, and resources such as arXiv.org allow researchers to disseminate their work and open it up to criticism before it goes to mainstream journals. Ultimately, with increased awareness and a healthy appetite for rigour and integrity in the scientific community, the future is looking bright and befuddlement-free.

Edited by Diana Crow