Science

How to tackle confirmation bias?

Like any research field, physics can fall prey to confirmation bias, as the Delft Majorana case shows. Delta asked two psychologists, experts in the field, what can be done.

Preferably, problems linked to confirmation bias are identified prior to publication, or better yet, prior to data collection. (Photo: Dalia Madi)

First we take a step back in time. ‘Majorana: not fraud, but confirmation bias’, Delta wrote a couple of months ago. In March of this year, Nature retracted an article by a group led by Professor Leo Kouwenhoven of Quantum Transport at QuTech and Microsoft, after turmoil about the trustworthiness of the study. In the 2018 article, the researchers claimed – in error, as it turned out – to have observed Majorana particles. This was big news at the time because these elusive particles, which have no mass or charge, could be used to make excellent qubits, the calculating units of a future quantum computer.

Speculation that the researchers had deliberately pushed unwelcome data aside to put fellow scientists on the wrong track led TU Delft to institute an external investigation. In their report, the experts suggest that the authors were probably ‘caught up in the excitement of the moment, and were themselves blind to the data that did not fit the goal they were striving for’. Put differently, they were victims of confirmation bias: the tendency of researchers to interpret their evidence in a way that confirms the hypotheses they originally favoured.

Confirmation bias is a big problem in science, says psychologist Matthew Makel of the Johns Hopkins University School of Education, who has a keen interest in ways in which science can self-correct better and publishes regularly on the subject. Australian psychologist Mark Rubin of the University of Newcastle takes a different position. He researches the group processes behind biases such as confirmation bias, and says that confirmation bias may actually be beneficial for scientific progress.

Delta contacted both researchers and asked them, in general terms, how they believe pitfalls such as those seen in the Majorana research can be avoided.

Matthew Makel: “We should place greater emphasis on independent research.” (Photo: Johns Hopkins University School of Education)

How can researchers guard themselves against confirmation bias?
“As physicist Richard Feynman famously said, you must not fool yourself, and you are the easiest person to fool. Researchers are human. I think a more fruitful way to identify and overcome biases would be to fix existing systems and create better ones. These systems could include better incentives for sharing data and materials, support for assessing reproducibility and replication, and rating research on the quality of its questions and methods instead of the flashiness of its results.”

Are some people more susceptible to confirmation bias?
“My hunch is that context is key for biases in researchers. There is a reason why Upton Sinclair’s quotation, ‘It is difficult to get a man to understand something when his salary depends upon his not understanding it’, is so well known: it resonates with many people. For example, in education research, programme evaluations conducted or paid for by those who developed the intervention produce larger effects than evaluations conducted by independent teams. This is why transparency of the research process and disclosing potential conflicts of interest are important. This is not to say that interested parties can never produce valuable research, but as a society we may want to place greater emphasis on independent evaluation and independent research.”

What other measures should research groups take to cope with the risks of confirmation bias, especially when the stakes are high? And how should universities and publishers deal with it?
“There are several routes. Teams could recruit their own ‘red teams’ of independent critics and integrate their criticism into the regular workflow. More researchers could also adopt the registered reports format, which relies on independent peer review prior to the start of data collection. This brings an external perspective into the research process earlier. A third avenue that could be helpful is a pre-registered adversarial collaboration model, which brings together researchers who disagree about an issue and has them collaborate on designing a study that could potentially change their minds.”

To what extent would you say that science is self-correcting?
“Science is self-correcting when scientists seek to self-correct it. Correction is not something we should take for granted as inevitable. Rather, correction needs explicit support: financial resources, and acceptance by the research community that its work needs oversight and review, including after publication. Preferably, problems are identified prior to publication, or better yet, prior to data collection. But there will always be errors that slip through. Continued evaluation of research after publication is a key ingredient of a healthy self-correcting system. This evaluation can include assessing reproducibility (can others get the same results from the same data?) as well as replication (can the same results be found with new data?). Individual researchers and scientific fields must also accept that their work should continue to be evaluated after publication.”

Mark Rubin: “I’m not sure confirmation bias is a serious threat to science.” (Photo: University of Newcastle)

Are some people more susceptible to confirmation bias than others?
“Yes, some recent work has looked at individual differences in cognitive biases, including the confirmation bias, and other researchers have examined its neural correlates. However, I think it’s also important to consider the confirmation bias at the collective level. For example, one recent study asked participants with a background in psychology to evaluate a study based on its abstract. Half the participants read a version of the abstract that related the results to parapsychology, and the other half read a version that explained the results in terms of neuroscience. Despite reading exactly the same method and results, participants rated the neuroscience abstract as providing stronger evidence, most likely due to a collective confirmation bias in favour of neuroscience. Obviously, we need to be careful not to dismiss valid evidence for any claim out of hand.”

Still, you reject the notion that confirmation bias is by definition a danger to science. How so?
“Yes, I’m not sure the confirmation bias is a serious threat to science. Again, researchers tend to be biased in favour of hypotheses that are better established and more plausible (e.g. neuroscience rather than parapsychology). So, extrapolated to the collective level, the confirmation bias will tend to favour plausible theories and hypotheses and disfavour less plausible ones. Consequently, the confirmation bias may actually be beneficial for scientific progress, because it deters scientists from discarding plausible theories too easily.

“Moreover, I think it would be impossible and problematic to eliminate all of your biases. Many of our biases are functional and adaptive for our goals. For example, it’s useful to be biased against strange men in dark alleys! Indeed, a recent discussion of the confirmation bias concluded that it ‘is not only not maladaptive but actually evolutionarily desirable’.

“Like other people, scientists can’t eliminate their biases. In science, a ‘bias’ would imply a motivated deviation from a perspective that is assumed to be ‘correct’ according to some objective criteria. But scientists are never 100% certain which perspective is ‘correct,’ and so it’s always a matter of opinion whether they are being unacceptably ‘biased’ or whether they have a reasonable justification for their particular claim. And, as Dr Makel pointed out, that’s where peer discussion, evaluation, and critique are important.”

All right then, but how can the pitfalls associated with confirmation bias be avoided?
“It’s only through discussions with other scientists that we can consider previously unidentified biases in our work and evaluate the potential impact of these biases on our conclusions.

“Preregistration has been proposed as one method of revealing confirmation bias. However, as I discuss in a recent article, I’m sceptical about this approach for several reasons. In particular, it’s important to appreciate that preregistration doesn’t reduce bias; it only allows people to identify when researchers have changed their initial, preregistered biases to a different set of biases. I think transparency and peer criticism are more important than preregistration because they allow us to evaluate a researcher’s current biases in the context of their current claims. Certainly, that approach appeared to be effective in the case of the Majorana conductance paper.”

How so?
“The external expert committee (Brouwer et al., 2020) concluded that the TU Delft researchers appeared to have engaged in unintentionally biased selective reporting by presenting their ‘best’ results and choosing not to report their less impressive results. I think an important part of this story is that this biased selective reporting was identified during post-publication peer review by fellow scientists.”

That may be. Yet it took two whistle-blowers a tremendous amount of time and effort to set things in motion. How do we make science even more self-critical?
“There’s always room for improvement. I think we need to promote a scientific culture of data sharing, transparency, and civil critique, and we need to reduce any perceived penalties associated with admitting non-fraudulent biases and errors. We’re all biased, and we all make mistakes. A functional scientific culture shouldn’t shame researchers for that.”

Editor Tomas van Dijk

Do you have a question or comment about this article?

tomas.vandijk@tudelft.nl
