Why is scientific research often so hard to replicate? The Netherlands Reproducibility Network, a new project launched last week with modest support from the Dutch Research Council (NWO), intends to address this issue.
Ever since the Dutch psychology professor Diederik Stapel was exposed for fabricating all kinds of plausible-sounding data – a story that made international headlines – the reproducibility of scientific research has been under increased scrutiny. Why had no one ever tried to reproduce his research results?
Since Stapel’s fraud came to light, a large number of studies have been conducted in a variety of fields, attempting to replicate previous research. These consistently show that 40 to 50 percent of the results cannot be confirmed, according to epidemiologist Michiel de Boer of UMC Groningen.
Within academia, he says, people are calling it a ‘replication crisis’. So far, around 20 countries have established networks to draw attention to the reproducibility of scientific research. Together with a number of colleagues from other universities, De Boer has now established such a network for the Netherlands: the Netherlands Reproducibility Network (NLRN).
The network is starting modestly, with a small grant of 250,000 euros for the next three years from research funder NWO. De Boer: “We have a coordinator, who works part-time, and we can also organise conferences and develop training materials.”
Fraudulent researchers are not the biggest problem
The goal is not necessarily to actually repeat past research, says De Boer, but to ensure the possibility of replication. Researchers should work as transparently as possible so that others can repeat their process step by step.
“Some scientists don’t see the problem”, De Boer explains. “They go: didn’t I explain what I did in my methodology section? But scientific journals only give you, for example, 400 words to explain your method, which is never enough to go into detail. And if you want to replicate a study, you also need the original data, or the software and code that were used, and so on.”
The creation of the network is part of a larger drive towards open science, De Boer affirms. “But open science is also about things like freely accessible articles, and that’s not something we’re concerned with. We’re specifically looking at the possibility of repeating scientific research.”
Because they’re so rare, academic fraudsters like Stapel are actually not the biggest problem, De Boer believes. Researchers who decide to cut corners and only publish significant outcomes are much more common. This can sometimes make for conclusions that sound spectacular, even though they’re actually based on a fluke.
- Something similar happened to leading Delft researcher Leo Kouwenhoven. Read more about it in ‘Majorana: not fraud, but confirmation bias’ and ‘LOWI: Majorana researchers negligent’.
Meanwhile, science is also plagued by ‘publication bias’, says De Boer. Journals only publish outcomes that are deemed interesting – and that can’t always be replicated.
Is reproducibility relevant to all disciplines, even those where scientific experiments and data hardly play a role? Yes, says De Boer. “In a field like history, the discussion is still in its infancy, but we are seeing new initiatives. For example, there’s someone who’s trying to replicate the attribution process for two Rembrandt paintings, to see what kind of challenges that might entail.”
The network itself is also in its infancy, De Boer concedes. Together with his colleagues, he’s in talks with many potential partners, but so far only one institution – his own employer, the University of Groningen – has officially joined the network. “We do still have some way to go, but we’ve already connected with all kinds of local initiatives, and we look forward to introducing them to each other.”
HOP, Bas Belleman