Since the notorious fraud of psychologist Diederik Stapel, who fabricated all sorts of seemingly plausible data, the reproducibility of scientific research has been in the spotlight. After all, how was it possible that no one had tried to replicate his research?
Since then, a variety of studies have been conducted, and not only in psychology, that attempt to reproduce or “replicate” previous research. Time and again it turns out that 40 to 50 percent of the results cannot be confirmed, says epidemiologist Michel de Boer of UMC Groningen.
According to him, science speaks of a “replication crisis.” Networks have now been established in some twenty countries to draw attention to the replication of scientific research. Together with colleagues from several universities, he has now founded such a network here too: the Netherlands Reproducibility Network (NLRN).
Modest
The network is starting modestly, with a small grant from research funder NWO: €250,000 over the next three years. “We have a part-time coordinator, and we can organize seminars and develop training materials, for example.”
It is not necessarily about actually replicating research, says De Boer, but about the possibility of replication. Scientists should work as transparently as possible, so that others can retrace their steps.
De Boer: “Some scientists don't see the problem. They think: I explained what I did in my methods section, didn't I? But a scientific journal may give you, say, four hundred words to describe your method, so you can never provide all the details. If you want to reproduce something, you also need someone's data, software, code, and so on.”
Chance
The creation of the network fits with the goals of open science, he confirms. “But open science is also about, for example, articles that can be read for free, and that is not our focus. We are looking specifically at the reproducibility of scientific research.”
De Boer believes that fraudulent researchers like Stapel are not actually the biggest problem, because outright fraud rarely happens. More often, researchers cut corners somewhere, for example by publishing only their most striking results. The conclusions then sometimes seem impressive, when in fact they are based on chance.
Science also struggles with “publication bias,” says De Boer: journals mainly publish interesting results. Sometimes little of those results remains when you try to repeat the study.
Rembrandt
Is this relevant to all disciplines, even those where experiments and data hardly play a role? De Boer thinks so. “In a field like history the discussion is still in its infancy, but initiatives are emerging. For example, some researchers are trying to replicate the attribution of two paintings to Rembrandt, to see what obstacles they run into.”
De Boer stresses that the network itself is also in its infancy. He and his colleagues are having all kinds of discussions, he says, but so far only one institution has officially joined: his own University of Groningen. “We still have a long way to go, but all kinds of local initiatives have already emerged. We would like to put them in touch with each other.”