Sciences & Technology
The credibility of research needs you
The proposal of an ‘independent science quality assurance agency’ will not address the research replication crisis and will hinder attempts to improve science
Published 25 September 2019
The scientific ‘replication crisis’ is in the news again.
If you are unfamiliar with the term, the replication crisis refers to the difficulty researchers across many fields have had in reproducing and validating original research findings.
Recently, in response to the Queensland Government’s new regulations to reduce agricultural runoff in Great Barrier Reef catchments, the National Party, Queensland’s Agforce and Bob Katter have proposed an “independent science quality assurance agency.”
George Christensen (LNP) and Michael Guerin (AgForce) have specifically invoked the scientific ‘replication crisis’ to justify their position.
These politicians and lobbyists propose that the agency would “check scientific papers underpinning public policy and affecting peoples’ lives and livelihoods.”
But they are targeting specific results they don’t like, rather than trying to improve scientific practice in a systematic way.
This proposal will not address the replication crisis and will hinder attempts to improve science.
Across a number of scientific fields, such as psychology and preclinical medicine, large-scale replication projects have failed to produce evidence supporting the findings of many original studies.
The rates of success differ between fields, but on average half – or fewer – of the tested findings in these projects successfully replicated.
Clearly there is a problem.
Many of these problems have arisen due to hyper-competitiveness in science, funding shortfalls, publication practices and an over-reliance on metrics that privilege quantity over quality.
These kinds of practices distort the evidence available to policy-makers and other researchers.
The good news is that we can do something useful about these problems – but only if we take the replication crisis seriously.
Scientists themselves have documented the poor practices that underlie this crisis, such as the often unwitting misuse of statistics in ways that bias findings towards attention-grabbing conclusions.
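To illustrate the mechanism, here is a minimal simulation sketch (ours, not drawn from the article or from any of the replication projects) of one such practice: measuring several outcomes in a study but reporting only whichever one clears the conventional p < 0.05 threshold. Even when there is no true effect at all, far more than the nominal 5 per cent of simulated studies appear to "find" something.

```python
# Minimal, illustrative sketch: selective reporting inflates false positives.
# All numbers here (5 outcomes, 30 participants per group) are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_outcomes, n_per_group = 5000, 5, 30

false_positives = 0
for _ in range(n_studies):
    # No real effect: both groups are drawn from the same distribution.
    control = rng.normal(size=(n_outcomes, n_per_group))
    treatment = rng.normal(size=(n_outcomes, n_per_group))
    pvals = [stats.ttest_ind(treatment[i], control[i]).pvalue
             for i in range(n_outcomes)]
    # Selective reporting: the study "finds an effect" if any outcome is significant.
    if min(pvals) < 0.05:
        false_positives += 1

print("Nominal false-positive rate: 5%")
print(f"Observed with selective reporting: {false_positives / n_studies:.1%}")
```

With five outcomes per study, more than a fifth of these null studies report at least one "significant" result. Distortions of exactly this kind are what the reforms below are designed to prevent.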
Scientists have also produced responses to problems in peer review, such as 'registered reports', which remove bias in what is published; the Transparency and Openness Promotion guidelines, which improve methodological and statistical reporting and have been endorsed by thousands of journals; and new platforms for data sharing.
Establishing an agency with a mission to adjudicate on hand-picked scientific results may make things worse.
At best, such an agency will be one more review panel. At worst, it will be a bureaucratic front for the political agenda of the day.
Either way, it will make scientists more cautious and closed, and delay the flow of information to policy-makers.
The record of the politicians and lobby groups involved demonstrates that they have little genuine interest in improving science. AgForce, for example, “deleted more than a decade’s worth of data” in advance of the new runoff regulations taking effect.
Such acts undermine any claim to support transparency.
Data availability is critical to good science, and we support new measures to increase data sharing, but AgForce's record is inconsistent with that goal.
This misuse of the replication crisis was seen three years ago in the UK, where it was invoked to justify inaction on climate change, and more recently in the United States. Local politicians and lobby groups are copying from an overseas playbook.
Historians of science Professor Naomi Oreskes and Professor Erik Conway have detailed how these tactics were pioneered by the tobacco industry, ozone-depleting chlorofluorocarbon manufacturers and others in their book Merchants of Doubt.
These same tactics are being played out today.
Scientists can never make pronouncements with the certainty of a politician. If, as a society, we want to enjoy a healthy scientific community, we will need to accept the idea of scientific uncertainties.
However, the existence of small uncertainties does not justify rejection of the best available evidence.
It is tempting to respond to politically motivated attacks on science with claims about the excellent track record of scientific knowledge or the good intentions of the vast majority of scientists.
We can point to much better reasons to trust science: scientists have identified the roots of the replication crisis and continue to put better systems in place to catch their own errors.
A simple story, that science is the best knowledge we have, is rhetorically powerful.
But this strategy downplays the need for all of us to improve.
As advocates of cultural change in science, we have been told before that pointing out problems helps the anti-science movement. We say, instead, that being open about our work to improve science is essential for building public trust.
There are sensible policies to support the open science initiatives that will reduce error production and increase error detection in scientific work. Different fields need different approaches, but here are two ideas:
Improve funding allocation procedures: Reward self-correcting activities, like replication studies. Don't require every piece of funded research to be ground-breaking. Don't promote flawed metrics. Enforce best-practice data management and open data practices wherever feasible. This can all be done without establishing an inefficient agency whose likely effect is to delay action.
Establish a national independent office of research integrity: To allow errors in the scientific literature, whether deliberate or accidental, to be corrected in a manner that is fair, efficient and systematic. Unlike the ‘Quality Assurance’ proposal, this would improve the process for all researchers, not just act as a hand-brake on the results targeted by lobby groups.
Science is something that humans do. It is self-correcting when, and only when, scientists correct it.
Research is hard work, and we can’t expect scientists to never make errors or to provide complete certainty.
We can expect scientists to create a culture that values detecting and correcting errors.
Admitting errors in one’s own work, finding them in others’ work, reporting them, retracting when necessary, and correcting the record are activities that should be the most highly regarded of scientific practices.
We need to shift the balance of rewards away from ground-breaking discoveries alone and towards the painstaking work of confirmation.
We need more incentives for these behaviours in science. Politicised science agencies will reduce them.
Banner: Getty Images