We’re continually assured that government policies are grounded in evidence, whether it’s an anti-bullying programme in Finland, an alcohol awareness initiative in Texas or climate change responses around the globe. Science itself, we’re told, is guiding our footsteps.
There’s just one problem: science is in deep trouble. Last year, Richard Horton, editor of the Lancet, referred to fears that ‘much of the scientific literature, perhaps half, may simply be untrue’ and that ‘science has taken a turn toward darkness.’
It’s a worrying thought. Government policies can’t be considered evidence-based if the evidence on which they depend hasn’t been independently verified, yet the vast majority of academic research is never put to this test. Instead, something called peer review takes place. When a research paper is submitted, journals invite a couple of people to evaluate it. Known as referees, these individuals recommend that the paper be published, modified, or rejected.
If it’s true that one gets what one pays for, let me point out that referees typically work for no payment. They lack both the time and the resources to perform anything other than a cursory overview. Nothing like an audit occurs. No one examines the raw data for accuracy or the computer code for errors. Peer review doesn’t guarantee that proper statistical analyses were employed, or that lab equipment was used properly. The peer review process itself is full of serious flaws, yet is treated as if it’s the handmaiden of objective truth.
And it shows. Referees at the most prestigious of journals have given the green light to research that was later found to be wholly fraudulent. Conversely, they’ve scoffed at work that went on to win Nobel prizes. Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery and a black box. He points out that an extensive body of research finds scant evidence that this vetting process accomplishes much at all. On the other hand, a mountain of scholarship has identified profound deficiencies.
We have known for some time about the random and arbitrary nature of peer reviewing. In 1982, 12 already published papers were assigned fictitious author and institution names before being resubmitted to the same journal 18 to 32 months later. The duplication was noticed in three instances, but the remaining nine papers underwent review by two referees each. Only one of the nine was judged worthy of seeing the light of day the second time around, by the very journal that had already published it. Lack of originality wasn’t among the concerns raised by the second wave of referees.
A significant part of the problem is that anyone can start a scholarly journal and define peer review however they wish. No minimum standards apply and no enforcement mechanisms ensure that a journal’s publicly described policies are followed. Some editors admit to writing up fake reviews under cover of anonymity rather than going to the trouble of recruiting bona fide referees. Two years ago it emerged that 120 papers containing computer-generated gibberish had survived the peer review process of reputable publishers.
There are serious knock-on effects. Politicians and journalists have long found it convenient to regard peer-reviewed research as de facto sound science. Saying ‘Look at the studies!’ is a convenient way of avoiding argument. But Nature magazine has disclosed how, over a period of 18 months, a team of researchers attempted to correct dozens of substantial errors in nutrition and obesity research. Among these was the claim that the height change in a group of adults averaged nearly three inches (7 cm) over eight weeks.
The team reported that editors ‘seemed unprepared or ill-equipped to investigate, take action, or even respond’. In Kafkaesque fashion, after months of effort culminated in acknowledgement of a gaffe, journals then demanded that the team pay thousands of dollars before a letter calling attention to other people’s mistakes could be published.
Which brings us back to the matter of public policy. We’ve long been assured that reports produced by the UN’s Intergovernmental Panel on Climate Change (IPCC) are authoritative because they rely entirely on peer-reviewed scientific literature. A 2010 InterAcademy Council investigation found this claim to be false, but that’s another story. Even if all IPCC source material did meet this threshold, the fact that one academic journal — and there are 25,000 of them — conducted an unspecified and unregulated peer review ritual is no warranty that a paper isn’t total nonsense.
If half of the scientific literature ‘may simply be untrue’, then might it be that some of the climate research cited by the IPCC is also untrue? Even raising this question is often seen as anti-scientific. But science is never settled. The history of scientific progress is the history of one set of assumptions being disproven, and another taking its place. In 1915, Einstein’s theory of relativity undermined Newton’s understanding of the universe. But Einstein said he would not believe in his own theory of relativity until it had been empirically verified.
It was an approach which made quite an impression on the young Karl Popper. ‘Einstein was looking for crucial experiments whose agreement with his predictions would by no means establish his theory,’ he wrote later, ‘while a disagreement, as he was the first to stress, would show his theory to be untenable. This, I felt, was the true scientific attitude.’