The Covid-19 pandemic has stretched the bond between the public and the scientific profession as never before.
Scientists have been revealed to be neither omniscient demigods whose opinions automatically outweigh all political disagreement, nor unscrupulous fraudsters pursuing a political agenda under a cloak of impartiality.
Somewhere between the two lies the truth: Science is a flawed and all-too-human affair, but it can generate timeless truths, and reliable practical guidance, in a way that other approaches cannot.
In a lecture at Cornell University in 1964, the physicist Richard Feynman defined the scientific method. First, you guess, he said, to a ripple of laughter. Then you compute the consequences of your guess. Then you compare those consequences with the evidence from observations or experiments.
“If [your guess] disagrees with experiment, it’s wrong. In that simple statement is the key to science. It does not make a difference how beautiful the guess is, how smart you are, who made the guess, or what his name is…it’s wrong.”
So when people started falling ill last winter with a respiratory disease, some scientists guessed that a novel coronavirus was responsible. The evidence proved them right.
Some guessed it had come from an animal sold in the Wuhan wildlife market. The evidence proved them wrong. Some guessed vaccines could be developed that would prevent infection. The jury is still out.
Seeing science as a game of guess-and-test clarifies what has been happening these past months.
Science is not about pronouncing with certainty on the known facts of the world; it is about exploring the unknown by testing guesses, some of which prove wrong.
Bad practice can corrupt all stages of the process. Some scientists fall so in love with their guesses that they fail to test them against evidence. They just compute the consequences and stop there.
Mathematical models are elaborate, formal guesses, and there has been a disturbing tendency in recent years to describe their output with words like data, result, or outcome. They are nothing of the sort.
An epidemiological model developed last March at Imperial College London was treated by politicians as hard evidence that without lockdowns, the pandemic could kill 2.2 million Americans, 510,000 Britons, and 96,000 Swedes.
The Swedes tested the model against the real world and found it wanting: They decided to forgo a lockdown, and fewer than 6,000 have died there.
In general, science is much better at telling you about the past and the present than the future.
As Philip Tetlock of the University of Pennsylvania and others have shown, forecasting economic, meteorological, or epidemiological events more than a short time ahead continues to prove frustratingly hard, and experts are sometimes worse at it than amateurs, because they overemphasize their pet causal theories.
A second mistake is to gather flawed data.
On May 22, the respected medical journals the Lancet and the New England Journal of Medicine published studies based on the medical records of 96,000 patients from 671 hospitals around the world that appeared to disprove the guess that the drug hydroxychloroquine could cure Covid-19. The studies caused the World Health Organization to halt trials of the drug.
It then emerged, however, that the database came from Surgisphere, a small company with little track record, few employees, and no independent scientific board.
When challenged, Surgisphere failed to produce its raw data. The papers were retracted with abject apologies from the journals. Nor has hydroxychloroquine since been proven to work. Uncertainty about it persists.
A third problem is that data can be trustworthy but inadequate. Evidence-based medicine teaches doctors to fully trust only science based on the gold standard of randomized controlled trials.
But there have been no randomized controlled trials on the wearing of masks to prevent the spread of respiratory diseases (though one is now underway in Denmark).
In the West, unlike in Asia, there were months of disagreement this year about the value of masks, culminating in the somewhat desperate argument of mask foes that people might behave too complacently when wearing them.
The scientific consensus is that the evidence is good enough and the inconvenience small enough that we need not wait for absolute certainty before advising people to wear masks.
This is an inverted form of the so-called precautionary principle, which holds that uncertainty about possible hazards is a strong reason to limit or ban new technologies.
But the principle cuts both ways. If a course of action is known to be safe and cheap and might help to prevent or cure diseases—like wearing a face mask or taking vitamin D supplements, in the case of Covid-19—then uncertainty is no excuse for not trying it.
A fourth mistake is to gather data that are compatible with your guess but to ignore data that contest it. This is known as confirmation bias.
You should test the proposition that all swans are white by looking for black ones, not by finding more white ones.
Yet scientists “believe” in their guesses, so they often accumulate evidence compatible with them but discount as aberrations evidence that would falsify them—saying, for example, that black swans in Australia don’t count. […]
As this example illustrates, one of the hardest questions a science commentator faces is when to take a heretic seriously.
It’s tempting for established scientists to use arguments from authority to dismiss reasonable challenges, but not every maverick is a new Galileo.
As the astronomer Carl Sagan once put it, “Too much openness and you accept every notion, idea, and hypothesis—which is tantamount to knowing nothing. Too much skepticism—especially rejection of new ideas before they are adequately tested—and you’re not only unpleasantly grumpy but also closed to the advance of science.”
In other words, as some wit once put it, don’t be so open-minded that your brains fall out.
Peer review is supposed to be the device that guides us away from unreliable heretics. A scientific result is only reliable when reputable scholars have given it their approval.
Dr. Yan’s report has not been peer-reviewed. But in recent years, peer review’s reputation has been tarnished by a series of scandals.
The Surgisphere study was peer-reviewed, as was the study by Dr. Andrew Wakefield, the hero of the anti-vaccine movement, claiming that the MMR vaccine (for measles, mumps, and rubella) caused autism.
Investigations show that peer review is often perfunctory rather than thorough, often exploited by chums to help each other, and frequently used by gatekeepers to exclude and extinguish legitimate minority scientific opinions in a field.
Herbert Ayres, an expert in operations research, summarized the problem well several decades ago: “As a referee of a paper that threatens to disrupt his life, [a professor] is in a conflict-of-interest position, pure and simple. Unless we’re convinced that he, we, and all our friends who referee have integrity in the upper fifth percentile of those who have so far qualified for sainthood, it is beyond naive to believe that censorship does not occur.”
Rosalyn Yalow, a winner of the Nobel Prize in medicine, was fond of displaying the letter she received in 1955 from the Journal of Clinical Investigation noting that the reviewers were “particularly emphatic in rejecting” her paper.
The health of science depends on tolerating, even encouraging, at least some disagreement. In practice, science is prevented from turning into a religion not by asking scientists to challenge their own theories but by getting them to challenge each other, sometimes with gusto.
Where science becomes political, as in climate change and Covid-19, this diversity of opinion is sometimes extinguished in the pursuit of a consensus to present to a politician or a press conference, and to deny the oxygen of publicity to cranks.
This year has driven home as never before the message that there is no such thing as “the science”; there are different scientific views on how to suppress the virus. […]
Read rest at WSJ ($)
There is a very big difference between regular science and junk science.
There is no such thing as “the science” but there are such things as confirmation bias and the statistical errors needed to push that through as “the science”.
Please see:
https://tambonthongchai.com/2018/12/14/climateaction/
https://tambonthongchai.com/2018/08/03/confirmationbias/