It’s always undignified to get hit by ‘friendly fire’. That’s what’s happened to a group of Stanford University statistical experts and their late-2015 peer-reviewed paper “Debunking the climate hiatus” in the prestigious journal Climatic Change.
Two statistician bloggers, Radford Neal and Grant Foster, have torn the paper apart, even though both agree – for other reasons – that the 15-plus-year pause or hiatus in warming is a statistical illusion. So, warmists, it’s no use making ad hominem attacks on these bloggers: they’re on your side.
I am unqualified to comment on the statistical arguments, having barely passed Stats 101 at ANU in 1972, the era of the slide rule. So my point is about prima facie and uncorrected crud making its way into a prestigious peer-reviewed climate journal, which may now have to publish some soul-destroying corrections. And if that essay made it into “the science”, what other junk has also been elevated to scientific holy writ?
Critic Radford Neal is Professor in the Dept. of Statistics and Dept. of Computer Science, University of Toronto. He not only looks to me like a good statistician; his papers have earned 22,600 citations, including 8,600 in the past half-decade. He sets out the status of the “Debunk” authors:
(Bala) Rajaratnam is an Assistant Professor of Statistics and of Environmental Earth System Science. (Joseph) Romano is a Professor of Statistics and of Economics. (Noah) Diffenbaugh is an Associate Professor of Earth System Science. (Michael) Tsiang is a PhD student. Climatic Change appears to be a reputable refereed journal, which is published by Springer, and which is cited in the latest IPCC report. The paper was touted in popular accounts as showing that the whole hiatus thing was mistaken — for instance, by Stanford University itself.
The university had crowed:
A new study reveals that the evidence for a recent pause in the rate of global warming lacks a sound statistical basis. The finding highlights the importance of using appropriate statistical techniques and should improve confidence in climate model projections… The Stanford scientists say their findings should go a long way toward restoring confidence in the basic science and climate computer models that form the foundation for climate change predictions.
Neal, however, throws his bucket of cold water over all concerned:
You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect.
Those familiar with the scientific literature will realize that completely wrong papers are published regularly, even in peer-reviewed journals, and even when (as for this paper) many of the flaws ought to have been obvious to the reviewers. So perhaps there’s nothing too notable about the publication of this paper. On the other hand, one may wonder whether the stringency of the review process was affected by how congenial the paper’s conclusions were to the editor and reviewers. One may also wonder whether a paper reaching the opposite conclusion would have been touted as a great achievement by Stanford University. Certainly this paper should be seen as a reminder that the reverence for “peer-reviewed scientific studies” sometimes seen in popular expositions is unfounded.
The other critical blogger is “Tamino” aka Grant Foster, much reviled by the sceptic crowd. He writes:
There’s yet another paper debunking the so-called “hiatus” in global temperature, making five so far (of which I’m aware), including one of my own. But this one, in my opinion, isn’t helping. In fact I believe it has some very serious problems, some of which make the idea of “hiatus” too easy to reject, while others make it too hard to reject. Although I agree with their overall conclusion — and published that conclusion before they did — I find their evidence completely unconvincing.
So what’s going on?
Just for starters, all four authors and the peer reviewers overlooked that the temperature data series from 1880 to 2013 they were examining was not the one they said it was. Radford Neal:
Rajaratnam, et al. describe this data as ‘the NASA-GISS global mean land-ocean temperature index’, which is a commonly used data set … However, the data plotted above, and which they use, is not actually the GISS land-ocean temperature data set. It is the GISS land-only data set, which is less widely used, since as GISS says, it ‘overestimates trends, since it disregards most of the dampening effects of the oceans’. They appear to have mistakenly downloaded the wrong data set, and not noticed that the vertical scale on their plot doesn’t match plots in other papers showing the GISS land-ocean temperature anomalies. They also apply their methods to various other data sets, claiming similar results, but only results from this data are shown in the paper.
Heavens to Betsy! But as Barack Obama and our Environment Minister Greg Hunt say, we must trust “The Science”.
Radford Neal continues that the authors had also got the bull by the tail in that they were “asking the wrong questions, and trying to answer them using the wrong data.”
The authors summarise their results in this way:
Our rigorous statistical framework yields strong evidence against the presence of a global warming hiatus. Accounting for temporal dependence and selection effects rejects — with overwhelming evidence — the hypothesis that there has been no trend in global surface temperature over the past ≈15 years. This analysis also highlights the potential for improper statistical assumptions to yield improper scientific conclusions. Our statistical framework also clearly rejects the hypothesis that the trend in global surface temperature has been smaller over the recent ≈ 15 year period than over the prior period. Further, our framework also rejects the hypothesis that there has been no change in global mean surface temperature over the recent ≈15 years, and the hypothesis that the distribution of annual changes in global surface temperature has been different in the past ≈15 years than earlier in the record.
Radford Neal, not mincing words, comments:
This is all wrong. There is not ‘overwhelming evidence’ of a positive trend in the last 15 years of the data — they conclude that only because they used a flawed method. They do not actually reject ‘the hypothesis that the trend in global surface temperature has been smaller over the recent ≈ 15 year period than over the prior period’. Rather, after an incorrect choice of start year, they fail to reject the hypothesis that the trend in the recent period has been equal to or greater than the trend in the prior period.
Failure to reject a null hypothesis is not the same as rejecting the alternative hypothesis, as we try to teach students in introductory statistics courses, sometimes unsuccessfully.
Similarly, they do not actually reject ‘the hypothesis that the distribution of annual changes in global surface temperature has been different in the past ≈15 years than earlier in the record’. To anyone who understands the null hypothesis testing framework, it is obvious that one could not possibly reject such a hypothesis using any finite amount of data.
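Neal’s textbook point, that failing to reject “no trend” is not evidence of no trend, is easy to see in a toy simulation. This is my own illustrative sketch with made-up numbers, not anything from the paper: with only 15 annual data points, even a series with a genuine warming trend routinely fails a significance test against zero.

```python
import random
import math

random.seed(1)

def slope_t_stat(y):
    """OLS slope and its t-statistic for y against x = 0..n-1."""
    n = len(y)
    xs = list(range(n))
    xbar = sum(xs) / n
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (yi - ybar) for x, yi in zip(xs, y)) / sxx
    resid = [yi - (ybar + b * (x - xbar)) for x, yi in zip(xs, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return b, b / se

# Invented figures: a true trend of +0.01 deg/yr buried in
# year-to-year noise of 0.1 deg -- plausible interannual scatter.
failures = 0
for _ in range(1000):
    y = [0.01 * t + random.gauss(0, 0.1) for t in range(15)]
    b, t = slope_t_stat(y)
    if abs(t) < 2.16:  # ~5% two-sided critical value, 13 d.o.f.
        failures += 1

print(f"{failures} of 1000 genuinely warming series fail to reject 'no trend'")
```

In most runs the majority of these genuinely warming series fail the test, which is exactly why “failed to reject no trend” cannot be read as “showed there was no trend”.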
Grant “Tamino” Foster, to his credit, independently spotted that the authors had used dud data and wrong start dates:
We’ll start with the confusion about what data they’re using. They focus on global temperature data from NASA GISS, and repeatedly refer to it as ‘land-ocean temperature index’ (LOTI)…They repeatedly say they’re using LOTI — but they’re not.
Foster also agrees with Radford Neal that the authors seriously erred in using 1950 as a start date for their 21st Century “pause” analysis, instead of 1970. Foster: “The choice to start at 1950 also makes their estimate of the ‘trend leading up to 1998’ wrong.” (There was already a “pause” in temperatures from 1950 to 1970, and that earlier pause muddied the waters.)
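The start-year objection is easy to demonstrate with invented numbers (a sketch of my own, not the paper’s data): a series that is flat from 1950 to 1970 and then warms steadily yields a much lower “trend leading up to 1998” when the fit starts in 1950, because the flat early stretch drags the slope down.

```python
def ols_slope(years, temps):
    """Ordinary least-squares slope of temps against years."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(temps) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    return sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx

years = list(range(1950, 1999))
# Made-up values: flat to 1970, then +0.17 deg/decade thereafter.
temps = [0.0 if y <= 1970 else 0.017 * (y - 1970) for y in years]

print(f"trend from 1950: {ols_slope(years, temps) * 10:.3f} deg/decade")
print(f"trend from 1970: {ols_slope(years[20:], temps[20:]) * 10:.3f} deg/decade")
```

Same data, same method; only the start year differs, and the pre-1998 trend estimate changes substantially.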
Foster – an ardent warmist, remember – rebukes the authors for comparing pre-1998 and post-1998 trends:
Don’t pick a starting time because of the result it gives if you want to claim to be the utmost in statistical rigor. It’s the essence of cherry-picking.
He goes on:
There are other technical problems, which I won’t go into. Suffice it to say that this paper doesn’t impress me, and although I agree that its conclusion is correct, I don’t agree that their analysis is correct.
And he concludes, just to make his warmist credentials very clear,
It’s now dawning on the scientific community in general: all that ‘hiatus’ talk from global warming deniers was bloviating by blowhards.
I’ve often read that in terms of statistical skill, orthodox climate scientists are not the sharpest knives in the drawer. Phil Jones, of the University of East Anglia’s Climatic Research Unit and Climategate fame[i], confessed that he has a lot of trouble (as I do) doing trends on Excel spreadsheets: “I’m not adept enough (totally inept) with Excel to do this now as no-one who knows how to is here.”
In 2010, as Steve McIntyre quipped at Climate Audit, Phil was ranked one of England’s top 100 scientists. “Just imagine the ranking that he could have achieved if he knew how to calculate a trend by himself.”
Perhaps it’s best to avert our eyes from Stanford University’s statistical wizards and their PR flaks in action. It’s like gawking at a traffic accident.