How accurate were James Hansen’s 1988 testimony and subsequent JGR article forecasts of global warming?
According to a laudatory article by AP’s Seth Borenstein, they “pretty much” came true, with other scientists claiming their accuracy was “astounding” and “incredible.”
Pat Michaels and Ryan Maue in the Wall Street Journal, and Calvin Beisner in the Daily Caller disputed this.
The whole debate has focused on comparisons of the 1988 and 2017 endpoints. Skeptical Science waved away the differences by arguing that if one adjusts for an overestimate of the rise in greenhouse gas (GHG) forcing, Hansen’s 2017 Scenario B prediction was not far off reality.
There are two problems with the debate as it has played out.
First, using 2017 as the comparison date is misleading because of mismatches between observed and assumed El Nino and volcanic events, which artificially pinched the observations and scenarios together at the end of the sample.
What really matters is the trend over the forecast interval, and this is where the problems become visible.
Second, applying a post hoc bias correction to the forcing ignores the fact that converting GHG increases into forcing is an essential part of the modeling.
If a correction were needed for the CO2 concentration forecast that would be fair, but this aspect of the forecast turned out to be quite close to observations.
Let’s go through it all carefully, beginning with the CO2 forecasts.
Hansen didn’t graph his CO2 concentration projections, but he described the algorithm behind them in his Appendix B.
He followed observed CO2 levels from 1958 to 1981 and extrapolated from there. That means his forecast interval begins in 1982, not 1988, although he included observed stratospheric aerosols up to 1985.
From his extrapolation formulas, we can compute that his projected 2017 CO2 concentrations were: Scenario A 410 ppm; Scenario B 403 ppm; and Scenario C 368 ppm. (The latter value is confirmed in the text of Appendix B).
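Hansen’s exact Appendix B formulas are not reproduced here, but the flavor of a scenario-style extrapolation can be sketched as follows. The base level, annual increments, and growth rates below are illustrative values chosen so the 2017 results land near the figures quoted above; they are not Hansen’s actual parameters.

```python
def project_co2(base_ppm, start_year, end_year, increment, growth=0.0, freeze_after=None):
    """Extrapolate a CO2 level by compounding an annual increment.

    Illustrative parameters only -- not Hansen's Appendix B formulas.
    """
    level, inc = base_ppm, increment
    for year in range(start_year, end_year + 1):
        if freeze_after is None or year <= freeze_after:
            level += inc
            inc *= 1.0 + growth
    return level

# The 1981 Mauna Loa level was roughly 340 ppm.
a = project_co2(340.0, 1982, 2017, 1.48, growth=0.015)       # increment grows 1.5%/yr
b = project_co2(340.0, 1982, 2017, 1.75)                     # constant increment
c = project_co2(340.0, 1982, 2017, 1.47, freeze_after=2000)  # no growth after 2000
print(round(a), round(b), round(c))  # → 410 403 368
```

The structural point survives the made-up numbers: Scenario A compounds the increment, B holds it fixed, and C freezes the concentration after 2000.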
The Mauna Loa record for 2017 was 407 ppm, halfway between Scenarios A and B. Note that Scenarios A and B also represent upper and lower bounds for non-CO2 forcing, since Scenario A contains all trace gas effects and Scenario B contains none.
So, we can treat these two scenarios as representing upper and lower bounds on a warming forecast range that contains within it the observed post-1980 increases in greenhouse gases.
Consequently, there is no justification for a post hoc dialing down of the greenhouse gas levels; nor should we dial down the associated forcing, since that is part of the model computation.
Now note that Hansen did not include any effects due to El Nino events. In 2015 and 2016 there was a very strong El Nino that pushed global average temperatures up by about half a degree C, a change that is now receding as the oceans cool.
Had Hansen included this El Nino spike in his scenarios, he would have overestimated 2017 temperatures by a wide margin in Scenarios A and B.
Hansen added in an Agung-strength volcanic event in Scenarios B and C in 2015, which caused the temperatures to drop well below trend, with the effect persisting into 2017.
This was not a forecast, it was just an arbitrary guess, and no such volcano occurred.
Thus, to make an apples-to-apples comparison, we should remove the 2015 volcanic cooling from Scenarios B and C, and add the 2015/16 El Nino warming to all three Scenarios.
If we do that, there would be a large mismatch as of 2017 in both A and B.
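The adjustment logic can be sketched in a few lines. The half-degree El Nino figure comes from the text; the volcanic cooling magnitude is purely illustrative, since the article does not quote one.

```python
EL_NINO_BOOST = 0.5    # approximate 2015/16 El Nino warming, from the text
VOLCANO_COOLING = 0.3  # illustrative Agung-scale cooling; assumed, not from the article

def comparable_2017_level(scenario_temp, scenario_had_volcano):
    """Put a scenario's 2017 temperature on the same footing as observations:
    add the El Nino warming Hansen excluded, and undo the cooling from the
    eruption he assumed (Scenarios B and C) but which never occurred."""
    adjusted = scenario_temp + EL_NINO_BOOST
    if scenario_had_volcano:
        adjusted += VOLCANO_COOLING
    return adjusted
```

Both corrections push the scenario levels upward, which is why an apples-to-apples 2017 comparison widens, rather than closes, the gap for Scenarios A and B.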
The main forecast in Hansen’s paper was a trend, not a particular temperature level. To assess his forecasts properly we need to compare his predicted trends against subsequent observations.
To do this we digitized the annual data from his Figure 3. We focus on the period from 1982 to 2017 which covers the entire CO2 forecast interval.
The 1982 to 2017 warming trends in Hansen’s forecasts, in degrees C per decade, were:
- Scenario A: 0.34 +/- 0.08,
- Scenario B: 0.29 +/- 0.06, and
- Scenario C: 0.18 +/- 0.11.
Compare these trends against NASA’s GISTEMP series (produced by the Goddard Institute for Space Studies, or GISS), and the UAH/RSS mean MSU series from weather satellites for the lower troposphere.
- GISTEMP: 0.19 +/- 0.04 C/decade
- MSU: 0.17 +/- 0.05 C/decade.
(The confidence intervals are autocorrelation-robust using the Vogelsang-Franses method.)
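The Vogelsang-Franses method is a specialized adjustment to the confidence interval; the trend point estimate itself is an ordinary least-squares slope rescaled to degrees per decade, which can be sketched as:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Least-squares warming trend in degrees C per decade.
    (The VF method in the text makes the confidence interval robust to
    autocorrelation; the slope itself is the same OLS estimate.)"""
    slope_per_year = np.polyfit(years, anomalies, deg=1)[0]
    return 10.0 * slope_per_year

# Synthetic check: a series warming at exactly 0.19 C/decade
years = np.arange(1982, 2018)
anoms = 0.019 * (years - 1982)
print(round(trend_per_decade(years, anoms), 2))  # → 0.19
```

Run on the digitized scenario and observational series, this is the calculation behind the per-decade figures listed above.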
So, the scenario that matches the observations most closely over the post-1980 interval is C. Hypothesis testing (using the VF method) shows that Scenarios A and B significantly overpredict the warming trend (even ignoring the El Nino and volcano effects).
To emphasise the point: Scenario A overstates CO2 and other greenhouse gas growth and rejects against the observations, but Scenario B understates CO2 growth and zeroes out non-CO2 greenhouse gas growth, yet it too significantly overstates the warming.
The trend in Scenario C does not reject against the observed data; in fact, the two are about equal. But this is the scenario that left out the rise of greenhouse gases after 2000.
The observed CO2 level reached 368 ppm in 1999 and continued going up thereafter to 407 ppm in 2017. The Scenario C CO2 level reached 368 ppm in 2000 but remained fixed thereafter. Yet this scenario ended up with a warming trend most like the real world.
How can this be? Here is one possibility. Suppose Hansen had offered a Scenario D, in which greenhouse gases continue to rise but after the 1990s have very little effect on the climate. That would play out in his model much like Scenario C, and it would match the data.
Climate modelers will object that this explanation doesn’t fit the theories about climate change. But those were the theories Hansen used, and they don’t fit the data. The bottom line is, climate science as encoded in the models is far from settled.
Ross McKitrick is a Professor of Economics at the University of Guelph.
John Christy is a Professor of Atmospheric Science at the University of Alabama in Huntsville.
Read more at Climate Etc.
It is common for researchers to declare that their failures were a success. I once saw a TV special on a new treatment for alcoholism. The technique used negative feedback.
Seven years after the treatment, the researchers wrote a paper declaring their program a big success. The investigators who made the TV special then contacted everyone who had been in the study and found that all but one of them still had a serious problem with alcoholism. The one exception was dead, killed by the effects of too much alcohol.
Now consider climate change. Not only is there the normal desire of researchers to declare a success, but many hidden agendas depend on the climate change movement. That means they have to declare the predictions a success.
You will notice that predictions such as an ice-free Arctic, low-lying land under water, and children not knowing what snow is were not mentioned in their success story.
Hansen’s models were quaint and grossly inaccurate. Models consistently proven to overstate warming were nevertheless effective in launching one of the biggest frauds in history, if not the biggest: trillions of dollars taken from taxpayers, and tens of thousands of premature deaths resulting from fuel poverty caused by foolish government policies. Climate conmen fattened their wallets while pretending humans were going to shape the earth’s temperature through a trace gas.
Unbelievable.