Climate models in distress: Problems with forecast performance give cause to worry
By Dr. Sebastian Lüning and Prof. Fritz Vahrenholt
(German text translated/edited by P Gosselin)
The 2015/16 El Niño is over, and so are the celebrations by the climate alarmists. It’s becoming increasingly clear that the projections made by the climate models were wildly exaggerated.
As early as April 2015, a Duke University press release stated that the worst-case IPCC temperature projections need to be discarded immediately:
Global Warming More Moderate Than Worst-Case Models
A new study based on 1,000 years of temperature records suggests global warming is not progressing as fast as it would under the most severe emissions scenarios outlined by the Intergovernmental Panel on Climate Change (IPCC).
“Based on our analysis, a middle-of-the-road warming scenario is more likely, at least for now,” said Patrick T. Brown, a doctoral student in climatology at Duke University’s Nicholas School of the Environment. “But this could change.” The Duke-led study shows that natural variability in surface temperatures — caused by interactions between the ocean and atmosphere, and other natural factors — can account for observed changes in the recent rates of warming from decade to decade. The researchers say these “climate wiggles” can slow or speed the rate of warming from decade to decade, and accentuate or offset the effects of increases in greenhouse gas concentrations. If not properly explained and accounted for, they may skew the reliability of climate models and lead to over-interpretation of short-term temperature trends.
The research, published today in the peer-reviewed journal Scientific Reports, uses empirical data, rather than the more commonly used climate models, to estimate decade-to-decade variability. “At any given time, we could start warming at a faster rate if greenhouse gas concentrations in the atmosphere increase without any offsetting changes in aerosol concentrations or natural variability,” said Wenhong Li, assistant professor of climate at Duke, who conducted the study with Brown. The team examined whether climate models, such as those used by the IPCC, accurately account for natural chaotic variability that can occur in the rate of global warming as a result of interactions between the ocean and atmosphere, and other natural factors.
To test how accurate climate models are at accounting for variations in the rate of warming, Brown and Li, along with colleagues from San Jose State University and the USDA, created a new statistical model based on reconstructed empirical records of surface temperatures over the last 1,000 years. “By comparing our model against theirs, we found that climate models largely get the ‘big picture’ right but seem to underestimate the magnitude of natural decade-to-decade climate wiggles,” Brown said. “Our model shows these wiggles can be big enough that they could have accounted for a reasonable portion of the accelerated warming we experienced from 1975 to 2000, as well as the reduced rate of warming that occurred from 2002 to 2013.”
Further comparative analysis of the models revealed another intriguing insight. “Statistically, it’s pretty unlikely that an 11-year hiatus in warming, like the one we saw at the start of this century, would occur if the underlying human-caused warming was progressing at a rate as fast as the most severe IPCC projections,” Brown said. “Hiatus periods of 11 years or longer are more likely to occur under a middle-of-the-road scenario.” Under the IPCC’s middle-of-the-road scenario, there was a 70 percent likelihood that at least one hiatus lasting 11 years or longer would occur between 1993 and 2050, Brown said. “That matches up well with what we’re seeing.” There’s no guarantee, however, that this rate of warming will remain steady in coming years, Li stressed. “Our analysis clearly shows that we shouldn’t expect the observed rates of warming to be constant. They can and do change.”
Paper: Patrick T. Brown, Wenhong Li, Eugene C. Cordero and Steven A. Mauget: Comparing the Model-Simulated Global Warming Signal to Observations Using Empirical Estimates of Unforced Noise. Scientific Reports, April 21, 2015. DOI: 10.1038/srep09957
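To make the press release’s argument concrete, here is a minimal Monte Carlo sketch of the idea, not the Brown & Li statistical model itself: a steady forced warming trend plus AR(1) “climate wiggles”, with all numbers purely illustrative assumptions. It simply counts how often a simulated 1993–2050 period contains at least one 11-year window with a flat or negative trend.

```python
# Minimal Monte Carlo sketch (not the Brown & Li model): how often does an
# 11-year "hiatus" (flat or negative trend) appear when natural decade-to-decade
# "wiggles" ride on top of a steady forced warming trend?
# All numbers below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

YEARS = np.arange(1993, 2051)          # window mentioned in the press release
FORCED_TREND = 0.02                    # assumed forced warming, degC per year
NOISE_SD = 0.11                        # assumed interannual noise, degC
AR1 = 0.6                              # assumed year-to-year persistence
N_SIMS = 2_000
HIATUS_LEN = 11                        # years

def simulate_series():
    """Forced linear trend plus AR(1) 'climate wiggle' noise."""
    noise = np.zeros(YEARS.size)
    for t in range(1, YEARS.size):
        noise[t] = AR1 * noise[t - 1] + rng.normal(0.0, NOISE_SD)
    return FORCED_TREND * (YEARS - YEARS[0]) + noise

def has_hiatus(series):
    """True if any 11-year window has a non-positive least-squares trend."""
    for start in range(YEARS.size - HIATUS_LEN + 1):
        window = series[start:start + HIATUS_LEN]
        slope = np.polyfit(np.arange(HIATUS_LEN), window, 1)[0]
        if slope <= 0.0:
            return True
    return False

count = sum(has_hiatus(simulate_series()) for _ in range(N_SIMS))
print(f"Fraction of runs with at least one {HIATUS_LEN}-year hiatus: {count / N_SIMS:.2f}")
```

Depending on the assumed trend and noise, such a toy experiment can easily produce hiatus frequencies of the order quoted in the press release; the point is only to show what kind of calculation underlies the “70 percent likelihood” statement.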
Modelers like to pat each other on the back: well modelled, dear colleague! Calibration tests against the past are, of course, part of checking any model. In many cases these start with the Little Ice Age, which was the coldest phase of the past 10,000 years. When the models then appear to reproduce the warming since that time, spirits run high: look here, everything fits.
The main driver of the warming, however, remains unclear. Isn’t it logical that a rewarming follows a natural cooling? Is it a coincidence that CO2 rose during the same phase?
It would be more honest to run calibration tests reaching back to the Medieval Warm Period. Only when the pre-industrial warm phases are successfully reproduced can the models be considered confirmed.
In 2015, Gómez-Navarro et al. used the Little Ice Age trick: they began their test in 1500 AD, i.e. during the aforementioned cold phase. The result is no surprise: the general trend is “confirmed”, but in detail it doesn’t work. Here’s the abstract from Climate of the Past:
A regional climate paleosimulation for Europe in the period 1500–1990 – Part 2: Shortcomings and strengths of models and reconstructions
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation.
Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show a stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyze the consistency and limitations of gridded reconstructions of different variables.
A comparison of the leading modes of SAT and PRE variability indicates that reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate the reconstructions. The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.”
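The “leading modes” comparison mentioned at the end of the abstract is typically done with empirical orthogonal functions (EOFs, i.e. principal components). As a rough illustration of what such a check involves, here is a hedged sketch using synthetic stand-in fields rather than the actual reconstruction and simulation data:

```python
# Hedged sketch of the kind of "leading mode" comparison the abstract refers to:
# compute the first EOF (principal spatial pattern) of two gridded time series
# and measure how well the patterns agree. The data here are synthetic
# placeholders, not the actual reconstruction or simulation.
import numpy as np

rng = np.random.default_rng(1)

n_years, n_grid = 490, 200             # e.g. 1500-1990, 200 grid cells

def leading_eof(field):
    """First EOF of a (time x space) anomaly field via SVD."""
    anomalies = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[0]                        # leading spatial pattern

# Synthetic stand-ins: a shared large-scale pattern plus independent noise,
# mimicking "good agreement in spatial structure, differences in detail".
shared_pattern = np.sin(np.linspace(0, np.pi, n_grid))
pc = rng.normal(size=(n_years, 1))
reconstruction = pc * shared_pattern + 0.5 * rng.normal(size=(n_years, n_grid))
simulation = pc * shared_pattern + 0.5 * rng.normal(size=(n_years, n_grid))

eof_rec, eof_sim = leading_eof(reconstruction), leading_eof(simulation)
# EOF sign is arbitrary, so compare absolute pattern correlation.
pattern_corr = abs(np.corrcoef(eof_rec, eof_sim)[0, 1])
print(f"Pattern correlation of leading EOFs: {pattern_corr:.2f}")
```

A high pattern correlation here corresponds to the “overall good agreement in spatial structure” the authors describe; the temporal disagreements they report are a separate issue that this kind of spatial check does not capture.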
In January 2017, Benjamin Santer et al. attempted to justify the validity of the models. In the Journal of Climate, they compared satellite data with simulated temperatures over the last 18 years. The result: the models calculated a warming roughly one and a half times greater than what was actually measured. Abstract:
Comparing Tropospheric Warming in Climate Models and Satellite Data
Updated and improved satellite retrievals of the temperature of the mid-to-upper troposphere (TMT) are used to address key questions about the size and significance of TMT trends, agreement with model-derived TMT values, and whether models and satellite data show similar vertical profiles of warming. A recent study claimed that TMT trends over 1979–2015 are 3 times larger in climate models than in satellite data but did not correct for the contribution TMT trends receive from stratospheric cooling. Here, it is shown that the average ratio of modeled and observed TMT trends is sensitive to both satellite data uncertainties and model–data differences in stratospheric cooling. When the impact of lower-stratospheric cooling on TMT is accounted for, and when the most recent versions of satellite datasets are used, the previously claimed ratio of three between simulated and observed near-global TMT trends is reduced to approximately 1.7.
Next, the validity of the statement that satellite data show no significant tropospheric warming over the last 18 years is assessed. This claim is not supported by the current analysis: in five out of six corrected satellite TMT records, significant global-scale tropospheric warming has occurred within the last 18 years. Finally, long-standing concerns are examined regarding discrepancies in modeled and observed vertical profiles of warming in the tropical atmosphere. It is shown that amplification of tropical warming between the lower and mid-to-upper troposphere is now in close agreement in the average of 37 climate models and in one updated satellite record.”
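For readers wondering what the “ratio of approximately 1.7” refers to: it is the ratio of least-squares temperature trends, model versus observation, over the period in question. Below is a minimal sketch of that calculation, using synthetic monthly anomaly series as placeholders for the model ensemble mean and a satellite TMT record; the trend values are illustrative assumptions, not Santer et al.’s numbers.

```python
# Minimal sketch of the quantity at issue in the Santer et al. abstract:
# the ratio of the model-simulated TMT trend to the satellite-observed TMT
# trend over an ~18-year window. The two series below are synthetic
# placeholders; a real comparison would use the CMIP model ensemble and the
# satellite TMT records.
import numpy as np

rng = np.random.default_rng(2)

months = np.arange(18 * 12)                      # 18 years of monthly anomalies
time_yr = months / 12.0

# Illustrative assumptions: model warms ~0.25 degC/decade, observations ~0.15.
model_tmt = 0.025 * time_yr + rng.normal(0.0, 0.08, months.size)
obs_tmt = 0.015 * time_yr + rng.normal(0.0, 0.08, months.size)

def decadal_trend(series):
    """Ordinary least-squares trend in degC per decade."""
    slope = np.polyfit(time_yr, series, 1)[0]    # degC per year
    return slope * 10.0

model_trend = decadal_trend(model_tmt)
obs_trend = decadal_trend(obs_tmt)
print(f"Model trend: {model_trend:+.3f} degC/decade")
print(f"Observed trend: {obs_trend:+.3f} degC/decade")
print(f"Model/observation trend ratio: {model_trend / obs_trend:.2f}")
```

Note that the published comparison also corrects TMT for lower-stratospheric cooling and uses several satellite datasets; this sketch only shows how a trend ratio of that kind is formed.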
See comments on this at WUWT.
Read more at No Tricks Zone
If our world is warming naturally, would that be bad? If so, how much should we spend in vain trying to fight it? Adapting would be the sane answer.
Note that when the author claims the satellite data support warming over the last 18 years, he is talking about CORRECTED satellite data. Just as they “correct” historical data to support their cause, so too with the satellite data. The only valid indicator is what the raw satellite data actually show.
He claims that the moderate climate models are close to being right. The graphs I have seen comparing models to satellite and high-altitude balloon data showed that even these models projected too much warming. Perhaps the graphs I saw didn’t use “corrected” data.
Of course he saw no need to mention the fudge factors in the models.
I predict that there will be more climate projections. Where’s my money? At least I’m right.