“The climate is changing, and we are the cause.” That statement is so oft-repeated and affirmed that it goes way beyond mere conventional wisdom. You probably encounter some version of it multiple times per week; maybe dozens of times.
Everybody knows that it is true! And to express disagreement with that statement, probably more so than with any other element of current progressive orthodoxy, is a sure way to get yourself labeled a “science denier,” fired from an academic job, or even banished from the internet.
The UN IPCC’s recent Sixth Assessment Report on the climate is chock full of one version after another of the iconic statement, in each instance of course emphasizing that the human-caused climate changes are deleterious and even catastrophic. Examples:
- Human influence has likely increased the chance of compound extreme events since the 1950s. This includes increases in the frequency of concurrent heatwaves and droughts on the global scale (high confidence); fire weather in some regions of all inhabited continents (medium confidence); and compound flooding in some locations (medium confidence). (Page A.3.5)
- Event attribution studies and physical understanding indicate that human-induced climate change increases heavy precipitation associated with tropical cyclones (high confidence) but data limitations inhibit clear detection of past trends on the global scale. (Page A.3.4, Box TS.10)
- Some recent hot extreme events would have been extremely unlikely to occur without human influence on the climate system. (Page A.3.4, Box TS.10)
So, over and over, it’s that we have “high confidence” that human influence is the cause, or that events would have been “extremely unlikely” without human influence. But how, really, do we know that? What is the proof?
This seems to me to be rather an important question. After all, various world leaders are proposing to spend some tens or hundreds of trillions of dollars to undo what is viewed as the most important human influence on the climate (use of fossil fuels).
Billions of people are to be kept in, or cast into, energy poverty to appease the climate change gods. Political leaders from every country in the world are about to convene in Scotland to agree to a set of mandates that will transform almost everyone’s life.
You would think that nobody would even start down this road without definitive proof that we know the cause of the problem and that the proposed solutions are sure to work.
If you address my question — what is the proof? — to the UN, they seem at first glance to have an answer. Their answer is “detection and attribution studies.”
These are “scientific” papers that purport to examine the evidence and conclude that the events under examination, whether temperature rise, hurricanes, tornadoes, heatwaves, or whatever, can be “attributed” to human influences.
But the reason I put the word “scientific” in quotes is that just because a particular paper appears in a “scientific” journal does not mean that it has followed the scientific method.
The UN IPCC’s latest report, known as “Assessment Report 6” or “AR6,” came out in early August loaded up, as already noted, with one statement after another about “high confidence” in the attribution of climate changes and disasters to human influences.
In the couple of months since, a few statisticians who actually know what they are doing have responded. On August 10, right on the heels of the IPCC, Ross McKitrick — an economist and statistician at the University of Guelph in Canada — came out with a paper in Climate Dynamics titled “Checking for model consistency in optimal fingerprinting: a comment.”
On October 22, the Global Warming Policy Foundation then published two reports on the same topic, the first by McKitrick titled “Suboptimal Fingerprinting?”, and the second by statistician William Briggs titled “How the IPCC Sees What Isn’t There.” (Full disclosure: I am on the Board of the American affiliate of the GWPF.)
The three cited papers are of varying degrees of technical difficulty, with McKitrick’s August paper in Climate Dynamics being highly technical and not for the faint of heart. (Although I studied this stuff myself in college, that was 50 years ago, and I can’t claim to follow all of the detail today.)
But both McKitrick’s and Briggs’s October papers are accessible to the layman. And in any event, the fundamental flaw of all of the IPCC’s efforts at claimed “attribution” is not difficult to understand.
In simple terms, the [IPCC] has assumed the conclusion, and then attempted to bury that fact in a blizzard of highly technical statistical mumbo jumbo.
First, let me express the flaw in my own language; and then I’ll discuss the approaches of the other two authors. Here’s the way I would put it: in real science, causation is established by disproof of a null hypothesis.
It follows that the extent to which you have proved some level of causation depends entirely on how meaningful the particular null hypothesis you have disproved is, and on how definitive your disproof is. It further follows that no proof of causation is ever completely final: your claim of causation could require modification at any time if another null hypothesis emerges that cannot be excluded.
The UN’s “attribution” studies universally rest on null hypotheses that are contrived and meaningless, and whose disproof (even if validly demonstrated) therefore establishes nothing.
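To make that abstract point concrete, here is a toy numerical sketch; it is my own illustration with made-up numbers, not anything drawn from the cited papers. The same hypothetical observed warming trend decisively “disproves” one model of natural variability while remaining entirely consistent with another, which is to say the claimed proof is only as good as the null hypothesis the analyst happened to choose.

```python
# Toy illustration (mine, with made-up numbers): the same observed trend can
# decisively reject one null hypothesis about natural variability while being
# perfectly consistent with another.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)
observed_trend = 0.012  # hypothetical observed warming, deg C per year

def simulated_trends(generate_noise, n_sims=5000):
    """Distribution of century-long trends produced by a noise model alone."""
    trends = []
    for _ in range(n_sims):
        series = generate_noise()
        slope = np.polyfit(years, series, 1)[0]  # least-squares trend of pure noise
        trends.append(slope)
    return np.array(trends)

# Null A: natural variability is white noise (no year-to-year persistence).
def white_noise():
    return rng.normal(0.0, 0.15, size=years.size)

# Null B: natural variability is strongly persistent "red" noise (AR(1), phi = 0.95).
def red_noise(phi=0.95, sigma=0.15):
    x = np.zeros(years.size)
    for t in range(1, years.size):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

for name, gen in [("white-noise null", white_noise), ("red-noise null", red_noise)]:
    trends = simulated_trends(gen)
    p_value = np.mean(np.abs(trends) >= observed_trend)  # chance of so large a trend by luck
    print(f"{name}: p-value ~ {p_value:.3f}")

# In runs like this the white-noise null is rejected overwhelmingly while the
# red-noise null is not -- same data, opposite verdicts, because the "proof"
# hinges entirely on which null hypothesis was chosen.
```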
Of the three linked papers, Briggs’s is the easiest for a layman to understand, and if you are going to read one of the three, it is the one I would recommend. Here is how Briggs expresses the same concept I have just described:
All attribution studies work around the same basic theme. . . . A model of the climate as it does not exist, but which is claimed to represent what the climate would look like had mankind not ‘interfered’ with it, is run many times. The outputs from these runs are examined for some ‘bad’ or ‘extreme’ event, such as higher temperatures or increased numbers of hurricanes making landfall, or rainfall exceeding some amount. The frequency with which these bad events occur in the model is noted. Next, a model of the climate as it is said to now exist is run many times. This model represents global warming. The frequencies from the same bad events in the model are again noted. The frequencies between the models are then compared. If the model of the current climate has a greater frequency of the bad event than the imaginary (called ‘counterfactual’) climate, the event is said to be caused by global warming, in whole or in part.
In other words, the “attribution” study consists of invalidating a null hypothesis that is itself a counterfactual model with no demonstrated connection to the real world as it would have existed in the absence of human influences.
The people who create these counterfactual models can of course build into them any characteristics they want in order that the result of their study will come out to be an “attribution” of the real-world data to human influences.
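Here is a bare-bones sketch of what that recipe looks like in practice. It is my own mock-up, not code from any actual attribution study: the two “climates” are just random-number generators with settings I picked, which is exactly the point, because the headline attribution number is whatever the chosen counterfactual makes it.

```python
# Mock-up of the attribution recipe described above. Both "models" are invented
# here: their settings (the 'loc' values) are the analyst's choice.
import numpy as np

rng = np.random.default_rng(1)
N_RUNS = 10_000
THRESHOLD = 3.0  # the "bad event": e.g., landfalling hurricanes, rainfall over some amount

# Step 1: the "counterfactual" climate -- the world as it supposedly would be without humans.
counterfactual = rng.normal(loc=2.0, scale=0.5, size=N_RUNS)

# Step 2: the "factual" climate -- the world as it is said to exist now.
factual = rng.normal(loc=2.4, scale=0.5, size=N_RUNS)

# Step 3: compare how often the bad event occurs in each set of runs.
p0 = np.mean(counterfactual >= THRESHOLD)  # frequency in the imaginary world
p1 = np.mean(factual >= THRESHOLD)         # frequency in the modeled present

probability_ratio = p1 / p0
fraction_attributable = 1.0 - p0 / p1      # often reported as the "fraction of attributable risk"

print(f"event frequency without humans (per the model): {p0:.3f}")
print(f"event frequency with humans (per the model):    {p1:.3f}")
print(f"probability ratio: {probability_ratio:.1f}")
print(f"claimed fraction attributable to human influence: {fraction_attributable:.0%}")

# The entire result is driven by the two 'loc' settings chosen above -- that is,
# by whatever was built into the counterfactual, exactly as described in the text.
```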
Why anyone would give any credence to any of this is beyond me.
By the way, there are hundreds upon hundreds of these “attribution” studies, all following the same useless formula. Could it really be that the hundreds of “scientists” who produce these things are unaware of the fundamental logical flaw, or simply cannot perceive it?
Ross McKitrick’s August 10 paper is, as noted, highly technical. If you are unfamiliar with the jargon and notation of econometric studies, it may make no sense to you at all.
But his October paper for the GWPF puts the main points in terms much more accessible to the layman. I would summarize the main points as follows.
The first is that the methodology of these many, many “attribution” studies always goes back to a seminal 1999 paper by Allen and Tett, referred to as AT99.
The second is that the AT99 methodology is valid in a particular study only if it can be demonstrated that a series of conditions required by something known as the Gauss-Markov Theorem have been fulfilled.
And the third is that the fulfillment of the conditions of the Gauss-Markov Theorem cannot be demonstrated in any of the climate “attribution” studies. Indeed, the climate “attribution” studies make no attempt to identify or deal with this problem. Thus, they are all meaningless.
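For readers who want a feel for why those conditions matter, here is a schematic sketch of my own. It is a generic regression illustration, not a reproduction of AT99 or of McKitrick’s specific argument: it simply shows what happens when a fingerprinting-style regression is run as though the required error assumptions held when in fact they do not. The reported “confidence” becomes fiction.

```python
# Schematic illustration (mine): regress "observations" on a fingerprint pattern
# while assuming independent errors, when the true errors are strongly persistent.
# The nominal 95% confidence intervals then miss the truth far more than 5% of the time.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 1.0, n)   # hypothetical fingerprint (e.g., a modeled response pattern)
true_beta = 1.0                # true scaling factor on the fingerprint

def ar1_noise(phi=0.8, sigma=0.3):
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0.0, sigma)
    return e

n_sims = 2000
misses = 0
for _ in range(n_sims):
    y = true_beta * x + ar1_noise()               # persistent (autocorrelated) errors
    X = np.column_stack([np.ones(n), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - 2)
    var_slope = sigma2 * np.linalg.inv(X.T @ X)[1, 1]  # OLS formula: valid only for iid errors
    if abs(beta_hat[1] - true_beta) > 1.96 * np.sqrt(var_slope):
        misses += 1

print(f"nominal 95% intervals missed the true scaling factor in "
      f"{misses / n_sims:.0%} of simulations (should be about 5%)")

# When the error assumptions fail, the stated uncertainty is far too small --
# which is why the unverified Gauss-Markov-type conditions are not a technicality.
```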
The final step of the methodology of AT99 that supposedly supports “attribution” is something called the “Residual Consistency Test,” or “RCT.” From McKitrick’s August paper:
AT99 provided no formal null hypothesis of the RCT nor did they prove its asymptotic distribution, making non-rejection against χ² critical values uninformative for the purpose of model specification testing.
I think that McKitrick is making there basically the same point about meaningless, straw-man null hypotheses that I am making here; but then I can’t claim to fully comprehend all the jargon.
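For what it is worth, here is a minimal mock-up of what a residual-consistency-style check amounts to. It is my own construction, not AT99’s actual test, but it shows why “not rejecting” such a test proves so little.

```python
# Mock-up (mine, not AT99's code): compare a supposedly standardized residual
# sum of squares to a chi-squared critical value and declare the model
# "consistent" if the statistic is not too large.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k = 50                                   # residual degrees of freedom (illustrative)
critical = stats.chi2.ppf(0.95, df=k)    # 95% chi-squared critical value

# Residuals from a deliberately misspecified "model": they are biased, but their
# overall scale keeps the statistic comfortably below the critical value.
resid = rng.normal(loc=0.1, scale=0.85, size=k)
statistic = np.sum(resid ** 2)

print(f"statistic = {statistic:.1f}, critical value = {critical:.1f}")
if statistic <= critical:
    print("not rejected -> model declared 'consistent'")
else:
    print("rejected -> model flagged as inconsistent")

# Non-rejection tells us only that this one statistic was not too large; it does
# not establish that the model is correctly specified -- the gap McKitrick points to.
```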
Anyway, when you read, for example, that scientists have demonstrated that the severity of the past year’s hurricane season is due to human greenhouse gas emissions, you may find that you are asking yourself, how could they possibly know that?
After all, there is no way they could possibly know how many and how severe the hurricanes would have been absent the GHG emissions.
Well, now you know how it is done: they just make up the counterfactual world in order to create a straw man null hypothesis that will get the result they want from the AT99 “attribution” methodology.
Hundreds upon hundreds of climate “scientists” follow this methodology with blinders on, and somehow no one ever notices that the whole exercise is meaningless, even as it provides the entire basis for a socialist takeover of the world economy.
Read more at Manhattan Contrarian
The attribution studies contain a big red flag. They are using simulations rather than real-world data to determine what the world would be like without mankind’s emissions. When real-world data doesn’t give the answers they want, it is very common for the climate change movement to substitute simulations. For ocean acidification, they picked 1988, the year the oceans were furthest from being acidic, as their baseline. Real-world data before that date did not support the acidification theory, so they substituted a simulation. The world is warming, but not enough to be a concern. So this movement is using far-fetched simulations.
The UN Fifth Assessment Report did use real-world data on what the trends showed. The conclusion that followed was that it is very unlikely that extreme weather events are increasing. This didn’t serve their political objectives. So the Sixth Assessment Report is using simulations to get the results they want.
It’s disgusting. The same people who claim to understand the world’s climate and what’s ailing it don’t know where the Wuhan virus came from.