The accuracy and integrity of weather and climate measurements have always been a concern. However, errors and omissions were not as consequential in the past as they are now.
A hundred or even fifty years ago, our major weather concerns were largely limited to local weather. Today, when we fly from NYC to LAX, we need more detailed and reliable weather information: is it snowing in St. Louis, where we have a layover?
Or consider the farmer in Nebraska who needs to see the spring wheat production forecast for Ukraine. In today's global markets, he needs the best possible information to estimate how many acres of winter wheat he should plant.
We especially need better and more reliable information to decide what actions we should take to prepare for climate change.
While scientists, engineers, and software programmers understand the importance of, and need for, data accuracy, the general public is not aware of how challenging these tasks can be.
When looking at long-term climate data, we may have to use multiple proxies (indirect measures that we hope vary directly with weather), which adds extra layers of complexity, cost, and potential error.
Among the most commonly used proxies are the ancient temperature and CO2 levels derived from ice core samples. For the last few hundred years, tree-ring data has been a primary source of annual temperatures.
But for the past half-century, direct atmospheric readings have been used, and these are very accurate and reliable.
When we look at figure 1, we see CO2 rise dramatically in the mid-1950s. That is when we stopped using proxies to measure atmospheric CO2 content and started using direct readings from the Mauna Loa Observatory in Hawaii.
So we need to ask ourselves: was this dramatic increase in CO2 real, or could it be partially skewed by the change in the measurement process? Similarly, in the early 1960s, we stopped using tree-ring data where we could.
Certain discrepancies had been found during the period when we had both tree-ring and thermometer records. We cannot make such changes in measurement without leaving room for doubt.
For example, figure 1 shows the CO2 content of Antarctic ice sheets that are thousands of years old. Before the mid-1950s, CO2 estimates were calculated from the gas trapped in ice cores.
The CO2 levels measured in this fashion never seemed to get much over 280 ppm over a period of tens of thousands of years.
Now note that starting about 6,000 years ago, we see a small but steady increase as we read from left to right.
The growth seems to continue at a reasonably constant rate until the mid-1950s. Here the classical assumption is that CO2 and temperatures were both going up.
Recently, some scientists have looked at the same curve from the other direction: standing in the mid-1950s and reading right to left, the slope goes down as the ice gets older.
Doing so raises the question: is CO2 being squeezed out of the glacier by its enormous weight as the ice ages, perhaps in combination with CO2 being chemically sequestered, and in what proportions?
Starting in the mid-1950s, we see a very substantial and fast rise in CO2 levels, producing the now-familiar CO2 hockey stick. Did CO2 shoot up that fast, or was the jump partly an anomaly caused by the change in measurement methods? We think the latter.
How would the average person know? Was this dramatic change ever explained in an exact, understandable way? For now, let's refer to the subject more generally as "data integrity."
Here is another simple example. If we wanted to measure the Boston area’s temperature 200 years ago, we might have taken, say, twenty thermometers to twenty different locations.
We would have made some general decisions about putting a few along the coast and the rest in various spots in the city and countryside—mostly on farms.
We may have put only one or two in the mountains or forests, because those stations needed to be manned and the data recorded several times a day.
Then, maybe once or twice each day, week, or month, the readings might have been consolidated to obtain an average "Boston temperature" for October 1820.
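To make those judgment calls concrete, here is a minimal sketch, in Python, of how such a consolidation might have looked. The station names and readings below are invented for illustration; nothing here reflects actual 1820 records.

```python
# A minimal sketch of consolidating scattered station readings into one
# monthly "Boston temperature". All station names and values are invented.
from statistics import mean

# Hypothetical October 1820 readings (deg F); each station was observed
# a different number of times.
october_1820 = {
    "harbor_wharf": [52, 51, 54, 53],
    "city_common":  [55, 56, 54],
    "concord_farm": [48, 47, 50, 49, 48],
    "blue_hills":   [44, 45],  # sparse: hard to reach, so rarely read
}

# Step 1: average each station's own readings.
station_means = {name: mean(vals) for name, vals in october_1820.items()}

# Step 2: average the station means with equal weight.
boston_october_1820 = mean(station_means.values())
print(f"October 1820 'Boston temperature': {boston_october_1820:.1f} F")
```

Even in this toy version, equal weighting is a choice, not a law of nature; weighting by area, or by coast versus inland, would produce a different "Boston temperature" from the same raw readings.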
How would that compare with the Boston weather of October 1920 or 2020, to see whether temperatures have been rising or not? It poses quite a challenge:
- Over the last hundred years, trees might have grown up around a thermometer that, in 1920, stood in full sun all day long. Now what?
- Some instruments were moved for some reason, like a major highway construction; how did that affect the temperature readings?
- Some instruments might have gradually gone out of calibration for months or even years before they were repaired or replaced. What do we do with the suspect data during the questionable period? Ignore it?
- When instruments were replaced, how were they replaced? Was the new instrument at the same height from the ground, in the same protective box? Was a mercury thermometer replaced by an alcohol thermometer or a thermocouple?
- A weather station sat near a dirt road that was paved with cement in 1926 and with asphalt in 1963, then converted back to a dirt road when the area became a nature park in 2004. How do we account for that?
- How would we compare, contrast, and integrate those temperatures with temperatures leading up to 2020? The conditions are very different, and the task very challenging:
- Instruments that were once in a pasture are now near airport runways and jet exhausts!
- Another one was near a shady, sandy road that’s now an asphalt parking lot.
- Thermocouples have replaced many thermometers; how were the readings "stitched together"? (One common splicing approach is sketched after this list.)
- Other weather stations were just abandoned because of the high costs of maintaining them or were replaced by a remote thermocouple or telemetry.
- How do we reconcile the effect of the pollution of the 1960s through the 1990s with the pristine skies of the 1800s, when clouds play such an important role?
- And the cloud cover of 1820 was probably quite different from today's as a result of increasing levels of "aerosols," which play a vital role in cloud formation and in the "greenhouse" and "albedo" effects.
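On the "stitching" question above, here is a minimal sketch of one common splicing approach: run both instruments side by side for a while, estimate a constant offset from the overlap, and shift the older series to match. All numbers are invented, and real homogenization is considerably more involved.

```python
# Sketch: splicing an old thermometer series onto a newer thermocouple
# series via an overlap-period offset. All data are invented.
from statistics import mean

thermometer  = {1950: 51.2, 1951: 51.0, 1952: 51.4, 1953: 51.1, 1954: 51.3}
thermocouple = {1953: 51.6, 1954: 51.8, 1955: 51.5, 1956: 51.9, 1957: 52.0}

# Years in which both instruments ran side by side.
overlap = sorted(set(thermometer) & set(thermocouple))
offset = mean(thermocouple[y] - thermometer[y] for y in overlap)

# Shift the old series onto the new instrument's scale, then splice.
stitched = {y: t + offset for y, t in thermometer.items() if y not in thermocouple}
stitched.update(thermocouple)

print(f"estimated offset: {offset:+.2f} F")
for year in sorted(stitched):
    print(year, round(stitched[year], 2))
```

Note what is at stake: the estimated offset gets applied to every reading in the older series, so a small error in that one number silently shifts decades of history.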
In recent decades, and mostly since the satellite period, hundreds of Earth-based weather stations were abandoned for a variety of reasons, including cost and data reliability.
Over the past several decades, NASA and NOAA have been trying to “normalize” current and historical recorded land-based and sea-based weather records.
Note figure 2, where we see two versions of precisely the same data! The blue line represents the actual land-based temperatures from 1,218 stations in the US as the readings were taken.
Compare that to the red line, which shows these very same temperature records after they were "normalized" by NOAA. [1]
"Normalization" has a practical basis. It is a little like calculating the total amount of fruit when adding apples to oranges. However, the process is susceptible to erroneous assumptions and faulty execution.
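For a feel of what a single normalization step can look like, here is a toy illustration of adjusting a record for a documented station change. To be clear, this is not NOAA's actual homogenization algorithm; it is only the simplest version of the idea, with invented data.

```python
# Toy "normalization": remove the apparent step caused by a documented
# instrument move, treating the jump as artifact rather than climate.
# All data are invented; NOAA's real methods are far more elaborate.
from statistics import mean

raw = {1975: 52.0, 1976: 52.1, 1977: 51.9, 1978: 52.2, 1979: 52.0,
       1980: 53.1, 1981: 53.0, 1982: 53.2, 1983: 53.1, 1984: 53.3}

move_year = 1980  # hypothetical documented station move

before = [t for y, t in raw.items() if y < move_year]
after  = [t for y, t in raw.items() if y >= move_year]

# Attribute the whole jump in the means to the move and subtract it.
step = mean(after) - mean(before)
adjusted = {y: (t - step if y >= move_year else t) for y, t in raw.items()}

print(f"estimated step: {step:+.2f} F")
```

Every such adjustment embeds a judgment: how much of that step was the instrument move, and how much was real change? That is exactly where honest error, or bias, can enter the record.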
Donald Easterbrook, a prominent geologist, claims that the previous historical records were purposely manipulated, as shown in figure 2. Accusations have been made that these temperatures were skewed to fit the current narrative of CO2-induced global climate change.
The historical blue-line data has been changed at least four times over the past few decades. In its current form, the red line shows a more dramatic, steeper temperature rise since the 1980s, achieved by lowering the temperatures of previous decades!
Today, when we are asked to make multi-trillion-dollar decisions based on our temperature history over the last century, these questions of data integrity have become severe and consequential.
For more information we recommend our book A Hitchhikers Journey Through Climate Change, coming soon to the CFACT store at CFACT.org.
[1] Real Climate Science article: “The problem with the NOAA graph is that it is fake data. NOAA creates the warming trend by altering the data. The NOAA raw data shows no warming over the past century.” Source