A lot of us have been pointing out one of the big problems with the global warming theory: a long plateau in global temperatures since about 1998. Most significantly, this leveling off was not predicted by the theory, and observed temperatures have fallen below the lowest end of the range projected by the computerized climate models.
So what to do if your theory doesn’t fit the data? Why, change the data, of course!
Hence a blockbuster new report: a new analysis of temperature data since 1998 “adjusts” the numbers and magically finds that there was no plateau after all. The warming just continued.
Starting in at least early 2013, a number of scientific and public commentators have suggested that the rate of recent global warming has slowed or even stopped. The phenomenon has been variably termed a “pause,” a “slowdown,” and a “hiatus.”…
But as a team of federal scientists report today in the prestigious journal Science, there may not have been any “pause” at all. The researchers from the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information (NCEI) adjusted their data on land and ocean temperatures to address “residual data biases” that affect a variety of measurements, such as those taken by ships over the oceans. And they found that “newly corrected and updated global surface temperature data from NOAA’s NCEI do not support the notion of a global warming ‘hiatus.’”
Even more conveniently, they’re signaling for everyone else to get on board:
One question raised by the research is whether other global temperature datasets will see similar adjustments. One, kept by the Hadley Center of the UK Met Office, appears to support the global warming “hiatus” narrative—but then, so did NOAA’s dataset up until now. “Before this update, we were the slowest rate of warming,” said Thomas Karl, the study’s lead author. “And with the update now, we’re the leaders of the pack. So as other people make updates, they may end up adjusting upwards as well.”
This is going to be the new party line. “Hiatus”? What hiatus? Who are you going to believe, our adjustments or your lying thermometers?
The new adjustments are suspiciously convenient, of course. Anyone who is touting a theory that isn’t being borne out by the evidence and suddenly tells you he’s analyzed the data and by golly, what do you know, suddenly it does support his theory—well, he should be met with more than a little skepticism.
If we look, we find some big problems. The most important data adjustments by far are in ocean temperature measurements. But anyone who has been following this debate will notice something about the time period for which the adjustments were made. This is a time in which the measurement of ocean temperatures has vastly improved in coverage and accuracy as a whole new set of scientific buoys has come online. So why would this data need such drastic “correcting”?
As climatologist Judith Curry puts it:
The greatest changes in the new NOAA surface temperature analysis is to the ocean temperatures since 1998. This seems rather ironic, since this is the period where there is the greatest coverage of data with the highest quality of measurements–ARGO buoys and satellites don’t show a warming trend. Nevertheless, the NOAA team finds a substantial increase in the ocean surface temperature anomaly trend since 1998.
NOAA corrected the ocean temperature measurements to be more consistent with a previous set of measurements. But those older measurements were known to be a problem. Scientists relied on measurements from merchant vessels, which had slowly switched from measuring water in buckets dipped over the side to measuring it on its way through intake ports for cooling the ship’s engines. But that meant that water temperatures were more likely to be increased by contact with the ship, producing an artificial warming.
Hence the objection made by Patrick Michaels, Richard Lindzen, and Chip Knappenberger:
As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the engine itself, and as such, never intended for scientific use. On the other hand, environmental monitoring is the specific purpose of the buoys. Adjusting good data upward to match bad data seems questionable.
That’s putting it mildly.
They also point to another big change in the adjusted data: projecting far northern land temperatures out to cover gaps in measurement over the Arctic Ocean. Yet the land temperatures are likely to be significantly warmer than the ocean temperatures.
I realize the warmists are desperate, but they might not have thought through the overall effect of this new “adjustment” push. We’ve been told to take very, very seriously the objective data showing global warming is real and is happening—and then they announce that the data has been totally changed post hoc. This is meant to shore up the theory, but it actually calls the data into question.
Anthony Watts, one of the chief questioners of past “adjustments,” points out that to make the pause disappear, they didn’t just increase temperatures since 1998. They also adjusted downward the temperatures immediately before that. Starting from a lower base of temperature makes the “adjusted” increase look even bigger. That’s a pattern that invariably shows up in all these adjustments: the past is always adjusted downward to make it cooler, the present upward to make it warmer—an amazing coincidence that guarantees a warming trend.
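The arithmetic behind that pattern is easy to sketch. In this hypothetical Python example (the anomaly numbers are invented for illustration and are not NOAA’s actual data), cooling the first half of a ten-year series by a few hundredths of a degree and warming the second half by the same amount roughly triples the least-squares trend, even though each individual reading barely moves:

```python
def slope(ys):
    """Ordinary least-squares slope of ys against the indices 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Hypothetical ten-year series of temperature anomalies (degrees C).
raw = [0.40, 0.42, 0.41, 0.43, 0.42, 0.44, 0.43, 0.45, 0.44, 0.46]

# The same series after cooling the early years by 0.03 C and
# warming the later years by 0.03 C.
adjusted = [y - 0.03 if i < 5 else y + 0.03 for i, y in enumerate(raw)]

print(round(slope(raw), 4))       # 0.0055 degrees per year
print(round(slope(adjusted), 4))  # 0.0145 degrees per year
```

The point of the sketch is only that small, opposite-signed shifts at the two ends of a series act directly on the slope, which is why the direction of an adjustment matters as much as its size.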
All of this fits into a wider pattern: the global warming theory has been awful at making predictions about the data ahead of time. But it has been great at going backward, retroactively reinterpreting the data and retrofitting the theory to mesh with it. A line I saw from one commenter, I can’t remember where (update: it was David Burge), has been rattling around in my head: “once again, the theory that predicts nothing explains everything.”
There is an important difference between prediction before the fact and explanation after the fact. Prediction requires that you lay down a marker about what the data ought to be, to be consistent with your theory, before you actually know what it is. That’s something that’s very hard to get right. If your theory is going to be able to consistently predict data before it is gathered, it has got to be pretty darned good. Global warming theories have a wretched track record at making predictions.
But explanations of data after the fact are a lot easier. As they say, hindsight is 20/20. It’s a lot easier to tweak your theory to make it a better fit to the data, or in this case, to tweak the way the data is measured and analyzed in order to make it better fit your theory. And then you proclaim how amazing it is that your theory “explains” the data.
If this difference between prediction and explanation seems merely technical, remember that the whole political cause of global warming is based on the theory’s claim to make predictions before the fact—way before the fact, projecting temperatures for the next century. We’re supposed to base the whole organization of our civilization, at a cost of many trillions of dollars, on those ultra-long-term predictions. So exulting that they can readjust the data for the last few years to jibe with their theory after the fact is not exactly the reassurance we need.
Anyone with the slightest familiarity with science ought to be immediately skeptical of this new claim, so naturally mainstream media “science reporters” repeat it with complete credulity and even pre-emptively inoculate us against the sin of doubt. The Washington Post report/press-release-transcription has a nice little passive-aggressive twist, sneering that “The details of the data adjustments quickly get complicated—and will surely be where global warming doubters focus their criticism.” Those global warming doubters, always finding something to kvetch about! What are you gonna do?
Worse, the Post ends by passing along a criticism of mainstream scientists for even discussing the global warming pause before now.
Harvard science historian Naomi Oreskes recently co-authored a paper depicting research on the “hiatus” as a case study in how scientists had allowed a “seepage” of climate skeptic argumentation to affect the formal scientific literature. Of the new NOAA study, she said in an e-mail: “I hope the scientific community will do a bit of soul searching about how they got pulled into this framework, which was clearly a contrarian construction from the start.”
Remember that everybody’s data was showing a plateau in global temperatures, and many of the studies focused on this were attempting to uphold the global warming theory in the face of that evidence. Yet now some of the theory’s own supporters are going to be thrown under the bus for showing too much faith in the data and too little faith in the cause. They will get the message stated bluntly by Oreskes: science must never be contaminated by skepticism.
That gives us a pretty good idea of what is going on here, because any field where people say this sort of thing is, by that very fact, no longer a field of science.
Copyright © 2016 The Federalist, a wholly independent division of FDRLST Media, All Rights Reserved.