By Gaius Publius, a professional writer living on the West Coast of the United States and frequent contributor to DownWithTyranny, digby, Truthout, and Naked Capitalism. Follow him on Twitter @Gaius_Publius, Tumblr and Facebook. GP article archive here. Originally published at DownWithTyranny
“Climate models that simulate the current climate the best [also] tend to project the most global warming.” See text below for explanation of the chart.
This is a climate story with two pieces.
The first piece is the by-now-obvious “Everything’s happening faster than anyone thought it would” observation, of which the immediate corollary is, “OMG we’re still screwed.”
It’s true that everything is happening faster than anyone thought it would (anyone who had a prominent public voice, that is). This part of the story is, as noted above, “by now obvious.” Changes are happening a lot faster — big storms are more frequent than anticipated, even by those who anticipated them; wildfires are burning hotter and later in the season (December?!) than even those who predicted more and hotter wildfires; and the cost to insurance companies of climate-caused damage is rising faster than insurance companies anticipated — and anticipating increasing costs is their whole business model.
But those of us without power have already gotten that message. The real resistance to it comes from people who do have power — and who also have money to protect from it.
The second piece of this story is much more interesting — it’s about how the message we all understand to be true is now supported. A group of climate scientists has published a paper (subscription required) that puts statistical data around the observation that things are happening faster than most models predict.
In other words, they’re analyzing the models statistically to see which ones make the best predictions. Instead of waiting for events to prove which climate models (projections) were right, this study examines the models and identifies, ahead of the observational data, which ones are most likely to “get it right.”
In other other words, all models are not created equal, so taking the average of a large set of models tells us less than looking at the best models first. The study attempts to identify those models.
How did the researchers test which models were best? They looked for models that made the most accurate statements in the past about the earth’s energy imbalance — models that most correctly anticipated the difference between energy-in (from the sun) and energy-out (radiation of that energy back into space).
Our whole problem is that difference — too much energy-in relative to energy-out, and the planet heats. So models that made predictions about the energy imbalance that turned out to be right are likely to be right about the effects of that imbalance, such as the amount of increased global warming.
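To make the bookkeeping concrete, here is a minimal sketch of that energy balance in Python. The numbers are rough, round figures for illustration only — they are not taken from the paper:

```python
# Toy illustration of the planetary energy budget described above.
# All numbers are rough, round figures for illustration only.

SOLAR_IN = 340.0     # W/m^2, average solar energy arriving at the top of the atmosphere
REFLECTED = 100.0    # W/m^2, roughly what clouds, ice, and bright surfaces reflect away
OUTGOING_IR = 239.0  # W/m^2, infrared radiation escaping back to space

absorbed = SOLAR_IN - REFLECTED     # energy the planet actually takes in
imbalance = absorbed - OUTGOING_IR  # energy-in minus energy-out

print(f"Absorbed solar: {absorbed} W/m^2")
print(f"Net imbalance:  {imbalance} W/m^2")  # positive means the planet is heating
```

A model that gets `imbalance` right for the recent past is, in effect, getting the driver of warming right — which is why the researchers used it as their yardstick.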
From the website of the lead researcher, Patrick Brown:
The study addresses one of the key questions in climate science: How much global warming should we expect for a given increase in the atmospheric concentration of greenhouse gases?
One strategy for attempting to answer this question is to use mathematical models of the global climate system called global climate models. Basically, you can simulate an increase in greenhouse gas concentrations in a climate model and have it calculate, based on our best physical understanding of the climate system, how much the planet should warm. There are somewhere between 30 and 40 prominent global climate models and they all project different amounts of global warming for a given change in greenhouse gas concentrations. Different models project different amounts of warming primarily because there is not a consensus on how to best model many key aspects of the climate system.
To be more specific, if we were to assume that humans will continue to increase greenhouse gas emissions substantially throughout the 21st century (the RCP8.5 future emissions scenario), climate models tell us that we can expect anywhere from about 3.2°C to 5.9°C (5.8°F to 10.6°F) of global warming above pre-industrial levels by 2100. This means that for identical changes in greenhouse gas concentrations (more technically, identical changes in radiative forcing), climate models simulate a range of global warming that differs by almost a factor of 2.
The primary goal of our study was to narrow this range of model uncertainty and to assess whether the upper or lower end of the range is more likely.
RCP8.5 is the IPCC’s worst-case climate scenario; it’s roughly the same as “business as usual” forever with respect to emissions. It’s the red line in the chart below:
Back to Brown (my emphasis):
So, what variables are most appropriate to use to evaluate climate models in this context? Global warming is fundamentally a result of a global energy imbalance at the top of the atmosphere so we chose to assess models in their ability to simulate various aspects of the Earth’s top-of-atmosphere energy budget. We used three variables in particular: reflected solar radiation, outgoing infrared radiation, and the net energy balance. Also, we used three attributes of these variables: their average (AKA climatological) values, the average magnitude of their seasonal variability and the average magnitude of their month-to-month variability. These three variables and three attributes combine to make nine features of the climate system that we used to evaluate the climate models (see below for more information on our decision to use these nine features).
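The three-by-three construction Brown describes can be sketched as a simple cross product. The feature names below are paraphrases for illustration, not the paper’s own identifiers:

```python
from itertools import product

# The three top-of-atmosphere variables and three attributes named above.
variables = ["reflected_solar", "outgoing_infrared", "net_energy_balance"]
attributes = ["climatological_mean", "seasonal_variability", "monthly_variability"]

# Every (variable, attribute) pair is one feature used to score a model
# against satellite observations of the recent past.
features = [f"{v}:{a}" for v, a in product(variables, attributes)]

print(len(features))  # 3 variables x 3 attributes = 9 features per model
```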
And the finding:
We found that there is indeed a relationship between the way that climate models simulate these nine features over the recent past, and how much warming they simulate in the future. Importantly, models that match observations the best over the recent past tend to simulate more 21st-century warming than the average model. This indicates that we should expect greater warming than previously calculated for any given emissions scenario, or it means that we need to reduce greenhouse gas emissions more than previously thought to achieve any given temperature stabilization target.
In even plainer English, those models that best represented the energy imbalance were also the models that projected the greatest future warming.
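A toy illustration of the idea — a deliberate simplification, since the paper itself uses a more sophisticated statistical technique, and every number below is made up — is to weight each model’s projection by its historical skill:

```python
# Toy "observationally-informed" projection: weight each model's
# future-warming projection by how well it matched past observations.
# All skill scores and warming figures here are invented for illustration.

models = [
    # (past-skill score: higher = closer to the observed energy budget,
    #  projected warming by 2100 in deg C under a high-emissions scenario)
    (0.9, 4.8),
    (0.8, 4.6),
    (0.4, 3.6),
    (0.3, 3.3),
]

plain_average = sum(warming for _, warming in models) / len(models)

total_skill = sum(skill for skill, _ in models)
skill_weighted = sum(skill * warming for skill, warming in models) / total_skill

print(f"Plain model average:    {plain_average:.2f} C")
print(f"Skill-weighted average: {skill_weighted:.2f} C")
# The skill-weighted figure lands higher, because the best-scoring
# models are also the ones projecting the most warming.
```

The point of the toy example is only the direction of the shift: once the poorly-performing models stop dragging the average down, the headline number moves up.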
The RCP8.5 Example
Brown has an extended discussion of the models that use RCP8.5 as a base from which to predict warming, illustrated by the chart at the top of this piece, taken from the paper linked earlier. So let’s take a look at that chart and what it tells us.
First, note the “envelope” starting around 2015 that surrounds the red line and the blue dashed line. Together these show the range of predictions for the RCP8.5 emissions scenario for every model studied. Quite a range.
Next, ignore the difference between the blue part of the data envelope and the purple part. That’s not relevant to the point made here. Look instead at the very thin pink sliver that sits on top of the entire envelope; it’s labeled “Observationally-informed projections.” Models in this sliver “got it right” in the past with respect to the earth’s energy imbalance.
What this paper is saying is that, if we stay on the RCP8.5 business-as-usual emissions path, the best models predict (a) a very narrow range of warming outcomes, and (b) the worst warming outcomes.
Why This Matters
This matters for two reasons. One, it adds to the certainty that we’re cooking the planet — a truly serious matter. But two, it also gives a scientific answer to the warming deniers’ and delayers’ countercharge: “But look at the uncertainty. Look at the range of predictions. These models are all over the place. How can you trust them?”
This important paper shows that that “range of predictions” can be narrowed considerably, from a broad fat funnel of outcomes to a tiny, toothpick-wide sliver of them. Goodbye “uncertainty.”