Understanding Climate Model Variability

Climate models allow us to make what amount to very educated guesses about things like temperature and precipitation under future climate change. But here’s the thing: while models agree about the big picture (for instance, that our world will continue to warm as more greenhouse gases are pumped into the atmosphere), they don’t agree about the specifics (exactly how warm things might get).

Instead, climate models tend to offer a range of probable outcomes. That’s because while all climate models represent the physics of the Earth’s climate system—and in doing so obey those physical laws—not all model developers take the same route to simulate this complex system in a computer. In practice, this means different models often project different and sometimes contrasting futures under climate change, especially at the local level.

Needless to say, this isn’t exactly what resource managers, decision makers, and citizens are looking for when planning for an already uncertain future under climate change. But there is hope. We’re getting better at evaluating how and why models produce their contrasting results.

In October of last year, the journal Climate Dynamics published just such a climate model evaluation effort by CIRC and OCCRI researchers. (Okay, you could file this one under “Studies We Missed,” but we like to space out the reviews of our research.)

The study, led by CIRC and OCCRI researcher David Rupp and including CIRC researcher John Abatzoglou as well as CIRC co-lead and OCCRI director Philip Mote, examines the disparity in model results for temperature and precipitation under projected future climate for the Columbia River Basin.

The Models and Where They Came From

The researchers analyzed data from 35 global climate models (GCMs). The GCMs’ data came from the fifth phase of the Coupled Model Intercomparison Project (CMIP5).

You can think of CMIP5 as something like a giant library that stores data from GCM simulation runs of the world’s climate.

It works something like this:

1. GCM simulations are run. This usually involves modeling the entire Earth’s climate forward in time, often to the year 2100, as well as back in time, often to roughly the start of the coal-burning industrial revolution (defined as the mid-1800s) and hence the start of major anthropogenic climate change.
2. The data resulting from both kinds of runs (future and past) are then collected and archived by CMIP5.
3. The archived data are then accessed by groups like CIRC and OCCRI, who analyze the data, creating local climate projections through a process called downscaling. This is what Rupp and colleagues did for the Columbia River Basin (a rough sketch of this step follows below).
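To make that archive-to-analysis step a little more concrete, here’s a minimal sketch in Python (using the xarray library) of what pulling one model’s archived output and trimming it to the region of interest can look like. The file name, the variable handling, and the basin bounding box are illustrative assumptions, not the study’s actual code or workflow.

```python
import xarray as xr

# Hypothetical CMIP5-style file holding one GCM's monthly near-surface air
# temperature; 'tas' is the standard CMIP5 name for that variable.
ds = xr.open_dataset("tas_Amon_EXAMPLE-GCM_rcp85_r1i1p1_200601-210012.nc")

# Rough, illustrative bounding box for the Columbia River Basin
# (CMIP5 longitudes typically run 0-360 degrees east).
basin_tas = ds["tas"].sel(lat=slice(41, 53), lon=slice(235, 250))

# Downscaling (not shown here) would then translate this coarse global grid
# onto a finer, locally meaningful grid before any regional analysis.
print(basin_tas)
```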

Using GCM data from simulated runs for the Columbia River Basin for the years 1979 to 1990 as their baseline, the researchers compared these data to GCM data for future projections of the basin’s climate. The idea was to figure out how and why the models produced different results for temperature and precipitation.

The researchers did this by choosing data from GCM climate runs representing two very different possible futures: one in which humanity starts cutting greenhouse gas emissions, so that our current trend of rising temperatures eventually levels off, and one in which we do nothing and the current upward warming trend continues unabated.

Here’s how this works:

In a GCM, future projections entail adding a given amount of greenhouse gas emissions to the modeled climate. This process simulates how energy from the sun is trapped by the additional gases. The result is what’s called anthropogenic forcing, which pushes the climate system out of equilibrium, creating climate patterns beyond natural variability (the monthly, seasonal, yearly, and sometimes decades-long natural fluctuations in the climate). This anthropogenic forcing shows up in the computer simulations as a measure of humanity’s contribution to climate change, a clear signal above the noise of natural variability.

Putting all this together, researchers can figure out how big humanity’s current contribution to climate change is. But just how big will that contribution be in the future? The short answer: we don’t know and, as we’ll see, the GCMs don’t totally agree.

The biggest problem is that we simply can’t know how much more greenhouse gas our species will emit in the future. That means we can’t know the rate of warming. So to make up for this big uncertainty, climate researchers make a reasonable guess: emissions will either slow down, or they won’t.

Emissions Scenarios

Climate researchers and our frequent readers will know this guesswork as the two emission scenarios: the Representative Concentration Pathways (RCP) 4.5 and 8.5.

RCP 4.5 is the middle-of-the-road scenario that assumes we cut emissions and experience moderate warming. RCP 8.5 is a high-emissions scenario that assumes we do nothing and experience some pretty extreme warming. These were the two scenarios Rupp and colleagues used for the Columbia River Basin.

If you’re following the math, that’s two RCPs multiplied by 35 models, giving the researchers 70 possible futures for the basin. Only it’s a little more complicated than that. Some of those 35 models had more than one run for each RCP. This made the actual number of runs around 150. (Keep this in mind as we proceed.)

Graphed out, the data points revealed the kind of diverging range of temperature and precipitation projections many of our readers are used to seeing. They broke down like this:

Temperature Projections

The clear trend for temperature across all 150 simulations (again, two emissions scenarios and 35 models) was upward. The models agreed: without exception, every model/scenario combo the researchers examined showed warming for the Columbia River Basin under future climate conditions. The future will be warmer under climate change, but the details are more nuanced.

The researchers compared projected mean seasonal temperatures for three 30-year periods—2010 to 2039, 2040 to 2069, and 2070 to 2099—against mean seasonal temperatures for their baseline years, 1979 to 1990. Let’s consider just the mean temperature projections for the summer months for the years 2070 to 2099.

By the late decades of this century, the Columbia River Basin’s summers—that’s June, July, August, and September—are projected to warm by roughly 3 degrees Celsius (5 degrees Fahrenheit) for RCP 4.5 (the lower emissions scenario) and 6 degrees Celsius (11 degrees Fahrenheit) for RCP 8.5 (the higher scenario) above that of the baseline years, according to the researchers’ analysis of the CMIP5 data.
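For a sense of what that comparison looks like in practice, here’s a minimal sketch, again in Python with xarray, of one run’s summer-mean warming for 2070 to 2099 relative to the 1979 to 1990 baseline. It assumes a hypothetical file of basin-averaged monthly temperature spanning both periods; it is not the researchers’ actual code.

```python
import xarray as xr

# Hypothetical basin-averaged monthly temperature for one model run,
# spanning the historical baseline and the RCP 8.5 future.
tas = xr.open_dataset("tas_basin_mean_EXAMPLE-GCM_rcp85_r1i1p1.nc")["tas"]

# Summer here follows the article's definition: June through September.
summer = tas.where(tas["time.month"].isin([6, 7, 8, 9]), drop=True)

baseline = summer.sel(time=slice("1979", "1990")).mean("time")
future = summer.sel(time=slice("2070", "2099")).mean("time")

warming = float(future - baseline)  # degrees Celsius of summer warming
print(f"Summer warming, 2070-2099 vs 1979-1990: {warming:.1f} C")
```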

However, the multi-model mean wasn’t what the researchers were primarily interested in. They were after the range of temperature projections and how that range was produced.

And that range was significant.

Projections for the years 2070 to 2099 showed average temperatures for the summer months ranging from just 1 degree Celsius (2 degrees Fahrenheit) warming for RCP 4.5 to well above 8 degrees Celsius (15 degrees Fahrenheit) of warming for RCP 8.5.

The range is important, the researchers point out, because while the numerical mean represents a kind of best guess, the mean might not be the actual future the region experiences. In other words, don’t ignore the tail ends of the distribution, which produce significantly different futures for resource management.
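A toy example shows why the mean alone can mislead. The warming values below are invented for illustration (they are not the study’s data); the point is simply that the same set of runs can have a reasonable-looking mean and very different tails.

```python
import numpy as np

# Hypothetical end-of-century summer warming (degrees C) from an ensemble of runs.
warming = np.array([1.2, 2.4, 2.9, 3.1, 3.3, 3.6, 4.0, 4.8, 5.5, 6.9, 8.2])

print("mean :", round(float(warming.mean()), 1))
print("range:", round(float(warming.min()), 1), "to", round(float(warming.max()), 1))
print("10th-90th percentile:", np.percentile(warming, [10, 90]).round(1))
```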

Okay, a slight digression here:

This study makes use of a previous analysis by Rupp and colleagues of CMIP5 models. In their earlier study, the researchers discovered that GCMs that performed better at reproducing the Northwest’s historical climate—a process called hindcasting that compares how well model simulations can recreate actual historical data—tended to project more future warming. In other words, the models that were a better statistical fit for the region consistently produced warmer futures under the same amount of greenhouse gases when compared to other models. This hints that the mean of all 35 models might not be the measure we want when adapting to future climate change here in the Northwest.

Okay, now onto precipitation.

Precipitation Projections

For their examination of precipitation, the researchers looked at projections for those same three 30-year periods, again comparing them against GCM data for the baseline years, 1979 to 1990.

Precipitation projections from the GCM data also showed a range, but unlike temperature, there wasn’t a clear agreement about the trend. While temperature projections showed a clear warming trend—add greenhouse gases to the system and watch temperatures rise—there wasn’t an analog when it came to precipitation. There wasn’t a consensus on whether the region would get wetter or drier as anthropogenic forcing was added. What’s more, this was true across both RCP 4.5 and RCP 8.5. As with temperature, the range was significant. Let’s consider just the summer months for the years 2070 to 2099.

Under RCP 4.5, GCM projections for precipitation during the summer months for the end of the century ranged from a decrease of nearly 30 percent to an increase of nearly 15 percent. GCM data for RCP 8.5 told a similar story for those same years and months, but here the trend was even more pronounced. Projections ranged from a decrease of nearly 35 percent to an increase of 20 percent.

Accounting for the Range and the Disagreement

So what accounted for the range and the disagreement among the models?

To answer this question, Rupp and colleagues concerned themselves with two basic factors: how much each model’s global climate responded to a given amount of forcing, and how the initial conditions at the start of a given GCM run—for instance, the precise humidity and temperature conditions at the start of simulations for this century—affected that run’s outcome. Because, obviously, a big part of the range had to do with the difference in forcing between RCP 4.5 and RCP 8.5, the analysis was done in essentially two batches, one for each scenario. Let’s start with the initial conditions.

Precipitation Range Linked to Initial Conditions

Applying a statistical analysis—which we won’t go into here—Rupp and colleagues determined that the GCMs’ initial conditions accounted for the majority of the difference in outcomes for precipitation, but not for temperature. Initial conditions had only a minor effect on the range in temperature projections, the researchers found. Precipitation projections, however, were very dependent on initial conditions.

Initial conditions—what the researchers referred to as internal variability—accounted for most of the range of disagreement between the GCMs’ projections. And this was true across both RCPs. In other words, how the models began their runs proved more important than the methods used by each GCM to simulate precipitation in response to increased greenhouse gases.
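Here’s a back-of-the-envelope sketch of that kind of partitioning, using invented precipitation changes rather than the study’s data and a much simpler calculation than the paper’s actual statistical method. Spread among runs of the same model stands in for internal variability (initial conditions); spread among the models’ own averages stands in for differences in how the models respond.

```python
import numpy as np

# Hypothetical percent changes in precipitation, with runs grouped by model.
runs_by_model = {
    "model_A": [-12.0, 3.0, -5.0],
    "model_B": [8.0, -9.0, 1.0, 14.0],
    "model_C": [-2.0, -20.0],
}

model_means = np.array([np.mean(runs) for runs in runs_by_model.values()])

within = np.mean([np.var(runs) for runs in runs_by_model.values()])  # internal variability
between = np.var(model_means)                                        # model-to-model differences

total = within + between
print(f"internal-variability share: {within / total:.0%}")
print(f"model-difference share    : {between / total:.0%}")
```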

As we’ve written about at length on our website, precipitation’s response to climate forcing is a difficult thing to model in a computer. Unlike temperature, precipitation here in the Northwest lacks a clear response to added greenhouse gases. This makes the climate change signal harder to tease out from the noise of natural variability. It also seems to make precipitation projections more dependent on initial conditions.

One way to find the signal and break this dependence is simply to run the model over and over again until averaging across the runs drowns out the ripples caused by each run’s initial conditions. (This is somewhat analogous to running an experiment multiple times.) But here’s the problem with climate modeling when it comes to precipitation: climate models are often not run enough times to find the signal in the noisy data.
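Here’s a toy illustration of that idea (it is not a climate model). Each simulated “run” is the same imposed warming trend plus random noise standing in for a different set of initial conditions; averaging more runs recovers the underlying trend more cleanly.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
years = np.arange(100)
forced_signal = 0.03 * years  # an imposed trend, e.g., degrees per year

def one_run():
    # Different random noise stands in for different initial conditions.
    return forced_signal + rng.normal(0.0, 1.0, size=years.size)

for n_members in (2, 10, 50):
    ensemble_mean = np.mean([one_run() for _ in range(n_members)], axis=0)
    error = np.abs(ensemble_mean - forced_signal).mean()
    print(f"{n_members:2d} runs: average error vs. forced signal = {error:.2f}")
```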

This was the case with the data Rupp and colleagues used. Some of the GCMs examined had two runs associated with them, while others had up to 10. (Some experiments were simply run more times than others.)

Temperature Range Linked to Global Climate Warming

Temperature projections were another story. Here the GCM data showed a clear trend: add greenhouse gases and things heat up. The reason for the disagreement among the model outputs, according to the researchers’ analysis, was not initial conditions but how much global warming each model produced overall, a trend the basin’s warming closely tracked.

Comparing the GCMs’ projections for global mean temperature to projections for local mean temperature in the Columbia River Basin, the researchers’ analysis found this: how sensitive the global temperature was to increases in greenhouse gases accounted in large part for the differences among the local projections. This is because the local temperature tracked closely with the global mean temperature, the researchers found.
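Here’s a minimal sketch of that kind of comparison: across models, regress each one’s projected basin warming on its projected global-mean warming. The numbers are invented for illustration; a slope near one and a high correlation are what “tracking closely” would look like.

```python
import numpy as np

# Hypothetical end-of-century warming (degrees C), one value per model.
global_warming = np.array([1.8, 2.3, 2.9, 3.4, 3.9, 4.6, 5.2])
basin_warming = np.array([2.0, 2.6, 3.3, 3.6, 4.4, 5.0, 5.9])

slope, intercept = np.polyfit(global_warming, basin_warming, 1)
r = np.corrcoef(global_warming, basin_warming)[0, 1]
print(f"basin warming ~ {slope:.2f} x global warming + {intercept:.2f} (r = {r:.2f})")
```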

Here the interpretation of the results gets a little tricky.

The strong connection to global temperature points to differences in how the GCMs model the physics of the atmosphere and ocean and, hence, how sensitive the models are to a given amount of greenhouse gas emissions. As we noted above, while all GCMs run their simulations according to physical laws, they sometimes disagree about the best way to do this. This seems to account for why the basic physics of more greenhouse gases = more warming is the clear trend, even while there is disagreement about the degree of that warming. Then again, there could be something else going on that hasn’t been teased out of the data yet.

The Biggest Uncertainty

Rupp and colleagues end their analysis by noting that their work just scratches the surface. A larger effort is still needed, the researchers note, to determine how the models produce their different results, and ultimately how the range of outcomes might be whittled down to something more manageable that decision makers can use to plan for climate change. Largely through public funding, Rupp and others are continuing this work at OCCRI and CIRC.

But take note: uncertainty in climate projections will never be completely avoidable. By far the biggest uncertainty is how much more greenhouse gas we will emit into the atmosphere. This isn’t a scientific uncertainty but a cultural and political one.

As we hone our science and work out the kinks in our models (and transparently describe this process in really, really long, incredibly entertaining posts), let’s not argue about the big picture. The scientific consensus is clear. More greenhouse gases = more warming, even if the science is still working out the details.

A side note: A major reason for the location of the study is the Columbia River Treaty between the United States and Canada. Ratified in 1964, the treaty is now up for renegotiation. This is a big deal because the treaty currently defines rights and responsibilities between the two parties concerning hydropower and flood control. Because climate change is expected to affect both by altering the region’s hydrology, any effective renegotiation of the treaty should include climate change projections, ideally in a transparent way that accounts for the range of those projections.


Study: Climate Dynamics

Citation: Rupp, David E., John T. Abatzoglou, and Philip W. Mote. “Projections of 21st century climate of the Columbia River Basin.” Climate Dynamics (2016): 1-17.

Photo Caption: Bonneville Dam at Sunrise. (Photo Credit: Portland Corps, some rights reserved.)


Nathan Gilles is the managing editor of The Climate Circulator, and oversees CIRC’s social media accounts and website. When he’s not writing for CIRC, Nathan works as a freelance science writer. 

 

 



Stay up to date on the latest climate science news for the Northwest: subscribe to the CIRCulator.

 
