CLIMATE MODELING is full of tradeoffs, due in large part to the sheer computing power that model runs require. Researchers often have to choose between high spatial detail (grid cells small enough to capture local weather patterns) and having enough model runs to obtain good statistics (generally, the more runs you make, the more reliable your results). Achieving high spatial detail usually requires a regional climate model, which has smaller grid cells than a global climate model. But here we run into a problem: regional models are typically run only once. That is like pulling one card from a dozen and using it to guess what the other 11 cards are. Fret not. There are solutions.
This month we are pleased to feature two articles covering OCCRI’s involvement in the citizen science project climateprediction.net. The modeling effort, also known as weather@home, allows researchers not only to achieve a high level of spatial detail, but also to make enough model runs to obtain robust statistics.
The project works by using a regional climate model with grid cells 25 km (about 15.5 miles) to a side. (To give this some perspective, global climate models usually run at resolutions of 100 to 300 km, or roughly 62 to 186 miles, to a side.) Model runs are computed on private computers (not supercomputers) via weather@home, a sort of crowd-sourced, cloud-computing effort that lets regular people volunteer their personal computers’ processing power to crunch numbers over the Internet for climate researchers.
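To get a feel for why that resolution difference matters computationally, here is a back-of-the-envelope sketch in Python. The domain size (2,500 km to a side) and the 150 km global spacing are our own illustrative assumptions, not figures from the papers:

```python
# Rough comparison of grid-cell counts at regional vs. global resolution
# over the same (assumed) square domain.

DOMAIN_KM = 2500          # assumed width of a western-US domain (illustrative)
REGIONAL_RES_KM = 25      # HadRM3P grid spacing, from the article
GLOBAL_RES_KM = 150       # a spacing within the article's 100-300 km range

regional_cells = (DOMAIN_KM // REGIONAL_RES_KM) ** 2   # 100 x 100 = 10,000
global_cells = (DOMAIN_KM // GLOBAL_RES_KM) ** 2       # 16 x 16 = 256

print(f"Regional model: {regional_cells:,} cells")
print(f"Global model:   {global_cells:,} cells")
print(f"Ratio: ~{regional_cells / global_cells:.0f}x more cells")
```

And because smaller cells also force the model to take shorter timesteps, the true computational cost grows even faster than the cell count, which is why volunteered computing power is so valuable here.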
The first paper, by OCCRI Director Philip Mote and colleagues and published in the Bulletin of the American Meteorological Society, describes the regional climate modeling setup the project employs. For their modeling, the researchers used the Hadley Centre regional model (HadRM3P), which allowed for high resolution over the western U.S. They then nested HadRM3P within the larger, coarser-resolution Hadley Centre global atmospheric model (HadAM3P).
Essentially, the global model accounted for activity outside the regional domain by simulating the entire global atmosphere. Relevant information was then taken from the global runs and fed to the regional model as boundary conditions along the edges of its domain. Multiple regional runs were then made. The researchers’ experiments included replicating the region’s historical climate for the years 1960 to 2009, as well as projecting its future climate for the years 2030 to 2049.
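For readers who like to see the moving parts, here is a minimal conceptual sketch of this kind of one-way nesting. It is not the actual HadRM3P/HadAM3P code; the grid sizes, the random-walk “physics,” and the single averaged boundary value are all stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_global(state):
    """Advance the (toy) coarse global field by one timestep."""
    return state + rng.normal(0, 0.1, state.shape)

def step_regional(state, boundary):
    """Advance the (toy) fine regional field, pinning its edges to
    values supplied by the global model (one-way nesting)."""
    state = state + rng.normal(0, 0.1, state.shape)
    state[0, :] = state[-1, :] = boundary   # north/south edges
    state[:, 0] = state[:, -1] = boundary   # east/west edges
    return state

global_field = np.zeros((16, 16))    # coarse grid (think ~150 km cells)
regional_field = np.zeros((96, 96))  # fine grid over a sub-domain (~25 km cells)

for t in range(100):
    global_field = step_global(global_field)
    # In the real setup, global values along the regional domain's rim are
    # interpolated in space and time; here we pass a single mean value.
    boundary = global_field[4:8, 4:8].mean()
    regional_field = step_regional(regional_field, boundary)
```

The key point is the direction of information flow: the global model influences the regional model’s edges, but the regional model does not feed back into the global one.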
The project’s initial results are promising. West-wide temperature variability and trends in the model are remarkably similar to observations. Spatial patterns of temperature change suggest that warming will likely be greatest where mountain snowpack is already disappearing. And, not surprisingly given the large number of runs, Mote and colleagues report that they were able to obtain very good statistics on extreme events such as heat waves, whose rarity normally makes good statistics hard to come by.
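Why do more runs help so much with rare events? A toy Monte Carlo sketch makes the point. The Gumbel distribution and the 45 °C threshold below are arbitrary stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def summer_max_temps(n_runs, n_years=50):
    """Toy stand-in for a superensemble: each 'run' yields one summer
    maximum temperature per year, drawn from an assumed distribution."""
    return rng.gumbel(loc=38.0, scale=2.0, size=(n_runs, n_years))

THRESHOLD = 45.0  # an assumed rare heat-wave threshold (degrees C)

for n_runs in (1, 10, 100, 1000):
    temps = summer_max_temps(n_runs)
    freq = (temps > THRESHOLD).mean()
    print(f"{n_runs:5d} runs -> estimated exceedance frequency: {freq:.4f}")
```

With a single 50-year run, the estimated frequency of such an event bounces around wildly (often landing on exactly zero); with hundreds of runs it settles near the distribution’s true value of about three percent per year.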
The second paper, published in the Journal of Climate, comes from lead author Sihan Li, a PhD student of Mote’s. (See this month’s Featured Researcher.)
Compared with other regional modeling efforts, Li’s maps of model variables like temperature and precipitation more closely resemble observations; for example, mountains are much colder and usually wetter than valleys.
In addition, Li and colleagues show how the accuracy of simulated changes at a local level depends on the number of runs they use.
Unlike almost all other regional modeling efforts, the climateprediction.net experimental setup allows researchers to vary the model’s physics parameters, allowing for better quantification of uncertainties in how small-scale meteorological features are represented. The idea is simple: make sure that small-scale physical processes in the model, such as how clouds form, behave accurately, and larger-scale variables, such as annual precipitation, are more likely to behave accurately as well.
This is important because physical processes in all climate models are represented by equations, some of which contain parameters that must be estimated (usually with the help of observations). For example, the ice fall speed parameter (the speed at which ice crystals fall within clouds) is important for the development of clouds and for determining both precipitation type (rain, sleet, hail, or snow) and precipitation amount. Li and colleagues’ paper investigates how these parameters affect important model output variables and shows that the simulation of precipitation can be improved by optimizing the parameter set.
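To illustrate the spirit of such a perturbed-physics experiment, here is a deliberately simplified sketch. The “model” is a made-up one-line function, and the parameter range and observed value are our own assumptions, but the workflow (sample plausible parameter values, compare each run with observations, keep the best fit) mirrors the idea described above:

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_model(ice_fall_speed):
    """Toy stand-in for a climate model: maps one physics parameter to
    simulated annual precipitation (mm). The functional form is invented
    purely for illustration."""
    return 900.0 + 120.0 * np.log(ice_fall_speed) + rng.normal(0, 20)

OBSERVED_PRECIP = 1000.0  # assumed observed annual mean (mm)

# Perturbed-physics ensemble: try many plausible parameter values and
# keep the one whose simulated output best matches observations.
candidates = np.linspace(0.5, 3.0, 26)   # assumed plausible range (m/s)
errors = {v: abs(toy_model(v) - OBSERVED_PRECIP) for v in candidates}
best = min(errors, key=errors.get)
print(f"Best-fitting ice fall speed: {best:.2f} m/s "
      f"(error {errors[best]:.1f} mm)")
```

In the real experiments, of course, each “run” is a full climate simulation contributed by volunteers’ computers, which is precisely why this kind of parameter sweep is normally out of reach.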
To volunteer your computer to the effort, go to climateprediction.net/getting-started.
Citations: Mote, P.W., M.R. Allen, R.G. Jones, S. Li, R. Mera, D.E. Rupp, A. Salahuddin, and D. Vickers, 2015: Superensemble regional climate modeling for the western US. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-14-00090.1.
Li, S., P.W. Mote, D. Vickers, R. Mera, D.E. Rupp, A. Salahuddin, M.R. Allen, and R.G. Jones, 2015: Evaluation of a regional climate modeling effort for the western US using a superensemble from climateprediction.net. J. Climate, doi:10.1175/JCLI-D-14-00808.1.
Photo: The climateprediction.net program running on a user’s desktop. (Photo Credit: Oregon Climate Change Research Institute)
A professor of atmospheric sciences at Oregon State University, Philip Mote heads CIRC’s Climate Science activity. Along with co-leading CIRC, Phil directs the Oregon Climate Change Research Institute (OCCRI) and the Oregon Climate Service, and has helped co-lead several long-term research projects looking into the impacts of climate change. You might also find him rowing along the Northwest’s scenic waterways.
To stay up to date on the latest climate science news for the Northwest, subscribe to the CIRCulator.