Comparing Methods to Constrain Future European Climate Projections Using a Consistent Framework

Publications

Lukas Brunner, Carol McSweeney, Andrew P. Ballinger, Daniel J. Befort, Marianna Benassi, Ben Booth, Erika Coppola, Hylke de Vries, Glen Harris, Gabriele C. Hegerl, Reto Knutti, Geert Lenderink, Jason Lowe, Rita Nogherotto, Chris O’Reilly, Saïd Qasmi, Aurélien Ribes, Paolo Stocchi, Sabine Undorf

Journal of Climate
Work Package 2
Link: https://doi.org/10.1175/JCLI-D-19-0953.1


Highlights

Accurately modelling future climate change is essential if we are to adapt to it, and understanding the uncertainty in our projections is equally critical to using them effectively. Currently, several different methods are used to constrain the projection range – that is, to use observations to infer which projections might be more or less plausible. The aim of this study is to understand how much the constrained projection ranges produced by these methods differ, and why. The team found that different constraint procedures often produce differing results for mid-century European temperature and precipitation, although in other cases they agree quite closely. Understanding why this happens, and under what circumstances, will allow us to put our climate projections to best use, with a clearer picture of the uncertainties within them and of what they mean. Society will be able to adapt to and mitigate the impacts of climate change more effectively as we predict the changes more accurately, and specific users of climate projections will be able to interpret those projections according to their own needs and requirements.

Background
In order to react to future climate change appropriately, it is essential that we predict those changes as accurately as possible, so that adaptation and mitigation plans can be put in place to limit the impacts on society. Climate models produce a range of possible projections of future climate, and to increase their usefulness scientists often constrain those results, for example by giving more weight to the models or model runs that simulate historical change more accurately. Different constraint methods often produce different results. The aim of this study is to expose these differences, shed light on why they arise, and discuss how this understanding can improve our use of future climate projections and help users handle diverging results in their own situations.
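As a rough illustration of the weighting idea, the sketch below (not the paper's actual method) weights a handful of hypothetical model runs by how closely their simulated historical warming matches an observed estimate, then summarises the resulting constrained projection. All numbers, the Gaussian weighting form, and the tolerance parameter sigma are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the study's method): weight each climate model
# by how closely its simulated historical warming matches an observed estimate,
# then summarise the constrained projection of future warming.
import numpy as np

historical_sim = np.array([0.8, 1.1, 0.9, 1.4, 0.7])   # modelled past warming (°C), made up
future_proj    = np.array([2.1, 2.9, 2.4, 3.3, 1.9])   # projected future warming (°C), made up
observed_hist  = 1.0                                    # assumed observed past warming (°C)
sigma          = 0.2                                    # assumed tolerance of the weighting (°C)

# Gaussian-type weights: models further from the observation count for less.
weights = np.exp(-((historical_sim - observed_hist) / sigma) ** 2)
weights /= weights.sum()

def weighted_quantile(values, quantiles, weights):
    """Quantiles of `values` under the given weights (simple CDF inversion)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) - 0.5 * w
    return np.interp(quantiles, cdf, v)

p10, p50, p90 = weighted_quantile(future_proj, [0.1, 0.5, 0.9], weights)
print(f"constrained median: {p50:.2f} °C, 10-90% range: {p10:.2f}-{p90:.2f} °C")
```

In practice, constraint methods differ in which observations they use, how they translate the model-observation mismatch into weights or likelihoods, and how they treat internal variability and model interdependence, which is precisely why their constrained ranges can diverge.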

Results
The different methods predict an increase in temperature over Europe of between 2°C and 3°C in 2041-2060 relative to the present day (1995-2014), with the spread reduced by 20-30% compared to the raw model range. Median results for Central Europe and the Mediterranean are similar across methods, indicating closer agreement there than for Northern Europe. However, the spread of the constrained ranges varies much more between methods, meaning we are less confident in identifying the ‘worst case’ outcomes that are often of interest to ‘risk averse’ users.

Results for future precipitation change are more variable. The methods disagree on the projected change in Central Europe and the Mediterranean: most indicate a drying trend, but the reduction in median precipitation ranges up to 25% depending on the method. A slight indication of a precipitation increase is seen for Northern Europe, but the median change lies within the climate’s natural variability for all methods. These variable results demonstrate the importance of better understanding the differences between constraint methods.

Methods
This paper compares several methods for quantifying the uncertainty in climate projections, using projections of mid-century temperature and precipitation change under a high-emission scenario as a standardised basis for comparing the various approaches. The methods are applied to different regions (Northern Europe, Central Europe, the Mediterranean, and Europe as a whole) to test whether there are regional differences in the effectiveness of the various constraint methods. The median and spread of each method's constrained range are then compared to isolate differences between the procedures.
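To illustrate this comparison step, the following sketch (with synthetic numbers, not data from the study) summarises a hypothetical constrained distribution from each of three methods by its median and 10-90% spread, and reports how much each spread narrows relative to the raw ensemble range. The method names, sample sizes, and percentile choices are assumptions for illustration only.

```python
# Illustrative sketch of the comparison step: given a constrained distribution of
# mid-century warming from each method (samples here are synthetic), compare the
# medians and 10-90% spreads against the unconstrained model range.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(2.6, 0.8, 500)                 # stand-in for the raw model range
constrained = {
    "method_A": rng.normal(2.4, 0.55, 500),     # hypothetical constrained outputs
    "method_B": rng.normal(2.7, 0.60, 500),
    "method_C": rng.normal(2.5, 0.50, 500),
}

def summary(samples):
    lo, med, hi = np.percentile(samples, [10, 50, 90])
    return med, hi - lo

raw_med, raw_spread = summary(raw)
print(f"raw ensemble: median {raw_med:.2f} °C, 10-90% spread {raw_spread:.2f} °C")
for name, samples in constrained.items():
    med, spread = summary(samples)
    reduction = 100 * (1 - spread / raw_spread)
    print(f"{name}: median {med:.2f} °C, spread {spread:.2f} °C "
          f"({reduction:.0f}% narrower than raw)")
```

Comparing both the central estimate and the width of each constrained range is what exposes the cases of agreement and disagreement described in the Results section.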

Policy relevance
This study helps us understand the sources of uncertainty in climate projections and how to obtain useful results from them, allowing policymakers and stakeholders to make the best use of those projections. Different users of climate projections have different preferences and thresholds of risk, so presenting them with a clear description of the uncertainty in a projection allows them to be as well informed as possible when making decisions. The projections of changes in temperature and precipitation presented in this study are also useful for planning strategies to mitigate their impacts and adapt to future climate change.

Abstract

Political decisions, adaptation planning, and impact assessments need reliable estimates of future climate change and related uncertainties. To provide these estimates, different approaches to constrain, filter, or weight climate model projections into probabilistic distributions have been proposed. However, an assessment of multiple such methods to, for example, expose cases of agreement or disagreement, is often hindered by a lack of coordination, with methods focusing on a variety of variables, time periods, regions, or model pools. Here, a consistent framework is developed to allow a quantitative comparison of eight different methods; focus is given to summer temperature and precipitation change in three spatial regimes in Europe in 2041–60 relative to 1995–2014. The analysis draws on projections from several large ensembles, the CMIP5 multimodel ensemble, and perturbed physics ensembles, all using the high-emission scenario RCP8.5. The methods’ key features are summarized, assumptions are discussed, and resulting constrained distributions are presented. Method agreement is found to be dependent on the investigated region but is generally higher for median changes than for the uncertainty ranges. This study, therefore, highlights the importance of providing clear context about how different methods affect the assessed uncertainty—in particular, the upper and lower percentiles that are of interest to risk-averse stakeholders. The comparison also exposes cases in which diverse lines of evidence lead to diverging constraints; additional work is needed to understand how the underlying differences between methods lead to such disagreements and to provide clear guidance to users.