Data for Children Collaborative


Dictionary Series: What do we mean when we talk about Future Climate Model Projections?

With climate change an ever-increasing presence in today’s news and media, much of the conversation centres on how policy today impacts the climate of the future. Leaders often talk about staying within aspirational limits of global temperature rise of 1.5 or 2.0°C, whilst papers and media highlight scientific studies showing how these changes will impact everything from flooding and farming to health and infrastructure. But what actually goes into creating the data for these reports? And how robust are they for seeing into the future?

By Dr James Mollard, University of Edinburgh


First, let us lay out exactly what a climate model is. Climate models are simply simulations of the Earth’s climate in some way. Note climate, and not weather. A climate model cannot tell you whether it will rain at 2pm on March 23rd in Stirling, nor is it intended to. Climate models are built to consider large-scale physics, so instead they can tell you the likelihood of seeing rain in March around Central Scotland. The first mathematical models of climate are considered to have appeared in the early 19th century, when several scientists used them to determine the impacts of the Earth’s orbit, ice ages and the composition of the atmosphere on temperature. At this stage, models treated the Earth as a single object, and calculations through time were done by hand, often by the thousands, using simple physics equations to calculate changes in the energy imbalance of the atmosphere, or radiative forcing. The most famous of these (rightly or wrongly!) came at the end of the century, when Arrhenius calculated the amount of heat retained by CO2 and water vapour, publishing a paper in 1896 claiming that doubling the CO2 concentration in the atmosphere would raise global temperatures by 5°C.
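The link between CO2 and warming that Arrhenius explored can be sketched with a modern simplified expression. This is a minimal illustration, not his actual method: it assumes the commonly used logarithmic approximation for CO2 forcing, ΔF = 5.35 ln(C/C₀) W/m², and a hypothetical climate sensitivity parameter of 0.8 K per W/m².

```python
import math

def co2_forcing(c_new_ppm: float, c_ref_ppm: float) -> float:
    """Radiative forcing (W/m^2) from a change in CO2 concentration,
    using the common logarithmic approximation."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# Assumed (illustrative) sensitivity: warming per unit forcing, K per (W/m^2).
LAMBDA = 0.8

forcing = co2_forcing(560, 280)   # a doubling of pre-industrial CO2
warming = LAMBDA * forcing
print(f"Forcing: {forcing:.2f} W/m^2, warming: {warming:.1f} K")
```

With these assumed numbers, a doubling of CO2 gives roughly 3 K of warming; Arrhenius, working by hand with the data of his day, arrived at a larger figure.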

Figure 1- A typical GCM setup, with physical processes being calculated at each timestep for multiple gridboxes across the globe – Figure comes from Edwards, 2011

Climate models evolved further over the years to consider the impacts of the ocean, the different levels of the atmosphere, regional differences in climate change and important weather parameters, such as rain and cloud! Today, climate models are massively more complex. The term “climate model” is slowly being replaced by “Earth System Model” (or ESM), as the complexity of the interactions represented includes more than simply the atmosphere and ocean. Most national institutions have their own version, and these differ in how they work. But all models will usually consist of a “General Circulation Model” (GCM), also known as an “Atmospheric Model” or “Dynamical Core” – this simulates the changes to the physical state of the atmosphere, allowing movement of air, transfer of heat and energy, and changes to the states of water and surfaces. These models split the Earth into many individual boxes in all three dimensions, with each round of calculations simulating the passing of between 20 minutes and 1 hour. At present, climate models have between 500,000 and 22,000,000 gridboxes to represent the Earth’s surface and atmosphere. And these calculations are done in every gridbox, every timestep.
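Some back-of-the-envelope arithmetic shows why that matters. Assuming an illustrative mid-range model of 5 million gridboxes and a 30-minute timestep (both figures chosen for the sketch, within the ranges quoted above):

```python
# Illustrative numbers only: count the gridbox updates needed to
# simulate one century at a 30-minute timestep.
gridboxes = 5_000_000          # an assumed mid-range model size
timestep_minutes = 30          # one physics update per box per step

steps_per_year = 365 * 24 * 60 // timestep_minutes
updates_per_century = gridboxes * steps_per_year * 100
print(f"{updates_per_century:.2e} gridbox updates per simulated century")
```

That is nearly 10 trillion gridbox updates for a single century-long run, each involving many physics equations, before any sub-models are added.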

Figure 2- The generic setup of the UKESM1, showing all the various sub-models and links between each used to create the UKESM1 - taken from https://ukesm.ac.uk/science-of-ukesm/

Alongside this GCM sit other models representing physical processes in the world that have impacts or feedbacks on the atmosphere. Figure 2 shows the UK Met Office’s setup for their ESM, the UKESM1. The Unified Model is the dynamical core, which is linked to a chemistry and aerosol model (UKCA-GLOMAP), which in turn links to models representing land physics, biogeochemistry, vegetation and (through a separate model known as a coupler, which represents exchanges between the ocean and atmosphere) ocean physics, including sea-ice and marine biogeochemistry. Each of these models runs based on the main dynamical core output, but also feeds back into the main model to influence the next set of calculations. In total, these equate to nearly 1 million lines of computer code. It begins to be understandable why these models can only be run on some of the world’s fastest supercomputers, especially if you are trying to run them to create 100 years of future climate.
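The coupling structure described above can be sketched in a few lines. This is a toy illustration with hypothetical names and made-up numbers, not the UKESM1 code: the dynamical core steps first, then each sub-model updates the shared state, and those updates feed into the next step.

```python
# Toy state: three temperatures in Kelvin (illustrative values only).
state = {"atmosphere": 288.0, "ocean": 287.0, "land": 288.5}

def dynamical_core(state):
    # Toy stand-in for the GCM: nudge the atmosphere towards the ocean.
    state["atmosphere"] += 0.1 * (state["ocean"] - state["atmosphere"])

def ocean_model(state):
    # Toy coupler-style exchange: the ocean absorbs heat from the atmosphere.
    state["ocean"] += 0.05 * (state["atmosphere"] - state["ocean"])

def land_model(state):
    # Toy land physics: the surface tracks the air above it.
    state["land"] += 0.2 * (state["atmosphere"] - state["land"])

for step in range(48):          # 48 half-hour steps = one simulated day
    dynamical_core(state)       # main physics first...
    ocean_model(state)          # ...then each sub-model, whose updated
    land_model(state)           # state feeds back into the next step
```

Real couplers also handle regridding between components and conservation of energy and water; the point here is only the feedback loop.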


To begin to run a climate model, you need a significant amount of data. Some data can be read in once at the start – data that won’t change throughout the entire simulation, including terrain heights, land/ocean positions and the rotation of the Earth. Other data is needed only to give the model a starting point, as after that it is calculated by the models themselves. Sea-ice extent, land surface cover, vegetation and weather variables are known for a set point in time thanks to amazing observational datasets produced by satellites, aircraft, radar and ground sites, so these can be loaded in at the start only. We also need to consider inputs that occur over time – such as emissions of gases and aerosols into the atmosphere, or changes to land surface use. For these, we need a dataset that encompasses the entire globe for the entire period of the simulation.
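The three kinds of input data can be summarised as a small (hypothetical) data structure; the variable names are illustrative, not the field names of any real model:

```python
# Three categories of climate model input, as described above.
MODEL_INPUTS = {
    # Read once; never changes during the run.
    "static": ["terrain_height", "land_sea_mask", "earth_rotation_rate"],
    # Read once to initialise; the model evolves these itself afterwards.
    "initial": ["sea_ice_extent", "land_cover", "vegetation", "weather_state"],
    # Must be supplied for every point in simulated time.
    "time_varying": ["emissions", "aerosols", "land_use_change"],
}
```

It is the last category that becomes a problem once the simulation runs past the present day, as the next sections discuss.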

Since we can choose *when* to start our climate model, we can start it at a point in the past and see how the model changes over time, comparing it to what happened in reality – this is called hindcasting. We also know that the data we have input to begin the model has an uncertainty in it, and that slight changes in the initial data can lead to much larger changes further down the line – often referred to as the “butterfly effect”. By changing the input data within the uncertainties, we can create another model timeline to compare. Doing this multiple times gives us a spread of the possible changes we could see, called an ensemble. Comparing whether our reality fits within the possibilities given by our model is a useful method of determining whether the model itself is good enough to represent the Earth’s processes. Some models will be able to represent some phenomena better than others, depending on what drives those phenomena and how they are represented in each model.
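The butterfly effect, and why it justifies ensembles, can be demonstrated with a toy chaotic system rather than a real climate model. Here a small ensemble of near-identical starting values is run through the logistic map (a standard example of chaos); the members start within an assumed tiny "uncertainty" and end up spread widely apart.

```python
def step(x):
    return 3.9 * x * (1 - x)   # logistic map in its chaotic regime

# Perturb the initial condition within an assumed uncertainty of 4e-6.
ensemble = [0.5 + 1e-6 * i for i in range(5)]

for _ in range(50):            # advance every ensemble member 50 steps
    ensemble = [step(x) for x in ensemble]

spread = max(ensemble) - min(ensemble)
print(f"Spread after 50 steps: {spread:.3f}")
```

A real climate ensemble works on the same principle: the spread across members is what lets us ask whether observed reality falls within the range the model considers possible.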

Once there is confidence that the model is good at replicating the past, then, and only then, is it used to consider the future. The future is, of course, unknown, making the need for some data to be continuously added to the model problematic. Much of the data needed relies on local, national, and international governance and policy. Does the world go carbon neutral quickly, reducing greenhouse gas emissions? Or do only some countries do that? Or do we move away from some farming practices that produce lots of methane? Do some countries increase deforestation rates? Do our cities get bigger, replacing green fields with black tarmac? Do ships change routes, moving where diesel engine emissions are released?

It is impossible to know for certain which of these things will happen, or when. To get around this, we create scenarios. These are single pathways that follow a specific route through time, with a series of assumptions and predictions about how the future will go. Using different scenarios allows us to explore the range of possible futures. In the past, scenarios have been defined by “best-case” and “worst-case” outcomes, or by changes in specific characteristics. But to truly demonstrate the range of potential future impacts on climate, there needs to be a consistent set of future scenarios that all models can use.

Enter the World Climate Research Programme, which runs the Coupled Model Intercomparison Project (CMIP). This project runs every few years and provides the opportunity to compare the top climate models in a range of predefined simulations. One part of the project looks at future changes, meaning it provides a full set of scenarios for future climate model simulations. The most recent phase, CMIP6, considers the future scenarios used in other areas of social and economic prediction, known as the shared socio-economic pathways (SSPs), which consider everything from population to health, finance to energy usage, and policy to conflict. In total, there are 5 SSPs, representing a range of possible future trajectories, from a world that shifts pervasively towards sustainable development (SSP1) through to a fossil-fuelled developed world that focuses on economic factors and assumes technology will solve environmental problems (SSP5). These are used alongside a Representative Concentration Pathway (RCP), numbered after the radiative forcing (in W/m²) reached in the atmosphere by the end of the century (2.6, 4.5, 6.0 and 8.5). Combined, they produce new future emission and land use datasets that can be used in our climate models. Of course, they cannot determine EVERY possible future. But they can give a range of possible futures, and an understanding of the potential impacts we can expect to see if we follow something similar.
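The combined labels you will see in reports, such as "SSP5-8.5", encode both pieces just described: the SSP storyline number and the end-of-century radiative forcing in W/m². A small sketch of reading such a label (the function name is hypothetical):

```python
def parse_scenario(label: str) -> tuple[int, float]:
    """Split a combined label like 'SSP5-8.5' into its storyline
    number and end-of-century forcing level (W/m^2)."""
    ssp_part, forcing_part = label.split("-")
    return int(ssp_part.removeprefix("SSP")), float(forcing_part)

print(parse_scenario("SSP5-8.5"))   # storyline 5, forcing 8.5 W/m^2
```

So "SSP1-2.6" is the sustainable-development storyline paired with a low forcing level, while "SSP5-8.5" pairs fossil-fuelled development with very high forcing.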

Figure 3- Emissions of CO2, Methane and Sulphur Hexafluoride (all greenhouse gases) for each future scenario used in CMIP6. Figure taken from Figure 2, Meinshausen et al., 2020

In total, over 100 models from more than 50 centres have used these future scenarios to produce datasets that stretch to the end of the century, and some further. It is a huge, global effort, but it results in the ability not only to compare model against model, but also to consider how our actions today can shape the climate of the future across the world. The data itself can then be used as input for other models, such as agriculture or finance models. These, along with the data from the climate models themselves, are massively important in determining policy, as comparisons between inaction and action can be spelled out not just as a value of radiative forcing or a CO2 concentration, but in real terms that are easier to understand: the increase in heatwaves, the frequency of drought, the dollars lost, and in many, many cases, the difference in the numbers of deaths.

 A word of reflection from the Collaborative

As much as data does so well in enabling us to look at plausible future scenarios in what is an immensely complex field, it needs context. It cannot be seen as the be-all and end-all. In the efforts of climate action, we need it to aid in telling stories that everyone can relate to, to give an account of the tragic (or hopeful) outcomes of our decisions on human lives and our whole ecosystem.   

References

Edwards, P.N. (2011), History of climate modelling. WIREs Clim Change, 2: 128-139. https://doi.org/10.1002/wcc.95

Meinshausen, M., Nicholls, Z. R., Lewis, J., Gidden, M. J., Vogel, E., Freund, M., ... & Wang, R. H. (2020). The shared socio-economic pathway (SSP) greenhouse gas concentrations and their extensions to 2500. Geoscientific Model Development, 13(8), 3571-3605.

Dr Mollard contributed to highly impactful work on the Children Climate Risk Index (CCRI) project, leading to UNICEF’s Heatwave Report in 2022.