Paths out of uncertainty: Increasing extreme confidence

Nov 17, 2013

The discussion on climate change focuses on long-term and average changes: globally, as all the scientific climate models predict, it will be warmer on Earth at the end of the century. For decision-makers and people affected by climate change, however, information on the frequency and intensity of extreme events such as heat and cold extremes, heavy rainfall or dry spells is at least as important as indications of average values. Moreover, projections for the next ten, twenty, thirty or forty years are usually more relevant to them than the long-term view to the end of the century. The problem: for the short and medium term, the models yield widely differing results.

Does that mean that the models are not working? No, says Erich Fischer, a senior scientist at the Institute for Atmospheric and Climate Science at ETH Zurich, who has been investigating the causes of the major discrepancies in the short and medium-term projections. In a study just published in the journal "Nature Climate Change", he concludes that they are mostly caused by natural, chaotic and thus unpredictable fluctuations in the climate system. There is certainly potential for improving the models, Fischer says. "However, even if we had a perfect model for the medium term, there would still be uncertainties."

Butterfly effect simulated

The researchers obtained their results from a simulation of the well-known butterfly effect, which states that slightly different starting conditions can have a vast influence on how things develop over the longer term ("Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?"): the scientists calculated the future climate twenty-one times using one of the leading climate models, deliberately changing the temperatures on Day 1 of the calculation ever so slightly for every point on Earth – by a maximum of one hundred billionths of a degree Celsius.
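
This perturbed-ensemble design can be mimicked with a toy chaotic system. The sketch below is not the climate model used in the study; it merely integrates the classic Lorenz (1963) equations twice, from starting states that differ by a similarly tiny amount, and prints how quickly the two runs drift apart.

```python
# Toy illustration of the butterfly effect: NOT the climate model from the
# study, just the classic Lorenz (1963) system started twice from initial
# states that differ by 1e-11 - a perturbation as tiny as the one the
# researchers applied to every grid point.
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One classical Runge-Kutta step."""
    k1 = lorenz_rhs(state)
    k2 = lorenz_rhs(state + 0.5 * dt * k1)
    k3 = lorenz_rhs(state + 0.5 * dt * k2)
    k4 = lorenz_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(initial, n_steps=5000):
    """Integrate the system and return the full trajectory."""
    state = np.array(initial, dtype=float)
    trajectory = [state.copy()]
    for _ in range(n_steps):
        state = rk4_step(state)
        trajectory.append(state.copy())
    return np.array(trajectory)

reference = run([1.0, 1.0, 1.0])
perturbed = run([1.0 + 1e-11, 1.0, 1.0])   # tiny change in the starting state

# The runs are indistinguishable at first, then diverge completely.
separation = np.linalg.norm(reference - perturbed, axis=1)
for step in (0, 1000, 2000, 3000, 4000, 5000):
    print(f"step {step:5d}: separation = {separation[step]:.3e}")
```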

This revealed that the differences in annual maximum and minimum temperatures and in intense precipitation between 2016 and 2035 were almost as great across the realisations of this one model as the known differences between the various models. From these results the researchers concluded that the majority of the differences are due to the starting conditions, and thus to chaos, not to uncertainties in the models.
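
The comparison behind that conclusion can be sketched in a few lines. The numbers below are invented for illustration: one array stands in for 21 initial-condition realisations of a single model, the other for a set of different models, and the spread of each is compared.

```python
# Minimal sketch of the spread comparison described above, using invented
# numbers: 21 initial-condition realisations of one model versus a set of
# different models. Real analyses compare gridded projections of temperature
# and precipitation extremes, not one scalar per run.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projected changes in annual maximum temperature, 2016-2035 (deg C)
same_model_realisations = rng.normal(loc=0.6, scale=0.25, size=21)  # one model, 21 starting states
different_models = rng.normal(loc=0.6, scale=0.30, size=15)         # 15 distinct models

def spread(samples):
    """Standard deviation as a simple measure of spread."""
    return np.std(samples, ddof=1)

internal = spread(same_model_realisations)      # chaos / starting conditions
inter_model = spread(different_models)          # model differences

print(f"spread across starting conditions: {internal:.2f} deg C")
print(f"spread across different models:    {inter_model:.2f} deg C")
print(f"ratio (internal / inter-model):    {internal / inter_model:.2f}")
```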

What can be predicted and what can't

"Our study reveals that we have to live with uncertainties in local, medium-term projec-tions," says Fischer. A Swiss farmer, for instance, cannot expect any accurate predic-tions on the changes in climate extremes on the Swiss Central Plateau in the next thirty to forty years, even if it is clear that the heat extremes and periods of heavy rainfall in the long-term trend will be more intense by the end of the century.

However, this does not mean that no scientific projections about the coming decades are possible. The ETH Zurich scientists have found ways to make such projections – by considering large regions or the entire world. This enabled them to demonstrate that the intensity of heat extremes and periods of heavy rainfall will not increase equally everywhere on Earth: while heat extremes will become significantly more intense on two thirds of the land surface within three decades, there will be no significant changes on the remaining third. And as far as heavy precipitation is concerned, it will increase by ten per cent or more in a quarter of the area and by less than ten per cent in the remaining three quarters.
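
A minimal sketch of how such an area-aggregated statement can be derived, assuming made-up per-grid-cell numbers and a deliberately crude significance test (projected change larger than the internal-variability noise); the study's actual statistical method is not reproduced here.

```python
# Crude sketch of area aggregation: at a single grid cell the projected change
# can be hidden by internal variability, but the fraction of the land surface
# where the change stands out is a robust, large-scale quantity. The numbers
# and the "signal larger than noise" test are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000                                              # pretend land grid cells

signal = rng.normal(loc=0.5, scale=0.4, size=n_cells)         # forced change per cell
noise = np.abs(rng.normal(loc=0.0, scale=0.4, size=n_cells))  # internal variability per cell

significant = np.abs(signal) > noise                          # crude per-cell test
print(f"fraction of cells with a detectable change: {significant.mean():.0%}")
```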

Risks predictable

The ETH Zurich researchers make similar projections for large individual regions such as Europe, the USA, China or Australia. In all these regions, the climate models predict an increase in the intensity of heat waves over the next thirty years and over the next fifty years. For institutions with a global focus, such as reinsurance companies or food multinationals, such predictions are extremely useful, even if it is unclear where exactly the changes will occur. "The different models agree that changes in extreme weather events will occur and how strong they will be, but not where they will be the strongest. This is largely determined by chaos," says Fischer. In physics, it is common that an individual event cannot be predicted, but the average can. Fischer compares it with road traffic: if speed limits are increased, we can predict that there will be more traffic accidents. Where exactly the next accident will take place, however, we cannot tell.
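
Fischer's traffic analogy can be made concrete with a short Monte Carlo sketch; the accident probabilities and the number of road segments below are invented for illustration.

```python
# Fischer's traffic analogy as a short Monte Carlo: raising the per-segment
# accident probability reliably raises the yearly total, but which segment the
# next accident happens on remains unpredictable. All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n_segments = 1_000

def yearly_accidents(p):
    """Simulate one year: each road segment has an accident with probability p."""
    return rng.random(n_segments) < p

before = yearly_accidents(p=0.02)   # lower speed limit
after = yearly_accidents(p=0.03)    # higher speed limit

print(f"total accidents, before vs after: {before.sum()} vs {after.sum()}")
print(f"first affected segment, before vs after: {np.argmax(before)} vs {np.argmax(after)}")
```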

Explore further: A remote Swiss valley models global climate

More information: Nature Climate Change DOI: 10.1038/nclimate2051

Related Stories

The limitations of climate models

Nov 07, 2012

(Phys.org)—How accurate is the latest generation of climate models? Climate physicist Reto Knutti from ETH Zurich has compared them with old models and draws a differentiated conclusion: while climate modelling ...

A remote Swiss valley models global climate

Oct 04, 2013

(Phys.org) —EPFL scientists have developed a new statistical model of extreme rainfall in the Swiss Val Ferret region, which can be used across the globe.

Central European summer temperature variability to increase

Dec 18, 2012

More extreme heat waves have been observed in central Europe in recent years as summer temperature variability has increased on both daily and interannual timescales. Models project that as the climate warms throughout the ...

Increases in extreme rainfall linked to global warming

Feb 01, 2013

(Phys.org)—A worldwide review of global rainfall data led by the University of Adelaide has found that the intensity of the most extreme rainfall events is increasing across the globe as temperatures rise.

User comments: 20

VENDItardE
1.2 / 5 (20) Nov 17, 2013
"Does that mean that the models are not working? No, says Erich Fischer, a senior scientist at the Institute for Atmospheric and Climate Science"

Every single indicator that the models point to tell us that the models aren't working.

Jim4321
1.2 / 5 (18) Nov 17, 2013
This article seems to imply that the models cannot be trusted at any time scale -- including the century scale. This follows because the parameters of the model were established by fitting to the rapid rise in global temperatures between roughly 1980 and 2000, which were assumed to be due entirely to CO2. However, if the models are not reliable over 20 and 30 year periods as claimed-- then their parameters cannot be fit by measured temperature changes over 20 or 30 year periods. The models seem to have no predictive value whatsoever.
antialias_physorg
4.3 / 5 (8) Nov 17, 2013
Did you guys even read the article?
Obviously not.
HannesAlfven
1.2 / 5 (17) Nov 17, 2013
The models are also problematic insofar as they lack grounding in the much larger discussion over concepts, propositions and whatever competing models we might construct, based upon competing worldviews. What this does is codify the models' underlying propositions as beyond the context of the discussion. The fact is that concepts are the atoms of thought, and thus any attempt to alter our conceptions of how the universe works would necessarily rely upon a discussion of the propositions themselves. So, to talk about the climate models in purely mathematical terms denies us an ability to question any of it, from the very start. We do not alter our conceptions through mathematics. The math is there as a reality check. The concepts should be there to permit us to back out of the idea if we so choose, and to compare it with other models, in order to ensure that our decisions are meaningful.

But, propositions about co2 are simply treated as facts everywhere one looks.
runrig
4.4 / 5 (7) Nov 17, 2013
However, if the models are not reliable over 20 and 30 year periods as claimed-- then their parameters cannot be fit by measured temperature changes over 20 or 30 year periods. The models seem to have no predictive value whatsoever.


You appear to be under the illusion that GCM's are like weather forecasts and the devil is in the detail. All they can establish is the magnitude of warming over a LONG time-scale. As all cyclic events in the climate system are averaged out (30 years should do it ). This is because the likes of ENSO/PDO have indeterminate cycle lengths. What the IPCC publish are ensembles of separate runs and a mean is derived (the error bounds should encompass the short-term variation - though may not for a time ).
In other words you seem to misunderstand what the GCM's can and cannot do.
NikFromNYC
1.2 / 5 (18) Nov 17, 2013
Breaking news: climate *is* chaotic on the multibillion year old ocean-dominated Earth, on century time scales, not just decades.

Potential reinstatement of Global Cooling is now likely, hopefully mitigated by China's big emissions boost. Policy makers told you to pig out on sugar, bread and pasta for decades with their Food Pyramid. Oops!

"...the scientists calculated the future climate twenty-one times using one of the leading climate models, deliberately changing the temperatures on Day 1 of the calculation ever so slightly for every point on Earth – by a maximum of one hundred billionths of a degree Celsius."

...as they try to model crazy Van Gogh worthy ocean currents that control heat exchange by using rectangular grid boxes that become puny as they reach the poles!

...and they still add massive water vapor positive feedback without being able to also model cooling cloud cover.

SOS = Same Old Sh*t, now with percentages!

Human time scales are puny.
NikFromNYC
1.2 / 5 (19) Nov 17, 2013
OMG they had huge model swings by mere temperature noise of 0.00000000001° C.

...as they claim 95% certainty via their beloved IPCC.

Did they try, say, 0.01° C noise? Their smoking computer maybe didn't handle that well.

Then they eke out a future junk science cherry pickers paradise, now peer reviewed:
"A multimember initial condition ensemble carried out with an Earth system model shows that trends towards more intense hot and less intense cold extremes may be masked or even reversed locally for the coming three to five decades even if greenhouse gas emissions rapidly increase."
Jim4321
1 / 5 (13) Nov 17, 2013
You appear to be under the illusion that GCM's are like weather forecasts and the devil is in the detail. All they can establish is the magnitude of warming over a LONG time-scale. As all cyclic events in the climate system are averaged out (30 years should do it ). This is because the likes of ENSO/PDO have indeterminate cycle lengths. What the IPCC publish are ensembles of separate runs and a mean is derived (the error bounds should encompass the short-term variation - though may not for a time ).
In other words you seem to misunderstand what the GCM's can and cannot do.

What I am saying is that you cannot use retrodiction to establish the basic model parameters because of the chaotic nature of the system on the 20 to 30 year scale -- as illustrated by the author. Hence, we have no model basis for predicting anything about future temperatures.
runrig
5 / 5 (5) Nov 17, 2013
What I am saying is that you cannot use retrodiction to establish the basic model parameters because of the chaotic nature of the system on the 20 to 30 year scale -- as illustrated by the author. Hence, we have no model basis for predicting anything about future temperatures.


But they do extensive hind-casts with known data of the climate cycles and calibrate their models thus.

These forecasts then are integrated into the (LONG) term future (without explicit knowledge of the cycle lengths). All they can do with any measure of skill is work out an average slope of temperature change. If you look at a slope falling away from the trend that is NO surprise. The error bounds gained from the sum of individual runs should contain the ave global trace.

The system is chaotic on a 20-30yr scale – which is why you need to ignore the signal within that time frame. Beyond that the basic energy in minus energy out equation rules.

http://iopscience.../article
runrig
5 / 5 (5) Nov 17, 2013
...and they still add massive water vapor positive feedback without being able to also model cooling cloud cover.


That's because they don't need to Nik: The warming caused by clouds (in Spencer's ENSO theory) will more/less cancel cooling. Not all convective cloud is Cb (Cumulonimbus). Much is merely inversion topped SC (stratocumulus) and as such has a greater or neutral warming influence. WV feed-back, as I keep saying, will not change relative humidity, only absolute humidity – hence same clouds (averaged globally). The system can be averaged Nik and the fundamental ENERGY IN – ENERGY OUT equation rules. Unfortunately excess CO2 and other GHG's make the out bit a bit less than it should be to stabilise temps. So the SB Law comes into play.
Jim4321
1 / 5 (13) Nov 17, 2013
"The system is chaotic on a 20-30yr scale – which is why you need to ignore the signal within that time frame. Beyond that the basic energy in minus energy out equation rules."

The CO2 signal presumably has only been significant since 1980 -- with a period of rapid warming until 2000 -- not much since. Hence if you are going to fit the model and its climate sensitivity to the data -- you only have a 20 to 35 year period for retrodiction. The chaos makes this data fitting unreliable (per the article). Hence we cannot establish the model parameters -- esp. the climate sensitivity parameter.

My understanding is that the direct CO2 heating is only about 1/3 of what is needed to explain 1980 - 2000. The rest of the increase in temperature is attributed to black box positive feedback -- the climate sensitivity. Its this black box positive feedback that I am saying cannot be established by retrodiction.
runrig
5 / 5 (5) Nov 17, 2013
The CO2 signal presumably has only been significant since 1980 -- with a period of rapid warming until 2000 -- not much since.


Jim:
The signal is/has been there much longer than that. As in the link I posted showing the trend minus the overlying climate cycles.
It is the climate cycles of unknown duration that are masking the underlying GHG signal – they will always do that given eg that the ENSO/PDO cycle affects global temp by upwards of 0.4C from warm to cold and back. This in a period of a decade or so. The total warming attributed to AGW amounts to just 0.8C. Then there are the solar cycle and aerosol pollution etc.

cont
runrig
5 / 5 (5) Nov 17, 2013
cont

The chaos makes this data fitting unreliable (per the article). Hence we cannot establish the model parameters -- esp. the climate sensitivity parameter.


Yes you can as, far as the hind-cast is concerned (esp as ensemble techniques are used to ave out internal chaos), always supposing you're modelling reality sufficiently well, then that is relevant for the future. Models need to be continually refined in light of observation but to say the climate sensitivity is not well enough defined because of the internal chaos misses the point of the basics. That climate cycles are internal chaos that play out to zero in the long term. GHG physics tells us the fundamental. BTW: the way I read it is that prediction down to local/regional level cant be made in the short/med term – which is of course correct. GCM's are global models intended to predict global average temperature changes.
Ducklet
1 / 5 (15) Nov 17, 2013
There needs to be more to verifying a hypothesis than merely having it match something so simple. Otherwise even I could pop out a dozen hypothesis that blanket the space and be virtually guaranteed one of them will bear out. That doesn't mean it's correct. Even a blind hen will find a grain once in a while - make enough predictions and you will eventually be correct. The correlation needs to be BETTER than random.
runrig
5 / 5 (3) Nov 18, 2013
There needs to be more to verifying a hypothesis than merely having it match something so simple. Otherwise even I could pop out a dozen hypothesis that blanket the space and be virtually guaranteed one of them will bear out. That doesn't mean it's correct. Even a blind hen will find a grain once in a while - make enough predictions and you will eventually be correct. The correlation needs to be BETTER than random.


Err ... the models have GHG physics plugged into them. In a hind-cast they have that and the known variable climate cycles ... on from that the GHG science and averaged out climate cycles.

The causation is inherent in the model, gained from empirical laboratory experimenting and theory. It worked for a past model of the climate .... and it says whatever of the climate into the future.
Jim4321
1 / 5 (6) Nov 18, 2013
runrig

I think the comment by ducklet may be argued as follows. The models cannot provide any detailed prediction of future temperature trends. What (apparently per article) they can say is that the global average temperature anomaly will be higher after some number of unspecified years (probably more than 30) and certainly by the year 3000. How many discrete pieces of information does this constitute : likely 3 or 4 almost certainly less than 10. So do we have 3 or 4 fitting parameters being used to predict 3 or 4 pieces of information? Now consider the model and hindcasting: how many parameters do you have to fit. For my work, I would consider 10 pieces of output enough to fit one parameter. My question to you: how many parameters must be obtained by backfitting the data? Ducklet further raises the fact that there are many different models. This fact essentially increases the effective number of fitting parameters and decreases the value of the data.
runrig
not rated yet Nov 18, 2013
Jim:
This fact essentially increases the effective number of fitting parameters and decreases the value of the data.

and
The models cannot provide any detailed prediction of future temperature trends.


We are not talking precision here. What is being done is reducing uncertainty in the computations. Hence the ensemble technique. In NWP it is used to quantify the probability of an event happening, subject to sensitivity to initial conditions.

I was not a modeler, just someone who was taught how it was done and had to use them in my line of work. I take the resultant ensemble spread to be a best measure of both the possible mean solution and the error bounds to be expected after putting in the known variables and inputs/outputs. And tweaking them to find that initial sensitivity. The number and variation of the modelling increases confidence and does not diminish it.

Cont
runrig
5 / 5 (1) Nov 18, 2013
Cont.

how many parameters must be obtained by backfitting the data
by? I take it you mean input.

Solar input.
Atmospheric/surface albedo
Orography
Land-use/vegetation
SST's
Sea-ice
Ocean currents
Atmospheric chemistry
Aerosols
GHG's
Current state of atmosphere eg temp/hum/wind/pressure fields

http://www.ipcc.c...9-6.html
http://www.climat...nce/gcm/
http://www.metoff...e-models
http://en.wikiped...ariables
Jim4321
1 / 5 (8) Nov 18, 2013
by? I take it you mean input.

No. There are undetermined parameters in the model that are estimated by back fitting the observed temperature history to the model; e.g., the climate sensitivity parameter. I assumed from your comments that you are familiar with the model and hence with this estimation problem. Quite seriously, I am asking how many undetermined parameters there are -- I don't know. It is the estimation of these parameters that are seriously limited if the underlying physical system is chaotic. The comments by the author indicate that what we can learn about these parameters will be severely limited. If the model has too many such parameters-- then the estimation problem becomes impossible. This is the gist of my initial comment.
runrig
not rated yet Nov 19, 2013
The comments by the author indicate that what we can learn about these parameters will be severely limited


Jim
Do you mean this comment?
""Our study reveals that we have to live with uncertainties in local, medium-term projec-tions," says Fischer. A Swiss farmer, for instance, cannot expect any accurate predic-tions on the changes in climate extremes on the Swiss Central Plateau in the next thirty to forty years, even if it is clear that the heat extremes and periods of heavy rainfall in the long-term trend will be more intense by the end of the century."

If so, yes, and I don't see a way around it. GCM's are far too broad-brush to infer climate specifics on regional and short temporal scales.

There are things that have to be parameterized because of sub-grid scale processes (eg clouds) and computational expense.
Cloud processes are the main variable in determining sensitivity and spread amongst the models.
http://www.ipcc.c...2-3.html