2018-2022 expected to be abnormally hot years

August 14, 2018, CNRS

This summer's worldwide heatwave makes 2018 a particularly hot year. And the next few years will be similar, according to a study led by Florian Sévellec, a CNRS researcher at the Laboratory for Ocean Physics and Remote Sensing (LOPS) (CNRS/IFREMER/IRD/University of Brest) and at the University of Southampton, and published in the 14 August 2018 edition of Nature Communications. Using a new method, the study shows that at the global level, 2018-2022 may be an even hotter period than expected based on current global warming.

Warming caused by greenhouse gas emissions is not linear. It appears to have paused in the early 21st century, a phenomenon known as a hiatus. A new method for predicting mean temperatures, however, suggests that the next few years will likely be hotter than expected.

The system, developed by researchers at CNRS, the University of Southampton and the Royal Netherlands Meteorological Institute, does not use traditional simulation techniques. Instead, it applies a statistical method to search 20th and 21st century climate simulations made using several reference models to find 'analogues' of current climate conditions and deduce future possibilities. The precision and reliability of this probabilistic system proved to be at least equivalent to current methods, particularly for simulating the hiatus at the beginning of this century.
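
To illustrate the analogue idea in general terms (this is only a minimal sketch, not the authors' PROCAST system; the function name analogue_forecast and the synthetic data are hypothetical), one can search a long simulated temperature series for past states that resemble the present one and treat their subsequent evolution as an empirical forecast ensemble:

```python
import numpy as np

def analogue_forecast(library, current_state, lead, n_analogues=50):
    """Toy analogue forecast: 'library' is a long simulated global-mean
    temperature anomaly series; we find past states closest to
    'current_state' and collect what happened 'lead' steps later."""
    candidates = library[:-lead]                    # states whose future is known
    distances = np.abs(candidates - current_state)  # similarity to the current state
    nearest = np.argsort(distances)[:n_analogues]   # indices of the closest analogues
    return library[nearest + lead]                  # their outcomes form the ensemble

# Synthetic stand-in for reference-model simulations
rng = np.random.default_rng(0)
library = np.cumsum(rng.normal(0.0, 0.1, 2000))
ensemble = analogue_forecast(library, current_state=library[-1], lead=5)
print("P(warmer than today in 5 steps) ~", np.mean(ensemble > library[-1]))
```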

The new method predicts that mean air temperature may be abnormally high in 2018-2022, higher than figures inferred from anthropogenic global warming alone. In particular, this is due to a low probability of intense cold events. The phenomenon is even more pronounced for sea surface temperatures, due to a high probability of heat events, which, under certain conditions, can cause an increase in tropical storm activity.

Once the algorithm is 'learned' (a process which takes a few minutes), predictions are obtained in a few hundredths of a second on a laptop. In comparison, supercomputers require a week using traditional simulation methods.

For the moment, the method only yields an overall average, but scientists would now like to adapt it to make regional predictions and, in addition to temperatures, to estimate precipitation and drought trends.

More information: Florian Sévellec et al, A novel probabilistic forecast system predicting anomalously warm 2018-2022 reinforcing the long-term global warming trend, Nature Communications (2018). DOI: 10.1038/s41467-018-05442-8

26 comments

evropej
1.3 / 5 (8) Aug 14, 2018
How was the model validated? It is easy to create models of anything. Until you verify that your model reflects reality, it's just a model and nothing more.

"Once the algorithm is 'learned' (a process which takes a few minutes), predictions are obtained in a few hundredths of a second on a laptop. In comparison, supercomputers require a week using traditional simulation methods."

This statement itself is enough for me to discredit any merit for this article!
Nik_2213
4.4 / 5 (7) Aug 14, 2018
But, they have made their prediction and must stand by it. THAT is good Science.
StrubMeister
1.5 / 5 (8) Aug 14, 2018
you can't algorithm weather.

Whart1984
Aug 14, 2018
This comment has been removed by a moderator.
rrwillsj
3.3 / 5 (3) Aug 14, 2018
But evropej, how can we trust your evaluation of anybody else work?

Especially with you over there cackling to yourself. Crouching behind the dead bushes, counting the 24 pieces of silver you were paid by the Carbon Lobby!
howhot3
3.7 / 5 (6) Aug 14, 2018
YEAH! Well there it is climate deniers. Keep smoking that trump stuff and you too can believe,
SteveS
5 / 5 (6) Aug 15, 2018
How was the model validated?


https://www.natur...-05442-8

Read from "Evaluation metrics and perfect model approach" onwards and then come back with an informed argument.
wxhack
5 / 5 (6) Aug 15, 2018
weather models use real world data from automatic weather stations, upper air data from sondes, amdar data from aircraft, buoy data etc to formulate future scenarios. And there is constant verification going on that is fed back into the models.
evropej
2 / 5 (2) Aug 15, 2018
But evropej, how can we trust your evaluation of anybody else work?

Especially with you over there cackling to yourself. Crouching behind the dead bushes, counting the 24 pieces of silver you were paid by the Carbon Lobby!


First, please save the passive aggressive comments if you want me to consider your comments seriously!

I am for conservation and preservation. I am not bought or swayed by anyone. I am however a scientist who sees countless people in so many industries use simulations tools without validating the models. Tweak the parameters or assumptions and you get different results. And remember assumptions are probabilities of being correct and multiplying probabilities reduces the chance of actually being correct. So the more you assume, the less likely you are going to be correct.

I get this from a development or research perspective but not to release to general public.
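
For readers following the probability point above, here is a toy illustration of the arithmetic being invoked, under the strong assumption that the assumptions in question are independent (names and numbers are purely illustrative):

```python
# Toy arithmetic: if n independent assumptions each hold with probability p,
# the chance that all of them hold is p**n, which decays quickly with n.
p = 0.9
for n in (1, 3, 5, 10):
    print(f"{n:2d} assumptions at p={p}: all hold with probability {p**n:.2f}")
```
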
Captain Stumpy
4 / 5 (4) Aug 15, 2018
@evropej
I am however a scientist ... without validating the models
sorry, but Rule37
there are no [insert claim here] on the internet

with all due respect, considering the evidence, this claim is suspect

for starters, your perspective is, per your own words, "from a development or research perspective but not to release to general public", so that can mean anything from personal home research to commercial or DOD research

yet given your above comment, this looks more like commercial or personal research because any person with access to a google scholar search can see where climate models are getting more accurate over time matching observation - because that is how they validate the model

given the competition and the mostly free access, you can see for yourself

here is a page that goes into detail about the "reliability of the model" argument with plenty of references for you
https://skeptical...iate.htm
Captain Stumpy
4 / 5 (4) Aug 15, 2018
@evropej
I am however a scientist who
for two, you have a propensity to present arguments without evidence or references, which isn't indicative of a scientist, let alone one in development or research

when you make a claim like
How was the model validated?
when it's *literally* explained in the title as well as called out in the abstract
The prediction accuracy is equivalent to operational forecasts and its reliability is high
and the study has its methods freely accessible, then it calls into question the veracity of your claim

this isn't meant to be anything other than a pointer to the problems of your claim versus the evidence presented, so take it as you will

but more importantly, just understand that there are *literally* actual research scientists that may comment to you about things here, though in all honesty, most posters are just trolls and many of the profiles belong to a small minority of trolls
tpb
2.3 / 5 (3) Aug 15, 2018
"Instead, it applies a statistical method to search 20th and 21st century climate simulations made using several reference models to find 'analogues' of current climate conditions and deduce future possibilities."

So looking at climate simulations to generate future simulations.
Too bad all the models missed the global warming hiatus. So using those models means GIGO.

evropej
3 / 5 (2) Aug 15, 2018
Captain Stumpy, what is your background and profession?
Captain Stumpy
4.3 / 5 (6) Aug 15, 2018
@evropej
Captain Stumpy, what is your background and profession?
well, it's completely irrelevant considering, but here is some
retired Army, retired Investigator, former Truck Captain (professional) and Asst. Chief (Vol)
currently taking courses at MIT

why is that so important?
does it influence the evidence or your perception of it?

If you think I am wrong then please point to the evidence within the study (not the article above, but the linked study) and show where it is wrong, and why you think it is wrong

Then give links and references to support your claims

You know, like I did above
SteveS
5 / 5 (4) Aug 16, 2018
So looking at climate simulations to generate future simulations.
Too bad all the models missed the global warming hiatus. So using those models means GIGO.


"After having tested PROCAST in a perfect model setting, we now test the exact same system with real observations. (Note that no retuning before going to observations has been applied.) We reproduce the skill analysis with the observed internal variability, estimated as anomalies from the forced component in GMT and SST (Fig. 1). For this purpose we computed retrospective predictions of the past, or hindcasts, from 1880 to 2016. This procedure allows a full estimate of the predictive ability of our prediction system in the most realistic and operational setting. "

https://www.natur...-05442-8
evropej
1 / 5 (2) Aug 16, 2018
Captain Stumpy, I asked because unless you are knee deep into this type of stuff ( modeling in general ), it is hard to understand why I might make that argument.
SteveS
5 / 5 (3) Aug 16, 2018
How was the model validated?


https://www.natur...-05442-8

Read from "Evaluation metrics and perfect model approach" onwards and then come back with an informed argument.


@evropej

As a scientist you will now have read the paper and know how it was validated, so what are your particular issues with the forecast?
Captain Stumpy
4.3 / 5 (6) Aug 16, 2018
@evropej
I asked because unless you are knee deep into this type of stuff ( modeling in general ), it is hard to understand why I might make that argument
I'm sorry, but that doesn't make any sense at all, IMHO

if you have the background, you should present [x] argument with [y] evidence in rebuttal
this is typical of science

so look at this from our perspective:
you're (essentially) an anonymous poster (unknown and unverifiable)
you have a vague claim against the science (ignoring error margins and linked evidence)
you provide no evidence
you argue you're a professional scientist (again, unknown and unverifiable)

given your lack of anything verifiable, why should we accept your belief?

because you state you're a professional?
SteveS
5 / 5 (4) Aug 16, 2018
@evropej

If you're interested I'm a saggar maker's bottom knocker.

Do you think that invalidates any fact based arguments I make?
evropej
1 / 5 (2) Aug 16, 2018
So it is difficult to explain to people, whether or not they are in the science field, that models are just that, models. What a model or simulation outputs needs to be considered with a grain of salt. Typically, when I construct models, I try to verify that the model can produce expected results with a known configuration. What do I mean? Let's say that my model will predict the weather forecast. Ok, so I know the weather for the past so the test would be to input all the information to predict the weather for a known state in the past. If the model correctly predicts known states, then you can say with a higher level of confidence that the model can actually do what it says. This is in essence a poor explanation of model verification. Now, why do I have heartburn with this article? In order to predict global effects, the input data itself is not available for all the initial conditions ( temp, humidity, velocity, ground, solar, etc ). Hence, my position on this is not good.
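
The verification procedure described in the comment above, and the hindcast test the paper reports (quoted further down in the thread), can be sketched generically: issue retrospective forecasts using only data available before each target date, then score them against what was later observed. A minimal sketch, with a hypothetical hindcast_rmse helper and a trivial persistence baseline standing in for a real forecast system:

```python
import numpy as np

def hindcast_rmse(observations, forecast_fn, lead=1, start=50):
    """Score a forecast function by retrodicting known past values:
    at each step t, feed only data up to t, predict t+lead, and
    compare against the actual observation (RMSE over all hindcasts)."""
    errors = []
    for t in range(start, len(observations) - lead):
        prediction = forecast_fn(observations[:t], lead)  # past data only
        errors.append(prediction - observations[t + lead])
    return float(np.sqrt(np.mean(np.square(errors))))

# Trivial baseline: persistence ("the future looks like today")
persistence = lambda history, lead: history[-1]
obs = np.cumsum(np.random.default_rng(1).normal(0.0, 0.1, 150))  # stand-in record
print("persistence hindcast RMSE:", hindcast_rmse(obs, persistence))
```
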
Captain Stumpy
4.2 / 5 (5) Aug 16, 2018
@evropej
In order to predict global effects, the input data itself is not available for all the initial conditions ( temp, humidity, velocity, ground, solar, etc ). Hence, my position on this is not good
read this, please:
https://skeptical...iate.htm

that models are just that, models
models are just one more tool in the shed to eliminate confusion and build upon working knowledge

I understand what you're saying, but you've not made a good argument to discredit the model

mind you, the article (above on PO) is just the author's interpretation of the science in the study, so if you're going to discredit the model it should be with specific data to the model itself, not just dismissals because not all global data is available for all the initial conditions

PS - using an effective model can help identify potential and currently unknown initial conditions
SteveS
5 / 5 (3) Aug 17, 2018
@evropej

Typically, when I construct models, I try to verify that the model can produce expected results with a known configuration. What do I mean? Let's say that my model will predict the weather forecast. Ok, so I know the weather for the past so the test would be to input all the information to predict the weather for a known state in the past. If the model correctly predicts known states, then you can say with a higher level of confidence that the model can actually do what it says.


I refer you to my previous post starting....

""After having tested PROCAST in a perfect model setting, we now test the exact same system with real observations..."

Why did they do this? In your words they did this to "verify that the model can produce expected results with a known configuration"

The paper goes in to a lot more detail that you, as a scientist, would find interesting.

https://www.natur...-05442-8
evropej
1 / 5 (1) Aug 17, 2018
Steve,
I have read the article and it is informative. However, I still find it suffering from the same issue: a probabilistic model with certain levels of uncertainty being taken as a means of predicting future events. It's a machine guessing based on past data.

A 1 degree variance in the prediction which is within the tolerance of the model can vary the interpretation vastly for a region ( feels cold or feels hot ).

And it is ok for people not to agree or find the same article useful.
SteveS
5 / 5 (4) Aug 17, 2018
A 1 degree variance in the prediction which is within the tolerance of the model can vary the interpretation vastly for a region ( feels cold or feels hot ).


I think it's clear from your comment that you haven't read the paper.

1 PROCAST doesn't make regional predictions

Here we develop a novel method to predict global-mean surface air temperature and sea surface temperature, based on transfer operators, which allows, by-design, probabilistic forecasts.

2 1 degree variance is orders of magnitude larger than the predicted anomaly and would definitely not be "within the tolerance"

"This corresponds to an expected warm anomaly of 0.02 and 0.07 K for GMT and SST, which would reinforce the forced warm trend."

I'm surprised that you, a scientist, have made such basic mistakes.
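
For context on the "transfer operators" mentioned in the quoted passage: in a discretized setting such an operator can be estimated as a matrix of transition probabilities between binned anomaly states, from which a probability distribution over future states follows. The sketch below is only a generic illustration of that idea on a one-dimensional series, not the paper's implementation; the function transition_matrix and the synthetic data are assumptions for illustration.

```python
import numpy as np

def transition_matrix(series, edges, lead=1):
    """Estimate a discrete transfer operator: the probability of moving
    from anomaly bin i to bin j after 'lead' steps, counted from a
    long (e.g. simulated) series."""
    bins = np.digitize(series, edges)               # assign each value to a bin
    k = len(edges) + 1
    counts = np.zeros((k, k))
    for t in range(len(series) - lead):
        counts[bins[t], bins[t + lead]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0.0, 0.1, 5000))      # stand-in simulated anomalies
edges = np.quantile(series, [0.2, 0.4, 0.6, 0.8])   # five anomaly bins
P = transition_matrix(series, edges, lead=5)
current_bin = np.digitize([series[-1]], edges)[0]
print("P(bin in 5 steps | current bin):", np.round(P[current_bin], 2))
```
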
howhot3
5 / 5 (3) Aug 17, 2018
Warming caused by greenhouse gas emissions is not linear
Says it all. But as long as CO2 is continually added to the atmosphere, it just gets worse. Sorry to all the climate change deniers, but it just works like that.
