September 10, 2012

# Statisticians try to calculate probability of another 9/11-sized attack

Statisticians Aaron Clauset and Ryan Woodard have posted a paper to the preprint server *arXiv* arguing that, according to tried and true statistical models, another attack as big as, or bigger than, 9/11 is roughly as likely as not over the coming decade.

The basis of any statistical model is data, and in this case the data consisted of terrorist attacks committed between 1968 and 2007, a period that of course includes 9/11. The attack on the twin towers in New York stands out, as the number of people killed that day was nearly six times greater than in any other attack in the record. To see how accurate a given model might be, the researchers first calculated the likelihood of a 9/11-sized event actually happening, based on the historical data, using three standard types of model: power-law, exponential and log-normal distributions. After crunching the numbers they arrived at a probability of between eleven and thirty-five percent, which they say is substantial enough to show that what happened on 9/11 was not, statistically speaking, an unlikely event.
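The general recipe can be sketched in a few lines: fit a heavy-tailed distribution to per-event fatality counts, then ask how likely it is that at least one event in a sample of that size reaches the 9/11 death toll. The snippet below is a toy illustration in Python using synthetic data and a simple power-law (Hill) fit; it is not the authors' code, and the tail threshold, the synthetic sample and the exact figures are assumptions chosen only for illustration.

```python
import numpy as np

# Toy stand-in for a per-event fatality list; the study itself uses a
# global database of roughly 13,000 deadly terrorist events (1968-2007).
rng = np.random.default_rng(0)
fatalities = np.floor(1 + rng.pareto(1.4, size=13_000)).astype(int)

X_LARGE = 2749   # approximate 9/11 death toll, the "large event" threshold
XMIN = 10        # assumed tail threshold above which a power law holds

tail = fatalities[fatalities >= XMIN].astype(float)

# Maximum-likelihood (Hill) estimate of the power-law exponent alpha
alpha = 1.0 + len(tail) / np.sum(np.log(tail / XMIN))

# Power-law survival function: P(X >= x | X >= xmin) = (x / xmin)^(1 - alpha)
p_given_tail = (X_LARGE / XMIN) ** (1.0 - alpha)
p_event = p_given_tail * len(tail) / len(fatalities)  # unconditional per-event probability

# Probability that at least one of the n observed events is 9/11-sized or larger
n = len(fatalities)
p_hist = 1.0 - (1.0 - p_event) ** n
print(f"alpha ~ {alpha:.2f}; P(at least one event >= {X_LARGE} deaths) ~ {p_hist:.2f}")
```

The 11-35 percent figure in the paper comes from doing this more carefully: comparing several candidate tail models and propagating the fitting uncertainty with a bootstrap, as described in the abstract below.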

The two then applied the same models looking forward and arrived at a likelihood of another 9/11-type attack over the next decade of between twenty and fifty percent, depending on which model was used and assuming that conditions remain as they are, i.e. that the average number of attacks per year (approximately 2,000) stays the same. But, recognizing that the assumption that things will hold steady is not itself very realistic, they also tried factoring in destabilizing scenarios such as rising food prices, or conditions calming down or growing worse in two of the current hot spots for terrorism: Iraq and Afghanistan. In such cases the models became truly alarming, indicating that under the worst-case scenarios the likelihood of another event as deadly as 9/11 rises to nearly ninety-five percent.
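The forward-looking number follows from the same per-event probability: if roughly r attacks occur each year and that rate persists for T years, the chance of at least one 9/11-sized event is 1 - (1 - p)^(rT). The quick check below uses an illustrative per-event probability chosen only to show the arithmetic; the paper reports the decade-level probabilities, not this value.

```python
# Illustrative per-event probability of a 9/11-sized outcome; this exact
# figure is an assumption for demonstration, not a result from the paper.
p_event = 3e-5
attacks_per_year = 2000   # rate quoted in the article
years = 10

p_decade = 1.0 - (1.0 - p_event) ** (attacks_per_year * years)
print(f"P(at least one 9/11-sized attack in {years} years) = {p_decade:.2f}")
# -> about 0.45, i.e. within the twenty-to-fifty-percent range the models produce
```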

**More information:** Estimating the historical and future probabilities of large terrorist events, arXiv:1209.0089v1 [physics.data-an], arxiv.org/abs/1209.0089

**Abstract**

Quantities with right-skewed distributions are ubiquitous in complex social systems, including political conflict, economics and social networks, and these systems sometimes produce extremely large events. For instance, the 9/11 terrorist events produced nearly 3000 fatalities, nearly six times more than the next largest event. But, was this enormous loss of life statistically unlikely given modern terrorism's historical record? Accurately estimating the probability of such an event is complicated by the large fluctuations in the empirical distribution's upper tail. We present a generic statistical algorithm for making such estimates, which combines semi-parametric models of tail behavior and a non-parametric bootstrap. Applied to a global database of terrorist events, we estimate the worldwide historical probability of observing at least one 9/11-sized or larger event since 1968 to be 11-35%. These results are robust to conditioning on global variations in economic development, domestic versus international events, the type of weapon used and a truncated history that stops at 1998. We then use this procedure to make a data-driven statistical forecast of at least one similar event over the next decade.
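As a rough illustration of the "semi-parametric tail model plus non-parametric bootstrap" recipe the abstract describes, one can resample the event list with replacement, refit the tail each time, and look at the spread of the resulting probabilities. The sketch below reuses the toy power-law fit from above and is only a simplified stand-in for the authors' algorithm (which also considers exponential and log-normal tails); the function names and toy data are assumptions.

```python
import numpy as np

def prob_at_least_one(fatalities, x_large=2749, xmin=10):
    """Fit a power-law tail above xmin and return the probability that at
    least one event in a sample of this size reaches x_large fatalities."""
    tail = fatalities[fatalities >= xmin].astype(float)
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    p_event = (x_large / xmin) ** (1.0 - alpha) * len(tail) / len(fatalities)
    return 1.0 - (1.0 - p_event) ** len(fatalities)

def bootstrap_range(fatalities, n_boot=1000, seed=0):
    """Non-parametric bootstrap: resample events with replacement, re-estimate,
    and return the central 90% range of the estimated probabilities."""
    rng = np.random.default_rng(seed)
    estimates = [
        prob_at_least_one(rng.choice(fatalities, size=len(fatalities), replace=True))
        for _ in range(n_boot)
    ]
    return np.percentile(estimates, [5, 95])

# Toy usage with the same kind of synthetic fatality counts as above
rng = np.random.default_rng(0)
toy = np.floor(1 + rng.pareto(1.4, size=13_000)).astype(int)
low, high = bootstrap_range(toy, n_boot=200)
print(f"Bootstrap 90% range: {low:.2f} - {high:.2f}")
```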

© 2012 Phys.org

**Citation**: Statisticians try to calculate probability of another 9/11-sized attack (2012, September 10) retrieved 16 June 2019 from https://phys.org/news/2012-09-statisticians-probability-sized.html

What kind of distribution you get also depends strongly on what types of events you include in your simulation.

No statistical model is 'true'. Some statistical models fit better than others. But if any statistical model were really 'true', it would completely and accurately predict future outcomes, and then it would no longer be a *statistical* model but a deterministic one.