Innovation dilemma suggests that 'better' models are not always better

May 8, 2017 by Lisa Zyga, feature

The robustness curves for an innovative model and a state-of-the-art model, with the horizontal axis representing the probability that the argument is sound and the vertical axis representing robustness. If the required probability of soundness of argument is below 0.9965 (where the lines cross), then the state-of-the-art model is more robust. Above this value, the innovative model is more robust. Credit: Ben-Haim. ©2017 The Royal Society
If you had to predict the probability of a catastrophic meteor striking the Earth, you would likely want the most accurate models on which to base your predictions. But a new paper shows that, because the most accurate models are generally more innovative and complex, they may suffer from a higher probability of error. Consequently, the most innovative and accurate models may not offer the best methods for making predictions, especially of rare, high-consequence events.

Yakov Ben-Haim, a professor of mechanical engineering at the Technion-Israel Institute of Technology in Haifa, Israel, investigates this "innovation dilemma" in a recent issue of Proceedings of the Royal Society A.

"A model that uses innovative and new concepts and results may in fact be more prone to error than a more standard state-of-the-art model," Ben-Haim told "Innovative models reflect progress, but not all progress is actual improvement."

To be clear, Ben-Haim makes a distinction between valid conclusions and sound arguments. His focus is not on reaching a valid conclusion per se, but rather on developing the most logically sound arguments that use models to reach conclusions. As he notes, it's possible to reach a valid conclusion with a flawed argument. But a highly sound argument has a higher probability of yielding valid conclusions, in general.

The main result of the paper is that there exists a tradeoff between soundness of argument (an indicator of performance) and robustness to error in the argument. The exact nature of this tradeoff differs for different arguments, and innovative models may be subject to a more severe tradeoff than simpler models.

To elaborate, while an innovative model may be capable of achieving a more sound argument than a simpler model, it may also have a higher probability of error and therefore be less reliable in predicting very rare events. So it may be advantageous to use the simpler model, if its probability of error is lower. Overall, the idea is that different models work better for different situations, and that the usual default assumption—to use the most advanced or sophisticated model—may not be the most reliable approach.
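The tradeoff can be sketched numerically. In a simple linearized setting (a hypothetical illustration, not the analysis from Ben-Haim's paper), each model's robustness to error shrinks as the required probability of a sound argument rises, and the two curves can cross: below the crossing, the state-of-the-art model is more robust; above it, the innovative model is. The nominal soundness values and error weights below are made up for illustration.

```python
def robustness(p_nominal, error_weight, p_required):
    """Hypothetical linear robustness: how much error the argument can
    absorb (scaled by error_weight) while still meeting the required
    probability of soundness. Zero once the requirement exceeds the
    model's nominal soundness."""
    return max(0.0, (p_nominal - p_required) / error_weight)

# Illustrative models (numbers invented for this sketch):
# the state-of-the-art model is nominally less sound but less error-prone;
# the innovative model is nominally sounder but more error-prone.
sota = dict(p_nominal=0.9980, error_weight=1.0)
innovative = dict(p_nominal=0.9995, error_weight=3.0)

for p_req in (0.9900, 0.9990):
    h_sota = robustness(p_required=p_req, **sota)
    h_inno = robustness(p_required=p_req, **innovative)
    winner = "state-of-the-art" if h_sota > h_inno else "innovative"
    print(f"required soundness {p_req}: more robust -> {winner}")
```

With these numbers the ordering flips: at a modest soundness requirement the simpler model tolerates more error, while at a demanding requirement only the innovative model retains any robustness, mirroring the crossing curves in the figure above.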

In particular, situations that involve making predictions for low-probability, high-risk events—such as a catastrophic meteor strike, or an explosion at a nuclear power plant—may benefit from simpler models. That's because these events have such a low probability of occurring that the probability of error in the argument may actually exceed the probability of the event occurring. In such a case, the error reduces our confidence in the estimated probability of the event occurring to such a large degree that we may be better off going with a simpler model because of its lower probability of error.
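The scale mismatch described above can be seen with made-up figures: when the event is far rarer than the chance that the argument itself is flawed, the error swamps the estimate.

```python
# Hypothetical figures, chosen only to show the scale mismatch:
p_event = 1e-7   # estimated probability of the rare event
p_error = 1e-4   # probability that the argument behind the estimate is in error

# A flaw in the argument is roughly a thousand times more likely than
# the event it predicts, so the estimate carries little confidence.
ratio = p_error / p_event
print(round(ratio))
```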

In order to arrive at these results, Ben-Haim used an approach called info-gap theory to analyze the soundness and error of logical reasoning. Info-gap theory is traditionally used for making decisions in situations with very high levels of uncertainty, and has been used in areas such as engineering, economics, and medicine, among others. The application of info-gap theory to assess the uncertainty of an argument is a new and quite different use, and demonstrates that the theory can be extended to more wide-reaching areas.

"Info-gap analysis of robustness to error provides a tool for enhancing the ability to predict rare, high-consequence events," Ben-Haim said.


More information: Yakov Ben-Haim, "Does a better model yield a better argument? An info-gap analysis," Proceedings of the Royal Society A. DOI: 10.1098/rspa.2016.0890
