Overcoming structural uncertainty in computer models

Apr 01, 2014
Figure caption: Left: A hypothetical model with ten inputs and one output, decomposed to reveal six intermediate parameters. Right: Possible structural errors in the subfunctions that produce Y1, Y5, and Y6 are corrected with discrepancy terms δ1, δ2, and δ3. Credit: Mark Strong and Jeremy E. Oakley

A computer model is a representation of the functional relationship between one set of parameters, which forms the model input, and a corresponding set of target parameters, which forms the model output. A true model for a particular problem can rarely be defined with certainty. The most we can do to mitigate error is to quantify the uncertainty in the model.

In a recent paper published in the SIAM/ASA Journal on Uncertainty Quantification, authors Mark Strong and Jeremy Oakley offer a method for incorporating into a model judgments about structural uncertainty, that is, the uncertainty that results from building an "incorrect" model.

"Given that 'all models are wrong,' it is important that we develop methods for quantifying our uncertainty in model structure such that we can know when our model is 'good enough'," author Mark Strong says. "Better models mean better decisions."

When making predictions using computer models, we encounter two sources of uncertainty: uncertainty in model inputs and uncertainty in model structure. Input uncertainty arises when we are not certain about input parameters in model simulations. If we are uncertain about true structural relationships within a model—that is, the relationship between the set of quantities that form the model input and the set that represents the output—the model is said to display structural uncertainty. Such uncertainty exists even if the model is run using input values as estimated in a perfect study with infinite sample size.
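
To make the distinction concrete, here is a minimal sketch using a hypothetical toy model (the function and the input distributions are purely illustrative, not taken from the paper). Input uncertainty is propagated by Monte Carlo sampling; structural uncertainty is the doubt that would remain about the model's form even if the inputs were known exactly.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x1, x2):
    # Hypothetical believed structure: the output is a simple product of the inputs.
    return x1 * x2

# Input uncertainty: the true input values are not known, so they are sampled.
x1_samples = rng.normal(loc=2.0, scale=0.3, size=10_000)
x2_samples = rng.normal(loc=5.0, scale=1.0, size=10_000)
outputs = model(x1_samples, x2_samples)

print(f"Mean output: {outputs.mean():.2f}; spread due to input uncertainty: {outputs.std():.2f}")

# Even if the inputs were measured perfectly (scales shrunk to zero), the output
# could still be wrong if the multiplicative form itself misdescribes reality.
# That residual doubt is structural uncertainty.
```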

"Perhaps the hardest problem in assessing uncertainty in a prediction is to quantify uncertainty about the model structure, particularly when models are used to predict in the absence of data," says author Jeremy Oakley. "The methodology in this paper can help model users prioritize where improvements are needed in a model to provide more robust support to decision making."

While methods for managing input uncertainty are well described in the literature, methods for quantifying structural uncertainty are not as well developed. This is especially true in the context of health economic decision making, which is the focus of this paper. Here, models are used to predict the future costs and health consequences of decision options in order to inform resource allocation.

"In health economics decision analysis, the use of "law-based" computer models is common. Such models are used to support national health resource allocation decisions, and the stakes are therefore high," says Strong. "While it is usual in this setting to consider the uncertainty in model inputs, uncertainty in model structure is almost never formally assessed."

There are several approaches to managing model structural uncertainty. A primary approach is 'model averaging', in which the predictions of a number of plausible models are averaged with weights based on each model's likelihood or predictive ability. Another approach is 'model calibration', which assesses a model via its external discrepancies, that is, how its output quantities relate to real, observed values. In the context of healthcare decisions, however, neither approach is typically feasible: usually only a single model is available, so there is nothing to average, and observations of the model outputs are not available for calibration.
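
The averaging idea can be sketched in a few lines. The candidate predictions and weights below are purely illustrative; in practice the weights would come from, for example, posterior model probabilities or information-criterion comparisons.

```python
import numpy as np

# Hypothetical predictions of the same target quantity from three plausible models.
predictions = np.array([12.4, 15.1, 13.8])

# Hypothetical weights reflecting each model's likelihood or predictive ability;
# they must sum to one.
weights = np.array([0.5, 0.2, 0.3])

averaged_prediction = np.sum(weights * predictions)
print(f"Model-averaged prediction: {averaged_prediction:.2f}")
```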

Hence, the authors use a novel approach based on discrepancies within the model or "internal discrepancies" (as opposed to external discrepancies which are the focus of model calibration). Internal discrepancies are analyzed by first decomposing the model into a series of subunits or subfunctions, the outputs of which are intermediate model parameters that are potentially observable in the real world. Next, each sub-function is judged for certainty based on whether its output would equal the true value of the parameter from real-world observations. If a potential structural error is anticipated, a discrepancy term is introduced. Subsequently, beliefs about the size and direction of errors are expressed. Since judgments for internal discrepancies are expected to be crude at best, the expression of uncertainty should be generous, that is, allowed to cover a wide distribution of possible values. Finally, the authors determine the sensitivity of the model output to internal discrepancies. This gives an indication of the relative importance of structural uncertainty within each model subunit.
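
The workflow above can be illustrated with a short sketch on a hypothetical two-stage model (the subfunctions, input value, and prior widths are invented for illustration and are not the authors' example). One subfunction produces a potentially observable intermediate parameter, each suspect subfunction receives an additive discrepancy term with a deliberately wide prior, and a crude variance-based measure indicates which discrepancy the output is most sensitive to.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

def subfunction_1(x):
    # Produces an intermediate parameter y1 that is potentially observable in the real world.
    return 0.8 * x

def subfunction_2(y1):
    # Builds the final model output from the intermediate parameter.
    return y1 ** 2 + 1.0

x = 3.0  # treat the input as known, so that only structural uncertainty remains

# Generous (wide) priors on the internal discrepancy terms.
delta_1 = rng.normal(0.0, 0.5, size=n)  # doubt about the structure of subfunction_1
delta_2 = rng.normal(0.0, 2.0, size=n)  # doubt about the structure of subfunction_2

y1 = subfunction_1(x) + delta_1
output = subfunction_2(y1) + delta_2

# Crude sensitivity measure: the share of output variance attributable to each
# discrepancy term (the two terms are independent, and delta_2 enters additively).
var_total = output.var()
var_from_delta_1 = np.var(subfunction_2(subfunction_1(x) + delta_1))
var_from_delta_2 = np.var(delta_2)

print(f"Share of output variance from delta_1: {var_from_delta_1 / var_total:.2f}")
print(f"Share of output variance from delta_2: {var_from_delta_2 / var_total:.2f}")
```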

"Traditional statistical approaches to handling in computer models have tended to treat the models as 'black boxes'. Our framework is based on 'opening' the black box and investigating the 's internal workings," says Oakley. "Developing and implementing this framework, particularly in more complex models, will need closer collaboration between statisticians and mathematical modelers."


More information: When Is a Model Good Enough? Deriving the Expected Value of Model Improvement via Specifying Internal Model Discrepancies: epubs.siam.org/doi/abs/10.1137/120889563



User comments (1)

cdkeli, Apr 17, 2014
Most computer models make simplifying assumptions based upon meeting an aggressive project schedule. The simplifications are generally presumed to discount mechanisms of little or no impact on the operational verification of the hardware/software interface. The problem is to quantify the impact of these assumptions prior to building the model so downstream effects can be better understood. Too often these simplifications are implemented without any valid data for or against. This is a huge hole in the design process and can be found throughout the semiconductor industry and beyond. Isn't it time we stopped flying by the seat of our pants?