

Philadelphia, PA – A computer model is a representation of the functional relationship between one set of parameters, which forms the model input, and a corresponding set of target parameters, which forms the model output. A true model for a particular problem can rarely be defined with certainty; the most we can do to mitigate the resulting error is to quantify our uncertainty in the model.
In a recent paper published in the SIAM/ASA Journal on Uncertainty Quantification, authors Mark Strong and Jeremy Oakley offer a method for incorporating judgments about the structural uncertainty that results from building an "incorrect" model.
"Given that 'all models are wrong,' it is important that we develop methods for quantifying our uncertainty in model structure such that we can know when our model is 'good enough'," author Mark Strong says. "Better models mean better decisions."
When making predictions using computer models, we encounter two sources of uncertainty: uncertainty in model inputs and uncertainty in model structure. Input uncertainty arises when we are not certain about input parameters in model simulations. If we are uncertain about true structural relationships within a model—that is, the relationship between the set of quantities that form the model input and the set that represents the output—the model is said to display structural uncertainty. Such uncertainty exists even if the model is run using input values as estimated in a perfect study with infinite sample size.
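Input uncertainty can be illustrated with a small Monte Carlo sketch: the uncertain inputs are described by probability distributions, sampled repeatedly, and propagated through the model. The toy model, distributions, and numbers below are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Toy model: predicts an expected cost from two uncertain inputs.
# The structural form of this model is an assumption for illustration.
def model(baseline_cost, relative_risk):
    return baseline_cost * relative_risk

# Input uncertainty: we are unsure of the true input values, so each
# input gets a probability distribution, and we sample from it.
samples = []
for _ in range(10_000):
    baseline_cost = random.gauss(1000.0, 100.0)  # uncertain baseline cost
    relative_risk = random.uniform(0.8, 1.2)     # uncertain risk ratio
    samples.append(model(baseline_cost, relative_risk))

# The spread of the outputs reflects input uncertainty alone; any
# structural error in model() itself would remain even with perfect inputs.
mean = sum(samples) / len(samples)
variance = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

Note that even if the input distributions collapsed to their true values (a "perfect study with infinite sample size"), the output would still be wrong if the multiplicative form of `model()` misrepresents the true relationship; that residual error is the structural uncertainty the paper addresses.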
"Perhaps the hardest problem in assessing uncertainty in a computer model prediction is to quantify uncertainty about the model structure, particularly when models are used to predict in the absence of data," says author Jeremy Oakley. "The methodology in this paper can help model users prioritize where improvements are needed in a model to provide more robust support to decision making."
While methods for managing input uncertainty are well described in the literature, methods for quantifying structural uncertainty are less well developed. This is especially true in health economic decision making, the focus of this paper, where models are used to predict the future costs and health consequences of the available options in order to inform resource-allocation decisions.
"In health economic decision analysis, the use of 'law-based' computer models is common. Such models are used to support national health resource allocation decisions, and the stakes are therefore high," says Strong. "While it is usual in this setting to consider the uncertainty in model inputs, uncertainty in model structure is almost never formally assessed."
There are several approaches to managing model structural uncertainty. One is 'model averaging', in which the predictions of a number of plausible models are averaged, with weights based on each model's likelihood or predictive ability. Another is 'model calibration', which assesses a model via its external discrepancies, that is, the differences between its output quantities and real, observed values. In the context of healthcare decisions, however, neither approach is usually feasible: typically only a single model is available, ruling out averaging, and observations on the model outputs are not available for calibration.
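The model-averaging idea can be sketched briefly: predictions from several plausible model structures are combined using weights derived from each model's likelihood. The predictions and log-likelihoods below are assumed values for illustration only.

```python
import math

# Hypothetical predictions of the same target quantity from three
# plausible model structures (illustrative numbers only).
predictions = [12.0, 15.0, 14.0]

# Each model's log-likelihood given the available data; these values
# are assumptions for the sketch.
log_likelihoods = [-4.0, -2.5, -3.0]

# Convert log-likelihoods to normalized weights, subtracting the
# maximum first for numerical stability.
m = max(log_likelihoods)
unnorm = [math.exp(ll - m) for ll in log_likelihoods]
total = sum(unnorm)
weights = [u / total for u in unnorm]

# The model-averaged prediction is the weighted mean of the
# individual models' predictions.
averaged = sum(w * p for w, p in zip(weights, predictions))
```

The averaged prediction always lies between the most extreme individual predictions, and the best-supported model (here the second) receives the largest weight; the approach fails in the health-economics setting precisely because there is usually no second model to average with.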
Hence, the authors take a novel approach based on discrepancies within the model, or "internal discrepancies" (as opposed to the external discrepancies that are the focus of model calibration). Internal discrepancies are analyzed by first decomposing the model into a series of subunits, or subfunctions, whose outputs are intermediate model parameters that are potentially observable in the real world. Next, each subfunction is judged on whether its output would equal the true value of the corresponding parameter if that parameter could be observed. Wherever a structural error is anticipated, a discrepancy term is introduced, and beliefs about the size and direction of the error are expressed. Since judgments about internal discrepancies are expected to be crude at best, the expression of uncertainty should be generous, that is, it should cover a wide distribution of possible values. Finally, the authors determine the sensitivity of the model output to each internal discrepancy, which indicates the relative importance of structural uncertainty within each model subunit.
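The steps above can be sketched in Python. The subfunctions, the discrepancy distributions, and the sensitivity measure (output variance with one discrepancy "switched on" at a time) are all illustrative assumptions, not the authors' actual model or algorithm.

```python
import random

random.seed(1)

# Step 1: decompose the model into subfunctions whose outputs are
# intermediate, potentially observable quantities (forms assumed).
def subfunction_1(x):
    return 2.0 * x           # e.g. an event rate

def subfunction_2(y):
    return y + 5.0           # e.g. an expected cost

# Step 2: attach a discrepancy term to each subunit judged to be
# potentially structurally wrong.
def model_with_discrepancy(x, delta_1, delta_2):
    y = subfunction_1(x) + delta_1   # discrepancy on subunit 1
    z = subfunction_2(y) + delta_2   # discrepancy on subunit 2
    return z

# Step 3: express generous (wide) beliefs about each error by sampling
# the discrepancies from zero-mean distributions with chosen spreads.
def sample_output(sd_1, sd_2, n=20_000, x=1.0):
    return [
        model_with_discrepancy(x, random.gauss(0, sd_1), random.gauss(0, sd_2))
        for _ in range(n)
    ]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

# Step 4: vary one discrepancy at a time to gauge how sensitive the
# output is to structural uncertainty in each subunit.
var_from_delta_1 = variance(sample_output(sd_1=1.0, sd_2=0.0))
var_from_delta_2 = variance(sample_output(sd_1=0.0, sd_2=0.5))
```

In this sketch the output is more sensitive to the first subunit's discrepancy, suggesting that effort to improve the model structure would be best spent there; this one-at-a-time variance comparison is only a stand-in for the more formal sensitivity analysis developed in the paper.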
"Traditional statistical approaches to handling uncertainty in computer models have tended to treat the models as 'black boxes'. Our framework is based on 'opening' the black box and investigating the model's internal workings," says Oakley. "Developing and implementing this framework, particularly in more complex models, will need closer collaboration between statisticians and mathematical modelers."
Source Article:
When Is a Model Good Enough? Deriving the Expected Value of Model Improvement via Specifying Internal Model Discrepancies
http://epubs.siam.org/doi/pdf/10.1137/120889563
SIAM/ASA Journal on Uncertainty Quantification, 2(1), 106 (online publication date: February 6, 2014). The paper is available for free download at the link above through December 31, 2014.
About the authors:
Mark Strong is a clinical senior lecturer in public health and the Deputy Director of the Public Health Section at the School of Health and Related Research at the University of Sheffield, and Jeremy Oakley is a professor of statistics in the School of Mathematics and Statistics at the University of Sheffield.
About SIAM
The Society for Industrial and Applied Mathematics (SIAM), headquartered in Philadelphia, Pennsylvania, is an international society of over 14,000 individual members, including applied and computational mathematicians and computer scientists, as well as other scientists and engineers. Members from 85 countries are researchers, educators, students, and practitioners in industry, government, laboratories, and academia. The Society, which also includes nearly 500 academic and corporate institutional members, serves and advances the disciplines of applied mathematics and computational science by publishing a variety of books and prestigious peer-reviewed research journals, by conducting conferences, and by hosting activity groups in various areas of mathematics. SIAM provides many opportunities for students including regional sections and student chapters. Further information is available at http://www.siam.org.
AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.