Monday, October 03, 2005

Uncertainty in Risk Management and Risk Communication

© 2004-2005 William Charteris
www.billcharteris.com
www.imperialconsulting.net

Decision model uncertainty, also called decision-rule uncertainty, is another type of ‘knowledge uncertainty’. It is of greatest concern to the risk manager. It arises, for example, from the need to balance different societal concerns when choosing between alternative courses of action (including no action) to reduce risk to a ‘politically acceptable’ level.

Decision model uncertainty may necessitate recourse to the Precautionary Principle when a scientific evaluation of the risk, because of data imprecision, incompleteness, uncertainty, or disagreement, makes it impossible to determine the risk in question with sufficient certainty. Although the Precautionary Principle is intuitively straightforward to understand, there is no agreed way of applying it to real decision-making. In this regard, a combination of ignorance auditing and plotting methods has proven useful in its application by the Commission to contentious issues, such as GMOs and hormone-treated beef.

In the past decade or so, the paradigm shift away from the use of point estimates towards the use of probability distributions to quantitatively address variability and uncertainty in risk assessment has created new challenges for risk managers and risk communicators. Instead of comparing single point estimates to “bright lines” of risk, risk managers must now struggle with decisions about how to use distributions (or parts thereof) in the decision-making process. In recognizing variability in a population, they must address questions about whom to protect and by how much, questions that are significantly different in nature from those they replace, i.e. “Is this risk above the bright line or not?” While an appreciation of the artificial nature of the “bright line” criterion, and of the dramatic oversimplification of the risk assessment required to derive a point estimate, might provide some reassurance about the importance of using probabilistic analysis to characterize variability and uncertainty in risk, it does not make the risk manager’s job of picking criteria to determine the ‘acceptability’ of risk any easier.
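To make the shift concrete, the following minimal sketch (in Python) contrasts the two decision questions. The bright-line threshold, the single point estimate, and the lognormal distribution of risk across the population are all hypothetical values chosen for illustration only; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 'bright line': a regulatory threshold on individual risk.
BRIGHT_LINE = 1e-4

# Deterministic approach: a single point estimate of risk is compared
# with the bright line -- the only question is above or below.
point_estimate = 8e-5
print("Point estimate acceptable?", point_estimate < BRIGHT_LINE)

# Probabilistic approach: risk varies across the exposed population.
# Variability is represented here by an assumed lognormal distribution
# (parameters are illustrative only).
population_risk = rng.lognormal(mean=np.log(8e-5), sigma=0.6, size=100_000)

# The risk manager must now decide which part of the distribution matters:
# the mean, a high percentile, or the fraction of people above the line.
print("Mean risk:            %.2e" % population_risk.mean())
print("95th percentile risk: %.2e" % np.percentile(population_risk, 95))
print("Fraction above line:  %.1f%%" % (100 * (population_risk > BRIGHT_LINE).mean()))
```

Under these assumed numbers the point estimate falls below the line, yet a non-trivial share of the population exceeds it, which is precisely the kind of question the older framing never had to answer.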

While risk managers are now beginning to grapple with the challenge of dealing with probabilistic risk assessment results, the challenges for risk communication have been only minimally appreciated or explored. In December 2000, the Commission found that its scientific committees did not share a common approach to the presentation of the findings of risk assessments. Indeed, major differences were identified in the opinions of different committees with regard to scope, format, degree of detail, size, and use of terminology. In addition, uncertainty in the assessment of risk was often not expressed clearly, and committees varied in the way in which they addressed these uncertainties. The Commission addressed this situation by proposing a format to be used in the expression of scientific opinions. This initiative has been adopted by EFSA’s scientific committee. Clearly, further work involving a range of stakeholders is required to advance this aspect of risk communication. In this regard, we must arm risk communicators with better information about variability and uncertainty in the risks that they use for risk comparisons, and encourage them to test different strategies for communicating variability and uncertainty in risk using both qualitative and quantitative information.

This abstract is taken from a paper entitled 'Uncertainty and risk', which was published on December 6, 2004. The paper comprises 3,900 words and 25 references. Individual copies of the paper may be requested by e-mail from the author.


Model uncertainty and sensitivity analysis

© 2004-2005 William Charteris
www.billcharteris.com
www.imperialconsulting.net

Model uncertainty relates to the degree to which a chosen model accurately represents reality. It should ideally be addressed with sensitivity analysis; however, this view is not unanimously shared within the scientific community.

Sensitivity analysis comprises (i) mathematical, (ii) statistical, and (iii) graphical methods.

Examples of mathematical sensitivity analysis methods include nominal range sensitivity analysis, breakeven analysis, difference in log odds ratio, and automatic differentiation. They are used to assess the sensitivity of a model output to the range of variation of an input, and typically involve calculating the output for a few values of an input that represent its possible range. They do not address the variance in the output due to the variance in the inputs, but they can assess the impact of a range of variation in the input values on the output.

Examples of statistical sensitivity analysis methods include regression analysis, analysis of variance, response surface methods, the Fourier amplitude sensitivity test, and the mutual information index. They involve running simulations, such as Monte Carlo analysis, in which inputs are assigned probability distributions, and assessing the effect of variance in the inputs on the output distribution. Depending upon the method used, one or more inputs are varied at a time. Statistical methods allow one to identify the effect of interactions among multiple inputs.

Examples of graphical sensitivity analysis methods include scatter plots and spider plots. They represent sensitivity in the form of graphs, charts, or surfaces, and are generally used to give a two- or three-dimensional visual indication of how an output is affected by variation in the inputs. They can be used as a screening method before further analysis of a model, or to represent complex dependencies between inputs and outputs. In addition, they can be used to complement the results of mathematical and statistical methods for better representation.
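As an illustration, the sketch below (in Python, using NumPy, SciPy, and matplotlib) applies one method from each family to a deliberately simple, hypothetical two-input model: nominal range sensitivity analysis (mathematical), Spearman rank correlation on Monte Carlo samples (statistical), and a scatter plot (graphical). The model, input ranges, and distributions are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical risk model with two inputs (purely illustrative).
def model(storage_temp, initial_dose):
    return initial_dose * np.exp(0.3 * (storage_temp - 4.0))

# (i) Mathematical: nominal range sensitivity -- swing each input across
# its plausible range while holding the other at its nominal value.
nominal = dict(storage_temp=6.0, initial_dose=100.0)
ranges = dict(storage_temp=(2.0, 12.0), initial_dose=(10.0, 1000.0))
for name, (lo, hi) in ranges.items():
    swing = model(**{**nominal, name: hi}) - model(**{**nominal, name: lo})
    print(f"Nominal range swing for {name}: {swing:.1f}")

# (ii) Statistical: Monte Carlo simulation with rank correlation between
# each sampled input and the output distribution.
n = 10_000
temp = rng.normal(6.0, 1.5, n)
dose = rng.lognormal(np.log(100.0), 0.8, n)
out = model(temp, dose)
for name, x in [("storage_temp", temp), ("initial_dose", dose)]:
    rho, _ = stats.spearmanr(x, out)
    print(f"Spearman rank correlation for {name}: {rho:.2f}")

# (iii) Graphical: scatter plot of the output against one input,
# useful as a quick screening view before further analysis.
plt.scatter(temp, out, s=2, alpha=0.3)
plt.xlabel("storage_temp")
plt.ylabel("model output")
plt.savefig("sensitivity_scatter.png")
```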

There is no single sensitivity analysis method that is clearly superior to all others; each has its own strengths and limitations. In this regard, food safety risk models have important features that, taken individually, may favor one method over another; taken together, however, they point to no one obvious best method. Many sensitivity analysis methods are not available in commercial risk analysis software, such as @RISK® and Crystal Ball®, and must instead be performed separately in dedicated mathematical (e.g. MATLAB®, Mathematica®) and statistical (e.g. SAS®, MINITAB®, STATISTICA®) software packages.

This abstract is taken from a paper entitled 'Uncertainty and risk', which was published on December 20, 2004. The paper comprises 3,900 words and 25 references. Individual copies of the paper may be requested by e-mail from the author.


Uncertainty in Risk Assessment

© 2004-2005 William Charteris
www.billcharteris.com
www.imperialconsulting.net

In its simplest form, ‘knowledge uncertainty’ can be thought of as comprising uncertainty in the appropriate parameter values for a chosen model, combined with uncertainty in the model itself.

Parameter uncertainty relates to the accuracy and precision with which model parameters can be inferred from input data, judgment, and the literature. It derives from statistical considerations and is usually described either by confidence intervals when using traditional (frequentist) statistical methods, or by probability distributions when using Bayesian statistical methods. Data uncertainties, which are the principal contributors to parameter uncertainty, include (i) measurement errors, (ii) inconsistent or heterogeneous data sets, (iii) data handling and transcription errors, and (iv) non-representative sampling caused by time, space, or financial limitations.
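By way of illustration, the following minimal sketch (in Python, with SciPy) shows both descriptions of parameter uncertainty for a hypothetical prevalence parameter: a frequentist 95% confidence interval based on a normal approximation, and a Bayesian posterior distribution from a conjugate Beta prior. The data (12 positives in 150 tested samples) are invented for the example.

```python
import numpy as np
from scipy import stats

# Illustrative data only: 12 positives in 150 tested samples (hypothetical).
positives, n = 12, 150
p_hat = positives / n

# Frequentist view: parameter uncertainty expressed as a 95% confidence
# interval for the prevalence (normal approximation to the binomial).
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(f"Point estimate: {p_hat:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

# Bayesian view: the same uncertainty expressed as a probability
# distribution -- a Beta posterior from a uniform Beta(1, 1) prior.
posterior = stats.beta(1 + positives, 1 + n - positives)
cred = posterior.interval(0.95)
print(f"Posterior mean: {posterior.mean():.3f}, "
      f"95% credible interval: ({cred[0]:.3f}, {cred[1]:.3f})")
```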

Model uncertainty relates to the degree to which a chosen model accurately represents reality. It may result from the use of surrogate variables (e.g. principal components, biomarkers, etc.), excluded variables (e.g. uncontrollable or noise variables), and approximations, including the use of incorrect mathematical expressions (e.g. low-order polynomials) to represent the physical world. It is associated with all models used in all phases of a risk assessment, including (i) animal models used as surrogates for testing human carcinogenicity, (ii) the dose-response models used in extrapolations, and (iii) the computer models used to predict the fate and transport of chemicals in the environment. The use of rodents as surrogates for humans introduces uncertainty into the risk factor because of the considerable interspecies variability in sensitivity. Computer models are simplifications of reality, requiring the exclusion of some variables that influence predictions but cannot be included because of (i) increased complexity, (ii) a lack of data for these variables, or (iii) difficulties associated with their observation. Parameter uncertainty and model uncertainty are generally recognized by risk assessors as the major sources of uncertainty.

In a risk analysis, it is not always obvious which uncertainties should be ascribed to ‘natural variability’ and which to ‘knowledge uncertainty’. In this regard, the separation of variability and uncertainty in quantitative microbiological risk analysis models has until now rarely been made, a reflection of the fact that this can be a difficult, if not daunting, task. However, neglecting the difference between them may lead to improper risk estimates and/or an incomplete understanding of the results. Also, if the distinction is not clear to the risk analyst, a variability distribution may be used incorrectly, i.e. as if it were an uncertainty distribution. The explicit separation of the two components in the input and output variables is a goal of risk assessors, and such a separation allows risk managers to understand how model outputs might improve if uncertainty is reduced.
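One common way to keep the two components apart is a nested (‘second-order’, or two-dimensional) Monte Carlo simulation, in which an outer loop samples the uncertain parameters and an inner loop samples the variable quantities. The minimal sketch below (in Python) illustrates the structure with an exponential dose-response model; all distributions and parameter values are hypothetical and chosen only to show the shape of the calculation.

```python
import numpy as np

rng = np.random.default_rng(7)

n_uncertainty, n_variability = 200, 5_000
risk_percentiles = []

for _ in range(n_uncertainty):
    # Outer loop (knowledge uncertainty): one plausible value of the
    # uncertain log10 dose-response parameter (illustrative distribution).
    r = rng.normal(loc=-3.0, scale=0.3)
    # Inner loop (natural variability): dose varies from serving to serving.
    dose = rng.lognormal(mean=np.log(50), sigma=1.0, size=n_variability)
    risk = 1 - np.exp(-(10 ** r) * dose)   # exponential dose-response model
    risk_percentiles.append(np.percentile(risk, [50, 95]))

risk_percentiles = np.array(risk_percentiles)

# Variability is read across the percentiles of the inner loop;
# uncertainty is read across the spread of the outer loop.
for i, label in enumerate(["median serving", "95th percentile serving"]):
    lo, hi = np.percentile(risk_percentiles[:, i], [2.5, 97.5])
    print(f"{label}: risk between {lo:.2e} and {hi:.2e} (95% uncertainty band)")
```

Keeping the loops separate is what allows the output to be reported as a variability distribution surrounded by an uncertainty band, rather than as a single blended distribution.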

This abstract is taken from a paper entitled 'Uncertainty and risk', which was published on December 20, 2004. The paper comprises 3,900 words and 25 references. Individual copies of the paper may be requested by e-mail from the author.


Uncertainty: its statistical meaning and treatment

© 2004-2005 William Charteris
www.billcharteris.com
www.imperialconsulting.net

The term ‘uncertainty’ is normally used to describe a lack of sureness about something or someone, ranging from just short of complete sureness to an almost complete lack of conviction about an outcome. ‘Doubt’, ‘dubiety’, ‘skepticism’, ‘suspicion’, and ‘mistrust’ are common synonyms. Each synonym expresses an aspect of uncertainty that plays a part in risk analysis. Uncertainty with respect to natural phenomena means that an outcome is unknown or not established and is therefore in question. Uncertainty with respect to a belief means that a conclusion is not proven or is supported by questionable information. Uncertainty with respect to a course of action means that a plan is not determined or is undecided. In many, but not all, situations a lack of sureness can be described statistically by probability distributions. The term ‘uncertainty’ is used to describe situations without sureness, whether or not described by a probability distribution.

Uncertainty is at the heart of the scientific method. Scientific uncertainty typically results from five characteristics of that method: the variables chosen, the measurements made, the samples drawn, the models used, and the causal relationships employed. Scientific uncertainty may also arise from controversy over existing data or from a lack of relevant data. Uncertainty may relate to qualitative or quantitative elements of the analysis.

Generally speaking, uncertainty can be attributed to two sources: (i) the inherent variability of natural processes (“natural variability”), or (ii) incomplete knowledge (“knowledge uncertainty”). The ‘flux of nature’ is a metaphor for ‘natural variability’. ‘Natural variability’, sometimes called ‘aleatory uncertainty’, pertains to the inherent variability in the physical world; and, by assumption, this “randomness” is irreducible. The word ‘aleatory’ comes from the Latin alea, meaning a die or gambling device. In a food safety context, uncertainties related to natural variability include microbial mutation, assumed to be a random process in time, and microbial distribution, assumed to be random in space. Natural variability is also sometimes referred to as external, objective, random, or stochastic uncertainty. ‘Knowledge uncertainty’, sometimes called ‘epistemic uncertainty’, pertains to a lack of understanding of events and processes, or to a lack of data from which to draw inferences; and, by assumption, such a lack of knowledge is reducible with further information. The word ‘epistemic’ derives from the Greek “to know”. Knowledge uncertainty is also sometimes referred to as functional, internal, or subjective uncertainty.

‘Natural variability’ and ‘knowledge uncertainty’ arise for different reasons and are usually evaluated in different ways. A simple illustration of the distinction between the two sources of uncertainty involves a response surface model, in which the fitted model describes natural variability, while the error bounds about the model parameters describe the uncertainty in those parameters and thus represent knowledge uncertainty. Although the distinction between the two is both convenient and important, it is at the same time artificial: the division is attributable to the model chosen or developed, since modeling assumptions may cause “natural randomness” to become knowledge uncertainty, and vice versa.
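The response-surface illustration can be made concrete with a short sketch (in Python, on synthetic data invented for the example): the fitted quadratic curve and the residual scatter stand in for natural variability in the response, while the standard errors on the fitted coefficients quantify knowledge uncertainty about the parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic growth-rate data (illustrative only): the response varies
# naturally around a quadratic trend in temperature.
temp = rng.uniform(5, 35, 60)
rate = 0.02 * (temp - 5) * (35 - temp) + rng.normal(0, 0.5, temp.size)

# Fit a second-order response surface; the fitted curve plus the residual
# scatter represent natural variability in the response.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])
coef, res, *_ = np.linalg.lstsq(X, rate, rcond=None)

# Knowledge uncertainty: error bounds on the fitted parameters, from the
# estimated covariance matrix of the least-squares coefficients.
dof = temp.size - X.shape[1]
sigma2 = res[0] / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
for name, b, se in zip(["b0", "b1", "b2"], coef, np.sqrt(np.diag(cov))):
    print(f"{name} = {b:.4f} +/- {1.96 * se:.4f} (approx. 95% bounds)")
```

Collecting more data would narrow the parameter bounds (the knowledge uncertainty) but would not remove the scatter about the curve (the natural variability), which is the essence of the distinction drawn above.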

This abstract is taken from a paper entitled 'Uncertainty and risk', which was published on December 20, 2004. The paper comprises 3,900 words and 25 references. Individual copies of the paper may be requested by e-mail from the author.