
2.2 Classification of error sources

Errors must be treated differently depending on whether they arise from consistent and repeatable sources (such as an offset in calibration) or from random fluctuations in the measurements. The former are called "systematic" or "bias" errors, and the latter "random" or, occasionally and incorrectly, "statistical" errors. In concept it is easy to differentiate these errors by this test: random errors are reduced when an experiment is repeated many times and the results averaged together, while systematic errors remain the same.
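This test can be illustrated with a short simulation. The sketch below (the values of $\mu$, b, and $\sigma$ are illustrative choices, not from the text) averages simulated measurements that carry both a fixed bias b and Gaussian random error $\sigma$; as the number of measurements grows, the random contribution to the error of the mean shrinks, but the bias does not.

```python
import random
import statistics

# Illustrative simulation: N repeated measurements of a true value mu,
# each with a fixed bias b (systematic error) and Gaussian noise of
# standard deviation sigma (random error). Values are hypothetical.
random.seed(1)

mu, b, sigma = 10.0, 0.5, 2.0

def mean_of_n(n):
    """Average n simulated measurements, each equal to mu + b + noise."""
    return statistics.fmean(mu + b + random.gauss(0.0, sigma) for _ in range(n))

for n in (10, 1000, 100000):
    xbar = mean_of_n(n)
    # As n grows, xbar converges to mu + b: the random part of the error
    # shrinks roughly as sigma / n**0.5, but the bias b remains.
    print(f"n = {n:>6d}: error of mean = {xbar - mu:+.3f}")
```

For large n, the error of the mean settles near b = 0.5 rather than near zero, which is exactly the behavior the test above describes.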

Systematic errors can be studied through intercomparisons, calibrations, and error propagation from estimated systematic uncertainties in the sensors used; random error is usually studied through statistical analysis of repeated measurements or knowledge of the statistical character of the observation. Random errors sometimes result from insurmountable experimental uncertainties; for example, a measurement of droplet concentration obtained by counting a finite number of droplets necessarily has an associated uncertainty that can only be removed by counting more droplets. Systematic uncertainties, on the other hand, usually result from weaknesses in the instruments used and could be reduced by better equipment. Figure 2.1 illustrates some of these error sources and uncertainties.
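The droplet-counting example follows Poisson statistics: a count of N events has an uncertainty of $N^{1/2}$, so the relative uncertainty is $1/N^{1/2}$ and can only be reduced by counting more droplets. A minimal sketch (the counts chosen are illustrative, not from the text):

```python
import math

# Poisson counting uncertainty: a count of N events carries an
# uncertainty of sqrt(N), so the relative (fractional) uncertainty
# is 1/sqrt(N). Example counts below are hypothetical.
for n in (100, 10000, 1000000):
    rel = 1.0 / math.sqrt(n)  # fractional random uncertainty in the count
    print(f"N = {n:>7d}: relative uncertainty = {rel:.1%}")
```

Going from 100 to 1,000,000 counted droplets reduces the relative uncertainty from 10% to 0.1%, but no improvement in the instrument can remove this random component for a fixed count.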


Figure 2.1: Illustration of the separate effects of bias errors and random errors. The true population mean is $\mu$, but an instrument is used that has a bias $b$ and measures with random error (in each observation) $\sigma$. The resulting estimate of the mean, obtained from $\overline{x}$, is in error because of the separate contributions of the bias error $b$ and the random error in the measurement of the mean, in this case $(\overline{x}-a)$. The precision of the instrument is $\sigma$, so the estimated random error in the mean is $\sigma/N^{1/2}$. The actual error in an experiment is the difference between the true value $\mu$ and the measured value $\overline{x}$. The histogram represents a frequency distribution measured in a particular experiment, with mean shown as the solid line labeled $\overline{x}$. In a large number of observations, it would be expected that the results would tend toward the smooth dashed curve with mean $a=\mu+b$. The measured standard deviation is $s$, but the limiting value for a large number of measurements is expected to be $\sigma$.

It is awkward that most of the mathematical treatments of errors deal with random errors, while most errors encountered in experimental research are instead systematic errors. Digitization noise and the errors introduced by counting finite numbers of events are among the few good examples of random errors in modern experiments. The prevalence of systematic errors is a particularly compelling reason to follow the methodology advocated here, because that approach features parallel treatment of systematic and random errors and focuses attention on their different characteristics. These separate error sources should be investigated and treated in different ways, and should be reported separately.

Analyses of uncertainty are made more difficult when most errors are systematic. Estimates of bias limits are often subjective, based on judgment, and hard to quantify or defend rigorously. In some cases an instrument will be calibrated against a known standard, and the precision of the calibration (although a random error in the calibration procedure) can be used as an estimate of the bias introduced by use of that calibration. This estimate must still be combined with other sources of bias such as the bias of the standard used for calibration or the bias arising because the instruments are generally used in environments less favorable to stability than the calibration environment. Repeated calibrations, intercomparisons among different instruments, and long-term stability of the calibrations can all provide information on possible biases.
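One common convention for combining such bias contributions (assumed here to be independent) is root-sum-square addition. The component names and values in this sketch are hypothetical, chosen only to mirror the sources mentioned above:

```python
import math

# Hypothetical bias-limit components (units arbitrary), mirroring the
# sources discussed in the text: the precision of the calibration, the
# bias of the standard, and degradation in the field environment.
components = {
    "calibration precision": 0.10,
    "bias of the standard": 0.05,
    "field-vs-calibration environment": 0.20,
}

# Root-sum-square combination of independent bias estimates.
total_bias_limit = math.sqrt(sum(v**2 for v in components.values()))
print(f"combined bias limit = {total_bias_limit:.3f}")
```

Note that the largest component dominates the quadrature sum, which is one reason effort is best spent reducing the dominant bias source rather than polishing the smaller ones.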

Error contributions thought to be random may really be systematic. An example often cited as a possible source of random error is a dependence of an instrument on line voltage, causing fluctuations in the response function of an instrument during an experiment. However, line voltage fluctuations are seldom random, and are probably biased in a particular direction relative to the conditions at the time of calibration, so it is likely that in a given experiment or series of experiments such fluctuations will introduce a bias. Furthermore, such errors are likely to be correlated in time, so the usual procedure of assuming random error contributions to be independent for different measurements will probably not be valid. Close inspection of other common sources of error shows that they are often biases, and this increases the importance of treating such errors properly. Other examples will be given in later sections.
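The effect of time-correlated errors on averaging can be demonstrated numerically. The sketch below (a simple AR(1) noise model with my own choice of autocorrelation, not a model from the text) compares the scatter of means computed from independent errors against means computed from correlated errors of the same per-observation standard deviation:

```python
import random
import statistics

# Illustrative comparison: standard error of the mean for independent
# vs. time-correlated errors. The AR(1) autocorrelation phi = 0.9 and
# other parameters are hypothetical choices for the demonstration.
random.seed(2)
n, phi, sigma, trials = 100, 0.9, 1.0, 2000

def mean_ar1():
    """Mean of n errors correlated in time (AR(1), stationary sd = sigma)."""
    e, total = 0.0, 0.0
    for _ in range(n):
        e = phi * e + random.gauss(0.0, sigma) * (1.0 - phi**2) ** 0.5
        total += e
    return total / n

def mean_iid():
    """Mean of n independent errors with standard deviation sigma."""
    return statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))

sd_corr = statistics.stdev(mean_ar1() for _ in range(trials))
sd_iid = statistics.stdev(mean_iid() for _ in range(trials))
# The correlated errors give a much larger standard error of the mean
# than the sigma / n**0.5 expected when errors are independent.
print(f"independent: {sd_iid:.3f}   correlated: {sd_corr:.3f}")
```

The independent case scatters near $\sigma/N^{1/2} = 0.1$, while the correlated case scatters several times more, showing why assuming independence when errors are correlated leads to an overly optimistic uncertainty estimate.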




 
NCAR Advanced Study Program
http://www.asp.ucar.edu