Systematic errors can be studied through intercomparisons, calibrations, and error propagation from estimated systematic uncertainties in the sensors used; random error is usually studied through statistical analysis of repeated measurements or knowledge of the statistical character of the observation. Random errors sometimes result from insurmountable experimental uncertainties; for example, a measurement of droplet concentration obtained by counting a finite number of droplets necessarily has an associated uncertainty that can only be removed by counting more droplets. Systematic uncertainties, on the other hand, usually result from weaknesses in the instruments used and could be reduced by better equipment. Figure 2.1 illustrates some of these error sources and uncertainties.
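The droplet-counting example above follows Poisson counting statistics: for a count of N independent events, the standard deviation of the count is √N, so the relative uncertainty is 1/√N and shrinks only as more droplets are counted. A minimal sketch (the function name and numbers are illustrative, not from the text):

```python
import math

def droplet_concentration(count, sample_volume_cm3):
    """Estimate a droplet concentration (cm^-3) and its Poisson
    counting uncertainty from a finite droplet count.

    For N independent counted events, the standard deviation of the
    count is sqrt(N), so the relative uncertainty is 1/sqrt(N).
    """
    concentration = count / sample_volume_cm3
    relative_uncertainty = 1.0 / math.sqrt(count)
    return concentration, concentration * relative_uncertainty

# Counting 100 droplets in 1 cm^3 leaves a 10% uncertainty;
# only counting more droplets (e.g., 10000 for 1%) reduces it.
c, u = droplet_concentration(100, 1.0)
```

This makes concrete why such a random error "can only be removed by counting more droplets": the relative uncertainty depends solely on the number counted, not on the quality of the instrument.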
It is awkward that most mathematical treatments of errors deal with random errors, while most errors encountered in experimental research are systematic. Digitization noise [2.2] and the errors introduced by counting finite numbers of events [2.3] are among the few good examples of random errors in modern experiments. The prevalence of systematic errors is a particularly compelling reason to follow the methodology advocated here, because that approach features parallel treatment of systematic and random errors and focuses attention on their different characteristics. These separate error sources should be investigated and treated in different ways, and should be reported separately.
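Digitization noise is a useful example because its statistical character is known exactly: rounding to a quantization step Δ makes the error approximately uniform on [-Δ/2, Δ/2], whose standard deviation is Δ/√12. A short sketch checking this by simulation (the function names and step size are illustrative assumptions):

```python
import math
import random

def quantization_noise_std(step):
    # Rounding to the nearest multiple of `step` gives an error
    # uniform on [-step/2, step/2], with std dev step/sqrt(12).
    return step / math.sqrt(12)

def simulated_noise_std(step, n=100_000, seed=1):
    # Quantize uniformly distributed values and measure the
    # empirical standard deviation of the rounding errors.
    rng = random.Random(seed)
    errs = []
    for _ in range(n):
        x = rng.uniform(0.0, 100.0)
        q = round(x / step) * step
        errs.append(q - x)
    mean = sum(errs) / n
    var = sum((e - mean) ** 2 for e in errs) / n
    return math.sqrt(var)
```

For a step of 0.5 units the theoretical value is about 0.144, and the simulation agrees closely, which is why digitization noise can legitimately be treated as a random error of known distribution.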
Analyses of uncertainty are made more difficult when most errors are systematic. Estimates of bias limits are often subjective, based on judgment, and hard to quantify or defend rigorously. In some cases an instrument will be calibrated against a known standard, and the precision of the calibration (although a random error in the calibration procedure) can be used as an estimate of the bias introduced by use of that calibration. This estimate must still be combined with other sources of bias such as the bias of the standard used for calibration or the bias arising because the instruments are generally used in environments less favorable to stability than the calibration environment. Repeated calibrations, intercomparisons among different instruments, and long-term stability of the calibrations can all provide information on possible biases.
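When the individual bias contributions described above (calibration precision, bias of the reference standard, allowance for field conditions) can be estimated and are judged independent, a common convention is to combine them in quadrature. A minimal sketch under that assumption; the component values are hypothetical:

```python
import math

def combined_bias_limit(components):
    """Root-sum-square combination of bias estimates, assuming the
    individual bias limits are independent of one another."""
    return math.sqrt(sum(b * b for b in components))

# Hypothetical bias components for a calibrated sensor (same units):
# precision of the calibration, bias of the reference standard, and
# an allowance for use in environments less stable than the bench.
total = combined_bias_limit([0.05, 0.02, 0.10])
```

The quadrature rule is itself a judgment call: if the components are not independent, or if one is a subjective limit rather than a standard deviation, a simple sum (or a stated worst case) may be the more defensible report.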
Error contributions thought to be random may really be systematic. An example often cited as a possible source of random error is a dependence of an instrument's response function on line voltage, causing that response to fluctuate during an experiment. However, line voltage fluctuations are seldom random, and are probably biased in a particular direction relative to the conditions at the time of calibration, so it is likely that in a given experiment or series of experiments such fluctuations will introduce a bias. Furthermore, such errors are likely to be correlated in time, so the usual procedure of assuming random error contributions to be independent for different measurements will probably not be valid. Close inspection of other common sources of error shows that they are often biases, and this increases the importance of treating such errors properly. Other examples will be given in later sections.
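The cost of wrongly assuming independence can be made quantitative. For n measurements with common standard deviation σ and pairwise correlation ρ, the standard deviation of their mean is σ·√((1 + (n-1)ρ)/n): at ρ = 0 this is the familiar σ/√n, while as ρ → 1 averaging gains nothing, exactly as for a shared systematic error. A sketch (the function is illustrative, not from the text):

```python
import math

def std_of_mean(sigma, n, rho=0.0):
    """Standard deviation of the mean of n measurements, each with
    standard deviation sigma and pairwise correlation rho.

    rho = 0 recovers sigma / sqrt(n); rho = 1 gives sigma itself,
    i.e., averaging cannot reduce a fully correlated (systematic)
    error no matter how many measurements are taken.
    """
    return sigma * math.sqrt((1.0 + (n - 1) * rho) / n)

independent = std_of_mean(1.0, 100)        # 0.1: the 1/sqrt(n) gain
correlated = std_of_mean(1.0, 100, 1.0)    # 1.0: no gain at all
```

Even modest correlation matters: with ρ = 0.1 and n = 100, the uncertainty of the mean is more than three times the value the independence assumption would suggest.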