Dealing with measurement error is the bread and butter of scientific research. Every time we repeat a measurement with a sensitive instrument, we obtain slightly different results. A good model of relevant uncertainties is of fundamental importance to judging the significance and reproducibility of quantitative research.
Scientists know that the rate of observations that disagree with other measurements of the same quantity by an abnormal amount—outliers—is usually greater than expected. However, there is no widely accepted heuristic for estimating the size or shape of long tails. Uncertainties are often assumed to be approximately Normal (Gaussian), even though it is easy to find examples where this is not true.
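To see how dramatic this gap can be, here is an illustrative sketch (not taken from the paper): a Student-t distribution with few degrees of freedom is a standard heavy-tailed alternative to the Gaussian, and comparing the probability of a "5-sigma" disagreement under each shows how badly a Normal assumption can underestimate outlier rates. The choice of 3 degrees of freedom and the Monte Carlo sample size are arbitrary assumptions for the demonstration.

```python
import math
import random

def normal_two_sided_tail(x):
    """P(|Z| > x) for a standard Normal, via the complementary
    error function: 2 * P(Z > x) = erfc(x / sqrt(2))."""
    return math.erfc(x / math.sqrt(2))

def student_t_two_sided_tail(x, df, n=200_000, seed=42):
    """Monte Carlo estimate of P(|T| > x) for a Student-t with `df`
    degrees of freedom, built as Z / sqrt(chi2(df) / df)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = rng.gammavariate(df / 2.0, 2.0)  # chi-squared(df) draw
        t = z / math.sqrt(chi2 / df)
        if abs(t) > x:
            count += 1
    return count / n

gauss_tail = normal_two_sided_tail(5)            # roughly 6e-7
heavy_tail = student_t_two_sided_tail(5, df=3)   # roughly 1.5e-2
print(f"Normal  P(|dev| > 5 sigma): {gauss_tail:.2e}")
print(f"t(df=3) P(|dev| > 5 sigma): {heavy_tail:.2e}")
```

Under the Gaussian assumption a 5-sigma disagreement should essentially never happen; under even a modestly heavy-tailed model it occurs at roughly the percent level, a difference of four orders of magnitude.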
A new study (1), published in the journal Royal Society Open Science on 11 January 2017, confirms that outliers are common, with disagreements orders of magnitude more frequent than naively expected. The author, David C. Bailey, analysed a dataset of 41,000 measurements of 3,200 quantities from medicine, nuclear and particle physics, and inter-laboratory comparisons ranging from chemistry to toxicology.
He concludes that reducing heavy tails is truly challenging. Heavy tails occur in even the most careful modern research, and do not appear to be caused by selection bias, old inaccurate data, or sloppy measurements of uninteresting quantities. The observed distributions are consistent with unknown systematics following the low-exponent power-laws that are theoretically expected and experimentally observed for fluctuations and failures in almost all complex systems.
In other words, scientific measurement is a complex process, and power-law behaviour is simply a consequence of what Bailey calls the “inherently complex nature of scientific research.”
(1) Bailey, David C. 2017. ‘Not Normal: The Uncertainties of Scientific Measurements’. Royal Society Open Science 4 (1): 160600. doi:10.1098/rsos.160600.