``This Guide presents a widely applicable method for evaluating and expressing uncertainty in measurement. It provides a realistic rather than a `safe' value of uncertainty based on the concept that there is no inherent difference between an uncertainty component arising from a random effect and one arising from a correction for a systematic effect. The method stands, therefore, in contrast to certain older methods that have the following two ideas in common:
... When the value of a measurand is reported, the best estimate of its value and the best estimate of the uncertainty of that estimate must be given, for if the uncertainty is to err, it is not normally possible to decide in which direction it should err `safely'. An understatement of uncertainties might cause too much trust to be placed in the values reported, with sometimes embarrassing and even disastrous consequences. A deliberate overstatement of uncertainty could also have undesirable repercussions.''

The examples of the `undesirable repercussions' given by the ISO Guide are of a metrological type. In my opinion there are other, physical, reasons which should also be considered. Deliberately overstating the uncertainty leads to a better (but artificial) agreement between results and `known' values or the results of other experiments. This prevents the identification of possible systematic effects that could have biased the result, effects which can only be identified by measuring the same physical quantity with a different instrument, method, etc. (the so-called `reproducibility conditions' [3]). Behind systematic effects there is always some physics, which may be `trivial' (noise, miscalibration, rough approximations, background, etc.), but may also be new phenomenology. If the results of different experiments disagree by far more than their uncertainties, the experimenters can compare their methods, track down systematic errors and, finally, produce a combined result of higher quality. In this respect, a quotation from Feynman is in order:
``Well, QED is very nice and impressive, but when everything is so neatly wrapped up in blue bows, with all experiments in exact agreement with each other and with the theory - that is when one is learning absolutely nothing.''
``On the other hand, when experiments are in hopeless conflict - or when the observations do not make sense according to conventional ideas, or when none of the new models seems to work, in short when the situation is an unholy mess - that is when one is really making hidden progress and a breakthrough is just around the corner!''
(R. Feynman, 1973 Hawaii Summer Institute, cited by D. Perkins at the 1995 EPS Conference, Brussels).
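The Guide's central point quoted above, that an uncertainty component evaluated statistically (Type A) and one evaluated by other means (Type B, e.g. from a calibration certificate) are treated on the same footing and combined in quadrature, can be illustrated with a minimal sketch. The readings and the calibration half-width below are invented for illustration:

```python
import math

# Repeated readings of the same quantity (invented numbers, for illustration).
readings = [10.03, 10.05, 9.98, 10.01, 10.04]
n = len(readings)
mean = sum(readings) / n

# Type A evaluation: statistical analysis of the repeated observations.
# Standard uncertainty of the mean = sample standard deviation / sqrt(n).
s2 = sum((x - mean) ** 2 for x in readings) / (n - 1)
u_A = math.sqrt(s2 / n)

# Type B evaluation: suppose a calibration certificate states the instrument
# reads correctly within +/- 0.02 units; assuming a uniform distribution over
# that interval, the standard uncertainty is the half-width divided by sqrt(3).
half_width = 0.02
u_B = half_width / math.sqrt(3)

# The two components, whatever their origin, are combined in quadrature.
u_c = math.sqrt(u_A ** 2 + u_B ** 2)

print(f"best estimate = {mean:.3f}")
print(f"u_A = {u_A:.4f}, u_B = {u_B:.4f}, combined u_c = {u_c:.4f}")
```

The point is that once both components are expressed as standard uncertainties, no `safe' inflation of either is needed: the combined value is a realistic statement of what is actually known about the measurand.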