- Uncertainties due to statistical errors are currently treated using the frequentist concept of `confidence interval', although
  - there are well-known cases (of great relevance in frontier physics) in which the approach is not applicable (e.g. a small number of observed events, or a measurement close to the edge of the physical region);
  - the procedure is rather unnatural, and in fact the interpretation of the results is unconsciously subjective (as will be discussed later).
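The edge-of-the-physical-region problem can be illustrated with a toy sketch (all numbers assumed, not from the text): a quantity that is physically non-negative, such as a neutrino mass squared, measured with Gaussian resolution near the boundary.

```python
# Hypothetical illustration: a naive 68% confidence interval for a
# physically non-negative quantity (e.g. m^2 >= 0) measured near the
# boundary. Toy numbers: x_obs = -0.5, sigma = 1.0.
x_obs, sigma = -0.5, 1.0
lo, hi = x_obs - sigma, x_obs + sigma
print(f"68% CL interval: [{lo:.1f}, {hi:.1f}]")   # [-1.5, 0.5]

# Most of the interval lies in the unphysical region m^2 < 0.
frac_physical = max(0.0, hi) / (hi - lo)
print(f"fraction in the physical region: {frac_physical:.0%}")  # 25%
```

The interval mostly covers unphysical values, and for a slightly lower `x_obs` it would lie entirely outside the physical region.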

- There is no satisfactory theory or model to treat uncertainties due to systematic errors consistently.^{1.3} Only *ad hoc* prescriptions can be found in the literature and in practice (*``my supervisor says ...''*): *``add them linearly''; ``add them linearly if ..., else add them quadratically''; ``don't add them at all''.*^{1.4} The fashion at the moment is to add them quadratically if they are considered to be independent, or to build a covariance matrix of the statistical and systematic contributions to treat the general case. In my opinion, besides all the theoretically motivated excuses for justifying this praxis, there is simply the reluctance of experimentalists to combine linearly 10, 20 or more contributions to a global uncertainty, as the (out-of-fashion) `theory' of maximum bounds would require.^{1.5}
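The practical gap between the two prescriptions can be sketched numerically (a toy example with assumed numbers): combining 20 equal, independent contributions linearly versus in quadrature.

```python
import math

# Toy sketch (numbers assumed): 20 independent systematic contributions,
# each of size 1 in arbitrary units.
contributions = [1.0] * 20

# Linear sum, as a 'maximum bounds' approach would require.
linear = sum(contributions)

# Sum in quadrature, the currently fashionable prescription.
quadratic = math.sqrt(sum(c ** 2 for c in contributions))

print(f"linear:     {linear:.2f}")     # 20.00
print(f"quadrature: {quadratic:.2f}")  # 4.47
```

The quadrature result grows only like the square root of the number of contributions, which makes the reluctance to add linearly easy to understand.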

- The standard propagation formula, $\sigma^2(Y) = \sum_i \left(\frac{\partial Y}{\partial X_i}\right)^2 \sigma^2(X_i)$, is not justified (especially if contributions due to systematic effects are included). This formula is derived from the rules of probability distributions, making use of linearization (a usually reasonable approximation for routine applications). This leads to theoretical and practical problems:

- The input quantities entering the formula should be standard deviations of probability distributions, and the quantities they refer to should have the meaning of random variables.
- In the case of systematic effects, how do we evaluate the input quantities entering in the formula in a way which is consistent with their meaning as standard deviations?
- How do we properly take into account correlations (assuming we have solved the previous questions)?
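As one possible way of handling the last point, the covariance-matrix bookkeeping mentioned above can be sketched for a minimal case (all numbers assumed): two measurements with independent statistical uncertainties that share a fully correlated calibration systematic, propagated to their sum.

```python
import math

# Toy sketch (assumed numbers): statistical uncertainties s1, s2 and a
# common, fully correlated systematic contribution c.
s1, s2, c = 0.3, 0.4, 0.5

# Covariance matrix: statistical variances on the diagonal; the shared
# systematic adds c^2 to every element (100% correlation).
V = [[s1**2 + c**2, c**2],
     [c**2,         s2**2 + c**2]]

# For the sum Y = X1 + X2 the propagation formula reduces to
# sigma^2(Y) = sum over all covariance-matrix elements.
sigma_with_corr = math.sqrt(sum(sum(row) for row in V))

# Ignoring the off-diagonal terms (i.e. the correlation) underestimates it.
sigma_no_corr = math.sqrt(V[0][0] + V[1][1])

print(f"with correlation:     {sigma_with_corr:.3f}")  # 1.118
print(f"ignoring correlation: {sigma_no_corr:.3f}")    # 0.866
```

Note that with full correlation the common systematic effectively adds linearly (here the result equals sqrt(s1^2 + s2^2 + (2c)^2)), which is why dropping the off-diagonal terms changes the answer appreciably.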