Instead of completely rewriting the primer, which would have produced a thicker report harder to read sequentially, I have divided the text into three parts.

- The first part is dedicated to a critical review of standard statistical methods and to a general overview of the proposed alternative. It contains references to the other two parts for details.
- The second part essentially reproduces the old primer, subdivided into chapters for easier reading and with some small corrections.
- The third part contains an appendix with remarks on general aspects of probability, as well as other applications.

This structure inevitably leads to some repetition, which I have tried to keep to a minimum. In any case, *repetita juvant*, especially in this subject, where the real difficulty is not understanding the formalism, but shaking off deep-rooted prejudices. This is also the reason why this report is somewhat verbose (I have to admit) and contains a plethora of footnotes, an indication that this topic requires a more extensive treatise.

A final comment concerns the title of the report. As discussed in the last lecture at CERN, a title closer to the spirit of the lectures would have been ``Probabilistic reasoning ... ''. In fact, I think the important thing is to have a theory of uncertainty in which ``probability'' has the same meaning for everybody: precisely the meaning which the human mind has developed naturally and which frequentists have tried to kill. Using the term ``Bayesian'' might seem somewhat reductive, as if the methods illustrated here always required explicit use of Bayes' theorem. However, in common usage `Bayesian' is a synonym for `based on subjective probability', and this is why these methods are the most general for handling uncertainty. I have therefore kept the title of the lectures, in the hope of attracting the attention of those who are curious about what `Bayesian' might mean.

Email: `dagostini@roma1.infn.it`

URL: `http://www-zeus.roma1.infn.it/agostini/`