Ante factum

The statement of the American Statistical Association on p-values, issued in March this year, did not arrive completely unexpected. Many scientists were in fact aware of, and worried about, ``science's dirtiest secret'', i.e. that ``the `scientific method' of testing hypotheses by statistical analysis stands on a flimsy foundation'' [30]. Indeed, as Allen Caldwell of MPI Munich eloquently puts it (e.g. in [31]), ``The real problem is not that people have difficulties in understanding Bayesian reasoning. The problem is that they do not understand the frequentist approach and what can be concluded from a frequentist analysis. What is not understood, or forgotten, is that the frequentist analysis relates only to possible data outcomes within a model context, and not probabilities of a model being correct. This misunderstanding leads to faulty conclusions.''

Faulty conclusions based on p-values are countless in all fields of research, and frankly I am personally much more worried when they might affect our health and security, or the future of our planet, than when they spread around unjustified claims of revolutionary discoveries or of possible failures of the so-called Standard Model of Particle Physics [9]. For instance, ``A lot of what is published is incorrect'', The Lancet's Editor-in-Chief Richard Horton reported last year [36]. This could be because, looking around more or less `at random', statistically `significant results' will sooner or later show up (as in the last frame of the xkcd cartoon shown in Fig. 1; see [37] for the full story);

\begin{figure}
\begin{center}
\epsfig{file=xkcd_significant_result.eps,width=0.42\linewidth}
\end{center}
\caption{A `significant' result obtained by trying and trying again (``provando e riprovando'') [37].}
\end{figure}
or because dishonest researchers (or ones driven by wishful thinking, which in Science is more or less the same) might do some p-hacking (see e.g. [38] and [39]) in order to make `significant effects' appear; remember that ``if you torture the data long enough, it will confess to anything'' [40].
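The arithmetic behind the cartoon is worth a line. As a minimal sketch, assuming $n$ independent tests of hypotheses that are all actually true, each declared `significant' when its p-value falls below a threshold $\alpha$, the probability that at least one test crosses the threshold is
\begin{displaymath}
P(\mbox{at least one `significant' result}) = 1 - (1-\alpha)^{n}\,,
\end{displaymath}
which for the cartoon's twenty jelly-bean colours ($n=20$) and the customary $\alpha=0.05$ already gives $1-0.95^{20}\approx 0.64$: more often than not, a `discovery' shows up by chance alone.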

A special mention is deserved by the February 2014 editorial of David Trafimow, Editor of Basic and Applied Social Psychology (BASP), in which he takes a strong position against the ``null hypothesis significance testing procedure (NHSTP)'', because it ``has been shown to be logically invalid and to provide little information about the actual likelihood of either the null or experimental hypothesis'' [41]. A second editorial, signed together with his Associate Editor Michael Marks and published on February 15, 2015, in fact had a large echo last year (see e.g. [42], [43] and [44]): in it they announce that, after ``a grace period allowed to authors'', ``from now on, BASP is banning the NHSTP'' [45].
