A recent New Scientist article [1] deals with errors in courts due to ``bad mathematics'', advocating the use of so-called Bayesian methods to avoid them. Although most examples of the resulting ``rough justice'' come from real-life cases, the first ``probabilistic pitfall'' is taken from crime fiction, namely from a ``1974 episode of the cult US television series'' Columbo, in which a ``society photographer has killed his wife and disguised it as a bungled kidnapping.''

The alleged mistake happens in the concluding scene, when ``the hangdog detective [...] induces the murderer to grab from a shelf of 12 cameras the exact one used to snap the victim before she was killed.'' According to the article's author (or to the experts on whom science journalists often rely), the point is that ``killer or not, anyone would have a 1 in 12 chance of picking the same camera at random. That kind of evidence would never stand up in court.'' Then a sad doubt is raised: ``Or would it? In fact, such probabilistic pitfalls are not limited to crime fiction.''

Figure: Paul Galesco, played by Dick Van Dyke, taking the picture of his wife before killing her.

Not being particularly fond of this kind of entertainment myself (perhaps with the small exception of the Columbo series, which I watch occasionally), I cannot tell how much crime fiction, in literature and film, is affected by ``probabilistic pitfalls''. I can, however, firmly testify that scientific practice is full of mistakes of the kind reported in Ref. [1], which happen even in fields the general public would hardly suspect, like frontier physics, whose protagonists are supposed to have mathematical skills superior to those of police officers and lawyers.

But it is not just a question of mathematical skill (complex calculations are usually done without mistakes), but of probabilistic reasoning (what to calculate!). This is a rather old story. In fact, as David Hume complained 260 years ago [2],

"The celebrated Monsieur Leibniz has observed it to be a defect in the common systems of logic, that they are very copious when they explain the operations of the understanding in the forming of demonstrations, but are too concise when they treat of probabilities, and those other measures of evidence on which life and action entirely depend, and which are our guides even in most of our philosophical speculations."
It seems to me that the general situation has not improved much. Yes, `statistics' (a name that, meaning too much, risks meaning little) is taught in colleges and universities to students of several fields, but it is distorted by the `frequentistic approach', according to which one is not allowed to speak of probabilities of causes. This is, in my opinion, the original sin that gives grounds for a large number of probabilistic mistakes even by otherwise very valuable scientists and practitioners (see e.g. chapter 1 of Ref. [3]).
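The `probability of causes' that the frequentist approach forbids is precisely what Bayes' theorem provides. As a reminder (a standard textbook formulation, not a formula taken from Ref. [1]), for a set of mutually exclusive causes $C_i$ and an observed effect $E$:

```latex
P(C_i \mid E) \;=\; \frac{P(E \mid C_i)\, P(C_i)}{\sum_j P(E \mid C_j)\, P(C_j)}\,,
```

i.e. the probability of each possible cause is updated according to how well that cause would explain the observed evidence, weighted by its prior probability.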

Going back to the ``shambling sleuth Columbo'': since my wife and my daughter are his fans, we happen to own the DVD collections of the first seven seasons. So it happened that I watched with them, not long ago (perhaps last winter), the `incriminated', superb episode Negative Reaction [4], one of the best performances of Peter Falk in the role of the famous lieutenant. However, on reading the New Scientist article, I did not recall having had a `negative reaction' to the final scene, although I use and teach Bayesian methods for a large variety of applications. Had I overlooked something?

I watched the episode again and was again convinced that Columbo's last move was a conclusive checkmate.¹ I then invited some friends, all with physics or mathematics degrees and somewhat knowledgeable about the Bayesian approach, to enjoy an evening together during the recent end-of-year holidays, in order to let them make up their minds as to whether Columbo had good reasons to bring Paul Galesco, magnificently played by Dick Van Dyke, before a court (Bayes or not, we had some fun...).

The verdict was unanimous: Columbo was fully absolved or, more precisely, there was nothing to reproach the story writer, Peter S. Fischer, for. The convivial after-dinner jury also asked me to write a note on the question, possibly with a short, self-contained introduction to the `required math'. Not only to `defend Columbo' or, more properly, his writer, but also, more seriously, to defend the Bayesian approach, and in particular its applications in forensic science. In fact, we all deemed that the opening paragraphs of the New Scientist article could cast a bad light on the rest of its contents.

Imagine a casual reader of the article, possibly a lawyer, a judge or a student of forensic science, to whom the article was effectively addressed, and who might have seen Negative Reaction. Most likely he or she considered the policemen's charges against the photographer legitimate. The `negative reaction' would be that such a reader would take the rest of the article as dubious support for some `strange math' that can never replace human intuition in a trial.² Not a good service to the `Bayesian cause'. (Imagine somebody trying to convince you with arguments you hardly understand, and who begins by asserting something you consider manifestly false.)

In the following section I introduce the basic elements of Bayesian reasoning (subsection 2.4 can be skipped on a first reading), using as a guiding example a toy model in which the analysis of Ref. [1] (``1 in 12'', or, more precisely, ``1 in 13'') holds. Section 4 shows how such evidence would change Columbo's and the jury's opinion. Then I discuss in section 5 why a similar argument does not apply to the clip in which Columbo finally frames Galesco, and why all witnesses of the crucial actions (including TV watchers, with the exception of the author of Ref. [1] and perhaps a few others) and a hypothetical court jury (provided the scene had been properly reported) must have been absolutely positive that the photographer killed his wife (or at least knew who did it in his place).
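To anticipate the flavour of that analysis, the weight of a ``1 in 13'' coincidence can be sketched as a likelihood-ratio (Bayes factor) update of the odds. The sketch below is mine, not taken from Ref. [1], and the prior odds in it are purely illustrative:

```python
from fractions import Fraction

# Toy model: under 'guilty' the suspect picks the incriminating camera
# with certainty, P(E|G) = 1; under 'innocent' he picks one of the 13
# cameras at random, P(E|I) = 1/13. Evidence E: he picked that camera.

def posterior_odds(prior_odds, p_e_given_g, p_e_given_i):
    """Posterior odds = prior odds times the Bayes factor P(E|G)/P(E|I)."""
    return prior_odds * p_e_given_g / p_e_given_i

prior = Fraction(1, 1)   # even prior odds (1:1), purely illustrative
post = posterior_odds(prior, Fraction(1), Fraction(1, 13))

print(post)                       # 13 -- posterior odds 13:1
print(float(post / (1 + post)))   # ~0.93 probability of guilt
```

Such evidence strengthens a belief by a factor of 13, but, as discussed later, it is far from the practical certainty conveyed by the episode's final scene.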

The rest of the paper might be of marginal interest if you are just curious to know why I hold a different opinion from Ref. [1], although I agree on the validity of Bayesian reasoning. In fact, in the end this paper is not the `short note' initially planned. The reason is that over the past months I have had many discussions on some of the questions treated here with people from several fields. I have realized once more that it is not easy to put the basic principles to work if some important issues are not well understood. People are used to solving their statistical problems with `ad hoc' formulae (see Appendix H) and therefore tend to add some `Bayesian recipes' to their formularium. The risk is then too high that one turns to simplified methods (Bayesian methods require a bit more thinking and computation than others!) that are even advertised as `objective'. Or one just refuses to use any math, in defense of pure intuition. (By the way, this is an important point, and I will take the opportunity to comment on the apparent contradictions between intuition and the formal evaluation of beliefs, defending... both, but encouraging the use of the latter, which is superior to the former in complex situations - see in particular Appendix C.)

So, to conclude the introduction, this document offers several levels of reading:


Giulio D'Agostini 2010-09-30