A person $A$ meets an old friend $B$ in a pub. $B$ proposes that the drinks should be paid for by whichever of the two draws the card of lower value from a pack (according to some rule which is of no interest to us). $A$ accepts and $B$ wins. This situation repeats in the following days, and it is always $A$ who has to pay. What is the probability that $B$ has become a cheat, as the number $n$ of consecutive wins increases?
The two hypotheses are: cheat ($C$) and honest ($H$). The initial probability $P_0(C)$ is low because $B$ is an ``old friend'', but certainly not zero: let us assume $P_0(C)=5\,\%$. To make the problem simpler, let us make the approximation that a cheat always wins (not a very clever cheat): $P(W_n\,|\,C)=1$. The probability of winning if he is honest is, instead, given by the rules of probability, assuming that the chance of winning at each trial is $1/2$ (``why not?'', we shall come back to this point later): $P(W_n\,|\,H)=2^{-n}$. Bayes' theorem then gives
\[
P(C\,|\,W_n)=\frac{P(W_n\,|\,C)\,P_0(C)}{P(W_n\,|\,C)\,P_0(C)+P(W_n\,|\,H)\,P_0(H)}\,.
\]
The result is shown in the following table.
\begin{center}
\begin{tabular}{c|c|c}
$n$ & $P(C\,|\,W_n)$ (\%) & $P(H\,|\,W_n)$ (\%) \\
\hline
0 & 5.0 & 95.0 \\
1 & 9.5 & 90.5 \\
2 & 17.4 & 82.6 \\
3 & 29.6 & 70.4 \\
4 & 45.7 & 54.3 \\
5 & 62.7 & 37.3 \\
6 & 77.1 & 22.9 \\
\end{tabular}
\end{center}
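The numbers in the table can be checked with a few lines of code. The sketch below (function and variable names are illustrative, not from the text) applies Bayes' theorem directly, with the assumed prior $P_0(C)=5\,\%$ and the likelihoods $P(W_n\,|\,C)=1$ and $P(W_n\,|\,H)=2^{-n}$:

```python
# Direct application of Bayes' theorem:
# P(C|W_n) = P(W_n|C) P0(C) / [P(W_n|C) P0(C) + P(W_n|H) P0(H)]
def p_cheat(n, p0=0.05):
    """Probability that the friend is a cheat after n consecutive wins."""
    likelihood_cheat = 1.0          # approximation: a cheat always wins
    likelihood_honest = 2.0 ** -n   # fair chance 1/2 at each independent trial
    num = likelihood_cheat * p0
    den = num + likelihood_honest * (1.0 - p0)
    return num / den

for n in range(7):
    pc = p_cheat(n)
    print(f"n={n}: P(C|W_n) = {100*pc:.1f}%   P(H|W_n) = {100*(1-pc):.1f}%")
```

Running the loop reproduces the table row by row.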
Naturally, as $B$ continues to win, the suspicion of $A$ increases. It is important to make two remarks.
- The answer is always probabilistic. $A$ can never reach absolute certainty that $B$ is a cheat, unless he catches $B$ cheating, or $B$ confesses to having cheated. This is coherent with the fact that we are dealing with random events and with the fact that any sequence of outcomes has the same probability (although there is only one possibility over $2^n$ in which $B$ is always the luckier). Making use of $P(C\,|\,W_n)$, $A$ can make a decision about the next action to take:
  - continue the game, with probability $P(C\,|\,W_n)$ of losing with certainty the next time too;
  - refuse to play further, with probability $P(H\,|\,W_n)$ of offending the innocent friend.
- If $P_0(C)=0$ the final probability will always remain zero: if $A$ fully trusts $B$, then he has just to record the occurrence of a rare event when $n$ becomes large.
To better follow the process of updating the probability when new experimental data become available, according to the Bayesian scheme ``the final probability of the present inference is the initial probability of the next one'', let us call $P(C\,|\,W_{n-1})$ the probability assigned after the previous win. The iterative application of the Bayes formula yields
\[
P(C\,|\,W_n)=\frac{P(W\,|\,C)\,P(C\,|\,W_{n-1})}{P(W\,|\,C)\,P(C\,|\,W_{n-1})+P(W\,|\,H)\,P(H\,|\,W_{n-1})}\,,
\]
where $P(W\,|\,C)=1$ and $P(W\,|\,H)=1/2$ are the probabilities of each single win. The interesting result is that exactly the same values of $P(C\,|\,W_n)$ as in the table above are obtained (try to believe it!).
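The equivalence of the iterative scheme with the direct computation can also be checked numerically. The sketch below (names illustrative) updates the probability one win at a time, using the single-trial likelihoods $P(W\,|\,C)=1$ and $P(W\,|\,H)=1/2$, with each posterior serving as the next prior:

```python
# Iterative Bayesian updating: the posterior after win n-1 serves as
# the prior for win n ("the final probability of the present inference
# is the initial probability of the next one").
def update(p_prev, p_win_cheat=1.0, p_win_honest=0.5):
    """One application of Bayes' formula after a single observed win."""
    num = p_win_cheat * p_prev
    return num / (num + p_win_honest * (1.0 - p_prev))

p = 0.05  # initial probability P0(C)
for n in range(1, 7):
    p = update(p)
    print(f"after win {n}: P(C|W_n) = {100*p:.1f}%")
```

Each printed value coincides with the corresponding row obtained by applying Bayes' theorem to all $n$ wins at once.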