Joint inference and marginalization of nuisance parameters

A different approach, which produces identical results, is to think of a joint inference about both the quantities of interest and the influence variables:

$$
p(\mu,\boldsymbol{h} \,\vert\, \boldsymbol{d}, I_0) \;\propto\;
p(\boldsymbol{d} \,\vert\, \mu,\boldsymbol{h}, I_0)\,
p_0(\mu,\boldsymbol{h} \,\vert\, I_0)\,. \qquad (77)
$$

Then marginalization is applied to the variables we are not interested in (the so-called nuisance parameters), obtaining
$$
p(\mu \,\vert\, \boldsymbol{d}, I_0) \;=\;
\int p(\mu,\boldsymbol{h} \,\vert\, \boldsymbol{d}, I_0)\,
\mbox{d}\boldsymbol{h} \qquad (78)
$$
$$
\phantom{p(\mu \,\vert\, \boldsymbol{d}, I_0)} \;\propto\;
\int p(\boldsymbol{d} \,\vert\, \mu,\boldsymbol{h}, I_0)\,
p_0(\mu,\boldsymbol{h} \,\vert\, I_0)\,
\mbox{d}\boldsymbol{h}\,. \qquad (79)
$$
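The joint-then-marginalize recipe of Eqs. (77)-(79) can be sketched numerically. The toy model below is an illustrative assumption, not taken from the text: a single datum $d$ observed as $\mu + h$ plus Gaussian noise of width $\sigma$, a Gaussian prior on the nuisance offset $h$, and a flat prior on $\mu$. For this linear-Gaussian case the marginal posterior of $\mu$ is known analytically to be Gaussian with standard deviation $\sqrt{\sigma^2 + \sigma_h^2}$, which the grid computation reproduces.

```python
import numpy as np

# Illustrative toy setup (all numeric values are assumptions):
# datum d = mu + h + noise, noise ~ N(0, sigma),
# nuisance offset h ~ N(0, sigma_h), flat prior on mu.
d, sigma, sigma_h = 2.0, 0.5, 0.3

mu = np.linspace(-2.0, 6.0, 801)            # grid for the quantity of interest
h = np.linspace(-2.0, 2.0, 401)             # grid for the nuisance parameter
dmu, dh = mu[1] - mu[0], h[1] - h[0]
MU, H = np.meshgrid(mu, h, indexing="ij")

# Eq. (77): joint posterior ∝ likelihood × prior (prior flat in mu)
likelihood = np.exp(-0.5 * ((d - MU - H) / sigma) ** 2)
prior_h = np.exp(-0.5 * (H / sigma_h) ** 2)
joint = likelihood * prior_h

# Eqs. (78)-(79): integrate out the nuisance parameter h, then normalize
post_mu = joint.sum(axis=1) * dh
post_mu /= post_mu.sum() * dmu

mean = (mu * post_mu).sum() * dmu
std = np.sqrt(((mu - mean) ** 2 * post_mu).sum() * dmu)
# Analytic check: marginal is N(d, sqrt(sigma^2 + sigma_h^2))
print(mean, std)  # ≈ 2.0 and ≈ 0.583 = sqrt(0.25 + 0.09)
```

The same grid-based marginalization carries over to non-Gaussian likelihoods and priors, where no closed form exists; only the dimensionality of $\boldsymbol{h}$ limits the brute-force approach.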

Equation (77) shows a peculiar feature of Bayesian inference, namely the possibility of making an inference about more variables than there are observed data. Certainly, there is no magic in it, and the resulting variables will be highly correlated. Moreover, the prior cannot be improper in all variables. But, by using informative priors in which experts feel confident, this feature allows one to tackle complex problems with missing or corrupted parameters. In the end, making use of marginalization, one can concentrate on the quantities of real interest.

The formulation of the problem in terms of Eqs. (77) and (79) allows one to solve problems in which the influence variables might depend on the true value $\mu$, because $p_0(\mu,\boldsymbol{h} \,\vert\, I_0)$ can model dependencies between $\mu$ and $\boldsymbol{h}$. In most applications, $\boldsymbol{h}$ does not depend on $\mu$, and the prior factors into the product of $p_0(\mu \,\vert\, I_0)$ and $p_0(\boldsymbol{h} \,\vert\, I_0)$. When this happens, we recover exactly the same results as obtained using the reweighting of conditional inferences approach described just above.
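The claimed equivalence with the reweighting approach can be checked numerically in the factorized case. The sketch below reuses the same illustrative linear-Gaussian toy model (all numeric values are assumptions) and compares the two routes: marginalizing the joint posterior of Eqs. (77)-(79) versus reweighting the conditional posteriors $p(\mu \,\vert\, \boldsymbol{d}, \boldsymbol{h}, I_0)$ with the prior $p_0(\boldsymbol{h} \,\vert\, I_0)$.

```python
import numpy as np

# Illustrative toy model with factorized prior (values are assumptions):
# d = mu + h + noise, noise ~ N(0, sigma), h ~ N(0, sigma_h), flat prior on mu.
d, sigma, sigma_h = 2.0, 0.5, 0.3
mu = np.linspace(-2.0, 6.0, 801)
h = np.linspace(-2.0, 2.0, 401)
dmu, dh = mu[1] - mu[0], h[1] - h[0]
MU, H = np.meshgrid(mu, h, indexing="ij")

# Route 1: joint posterior, then marginalize over h (Eqs. 77-79)
joint = (np.exp(-0.5 * ((d - MU - H) / sigma) ** 2)
         * np.exp(-0.5 * (H / sigma_h) ** 2))
route1 = joint.sum(axis=1) * dh
route1 /= route1.sum() * dmu

# Route 2: reweight the conditional posteriors p(mu | d, h) with p0(h)
weights = np.exp(-0.5 * (h / sigma_h) ** 2)
weights /= weights.sum() * dh
conditionals = np.exp(-0.5 * ((d - MU - H) / sigma) ** 2)
conditionals /= conditionals.sum(axis=0, keepdims=True) * dmu  # normalize in mu
route2 = (conditionals * weights).sum(axis=1) * dh

# The two marginal posteriors agree up to grid truncation error
max_diff = np.abs(route1 - route2).max()
print(max_diff)  # negligible compared with the peak of the posterior
```

With a factorized prior and a flat prior on $\mu$, the data carry no information on $\boldsymbol{h}$ alone, so weighting the conditionals by $p_0(\boldsymbol{h} \,\vert\, I_0)$ gives the same answer as the joint treatment; when the prior does not factorize, only the joint formulation of Eq. (77) applies.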


Giulio D'Agostini 2003-05-13