# Bayesian Update
The posterior distribution of a parameter combines the prior with the likelihood of the observed data through Bayes' rule,

$$
f(p \mid \text{data}) = \frac{P(\text{data} \mid p)\, f(p)}{P(\text{data})} \propto P(\text{data} \mid p)\, f(p).
$$
## Motivating Example
Consider a coin where we do not know the chance of heads \(p\). Let's take the prior that \(p\) is uniformly any value between 0 and 1. That is,

$$
f(p) = 1, \qquad 0 \le p \le 1.
$$
The chance that the coin lands heads depends on the value of \(p\), which is random. We can marginalize over all possible values of \(p\),

$$
P(H) = \int_0^1 P(H \mid p)\, f(p)\, dp.
$$
In fact this is just the expected value of \(P(H \mid p)\) over the prior,

$$
P(H) = E\big[P(H \mid p)\big].
$$
The chance of heads given \(p\) is of course \(p\) itself, so

$$
P(H) = E[p] = \int_0^1 p\, dp = \frac{1}{2}.
$$
This is not so surprising: since \(p\) is uniform, averaging over many draws of \(p\) favors neither heads nor tails.
However, what’s the chance of getting two heads \(P(HH)\) out of two tosses?
We know that \(P(HH \mid p) = p^2\), therefore

$$
P(HH) = E[p^2] = \int_0^1 p^2\, dp = \frac{1}{3}.
$$
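As a quick sanity check, here is a minimal Monte Carlo sketch (variable names and sample size are just illustrative) that draws \(p\) uniformly, tosses each coin twice with that bias, and estimates \(P(H)\) and \(P(HH)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 1_000_000

# Draw a random bias p ~ Uniform(0, 1) for each simulated coin.
p = rng.uniform(0.0, 1.0, size=n_sims)

# Toss each coin twice with its own bias p.
first = rng.uniform(size=n_sims) < p
second = rng.uniform(size=n_sims) < p

print("P(H)  ~", first.mean())             # close to 1/2
print("P(HH) ~", (first & second).mean())  # close to 1/3, not 1/4
```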
The answer is not the \(1/4\) we might expect, which means the two tosses are not independent when \(p\) is random. To see why, look at the conditional probability that the second toss lands heads given that the first one did,

$$
P(H_2 \mid H_1) = \frac{P(HH)}{P(H_1)} = \frac{1/3}{1/2} = \frac{2}{3}.
$$
Where did the \(2/3\) come from? Let's look at what happens to the distribution of \(p\) once we know the first toss landed heads. Applying Bayes' rule,

$$
f(p \mid H_1) \propto P(H_1 \mid p)\, f(p) = p \cdot 1 = p.
$$
Normalizing, we find that

$$
f(p \mid H_1) = \frac{p}{\int_0^1 p\, dp} = 2p.
$$
Thus,

$$
P(H_2 \mid H_1) = \int_0^1 P(H_2 \mid p)\, f(p \mid H_1)\, dp = \int_0^1 p \cdot 2p\, dp = \frac{2}{3}.
$$
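The same simulation idea can check this reasoning: keeping only the draws whose first toss was heads, the surviving values of \(p\) follow the density \(2p\) (so their mean is \(\int_0^1 p \cdot 2p\, dp = 2/3\)), and the fraction of second-toss heads among them should also be near \(2/3\). A minimal sketch, again with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 1_000_000

p = rng.uniform(size=n_sims)
first = rng.uniform(size=n_sims) < p
second = rng.uniform(size=n_sims) < p

# Condition on the first toss landing heads.
p_given_H = p[first]

print("E[p | H1]   ~", p_given_H.mean())      # close to 2/3
print("P(H2 | H1)  ~", second[first].mean())  # close to 2/3
```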
## Beta and Binomial Update
We are interested in \(n\) IID Bernoulli trials \(I_1, \dots, I_n\), where we define the sum \(S = \sum_{k=1}^{n} I_k\).
The Bayesian update rule is

$$
f(p \mid S = k) \propto P(S = k \mid p)\, f(p).
$$
Let the prior for the parameter \(p\) be the beta distribution \(\text{Beta}(r, s)\),

$$
f(p) = C(r, s)\, p^{r-1}(1-p)^{s-1}, \qquad C(r, s) = \frac{\Gamma(r+s)}{\Gamma(r)\,\Gamma(s)};
$$
the likelihood for \(S\) is then the binomial distribution,

$$
P(S = k \mid p) = {n \choose k} p^{k}(1-p)^{n-k}.
$$
Hence, the posterior distribution is

$$
f(p \mid S = k) \propto p^{k}(1-p)^{n-k}\, p^{r-1}(1-p)^{s-1} = p^{r+k-1}(1-p)^{s+n-k-1},
$$

that is, \(p \mid S = k \sim \text{Beta}(r+k,\, s+n-k)\).
Thus, the Bayesian update adds the number of successes \(k\) to \(r\) and the number of failures \(n-k\) to \(s\).
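As a sketch of this conjugate update in code (the prior parameters and data below are made-up examples), we can compare the closed-form Beta posterior with a brute-force application of Bayes' rule on a grid:

```python
import numpy as np
from scipy.stats import beta, binom

r, s = 2.0, 2.0   # hypothetical prior Beta(r, s)
n, k = 10, 7      # hypothetical data: k successes in n trials

# Closed-form conjugate update: successes add to r, failures add to s.
posterior = beta(r + k, s + (n - k))

# Brute-force Bayes' rule: likelihood x prior, normalized on a grid of p values.
p = np.linspace(0.0005, 0.9995, 1000)
dp = p[1] - p[0]
unnormalized = binom.pmf(k, n, p) * beta.pdf(p, r, s)
numeric = unnormalized / (unnormalized.sum() * dp)

print("max |numeric - closed form|:", np.max(np.abs(numeric - posterior.pdf(p))))
```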
Expectation : \(E(p \mid S=k) = \frac{r+k}{r+s+n}\)
MAP : \(\text{mode}(p \mid S=k) = \frac{r+k-1}{r+s+n-2}\)
Transition Rule : \(P(S_{n+1} = k+1 \mid S_n = k) = E(p \mid S_n=k)\)
Evidence : The chance of \(k\) heads after \(n\) tosses using the beta prior is the beta-binomial distribution.
$$
P(S_n = k) = {n \choose k}\frac{C(r,s)}{C(r+k, s+n-k)}
$$
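These closed-form summaries can be cross-checked numerically. The sketch below (with arbitrary example values for \(r\), \(s\), \(n\), \(k\)) compares the expectation and MAP formulas against the Beta posterior, and the evidence formula against `scipy.stats.betabinom`:

```python
from scipy.stats import beta, betabinom

r, s = 2.0, 3.0   # hypothetical prior parameters
n, k = 10, 6      # hypothetical data: k heads in n tosses

posterior = beta(r + k, s + n - k)

# Posterior mean and MAP (mode of the Beta posterior).
print("E(p | S=k)    :", posterior.mean(), "=", (r + k) / (r + s + n))
print("mode(p | S=k) :", (r + k - 1) / (r + s + n - 2))

# Evidence: P(S_n = k) is the beta-binomial pmf with parameters (n, r, s).
print("P(S_n = k)    :", betabinom(n, r, s).pmf(k))
```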