
Bayes' theorem

$ \newcommand{\thetavec}{\boldsymbol{\theta}} \newcommand{\pr}{\textrm{p}}$

Bayes' theorem, with $\thetavec$ the vector of parameters we seek:

$$ \overbrace{\pr(\thetavec \mid \textrm{data})}^{\textrm{posterior}} = \frac{\color{red}{\overbrace{\pr(\textrm{data} \mid \thetavec)}^{\textrm{likelihood}}} \times \color{blue}{\overbrace{\pr(\thetavec)}^{\textrm{prior}}}} {\color{darkgreen}{\underbrace{\pr(\textrm{data})}_{\textrm{evidence}}}} $$

If we view the prior as the initial information we have about $\thetavec$, summarized as a probability density function, then Bayes' theorem tells us how to update that information after observing some data: this is the posterior pdf.
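To make the update concrete, here is a minimal numerical sketch (not from the lecture itself): a grid approximation of the posterior for a coin's heads probability $\theta$, assuming a flat prior and hypothetical data of 7 heads in 10 tosses. The evidence $\pr(\textrm{data})$ is just the normalization of the numerator.

In [ ]:
import numpy as np

# Grid of parameter values theta in [0, 1]
theta = np.linspace(0, 1, 201)

# Assumed flat prior p(theta), normalized on the grid
prior = np.ones_like(theta)
prior /= np.trapz(prior, theta)

# Hypothetical data: 7 heads in 10 tosses
heads, tosses = 7, 10
likelihood = theta**heads * (1 - theta)**(tosses - heads)  # p(data | theta)

# Bayes' theorem: posterior = likelihood * prior / evidence
unnormalized = likelihood * prior
evidence = np.trapz(unnormalized, theta)   # p(data), the normalizing constant
posterior = unnormalized / evidence        # p(theta | data)

print(f"posterior mean of theta: {np.trapz(theta * posterior, theta):.3f}")

Swapping the flat prior for one peaked near $\theta = 0.5$ shows how the posterior compromises between prior information and the likelihood of the observed data.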

We will work through the post "Bayes' Theorem with Lego".

Discussion: the Bayesian Brain hypothesis.

Here is an interesting article about this idea, which might serve as a starting point.

Further material
