We already discussed how online learning works in Bayesian inference using conjugate distributions, with the Binomial distribution as the likelihood and the Beta distribution as its conjugate prior. In this post, we will use the Gaussian distribution for online (sequential) learning in Bayesian inference. The conjugate prior of a Gaussian likelihood (with known variance) is a Gaussian itself; that is why the Gaussian distribution is called self-conjugate. Let's try to derive it.

Given a trial result x, from Bayes' formula we get:

p(μ | x) = p(x | μ) p(μ) / p(x)

We will try to derive the posterior p(μ | x), given the likelihood p(x | μ) and the prior distribution p(μ). The parameters of a Gaussian are the mean μ and the variance σ². In this post, we will demonstrate how to calculate the posterior under the assumption that σ² is known; thus, we will only learn the parameter μ. We can ignore the marginal probability p(x) for now, since it is only a constant used for normalization. Proceeding from the formula above, we have p(μ | x) ∝ p(x | μ) p(μ).
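Because the posterior is again Gaussian, it can be fed back in as the prior for the next observation, which is exactly what makes sequential updating work. The following is a minimal sketch of that update loop; the class name `GaussianMeanLearner` and the example numbers are illustrative, not from the original post, and the update formulas are the standard conjugate Gaussian-mean results (written in precision form) under the stated known-variance assumption.

```python
# Sketch of sequential Bayesian learning of a Gaussian mean with a known
# likelihood variance. The posterior after each observation is Gaussian,
# so it becomes the prior for the next observation.

class GaussianMeanLearner:
    """Online learner for the mean mu of N(mu, sigma2_lik), sigma2_lik known.

    Prior on mu: N(mu0, sigma2_0). Since the Gaussian is self-conjugate,
    the posterior stays Gaussian after every update.
    """

    def __init__(self, mu0, sigma2_0, sigma2_lik):
        self.mu = mu0                  # current posterior mean of mu
        self.sigma2 = sigma2_0         # current posterior variance of mu
        self.sigma2_lik = sigma2_lik   # known variance of the likelihood

    def update(self, x):
        # Standard conjugate update for one observation x (precision form):
        #   1/sigma2_post = 1/sigma2_prior + 1/sigma2_lik
        #   mu_post = sigma2_post * (mu_prior/sigma2_prior + x/sigma2_lik)
        sigma2_post = 1.0 / (1.0 / self.sigma2 + 1.0 / self.sigma2_lik)
        mu_post = sigma2_post * (self.mu / self.sigma2 + x / self.sigma2_lik)
        self.mu, self.sigma2 = mu_post, sigma2_post
        return self.mu, self.sigma2


# Example: a broad prior N(0, 100), likelihood variance 1, and a stream of
# observations centered around 5. The posterior mean drifts toward 5 and the
# posterior variance shrinks with each observation.
learner = GaussianMeanLearner(mu0=0.0, sigma2_0=100.0, sigma2_lik=1.0)
for x in [4.9, 5.1, 5.0, 4.8, 5.2]:
    mu, var = learner.update(x)
print(mu, var)
```

Note that processing the observations one at a time gives the same posterior as a single batch update on all five, which is the appeal of the conjugate sequential form: no past data needs to be stored, only the current (μ, σ²) pair.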