The Gaussian is a very important distribution. In this post, we will discuss it in detail by deriving its functional form, calculating its integral, and performing MLE (Maximum Likelihood Estimation). Deriving the Gaussian distribution directly in cartesian coordinates is difficult, so we will use polar coordinates instead. Before we derive the Gaussian using polar coordinates, let’s first talk about how to change the coordinate system from cartesian to polar.
(1) Changing coordinate system from cartesian to polar coordinates
Changing from the cartesian to the polar coordinate system is useful: when calculating the integral of certain functions, polar coordinates can make the computation far easier. To do that, we can use the Jacobian matrix, which collects the partial derivatives of one vector of variables with respect to another. In our case, with $x = r\cos\theta$ and $y = r\sin\theta$, the Jacobian matrix of $(x, y)$ in cartesian coordinates with respect to $(r, \theta)$ in polar coordinates is:

$$J = \begin{pmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}$$
Then, changing an integral from cartesian to polar coordinates can be done with this formula:

$$\iint f(x, y)\, dx\, dy = \iint f(r\cos\theta, r\sin\theta)\, |J|\, dr\, d\theta$$

, where $|J| = r\cos^2\theta + r\sin^2\theta = r$ is the determinant of $J$.
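As a quick sanity check, we can verify numerically that the determinant of this Jacobian is indeed $r$, using central finite differences (a minimal Python sketch; the sample point below is an arbitrary choice):

```python
import math

# Finite-difference check that det(J) = r for the map
# x = r*cos(theta), y = r*sin(theta).
def det_jacobian(r, theta, h=1e-6):
    dx_dr = ((r + h) * math.cos(theta) - (r - h) * math.cos(theta)) / (2 * h)
    dx_dt = (r * math.cos(theta + h) - r * math.cos(theta - h)) / (2 * h)
    dy_dr = ((r + h) * math.sin(theta) - (r - h) * math.sin(theta)) / (2 * h)
    dy_dt = (r * math.sin(theta + h) - r * math.sin(theta - h)) / (2 * h)
    return dx_dr * dy_dt - dx_dt * dy_dr

print(round(det_jacobian(2.0, 0.7), 6))  # ≈ 2.0, i.e. equal to r
```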
Let’s do an example! See picture below.
To calculate the area of a circle of radius $R$ in cartesian coordinates, we can do as follows.

$$A = \int_{-R}^{R} \int_{-\sqrt{R^2 - x^2}}^{\sqrt{R^2 - x^2}} dy\, dx = \pi R^2$$

Let’s just skip the detail here, because we are already familiar with the circle area formula.
We can also calculate the circle area using polar coordinates, and the result is exactly the same as what we got above. See picture below.
In cartesian coordinates, we calculate the area by dividing the circle into small boxes $dx\, dy$, then summing all those small box areas. This is relatively difficult since we have to determine the lower and upper bounds of the integral w.r.t. $y$ for each $x$. Using the polar coordinate system, we can instead parameterize the small area shown on the right of the picture above by $dr\, d\theta$ instead of $dx\, dy$. Let’s do that.
Let’s proceed with our calculation.

$$A = \int_0^{2\pi} \int_0^{R} r\, dr\, d\theta = \int_0^{2\pi} \frac{R^2}{2}\, d\theta = \pi R^2$$
See the result! We successfully calculated the circle area with ease using polar coordinates, and the final result matches the circle area formula we are familiar with.
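The polar-coordinate computation above can be mirrored numerically with a simple midpoint Riemann sum (a Python sketch; the radius $R = 3$ is just an example value):

```python
import math

# Midpoint-rule check of the polar-coordinate circle area,
# A = integral over [0, 2*pi] x [0, R] of r dr dtheta, against pi*R^2.
R, n = 3.0, 1000
dr = R / n
# Sum r*dr over midpoints r = (i + 0.5)*dr, then multiply by 2*pi for theta.
area = sum((i + 0.5) * dr * dr for i in range(n)) * 2 * math.pi
print(area, math.pi * R ** 2)  # the two values agree
```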
(2) Deriving Gaussian distribution using polar coordinates
Let’s lay out the conditions first. We already know that for the Gaussian distribution, the closer a point is to the origin (which is $(0, 0)$ in the zero-mean case), the larger the density value is. In two dimensions, the distribution looks like the picture below.
Another characteristic we know is that the distribution is symmetric in both the x and y axes. Using these assumptions, we will try to derive its mathematical formula. Again, we will use polar coordinates to make the derivation process easier. See picture below.
This is the Gaussian distribution seen from the top, with polar coordinates $(r, \theta)$ overlaid. From the picture, and since $x$ and $y$ are independent, we can write the joint density as $g(r) = f(x)\, f(y)$, where $g$ depends only on the distance $r$ from the origin. Let’s try to differentiate both sides w.r.t. $\theta$.
We know that in the Gaussian distribution, $g$ only depends on $r$ (the distance from the origin) and has the same value for any $\theta$, since $r$ stays the same. Thus, the left-hand side will be zero, and for the right-hand side we can use the product rule:

$$0 = f'(x)\, \frac{dx}{d\theta}\, f(y) + f(x)\, f'(y)\, \frac{dy}{d\theta}$$
By substituting $x = r\cos\theta$ and $y = r\sin\theta$, and using the chain rule in differentiation ($\frac{dx}{d\theta} = -r\sin\theta$ and $\frac{dy}{d\theta} = r\cos\theta$), we get:

$$0 = -f'(x)\, f(y)\, r\sin\theta + f(x)\, f'(y)\, r\cos\theta$$
By substituting back $r\sin\theta = y$ and $r\cos\theta = x$, we get:

$$0 = -y\, f'(x)\, f(y) + x\, f(x)\, f'(y) \quad \Longrightarrow \quad \frac{f'(x)}{x\, f(x)} = \frac{f'(y)}{y\, f(y)}$$
In the Gaussian distribution, this differential equation holds for any $x$ and $y$, and $x$ and $y$ are independent. This can only happen if the ratio defined by the differential equation is a constant:

$$\frac{f'(x)}{x\, f(x)} = c \quad \Longrightarrow \quad f(x) = A\, e^{\frac{c x^2}{2}}$$
Since this is a probability distribution function, it must integrate to 1. That also forces $c$ to be negative, otherwise the integral would diverge.
Let $c = -k$ with $k > 0$; then we get:

$$f(x) = A\, e^{-\frac{k x^2}{2}}, \qquad \int_{-\infty}^{\infty} A\, e^{-\frac{k x^2}{2}}\, dx = 1$$
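As a numerical illustration that this functional form satisfies $f'(x)/(x f(x)) = -k$ for every $x$, here is a small Python check (the values of $A$ and $k$ below are arbitrary choices for the demo):

```python
import math

# Illustrative constants; any A > 0 and k > 0 would do.
A, k = 0.7, 2.0
f = lambda x: A * math.exp(-k * x * x / 2)

def ratio(x, h=1e-6):
    """Numerically estimate f'(x) / (x * f(x)) via a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return fprime / (x * f(x))

for x in (0.5, 1.0, 2.0):
    print(round(ratio(x), 4))  # each ≈ -k = -2.0, independent of x
```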
Because the distribution is symmetric and $x$ and $y$ are independent, we can square the normalization condition and rewrite it as a double integral:

$$\left( \int_{-\infty}^{\infty} A\, e^{-\frac{k x^2}{2}}\, dx \right)^2 = A^2 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-\frac{k (x^2 + y^2)}{2}}\, dx\, dy = 1$$
Let’s solve this double integral using polar coordinates, as we already did before.
For the integral bounds, we let $r$ run from $0$ to $\infty$, since theoretically the Gaussian function only reaches zero infinitely far along the axis, and $\theta$ run from $0$ to $2\pi$. The determinant of the Jacobian matrix in this case is $r$, as we already calculated before. Proceeding with our calculation, we get:

$$A^2 \int_0^{2\pi} \int_0^{\infty} e^{-\frac{k r^2}{2}}\, r\, dr\, d\theta = 1$$
Let $u = \frac{k r^2}{2}$, thus $du = k r\, dr$; then we get:

$$A^2 \int_0^{2\pi} \frac{1}{k}\, d\theta = \frac{2\pi A^2}{k} = 1 \quad \Longrightarrow \quad A = \sqrt{\frac{k}{2\pi}}$$
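We can sanity-check this normalizing constant with a midpoint-rule integration in Python ($k = 2$ is an arbitrary example; the truncation at $\pm 10$ is safe because the tails are negligible there):

```python
import math

# With A = sqrt(k / (2*pi)), the function A*exp(-k*x^2/2) should integrate to 1.
k = 2.0
A = math.sqrt(k / (2 * math.pi))
n, lo, hi = 100000, -10.0, 10.0
dx = (hi - lo) / n
total = sum(A * math.exp(-k * (lo + (i + 0.5) * dx) ** 2 / 2) * dx
            for i in range(n))
print(total)  # ≈ 1.0
```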
Lastly, to find $k$, we can use the constraint $\mathrm{Var}[X] = \sigma^2$, or equivalently $E[X^2] = \sigma^2$ (for zero mean). We will use the latter because it is easier to derive. Here we go:

$$\int_{-\infty}^{\infty} x^2\, A\, e^{-\frac{k x^2}{2}}\, dx = \sigma^2$$
Next, we will solve the equation above using integration by parts ($\int u\, dv = uv - \int v\, du$). Let $u = x$, thus $du = dx$. Let $dv = x\, e^{-\frac{k x^2}{2}}\, dx$, thus $v = -\frac{1}{k}\, e^{-\frac{k x^2}{2}}$.
Let’s get back to solving our integral by parts.

$$A \int_{-\infty}^{\infty} x^2\, e^{-\frac{k x^2}{2}}\, dx = A \left[ -\frac{x}{k}\, e^{-\frac{k x^2}{2}} \right]_{-\infty}^{\infty} + \frac{A}{k} \int_{-\infty}^{\infty} e^{-\frac{k x^2}{2}}\, dx = \frac{A}{k} \int_{-\infty}^{\infty} e^{-\frac{k x^2}{2}}\, dx$$

The boundary term vanishes because the exponential decays faster than $x$ grows.
We know from what we already derived before that $A \int_{-\infty}^{\infty} e^{-\frac{k x^2}{2}}\, dx = 1$, thus:

$$\frac{1}{k} = \sigma^2 \quad \Longrightarrow \quad k = \frac{1}{\sigma^2}$$
Hooray! We have successfully calculated everything we need to derive the Gaussian distribution. Finally, we just plug $k = \frac{1}{\sigma^2}$ and $A = \sqrt{\frac{k}{2\pi}} = \frac{1}{\sqrt{2\pi\sigma^2}}$ into our $f(x)$. Here we go:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{x^2}{2\sigma^2}}$$
The equation above is for mean $\mu = 0$. For non-zero $\mu$, we can write it as follows:

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
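As a final check on the derived density, we can verify numerically that it has total mass 1, mean $\mu$, and variance $\sigma^2$ (the parameter values below are arbitrary examples):

```python
import math

mu, sigma = 1.5, 0.8  # illustrative parameters

def f(x):
    """The Gaussian density derived above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Midpoint-rule integration over mu +/- 10 sigma (tails beyond are negligible).
n, lo, hi = 100000, mu - 10 * sigma, mu + 10 * sigma
dx = (hi - lo) / n
xs = [lo + (i + 0.5) * dx for i in range(n)]
mass = sum(f(x) * dx for x in xs)
mean = sum(x * f(x) * dx for x in xs)
var = sum((x - mean) ** 2 * f(x) * dx for x in xs)
print(mass, mean, var)  # ≈ 1.0, 1.5 and 0.64 (= sigma^2)
```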
Up to this point, we have successfully derived the Gaussian distribution function. Congratulations!
(3) Integrating the Gaussian function using polar coordinates
We will integrate the simple Gaussian function $e^{-x^2}$, for now ignoring the normalizing constant and using zero mean. Here we go, using the same squaring trick and polar change of variables:

$$I = \int_{-\infty}^{\infty} e^{-x^2}\, dx, \qquad I^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2 + y^2)}\, dx\, dy = \int_0^{2\pi} \int_0^{\infty} e^{-r^2}\, r\, dr\, d\theta$$
Let $u = r^2$, thus $du = 2r\, dr$. Plugging this back into our last equation, we get:

$$I^2 = \int_0^{2\pi} \frac{1}{2}\, d\theta = \pi \quad \Longrightarrow \quad I = \int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}$$
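This classic result is easy to confirm numerically (a midpoint-rule sketch; truncating at $\pm 8$ is safe because the tails are vanishingly small there):

```python
import math

# Midpoint-rule check that the integral of exp(-x^2) over the real line
# equals sqrt(pi).
n, lo, hi = 100000, -8.0, 8.0
dx = (hi - lo) / n
I = sum(math.exp(-((lo + (i + 0.5) * dx) ** 2)) * dx for i in range(n))
print(I, math.sqrt(math.pi))  # the two values agree closely
```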
We’ve made it! We just successfully evaluated the Gaussian integral.
(4) Maximum Likelihood Estimation of the Gaussian distribution
As we already did for the Bernoulli and Beta distributions here, we will also do MLE in this Gaussian distribution discussion. We will try to estimate the mean ($\mu$) and variance ($\sigma^2$) that maximize the likelihood.
Let’s say we have trial results $x_1, x_2, \ldots, x_n$. What are the $\mu$ and $\sigma^2$ that maximize the likelihood? Let’s find out.
The likelihood of the Gaussian distribution is defined below.

$$L(\mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x_i - \mu)^2}{2\sigma^2}}$$
To maximize this, as usual, we will take the first derivative and set it equal to zero. But before that, to make it easier, let’s work in $\ln$ form:

$$\ln L(\mu, \sigma^2) = -\frac{n}{2} \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2$$
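To double-check the switch to log form, the sum-based expression should equal the log of the product likelihood exactly; here is a tiny Python check (the dataset and parameters are made up for illustration):

```python
import math

data = [1.0, 2.0, 3.0]  # illustrative observations
mu, var = 2.0, 1.0      # illustrative parameters (var = sigma^2)

def pdf(x):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Product form of the likelihood vs. the closed-form log-likelihood.
likelihood = math.prod(pdf(x) for x in data)
n = len(data)
log_likelihood = (-0.5 * n * math.log(2 * math.pi * var)
                  - sum((x - mu) ** 2 for x in data) / (2 * var))
print(abs(math.log(likelihood) - log_likelihood) < 1e-12)  # True
```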
First, let’s derive the $\mu$ that maximizes the likelihood of the Gaussian distribution. To do this, again, we take the first derivative w.r.t. $\mu$ and set it equal to zero. Here we go:

$$\frac{\partial \ln L}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) = 0 \quad \Longrightarrow \quad \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i$$
See! The equation above is exactly the same as the familiar formula for calculating the mean. Next, we will try to calculate the variance. To do this, we take the first derivative w.r.t. $\sigma^2$ and set it equal to zero. Let’s do that:

$$\frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} (x_i - \hat{\mu})^2 = 0$$
Multiplying both sides by $\frac{2\sigma^4}{n}$, we get:

$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{\mu})^2$$
The variance formula we get via MLE is also exactly the same as the variance formula we already know. Up to this point, we have seen that, given trial data from an experiment, we can fit a Gaussian distribution by estimating the parameters $\mu$ and $\sigma^2$ with the formulas we derived via MLE in this post.
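The whole MLE recipe can be sketched in a few lines of Python: sample synthetic data from a known Gaussian, then recover $\mu$ and $\sigma^2$ with the closed-form estimators derived above (the true parameter values below are arbitrary choices for the demo):

```python
import math
import random

# Synthetic data from a known Gaussian.
random.seed(0)
true_mu, true_sigma = 2.0, 1.5
data = [random.gauss(true_mu, true_sigma) for _ in range(100000)]

# Closed-form MLE estimators derived in this post.
mu_hat = sum(data) / len(data)                               # sample mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)   # biased sample variance
print(mu_hat, var_hat)  # ≈ 2.0 and ≈ 2.25 (= 1.5^2)
```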