After discussing polynomial regression using LSE (least squares error) here, we know that a higher-order polynomial model has more capacity to fit complex data points, but is also more prone to overfitting. The picture below illustrates that the red line (a high-order model) fits the blue data points exactly, yet gives a big error elsewhere, for example near 0.9 on the horizontal axis. This is what we call overfitting (fitting the training data too closely). In this case, the green line is better: it is a more general model for representing those data points.

We can avoid overfitting by using so-called regularization. How does it work? A function is usually prone to overfitting when its coefficients (weighting values) have large magnitudes and are not well distributed. Thus, we will force our training process to keep those coefficients small by adding a term to our cost function. This process also makes the coefficients better distributed. Here is our new cost function:

$$J(\theta) = \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{j=0}^{n} \theta_j^2,$$

with our hypothesis function

$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_n x^n = \theta^T \mathbf{x}, \qquad \mathbf{x} = [1,\, x,\, x^2,\, \ldots,\, x^n]^T.$$
There are two terms in the equation above. Since we will minimize $J(\theta)$, the first term makes our squared error as small as possible, and the second term makes our coefficients small as well. This second term is what we call regularization. The constant $\lambda$ determines how strongly we regularize: if we set $\lambda$ to a big value, we make regularization more important than minimizing the squared error. We can try several values of $\lambda$ and choose the one that gives our hypothesis/prediction function the best performance.
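As a concrete illustration, here is a minimal NumPy sketch of this regularized cost on made-up toy data (the function name `regularized_cost` is ours, not from the article's companion code): the first term is the squared error of the predictions, the second is $\lambda$ times the sum of squared coefficients.

```python
import numpy as np

def regularized_cost(theta, X, y, lam):
    """Squared-error cost plus an L2 penalty on the coefficients.

    X is the design matrix (one row per sample), y the targets,
    lam the regularization strength (lambda).
    """
    residual = X @ theta - y                      # prediction errors
    return residual @ residual + lam * (theta @ theta)

# Toy data: two samples, design matrix with a bias column
X = np.array([[1.0, 0.0],
              [1.0, 1.0]])
y = np.array([0.0, 1.0])
theta = np.array([0.0, 1.0])                      # fits these points exactly

print(regularized_cost(theta, X, y, lam=0.0))     # 0.0 : pure squared error
print(regularized_cost(theta, X, y, lam=2.0))     # 2.0 : penalty 2 * (0^2 + 1^2)
```

With $\lambda = 0$ the exact fit costs nothing; raising $\lambda$ charges the model for the size of its coefficients, which is exactly what discourages the wild high-order fits shown in red above.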

We will do some linear algebra to simplify our cost function. For the first term, we already derived here that it equals $(X\theta - y)^T(X\theta - y)$. The second term can be simplified as follows:

$$\lambda \sum_{j=0}^{n} \theta_j^2 = \lambda\, \theta^T \theta,$$ for one input-output prediction pair.

Re-writing for $m$ input-output pairs, the term is unchanged, since it does not depend on the number of pairs; inserting the identity matrix, we get

$$\lambda\, \theta^T \theta = \lambda\, \theta^T I \theta,$$

where $I$ is the identity matrix.

Thus, our cost function becomes:

$$J(\theta) = (X\theta - y)^T (X\theta - y) + \lambda\, \theta^T I \theta.$$
Again, our purpose is to minimize our cost function. In this case, we will find the parameter $\theta$ that gives the minimal value of the cost function, $\min_\theta J(\theta)$. Similar to what we already did here and here, we take the first derivative and set it equal to zero. Here we go:

$$\frac{\partial J(\theta)}{\partial \theta} = 2X^T X \theta - 2X^T y + 2\lambda I \theta = 0.$$
Because the factors of 2 are constants, we can divide them out (any remaining constant multiplying the penalty can be absorbed into a new single constant $\lambda$). And our $\theta$ becomes:

$$\theta = (X^T X + \lambda I)^{-1} X^T y.$$
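As a quick numerical sanity check (on synthetic data of our own, not from the article), we can verify that $\theta = (X^T X + \lambda I)^{-1} X^T y$ really is a stationary point, i.e. that the gradient $2X^T(X\theta - y) + 2\lambda\theta$ vanishes there:

```python
import numpy as np

# A random ridge-regression problem (illustrative synthetic data)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))      # 20 samples, 4 features
y = rng.normal(size=20)
lam = 0.5

# Closed-form solution: theta = (X^T X + lam*I)^-1 X^T y
theta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Gradient of J(theta) = ||X theta - y||^2 + lam * theta^T theta
grad = 2 * X.T @ (X @ theta - y) + 2 * lam * theta
print(np.allclose(grad, 0.0))     # True: the gradient vanishes at theta
```

Note that we use `np.linalg.solve` rather than explicitly inverting the matrix; it computes the same $\theta$ but is numerically more stable.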

Hooray! We have obtained the parameter $\theta$ for our polynomial regression model with regularization. Rewriting our hypothesis model, here is our final model for polynomial regression with regularization:

$$h_\theta(x) = \theta^T \mathbf{x}, \qquad \theta = (X^T X + \lambda I)^{-1} X^T y.$$
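Putting it all together, here is a short NumPy sketch of the whole procedure on made-up noisy data (function names `fit_poly_ridge` and `predict` are ours, not from the linked repository): build the polynomial design matrix, solve the regularized normal equation, and compare coefficient sizes with and without regularization.

```python
import numpy as np

def fit_poly_ridge(x, y, degree, lam):
    """Fit a polynomial of the given degree with L2 regularization,
    using the closed form theta = (X^T X + lam*I)^-1 X^T y."""
    X = np.vander(x, degree + 1, increasing=True)   # columns [1, x, x^2, ...]
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def predict(theta, x):
    """Evaluate h(x) = theta[0] + theta[1]*x + ... + theta[d]*x^d."""
    X = np.vander(x, len(theta), increasing=True)
    return X @ theta

# Noisy samples of a simple curve (synthetic data)
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

theta_plain = fit_poly_ridge(x, y, degree=9, lam=0.0)    # no regularization
theta_reg   = fit_poly_ridge(x, y, degree=9, lam=1e-3)   # with regularization

# Regularization shrinks the coefficients toward zero
print(np.abs(theta_plain).max())   # large: high-order fit blows up
print(np.abs(theta_reg).max())     # much smaller, better distributed
```

The unregularized degree-9 fit develops very large coefficients (the red-line behaviour above), while even a small $\lambda$ keeps them modest, which is exactly the effect the derivation promised.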

Very well explained! Can you share the code with me?

Hi,

Find the code here: https://github.com/ardianumam/Data-Mining-and-Big-Data-Analytics-Book/blob/master/13.4%20Regresi%20dg%20regularisasi.py

Sorry that the code (function names, comments) is written in Bahasa Indonesia (Indonesian).