We have already discussed how to estimate a linear regression function using least-squares estimation here. But many real problems cannot be modeled with a linear model alone. See the picture here.

We cannot fit a line (green line) to those data points (blue dots) without a large error. For this case, we need a polynomial function to fit the data (red line). We can write our *hypothesis/prediction* function using a polynomial model as follows.

$$h(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_M x^M$$

The equation above is the general form of our *hypothesis function* with polynomial order $M$. We can represent linear regression by setting the order $M = 1$, so that the *hypothesis function* becomes:

$$h(x) = \theta_0 + \theta_1 x$$

Actually, the rest of the process is very similar to what we already discussed in linear regression here. The only difference is in the *"design matrix"* $\mathbf{X}$. But it's OK, I will just discuss the details again here. Back to our case, we can write our $h(x)$ in matrix notation.

$$h(x) = \theta^T \phi(x)$$

with $\theta = [\theta_0, \theta_1, \ldots, \theta_M]^T$ and $\phi(x) = [1, x, x^2, \ldots, x^M]^T$.

We can write this in another form, stacking all $N$ input–output pairs, in matrix form as follows.

$$\hat{\mathbf{y}} = \mathbf{X}\theta$$

In this case, our design matrix holds one row $\phi(x_i)^T$ per data point, and we denote it using a bold uppercase x, $\mathbf{X}$:

$$\mathbf{X} = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^M \\ 1 & x_2 & x_2^2 & \cdots & x_2^M \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_N & x_N^2 & \cdots & x_N^M \end{bmatrix}$$
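As a concrete sketch (assuming NumPy; the function name here is my own), this design matrix can be built with `np.vander`:

```python
import numpy as np

def design_matrix(x, order):
    """Rows are [1, x_i, x_i^2, ..., x_i^order] for each input x_i."""
    return np.vander(np.asarray(x, dtype=float), N=order + 1, increasing=True)

X = design_matrix([1.0, 2.0, 3.0], order=2)
# X is
# [[1. 1. 1.]
#  [1. 2. 4.]
#  [1. 3. 9.]]
```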

Then we will define a *cost function* to measure our error. We will use the *average of squared errors* as its basic form, and our goal is to minimize this error, making it "as small as we can" (*see the trade-off topic at the bottom of this post*). **That's why we call this "least-squares estimation"**. We can write our *cost function* as follows.

$$J(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left( h(x_i) - y_i \right)^2$$
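As a small sketch of the same cost in code (assuming NumPy; the names are my own):

```python
import numpy as np

def cost(theta, X, y):
    """Average of squared errors between predictions X @ theta and targets y."""
    residual = X @ theta - y
    return float(residual @ residual) / len(y)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # toy design matrix
theta = np.array([0.5, 1.0])
y = X @ theta  # targets the model predicts perfectly, so the cost is 0.0
# Shifting every target by 1 makes each residual -1, giving a cost of 1.0.
```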

By plugging our $\mathbf{X}\theta$ into our $J(\theta)$ above, we get:

$$J(\theta) = \frac{1}{N} (\mathbf{X}\theta - \mathbf{y})^T (\mathbf{X}\theta - \mathbf{y})$$

Then, doing some linear algebra to expand that, we get:

$$J(\theta) = \frac{1}{N} \left( \theta^T \mathbf{X}^T \mathbf{X} \theta - \theta^T \mathbf{X}^T \mathbf{y} - \mathbf{y}^T \mathbf{X} \theta + \mathbf{y}^T \mathbf{y} \right)$$

Look carefully at the equation above: each term produces a scalar value. For example, $\theta^T \mathbf{X}^T \mathbf{y}$ has dimensions $(1 \times (M+1)) \cdot ((M+1) \times N) \cdot (N \times 1) = 1 \times 1$, a scalar.

Thus, we can replace the third term with the second, since a scalar equals its own transpose: $\mathbf{y}^T \mathbf{X} \theta = (\mathbf{y}^T \mathbf{X} \theta)^T = \theta^T \mathbf{X}^T \mathbf{y}$. By combining the second and third terms, we can simplify to:

$$J(\theta) = \frac{1}{N} \left( \theta^T \mathbf{X}^T \mathbf{X} \theta - 2\,\theta^T \mathbf{X}^T \mathbf{y} + \mathbf{y}^T \mathbf{y} \right)$$
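We can sanity-check this identity numerically (a throwaway NumPy snippet with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # design matrix: N = 5 points, order M = 2
y = rng.standard_normal(5)
theta = rng.standard_normal(3)

term2 = theta @ X.T @ y   # theta^T X^T y
term3 = y @ X @ theta     # y^T X theta
assert np.isclose(term2, term3)   # a scalar equals its own transpose
```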

Once again, our purpose is to find the polynomial function that minimizes the error $J(\theta)$; in our case, we will find the parameter vector $\theta$ that minimizes $J(\theta)$. To do so, as we already know from high-school math or an undergraduate calculus course, we take the first derivative with respect to $\theta$ and set it equal to zero. Here we go.

$$\frac{\partial J(\theta)}{\partial \theta} = \frac{1}{N} \left( 2\,\mathbf{X}^T \mathbf{X} \theta - 2\,\mathbf{X}^T \mathbf{y} \right) = 0$$

$$\mathbf{X}^T \mathbf{X} \theta = \mathbf{X}^T \mathbf{y}$$

$$\theta = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$$

*Voila!* We get the value of $\theta$ for the polynomial function that best fits our data points, assuming that $\mathbf{X}^T \mathbf{X}$ is invertible. Thus, plugging $\theta$ back into our polynomial model, our *hypothesis* function becomes:

$$h(x) = \theta^T \phi(x)$$

with

$$\theta = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$$

where the design matrix $\mathbf{X}$ holds one row $[1, x_i, x_i^2, \ldots, x_i^M]$ per data point.
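Putting the whole derivation into code (a minimal NumPy sketch; I use `np.linalg.solve` on the normal equations rather than forming the inverse explicitly, which is the numerically safer way to compute $(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$):

```python
import numpy as np

def fit_polynomial(x, y, order):
    """Solve the normal equations (X^T X) theta = X^T y."""
    X = np.vander(np.asarray(x, float), N=order + 1, increasing=True)
    return np.linalg.solve(X.T @ X, X.T @ np.asarray(y, float))

def predict(theta, x):
    """Evaluate the fitted polynomial h(x) = theta^T phi(x)."""
    X = np.vander(np.asarray(x, float), N=len(theta), increasing=True)
    return X @ theta

# Data drawn exactly from y = 1 + 2x + 3x^2 recovers theta = [1, 2, 3]:
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = 1 + 2 * x + 3 * x**2
theta = fit_polynomial(x, y, order=2)
```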

Again, see! The process is really similar to what we already discussed in linear regression here. The only difference is the *design matrix*.

**Discussing the order-number trade-off (capability vs. overfitting)**

Using a higher order, our model has more capability to fit complex data points. See the picture below for an example.

From the picture above, it is clear that the green line, a 9th-order polynomial, fits the data points better than those with lower order. But the higher the order of our model, the more prone it is to *overfitting*. *Overfitting* means our model performs really well on the training data, giving a very small error, but gives a larger error when predicting unseen data. See here for an illustration.

In the picture above, the red line uses a higher order than the green line, so it gives a smaller error on the training data than the green line does. But see what happens at the input marked in the picture: there it gives a big error. In this case, the green line models those data points better than the red line. So our model must not only give a small error on the training data, but also be **general** enough to generalize when coping with prediction data (unseen data), for instance the input shown in the picture above.
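A small experiment along the lines of the picture (the data and noise here are made up for illustration): fit a low-order and a high-order polynomial to the same noisy samples and compare their training errors.

```python
import numpy as np

def fit_polynomial(x, y, order):
    """Least-squares fit via the normal equations."""
    X = np.vander(np.asarray(x, float), N=order + 1, increasing=True)
    return np.linalg.solve(X.T @ X, X.T @ np.asarray(y, float))

def mse(theta, x, y):
    """Average squared error of the fitted polynomial on (x, y)."""
    X = np.vander(np.asarray(x, float), N=len(theta), increasing=True)
    return float(np.mean((X @ theta - y) ** 2))

rng = np.random.default_rng(42)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.standard_normal(10)

theta_low = fit_polynomial(x_train, y_train, order=3)
theta_high = fit_polynomial(x_train, y_train, order=9)

# The higher-order model fits the training data at least as well...
assert mse(theta_high, x_train, y_train) <= mse(theta_low, x_train, y_train)
# ...but outside the training interval its predictions typically swing
# wildly, which is exactly the overfitting behavior described above.
```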

To avoid the overfitting problem, there is a technique called *regularization*. You can read further here.
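As a brief sketch of where regularization plugs in (this is ridge/L2 regularization, one common form; the data and the $\lambda$ value below are made up for illustration): adding a penalty $\lambda\,\theta^T\theta$ to the cost changes the closed-form solution to $\theta = (\mathbf{X}^T\mathbf{X} + \lambda I)^{-1}\mathbf{X}^T\mathbf{y}$.

```python
import numpy as np

def fit_ridge(x, y, order, lam):
    """Regularized least squares: theta = (X^T X + lam * I)^(-1) X^T y."""
    X = np.vander(np.asarray(x, float), N=order + 1, increasing=True)
    A = X.T @ X + lam * np.eye(order + 1)
    return np.linalg.solve(A, X.T @ np.asarray(y, float))

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.7, 5.8, 12.1, 20.0])

theta_ols = fit_ridge(x, y, order=3, lam=0.0)     # plain least squares
theta_ridge = fit_ridge(x, y, order=3, lam=10.0)  # penalized coefficients
# The penalty shrinks the parameter vector toward zero, discouraging the
# wild high-order coefficients that drive overfitting.
```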
