Adarsh Menon

Linear Regression using Least Squares

Linear Regression is the simplest form of machine learning out there. In this post, we will see how linear regression works and implement it in Python from scratch.

September 8, 2018


Linear Regression

In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. In the case of one independent variable, it is called simple linear regression. For more than one independent variable, the process is called multiple linear regression. We will be dealing with simple linear regression in this tutorial.

Let X be the independent variable and Y be the dependent variable. We will define a linear relationship between these two variables as follows:

Y = mX + c

(Image source: http://www.nabla.hr/SlopeInterceptLineEqu.gif)

This is the equation for a line that you studied in high school. m is the slope of the line and c is the y intercept. Today we will use this equation to train our model with a given dataset and predict the value of Y for any given value of X.

Our challenge today is to determine the values of m and c that give the minimum error for the given dataset. We will be doing this using the Least Squares method.

Finding the Error

So, to minimize the error, we first need a way to calculate it. A loss function in machine learning is simply a measure of how different the predicted value is from the actual value.

Today we will be using the Quadratic Loss Function to calculate the loss or error in our model. It can be defined as:

L = (1/n) Σᵢ (yᵢ − pᵢ)²

where yᵢ is the actual value, pᵢ is the predicted value, and n is the number of data points.

We are squaring the difference because, for the points below the regression line, y − p will be negative, and we don't want negative values in our total error.
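In code, the quadratic loss is just the mean of the squared differences between the actual and predicted values. A minimal sketch (the sample values here are arbitrary, chosen only to illustrate that errors below and above the line contribute equally):

```python
import numpy as np

# Quadratic (squared-error) loss: mean of squared differences
# between actual values y and predicted values p
def quadratic_loss(y, p):
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.mean((y - p) ** 2)

# One point below the line (error -1) and one above it (error +1):
# both square to 1, so the loss is 1.0
print(quadratic_loss([3.0, 5.0], [4.0, 4.0]))  # 1.0
```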

Least Squares method

Now that we have determined the loss function, the only thing left to do is minimize it. This is done by taking the partial derivatives of L with respect to m and c, equating them to 0, and then solving for m and c. After we do the math, we are left with these equations:

m = Σ (xᵢ − x̅)(yᵢ − ȳ) / Σ (xᵢ − x̅)²

c = ȳ − m x̅

Here x̅ is the mean of all the values in the input X and ȳ is the mean of all the values in the desired output Y. This is the Least Squares method. Now we will implement it in Python and make predictions.

Implementing the Model
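The formulas above translate into a few lines of NumPy. The dataset below is a small synthetic stand-in, since the post's actual data file (linked at the end) is not reproduced here, so the printed values will differ from the ones shown for the real dataset:

```python
import numpy as np

# Small synthetic dataset standing in for the post's data file
X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([2, 4, 5, 4, 5], dtype=float)

# Means of the inputs and outputs
x_mean = X.mean()
y_mean = Y.mean()

# Least Squares estimates:
#   m = Σ (xᵢ − x̅)(yᵢ − ȳ) / Σ (xᵢ − x̅)²
#   c = ȳ − m x̅
m = np.sum((X - x_mean) * (Y - y_mean)) / np.sum((X - x_mean) ** 2)
c = y_mean - m * x_mean

print(m, c)  # slope and intercept of the fitted line
```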

Running the implementation on the dataset prints the fitted slope m and intercept c:

1.287357370010931 9.908606190326509

There won't be much accuracy, because we are simply taking a straight line and forcing it to fit the given data in the best possible way. But you can use this to make simple predictions or to get an idea of the magnitude/range of the real value. It is also a good first step for beginners in Machine Learning.
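As a usage sketch, making a prediction just means plugging a new X into the line equation with the fitted slope and intercept printed above (the input value 50 here is an arbitrary example):

```python
# Slope and intercept fitted on the post's dataset (printed above)
m, c = 1.287357370010931, 9.908606190326509

x_new = 50.0           # hypothetical new input value
y_pred = m * x_new + c # Y = mX + c
print(y_pred)
```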

Find the data set and code here.