Evaluating a model by performance metrics is useless. Why? (Part 2)

Hi guys, this is the second part of the evaluation metrics series. If you are looking for the classification metrics, please refer to the first part of this article.

Regression.

1. MSE

2. RMSE

3. MAE

4. RMSLE

5. R Squared

6. Adjusted R Squared

Mean Squared Error

It is the average squared difference between the actual and predicted values.

It is the simplest metric, yet among the least used in the real world, due to the following characteristics.

It is very vulnerable to outliers. Take an example of predicting 100 values where exactly one prediction is an extreme outlier: the MSE will come out high and we will think our model is performing badly.

In another case, 30 predictions are wrong by a very small amount; there we will get an MSE lower than in the first example and think the model is performing well, but in reality the first model predicted 99 observations correctly while the second model predicted only 70 correctly.
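To see this concretely, here is a quick NumPy sketch with made-up numbers: one model makes a single huge mistake, the other makes thirty small ones, and MSE ranks them the wrong way round.

```python
import numpy as np

def mse(actual, predicted):
    return np.mean((np.asarray(actual, float) - np.asarray(predicted, float)) ** 2)

actual = np.full(100, 50.0)            # 100 hypothetical target values

# Model 1: 99 perfect predictions, one extreme outlier.
pred_outlier = actual.copy()
pred_outlier[0] += 100

# Model 2: 70 perfect predictions, 30 slightly wrong ones.
pred_small_errors = actual.copy()
pred_small_errors[:30] += 2

print(mse(actual, pred_outlier))       # 100.0 -> dominated by the single outlier
print(mse(actual, pred_small_errors))  # 1.2   -> looks "better" despite 30 misses
```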

Root Mean Squared Error

It is very similar to MSE; we simply take the square root of the MSE value so that the final score is on the same scale as the target variable.

Like MSE, it is also highly sensitive to outliers. If MSE(A) > MSE(B), then RMSE(A) will likewise be greater than RMSE(B).
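A minimal sketch with made-up values, using scikit-learn's mean_squared_error, to show that RMSE is just the square root of MSE and comes out on the same scale as the target:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

actual = np.array([3.0, 5.0, 7.5, 10.0])      # hypothetical target values
predicted = np.array([2.5, 5.0, 8.0, 11.0])

mse = mean_squared_error(actual, predicted)
rmse = np.sqrt(mse)                           # back on the same scale as the target
print(mse, rmse)                              # 0.375, ~0.61
```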

Mean Absolute Error

In MAE, the error is calculated as the average of the absolute differences between the actual and predicted values. MAE is a linear score, which means it weighs all individual differences equally: for example, an error of 14 counts exactly twice as much as an error of 7.

One important property of this metric is that it is not as sensitive to outliers as MSE.

MAE is widely used in the finance sector: MAE counts a 10 rupee error as twice a 5 rupee error, whereas MSE counts a 10 rupee error as four times a 5 rupee error, as the comparison below shows.

MAE vs MSE
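Here is a small NumPy sketch with hypothetical rupee amounts making that comparison concrete: doubling the error doubles MAE but quadruples MSE.

```python
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float)))

def mse(actual, predicted):
    return np.mean((np.asarray(actual, float) - np.asarray(predicted, float)) ** 2)

# A 5 rupee error vs a 10 rupee error on a single hypothetical bill of 100 rupees.
print(mae([100], [95]), mae([100], [90]))   # 5.0  vs 10.0  -> MAE doubles
print(mse([100], [95]), mse([100], [90]))   # 25.0 vs 100.0 -> MSE quadruples
```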

Root Mean Squared Logarithmic Error

In RMSLE, we take the logarithm of the predicted and actual values and then compute the RMSE on those logs. We usually use RMSLE when we don't want to penalize a huge difference between the actual and predicted values in cases where both values are themselves huge. At first glance you might think it is not very different from RMSE, that we just take the logarithmic difference between the actual and predicted values and everything else stays the same as RMSE. If you feel that way, the remaining part of this topic should change your mind.

RMSLE is less prone to outliers; let's understand this with an example. Consider A as the actual values and B as the predicted values.

A = 60 80 90

B = 67 78 91

If we calculate these, the RMSE will be 4.243 and the RMSLE will be about 0.065.

Let’s introduce an outlier to it.

A = 60 80 90 750

B = 67 78 91 102

Now the RMSE jumps to about 324.0, while the RMSLE only rises to about 0.995.
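You can reproduce these numbers with a minimal NumPy sketch, assuming the usual log(1 + y) formulation of RMSLE (the same one scikit-learn uses):

```python
import numpy as np

def rmse(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

def rmsle(actual, predicted):
    # RMSE computed on log(1 + y), the usual RMSLE formulation
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((np.log1p(actual) - np.log1p(predicted)) ** 2))

A = [60, 80, 90]
B = [67, 78, 91]
print(rmse(A, B), rmsle(A, B))                  # ~4.243, ~0.065

A_out = [60, 80, 90, 750]                       # same data plus one outlier
B_out = [67, 78, 91, 102]
print(rmse(A_out, B_out), rmsle(A_out, B_out))  # ~324.0, ~0.995
```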

Now let's understand the relative error behaviour of RMSE and RMSLE.

Case 1:

A=100

B=90

RMSLE ≈ 0.104

RMSE=10

Case 2:

A=10000

B=9000

RMSLE ≈ 0.105

RMSE = 1000

Due to the logarithmic effect, RMSLE measures the relative (percentage) error rather than the absolute error, which is why it barely changes between these two cases.
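A quick sketch of the two cases above, again assuming the log(1 + y) formulation:

```python
import numpy as np

def rmsle(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((np.log1p(actual) - np.log1p(predicted)) ** 2))

# Both cases are a 10% underestimate, but the absolute errors differ by 100x.
print(rmsle([100], [90]))      # ~0.104
print(rmsle([10000], [9000]))  # ~0.105 -> almost identical, unlike RMSE (10 vs 1000)
```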

Biased Penalty:

One important reason RMSLE is used in data science competitions is its biased penalty nature: it penalizes underestimation of the actual value more heavily than overestimation.

Case 1:

A=1000

B=600

RMSE=400

RMSLE=0.510

Case 2:

A=1000

B=1400

RMSE=400

RMSLE=0.33
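A quick check of both cases with the same hypothetical numbers:

```python
import numpy as np

def rmsle(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((np.log1p(actual) - np.log1p(predicted)) ** 2))

# Same absolute error of 400, in opposite directions.
print(rmsle([1000], [600]))   # ~0.51 -> underestimation is penalized more
print(rmsle([1000], [1400]))  # ~0.34 -> overestimation is penalized less
```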

R-Squared

For a classification model, we compare our model with a dummy model to judge its performance, but for a regression model we don't have an obvious dummy model to compare against.

In that case, we compare it with a baseline model: a model that predicts the mean of the target variable for every observation. A model performing no better than this baseline gets an R-squared of 0, and the higher the R-squared value, the better the model. If you add a new feature to the model and it contributes to predicting the target variable, the R-squared value will increase; otherwise, the R-squared value stays roughly the same and never decreases.
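Here is a minimal sketch of that comparison with made-up numbers: R-squared measures how much better the model's squared error is than the mean-predicting baseline.

```python
import numpy as np

def r_squared(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)       # model's squared error
    ss_tot = np.sum((actual - actual.mean()) ** 2)   # baseline (mean model) squared error
    return 1 - ss_res / ss_tot

actual = np.array([3.0, 5.0, 8.0, 10.0, 14.0])       # hypothetical target values
baseline = np.full_like(actual, actual.mean())       # always predicts the mean
model = np.array([3.5, 4.5, 8.5, 9.0, 13.5])

print(r_squared(actual, baseline))  # 0.0   -> no better than predicting the mean
print(r_squared(actual, model))     # ~0.97 -> explains most of the variance
```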

Adjusted R-Squared

Unlike R-squared, the adjusted R-squared reacts to new features in both directions: if the new feature adds value, the adjusted R-squared increases; if the feature does not add any value, the adjusted R-squared decreases, because it penalizes the extra model complexity.
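A minimal sketch of the adjustment formula, with hypothetical sample and feature counts:

```python
def adjusted_r_squared(r2, n_samples, n_features):
    # Penalizes R-squared for every extra feature relative to the sample size.
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# The same R-squared of 0.90 on 100 samples, with 5 vs 50 features.
print(adjusted_r_squared(0.90, n_samples=100, n_features=5))   # ~0.895
print(adjusted_r_squared(0.90, n_samples=100, n_features=50))  # ~0.798 -> extra features are penalized
```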