Source: Deep Learning on Medium
Locally Linear Embedding (LLE) | Data Mining
Locally Linear Embedding (LLE) is a method of nonlinear dimensionality reduction proposed by Sam T. Roweis and Lawrence K. Saul in 2000 in their paper titled “Nonlinear Dimensionality Reduction by Locally Linear Embedding”. This article is based on multiple sources mentioned in the references section. The project by Jennifer Chu helped me understand LLE better.
Machine learning algorithms use the features they are trained on to predict the output. For example, in a house price prediction problem, there might be a number of features like the size of the house, the number of bedrooms, the number of bathrooms, etc., which a machine learning model uses to predict the house price as accurately as possible. One major problem many machine learning algorithms face while doing this is overfitting, where the model fits the training data so well that it is unable to predict real-life test data accurately. This is a problem since it makes the algorithm ineffective in practice.
Dimensionality reduction helps reduce the complexity of a machine learning model, which in turn reduces overfitting to an extent. The more features we use, the more complex the model becomes, and an overly complex model may fit the training data too closely, causing overfitting. The feature set may also include features that do not help decide the output label and so contribute nothing in practice. For example, in the house price prediction problem, we may have a feature like the age of the seller, which may not affect the house price in any way. Dimensionality reduction helps us keep the more important features in the feature set, reducing the number of features required to predict the output.
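As a concrete illustration of the idea above, the sketch below applies LLE to reduce a 3-dimensional "swiss roll" dataset to 2 dimensions. It uses scikit-learn's LocallyLinearEmbedding rather than a from-scratch implementation, and the parameter choices (12 neighbors, 2 output components) are illustrative assumptions, not values from this article:

```python
# A minimal sketch of dimensionality reduction with LLE,
# using scikit-learn's built-in implementation.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D toy dataset that lies on a curved 2-D manifold.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# n_neighbors and n_components are illustrative choices:
# each point is reconstructed from its 12 nearest neighbors,
# and the data is embedded into 2 dimensions.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
X_embedded = lle.fit_transform(X)

print(X.shape)          # (1000, 3)
print(X_embedded.shape) # (1000, 2)
```

The embedded points can then be fed to a downstream model in place of the original features, shrinking the feature set while preserving the local neighborhood structure of the data.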