Do This Additional Step and You’ll Have a Generalized Machine Learning Model

Original article was published on Artificial Intelligence on Medium


The first step is to split the data into train and test sets. The training data will be used for cross-validation, and the test data will serve as the unseen data. After the split, we run cross-validation on the training data, and you can adjust the number of folds k you want to use. Finally, we make a prediction on the unseen data and see how well the model scores. The code for these steps looks like this:

from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report, confusion_matrix

# Split into features X and target y
X = df_preprocessed.drop('default', axis=1).values
y = df_preprocessed['default'].values

# Split the dataset into training data (for cross-validation)
# and test data (the unseen data)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42, stratify=y)

# Hyperparameter tuning with cross-validation on the training
# data only, using a decision tree. Then fit the model on it
param_grid = {
    'max_depth': [i for i in range(3, 10, 2)]
}
dt = DecisionTreeClassifier(random_state=42)
clf = GridSearchCV(dt, param_grid, cv=5)
clf.fit(X_train, y_train)

# Predict the unseen data and print the scores
y_pred = clf.predict(X_test)
print(clf.score(X_test, y_test))
print(classification_report(y_test, y_pred))
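After fitting, GridSearchCV also exposes which value of k-fold cross-validation picked as the best max_depth and its mean cross-validation score. Here is a minimal, self-contained sketch of that workflow on a synthetic dataset (the original df_preprocessed is not shown in the article, so the data here is a stand-in):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for df_preprocessed
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42, stratify=y)

# Same grid shape as in the article: odd depths from 3 to 9
param_grid = {'max_depth': [i for i in range(3, 10, 2)]}
clf = GridSearchCV(DecisionTreeClassifier(random_state=42),
                   param_grid, cv=5)
clf.fit(X_train, y_train)

# Best depth found and its mean cross-validation accuracy
print(clf.best_params_)
print(round(clf.best_score_, 3))
```

Inspecting best_params_ and best_score_ this way tells you which hyperparameter the 5-fold cross-validation preferred before you ever touch the unseen test data.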

Based on the implementation above, I got an accuracy of about 88.3% on the test data. This means the model scores well and is capable of handling unseen data. Also, when we create the classification report using the classification_report function, the result looks like this:

The main label we want to predict, which is 1, has a really good score, with 92% precision and 71% recall. The model can still be improved by tuning the hyperparameters and by doing some feature selection and engineering. If you want to see my work on this, you can check my GitHub here.
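As one illustration of the feature-selection idea mentioned above, SelectKBest can be chained with the tree in a pipeline so that feature selection happens inside each cross-validation fold. This is a sketch on synthetic data (not the article's dataset), and the choice of k=5 features is an arbitrary assumption:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=42)

# Keep only the 5 features with the strongest ANOVA F-score,
# then fit the tree; cross_val_score evaluates the whole pipeline
pipe = make_pipeline(SelectKBest(f_classif, k=5),
                     DecisionTreeClassifier(random_state=42))
scores = cross_val_score(pipe, X, y, cv=5)
print(round(scores.mean(), 3))
```

Putting the selector inside the pipeline (rather than selecting features once up front) avoids leaking information from the validation folds into the selection step.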