Image Classifier — Zalando Clothing Store using Monk Library

Original article was published by Vidya on Artificial Intelligence on Medium


ktf.Train()

a. Analyse Learning Rates

# Analysis Project Name
analysis_name = "analyse_learning_rates_vgg16"
# Learning rates to explore
lrs = [0.1, 0.05, 0.01, 0.005, 0.0001]
# Number of epochs for each sub-experiment to run
epochs=5
# Percentage of original dataset to take in for experimentation
# We're taking 5% of our original dataset.
percent_data=5
# Make sure all the available processors are used
ktf.update_num_processors(2)
# Very important to reload post updating
ktf.Reload()

Output:

# "keep_all" - Keep all the sub experiments created
# "keep_none" - Delete all sub experiments created
analysis = ktf.Analyse_Learning_Rates(analysis_name, lrs, percent_data, num_epochs=epochs, state="keep_none")

Output:

Result

From the above table, it is clear that Learning_Rate_0.0001 has the least validation loss. We will update our learning rate with this.

ktf.update_learning_rate(0.0001)
# Very important to reload post updates
ktf.Reload()
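The selection step above is simply an argmin over validation losses: pick the learning rate whose sub-experiment validated best. A minimal pure-Python sketch of that logic, using hypothetical loss values as stand-ins for the table (not the actual experiment results):

```python
# Hypothetical validation losses per learning rate (stand-ins for the
# values Analyse_Learning_Rates reports; not the real experiment numbers).
val_losses = {0.1: 0.92, 0.05: 0.81, 0.01: 0.66, 0.005: 0.58, 0.0001: 0.41}

# Pick the learning rate with the lowest validation loss.
best_lr = min(val_losses, key=val_losses.get)
print(best_lr)  # 0.0001
```

The same argmin pattern applies to the batch-size and optimizer analyses that follow.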

b. Analyse Batch sizes

# Analysis Project Name
analysis_name = "analyse_batch_sizes_vgg16"
# Batch sizes to explore
batch_sizes = [2, 4, 8, 12]
# Note: We're using the same percent_data and num_epochs.
# "keep_all" - Keep all the sub experiments created
# "keep_none" - Delete all sub experiments created
analysis_batches = ktf.Analyse_Batch_Sizes(analysis_name, batch_sizes, percent_data, num_epochs=epochs, state="keep_none")

Result

From the above table, it is clear that Batch_Size_12 has the least validation loss. We will update the model with this.

ktf.update_batch_size(12)
# Very important to reload post updates
ktf.Reload()

c. Analyse Optimizers

# Analysis Project Name
analysis_name = "analyse_optimizers_vgg16"
# Optimizers to explore
optimizers = ["sgd", "adam", "adagrad"]
# "keep_all" - Keep all the sub experiments created
# "keep_none" - Delete all sub experiments created
analysis_optimizers = ktf.Analyse_Optimizers(analysis_name, optimizers, percent_data, num_epochs=epochs, state="keep_none")

Result

From the above table, it is clear that we should go for Optimizer_adagrad since it has the least validation loss.

Summary of Hyperparameter Tuning Experiment

Here ends our hyperparameter search, and now it’s time to switch on Expert Mode and train our classifier with the hyperparameters found above.

Summary:

  • Learning Rate — 0.0001
  • Batch size — 12
  • Optimizer — adagrad

1. Training from scratch: vgg16

Expert Mode

Let’s create another Experiment named expert_mode_vgg16 and train our classifier with the tuned hyperparameters.
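The Dataset_Params call below splits the data 80/20 into train and validation sets and shuffles it first. The split itself is just a partition of the shuffled file list; a sketch of that idea with hypothetical filenames (not the actual Zalando folder contents):

```python
import random

# Hypothetical list of image paths standing in for the dataset folder.
files = [f"img_{i:03d}.jpg" for i in range(100)]

random.seed(0)
random.shuffle(files)          # shuffle_data=True
split = 0.8                    # 80% train, 20% validation
cut = int(len(files) * split)
train_files, val_files = files[:cut], files[cut:]
# len(train_files) == 80, len(val_files) == 20
```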

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_vgg16")
ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando", split=0.8, input_size=224, batch_size=12, shuffle_data=True, num_processors=2)
# Load the dataset
ktf.Dataset()
ktf.Model_Params(model_name="vgg16", freeze_base_network=True, use_gpu=True, use_pretrained=True)
ktf.Model()
ktf.Training_Params(num_epochs=5, display_progress=True, display_progress_realtime=True, save_intermediate_models=True, intermediate_model_prefix="intermediate_model_", save_training_logs=True)
# Update optimizer and learning rate
ktf.optimizer_adagrad(0.0001)
ktf.loss_crossentropy()
# Training
ktf.Train()

Output:

Validation

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_vgg16", eval_infer=True)
# Just for example purposes, validating on the training set itself
ktf.Dataset_Params(dataset_path="/content/drive/My Drive/Data/zalando")
ktf.Dataset()
accuracy, class_based_accuracy = ktf.Evaluate()
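Evaluate() reports both an overall accuracy and per-class accuracies. The per-class figure is just accuracy computed within each label group; a minimal pure-Python sketch of that idea, using hypothetical labels and predictions (not Monk’s actual implementation):

```python
from collections import defaultdict

def class_based_accuracy(y_true, y_pred):
    """Overall accuracy plus per-class accuracy from two label lists."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    overall = sum(correct.values()) / len(y_true)
    per_class = {c: correct[c] / total[c] for c in total}
    return overall, per_class

# Hypothetical predictions over three clothing classes
y_true = ["shirt", "shirt", "dress", "dress", "shoe", "shoe"]
y_pred = ["shirt", "dress", "dress", "dress", "shoe", "shirt"]
overall, per_class = class_based_accuracy(y_true, y_pred)
# overall == 4/6; per_class["dress"] == 1.0
```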

Inference

Let’s run prediction on a sample image.

ktf = prototype(verbose=1)
ktf.Prototype("Project-Zalando", "expert_mode_vgg16", eval_infer=True)

Model is now loaded.

img_name = "/content/1FI21J00A-A11@10.jpg"
predictions = ktf.Infer(img_name=img_name)

# Display the image
from IPython.display import Image
Image(filename=img_name)
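Conceptually, the inference step maps the network’s per-class scores to a single label by taking the argmax. A minimal sketch with hypothetical class names and scores (stand-ins for the classifier’s actual outputs):

```python
# Hypothetical class scores for one image (stand-ins for the softmax
# outputs the trained classifier would produce).
scores = {"shirt": 0.08, "dress": 0.85, "shoe": 0.07}

# The predicted class is the one with the highest score.
predicted_class = max(scores, key=scores.get)
confidence = scores[predicted_class]
# predicted_class == "dress", confidence == 0.85
```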

We’ve successfully completed training our classifier. Check the logs and models folder under this Experiment to see the model weights and other insights.