AI Saturday-Bangalore Chapter: Trash Classifier Tutorial

Check out our Meetup Group:

Introduction and Motivation:

Trash segregation is one of the more challenging steps in recycling garbage effectively. Well-segregated waste is always easier to recycle than mixed waste.

The current recycling process often requires recycling facilities to sort by hand. People can also be confused about the correct way to recycle materials. By using computer vision, we can predict the category of recycling of an object based on an image of it. — Problem Statement from here.

As a co-organiser of Nurture.AI’s AI Saturdays Meetup in Bangalore, I thought it would be a great toy experiment to build a trash classifier on this dataset using the fast.ai library (thanks a ton, Jeremy Howard, for the course and the library!). We shared this task as an assignment in the meetup for fellow attendees to put their learning into action.

This post serves as a tutorial/walkthrough on how to use the fast.ai library to solve the problem.

Problem Statement: Build a Trash Classifier

Type of Problem: Multi-class classification

The dataset can be downloaded from: Link or Direct Link

Data Organisation: images are stored in separate folders, one per class (the `from_paths` layout)

Understanding the dataset: it consists of 2,527 images across six classes:

  • 501 glass
  • 594 paper
  • 403 cardboard
  • 482 plastic
  • 410 metal
  • 137 trash
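To sanity-check the class counts above and arrange the images into the train/valid folder layout that fast.ai's `from_paths` loader expects, a small script like the following can help. This is a sketch: the function names and the assumption that the unpacked dataset has one sub-folder per class are mine, and the source/destination paths are placeholders.

```python
import random
import shutil
from collections import Counter
from pathlib import Path

def count_images(root):
    """Count images per class, assuming one sub-folder per class."""
    return Counter({d.name: sum(1 for f in d.iterdir() if f.is_file())
                    for d in Path(root).iterdir() if d.is_dir()})

def make_train_valid_split(src, dst, valid_frac=0.2, seed=42):
    """Copy images into dst/train/<class> and dst/valid/<class>,
    the layout expected by fastai's ImageClassifierData.from_paths."""
    rng = random.Random(seed)
    for cls_dir in Path(src).iterdir():
        if not cls_dir.is_dir():
            continue
        files = sorted(f for f in cls_dir.iterdir() if f.is_file())
        rng.shuffle(files)
        n_valid = int(len(files) * valid_frac)
        for split, subset in (("valid", files[:n_valid]),
                              ("train", files[n_valid:])):
            out = Path(dst) / split / cls_dir.name
            out.mkdir(parents=True, exist_ok=True)
            for f in subset:
                shutil.copy(f, out / f.name)
```

Running `count_images(...)` on the unpacked dataset should reproduce the per-class counts listed above.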

Sample images from the dataset:


Performance Claimed by the Author:

The SVM achieved better results than the Neural Network. It achieved an accuracy of 63% using a 70/30 train/test data split. After running 50 epochs of training with a 0.0075 learning rate and batch size of 25 on our neural network with a 70/30 train/test split, we achieved a testing accuracy of 27% (and 23% training accuracy). Link

So, let's get started!

Key Steps to Be Followed:

  1. Data preparation and organisation into the respective class folders.
  2. Choice of architecture (e.g. ResNet34, ResNeXt50) for performing transfer learning.
  3. Perform sensible data augmentation if you have limited data, or whenever you think it will help.
  4. Create a `learn` object and fine-tune the last layers, preferably to the point of over-fitting the data. Use `precompute=True` initially to quickly arrive at reasonable weights for the newly added last layers.
  5. Then train with `precompute=False` for 2 or 3 epochs. Data augmentation has no effect while `precompute=True`, because saved intermediate activations are used instead of the augmented images.
  6. Since our dataset is not a subset of ImageNet's 1,000 classes and the classes are different, we should also retrain the lower layers to get better results.
  7. Unfreeze all the layers and tune the weights with differential learning rates.
  8. Learning rates can be chosen with the help of the `lr_find()` tool, which gives an informative plot of loss against learning rate.
  9. Once the validation loss starts to increase instead of decrease, we stop training and proceed to prediction.
  10. One more cool trick built into the fast.ai library is Test Time Augmentation (a.k.a. `learn.TTA()`). We perform TTA to improve the predictions: each validation image is augmented several times and the predictions are averaged, which also simulates real-world inputs and shows how well the model copes with them.
  11. Validate the results by plotting sample images from the correct and incorrect classes. You can also calculate metrics such as accuracy, precision, and recall, or a confusion matrix.
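The steps above map roughly onto the fast.ai 0.7 API as taught in the 2018 course. The sketch below is illustrative, not tested end-to-end: the dataset path, image size, batch size, and learning rates are assumptions, the two helper functions at the top are hypothetical names of mine (plain NumPy), and the API differs in fastai v1 and later.

```python
import numpy as np

def differential_lrs(base_lr, factor=3.0):
    """Hypothetical helper: smaller learning rates for earlier layer groups
    (step 7, differential learning rates)."""
    return np.array([base_lr / factor**2, base_lr / factor, base_lr])

def tta_average(log_preds):
    """Average class probabilities over TTA runs (step 10).
    log_preds: (n_augs, n_images, n_classes) log-probabilities."""
    return np.mean(np.exp(log_preds), axis=0)

def train_trash_classifier(path="data/trash/"):
    """Sketch of the whole recipe; requires the old fastai 0.7 library."""
    import fastai.conv_learner as fai  # fastai 0.7, not fastai v1+

    arch, sz = fai.resnet34, 224                          # step 2: architecture
    tfms = fai.tfms_from_model(arch, sz,
                               aug_tfms=fai.transforms_side_on,
                               max_zoom=1.1)              # step 3: augmentation
    data = fai.ImageClassifierData.from_paths(path, tfms=tfms, bs=64)
    learn = fai.ConvLearner.pretrained(arch, data, precompute=True)  # step 4
    learn.lr_find(); learn.sched.plot()                   # step 8: pick an LR
    learn.fit(1e-2, 3)                                    # step 4: last layers
    learn.precompute = False
    learn.fit(1e-2, 3, cycle_len=1)                       # step 5: augmented
    learn.unfreeze()                                      # steps 6-7
    learn.fit(differential_lrs(1e-2), 3, cycle_len=1, cycle_mult=2)
    log_preds, y = learn.TTA()                            # step 10: TTA
    probs = tta_average(log_preds)
    print("accuracy:", (np.argmax(probs, axis=1) == y).mean())  # step 11
    return learn
```

The helpers are separated out because they are pure NumPy: averaging exponentiated log-probabilities over augmentations is exactly what we want from TTA, regardless of library version.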

To Do Challenges:

  1. Explore multiple architectures and compare performance (e.g. ResNet50, DenseNet, ResNeXt50).
  2. Use t-SNE to visualise the images in a 2D coordinate system.
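For the second challenge, one common approach is to take the penultimate-layer activations of the trained network (one vector per image) and embed them in 2D with scikit-learn's `TSNE`. A minimal sketch, with random vectors standing in for the real activations (an assumption, so it runs without the trained model):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for real features: in practice, use the penultimate-layer
# activations of the trained classifier, one 512-dim vector per image.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 512))    # 60 images, 512-dim activations
labels = rng.integers(0, 6, size=60)     # 6 trash classes

# Project to 2D; note perplexity must be smaller than the sample count.
emb = TSNE(n_components=2, perplexity=10.0, init="random",
           learning_rate=200.0, random_state=0).fit_transform(features)
print(emb.shape)  # (60, 2)
```

Scatter-plotting `emb` coloured by `labels` (e.g. with matplotlib) then shows whether images of the same class cluster together.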

The Code for the above experiment is available here.

TIP: To build confidence in solving similar problems, try to write the code on your own following the steps above, and refer to the notebook only when stuck.

Feedback and suggestions are always welcome. Hope it helped! Thanks for reading 🙂


  1. Deep Learning course, Part 1 version 2 (special thanks to Jeremy Howard and Rachel Thomas for giving me the opportunity to attend the course as an international fellow)
  2. Original authors' GitHub repo
  3. Original author's poster

Source: Deep Learning on Medium