Objective — Classify images using the Google Cloud AutoML API, return the results, and store them in DynamoDB
AutoML was made available to the public as a beta by Google at the Cloud NEXT 2018 conference. Google AutoML provides state-of-the-art neural network techniques that can be used directly to train a custom model and classify images of the user’s choice. The user is not expected to write even a single line of code. The algorithm is powerful, and the results can be fetched through the Python/REST APIs. This release is expected to be highly disruptive in the deep learning community for image classification tasks.
For more information about Google AutoML check — https://cloud.google.com/automl/.
The following demo explains in detail how to use Google AutoML for a custom image classification task. The official documentation is fairly elaborate, and this step-by-step demo serves as add-on material.
In this demo, flowers of different categories are classified. It is better to have a large number of images in each category. This flower data can be downloaded from
1) Manually identify and segregate the images in the local file system into class folders (e.g. daisy, roses, dandelion, tulips). These are the classes on which the final result is predicted
2) Create a Google Cloud account, a project, and a service account. Go to https://cloud.google.com/ and create an account or log in. Google Cloud gives a free account for one year with up to $300 of usage
3) Create a new project
4) Enable AutoML API for this project
5) Create a Service Account for this project. The service account can be created from the IAM & admin option in the navigation bar
Save the JSON credentials that are generated, since they cannot be retrieved later. Enable billing for this account. The account is now ready for use
6) Create or use a bucket in the project that has the same name as the project ID. The bucket can be created from the “STORAGE” option. Once the user logs in to Google AutoML with this project ID (as in the steps below), a bucket is automatically created in this project in Google Cloud, with a name that matches the AutoML requirements.
7) Upload the images into the bucket, replicating the same folder structure as created before
8) The folder structure inside the bucket should look exactly like the one in the local machine where the user has manually created different classifications
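The upload in steps 7 and 8 can also be scripted. The sketch below is one way to do it with the `google-cloud-storage` Python client (an assumption; the original post uses the console). The local folder name `flowers` and bucket name `my-project-vcm` are placeholders, and the client call requires `GOOGLE_APPLICATION_CREDENTIALS` to point at the service-account JSON saved earlier:

```python
from pathlib import Path

def gcs_destination(local_root, path):
    """Map a local image path to its bucket path, preserving the
    class-folder structure (e.g. flowers/daisy/1.jpg -> daisy/1.jpg)."""
    return Path(path).relative_to(local_root).as_posix()

def upload_tree(local_root, bucket_name):
    # Requires `pip install google-cloud-storage` and the service-account
    # JSON exported via GOOGLE_APPLICATION_CREDENTIALS.
    from google.cloud import storage  # imported lazily: needs credentials
    bucket = storage.Client().bucket(bucket_name)
    for path in Path(local_root).rglob("*.jpg"):
        bucket.blob(gcs_destination(local_root, path)).upload_from_filename(str(path))

# upload_tree("flowers", "my-project-vcm")  # both names are placeholders
```

Because the destination path is derived from the local path relative to the root folder, the bucket ends up mirroring the class folders exactly as step 8 requires.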
9) Create a dataset CSV that records the Google Cloud Storage location of each image (a Python program can be used to create this). Each image in the set must have a record in the CSV file.
10) The Python code can be taken from this URL: https://gist.github.com/yufengg/984ed8c02d95ce7e95e1c39da906ee04
11) The Python code generates a CSV with the details of each image in the training dataset, like the one in the above image. Load this CSV into the same bucket
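The linked gist can be adapted; as a rough sketch of the idea, each image gets one row of `gs://bucket/class/file,class`. The bucket name and file listing below are illustrative placeholders (in practice they would come from walking the bucket or the local folders):

```python
import csv

def dataset_rows(bucket, files_by_label):
    """Build one 'gs://bucket/label/file,label' row per training image."""
    rows = []
    for label, files in files_by_label.items():
        for name in files:
            rows.append([f"gs://{bucket}/{label}/{name}", label])
    return rows

# Hypothetical bucket and listing; replace with the real image inventory.
rows = dataset_rows("my-project-vcm", {
    "daisy": ["d1.jpg", "d2.jpg"],
    "tulips": ["t1.jpg"],
})

with open("all_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The resulting `all_data.csv` is then uploaded to the same bucket so AutoML can read it.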
12) Open this project in Google AUTOML, using the same google account https://beta-dot-custom-vision.appspot.com/vision/overview
13) Enter the Project ID, i.e. the ID that was assigned when the project was created in Google Cloud
14) Load the dataset of images using the CSV that was created in the above steps and stored in the bucket.
15) Click on New Dataset, enter a dataset name, and enter the path to the CSV file in the Google Cloud account. Note that all of this is done in the same Google account
16) The default classification for an image is a single class. However, multi-class classification can be enabled, in which case the results are displayed for each class along with its confidence score
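In multi-class mode the prediction comes back as a confidence score per class, and it is up to the caller to decide which classes to keep. A small sketch of that post-processing (the score dictionary is illustrative; real values come from the AutoML prediction response):

```python
def top_labels(scores, threshold=0.5):
    """Keep the classes whose confidence clears the threshold, best first."""
    kept = {c: s for c, s in scores.items() if s >= threshold}
    return sorted(kept.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative scores for a single image.
print(top_labels({"daisy": 0.91, "tulips": 0.62, "roses": 0.08}))
# With the default single-class setting, only the top entry would be used.
```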
17) The images will be loaded from the CSV into AutoML and the folder structure will also be replicated
The console should look like this once the images are loaded
18) New images can be added on the go by clicking “ADD IMAGES” at the top. Newly loaded images will not have any class by default
19) However, for both the loaded images with a class and the new images, the associated class can be changed on the go in the console itself
20) To do so, click an image, and just change the class and click OK
21) To load an image directly into a specific category, select the folder/class in the left pane and click “ADD IMAGE”. This adds the image directly into the selected category
22) Images can be deleted too. Images can also be set to have no class, in which case they go into the “UNLABELED” folder, which is not used for training
23) Once the folder structure and the contents of the training data are ready, the model can be trained
24) Click on “TRAIN”, select “Train New Model”, and enter a model name. The free account also offers training free of charge for a limited duration
The training can take several minutes. The more images in each category, the better the training. There is a minimum number of images required in each category. Once the training is done, model parameters such as precision, accuracy, and recall can be seen in the Evaluation Metrics
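For intuition on what those evaluation metrics mean, they can be derived from the per-class prediction counts. This is a generic sketch of the definitions, not AutoML's internal evaluation code, and the example counts are made up:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall and accuracy from true/false positive/negative counts."""
    precision = tp / (tp + fp)          # of the images labelled as this class, how many were right
    recall = tp / (tp + fn)             # of the actual images of this class, how many were found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Made-up counts for one class: 90 daisies found, 10 other flowers
# mislabelled as daisy, 5 daisies missed, 95 correct non-daisies.
print(metrics(90, 10, 5, 95))
```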
To classify a new image, go to the “PREDICT” tab and upload an image; the console returns the class of that image. The confidence score for each class is also shown
The results can be either single-class or multi-class, as specified while creating the model.
Note down the Model ID and Project ID; predictions on a new image can be made using the Predict tab in the UI.
After this, the results can be fetched directly through Python or the REST API without needing the console. The model ID and project ID are the parameters required to fetch results in Python. Sample code on how to fetch results from Python is shown in the console
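The console's sample code is along these lines. This is a hedged sketch using the beta-era `google-cloud-automl` client; the project ID, model ID, and image path are placeholders, and the call requires `GOOGLE_APPLICATION_CREDENTIALS` to point at the service-account JSON:

```python
def model_path(project_id, model_id, region="us-central1"):
    """Full resource name that the prediction endpoint expects."""
    return f"projects/{project_id}/locations/{region}/models/{model_id}"

def predict(project_id, model_id, image_path):
    # Requires `pip install google-cloud-automl` and the service-account
    # JSON exported via GOOGLE_APPLICATION_CREDENTIALS.
    from google.cloud import automl_v1beta1  # imported lazily: needs credentials
    client = automl_v1beta1.PredictionServiceClient()
    with open(image_path, "rb") as f:
        payload = {"image": {"image_bytes": f.read()}}
    response = client.predict(model_path(project_id, model_id), payload)
    # Each result carries the class name and its confidence score.
    return {r.display_name: r.classification.score for r in response.payload}

# predict("my-project", "ICN1234567890", "new_flower.jpg")  # placeholders
```

The dictionary of class names and scores returned here is what would later be written to DynamoDB.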
The Python code for fetching this result and deploying it on AWS will be covered in the next parts.
Source: Deep Learning on Medium