Crop: Plant Disease Identification Using a Mobile App.


In this article, I'm going to explain how we can use deep learning models to detect and classify plant diseases, guide farmers through videos, and give instant remedies to reduce losses in plants and fields. First, we have to understand:

1. What is the cause, and how do we overcome it?

2. What are the benefits of doing this?

3. Can we solve this problem with "deep learning technology"?

4. Which deep learning algorithm is used to address this problem, and how do we apply it?

NOTE: I did this only for tomato and potato plants. You can do it for other plants by collecting data for those plants.

1. The Cause and Introduction.

Both in the past and today, farmers usually detect crop diseases with the naked eye, which forces them to make tough decisions about which fertilisers to use. Accurate disease detection requires detailed knowledge of the types of diseases and a lot of experience. Some diseases look so similar that farmers are often left in a state of confusion. Look at the image below for a better understanding.

Similar Symptoms but different diseases.

Because the diseases look almost the same, a farmer can make a wrong prediction and use the wrong fertiliser, or apply more than the normal dose or threshold (every plant has a fertiliser threshold that should be followed). This can mess up the whole plant or soil and cause serious damage to the plant and field.

Damage caused by fertiliser overdose. (Source: jainsusa.com)

So, how do we prevent this from happening?

To prevent this situation, we need better guidance on which fertilisers to use, correct identification of diseases, and the ability to distinguish between two or more diseases that look visually similar.

This is where Artificial Neural Networks (ANNs for short) come in handy.

2. What Is an ANN?

Artificial neural networks are computational models based on the structure of biological neural networks; they are designed to mimic the actual behaviour of the biological neural networks present in our brain.

Combining multiple ANNs.

So we can assign tasks to it and it gets the job done. An ANN helps us identify diseases correctly and also guides the right quantity of fertilisers.

A single ANN can't get our job done, so we stack a bunch of them on top of one another to form layers. The layers between the input layer (where the data and weights are given) and the output layer (the result) are called hidden layers. Stacking several of them gives us a deep neural network, and the study of these networks is called deep learning.

2.1 What Does a Deep Neural Network Look Like?

Simple vs Deep Neural Nets.

Simple neural nets are good at learning weights with one hidden layer between the input and output layers, but they are not good at learning complex features.

In a deep neural network, on the other hand, the series of layers between the input and output layers (the hidden layers) can identify features and create new series of features from the data, just as our brain does. The more layers we add, the more features the network learns and the more complex the operations it can perform. The output layer combines all the features and makes predictions.

Therefore, simple neural nets are used for simple tasks and don't require bulk data to train, whereas deep neural networks can be expensive to train and require massive data sets. I'm not going to discuss this further, as it goes beyond this article.
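To make the contrast concrete, here is a minimal sketch of both in Keras (the library we use later in this article); the layer sizes are illustrative assumptions, not values from this project.

from keras.models import Sequential
from keras.layers import Dense

# A "simple" net: a single hidden layer between input and output.
shallow = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),  # the one hidden layer
    Dense(10, activation='softmax'),                   # output layer
])

# A "deep" net: several hidden layers stacked between input and output.
deep = Sequential([
    Dense(256, activation='relu', input_shape=(100,)),
    Dense(128, activation='relu'),  # each extra layer can build higher-level features
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])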

If you are an absolute beginner to deep learning, the link below will help you get all the basics.

2.2 What Type of Deep Learning Model Is Best for This Scenario?

Enter the Convolutional Neural Network (CNN, or ConvNet). It is well known for its wide use in image and video recognition, and it also appears in recommender systems and Natural Language Processing (NLP). Convolution is more efficient because it reduces the number of parameters, which sets CNNs apart from other deep learning models.
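A rough back-of-the-envelope calculation shows why (assuming the 256×256×3 input size we use later in this article):

# One fully connected layer on a flattened 256x256x3 image, with 512 units:
dense_params = 256 * 256 * 3 * 512 + 512   # weights + biases
print(dense_params)  # 100663808 -> roughly 100 million parameters

# One 3x3 convolution with 32 filters on the same image:
conv_params = 3 * 3 * 3 * 32 + 32          # kernel size * input channels * filters + biases
print(conv_params)   # 896 parameters, regardless of the image size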

CNN Architecture.

To keep things simple, I will explain only a brief overview of this model and the steps used to build a Convolutional Neural Network. I will cover a detailed end-to-end explanation in the next article.

Main steps to build a CNN (ConvNet):

  1. Convolution Operation
  2. ReLU Layer (Rectified Linear Unit)
  3. Pooling Layer (Max Pooling)
  4. Flattening
  5. Fully Connected Layer

1. Convolution Layer: convolution is the first layer; it extracts features from the input image and learns the relationships between features by sliding kernels (filters) over the input image.
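As a rough illustration (a hand-rolled NumPy sketch, not what Keras actually runs), convolution slides a small kernel over the image and sums the element-wise products at each position:

import numpy as np

image = np.array([[1, 2, 0, 1],
                  [0, 1, 2, 3],
                  [1, 0, 1, 2],
                  [2, 1, 0, 1]], dtype=float)
kernel = np.array([[1, 0],
                   [0, -1]], dtype=float)  # a tiny 2x2 edge-like filter

# Valid convolution (no padding): a 4x4 input and 2x2 kernel give a 3x3 output.
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)
print(out)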

2. ReLU Layer: ReLU stands for Rectified Linear Unit, a non-linear operation. Its output is ƒ(x) = max(0, x). We use it to introduce non-linearity into the CNN.
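In NumPy terms, ReLU simply clamps negative values to zero:

import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(np.maximum(0, x))  # [0. 0. 0. 1.5 3.] -- negatives become 0, positives pass through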

3. Pooling Layer: it reduces the number of parameters by down-sampling, retaining only the most valuable information to process further (see the sketch after this list). There are several types of pooling:

  • Max Pooling (we choose this).
  • Average and Sum Pooling.
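For example, 2×2 max pooling keeps only the largest value in each 2×2 window, halving the height and width (a minimal NumPy sketch):

import numpy as np

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 0, 5]], dtype=float)

# 2x2 max pooling with stride 2: a 4x4 feature map becomes 2x2.
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6. 4.] [7. 9.]]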

4. Flattening: we flatten the entire matrix into a single vertical vector so that it can be passed to the fully connected layer.

5. Fully Connected Layer: we pass the flattened vector into a fully connected layer, which combines these features together to build the model. Finally, an activation function such as softmax or sigmoid classifies the outputs (see the sketch below).
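A minimal NumPy sketch of these last two steps, assuming a tiny 2×2 feature map and three output classes (the weights are random placeholders):

import numpy as np

feature_map = np.array([[0.5, 1.0],
                        [0.0, 2.0]])
vector = feature_map.flatten()  # flattening: 2x2 map -> vector of length 4

# A made-up weight matrix for a 3-class fully connected layer.
W = np.random.randn(4, 3)
logits = vector @ W

# Softmax turns the logits into class probabilities that sum to 1.
probs = np.exp(logits) / np.exp(logits).sum()
print(probs, probs.sum())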

3. Done Understanding CNN Operations. What Next?

1. Gathering Data (Images).

Gather as many data sets as you can, with images of plants affected by diseases as well as healthy ones. You will need bulk data.
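The Keras pipeline we use later (flow_from_directory, Step 7) expects one sub-folder per class, with the folder name acting as the class label. Here is a sketch of the layout I assume (Tomato___Early_blight is one of this project's real classes; the rest of the tree is illustrative):

import os

# Expected layout: train_set/<class name>/<images>
# train_set/
#     Tomato___Early_blight/
#         image (1).JPG, image (2).JPG, ...
#     <11 more disease folders>/
for class_dir in sorted(os.listdir("drive/My Drive/train_set/")):
    print(class_dir)  # each folder name becomes one of the 12 class labels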

2. Building the CNN.

Build the CNN using one of the popular open-source libraries for AI, machine learning, and deep learning development.

Open-source libraries.

3. Choose a Cloud-Based Data Science IDE.

It is good to train the model in the cloud, as training requires massive computation power that our normal laptops and desktops can't sustain. If you have a laptop with a good GPU configuration, you can train on your local machine. I chose Google Colab; you can choose whatever cloud you like.

Google Colab:

Google Colab is a free cloud service that offers a free GPU (12 GB of RAM). It's a hassle-free and fast way to train our model: there's no need to install any libraries on our machine, it works completely in the cloud, and it comes pre-installed with all dependencies.

Google Colab offers free GPU.

Sign in to Colab, create a new Python notebook (.ipynb), switch to GPU mode, and start writing code.
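To confirm that the GPU runtime is actually active, you can run a quick check (Colab ships with TensorFlow pre-installed):

import tensorflow as tf

# Prints something like '/device:GPU:0' when the GPU runtime is enabled,
# and an empty string when it is not.
print(tf.test.gpu_device_name())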

4. Start Writing Code.

The data should be stored in Google Drive before writing any code.

The source code can be found at my GitHub link. If you find it useful, please star it on GitHub.

Step 1: Mount your Google Drive.

from google.colab import drive
drive.mount('/content/your path')

Step 2: Import Libraries.

# Import libraries
import os
import glob
import matplotlib.pyplot as plt
import numpy as np
# Keras API
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Activation, AveragePooling2D, BatchNormalization
from keras.preprocessing.image import ImageDataGenerator

Step 3: Load train and test data into separate variables.

# My data is in Google Drive.
train_dir = "drive/My Drive/train_set/"
test_dir = "drive/My Drive/test_data/"

Step 4: A function to get the count of images in the train and test data.

# Function to count the images under a directory (one sub-folder per class).
def get_files(directory):
    if not os.path.exists(directory):
        return 0
    count = 0
    for current_path, dirs, files in os.walk(directory):
        for dr in dirs:
            count += len(glob.glob(os.path.join(current_path, dr + "/*")))
    return count

Step 5: View the number of images in each.

train_samples = get_files(train_dir)
num_classes = len(glob.glob(train_dir + "/*"))
test_samples = get_files(test_dir)
print(num_classes, "Classes")
print(train_samples, "Train images")
print(test_samples, "Test images")
5. Output:
  • 12 classes == 12 types of disease images collected.
  • 14,955 train images.
  • 432 test images (I mistakenly took only a few images for testing).
  • Later I added more test images, around 20% of the train data. The train/test split is important: always choose a 70% train / 30% test or 80% / 20% split. Failing to do so leads to wrong evaluation and misleading accuracy (see the sketch below).
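If your images are not already split, here is one way to carve out 20% for testing (a sketch using scikit-learn; the all_data folder is a hypothetical directory holding all class sub-folders before splitting):

import glob
from sklearn.model_selection import train_test_split

# Collect every image path under the class sub-folders.
all_images = glob.glob("drive/My Drive/all_data/*/*")

# 80% train / 20% test; fix the seed so the split is reproducible.
train_paths, test_paths = train_test_split(all_images, test_size=0.2, random_state=42)
print(len(train_paths), "train,", len(test_paths), "test")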

Step 6: Pre-process the raw data into a usable format.

# Pre-processing the training data with augmentation parameters.
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
  • Rescaling the image values to between 0 and 1 is also called normalization.
  • Whatever pre-processing you apply to the train data (here, rescaling) must also be applied to the test data in parallel; augmentations such as shear, zoom, and flips are for the training set only.
  • All these parameters are stored in the variables train_datagen and test_datagen.

Step 7: Generate augmented data from the train and test directories.

# Set the height, width, and colour channels of the input images.
img_width, img_height = 256, 256
input_shape = (img_width, img_height, 3)
batch_size = 32
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(img_width, img_height),
                                                    batch_size=batch_size)
test_generator = test_datagen.flow_from_directory(test_dir,
                                                  shuffle=True,
                                                  target_size=(img_width, img_height),
                                                  batch_size=batch_size)
  • flow_from_directory takes the path to a directory and generates batches of augmented data, yielding batches indefinitely in an infinite loop.
  • Batch size is the number of training examples used in one iteration (see the sketch below).
7. Output.
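To sanity-check a generator, pull one batch and look at its shapes:

# Each batch is a tuple of (images, one-hot labels).
x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # (32, 256, 256, 3) -> batch size, height, width, channels
print(y_batch.shape)  # (32, 12) -> batch size, number of classes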

Step 8: Get the 12 disease names/classes.

# The names of the 12 diseases.
train_generator.class_indices
8. Disease names.
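class_indices maps each disease name to an index. To turn a model prediction back into a disease name later, it helps to invert that mapping (a small sketch):

# Invert the mapping so a predicted index gives back the disease name.
labels = {index: name for name, index in train_generator.class_indices.items()}
print(labels[0])  # the class folder name for index 0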

Step 9: Build the CNN model.

# Building the CNN layer by layer.
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=input_shape, activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
9. Output: summary of layers.

The CNN shrinks the parameters, learns features, and keeps only the valuable information; the output shape decreases after every layer. Can we see the output of every layer? Yes, we can!

Step 10: Visualization of images after every layer.

from keras.preprocessing import image
import numpy as np

# Load and display a sample image from the training set.
img1 = image.load_img('/content/drive/My Drive/Train_d/Tomato___Early_blight/Tomato___Early_blight (100).JPG')
plt.imshow(img1);

# Pre-process the sample image the same way the training data was pre-processed.
img1 = image.load_img('/content/drive/My Drive/Train_d/Tomato___Early_blight/Tomato___Early_blight (100).JPG', target_size=(256, 256))
img = image.img_to_array(img1)
img = img / 255
img = np.expand_dims(img, axis=0)  # add a batch dimension: (1, 256, 256, 3)
  • Take a sample image from the training data and visualize the output after every layer.
  • NOTE: the same pre-processing is needed for every new sample image.
from keras.models import Model

# Build one sub-model per layer; each one returns that layer's activations.
conv2d_1_output = Model(inputs=model.input, outputs=model.get_layer('conv2d_1').output)
max_pooling2d_1_output = Model(inputs=model.input, outputs=model.get_layer('max_pooling2d_1').output)
conv2d_2_output = Model(inputs=model.input, outputs=model.get_layer('conv2d_2').output)
max_pooling2d_2_output = Model(inputs=model.input, outputs=model.get_layer('max_pooling2d_2').output)
conv2d_3_output = Model(inputs=model.input, outputs=model.get_layer('conv2d_3').output)
max_pooling2d_3_output = Model(inputs=model.input, outputs=model.get_layer('max_pooling2d_3').output)
flatten_1_output = Model(inputs=model.input, outputs=model.get_layer('flatten_1').output)

conv2d_1_features = conv2d_1_output.predict(img)
max_pooling2d_1_features = max_pooling2d_1_output.predict(img)
conv2d_2_features = conv2d_2_output.predict(img)
max_pooling2d_2_features = max_pooling2d_2_output.predict(img)
conv2d_3_features = conv2d_3_output.predict(img)
max_pooling2d_3_features = max_pooling2d_3_output.predict(img)
flatten_1_features = flatten_1_output.predict(img)
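The article's screenshots show these feature maps as images; a minimal plotting sketch along these lines would reproduce them (the 4×4 grid of the first 16 channels is an arbitrary choice):

import matplotlib.pyplot as plt

# Show the first 16 channels of the first convolution layer's output.
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(conv2d_1_features[0, :, :, i], cmap='viridis')
    ax.axis('off')
plt.show()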