Aerial Cactus Identification using TensorFlow

Source: Deep Learning on Medium

Welcome to this beginner tutorial in TensorFlow. If you are comfortable with the fundamentals of machine learning but don't know where to start learning TensorFlow, start here.

In this tutorial I am going to show you aerial cactus identification using TensorFlow. This problem was posted as a competition on Kaggle.

Short Description:

To assess the impact of climate change on Earth’s flora and fauna, it is vital to quantify how human activities such as logging, mining, and agriculture are impacting our protected natural areas. Researchers in Mexico have created the VIGIA project, which aims to build a system for autonomous surveillance of protected areas. A first step in such an effort is the ability to recognize the vegetation inside the protected areas. In this competition, you are tasked with creation of an algorithm that can identify a specific type of cactus in aerial imagery.

Check this link for the description:

Download the dataset here:


We will use the following libraries:

  • TensorFlow (network architecture)
  • Pandas (dataset importing)
  • Matplotlib and Seaborn (data visualization)
  • NumPy (matrix operations)

Step 1: Import Necessary Dependencies:

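The embedded code snippet did not survive publishing; here is a minimal sketch of the imports used in the rest of this tutorial, matching the library list above:

```python
import tensorflow as tf          # network architecture and training
import pandas as pd              # importing the CSV dataset
import numpy as np               # matrix operations
import matplotlib.pyplot as plt  # data visualization
import seaborn as sns            # count plot of the labels
```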

Step 2: Reading the CSV file:
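The CSV-reading snippet is missing here; below is a minimal, runnable sketch using pandas. Since the Kaggle files are not bundled with this post, it first writes a two-row stand-in `train.csv`; with the real dataset you would read `aerial-cactus/train.csv` directly.

```python
import pandas as pd

# Stand-in for the Kaggle train.csv (columns: id, has_cactus).
with open('train.csv', 'w') as f:
    f.write('id,has_cactus\nimg_001.jpg,1\nimg_002.jpg,0\n')

# With the real data: data = pd.read_csv('aerial-cactus/train.csv')
data = pd.read_csv('train.csv')
print(data.head())
```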


Step 3: Visualizing the Data:

sns.countplot(x='has_cactus', data=data)

Now we have done all the preprocessing work. Next we need to build a convolutional neural network to identify the cactus.

Convolutional Neural Network:

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(2, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(8, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),  # flatten the feature maps before the dense layers
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])


You should optimize your network for better performance; here I am using the Adam optimizer. We also need to choose a loss function. I have chosen binary cross-entropy, because the final output should be 0 or 1.

model.compile(optimizer=tf.keras.optimizers.Adam(), loss='binary_crossentropy', metrics=['accuracy'])

Now we need to get the pixel values of the images, so we loop over the training set, read each image, and store its pixel values in an array. I am using PIL (Python Imaging Library) to convert the images into arrays of pixels.

from PIL import Image

images = []
for i in range(len(data['has_cactus'])):
    img ='aerial-cactus/train/' + data['id'][i])
    images.append(np.asarray(img))

Converting the images and labels into NumPy arrays:
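A sketch of that conversion, assuming `images` is the list of PIL pixel arrays built above; stand-in arrays are used here so the snippet runs on its own:

```python
import numpy as np

# Stand-ins for the PIL pixel arrays and labels gathered in the loop above.
images = [np.zeros((32, 32, 3), dtype=np.uint8), np.ones((32, 32, 3), dtype=np.uint8)]
labels = [1, 0]

X = np.array(images)  # shape: (n, 32, 32, 3) -- matches the Conv2D input_shape
y = np.array(labels)  # shape: (n,)
print(X.shape, y.shape)
```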


Training the Model:

I have used 8 epochs to train my model:

, y, epochs=8)

Note: training time depends on the speed of your machine.

Epoch 1/8
17500/17500 [==============================] - 9s 501us/sample - loss: 0.2808 - acc: 0.8905
Epoch 2/8
17500/17500 [==============================] - 7s 407us/sample - loss: 0.1419 - acc: 0.9449
Epoch 3/8
17500/17500 [==============================] - 8s 429us/sample - loss: 0.1285 - acc: 0.9521
Epoch 4/8
17500/17500 [==============================] - 9s 534us/sample - loss: 0.1244 - acc: 0.9517
Epoch 5/8
17500/17500 [==============================] - 9s 535us/sample - loss: 0.1149 - acc: 0.9562
Epoch 6/8
17500/17500 [==============================] - 9s 519us/sample - loss: 0.1094 - acc: 0.9575
Epoch 7/8
17500/17500 [==============================] - 10s 569us/sample - loss: 0.1014 - acc: 0.9599
Epoch 8/8
17500/17500 [==============================] - 7s 405us/sample - loss: 0.0977 - acc: 0.962

My model reached an overall accuracy of 96.26% on the training data. Now we need to make predictions on the test data.

Predicting the Test Data:

from tensorflow.keras.preprocessing import image
import os

# List the test image file names (the original snippet left this list undefined).
test_files = os.listdir('aerial-cactus/test/')
for i in range(len(test_files)):
    path = 'aerial-cactus/test/' + test_files[i]
    img = image.load_img(path, target_size=(32, 32))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    classes = model.predict(x, batch_size=10)
    if classes[0][0] > 0.5:
        print(test_files[i] + " is a cactus")
    else:
        print(test_files[i] + " is not a cactus")

Converting the Predicted Values into a Pandas DataFrame:

We need to convert the array of predicted values into a pandas DataFrame for submission.
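A sketch of that conversion; `test_files` and `predictions` here are stand-ins for the file names and `model.predict` outputs from the loop above, and the column names follow the competition's sample submission:

```python
import pandas as pd

# Stand-in file names and predicted probabilities.
test_files = ['img_101.jpg', 'img_102.jpg']
predictions = [0.93, 0.08]

submission = pd.DataFrame({'id': test_files, 'has_cactus': predictions})
print(submission)
```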


Exporting the Prediction:

Getting the final CSV file:
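A sketch of the export step, assuming `submission` is the DataFrame built above; `index=False` keeps pandas from writing an extra index column into the submission file:

```python
import pandas as pd

submission = pd.DataFrame({'id': ['img_101.jpg'], 'has_cactus': [0.93]})  # stand-in frame
submission.to_csv('submission.csv', index=False)
print(open('submission.csv').read())
```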


Full Notebook Link:

My Github Profile link: