Data Science, Car Crashes and Pandemics. Welcome to AlexNet

Original article was published by S Ahmad on Artificial Intelligence on Medium

Using AI to improve response times saves lives

Photo by NOAA on Unsplash

If you’ve ever crashed a car before, then you know how awkward and frustrating it can be. If you’re lucky and everything is straightforward, then it’s alright, but if not, you can be in for a world of pain. If you’ve ever been through a natural disaster, though, it’s just a pain from start to finish.

First comes the blame game, then comes the proof

From the perspective of governments and the insurance industry, the use case is quite obvious: disasters such as hurricanes, earthquakes and floods need to be identified quickly. These kinds of events don’t just decimate buildings when they occur, they decimate the entire environment around them too.

Obtaining accurate data to plan an effective response has been a challenge because collecting and extracting it is a slow, labour-intensive operation, so any response is currently quite slow. Drones and satellite imagery have eased the problem somewhat, but a lot of data still has to be manually collated.

Machine learning is therefore a natural fit for automating the detection of damage caused by disasters. It can also offer a solution for future disasters, because a well-built model can generalise and improve its recognition capability as the amount of data increases.

Existing work in the field focuses largely on single-event disasters, but recently Google produced a custom dataset spanning three disasters (the Haiti, Mexico and Indonesia earthquakes), to which they applied a CNN and measured how well it could generalise.

Here we have a before-and-after example of damaged buildings taken from satellite imagery

The paper, Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks, takes on this challenge by creating a system that can (a) recognise damaged buildings and, more importantly, (b) generalise across different disasters. The resulting model aims to achieve two things: building detection and damage classification. Several neural network architectures are considered, each ultimately a variant of AlexNet.


An AlexNet uses a sequence of convolutional layers followed by a sequence of fully connected layers to create a Convolutional Neural Network that can recognise and interpret images with performance that surpassed all prior models. AlexNet was produced by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton (2012) and has proven to be a benchmark for these types of networks since.

Note: This is a great link [source] for creating an AlexNet using TensorFlow in Python. Highly recommended!

The classification model takes as input two 161 pixel x 161 pixel RGB images, which corresponds to a 50 m x 50 m ground footprint centered on a given building.

One of the two images provided is from before the disaster event, and the other image is from after the disaster event. The model analyses differences in the two images and outputs a score from 0.0 to 1.0, where 0.0 means the building was not damaged, and 1.0 means the building was damaged.
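As a sketch of how such a paired input might be prepared (this is an illustration of the idea, not the paper’s exact pipeline — the stacking and [0, 1] scaling here are my assumptions), the before and after crops can be concatenated along the channel axis so the network sees both at once:

```python
import numpy as np

# Hypothetical input preparation: stack the pre- and post-disaster
# 161x161 RGB crops channel-wise so a CNN can compare them directly.
def make_input(before, after):
    assert before.shape == (161, 161, 3) and after.shape == (161, 161, 3)
    pair = np.concatenate([before, after], axis=-1)  # -> (161, 161, 6)
    return pair.astype(np.float32) / 255.0           # scale to [0, 1]

before = np.zeros((161, 161, 3), dtype=np.uint8)   # toy "before" crop
after = np.full((161, 161, 3), 255, dtype=np.uint8)  # toy "after" crop
x = make_input(before, after)
print(x.shape)  # (161, 161, 6)
```

The model’s final sigmoid output over this stacked tensor is what yields the 0.0–1.0 damage score described above.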

Now, AlexNet works so well partly because it’s a huge neural network, encompassing 60 million parameters and 650,000 neurons. It consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers ending in a 1000-way softmax.
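That parameter count can be checked with back-of-the-envelope arithmetic over the published layer sizes. The sketch below uses the single-stream (ungrouped) variant of AlexNet, which comes to roughly 62 million parameters; the original two-GPU grouped version is slightly smaller, which is where the oft-quoted 60 million comes from:

```python
# Parameter count of a single-stream AlexNet (ungrouped variant).
def conv_params(k, c_in, c_out):
    # k x k kernel over c_in channels, plus one bias per output channel
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    # dense weight matrix plus one bias per output unit
    return n_in * n_out + n_out

layers = {
    "conv1": conv_params(11, 3, 96),
    "conv2": conv_params(5, 96, 256),
    "conv3": conv_params(3, 256, 384),
    "conv4": conv_params(3, 384, 384),
    "conv5": conv_params(3, 384, 256),
    "fc6": fc_params(6 * 6 * 256, 4096),  # final 6x6x256 feature map, flattened
    "fc7": fc_params(4096, 4096),
    "fc8": fc_params(4096, 1000),         # 1000-way softmax
}

total = sum(layers.values())
print(f"total parameters: {total:,}")  # roughly 62 million
```

Note how the fully-connected layers, not the convolutions, account for the vast majority of those parameters.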

Illustration of the architecture of the AlexNet CNN [source]

Aside from this, the problem is made more difficult because images are distorted by blurriness, obfuscation, colour differences and other artefacts. These issues make it harder for any image recognition system to recognise damaged buildings, so a histogram equalisation technique can be used to normalise between images before testing.
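To make the normalisation step concrete, here is a minimal sketch of classic histogram equalisation on a single greyscale channel (a generic textbook version, not necessarily the exact variant the paper uses): pixel intensities are remapped through the cumulative distribution so a low-contrast image spreads across the full 0–255 range.

```python
import numpy as np

def equalise(img):
    # Histogram of pixel intensities, then cumulative distribution
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic transfer function: scale the CDF to [0, 255]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# Toy low-contrast image with values confined to [100, 119]
img = np.tile(np.arange(100, 120, dtype=np.uint8), 16).reshape(16, 20)
out = equalise(img)
print(out.min(), out.max())  # 0 255
```

After equalisation the before and after crops occupy comparable intensity ranges, so differences the model sees are more likely to reflect damage than lighting.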

In a single-event setting, the accuracy of the model is quite high, but the more important result is that a model trained on one disaster can generalise to predict damage in other natural disasters.

Once a neural network is trained, it’s relatively efficient to pick up and apply to a new dataset, which, applied to natural disasters, reduces the time, energy and effort crisis workers need to generate disaster reports. Timely decisions on aid delivery can then be made without delay.

Given the complexity of the setup, the results of the AlexNet on the sample dataset are quite good: an ROC score of around 70% is respectable. Naturally, there are a number of ways to improve the performance of the model, but as a starting point, Google are paving the way in style.
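For readers unfamiliar with that metric, the area under the ROC curve can be read as the probability that a randomly chosen damaged building receives a higher score than a randomly chosen undamaged one. A small from-scratch sketch (the labels and scores below are made up for illustration, not the paper’s data):

```python
import numpy as np

# ROC AUC via pairwise comparison: the fraction of (damaged, undamaged)
# pairs where the damaged building gets the higher score; ties count half.
def roc_auc(labels, scores):
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]   # scores of damaged buildings
    neg = scores[labels == 0]   # scores of undamaged buildings
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]  # hypothetical model outputs
print(roc_auc(labels, scores))  # 8 of 9 pairs ranked correctly, ~0.89
```

By this reading, an ROC score of around 70% means the model ranks a damaged building above an undamaged one roughly 7 times out of 10.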