Deep learning for semantic segmentation of drains from LIDAR data: an initial assessment

Original article can be found here (source): Deep Learning on Medium


In my last article I wrote about using OpenCV to identify a drainage network from LIDAR data. The results weren’t bad, but I was interested to see if I could do better with deep learning. OpenCV does include trainable deep learning models, but a more promising route seemed to be looking at the better-performing dedicated segmentation models and seeing if one of those could be used. A good background article on deep learning for segmentation is this one by George Seif.

U-Net seemed interesting: it won the biomedical segmentation challenge in 2015, and if you want the details you can read the U-Net paper here.
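To give a feel for what U-Net does, the sketch below traces only the *shapes* through a typical configuration: the contracting path halves the spatial size and doubles the channel count at each level, and the expanding path mirrors that while concatenating the matching encoder features (the skip connections). This is my own illustrative bookkeeping, not the repo's model code; the exact depths and channel counts vary between implementations.

```python
# Illustrative shape bookkeeping for a U-Net, not a real model:
# the encoder halves the spatial size (and doubles channels) at each
# level, and the decoder mirrors this, concatenating the matching
# encoder feature map (the "skip connection") at every step.

def unet_shapes(input_size=512, base_channels=64, depth=4):
    shapes = []
    size, ch = input_size, base_channels
    # Contracting path: conv blocks followed by 2x2 max-pooling.
    for _ in range(depth):
        shapes.append(("down", size, ch))
        size //= 2
        ch *= 2
    shapes.append(("bottleneck", size, ch))
    # Expanding path: upsampling plus skip-connection concatenation,
    # ending back at the input resolution.
    for _ in range(depth):
        size *= 2
        ch //= 2
        shapes.append(("up", size, ch))
    return shapes

for stage, size, ch in unet_shapes():
    print(f"{stage:10s} {size}x{size}  {ch} channels")
```

Running this for a 512×512 input shows the characteristic U: 512 → 256 → 128 → 64 → 32 at the bottleneck, then back up to 512, which is why the sample images in the repo are sized the way they are.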

So can U-Net be used to find drains from LIDAR? Let’s find out!

Getting Going

There are a couple of implementations available on GitHub; to get going quickly I cloned this repo, which also has additional background information.

In the repository you will find not only the model but also sample data and a Jupyter notebook, which you can use interactively to test your environment setup and get familiar with the model.

Once downloaded, a quick look at the test image data is interesting; a sample image and its corresponding mask are shown below. Note that the masks are simply the target features rendered as black-and-white images. The sample images are 512×512 pixels, whereas the LIDAR tiles are 1001×1001. The repo includes 30 sample images and corresponding masks that can be used to train the model. If your environment is set up correctly, you should be able to train the U-Net model on the sample data and end up with a trained set of model weights, though it takes a little while.