Artificial Intelligence & Paleontology: Use Deep Learning to search for Microfossils

Source: Deep Learning on Medium

In this post we show a Deep Learning-based method for fully automated microfossil identification and extraction in bore core samples acquired via MicroCT. Our Deep Learning approach achieved a high rate of correct microfossil identification (98% IoU), which we validated against ground truths generated by specialists in the micropaleontology field. We also present the first fully annotated, publicly available microfossil dataset acquired with MicroCT.

Artificial Intelligence & Paleontology

The applicability of the computational analysis of paleontological images ranges from the study of animals, plants and the evolution of microorganisms to the simulation of the habitats of living beings of a given epoch.

Nowadays, however, paleontology is no longer only a pure science. It can also be applied to solve problems in areas of economic activity, such as oil exploration, where several factors must be analyzed in order to identify the potential of an exploration site and minimize the expenses related to the oil extraction process. One factor is the characterization of the environment to be explored. This analysis can occur in several ways: use of probes, extraction of samples for petrophysical component evaluation, or correlation with logs of other drilling wells.

When we look at sample extraction, fossils found in sedimentary rock are central to the characterization of that rock. Here Computed Tomography (CT) is important because it preserves the sample and keeps it available for several analyses. Based on the 3D images generated by CT, several analyses and simulations can be performed, and processes currently carried out manually and exhaustively can be automated.

Imagine the following scenario: a paleontologist receives a rock sample with microfossils for analysis. The complete process of isolating the microfossils, performed manually, is time-consuming, and it destroys the rock sample. Afterwards, the paleontologist analyzes each physically isolated microfossil under a microscope and classifies it manually.

A few microfossils manually extracted from our carbonatic rocks and photographed with a Z-stack microscope

Now let’s hypothesize that the company where this paleontologist works acquires an AI-based tomographic image analysis software package, specific for microfossil analysis. This software identifies and extracts the microfossils from the rock sample automatically, with minimal or no supervision at all. The paleontologist can now load the tomographic sample, select the specific pipeline, and leave the procedure running overnight. All that remains of what was previously a predominantly manual effort is to evaluate the obtained results and classify each extracted microfossil.

The before & after of microfossil analysis…

Microfossil Identification and Classification with Deep Learning

You can implement the workflow above by employing Semantic Segmentation with Deep Learning. Let’s see how…

We employed a scanned carbonatic rock sample obtained from a drilling rig probe collected at the Sergipe Basin Quaternary sediments, off the northeast coast of Brazil. For the training of our Semantic Segmentation network, a team of micropaleontologists generated a Ground Truth for this rock sample, segmenting and classifying the whole MicroCT volume manually. The scanner used to digitize the sample was a Versa XRM-500 (ZEISS/XRadia), with a volume size of 956x1004x983 voxels.

The full dataset, together with additional explanations and the specialist-annotated Ground Truth of manually segmented images is available here:

Digitized rock sample

The Deep Learning API we employed was the fastai framework, and with hyperparameter optimization (HYPO) we achieved an IoU of 0.98 with U-Net + ResNet34 (or ResNet50). The different kinds of HYPO we used are important here, and we explain them below.

HYPO #1: Variable Resolution. The MicroCT data posed a challenge: the memory requirements imposed by both the dataset size we employed and the UNET architecture would strongly limit the batch size of our training set. In order to overcome this limitation and be able to initially work with larger batch sizes and train the network at a faster pace, we employed a step-wise progressive improving image resolution training strategy.

For this purpose we performed the training in three cycles: we started our transfer learning with the dataset at 1/4 of the original MicroCT image resolution; trained the model; resized the dataset to 1/2 of the original resolution; trained again; and, after that, employed the full CT volume resolution for a final fine-tuning training cycle. The outline of this strategy was originally presented by Jeremy Howard as an informal communication during a CNN lecture of his course. The figure below shows our interpretation of the step-wise progressive improving resolution training workflow we employed.

The step-wise progressive improving image resolution training strategy
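The resolution schedule itself is easy to compute; here is a minimal, framework-agnostic sketch, assuming 1024×1004 full-resolution slices (a size inferred from the checkpoint names used later in the code, not stated explicitly in the text):

```python
import numpy as np

def resize_schedule(full_size, factors=(4, 2, 1)):
    """Training resolutions for step-wise progressive resizing.

    full_size: (height, width) of the full-resolution slices.
    factors:   downscaling factors, coarsest cycle first
               (1/4, 1/2, full, as in the post).
    """
    full = np.array(full_size)
    return [tuple(int(v) for v in full // f) for f in factors]

# Assumed slice size; each entry is one training cycle
schedule = resize_schedule((1024, 1004))
```

Each tuple in `schedule` is the input resolution for one of the three training cycles described above.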

HYPO #2: Differential Learning Rate: Another strategy for fine-tuning a model is the Differential Learning Rates (DLR) strategy, also informally presented by Jeremy Howard during a lecture of the same course series.

When performing transfer learning followed by fine tuning, the first layers of the pre-trained model being adapted will learn generic low-level features from the new dataset. These low-level features are very probably similar to those of the original dataset, regardless of the image context, so there is no need to employ high learning rates in these first layers. As information gets deeper into the network, the feature combinations become more complex and dataset-specific, and more directly connected to the application context. Higher learning rates in the deeper layers are therefore desirable, in order to allow the network to better adapt to context-specific features.
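One way to realize this idea, sketched here in plain NumPy, is to spread a low/high learning-rate pair across the layer groups. The geometric interpolation below is only an illustration, not fastai's exact internal scheme:

```python
import numpy as np

def discriminative_lrs(lr_lo, lr_hi, n_groups):
    """Geometrically interpolate learning rates from the earliest
    layer group (lr_lo) up to the deepest one (lr_hi)."""
    return np.geomspace(lr_lo, lr_hi, n_groups)

lr = 1e-3
# The same low/high spread the post uses later: slice(lr/400, lr/4)
lrs = discriminative_lrs(lr / 400, lr / 4, 3)
```

The earliest layers train at `lr/400`, the deepest at `lr/4`, with intermediate groups in between.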

HYPO #3: Fit1Cycle: We trained the network employing the fit_one_cycle method, which implements the 1cycle policy originally developed by Leslie N. Smith.

This method is fast: it allows us to employ only 5 epochs in the first transfer learning stage.
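In spirit, the 1cycle policy warms the learning rate up and then anneals it back down within a single run; a simplified piecewise-linear sketch (the real fastai implementation additionally schedules momentum and uses cosine annealing):

```python
def one_cycle_lr(step, total_steps, lr_max, pct_start=0.3, div=25.0):
    """Piecewise-linear 1cycle learning-rate schedule.

    Warm up from lr_max/div to lr_max over the first pct_start of
    training, then anneal back down for the remainder.
    """
    warmup = int(total_steps * pct_start)
    lr_min = lr_max / div
    if step < warmup:
        return lr_min + (lr_max - lr_min) * step / warmup
    frac = (step - warmup) / max(1, total_steps - warmup)
    return lr_max - (lr_max - lr_min) * frac

total = 100
# The schedule peaks at lr_max exactly once, at the end of the warmup
peak = max(one_cycle_lr(s, total, 1e-3) for s in range(total + 1))
```

Training briefly at high rates acts as a regularizer, which is part of why so few epochs suffice.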

Do it! Segmentation of Microfossils in Carbonatic Rocks

Below is the code for the semantic segmentation of microfossil samples in MicroCT-acquired bore cores from carbonatic rocks. This is the code we published as a notebook, and it assumes that you are either using Google Colab or that you have the latest versions of PyTorch and fastai installed. You’ll also need a GPU with at least 11 GB of RAM.

Initializations: import the Python libraries and the fastai framework

# Notebook Initializations
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# Imports
from fastai.vision import *
from fastai.utils.show_install import *
from fastai.callbacks.hooks import *
from pathlib import Path
# Show if everything is OK
show_install()

Define the places where your data is stored and check it

If you are using Google Colab together with Google Drive, do this:

from google.colab import drive
path = Path('gdrive/My Drive/Colab Notebooks/DL/')

If you are not importing your data from Google Drive:

# Adapt this to match your environment...
path = Path('myPath/')

Define data variables such as path_lbl (where your labels are stored) and path_img (where your training and validation data are stored)

# Initialize path_lbl (local where your labels are) 
path_lbl = path/'train_masks_labels'
# Initialize path_img (local where your train and validation data are stored)
path_img = path/'train'
# Check how many files are there
fnames = get_image_files(path_img)
# Check if the label names match in number
lbl_names = get_image_files(path_lbl)

Show a single MicroCT slice

img_f = fnames[0]
img = open_image(img_f), 5))

Load the Mask belonging to this particular Slice

# Map each slice filename to its Ground Truth mask filename
get_y_fn = lambda x: path_lbl/f'{x.stem}_GT{x.suffix}'

Show a Ground Truth Mask

mask = open_mask(get_y_fn(img_f)), 5), alpha=1)
src_size = np.array(mask.shape[1:])

Load your labels

codes = np.loadtxt(path/'codes.txt', dtype=str); codes

IOU metric, Initial data split and model

# Refer to your labels as numbers
name2id = {v:k for k,v in enumerate(codes)}
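The post does not show its train/validation split code; in spirit it is a holdout over slice filenames. A generic, framework-agnostic sketch (valid_pct and seed are our choices, not the post's):

```python
import random

def split_filenames(fnames, valid_pct=0.2, seed=42):
    """Random train/validation split of slice filenames.

    valid_pct and seed are illustrative values; the post does not
    state how its own split was made.
    """
    rng = random.Random(seed)
    fnames = sorted(fnames)
    rng.shuffle(fnames)
    n_valid = int(len(fnames) * valid_pct)
    return fnames[n_valid:], fnames[:n_valid]

# Hypothetical slice names, one per MicroCT slice
train, valid = split_filenames([f'slice_{i:04d}.png' for i in range(100)])
```

A fixed seed makes the split reproducible across runs, which matters when comparing training cycles.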

Define your error metrics

The Intersection over Union (IoU) metric and a function to save the predictions

def iou_metric(input, target):
    target = target.squeeze(1)
    mask = target != 0
    return (input.argmax(dim=1)[mask] == target[mask]).float().mean()

def save_preds(dl):
    i = 0
    names = dl.dataset.items
    for b in dl:
        preds = learn.pred_batch(batch=b, reconstruct=True)
        for o in preds:
  [i].name)
            i += 1
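To see what the metric computes: it takes the argmax over the class scores, drops the background (label 0), and averages the agreement over the remaining pixels. A NumPy analogue of the same logic, for checking against a tiny example:

```python
import numpy as np

def iou_metric_np(pred_scores, target):
    """NumPy analogue of iou_metric above: argmax over the class
    axis, then mean agreement on non-background (target != 0) pixels."""
    pred = pred_scores.argmax(axis=0)
    mask = target != 0
    return float((pred[mask] == target[mask]).mean())

# Two classes over a 2x2 image; one foreground pixel is predicted wrong
scores = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.1, 0.9], [0.8, 0.2]]])
target = np.array([[0, 1], [1, 1]])
acc = iou_metric_np(scores, target)  # 2 of 3 foreground pixels correct
```

Ignoring the background keeps the huge non-fossil volume from inflating the score.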

A few more definitions

Definition of the weight decay, the metric, the model and whether we employ ImageNet weights

metrics = iou_metric
# Weight decay (the post does not state the value; 1e-2 is our assumption)
wd = 1e-2
# Use a deep network
used_model = models.resnet34
# We will employ transfer learning from ImageNet weights...
useImageNet = True

Training Cycles & Validation

Here is where we start the training part. First we employ the 256×252 (1/4) resolution. The exact same sequence is then performed for the 512 (1/2) and 1024 (full) resolutions.

Cycle #1: 256×256

# Define the batch size and the resolution employed
bs = 8  # batch size; our assumption -- the post omits the value, adapt it to your GPU
size = src_size//4
size[1] = size[1] + 1
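Assuming 1024×1004 slices (consistent with the 'stage1-256x252' and 'stage1-1024x1004' checkpoint names used below), the size arithmetic works out as:

```python
import numpy as np

src_size = np.array([1024, 1004])  # assumed slice height/width
size = src_size // 4               # quarter resolution: [256, 251]
size[1] = size[1] + 1              # bump the width by one, giving the 256x252 used in the post
```

The same pattern with // 2 yields the 512×502 size of Cycle #2.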

Apply transformations such as resolution change and data augmentation to the data…

# src is the labeled segmentation item list (slices + masks); its construction is not shown in the post
if useImageNet:
    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs)
            .normalize(imagenet_stats))
else:
    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs))

Load the model with the data, the metric and the weight decay selected

learn = unet_learner(data, used_model, metrics=metrics, wd=wd)

Find the most appropriate learning rate

learn.lr_find()
learn.recorder.plot()

Manually set the learning rate after examining the graph produced by the code above…

# Adjust this LR according to what you identified above...
lr = 1e-4  # placeholder; use the value suggested by the LR finder plot

learn.fit_one_cycle(5, slice(lr), pct_start=0.9)

Save the weights, to continue the training later and release some memory…'stage1-256x252')

Keep a loading line in case you need it:

learn.load('stage1-256x252')
Unfreeze the net in order to learn internal weights — Fine Tuning @ Cycle #1

learn.unfreeze()
Employ a Differential Learning Rate (DLR) and Train the learner

lrs = slice(lr/400,lr/4)
learn.fit_one_cycle(10, lrs, pct_start=0.8)

Save the weights and load them to continue the training and release some memory

# Save the fine-tuned network @ Cycle #1'stage2-256x252')
# Release Memory
del data
del learn
# Collect Garbage
import gc
gc.collect()

Cycle #2: 512×512

We will perform the same steps as above, just now with a resolution of 512×512.

# Set the new Size for the MicroCT Slices
size = src_size//2
# Adapt ImageNet Parameters to our Image Characteristics
if useImageNet:
    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs)
            .normalize(imagenet_stats))
# Create a new Network for the half-resolution input images
learn = unet_learner(data, used_model, metrics=metrics, wd=wd)
# Load our fine-tuned low resolution network weights learned on Cycle #1...
learn.load('stage2-256x252')
# Find the best learning rate for this network instance
learn.lr_find()
learn.recorder.plot()
# Manually set the new learning rate (LOOK at the graph above!)
lr = 1e-4  # placeholder; use the value you read off the plot

Perform the transfer learning stage with this new, middle-resolution network. Again we will employ the fit_one_cycle method based on Leslie N. Smith's 1cycle policy.

learn.fit_one_cycle(5, slice(lr), pct_start=0.8)
# Save and Load...'stage1-512x502')
learn.load('stage1-512x502')
# Unfreeze for fine-tuning...
learn.unfreeze()

Employ a Differential Learning Rate (DLR) and Train the learner

# Prepare for varying learning rates...
lrs = slice(1e-6,lr/10)
# Fine-tune for 10 epochs
learn.fit_one_cycle(10, lrs)
# Save stage 2 of Cycle #2...'stage2-512x502')
# Flush garbage...
import gc
del data
del learn
gc.collect()

Cycle #3: 1024×1024 — Full Resolution Training

Now we will perform the same steps as above once more, this time at the full image resolution, and with a shorter transfer learning stage. Most cells below are left without further comment because we are repeating earlier steps with only a few different parameters…

# Original image size
size = src_size
# Batch size of one!
bs = 1
if useImageNet:
    data = (src.transform(get_transforms(), size=size, tfm_y=True)
            .databunch(bs=bs)
            .normalize(imagenet_stats))
learn = unet_learner(data, used_model, metrics=metrics, wd=wd)
# Load the weights fine-tuned on Cycle #2
learn.load('stage2-512x502')
# Adapt it to your values
lr = 1e-4  # placeholder; re-run the LR finder if needed
# Transfer learning stage
learn.fit_one_cycle(3, slice(lr), pct_start=0.8)
# Save stage 1 of Cycle #3'stage1-1024x1004')
# Prepare for the final fine-tuning
learn.unfreeze()
# Prepare for varying learning rates
lrs = slice(1e-6,lr/10)
# Fine-tune for 10 epochs
learn.fit_one_cycle(10, lrs)
# Save stage 2 of Cycle #3'stage2-1024x1004')

Show some prediction results

learn.show_results(rows=2, figsize=(10,10))

Save all prediction results

name_gen = 'image_gen'
path_gen = path/name_gen
# Create the output folder and save every prediction
path_gen.mkdir(exist_ok=True)
save_preds(data.fix_dl)

Want to see some results?

(A) shows microfossils identified and segmented by the network in a MicroCT slice. (B) shows the fossil in detail and (C) shows a microtomographic image of the fossil after it was manually extracted. (D) shows a microscope photograph of the same sample.