Source: Deep Learning on Medium
Week 6 — DASM — New Experiments And The Final Dataset
Hello everyone. Today I will continue with the final part of this series on the development of our machine learning project, DASM (damage assessment system for imagery data).
After the hard work we put into labeling the data, we now have a total of 3627 images, distributed as follows:
As we can see, our data distribution is imbalanced across the different classes. This is because we scraped our data from the internet, and it is quite difficult to collect a balanced dataset that way.
To address this issue, we used a technique called Weighted Random Sampling, as follows:
- during the training phase, when drawing batches, we sample examples with class-dependent weights so that the minority classes get a larger representation in each batch.
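The idea can be sketched with only the standard library: weight each sample by the inverse frequency of its class, then draw batch indices with replacement according to those weights. (In PyTorch, the same effect is typically achieved with `torch.utils.data.WeightedRandomSampler`; the labels and batch size below are hypothetical, for illustration only.)

```python
import random
from collections import Counter

# Hypothetical labels for a small imbalanced dataset (class 2 is the minority).
labels = [0] * 70 + [1] * 25 + [2] * 5

# Weight each sample by the inverse frequency of its class, so a batch
# drawn with these weights is roughly class-balanced in expectation.
counts = Counter(labels)
weights = [1.0 / counts[y] for y in labels]

random.seed(0)
batch_indices = random.choices(range(len(labels)), weights=weights, k=32)
batch_labels = [labels[i] for i in batch_indices]
print(Counter(batch_labels))
```

Because sampling is with replacement, a minority-class image may appear more than once in a batch; that is the intended trade-off of oversampling.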
We trained our CNNs with a transfer learning approach, in two separate manners:
- we used previously trained models (ResNet-18, AlexNet, and VGG11) as fixed feature extractors and trained just the final fully connected layer(s), after adjusting the output layer's class count to fit our dataset. (Results on the left side)
- we fine-tuned the entire network, meaning we trained all of its layers, not just the fully connected ones, starting from the ImageNet weights. (Results on the right side)
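The two regimes above differ only in which parameters are left trainable. Below is a minimal sketch using a tiny stand-in backbone (in the project this would be a pretrained ResNet-18, AlexNet, or VGG11 loaded with ImageNet weights; the class count is hypothetical):

```python
import torch.nn as nn

NUM_CLASSES = 5  # hypothetical class count, for illustration

def make_model():
    # Tiny stand-in for a pretrained backbone + new classification head.
    backbone = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )
    head = nn.Linear(8, NUM_CLASSES)  # output size adjusted to our classes
    return nn.Sequential(backbone, head)

# Regime 1: fixed feature extractor -- freeze the backbone,
# train only the newly attached fully connected head.
feature_extractor = make_model()
for p in feature_extractor[0].parameters():
    p.requires_grad = False

# Regime 2: fine-tuning -- every parameter stays trainable;
# the difference is that training starts from pretrained weights.
fine_tuned = make_model()

def trainable(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

print(trainable(feature_extractor), trainable(fine_tuned))
```

The optimizer for regime 1 should be given only the head's parameters (e.g. `feature_extractor[1].parameters()`), since frozen tensors accumulate no gradients anyway.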
Deep Learning Results