Improving convolutional neural network accuracy using Gabor filter and progressive resizing

Source: Deep Learning on Medium

After implementing the Gabor CNN, we followed a specific training process to apply progressive resizing.

5-Implementation

5.1 Dataset

  • The dataset we worked with is for plant diseases and contains 39 classes. The healthy plants are part of these classes, and there is also a Background class, which refers to images that contain no plant or a plant that does not belong to any of our classes.
  • The link to the data is here; special thanks to Marko Arsenovic, who provided this dataset.

5.2-Gabor filter implementation

In this project we worked with PyTorch and fastai to build our models. The following code is an implementation of a Gabor layer in PyTorch:

This code is provided by iKintosh.
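As a rough idea of what such a layer looks like, here is a minimal sketch of a learnable Gabor convolution in PyTorch, in the spirit of iKintosh's GaborNet package. The class name `GaborConv2d` and the parameter names (`freq`, `theta`, `psi`, `sigma`) are our own illustrative choices rather than the exact original code:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    """Convolution whose kernels are Gabor functions with learnable
    frequency, orientation, phase and envelope-width parameters."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.stride, self.padding = stride, padding

        # One set of Gabor parameters per (output channel, input channel) pair.
        shape = (out_channels, in_channels)
        self.freq = nn.Parameter((math.pi / 2) * torch.rand(shape) + 0.1)  # carrier frequency
        self.theta = nn.Parameter(math.pi * torch.rand(shape))             # orientation
        self.psi = nn.Parameter(2 * math.pi * torch.rand(shape))           # phase offset
        self.sigma = nn.Parameter(2 + 2 * torch.rand(shape))               # Gaussian envelope width

        # Fixed coordinate grid used to rebuild the kernels at every forward pass.
        coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        y, x = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("grid_x", x)
        self.register_buffer("grid_y", y)

    def forward(self, input):
        # Broadcast the parameters against the (kernel_size x kernel_size) grid.
        x = self.grid_x.view(1, 1, *self.grid_x.shape)
        y = self.grid_y.view(1, 1, *self.grid_y.shape)
        freq = self.freq[..., None, None]
        theta = self.theta[..., None, None]
        psi = self.psi[..., None, None]
        sigma = self.sigma[..., None, None]

        # Rotate the coordinates, then modulate a Gaussian envelope with a cosine carrier.
        rot_x = x * torch.cos(theta) + y * torch.sin(theta)
        rot_y = -x * torch.sin(theta) + y * torch.cos(theta)
        envelope = torch.exp(-0.5 * (rot_x ** 2 + rot_y ** 2) / (sigma ** 2 + 1e-6))
        weight = envelope * torch.cos(freq * rot_x + psi)  # (out, in, k, k) kernel bank

        return F.conv2d(input, weight, stride=self.stride, padding=self.padding)
```

A Gabor CNN is then obtained by swapping such a layer in for a standard convolution, for example the first convolution of a ResNet (`model.conv1 = GaborConv2d(3, 64, kernel_size=7, stride=2, padding=3)`).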

5.3-Training process

All the models were trained following the steps below (a fastai sketch of this schedule comes after the list):

  • Step 1: The last layer was trained on the 39 classes. At the beginning we set the image size to 128 so that we could apply progressive resizing later on.
  • Step 2: We unfreeze all the layers of the model and train it again.
  • Step 3: We apply progressive resizing: the image size is changed from 128 to the original size, which is 265, then we unfreeze just the last two layers of the model and train it.
  • Step 4: We unfreeze the last three layers of the model and train it.
  • Last step: We unfreeze the whole model and train it.
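As an illustration of this schedule, here is a rough sketch of how it can be written with fastai v1 for the plain ResNet18 baseline. The `path`, batch sizes, and epoch counts are placeholders, and fastai's `freeze_to` works on layer groups rather than individual layers, so the exact arguments depend on how the learner is split:

```python
from fastai.vision import *

path = Path('plant-diseases')  # placeholder path to the dataset folder

# Step 1: small images (128), only the head is trained (the body is frozen by default).
data_128 = ImageDataBunch.from_folder(path, valid_pct=0.2, size=128, bs=64).normalize(imagenet_stats)
learn = cnn_learner(data_128, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(5)

# Step 2: unfreeze everything and fine-tune at 128.
learn.unfreeze()
learn.fit_one_cycle(5)

# Step 3: progressive resizing -- swap in the original-size images, keeping the learned weights.
data_full = ImageDataBunch.from_folder(path, valid_pct=0.2, size=265, bs=32).normalize(imagenet_stats)
learn.data = data_full
learn.freeze_to(-2)   # train only the last two layer groups
learn.fit_one_cycle(3)

# Step 4: open up one more layer group (for learners split into more than three groups).
learn.freeze_to(-3)
learn.fit_one_cycle(3)

# Last step: the whole model.
learn.unfreeze()
learn.fit_one_cycle(3)
```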

6-Results

In this section, we present the benefits of using Gabor filters in CNNs alongside progressive resizing. The following table presents the overall results of our experiment.

PR: progressive resizing
*: not trained

Gabor CNNs achieve better results most of the time after progressive resizing, and our Gabor models outperform the normal CNNs: for ResNet18 we were able to reach 99.31% accuracy instead of 98.99% with the normal ResNet18, and the best accuracy we obtained was 99.55% with Gabor ResNet34. These results are very good and promising. Overall, the models improved by about 1% over the normal performance, which shows that using a Gabor filter inside a CNN helps us get better results. On the other hand, despite the good accuracy, the Gabor models take more time for prediction than the normal ones, almost 4x longer, which makes them a poor choice for real-time classification problems.
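The prediction-time gap can be checked with a simple timing loop like the one below (an illustrative sketch, not the article's benchmark; the 265x265 input size and run count are placeholders):

```python
import time
import torch

def mean_inference_time(model, input_size=(1, 3, 265, 265), runs=50):
    """Average forward-pass latency of a model over a number of runs."""
    model.eval()
    x = torch.randn(input_size)
    with torch.no_grad():
        model(x)                           # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs

# Compare, e.g., a plain ResNet18 against its Gabor variant:
# ratio = mean_inference_time(gabor_resnet18) / mean_inference_time(resnet18)
```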

7-Conclusion

In recent times, deep learning methods have outperformed traditional machine learning approaches on virtually every metric, and CNNs are one of the chief contributors to this success. To meet the ever-growing demand of solving more challenging tasks, deep learning networks are becoming larger and larger; however, training these large networks requires high computational effort and energy. In this work, extensive experiments show that CNNs are significantly improved when using Gabor filters, in comparison with normal CNNs.

Project and more info:

The project is here; you will find more statistics in the notebooks.
