Practical Deep Learning Strategy


I am writing this article to summarize my deep learning development experience from my internship at UmboCV, where I built a violence detection system for production. I hope to share the strategies I used on this cutting-edge deep learning project.

For each project with its own data, I follow three stages.
1. Data Processing
2. Model Development
3. Transfer Learning

Data Processing

Data processing is the first stage of a deep learning problem, and it may be the most important one. Here is a simple exploratory data analysis pipeline I borrowed from a blog post:

Ingest Data → Clean Data → Transform Data → Present Data

For Ingest Data, collect as much data as possible. Then think about what form the input data should take for your models. With the model in mind, you can design how to label your data. This step is really important: if you choose the wrong way to label data, it will be very hard to train a good model.

Once you know what format suits your models, you can start to build a good-quality dataset. A good-quality dataset leads to a good model, so clean your dataset very carefully.

Then, transform your data into the specific format your models expect. For example, resize your image data to the input size of your models ahead of time; this will speed up your data loader.
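As a concrete illustration, here is a minimal sketch of this kind of offline transform. The folder layout and the 224×224 target size are assumptions for illustration, not the exact setup I used; the point is simply to resize images once on disk so the data loader does not repeat this work every epoch.

```python
# Minimal sketch: resize images offline so the data loader skips this work.
# The folder names and the 224x224 target size are assumptions for illustration.
from pathlib import Path
from PIL import Image

SRC_DIR = Path("data/raw")        # original images (hypothetical path)
DST_DIR = Path("data/resized")    # pre-resized copies (hypothetical path)
TARGET_SIZE = (224, 224)          # assumed model input size

DST_DIR.mkdir(parents=True, exist_ok=True)

for src_path in SRC_DIR.glob("*.jpg"):
    img = Image.open(src_path).convert("RGB")
    img = img.resize(TARGET_SIZE, Image.BILINEAR)
    img.save(DST_DIR / src_path.name, quality=95)
```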

Finally, present your data and check that it is reasonable to expect a model to learn from it. Researchers sometimes assume a deep learning model can learn anything and forget to inspect the final data, which is dangerous when building a baseline method. In general, it is hard for a machine to beat a human, so make sure a person can actually do the task from the same data.
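One simple way to present the data is to eyeball a few samples together with their labels. A rough sketch, assuming the pre-resized folder above and a hypothetical label mapping:

```python
# Rough sketch: plot a few samples with their labels to check they look learnable.
# The file names and the labels mapping are assumptions for illustration.
from pathlib import Path
import matplotlib.pyplot as plt
from PIL import Image

labels = {"clip_001.jpg": "violence", "clip_002.jpg": "normal"}  # hypothetical

fig, axes = plt.subplots(1, len(labels), figsize=(4 * len(labels), 4))
for ax, (name, label) in zip(axes, labels.items()):
    ax.imshow(Image.open(Path("data/resized") / name))
    ax.set_title(label)
    ax.axis("off")
plt.show()
```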

Model Development

Once you have built a good dataset, you can start reading papers and selecting a model to implement. For your first model, choose one with a simple architecture and implement it as fast as you can. To make sure you train the model successfully, there are a few tips I usually follow.

First, select a small amount of data and try to overfit it (sketched below). Your model should have enough capacity to memorize every feature of this small subset; if you cannot reach 100% training accuracy on it, there is probably a bug in your code. Second, data augmentation is really important. In my experience, data augmentation is as essential as model architecture, so if your architecture is correct and your code is bug-free, try changing your data augmentation strategy.
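Here is a minimal sketch of that overfitting sanity check in PyTorch. The dataset, the ResNet-18 backbone, and the hyperparameters are placeholders, not my production setup; the only goal is that training accuracy on the tiny subset should approach 100%.

```python
# Minimal sanity check: overfit a tiny subset; if accuracy never reaches ~100%,
# suspect a bug in the data pipeline, the loss, or the model wiring.
# The dataset, model, and hyperparameters here are placeholders for illustration.
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms, models

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
full_set = datasets.FakeData(size=256, num_classes=2, transform=transform)  # stand-in dataset
tiny_set = Subset(full_set, range(32))                 # keep only 32 samples
loader = DataLoader(tiny_set, batch_size=8, shuffle=True)

model = models.resnet18(num_classes=2)                 # simple baseline architecture
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(50):
    correct = 0
    for images, targets in loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, targets)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == targets).sum().item()
    print(f"epoch {epoch}: train acc {correct / len(tiny_set):.2f}")
```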

After you finish your baseline model, you can survey more papers and compare the pros and cons of the different methods. Choose the approach that is most suitable for your situation.

All of your models should be trained on a public dataset first, for two reasons. First, a public dataset gives you a standard metric to compare against. Second, most of the time, a public dataset contains much more data than your own. The pretrained weights you obtain this way are among the most critical elements of the whole pipeline.
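For image backbones, starting from publicly pretrained weights is often a one-liner. A sketch, assuming a recent torchvision, ImageNet weights, a ResNet-18 backbone, and a two-class target task (all assumptions for illustration):

```python
# Sketch: start from weights pretrained on a public dataset (ImageNet here),
# then replace the final layer for the target task. ResNet-18 and the
# two-class head are assumptions for illustration.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # new head for the target classes
```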

Transfer Learning

For transfer learning strategies, follow the guide in CS231n. There is some advice I want to share:
1. Don’t expect your model to fit your dataset well in one attempt. Be patient and take care of the parameters in all layers (a sketch of one common layer-wise fine-tuning pattern follows this list).
2. Run inference on your test data for each fine-tuned model. After several iterations of this process, you will understand the distribution of your dataset much better.
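As a rough sketch of taking care of the parameters in all layers, here is one common pattern (not necessarily the exact setup I used): freeze the earliest layers, and give the pretrained backbone a smaller learning rate than the freshly initialized head. The backbone choice and the learning rates are assumptions for illustration.

```python
# Rough sketch of layer-wise fine-tuning: freeze the early blocks, then train
# the remaining backbone with a smaller learning rate than the fresh head.
# The ResNet-18 backbone and the learning rates are assumptions for illustration.
import torch
from torch import nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze the earliest layers; they capture generic features that rarely need updating.
for module in [model.conv1, model.bn1, model.layer1]:
    for param in module.parameters():
        param.requires_grad = False

# Smaller learning rate for the pretrained layers, larger for the new head.
backbone_params = [p for name, p in model.named_parameters()
                   if p.requires_grad and not name.startswith("fc")]
optimizer = torch.optim.SGD([
    {"params": backbone_params, "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-2},
], momentum=0.9)
```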
