How to Perform Machine Learning Research in a Fast Paced Environment – Post 1/2

Performing successful machine learning research and project delivery in industry, and especially at startups (a role often called applied researcher), can be quite different from academic research. It differs in requirements, starting point, goals, and time frame, which often forces a working method different from what we may be used to in academia. It requires another skill set: the ability to quickly ramp up and build a proof of concept (POC) for a project, then move it forward toward a working model or algorithm.

I distinguish between two main categories: a ‘known’ task with a well-defined set of solutions we can work with (for example, detection or segmentation), and a ‘new’ task, which requires us to harness our knowledge and expertise in order to solve it.

This first post focuses on the flow I suggest when we deal with a ‘known’ task, and the next post will focus on the latter. I’ve broken the flow down into steps, and here is my take on each.

1) Literature survey: 2–3 days

If the task ahead of us is a known subject (for example, detection), we should focus on 4–5 well-known approaches from recent years. Read them and understand each approach, the differences between them, and their pros and cons; preferably summarize each in a few sentences. We should limit this stage to a couple of days.

2) What work should we follow?

Usually, if a task has been studied extensively, more recent work builds on top of previous approaches: it may be more complex and more resource-consuming, with heavyweight models, while the core idea behind it stays the same. Hence it is not always best practice to follow ‘the latest and greatest’. Academic work sometimes adds huge complexity to tackle edge cases and improve the quality measures by less than 1%, which is probably not that important for us at this stage of the project. Our aim is for things to work, and to work well; it matters less whether some addition to the model or algorithm improved the score by 0.3% on the paper’s quality measures. We should also remember that academic work is not always judged by FPS or runtime, while this criterion is usually of great importance in the ‘real world’ of industry.

To sum up, we should go with an approach that looks solid, has achieved good results, and yet is simple to understand, implement, and debug. Go with the solid foundations, not with the latest ‘fancy’ SOTA.

3) Look for result examples / videos demonstrating the performance

Now that we’ve chosen an approach to start with, it’s always good to look at the results the original authors got, whether a demonstration video or output examples. Get a sense of whether their results look good for your goal. For example, a body-keypoint detector may have achieved good quantitative results, but if we see it has trouble localizing joints, and that is critical for us, this might make us change the approach we chose.

4) Look for the project’s code

The source code can be a good reference to have, although I suggest eventually implementing it yourself. We all have our own way of coding, and a coding style our research team usually keeps to. Implementing it also helps us better understand the model: how we input the data, what outputs we expect, and how we prepare the ground truth (GT), all of which are critical for debugging and further research.
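As an illustration, here is a minimal PyTorch-style Dataset sketch; the class, its fields, and the GT encoding are all hypothetical, meant only to show the value of keeping the input and GT preparation in one explicit, inspectable place:

```python
import torch
from torch.utils.data import Dataset


class KeypointDataset(Dataset):
    """Hypothetical dataset pairing each image with a GT keypoint map."""

    def __init__(self, images, keypoints):
        # images: list of (C, H, W) float tensors
        # keypoints: list of (N, 2) tensors of (x, y) pixel coordinates
        self.images = images
        self.keypoints = keypoints

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = self.images[idx]
        # Build the GT in one obvious place so it is easy to inspect
        # and unit-test independently of the model.
        target = self.make_target(self.keypoints[idx], image.shape[-2:])
        return image, target

    @staticmethod
    def make_target(keypoints, hw):
        # Toy GT encoding: mark each keypoint on a single-channel map.
        h, w = hw
        target = torch.zeros(1, h, w)
        for x, y in keypoints.long():
            target[0, y.clamp(0, h - 1), x.clamp(0, w - 1)] = 1.0
        return target
```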

5) Implement, overfit for debugging

Now we can implement the model, training, and testing. For a quick and efficient debug, and to make sure the approach has a chance of working, we should get to the point where we can overfit a few examples (tens or hundreds).
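A minimal sketch of that sanity check in PyTorch, assuming a toy linear model and synthetic tensors in place of the real model and data:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholders: substitute your real model, loss, and data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A tiny fixed set of synthetic examples stands in for the real subset;
# in practice, take a few tens of samples from your actual dataset.
tiny_set = TensorDataset(torch.randn(64, 3, 32, 32),
                         torch.randint(0, 10, (64,)))
loader = DataLoader(tiny_set, batch_size=16, shuffle=True)

for epoch in range(200):
    total = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    if epoch % 20 == 0:
        print(f"epoch {epoch}: loss {total / len(loader):.4f}")

# The loss should approach ~0 on these few examples; if it plateaus high,
# debug the data pipeline, the GT encoding, or the loss before scaling up.
```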

6) Scaling up to the whole dataset

Now we can start training on our whole dataset. If the only difference between the paper’s model and ours is the dataset we train on, a good starting point might be the published weights, if available. Sometimes we do things a bit differently but can still leverage existing datasets, so another option is to pre-train the model on a public dataset and then train it on our own data. This usually boosts performance.
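As a sketch with torchvision (the backbone choice, the 5-class head, and the learning rate are my assumptions, not a recipe from any specific paper), one can start from ImageNet-pretrained weights and adapt the model to one’s own task:

```python
import torch
from torch import nn
from torchvision import models

# Start from ImageNet-pretrained weights (a public dataset), then swap
# the head to match our own task before training on our data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)  # 5 classes: an assumption

# One common recipe: freeze the backbone at first and train only the new
# head, then unfreeze later for full fine-tuning.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
# ...then run the usual training loop on our own dataset.
```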

7) Now — do the real research work!

So far we’ve done a bit of research, mainly for choosing the approach, followed mostly by coding and integration. Now we’ve started training, and this is where real research capabilities come into play. Things probably won’t work well, or won’t work as expected, and it is time for us to understand the problems and the difficulties with our data, handle them, and make things work well.

8) Next steps

Once our model is stable and achieves good performance on our test data and on our platform, we can start thinking about how to improve it further. Gathering additional data is important, but we should be wise about data collection and do it only after we’ve analyzed our model’s performance and understood when it works well and when it fails. Data collection should focus on these failure scenarios.
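One way to make that analysis concrete, sketched below with hypothetical attribute names, is to bucket test-set errors by scene attributes and see where the model is weakest:

```python
from collections import defaultdict


def error_rate_by_attribute(predictions, attribute):
    """predictions: list of dicts such as
    {"correct": False, "attributes": {"lighting": "night", "occlusion": "heavy"}}
    (the attribute names and values are hypothetical test-set labels)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for p in predictions:
        value = p["attributes"][attribute]
        totals[value] += 1
        errors[value] += 0 if p["correct"] else 1
    return {value: errors[value] / totals[value] for value in totals}


# e.g. error_rate_by_attribute(preds, "lighting") -> {"day": 0.02, "night": 0.21}
# would point the next data-collection round at night scenes.
```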

We can also gradually add layers of complexity, either to the model itself (keeping the runtime specifications in mind) or to training. More advanced and recent papers usually tackle edge-case issues and suggest ways to solve them, or offer general techniques to improve performance.
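As one example of a training-side layer of complexity, here is a small torchvision data-augmentation sketch; the specific transforms and parameters are illustrative choices, not a recommendation from any particular paper:

```python
from torchvision import transforms

# Start training with minimal preprocessing, then layer augmentation in
# once the plain pipeline works; each transform here is an example choice.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
```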

Remember that a machine learning model is an ongoing project; we should always be researching new ways to make it better.