How I found my current job

The path to my current job

The new year is a time for stories. Let me tell you one about how I found a job in deep learning (DL) / computer vision (CV).

In January of 2017, I was working at a company called TrueAccord in San Francisco, building recommender systems with time series and tabular data. At the same time, the internet was full of blog posts and papers about deep learning models outperforming humans. What I was reading looked much more exciting than the projects I was doing at work.

At that time, I had some knowledge of deep learning, but I would not say my hands-on experience was extensive. I had not done any DL in graduate school or at work, but I had worked on a few DL competitions at Kaggle. I even held a Kaggle Master title, with a gold medal in the Ultrasound Nerve Segmentation challenge. That was the first challenge where UNet was widely used by the Kaggle community, and the first challenge overall where soft dice was used as a loss function for a segmentation task.
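For readers who have not seen it, here is a minimal sketch of a soft dice loss for binary segmentation in PyTorch. This is my own illustration of the idea, not the exact loss used in that challenge:

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-7):
    """Soft dice loss for binary segmentation.

    logits:  raw model outputs of shape (N, 1, H, W)
    targets: binary ground-truth masks of the same shape
    """
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()
```

Minimizing 1 minus the dice coefficient directly optimizes the overlap between the predicted and ground-truth masks, which is why it works well when the foreground occupies only a small fraction of the image.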

Before 2017, Kaggle mostly hosted tabular data competitions, where stacking XGBoost and other traditional ML algorithms was the way to get to the top. DL competitions appeared occasionally, but they were rare.

Everything changed in 2017. DL became more mature and moved from academia to industry. Kaggle competitions picked up the trend, and challenges with imagery data became more frequent.

My job at TrueAccord was pretty good, but I wanted to move to a position where I would work on deep-learning-related tasks.

The problem was that my starting position for such a change was not very strong:

  1. My major was Physics and not Computer Science.
  2. I had only one year of industry experience.
  3. I did not have any machine learning related papers in my resume.
  4. I did not do any computer vision projects at my job.
  5. My deep learning knowledge was limited.

Basically, I was in the same situation as anyone else trying to switch the direction of their work.

And I did exactly what anyone else would do in my place. The approach is rather simple: you go to an interview; if it works, you are done; if not, you study what you did not know and repeat. The hope is that after a finite number of iterations, you will get lucky and land the desired position.

Of course, I was studying in all the free time I could find: papers, blog posts, and, of course, DL competitions.

In March 2017, Sergey Mushinsky and I finished 3rd out of 493 in the Dstl Satellite Imagery Feature Detection challenge and split the $20k prize money. After that challenge, I started to feel comfortable with binary image segmentation and with the multispectral imagery that one typically deals with in satellite data. (Blog post describing the solution, preprint, code)

I used the prize money to buy a second GPU, which in turn made me realize that Keras did not work well with a multi-GPU setup. I switched to PyTorch, which has been my main DL framework ever since. During the competition, I failed an onsite interview at Descartes Labs and a few technical screens.

Somewhere around that time, I was invited to an onsite interview with NVIDIA, which I did not pass either. One of the issues was my limited knowledge of how 2D object detectors work. Luckily, DSTL launched the Safe Passage: Detecting and Classifying Vehicles in Aerial Imagery challenge, which focused on exactly this. They decided not to host it at Kaggle and used their own platform instead.

There was a rule in that competition: “Everyone can participate, but you need to have a passport from, and live in, a limited set of countries to be able to claim the prize money.” I lived in San Francisco and paid taxes in the US, but due to my Russian citizenship, I was not eligible for the prize. I knew about that unfortunate fact, but I still needed practice with object detectors. Hence, I worked hard on the challenge and finished second.

I believe that having the color of my passport prevent me from claiming the prize is discriminatory. I wrote this on my Facebook page and on Twitter and moved on. Somehow the story was picked up by Russian news, and I ended up on the front pages of Russian online outlets and on TV channels.

A British defense lab developing things for MI-6, AI algorithms, a winner who cannot claim the prize because of his Russian citizenship: you can make a good story out of that. And the Russian media did. At the time, I did not feel comfortable giving an interview on camera, so I refused to talk to the journalists. It did not stop them. They put my profile picture in the background and invited “specialists” to give their expert opinion on the topic.

The journalists even approached my parents, who had no clue about machine learning, competitions, or the whole story. My mother told them that parents are essential but that it would be unwise to underestimate the influence of school teachers. The deflection worked, and the journalists took their questions to my high school.

One of the top Russian tech companies, Mail Ru Group, decided to use this opportunity to get some positive PR. They proposed to give me $15,000, the equivalent of the second-place prize in the challenge. I liked the idea, but it did not feel fair: based on the rules I had agreed to, I was not eligible for the prize. Moreover, I was not thrilled to have a line in my resume saying I had been paid to develop AI algorithms for British MI-6. I had something better in mind. I love theoretical physics, and I knew that funding is always a problem there. Hence, I asked them to transfer the money to a Russian fund that supports fundamental science.

In the end, everything worked out well. Mail Ru Group and I got some positive PR, Russian fundamental science got $15k, and DSTL got the motivation to think twice about their rules for ML competitions.

The story held the attention of the Russian audience for a couple of days and then died out. In the English-speaking press, there is only one blog post that talks about it. Maybe that is for the best 🙂

It was June 2017, and I was still at TrueAccord. I wanted a computer vision job, and I did not have one.

Next was Tesla. A recruiter contacted me because of my Kaggle achievements, which does not happen often. I passed the take-home, the tech screen, and the onsite interview. The next steps were a background check and approval of my application by Elon Musk. I did not pass. The recruiter told me that I had violated the nondisclosure agreement (NDA) by talking about the interview process on a forum. Which was true. I did mention that I was interviewing at Tesla in the ods.ai Slack community. I did not share any interview questions there, but technically they were right. As I remember, Tesla’s NDA is rather strict and prohibits discussing your interview process even at a high level.

This rejection made me sad. Working with Andrej Karpathy would have been exciting. Even now, a few years later, I feel guilty that I created that unhealthy situation. Hopefully, at some point, I will have a chance to apologize for my behavior.

Next was the Amazon from Space competition at Kaggle. The problem was too straightforward to invest a lot of time in: multilabel classification on a small dataset. As a result, I merged into a team with six other people. In one week, we trained 480 classification networks and stacked them: 7th place out of 938.

The company that organized the competition was called Planet Labs. They had an open DL Engineer position; I asked about it and was invited to an onsite interview. I failed again. The feedback: not enough in-depth DL knowledge.

It was the middle of August. After seven months of job searching, I was starting to lose my usual positive attitude and my faith in myself. I had just started the interview process with Lyft, but I assumed it would end up like everything else before it.

I was sitting in front of the window, drunk. I had just received another rejection:

A Googler recently referred you for the Research Scientist, Google Brain (United States) role. We carefully reviewed your background and experience and decided not to proceed with your application at this time.

It was starting to get to me.

But I had an idea. In every bad situation, there is some weird move that may work. Most of my rejections came at the resume-screening stage, so I never got a chance to show my technical skills. The premise of the idea was to add deep learning publications to my resume.

I reached out to Alexey Shvets, who was a postdoc at MIT, with a proposal:

  1. Let’s find the next DL conference that has a competition track.
  2. Train a model, create a submission, and take last place. (I assumed that if people had spent years working on a problem, it would be hard to be serious competition for them.)
  3. Write a preprint or paper on that academic dataset, describing our solution.

He agreed, and we looked for a conference that might suit us. We came across MICCAI, which was happening in three weeks. It had a workshop called Endoscopic Vision Challenge with a few computer vision competitions. The submission deadline was in eight days.

We picked the first challenge, GIANA, which had three subchallenges. In the remaining eight evenings, I adapted the pipeline I had from previous Kaggle problems, wrote the code, and trained the required models, while Alexey wrote the reports describing our approach. We were confident that we would end up at the bottom of the leaderboards, so I did not really invest much time in the problems: a winning Kaggle pipeline and fit/predict. We sent our submissions and reports to the organizers, and I assumed we were done.

We were not. Alexey figured out that the workshop had another competition, called Robotic Instrument Segmentation, also with three subchallenges, and that its deadline had been extended by four days. He asked if I would like to give these subchallenges a shot. I agreed, spent four evenings, wrote the code, trained the models, and we sent the predictions to the organizers.

The challenge had a rule: a member of the team had to come to MICCAI and present the results, and the final standings would be announced only at the conference.

I had never been to Quebec City, or to Canada in general, so I agreed to go.

The day of the workshop. I come into the room. Everyone knows everyone, and all of them are excited. I do not know anyone, and I cannot start a conversation: I feel comfortable with deep learning and computer vision in general, but all the specifics of medical imaging are a bit foreign to me.

The presentations for the first challenge started. Teams from different universities presented their solutions. It was hard to judge how good they were, but it was clear that a lot of work had been invested in them. Then it was my turn. I came to the stage, told the audience that I am not an expert in the field and that we did this challenge only as practice, apologized for wasting their time, and showed our two slides.

The organizers showed the results.

1. First subchallenge: we are first.

2. Second subchallenge: we are third.

3. Third subchallenge: we are first.

First overall. (Official press release)

It was time for the second challenge, the one whose deadline had been extended by four days. Different teams presented their solutions. I apologized for wasting the audience’s time and showed our two slides.

First, second, first. First overall.

Even now, I remember that moment. I am standing on the stage. The organizer is preparing a check and some gifts. Alexey and I are the winners, but I felt frustrated. How did it happen that some random dude with no domain knowledge in medical imaging took first place in both challenges, while people who work on this topic for a living and had spent months on the problem had much weaker models?

I asked the audience: “Do you know where I work?” No one knew, except one organizer who had looked me up on LinkedIn. I told them that I work at TrueAccord, which is a debt collection agency, and that I do not train deep learning models at work. And this happens because recruiters at Google Brain and DeepMind do not even look at my resume.

After that passionate speech, members of DeepMind’s health team who were in the audience caught me during the break and asked if I would be interested in interviewing for a Research Engineer position on their team.

I believe it was the first time in history that a debt collection agency won competitions in medical imaging. 🙂

Around that time, I accepted an offer from Lyft, where I work now. In the break between TrueAccord and Lyft, I participated in the Carvana Image Masking Challenge at Kaggle. The approach from GIANA would only have landed me in the bottom 20% of the leaderboard. New ideas were developed, and the team of Vladimir Iglovikov, Alexander Buslaev, and Artem Sanakoyeu finished 1st out of 735. (blog post, code)

That competition was the first big challenge where the community started using pre-trained encoders in UNet-type architectures. It is the norm now, and there are great libraries that let you build a variety of segmentation networks with a variety of pre-trained encoders, but at that time the idea was relatively new. The challenge led to a preprint called TernausNet, which Alexey and I wrote just for fun, and which is, surprisingly, my most cited work.
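To give a feel for how routine this has since become, here is a minimal sketch using the segmentation_models_pytorch library; the library choice and the parameter values are my example, not something from the original post:

```python
# A minimal sketch: UNet with an ImageNet-pretrained encoder.
# Assumes the segmentation_models_pytorch package is installed.
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",     # pre-trained encoder backbone
    encoder_weights="imagenet",  # initialize the encoder with ImageNet weights
    in_channels=3,               # RGB input
    classes=1,                   # single-channel binary segmentation mask
)
```

Swapping the encoder is a one-line change, which is exactly the flexibility that was missing back when TernausNet was written.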

Summary

In eight months, I found an excellent job in the field I was interested in. That time was full of pain. You make mistakes. Every time you fail, you feel stupid. From time to time, you start losing faith in yourself. But you are learning something new, and every step you take moves you toward the desired goal.