The original article was published in Artificial Intelligence on Medium.
Interviewer: Haebichan Jung of TowardsDataScience.com.
Interviewee: Amit Jain, Tech Lead, ML at TradeRev.
For more TDS-only interviews, please check here:
Can you tell us about your professional background?
I now have 15 years of extensive industry experience across many verticals. I’ve worked as a team manager, team lead, and so on. My career has had three phases over those 15 years. The first five years were spent in telecom, where I developed algorithms for the 3G stack using C, assembly code, etc. For the next five years I was in cloud computing and application programming. Over the last five years, I’ve honed my backend skills and added ML to my skill set.
Most recently, I moved to Toronto in September 2017 and joined TradeRev as Tech Lead for the ML team.
What is your focus at TradeRev?
TradeRev is an online platform for used cars in the B2B space. Rather than dealers auctioning to dealers at physical auctions, they use TradeRev’s online platform. It’s like eBay for dealer-to-dealer car transactions.
Machine Learning comes into the picture to give confidence about the car and the dealer. Some regular use cases are recommendation systems, price prediction (a regression problem), and computer vision problems such as identifying the views of a car from videos.
My role at TradeRev was quite unique. I was primarily responsible for ML product delivery, which meant taking research into production. I led the ML team and was involved in all aspects of the product lifecycle.
This began with ideation with the product team about what solution they were looking for, then collaborating with Data Engineers and Data Scientists to build prototypes, choosing AWS services where appropriate, and turning the prototype into a polished solution. Our tech stack was Python, scikit-learn, TensorFlow, and other standard libraries.
Another important piece of work was introducing software best practices into Data Science, such as CI/CD in the Machine Learning space, model monitoring (how do we actually monitor our model?), unit testing, etc.
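To make the unit-testing practice concrete, here is a minimal sketch of what a behavioural test for a trained model might look like. The pipeline, feature, and function names are illustrative assumptions, not TradeRev's actual code.

```python
# Illustrative unit test for an ML pipeline: pin down basic behavioural
# guarantees (output shape, no NaN/inf predictions) rather than exact values.
import numpy as np
from sklearn.linear_model import LinearRegression


def train_price_model(X, y):
    """Toy stand-in for a real training pipeline."""
    return LinearRegression().fit(X, y)


def test_model_outputs_are_sane():
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 200_000, (100, 1))                  # e.g. mileage
    y = 30_000 - 0.1 * X[:, 0] + rng.normal(0, 500, 100)   # toy price signal
    model = train_price_model(X, y)
    preds = model.predict(X)
    assert preds.shape == (100,)          # one prediction per row
    assert np.all(np.isfinite(preds))     # no NaNs or infinities downstream
```

A test runner such as pytest would pick this up automatically, so a broken model build fails CI before it ever reaches production.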
Can you speak about the business impact your team made at TradeRev using ML?
One important part of taking research into production is understanding how ML actually makes an impact on the business. Recommender systems are an example. The basic question is: based on my previous history, what new items am I interested in?
So based on a car dealer’s previous history (which cars they were interested in buying), we could help the customer sales team target only the specific dealers surfaced by the ML algorithm (focusing on 10 dealers instead of 100). This could then reduce the time to close a deal, which is a very direct impact of having an ML algorithm in production.
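As an illustration of the idea (not TradeRev's actual system), one simple way to surface a short list of dealers is to score each dealer's purchase history against a new listing by cosine similarity and keep only the top matches. The segments and numbers below are invented for the example.

```python
# Hypothetical dealer recommendation by cosine similarity.
# Rows: dealers; columns: car segments (sedan, SUV, truck, luxury).
import numpy as np

history = np.array([
    [5, 0, 1, 0],   # dealer 0 mostly buys sedans
    [0, 4, 3, 0],   # dealer 1 buys SUVs and trucks
    [1, 0, 0, 6],   # dealer 2 buys luxury cars
], dtype=float)
new_car = np.array([0, 3, 2, 0], dtype=float)  # an SUV-like listing


def top_dealers(history, car, k=2):
    """Return indices of the k dealers whose history best matches the car."""
    sims = history @ car / (np.linalg.norm(history, axis=1) * np.linalg.norm(car))
    return np.argsort(sims)[::-1][:k]


print(top_dealers(history, new_car))  # dealer 1 (SUV/truck buyer) ranks first
```

A real system would use richer features and a proper model, but the output is the same shape: a short, ranked list the sales team can act on.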
Do you have any advice for those switching to a Data Science career from a background such as yours (software engineering)?
I have three specific pieces of advice.
- Clearly understand the various roles in Data Science: Product Manager, SDE, ML Engineer, Data Engineer, Researcher, Data Scientist, Business Analyst, and so on. Ask which role interests you and which of the relevant skills you already have. Also, marry domain expertise with skill in Machine Learning.
- Identify the skill gap. If there is a career path you are targeting, find the skills you lack and take the appropriate courses, whether in math, coding, or whatever it may be.
- Focus on the applied side of ML. Let’s say you have a housing dataset and you want to predict something. Use existing libraries to see what you can find, and get feedback on the things you have built.
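The applied exercise in the last point can be this small. Here is a sketch using a synthetic housing-style dataset (the features and price formula are invented for illustration) and an off-the-shelf scikit-learn model.

```python
# Applied-ML starter: fit an existing library model on a (synthetic) housing
# dataset and measure its error, instead of implementing algorithms by hand.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(50, 300, n),   # hypothetical: floor area in square metres
    rng.integers(1, 6, n),     # hypothetical: number of bedrooms
    rng.uniform(0, 80, n),     # hypothetical: age of the house in years
])
# Synthetic price: a linear signal plus noise, just to have a target.
y = 2_000 * X[:, 0] + 15_000 * X[:, 1] - 500 * X[:, 2] + rng.normal(0, 20_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Mean absolute error: {mae:,.0f}")
```

Swapping in a real dataset and trying a second model, then asking someone to review the result, is exactly the feedback loop described above.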
You spoke about domain expertise. What is its role in Machine Learning, and how important is it?
My perspective is that there is no silver bullet here. ML is nothing but stats and math with new libraries. In a nutshell, it’s an age-old technique with new libraries and frameworks, plus more data and computing power.
Say you don’t have domain expertise and you only know the tools. In that situation, think of it as a marathon, not a sprint. For a marathon you need your fundamentals, like learning a new running technique. But the real question is whether you have the stamina, meaning whether you have the foundation of domain expertise.
So let’s take drug discovery, for instance (given the corona season). A person who is an expert in TensorFlow or scikit-learn but doesn’t know anything about molecules cannot, I think, do wonders in drug discovery. But if another person has domain expertise in the field, the two can come together for a bigger prospect of success. There are unicorns, of course, who come with the full skill set, but on average, if we build a team with different skill sets, the team’s rate of success improves.
To your point about drug discovery, I do see that many people have been posting their models online without really understanding the biological domain. This seems dangerous.
Yes, and what I see is that many people take this in isolation. There should be a more holistic approach to a product being built with ML. In finance, if a person without any economics background builds models on the stock market, and tomorrow the stock market behaves strangely, ML alone will not work.
You mention model production frequently, which I often hear described as the last part of a Data Science project life cycle. Can you speak more about the importance of this topic and how newcomers can learn it?
I’d like to quote Andrew Ng on this. He said, “We know the ML works. Now it’s the time to take it into production and monetize it.” But what I’ve seen is that Data Scientists usually focus on the Jupyter Notebook and the hyperparameters.
What I’ll say is: don’t take a very narrow approach. Think about the business context and how ML fits into the business. ML is one part of the business. A recommender system, for example, is not the business; it is part of the business.
The other part is to think about what it means for ML to be released into production. For instance, your field data can have a different distribution than your training/test data. If you don’t have that mindset, it will be troublesome when a bug surfaces tomorrow, the predictions are way off, and the issue becomes challenging to fix. This leads to questions like “How can you improve your training? How can you improve your API?”
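The distribution-shift concern above can be checked mechanically. One common approach (illustrative here, not necessarily what TradeRev uses) is a two-sample Kolmogorov-Smirnov test comparing a feature in live "field" data against the same feature at training time; the feature name and threshold below are assumptions for the example.

```python
# Sketch of a drift check: compare a feature's distribution in field data
# against training data with a two-sample KS test (scipy.stats.ks_2samp).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical feature: vehicle mileage. The field data has drifted upward.
train_mileage = rng.normal(60_000, 15_000, 5_000)
field_mileage = rng.normal(75_000, 15_000, 1_000)

stat, p_value = ks_2samp(train_mileage, field_mileage)
if p_value < 0.01:   # illustrative alert threshold
    print(f"Possible drift detected (KS statistic = {stat:.3f})")
```

Running such a check on a schedule, and alerting when it fires, is one concrete form of the model monitoring mentioned earlier in the interview.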