How to Keep Up With The Latest in AI — While Working in Jobs That Don't Need It

Thoughts on how to balance the need to keep up with SOTA against the reality of work-life obligations.


This article serves as a summary of this tweet (below) by Sam Bowman. Much of the content is taken from the responses from the original thread, interlaced with my own experience.

If you're working in data science, chances are you know someone who's asked this question before (or perhaps that person is, in fact, you). Especially if you're at a non-tech company (actually, even tech companies can be pragmatic about solving their problems), chances are your employer won't expect you to spend much time reading up on, researching, or studying state-of-the-art (SOTA) techniques to solve your business problems.

It’s a dilemma really. One that can be quite frustrating at times.

It's a dilemma because part of being a good data scientist is having the drive to push the boundaries of what data science has to offer, yet you're not encouraged to do so at your workplace.

And it's frustrating because you always have an inkling of a suspicion that whatever is deployed in your current production system could easily be replaced by the latest SOTA techniques, but you haven't been given the green light to spend working hours reading relevant research papers, experimenting, and A/B testing your hypotheses.

So what should a data scientist do to keep up with the Hintons and LeCuns of the world?

On the one hand, the drive to be a decent data scientist compels you to keep up to date with the latest AI/ML progress; on the other, there are just so many papers being published each day, across so many domains. And working at a company that isn't interested in pushing the boundaries of AI certainly doesn't help.


From the tweet's thread, here are some strategies that I liked (and that may be useful for you too).

1. Stagger your learning

What I like about the above is that it puts a damper on the need to always be on the lookout for the latest SOTA algorithms and forces you to focus on the end goal (i.e. solving a business problem). Yes, you'll be a bit late to the party (the FOMO is real). But you'll also preserve that little bit of sanity you have left for other things (like helping out with your kid's homework or going shopping with your wife, for example).

I recall when the BERT paper was released a few years ago; every few weeks you'd see a new BERT variant pop up claiming SOTA scores. I don't think testing out all of the different embeddings is an efficient use of your time, and they probably don't add much to your depth of knowledge (since things go stale pretty quickly during the initial ramp-up, when everyone is eager to prove something).

Thus, letting the dust settle for a year (or six months) before adopting a technique sounds like a good strategy to me. Not to mention that within that period there will already be tons of workshops, tutorials, articles, videos, example code and what have you to quickly give you the gist of what you need to know about the algorithm and how to deploy it efficiently (in a framework you're already familiar with, no less).

2. Listen to podcasts

Podcasts are the easiest way to consume the latest SOTA, in my opinion. A day's commute to work (pre-Covid-19) takes around three hours of my time (back and forth), while a podcast episode usually runs between 30 minutes and an hour depending on the provider. That means that within a week, that adds up to 15 hours I can fill with quality content, which I can always revisit should I find a topic interesting.

My favorites are as below:

  1. TWIML (covers the entire AI domain, business vertical and ML Ops)
  2. NLP Highlights (Allen NLP folks interviewing other NLP experts)
  3. Artificial Intelligence with Lex Fridman

3. Join a virtual paper reading group

If there's any silver lining that has come out of Covid-19, it's that it has served as a catalyst for virtual knowledge sharing at many companies. Some that have recently come onto my radar are:

  1. Algo Hours by Stitchfix (recordings are available on Youtube)
  2. Deep Learning Salon by Weights and Biases (recordings are available on Youtube).
  3. HuggingFace has been sharing a lot of content at various tech meetups and on their Youtube channel, but they haven't gotten around to sharing their internal discussions just yet. However, they do share what they're reading on Github.

  4. Kaggle Reading Group. It's quite dated (last updated Dec '19, so perhaps not THAT old…), but it covers a lot of NLP research papers. Prepared by Rachael Tatman of Kaggle.

  5. Paper Reading & Discussion sessions hosted by elvis. This is quite a new find, and I still haven't had the time to dive into their materials just yet. Titles from past recordings do look interesting, though.

Circling back to reality, though: I never made it a point to diligently watch or join each and every knowledge-sharing session as it goes live (typically around 12am or later for me). What I'd normally do (pre-Covid-19) is look up the recording, save it on my phone (which is easy for Youtube), and watch it during my commute.

Of course, now that I'm mostly working from home, I can watch them directly on Youtube whenever I'm on a break. And most of the time I focus on the stuff that relates to something I can immediately use (going back to point #1 above) or that is being applied in a business setting somewhere (as opposed to just being the latest SOTA).

4. Community/Social Media

The items mentioned earlier are things you can do to learn on your own. But notice that in most cases, there isn't a feedback loop telling you whether what you've picked up so far is right or not.

That's the beauty of keeping in touch with the community. Some benefits off the top of my head:

  1. A check and balance on your understanding.
  2. The quickest way to crowd-source ideas on how to solve a problem. People are generous in general, as long as you're not seen to be abusing their goodwill.
  3. The ability to join learning groups on various topics (from beginner to advanced). The ones on the TWIML (This Week in Machine Learning) and MLT (Machine Learning Tokyo) Slack groups even host sessions for various geographical regions to cater to their audiences. For users of the fastai library in particular, the community is very active on their Discourse page.
  4. Direct access to the pioneers and trailblazers of the industry via platforms like Twitter and Slack. What I enjoy most about Twitter are the tweets (and their responses, such as the one that inspired this post) from experts in the field. It's always refreshing to read through the back-and-forth discussions on topics ranging from ML best practices to data ethics to NLP techniques, coming from book authors, lecturers, researchers and the who's who of the AI circle.

5. Doing

Ultimately, one needs to be actively writing code and building something to truly appreciate the knowledge that's been gained. Studying source code, working on your own projects, joining Kaggle competitions or even writing about what you've learned will greatly help solidify your understanding and preserve it for a much longer period (i.e. deliberate practice).


Keeping in sync with the latest developments and state of the art in AI/ML/NLP can be a daunting task, more so if your day job doesn't require you to be at the forefront of AI research.

In this post, we covered a few tips on how one can stay up to date with the latest developments in ML/NLP. To summarize, it’s not easy. But more importantly, perhaps you don’t really need to stay at the bleeding edge of progress.

With progress in the NLP domain being made at an ever faster rate these days, it might not even be a good thing to try to keep track of every single thing that appears each week. A better, simpler approach is to stagger the rate of input, allowing the best of breed to stand out while filtering bogus claims out of the hype cycle.

The rest of the tips revolved around consuming knowledge in your spare time, leveraging networks for information filtering, and allocating some time for deliberate practice.