Learning Deep Learning — the journey begins

I have been watching this space for several years now. While studying Engineering and Math at BITS Pilani, I took a Neural Networks course, which helped me understand and visualize decision-making from the perspective of a ‘learning system’.

However, after my undergraduate engineering degree, I went on to degrees in economics (at the London School of Economics) and management (at London Business School), and haven’t been able to dust off my old love for math since.

I just didn’t see how Python, programming, and big-data analysis fit with the work I was doing, which involved laying out strategy, building the revenue side of businesses, and streamlining operations. It was all done in Excel (sometimes Google Sheets) and SQL. And, of course, the right hand of any manager: MS PowerPoint.

I’ve been trying to learn Python for almost two years now but haven’t had the discipline to get beyond basic operations. The lack of a path towards building a specific product, in spite of having a vision, was another factor.

After years of waiting, I finally took the plunge and signed up for a Udacity nanodegree course in Deep Learning.

I’m still a bit unsure where this will take me, although I am now aware of the paths in front of me.

In my initial conversations, my Udacity mentor told me he would consider me a successful participant in the course if I could generate a proof of concept for an idea that uses deep learning frameworks, and then build a business off of it. That seems like a challenging yet interesting path to take.

Today was a highly interesting day for me. These are the three exercises I did on the Udacity course:

1. Style transfer: I created a funky duck using style transfer; the duck originally looked quite different. This application reminded me of the Prisma app. (There’s a rough sketch of how style transfer works after this list.)
Duck image: before style transfer
Duck image: after style transfer (I found this super cool and funny)

2. MIT self-driving car project: a super cool simulator called DeepTraffic. I honestly didn’t spend much time on this one; I just played around with the numbers for a bit and looked at the code without understanding much of it. I did take a neural nets elective back in 2009 in college (it’s been 10 years), so I could at least follow what the instructor was trying to do with the code.

This is what the simulation looks like:

3. Flappy Bird: this exercise is probably a staple of most deep learning courses. I watched the hundreds of lines of output showing me the neural network’s parameters, and bits of what I learnt started coming back to me.
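
From what I’ve gathered so far, style transfer boils down to optimizing the pixels of an image so that its deep features match the content photo while its feature correlations (Gram matrices) match the painting. Below is a rough PyTorch sketch of that classic approach (Gatys et al.); it is not the Udacity notebook’s code, and the file names, layer choices, and weights are just placeholders I picked.

```python
# A rough sketch of Gatys-style neural style transfer in PyTorch.
# "content.jpg" and "style.jpg" are placeholder file names.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=256):
    tfm = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tfm(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("content.jpg")  # e.g. the original duck photo
style = load("style.jpg")      # e.g. a painting to borrow textures from

# Frozen, pretrained VGG19 as the feature extractor.
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for i, layer in enumerate(vgg):  # out-of-place ReLUs keep autograd happy
    if isinstance(layer, torch.nn.ReLU):
        vgg[i] = torch.nn.ReLU(inplace=False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1: texture statistics
CONTENT_LAYER = 21                 # conv4_2: layout and shapes

def features(x):
    x = (x - MEAN) / STD  # VGG expects ImageNet-normalized input
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):  # channel-by-channel feature correlations = "style"
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_grams = [gram(f) for f in features(style)[0]]
target_content = features(content)[1]

# Optimize the pixels of a copy of the content image.
img = content.clone().requires_grad_(True)
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    opt.zero_grad()
    s_feats, c_feat = features(img)
    content_loss = F.mse_loss(c_feat, target_content)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, target_grams))
    (content_loss + 1e4 * style_loss).backward()  # 1e4: style weight to tune
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)  # keep pixels in valid range

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("stylized.png")
```

As I understand it, the style weight is the main knob: turn it up and you get more texture from the painting, turn it down and the original duck stays more recognizable.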

Udacity suggested I read a book called “Grokking Deep Learning”. I have heard of Andrew Trask (and follow him on Twitter), so I knew of his work; that pushed me to get a copy of the book. I don’t even know what the word “grokking” means, so I’ll have to look that up first, I guess.

There are two other recommended books: Neural Networks and Deep Learning by Michael Nielsen, and Deep Learning by Ian Goodfellow et al.

I’ll figure out when to read those after I finish this one. I’m also planning on reviewing Trask’s book once I’ve read it.

The next section is on Jupyter notebooks. I’ve played around with them before, so I shouldn’t find it difficult (only time will tell).

That’s all for now.

Love,

R