Source: Deep Learning on Medium
Some Insights, Handy Todos and Setup Guides
In Fall 2018, I took fast.ai’s “Practical Deep Learning for Coders (Part I)” course at USF. The course is offered in person (how I did it) and as a MOOC. It was a great experience and I learned a lot, but in retrospect I could have gotten more out of it had I prepped a little differently. I share my experiences and some handy todos here, hoping it helps others get more out of their time doing Deep Learning with fast.ai.
Who should read this post?
My secret wish: everyone (I can use the feedback!). But if you’re an ace Deep Learning practitioner taking the course to keep pace with SOTA (state of the art), then you already know all I’m going to say. If you’re an active programmer getting into Deep Learning, you will find this useful in parts. But if you’re like me, i.e. new to Python and Deep Learning, then reading this before the course will get you going faster. And if you’re one of the lucky few who still operate in the real world (i.e. don’t breathe code), then WE need you! Stay and help us understand your world and how we can be of use to you!
Why this post?
Prior to this course, I had zero exposure to Deep Learning, the tech or the math. I was, however, deeply interested in how AI is going to shape society, and if society would have a say in shaping AI. I now understand the first *enough* to see a desperate need for the second. Experts in the field have been advocating for and championing the cause of enabling society to actively shape AI. Fast.ai is one such champion. Their goal is to mainstream Deep Learning; bring it to one and all. And a big part of this is reducing the barriers to entry.
None of what I share below is rocket science, but it can be intimidating for a first timer. And if you’re from a group that isn’t the typical geek persona, it can be outright scary. I am not a geek. I do not have a background in Data Science. I used to code a lot, but I’m not in love with coding. What I do love, however, is how my code can help solve problems. And, increasingly, whether it can do so without creating bigger problems! My own Deep Learning journey is just starting, but taking a cue from Jeremy’s comment that there are more people like me who need help getting started than experts who write papers, I decided to pause and pen this down. I want to shine a light on Jeremy’s clarion call of “You, whoever you are, can do Deep Learning” while keeping it real in terms of the effort required and the expectations to be had.
The current course is taught in Python. I self-learnt Python a few years ago; but the post baby brain fog wiped all that out, along with the geek-talk and the tribal knowledge around it. But I did code for a living (in C++) many years ago and that foundation has stayed. So this time around I was in a “don’t know my way around this shiny new thing but I’ll figure it out” state. With the associated anxiety of course!
What’s unique about fast.ai?
Coming back to the course. The fast.ai team makes it super easy to get in, in line with their goal of making Deep Learning accessible to all. The pre-requisites are intentionally set to a minimum and the setup guides are easy to follow. The forums are a storehouse of information. But the course is intense and picks up pace like a roller-coaster ride. Buckle up!
What’s unique though is the nature of the intensity. Jeremy’s teaching style is best described as DO-LEARN-DO. Enjoy the experimentation before understanding the science. Learn the tricks, then the trade. For most of us who’ve been conditioned by years of traditional learning, this can be quite jarring. Add to that the free-flowing Jupyter notebooks that are forever works in progress, and it can take a class or two to settle in. Preparing for this in advance can make a huge difference to how quickly you gather steam.
Another thing that made a world of difference to me is fast.ai’s approach to math. Jeremy presents it very intuitively and downplays the complex bits. For someone who is dead scared of math this was liberating. I was suddenly empowered to explore a field that I’m interested in without worrying about understanding all of it, in all its depth, all in one go. In fact I recall him saying in the last lecture, “All of you are ready to read Deep Learning papers now; ignore the fancy Greek and just focus on the text”! And that’s great. BUT if you haven’t seen it before, or like most of us, saw it half distracted in high school, then the math takes some returning to. Again, anticipate and prepare for this.
Finally, the expectations you should have going in. This course will not turn you into an expert publishing papers (at least not right away); but it doesn’t claim to either. The goal is simple: get as many people as possible, experts in THEIR fields (though I can’t even claim that!), to understand what machine learning can do and teach them ways to do it quickly. And on this, it delivers plenty and then some. It’s a great place to start exploring the field and its applications. And if you’re interested, it also gives you all the means and tools required to dig deeper and go pro. The effort is yours to put in.
Now let’s turn to the todo lists, my original promise!
What to do before and during the course to overcome the anxiety of being thrown into the deep end (pun intended)!
I highly recommend spending a week or two BEFORE the course doing the following:
- Set up your local machine so it’s ready to browse code at a minimum. Learn how here.
- Do NOT read up on Deep Learning (the tech or the math)! The advantage of not knowing what you’re getting into is huge, and should not be underestimated. Especially given Jeremy’s style of teaching, which is basically a sine curve oscillating between “wow, see that magic?” and “don’t let that be magic!”.
- Do, however, budget for more time DURING the course than what is suggested. Because half the time stuff will not work.
- If you’ve never coded in Python, but are comfortable coding, just spend 2 days reading stuff like this or this or this… (you get the idea) and write code. I didn’t do this, and was able to follow along and find what I needed when stuff didn’t work… but doing this would have allowed me to spend more time experimenting with the deep learning concepts and less time worrying about getting my syntax right.
- Once you learn to build a Deep Learning model (first class!) you’ll need to train it. This requires a unix/linux box or a powerful PC with Nvidia GPUs, OR the use of a remote compute environment. By remote, I mean on the cloud: stuff like Google Cloud, Amazon AWS etc. that provide compute engines for machine learning. If you’re using remote compute, you’ll also need to set up the remote environment to browse code (or an IDE if you will). You can do this after the first class, as prescribed, or give it a go in advance. See how here.
Here are some things to consider doing DURING the course:
- Unpacking — Each class packs in a lot of new concepts, gently introduced and then actively used in later classes. The few times I fell behind in reviewing the lecture video/notebooks, the next class was a total washout. Don’t let that happen, especially if you’re attending in person. I don’t mean understand every concept down to its last hairy detail, but get a general idea, RUN the Jupyter notebooks (i.e. execute the code in them) and spend *some* time with them before the next class.
- Listen to Jeremy — I mean like really listen, to everything. If he says, go code — do it. If he says “try blah”… go try blah. I did this but not as much as I should have. The old habit of wanting to understand “why” before “how” is hard to break; as is the infrastructure software mould of doing software specs and architecture diagrams BEFORE writing any code. I wish I had broken those two habits sooner. They have value, just not here.
- Personal Project — By week 3 or 4, identify a fun project that you can continue to grow with. Going through the class notebooks is a good way to look at new concepts, but the real learning happens on projects, so start early. I really struggled to get my project off the ground, and by the time I did, the course was over… of course I’m still working on it, but being able to try out more of the concepts during the course would have helped them stick better.
- Forum — this was the most intimidating part for me. Everyone seems to already know everything and within one week people were posting amazing models and projects showcasing all the progress they were making. And here I was struggling to find the horse, let alone attempt getting on it. The fast.ai team encourages folks to ask for help. So I did. Started posting questions at first, then some comments and eventually some progress too. The rules I set for myself: 1) if I’m stuck on something for over a day and nothing has worked, it goes on the forum; 2) put the blinders on any post that uses terms I haven’t seen in class yet. I highly recommend these rules to newcomers!
- Study Group — if your time and location align with a study group, definitely join in. I was juggling a few other things during the course (a job hunt, a toddler and a bad commute) so I found myself being most productive at home. I hope to fix this in the near future!
And it’s a wrap.
Anytime I look at the amazing work others in this field are doing (students, experts, nearly everyone but me), I get weighed down by the minuscule progress at my end. But then I pause and tell myself: I don’t need it to be easy, I just need to know I can get there. And that’s what I hope this blog reaffirms for others like me, so they may overcome their anxieties and jump in.
Questions, feedback, insights, thoughts are all welcome! For those of you wanting to get started with the setups, see the sections below. Happy Deep Learning!
Local setup for Mac
If your local machine is a Mac, then you will not be able to run or train fast.ai models on it (no Nvidia GPU and fast.ai v1 is only supported on Linux). But you still want to be able to browse code from day 1. For that you’ll need some software and software package managers – homebrew, miniconda/anaconda, pip, git, python 3.6 or higher, pytorch, jupyter (the tool of choice for writing DL code and running models) and any editor (I used VS Code).
- If you have a clean machine, i.e. no installations of any of the above, then you’re in luck. This blog is all kinds of awesome to get you going. I’d say read the blog anyway!
- I had a bunch of random packages and python installations from past lives and wanted a clean start. This and this really helped with the cleanup and then blog in step 1 worked.
- After step 1 you should have the Homebrew and Anaconda/Miniconda package managers installed, and pip too. I used Miniconda, which installs all the other packages, including python. This conda cheat-sheet helped check on things along the way. With homebrew/pip you will need to install the additional packages yourself.
- The main software packages you will need are Python 3.6 or higher, NumPy, SciPy, Matplotlib and Jupyter. The “conda list” command will list all installed packages.
- VS Code installation guide is easy to follow. Getting started with Python is documented here and here — you will need the Python extension for VSCode. I also installed the Vim and Jupyter extensions for kicks. VS Code cheat sheet is here.
- Install Pytorch from here.
- Install fast.ai from here.
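Once the installs are done, a quick sanity check from Python can confirm the basics are in place. This is a minimal sketch; the package list simply mirrors the steps above, so extend it with torch/fastai if you installed those locally too:

```python
# Sanity-check the local environment: Python version and core packages.
import importlib.util
import sys

# fast.ai v1 requires Python 3.6 or higher
assert sys.version_info >= (3, 6), "Python 3.6+ required"

# Core packages from the install steps above
for pkg in ["numpy", "scipy", "matplotlib"]:
    status = "ok" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```

Run it with the same python that conda installed, so you know you are checking the right environment.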
At this point, you can launch VS Code, create a workspace for your fast.ai and pytorch codebase and the linking, references, definitions etc should all work seamlessly across the two. Happy browsing!
Remote Setup (GCP, VIM, ctags etc)
Fast.ai is supported on multiple platforms like Google Cloud Platform (GCP), Amazon Web Services (AWS), Salamander etc., so that regular folks like us, who don’t have Linux boxes with fancy GPUs sitting in our garages, can also do Deep Learning! The setup guides provided on the course forums are great for getting started. Pick a remote compute platform and run through the setup guides. I used GCP and did not run into any issues not already covered in the forums, so I have nothing to add here.
To browse code, I wanted a Vim setup like Jeremy’s. This was tricky and I had to do a lot more than “just copy his .vimrc file”. Which, btw, is awesome, so get it from here. The vim version that came pre-installed on the GCP compute engine did not have python support enabled. You can check using the “vim --version” or “vim --version | grep python” command and look for python3 in the feature list: a minus sign (-python3) means Python support is not compiled in, a plus sign (+python3) means you’re all set.
Turning the minus to plus required some work. You can either compile Vim from source, which I did not want to do on a remote platform (if you want to, see this and this), or download a version that comes pre-compiled with python enabled. After some hunting, I settled on Vim-NOX as the easiest option. You can install Vim-NOX using the instructions here.
Once Vim-NOX is set up, vim --version should show python3 with a plus sign, which means you now have Python support enabled for Vim. Just add a few more changes to the .vimrc and install a few more packages to get easy code browsing features like auto-complete, code folding etc., and you’ll be good to go.
- Install powerline (for statuses/prompts):
pip install powerline-status
- Install vundle (bundle/package manager for vim):
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
- Change .vimrc to find python3 and vundle (change the site-packages path below to the location of your python site-packages; it may differ if you don’t use the conda version):
python3 import sys; sys.path.append("/opt/anaconda3/lib/python3.6/site-packages")
python3 from powerline.vim import setup as powerline_setup
python3 powerline_setup()
python3 del powerline_setup
- Install ctags:
sudo apt-get install exuberant-ctags
- Create tags for the code you want to browse. For me this was the fast.ai and pytorch libraries, so this is what I executed to create tags:
sudo ctags -R -o ~/tags /opt/anaconda3/pkgs/fastai-1.0.34-py_1/site-packages/fastai /opt/anaconda3/pkgs/pytorch-nightly-1.0.0.dev20181024-py3.6_cuda9.2.148_cudnn7.1.4_0
- Configure ctags:
echo --python-kinds=-i > ~/.ctags
- Set tags in .vimrc to point to the tags generated:
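Assuming the tags file was written to ~/tags as in the ctags command above, the .vimrc entry looks something like this:

```vim
" Point Vim at the tags file generated by ctags above (path is an assumption)
set tags+=~/tags
```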
- Change code fold method: In Jeremy’s .vimrc the code fold method is set to “expr”, which is apparently the most complicated and most powerful way to fold code in Vim! Expr folding uses a Vim expression (often regex-based) to define the fold rules. Fun fact: the last time I worked with regular expressions was when I taught Perl as a TA for an undergrad course that I have no memory of now! So I just modified my .vimrc, set the foldmethod to “indent”, and that is good enough for now!
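For reference, the change amounts to a couple of lines in the .vimrc (the foldlevel setting is my own addition, to keep folds open when a file first loads):

```vim
" Fold by indentation instead of expr-based rules
set foldmethod=indent
" Open all folds by default (optional; adjust to taste)
set foldlevel=99
```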