Announcing fast.ai part 1 now available as Kaggle Kernels



It’s a great time to get started doing deep learning with Kaggle Kernels!

Recently I had the chance to give my first conference talk, at PyOhio in Columbus. I spoke about getting into deep learning, and I used Kaggle Kernels to demo some material from the first lesson of the fast.ai course.

The next day I came across a Medium article called “Learn deep learning with GPU enabled kaggle kernels and fastai MOOC”, and I was excited to see more people recognizing the capabilities of this platform.

So I thought, if we’re going to make it easy for people to get started with deep learning on Kaggle Kernels, why not make the whole fast.ai course available?

The Benefits of Kaggle Kernels

According to the documentation, “Kaggle Kernels is a cloud computational environment that enables reproducible and collaborative analysis.” Basically, you concentrate on writing your code, and Kaggle handles setting up the execution environment and running it on their servers. There are a few reasons why I think this setup is a great fit for someone just starting out with deep learning:

  • Free GPU with no waiting/approval: The fact that you can access a GPU instantly and for free is a huge step for getting started with deep learning. When I first started, the only option was to wait for AWS to approve you for a GPU instance, then pay to run it (and hope you didn’t forget to shut it off). Running on a GPU instead of a CPU can mean your training finishes in minutes rather than hours (see the sanity-check sketch after this list).
  • Deep learning packages pre-installed: This is another huge win for beginners, saving you hours of setup and Googling obscure error messages. The Kernel environment is set up using the Dockerfile in the docker-python repo maintained by Kaggle. It sets up CUDA and CUDNN, NVIDIA’s libraries for accelerating deep learning on their GPUs, and installs popular Python libraries for deep learning: in addition to fastai, there’s keras, tensorflow, and pytorch, among others.
  • Access to data: Most of the fast.ai lessons use Kaggle competitions for training data, and in Kaggle Kernels accessing that data is as easy as clicking “Add Dataset”. It also makes it easy to apply the lessons to other past competitions without any additional steps. And if you can’t find the data that you need, you can upload your own dataset and share it with the Kaggle community.
  • Social features of the platform: I think that some of the social features of Kaggle Kernels also make it a good learning environment. You have the ability to “fork” existing kernels and tweak them to make your own version, giving you the ability to experiment and see the impact of different code changes. You can comment on kernels to ask the authors questions or try to track down a bug. And upvoting helps you see popular kernels, which might lead you to the next interesting topic you want to learn.
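
You can confirm most of this from inside a Kernel. Here’s a quick sanity-check sketch (what appears under ../input depends on the datasets you’ve attached):

    import os
    import torch

    print(torch.__version__)             # pre-installed pytorch version
    print(torch.cuda.is_available())     # True once the GPU is enabled in Kernel settings
    print(torch.backends.cudnn.enabled)  # CUDNN acceleration
    print(os.listdir('../input'))        # datasets added via "Add Dataset" appear here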

Gotchas

There are a couple of things to be aware of when working with Kernels if you’re used to working on AWS or your own machine, or if you’re following instructions geared toward those environments. These differences accounted for the bulk of the changes I needed to make to the original fast.ai notebooks to get them running as Kernels.

  • Read-only input data: By default, the data you load into the Kernel lives under ../input, but those directories are read-only. This causes two issues for fast.ai. First, some lessons expect you to move the data around to conform to a particular directory structure; this can be solved by passing the data to the learner in a different form (in lesson 1 I used a list of filenames and labels, which is supported by the from_names_and_array method). Second, the learners by default write tmp data and model weights to the same directory as the data; this can be changed by passing the tmp_name and models_name options to the learner. There’s a sketch of both workarounds after this list.
  • Waiting for package updates: I mentioned that the package installs are maintained in a Dockerfile by Kaggle, and this is largely to your benefit, saving you hours of configuration. The one drawback is that you only get package updates when Kaggle rebuilds their Docker image. While working on these kernels I ran into a few issues that had already been fixed in the latest fast.ai release, and I had to port those fixes over myself by monkey-patching some classes (the general pattern is sketched below). I was also unable to complete lessons 4 and 6 (RNN) because of a bug involving pytorch 0.3.1 and CUDNN 7.1. (The update to pytorch 0.4.0 for GPU Kernels is in progress.)
  • Non-persistent filesystem: If you restart your Kernel, you lose the files you’ve written to disk. This isn’t a major problem for running the fast.ai lessons, but if you’re experimenting with different models or hyperparameters and saving weights as you go, keep in mind that those files won’t survive a restart.
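
To make the read-only workaround concrete, here’s a minimal sketch in the style of my lesson 1 kernel, assuming the fastai 0.7 API used in the course; the data path and the label-parsing rule are illustrative, so adjust them to whatever dataset you’ve attached:

    import os
    import numpy as np
    from fastai.transforms import *
    from fastai.conv_learner import *
    from fastai.dataset import *

    PATH = '../input/'  # competition data: read-only
    sz = 224
    arch = resnet34

    # Build filename/label arrays instead of moving files into labeled directories
    fnames = np.array([f'train/{f}' for f in sorted(os.listdir(f'{PATH}train'))])
    labels = np.array([0 if 'cat' in f else 1 for f in fnames])

    data = ImageClassifierData.from_names_and_array(
        path=PATH, fnames=fnames, y=labels, classes=['cats', 'dogs'],
        val_idxs=get_cv_idxs(len(fnames)), tfms=tfms_from_model(arch, sz))

    # Write tmp data and model weights somewhere writable instead of under PATH
    learn = ConvLearner.pretrained(arch, data, precompute=True,
                                   tmp_name='/tmp/tmp', models_name='/tmp/models')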
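And the monkey-patching mentioned above is just the standard Python pattern of re-binding a fixed function over a class attribute at runtime. The method below is only a placeholder to show the shape of it, not one of the actual patches I made:

    import os
    from fastai import dataset

    def patched_get_x(self, i):
        # the real patches copied the fixed method bodies from the
        # newer fast.ai release; this body is just illustrative
        return dataset.open_image(os.path.join(self.path, self.fnames[i]))

    # Re-bind the method so every instance picks up the fix
    dataset.FilesDataset.get_x = patched_get_x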

The Kernels

Without further ado, here they are! I hope this helps in your deep learning journey—nothing would make me happier than seeing these get forked, tweaked, and applied in new and interesting ways.

Lesson 1
Lesson 2
Lesson 3
Lesson 4
Lesson 5
Lesson 6 (SGD)
Lesson 6 (RNN)
Lesson 7 (CIFAR10)
Lesson 7 (CAM)

If you have any questions, feel free to reach out on the fast.ai forums or on Twitter: @hortonhearsafoo.
