Original article can be found here (source): Deep Learning on Medium
Fast and Reproducible Deep Learning
There are endless resources for learning to train a deep learning model, but running a successful deep learning project requires managing many additional moving parts that receive far less attention. This talk helps fill that gap.
Thanks to the Chicago ML Meetup for hosting.
Deep learning projects require managing large datasets, heavy-duty dependencies, complex experiments, and large amounts of code. This talk provides best practices for accomplishing these tasks efficiently and reproducibly. Tools that are covered include the Creevey library for processing large collections of files; pip-tools and nvidia-docker for managing dependencies; and MLflow Tracking for tracking experiments.
Autofocus is a deep learning project that labels animals in images taken by motion-activated “camera traps.” It illustrates many of the ideas discussed in the talk.