I used to be a big fan of Julia, to the point of having written the first (and, to the best of my knowledge, only) openly available comprehensive text on it, Learn Julia the Hard Way. In Julia, I saw the possibility of a language with Python-like syntax and C-like speed. In late 2015, when I was commissioned to write a book on it, that sounded quite enticing. By 2017, it was clear that Python's overwhelming head start and abundant ecosystem would keep it in front, especially with the growing importance of deep learning approaches. There's a lesson here for all of us: Python could make up for its apparent lack of speed through packages like Numba, but catching up with the massive Python ecosystem just wasn't going to happen for Julia. A speedup of one order of magnitude, or two at most, simply wasn't going to justify leaving Python's incredibly rich deep learning and machine learning ecosystem behind.
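To make the Numba point concrete, here's a quick, illustrative sketch (the function and array sizes are arbitrary examples of my own, not anything from the book): a tight numeric loop that would crawl in pure Python gets JIT-compiled to machine code with a single decorator.

```python
import numpy as np
from numba import njit

@njit  # compiles the function to machine code on first call
def pairwise_l2(X):
    """Naive pairwise Euclidean distances, written as plain loops."""
    n, d = X.shape
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(d):
                diff = X[i, k] - X[j, k]
                acc += diff * diff
            out[i, j] = acc ** 0.5
    return out

X = np.random.rand(500, 16)
D = pairwise_l2(X)  # runs at near-C speed after the first (compiling) call
```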

We have a pre-configured deep learning instance that has all that worked out. For personal pet projects, I've also used AWS's deep learning AMI (pretty good, actually!) and NVIDIA DIGITS v6 via AWS for fast, 'fire and forget' transfer learning jobs. Recently, I started packaging machine learning jobs as purpose-built Docker images that I can simply push to ECS, with a Lambda set up to grab the final weights, save them to S3 and terminate the instance when training is done or when early-stopping criteria are met. Works like a charm; maybe I'll write a little about that architecture someday!
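Until then, here's a minimal sketch of what that clean-up Lambda can look like, assuming it's triggered by a CloudWatch Events rule on ECS task state changes and that the training container writes its final weights to a staging bucket; every bucket, key, and file name below is a hypothetical placeholder, not the actual setup.

```python
import boto3

ecs = boto3.client("ecs")
ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

def handler(event, context):
    """Fires on an ECS task state-change event (assumed trigger)."""
    detail = event["detail"]
    if detail["lastStatus"] != "STOPPED":
        return  # only act once the training task has actually finished

    # Copy the final weights from the staging bucket the training
    # container wrote to into long-term storage, keyed by task ID.
    task_id = detail["taskArn"].split("/")[-1]
    s3.copy_object(
        Bucket="trained-models",                   # placeholder bucket
        Key=f"{task_id}/weights.h5",               # placeholder key
        CopySource={"Bucket": "training-scratch",  # placeholder bucket
                    "Key": "weights.h5"},
    )

    # Resolve the EC2 instance backing the task and terminate it,
    # so the GPU box stops billing the moment training ends.
    ci = ecs.describe_container_instances(
        cluster=detail["clusterArn"],
        containerInstances=[detail["containerInstanceArn"]],
    )["containerInstances"][0]
    ec2.terminate_instances(InstanceIds=[ci["ec2InstanceId"]])
```

A nice property of keying everything off the STOPPED status is that an early stop also ends the task the same way, so one handler covers both the normal and the premature-termination path.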