2020 AI & ML Opensource

Source: Deep Learning on Medium

Cutting edge open-source frameworks, tools, libraries, and models for research exploration to large-scale production deployment.

Frameworks & Tools

1) PyTorch

Github: https://github.com/pytorch/pytorch

PyTorch is an open-source deep learning framework built to be flexible and modular for research, with the stability and support needed for production deployment. It enables fast, flexible experimentation through a tape-based autograd system designed for immediate, Python-like execution. Since the release of PyTorch 1.0, the framework also offers graph-based execution, a hybrid front end that allows seamless switching between modes, distributed training, and efficient, performant mobile deployment.

Dynamic neural networks

While static graphs are great for production deployment, the research process involved in developing the next great algorithm is truly dynamic. PyTorch uses a technique called reverse-mode auto-differentiation, which allows developers to modify network behaviour arbitrarily with zero lag or overhead, speeding up research iterations.

The best of both worlds

PyTorch’s hybrid front end brings together flexibility, stability, and scalability, so AI/ML researchers and developers no longer need to make compromises when deciding which tools to use. Developers can seamlessly switch between imperative, define-by-run execution and graph mode, boosting productivity and bridging the gap between research and production.
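As a minimal sketch of what switching modes looks like, assuming the TorchScript API introduced with PyTorch 1.0 (the module here is a hypothetical two-layer example, not from the article):

```python
import torch

class TwoLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

eager_model = TwoLayer()                     # imperative, define-by-run mode
graph_model = torch.jit.script(eager_model)  # same module compiled to a graph

x = torch.randn(3, 4)
# Both modes compute the same function; the scripted module can also be
# serialized and run outside the Python interpreter.
assert torch.allclose(eager_model(x), graph_model(x))
```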

Tape-based autograd
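The idea behind a tape-based system can be shown without PyTorch: each operation records itself on a tape during the forward pass, and gradients are computed by replaying the tape in reverse. This is a deliberately simplified pure-Python sketch of the concept, not PyTorch’s actual implementation:

```python
class Var:
    """A scalar that records operations on a tape for reverse-mode autodiff."""
    def __init__(self, value, tape=None):
        self.value = value
        self.grad = 0.0
        self.tape = tape if tape is not None else []

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        def backward():  # how gradients flow back through this multiply
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
        self.tape.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        def backward():  # addition passes gradients through unchanged
            self.grad += out.grad
            other.grad += out.grad
        self.tape.append(backward)
        return out

def backprop(output):
    output.grad = 1.0
    # Replay the tape in reverse to accumulate gradients.
    for step in reversed(output.tape):
        step()

tape = []
x = Var(3.0, tape)
y = Var(4.0, tape)
z = x * y + x        # dz/dx = y + 1 = 5, dz/dy = x = 3
backprop(z)
```

Because the tape is rebuilt on every forward pass, the network can change arbitrarily between iterations, which is why dynamic control flow comes for free in this style of system.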

Get Started

  1. Install PyTorch. Multiple installation options are supported, including from source, pip, conda, and pre-built cloud services like AWS.
  2. Review documentation and tutorials to familiarize yourself with PyTorch’s tensor library and neural networks.
  3. Check out the tools, libraries, pre-trained models, and datasets below to support your development needs.
  4. Build, train, and evaluate your neural network. Here’s an example of the code used to define a simple network:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()

2) ONNX

Github: https://github.com/onnx/onnx

ONNX is an open format for representing deep learning models, allowing AI developers to easily move models between state-of-the-art tools and choose the best combination. ONNX accelerates the process from research to production by enabling interoperability across popular tools including PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and more.

Framework interoperability

ONNX enables models to be trained in one framework, and then exported and deployed into other frameworks for inference. ONNX models are currently supported in frameworks such as PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet and Chainer with additional support for Core ML, TensorFlow, Qualcomm SNPE, Nvidia’s TensorRT and Intel’s nGraph.

Hardware optimizations

Any tool that exports ONNX models can benefit from ONNX-compatible runtimes and libraries designed to maximize performance. ONNX currently supports Qualcomm SNPE, AMD, ARM, Intel and other hardware partners.

Get Started

  1. Install ONNX from binaries using pip or conda, or build from source.
  2. Review documentation and tutorials to familiarize yourself with ONNX’s functionality and advanced features.
  3. Follow the importing and exporting directions for the frameworks you’re using to get started.
  4. Explore and try out the community’s models in the ONNX model zoo.

3) Tensor Comprehensions

Research: https://research.fb.com/announcing-tensor-comprehensions/

Github: https://github.com/facebookresearch/TensorComprehensions

Tensor Comprehensions (TC) accelerates development by automatically generating efficient GPU code from high-level mathematical operations. TC is a C++ library and mathematical language that helps bridge the gap between researchers, who communicate in terms of mathematical operations, and engineers who are focused on running large-scale models.

Boost productivity

Tensor Comprehensions (TC) is based on generalized Einstein notation for computing on multi-dimensional arrays. It greatly simplifies the development of new operations by providing a concise and powerful syntax which can be automatically and efficiently translated into high-performance computation CUDA kernels.
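For instance, a matrix multiply is written in TC as `C(i, j) +=! A(i, k) * B(k, j)`, which TC translates into an optimized CUDA kernel. The semantics of that comprehension can be spelled out as plain Python loops (a reference sketch only, far from the generated kernel):

```python
# Reference semantics of the TC comprehension  C(i, j) +=! A(i, k) * B(k, j):
# "+=!" means C is zero-initialized, then reduced over the unnamed index k.
def matmul(A, B):
    I, K = len(A), len(A[0])
    J = len(B[0])
    C = [[0.0] * J for _ in range(I)]
    for i in range(I):
        for j in range(J):
            for k in range(K):
                C[i][j] += A[i][k] * B[k][j]
    return C
```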

Tensor Comprehensions provides a lightweight and seamless integration with PyTorch.

Get Started

  1. Set up or install Anaconda if you don’t already have it.
  2. Install Tensor Comprehensions.
  3. Review the tutorial and documentation to familiarize yourself with how to use Tensor Comprehensions.

4) Glow

Github: https://github.com/pytorch/glow

Glow is a machine learning compiler that accelerates the performance of deep learning frameworks on different hardware platforms. It enables the ecosystem of hardware developers and researchers to focus on building next-gen hardware accelerators that can be supported by deep learning frameworks like PyTorch.

Powerful hardware optimizations

Glow accepts a computation graph from deep learning frameworks, such as PyTorch, and generates highly optimized code for machine learning accelerators. It contains many machine learning and hardware optimizations like kernel fusion to accelerate model development.
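Kernel fusion, the optimization named above, can be illustrated in miniature (this is a conceptual sketch, not Glow’s IR or API): instead of materializing an intermediate tensor between two elementwise ops, the compiler fuses them into a single traversal.

```python
def add_then_relu_unfused(xs, ys):
    tmp = [x + y for x, y in zip(xs, ys)]   # intermediate buffer in memory
    return [max(0.0, t) for t in tmp]       # second pass over that buffer

def add_then_relu_fused(xs, ys):
    # One pass, no intermediate allocation -- what a fused kernel does.
    return [max(0.0, x + y) for x, y in zip(xs, ys)]
```

On an accelerator, eliminating the intermediate buffer saves memory bandwidth, which is usually the bottleneck for elementwise operations.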

Glow is currently in active development.

Get Started

Visit GitHub to get started.

5) FAISS

Github: https://github.com/facebookresearch/faiss

FAISS (Facebook AI Similarity Search) is a library that allows developers to quickly search for embeddings of multimedia documents that are similar to each other. It solves the limitations of traditional query search engines that are optimized for hash-based searches and provides more scalable similarity search functions.

Efficient similarity search

With FAISS, developers can search multimedia documents in ways that are inefficient or impossible with standard database engines (SQL). It includes nearest-neighbour search implementations for million-to-billion-scale datasets that optimize the memory-speed-accuracy tradeoff. FAISS aims to offer state-of-the-art performance for all operating points.

FAISS contains algorithms that search in sets of vectors of any size, and also contains supporting code for evaluation and parameter tuning. Some of its most useful algorithms are implemented on the GPU. FAISS is implemented in C++, with an optional Python interface and GPU support via CUDA.
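The problem FAISS accelerates can be stated in a few lines of plain Python: brute-force nearest-neighbour search over a set of vectors. This sketch is only illustrative of the semantics, not of FAISS’s API; FAISS indexes return the same kind of answer orders of magnitude faster at million-to-billion scale.

```python
def nearest_neighbor(query, vectors):
    """Return the index of the vector closest to `query` in L2 distance."""
    def l2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vectors)), key=lambda i: l2(query, vectors[i]))
```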

Get Started

  1. Install FAISS.
  2. Review documentation and tutorials to familiarize yourself with how FAISS works and its capabilities.
  3. Experiment with building indexes and searching using FAISS.

6) StarSpace

Github: https://github.com/facebookresearch/StarSpace

StarSpace is a general-purpose neural embedding model that can be applied to a number of machine learning tasks including ranking, classification, information retrieval, similarity learning, and recommendations. It is highly competitive with existing methods and generalizes well to new use cases.

A multi-purpose learning model

StarSpace learns to represent objects of different types into a common vectorial embedding space in order to compare them against each other. This makes it well suited for a variety of problems, including:

  • Learning word, sentence or document level embeddings.
  • Information retrieval — the ranking of sets of entities/documents or objects, e.g., ranking web documents.
  • Text classification, or any other labelling task.
  • Metric/similarity learning, e.g., learning sentence or document similarity.
  • Content-based or Collaborative Filtering-based Recommendation, e.g., recommending music or videos.
  • Embedding graphs, e.g., multi-relational graphs such as Freebase.
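The common idea behind all of these tasks is that once different object types live in one embedding space, they can be compared directly, typically by cosine similarity. The embeddings below are hypothetical hand-picked vectors for illustration, not StarSpace output:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A document and its candidate labels live in the same embedding space,
# so classification reduces to picking the most similar label vector.
doc_embedding = [0.9, 0.1, 0.0]
label_embeddings = {"sports": [1.0, 0.0, 0.0], "politics": [0.0, 1.0, 0.0]}
best_label = max(label_embeddings,
                 key=lambda label: cosine(doc_embedding, label_embeddings[label]))
```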

Additional details on StarSpace can be found in the research paper.

7) Visdom

Github: https://github.com/facebookresearch/visdom

Visdom is a visualization tool that generates rich visualizations of live data to help researchers and developers stay on top of their scientific experiments that are run on remote servers. Visualizations in Visdom can be viewed in browsers and easily shared with others.

Rich, live visualizations

Visdom provides an interactive visualization tool that supports scientific experimentation. Visualizations of plots, images, and text can be easily broadcast for yourself and collaborators.

The visualization space can be organized through the Visdom UI or programmatically, allowing researchers and developers to inspect experiment results across multiple projects and debug code. Features like windows, environments, states, filters, and views also provide multiple ways to view and organize important experimental data.

8) Adanet

Github: https://github.com/tensorflow/adanet

Fast and flexible AutoML with learning guarantees

AdaNet is a lightweight TensorFlow-based framework for automatically learning high-quality models with minimal expert intervention. It uses the AdaNet algorithm of Cortes et al. (2017) to learn the structure of a neural network as an ensemble of subnetworks while providing learning guarantees. Importantly, AdaNet provides a general framework not only for learning a neural network architecture but also for learning to ensemble subnetworks to obtain even better models.
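The flavour of that algorithm can be conveyed with a toy greedy loop (an illustrative sketch only, not the adanet library’s API): at each round, a candidate subnetwork joins the ensemble only if it lowers the validation loss.

```python
def grow_ensemble(candidates, loss_fn):
    """Greedily add each candidate only if it improves the ensemble's loss."""
    ensemble = []
    best_loss = loss_fn(ensemble)
    for candidate in candidates:
        trial_loss = loss_fn(ensemble + [candidate])
        if trial_loss < best_loss:
            ensemble.append(candidate)
            best_loss = trial_loss
    return ensemble

# Toy setup: "subnetworks" are scalar predictions, the ensemble averages
# them, and the loss is squared error against a target of 4.0.
target = 4.0
def loss_fn(ens):
    if not ens:
        return target ** 2
    pred = sum(ens) / len(ens)
    return (pred - target) ** 2
```

Here `grow_ensemble([10.0, 2.0, 6.0], loss_fn)` rejects the first candidate and keeps the two whose average hits the target; the real algorithm additionally trades off ensemble complexity against fit, which is where the learning guarantees come from.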

9) AutoML Video On-Device

Github: https://github.com/google/automl-video-ondevice

Inference using the AutoML Video trained object detection mobile sequence models

This example code shows how to load the Google Cloud AutoML Video Object Tracking On-Device models and run inference on a sequence of images from a video clip. The targeted devices are CPU and Edge TPU.

10) Budou

Github: https://github.com/google/budou

Automatic line-breaking tool for Chinese, Japanese, and Korean (CJK) languages

Budou automatically translates CJK sentences into organized HTML code with meaningful chunks to provide beautiful typography on the web.

How Google uses Budou

Budou is used on some of Google’s websites to provide legible and beautiful typography in CJK (Chinese, Japanese, and Korean). Headings are broken into multiple lines at meaningful positions according to the browser screen width.
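The output side of this idea can be sketched in a few lines: wrap each semantic chunk in an inline-block span so the browser only breaks lines at chunk boundaries. This sketch assumes the chunks are already given (Budou derives them with a syntactic parser) and is not Budou’s actual API:

```python
def to_html(chunks):
    # Wrap each semantic chunk so the browser treats it as unbreakable,
    # allowing line breaks only between chunks.
    spans = "".join('<span class="chunk">{}</span>'.format(c) for c in chunks)
    return '<span style="word-break: keep-all">{}</span>'.format(spans)

html = to_html(["今日は", "良い", "天気です"])
```

With CSS making `.chunk` display as `inline-block`, the heading reflows at phrase boundaries instead of mid-word as the viewport narrows.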

11) Bullet Physics SDK