Hummingbird - An Accelerator for Machine Learning Models

Original article was published by Karteek Menda on Deep Learning on Medium



Speeding up ML model inference by converting models to tensor computations.

Hello Aliens…..

Let me first give you a brief idea of the main setback of traditional machine learning models that led to the rise of “Hummingbird”.

Over the last few years, the capabilities of deep learning have increased tremendously, and it has gained huge popularity. That popularity has driven a focus on optimizing training and development by leveraging tensor computations.

Traditional machine learning models such as random forests typically run on CPUs for inference and could benefit from GPU-based hardware accelerators.

Let me throw some light on CPU vs GPU.

CPU vs GPU. Retrieved from here

A CPU is great for data reuse, as it has fast caches. It runs many different processes or threads and gives high performance on a single thread of execution. CPUs are good for task parallelism and are generally optimized for high performance on sequential code (caches and branch prediction).

Coming to the GPU, it has lots of math units and fast access to onboard memory, and it runs a program on each vertex. It gives high throughput on parallel tasks. GPUs are great for data parallelism and are generally optimized for high arithmetic intensity on parallel workloads (floating-point operations).
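To make the data-parallelism point concrete, here is a minimal sketch using NumPy, which vectorizes work across many elements in much the same spirit as tensor runtimes. It compares an element-wise Python loop (sequential, CPU-style) with a single vectorized operation (data-parallel):

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Sequential: one multiply per loop iteration
start = time.perf_counter()
slow = np.array([v * 2.0 for v in x])
loop_time = time.perf_counter() - start

# Data-parallel: one vectorized operation over the whole array
start = time.perf_counter()
fast = x * 2.0
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")
```

The vectorized version is typically orders of magnitude faster, and that gap widens further on hardware built for data parallelism, such as a GPU.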

So, what if we brought the advantages of neural networks to our traditional random forest and leveraged GPU-accelerated inference? This is where Microsoft’s Hummingbird comes into the picture. It transforms your machine learning model into tensor computations so that it can use GPU acceleration to speed up inference.

Hummingbird is only used to speed up the time to make a prediction, not to speed up the training!

But wait, why do we need to transform the model to tensors?

There are several reasons for transforming your model into tensors:

1. It speeds up inference tremendously, which cuts the cost of making new predictions.

2. It allows for a more unified standard compared to using traditional Machine Learning models.

3. Any optimization in Neural Network frameworks is likely to result in the optimization of your traditional Machine Learning model.

Good right…..

Using GPU acceleration creates a massive speed-up in inference compared to traditional models. So, it is worthwhile to transform your traditional machine learning models into a neural network.

About the package:

Hummingbird is a library for compiling trained traditional ML models into tensor computations.

Users can benefit from:

(1) all the current and future optimizations implemented in neural network frameworks.

(2) native hardware acceleration.

(3) a single platform that supports both traditional and neural network models.

(4) all of the above, without having to re-engineer their models.

Hummingbird supports a variety of ML models and featurizers. Currently these are the Supported Operators. Also, have a look at the upcoming features and support here.

Hummingbird usage is extremely simple.

Installation:

Hummingbird was tested on Python > 3.5 on Linux, Windows, and macOS machines. It is recommended to use a virtual environment. Hummingbird requires PyTorch >= 1.4.0.

Once PyTorch is installed, you can get Hummingbird from pip with:

pip install hummingbird-ml

If you require the optional dependencies lightgbm and xgboost, you can use:

pip install hummingbird-ml[extra]

Now let’s see the magic of Hummingbird: we will build a decision tree model on the Iris dataset and then convert it.

from sklearn.datasets import load_iris
from sklearn import tree

# Load the Iris dataset and train a decision tree classifier
X, y = load_iris(return_X_y=True)
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)

After doing so, the only thing we actually have to do to transform it to PyTorch is to import Hummingbird and use the convert function:

from hummingbird.ml import convert
# Use Hummingbird to convert the model to PyTorch
model = convert(clf, 'pytorch')

The resulting model is backed by a torch.nn.Module and can be used exactly as you would normally work with PyTorch.

# Run predictions on CPU
model.predict(X)
# Run predictions on GPU
model.to('cuda')
model.predict(X)

Conclusion:

That is how Hummingbird comes to the rescue, accelerating our machine learning models by leveraging tensors. For detailed information, refer to the API documentation here. I would recommend fellow readers visit the documentation to learn more.

Happy Learning…………

Thanks for reading the article! If you liked it, do 👏 this article. If you want to connect with me on LinkedIn, please click here.

I will be covering Google’s “BigBird” and “PRADO” in my upcoming article.

This is Karteek Menda.

Signing Off