Building NLP Solutions with NGC Models and Containers on Google Cloud AI Platform

The original article was published by Eshan Chatty in Deep Learning on Medium.


What is NVIDIA NGC?

NGC is a catalog of GPU-optimized software. It includes:

Containers: They package software applications, libraries, dependencies, and runtimes in a self-contained environment so they can be easily deployed across various computing environments. The deep learning framework containers from NGC are GPU-optimized.
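As an illustration, NGC containers are pulled from NVIDIA's `nvcr.io` registry with Docker. This is a minimal sketch; the specific image tag below is an assumption and should be checked against the current NGC catalog:

```shell
# Pull a GPU-optimized deep learning framework container from NGC
# (the tag 23.10-py3 is illustrative; look up current tags in the NGC catalog)
docker pull nvcr.io/nvidia/pytorch:23.10-py3

# Run it with GPU access (requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.10-py3
```

Because the container bundles the framework, CUDA libraries, and their dependencies, the same image runs unchanged on a workstation or on a cloud GPU instance.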

Collections: group all the assets you need — models, containers, and Helm charts — to build cutting-edge AI software in one place.

Helm Charts: Helm is a package manager for Kubernetes that lets you configure and manage your containerized application deployments.
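For example, NGC Helm charts can be installed with the standard `helm` CLI. The repository URL follows NGC's documentation, but the chart name and version below are illustrative placeholders, not a real chart:

```shell
# Add NVIDIA's NGC Helm repository (URL per NGC documentation)
helm repo add ngc https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install a chart from the repository
# (<chart-name> and <version> are placeholders; browse the NGC catalog for real values)
helm install my-release ngc/<chart-name> --version <version>
```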

Google and NVIDIA have come together to build an end-to-end platform for accelerating NLP. Google Cloud AI Platform offers the latest NVIDIA GPU, the A100, one of the most widely used accelerators, and sits on top of this infrastructure so you can tap into the full power of NVIDIA GPUs without worrying too much about low-level GPU details. NGC provides close to 200 pre-trained models (e.g., BERT) for NLP applications, along with industry toolkits and collections, and helps deploy them across sectors — including applying NLP to AI in healthcare.