Demystifying Deep Learning

The original article was published by Eshan Chatty in Artificial Intelligence on Medium.


How is NVIDIA contributing to the deep learning ecosystem to overcome these challenges?

  1. The NVIDIA Deep Learning Institute (DLI) provides hands-on training for data scientists and software engineers, helping the world solve challenging problems using AI and deep learning. It covers complete workflows for applications in autonomous vehicles, healthcare, video analytics, and more. Click on the link above for more information.
  2. NVIDIA Inception has become a credible platform for AI startups, providing benefits such as AI expertise from the NVIDIA DLI, technology access from AWS, and cloud support from Oracle. It also gives you a global community for showcasing your innovation, along with go-to-market support.
  3. The latest algorithms are provided via GPU-accelerated frameworks and deep learning software development kits (SDKs).
Source: NVIDIA

  4. They provide fast training with DGX, A100, V100, and TITAN. DGX is NVIDIA's line of servers and workstations, while the A100 and V100 are HPC data-center GPUs. TITAN is one of the most powerful GPUs built for PCs.

  5. Deployment platforms are provided by EGX, NGC, TensorRT, A100/T4, Drive AGX, and Jetson AGX. EGX securely deploys and manages containerized AI frameworks and applications, including NVIDIA TensorRT, TensorRT Inference Server, and DeepStream. It also includes a Kubernetes plug-in, a container runtime, NVIDIA drivers, and GPU monitoring. More at www.nvidia.com/egx. NGC provides optimized containers, pre-trained models, and model scripts that are secure, scalable, and run on any platform. For more, see www.nvidia.com/ngc.


Here are a few of the interesting questions that were covered by Will Ramey!

How do we get results with very high-resolution images, like 2500×2500 (e.g., mammograms)?

For large images, you just need to increase your input/output size. However, as you increase the resolution of your input and output, your limiting factor will be GPU memory, because those large images get even larger as they are fed through the neural network. If your GPU is running out of memory, there are a few ways to work around this. You can try processing your image in ‘tiles’: break the main image down into smaller chunks that your GPU can handle, then re-assemble them in a post-processing step. You can also train over multiple GPUs and have each GPU take a portion of the image.
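For intuition, here is a minimal sketch of the tiling idea in Python/NumPy. The names `predict_in_tiles` and `model_fn` and the tile size are illustrative choices, not something from the talk, and a real pipeline would usually use overlapping tiles and padding to avoid visible seams at tile borders.

```python
import numpy as np

def predict_in_tiles(image, model_fn, tile_size=512):
    """Run model_fn on fixed-size tiles of a large image and stitch the
    outputs back together. Assumes model_fn maps an (H, W) tile to an
    output of the same shape (e.g. a segmentation mask)."""
    h, w = image.shape[:2]
    output = np.zeros_like(image, dtype=np.float32)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = image[y:y + tile_size, x:x + tile_size]
            output[y:y + tile_size, x:x + tile_size] = model_fn(tile)
    return output

# Example: a fake 2500x2500 "mammogram" processed with a stand-in model.
image = np.random.rand(2500, 2500).astype(np.float32)
dummy_model = lambda tile: tile  # placeholder for a real network
result = predict_in_tiles(image, dummy_model, tile_size=512)
print(result.shape)  # (2500, 2500)
```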

How does deep learning apply to software testing, to identify bugs early and make the testing process easier?

One use case I have seen at a large tech company is the use of DL to analyze code check-ins to determine which of a set of possible tests to run as a post-commit hook. Oftentimes, tests that are run post-check-in have nothing to do with the code that was checked in, so skipping some of these tests on a per-check-in basis can speed up the testing process considerably. Generally speaking, you can think of problems involving code as ‘natural language’ problems (even though they aren’t really natural language); a lot of natural language advancements in the DL space have happened within the last 1–2 years with the Transformer family of deep learning models.
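As an illustration of how the problem can be framed (not the actual system at that company), here is a toy sketch that treats diff text as "natural language" and learns a multi-label mapping from check-ins to relevant tests. It uses a simple TF-IDF baseline from scikit-learn rather than a Transformer, purely to keep the sketch short; the data and names are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Historical check-ins: diff text -> tests that actually exercised the change
diffs = [
    "def parse_config(path): ...",      # touched the config parser
    "UPDATE users SET email = ...",     # touched the database layer
    "def parse_config(path, strict): ...",
]
tests_hit = [["test_config"], ["test_db"], ["test_config"]]

labels = MultiLabelBinarizer()
y = labels.fit_transform(tests_hit)

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # code-friendly character n-grams
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(diffs, y)

# For a new check-in, run only the tests the model flags as relevant
new_diff = "def parse_config(path, strict=True): ..."
predicted = labels.inverse_transform(clf.predict([new_diff]))
print(predicted)  # tests predicted as relevant for this diff
```

Replacing the TF-IDF features with a pretrained, code-aware Transformer encoder is the natural next step the answer alludes to; the overall framing stays the same.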

The recently launched NVIDIA Broadcast app magically removes background noise from ordinary microphones using AI. What kind of training data might have been used to train it?

This app takes noisy audio as input and outputs clean audio. The training data therefore probably looked like a set of paired audio clips: a noisy version and a clean version of the same recording. To build such pairs, they could have recorded a bunch of noisy clips and manually cleaned them up, or (an easier way) taken a bunch of clean audio clips and added noise to them.
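Here is a minimal sketch of the second, easier approach: synthesizing (noisy, clean) training pairs by adding noise to clean clips. The function name and the plain Gaussian noise are illustrative assumptions, not how NVIDIA actually built its dataset; a real pipeline would mix in recorded background sounds such as keyboards, fans, or traffic.

```python
import numpy as np

def make_training_pair(clean, noise_level=0.05, rng=None):
    """Turn one clean audio clip into a (noisy, clean) training pair by
    mixing in noise. Gaussian noise is used here only as a stand-in for
    real recorded background noise."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, noise_level, size=clean.shape)
    noisy = np.clip(clean + noise, -1.0, 1.0)
    return noisy.astype(np.float32), clean.astype(np.float32)

# Example: a synthetic 1-second clip at 16 kHz standing in for real speech
sample_rate = 16_000
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
clean_clip = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # placeholder "speech"
noisy_clip, target_clip = make_training_pair(clean_clip)
# A denoising model would then be trained to map noisy_clip -> target_clip
```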

Can an NLP machine translation system using DL work with only monolingual data?

It depends on what other data you have access to. Generally speaking, DL models do need supervision and labeled data to do discriminative tasks like translation. However, we have become very efficient at transfer learning these days, so if you can find supervised data that is sufficiently similar to your monolingual data, you can transfer from that supervised data onto your monolingual test data. Models in the BERT family are pretty good at this: trained on one set of data, but ultimately evaluated on another.
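As a rough illustration of that "trained on one set, evaluated on another" pattern, here is a sketch of cross-lingual transfer with a multilingual BERT-family model via the Hugging Face `transformers` library. It uses a classification task rather than full translation, and the model name, tiny dataset, and single training step are all illustrative assumptions, not a recipe from the talk.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A multilingual encoder pretrained on many languages: supervision can come
# from a high-resource language, evaluation from your monolingual data.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Labeled examples in a language where supervised data exists
train_texts = ["the movie was great", "the movie was terrible"]
train_labels = torch.tensor([1, 0])

batch = tokenizer(train_texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=train_labels).loss  # one illustrative training step
loss.backward()
optimizer.step()

# Evaluate (zero-shot) on monolingual data in another language
model.eval()
with torch.no_grad():
    test_batch = tokenizer(["la película fue estupenda"], return_tensors="pt")
    pred = model(**test_batch).logits.argmax(dim=-1)
print(pred)
```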