Learning PyTorch : Introduction

Original article was published by Pixelbeget Lab on Deep Learning on Medium


PyTorch is a library for Python programs that facilitates building deep learning projects. It emphasizes flexibility and allows deep learning models to be expressed in idiomatic Python.

PyTorch has proven itself fully capable of professional, real-world, high-profile work. Its clear syntax, streamlined API, and easy debugging make it an excellent choice for an introduction to deep learning.

PyTorch provides a core data structure, the tensor, which is a multidimensional array that shares many similarities with NumPy arrays. PyTorch comes with features to perform accelerated mathematical operations on dedicated hardware, which makes it convenient to design neural network architectures and train them on individual machines or parallel computing resources.
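To make the NumPy comparison concrete, here is a short sketch of how tensors are created and converted; the specific values are illustrative:

```python
import torch

# Create a 2x3 tensor of ones; the API closely mirrors NumPy.
t = torch.ones(2, 3)
print(t.shape)               # torch.Size([2, 3])

# Element-wise math works the way it does on NumPy arrays.
doubled = t * 2
print(doubled.sum().item())  # 12.0

# CPU tensors convert to and from NumPy arrays, sharing the same memory.
a = t.numpy()
back = torch.from_numpy(a)
print(back.shape)            # torch.Size([2, 3])
```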

PyTorch offers two things that make it particularly relevant for deep learning:
First, it provides accelerated computation using graphics processing units (GPUs), often yielding speedups in the range of 50x over the same calculation on a CPU. Second, PyTorch provides facilities that support numerical optimization on generic mathematical expressions, which deep learning uses for training. We can safely characterize PyTorch as a high-performance library with optimization support for scientific computing in Python.
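The numerical-optimization facilities are built on autograd, PyTorch's automatic differentiation engine. A minimal sketch of how it works:

```python
import torch

# requires_grad=True tells autograd to track operations on this tensor.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x       # y = x^2 + 2x

# backward() computes dy/dx = 2x + 2, which is 8 at x = 3.
y.backward()
print(x.grad)            # tensor(8.)
```

Gradients like this are what optimizers use to adjust model parameters during training.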

PyTorch arguably offers one of the most seamless translations of ideas
into Python code in the deep learning landscape. For this reason, PyTorch has seen widespread adoption in research, as witnessed by the high citation counts at international conferences. PyTorch also has a compelling story for the transition from research and development into production.

Deep Learning projects

For performance reasons, most of PyTorch is written in C++ and CUDA, a C++-like language from NVIDIA that can be compiled to run with massive parallelism on GPUs. Moving computations from the CPU to the GPU in PyTorch doesn’t require more than an additional function call or two. Thanks to tensors and the autograd-enabled tensor standard library, PyTorch can be used for physics, rendering, optimization, simulation, modeling, and more.
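The "additional function call or two" usually looks like this; the sketch falls back to the CPU when no GPU is present:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t = torch.randn(1000, 1000)
t = t.to(device)    # one call moves the tensor to the chosen device

result = t @ t      # the matrix multiply runs on whichever device t lives on
print(result.device)
```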

Basic, high-level structure of a PyTorch project, with data loading, training, and deployment to production:

First we need to physically get the data, most often from some sort of storage acting as the data source. Then we need to convert each sample from our data into something PyTorch can actually handle: tensors. PyTorch DataLoader instances can spawn child processes to load data from a dataset in the background so that it’s ready and waiting for the training loop as soon as the loop can use it. At each step in the training loop, we evaluate our model on the samples we got from the data loader. We then compare the outputs of our model to the desired outputs (the targets) using some criterion, or loss function; PyTorch provides a variety of loss functions in torch.nn. We also need an optimizer to perform the updates, and that is what PyTorch offers in torch.optim.
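The steps above can be sketched as a minimal training loop. The data here is synthetic and the one-layer model, loss, and learning rate are illustrative choices, not recommendations:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic regression data standing in for a real data source.
inputs = torch.randn(100, 3)
targets = inputs.sum(dim=1, keepdim=True)   # a simple learnable relationship
loader = DataLoader(TensorDataset(inputs, targets), batch_size=10, shuffle=True)

model = nn.Linear(3, 1)                     # a one-layer model, for brevity
loss_fn = nn.MSELoss()                      # one of the criteria in torch.nn
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    for batch_inputs, batch_targets in loader:
        outputs = model(batch_inputs)           # evaluate the model
        loss = loss_fn(outputs, batch_targets)  # compare to the targets
        optimizer.zero_grad()
        loss.backward()                         # compute gradients
        optimizer.step()                        # update the parameters
```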

It’s increasingly common to use more elaborate hardware, such as multiple GPUs or multiple machines, to train a large model. In those cases, torch.nn.parallel.DistributedDataParallel and the torch.distributed submodule can be employed to use the additional hardware.

PyTorch defaults to an immediate execution model (eager mode). Whenever an instruction involving PyTorch is executed by the Python interpreter, the corresponding operation is immediately carried out by the underlying C++ or CUDA implementation. As more instructions operate on tensors, more operations are executed by the backend implementation.
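A small sketch of what eager mode means in practice: each line runs as soon as the interpreter reaches it, so results can be inspected immediately and ordinary Python control flow just works.

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = a * 2            # executed right away by the backend; no graph-build step
print(b)             # tensor([2., 4., 6.]) -- the result already exists

# Because execution is immediate, plain Python control flow can depend on values:
if b.sum().item() > 10:
    b = b - 1
print(b)             # tensor([1., 3., 5.])
```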

PyTorch also provides a way to compile models ahead of time through TorchScript. Using TorchScript, PyTorch can serialize a model into a set of instructions that can be invoked independently from Python: say, from C++ programs or on mobile devices. We can think about it as a virtual machine with a limited instruction set, specific to tensor operations. This allows us to export our model, either as TorchScript to be used with the PyTorch runtime, or in a standardized format called ONNX. These features are at the basis of the production deployment capabilities of PyTorch.
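As a sketch of the TorchScript workflow, torch.jit.trace records the operations run on an example input and produces a serializable module; the model and file name here are illustrative:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)

# Trace the model with an example input to produce a TorchScript module.
example = torch.randn(1, 4)
traced = torch.jit.trace(model, example)

# The traced module can be saved and later loaded without the original
# Python code, e.g. from a C++ program via the libtorch runtime.
traced.save("model.pt")
loaded = torch.jit.load("model.pt")
print(loaded(example).shape)   # torch.Size([1, 2])
```

For models with data-dependent control flow, torch.jit.script compiles the Python source instead of tracing one execution path.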

Summary

  1. Deep learning models automatically learn to associate inputs and desired outputs from examples.
  2. Libraries like PyTorch allow you to build and train neural network models
    efficiently.
  3. PyTorch minimizes cognitive overhead while focusing on flexibility and speed. It also defaults to immediate execution for operations.
  4. TorchScript allows us to precompile models and invoke them not only from
    Python but also from C++ programs and on mobile devices.
  5. Since the release of PyTorch in early 2017, the deep learning tooling ecosystem has consolidated significantly.
  6. PyTorch provides a number of utility libraries to facilitate deep learning projects.

In the next post we will learn how to install PyTorch on an Ubuntu 18.04 system and work with pre-trained models.