Introduction to basic PyTorch tensor functions

Original article was published on Deep Learning on Medium

This article is part of the “Deep Learning with PyTorch: Zero to GANs” course offered by Jovian ML & freeCodeCamp. The first week of the course covers basic PyTorch tensors and linear regression in PyTorch. The objective of this article is to walk through some of the basic tensor operations.

Introduction

PyTorch is an open source deep learning framework developed by Facebook’s AI Research lab. Tensors are the fundamental data structures in PyTorch. A tensor is, in general, an n-dimensional array: a scalar is a 0-D tensor, a vector is a 1-D tensor, a matrix is a 2-D tensor, and so on. PyTorch tensors are similar to NumPy arrays, with the difference that PyTorch tensors can be placed on GPUs for parallel computing.

Let’s get started

Function 1: torch.tensor

torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor

This function converts input data to a tensor; the input data can be a list, tuple, NumPy array, scalar, or other types.

This example shows a list being converted to a tensor using the torch.tensor() function. The number of dimensions of the tensor can be obtained with the dim() method, and the shape can be inspected with the .shape attribute, which gives the length of the tensor along each dimension.
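The original example was embedded as an image; a minimal sketch of it might look like this (the values are illustrative):

```python
import torch

# Convert a plain Python list to a tensor
t1 = torch.tensor([1., 2., 3., 4.])
print(t1)        # tensor([1., 2., 3., 4.])
print(t1.dim())  # 1 -- number of dimensions
print(t1.shape)  # torch.Size([4]) -- length along each dimension
```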

The output shows that the tensor is one-dimensional and of size 4.

Creating tensor from a nested list
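A sketch of the nested-list case (the original example was an image; the values are illustrative):

```python
import torch

# A nested list becomes a 2-D tensor; the inner lists must all
# have the same length
t2 = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(t2.dim())   # 2
print(t2.shape)   # torch.Size([2, 3])
```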

Creating a tensor from a NumPy array
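A sketch of the NumPy case (the array values are illustrative). Note that torch.tensor() copies the data, while torch.from_numpy() shares memory with the source array:

```python
import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])
t3 = torch.tensor(arr)        # copies the data
t4 = torch.from_numpy(arr)    # shares memory with the NumPy array
print(t3.dtype)  # torch.float64 -- inherited from the NumPy array
print(t4.shape)  # torch.Size([3])
```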

Function 2: torch.randn

torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

torch.randn returns a tensor whose elements are drawn at random from a normal distribution with mean 0 and standard deviation 1; the shape of the tensor is determined by the size argument.

# Example 1 - working
tensor_rand = torch.randn(6)
tensor_rand
## Output ##
tensor([-0.8254, -0.2660, -0.7612,  0.7048, -1.8596, -1.2513])

# Example 2 - working
tensor_rand1 = torch.randn(4, 5)
tensor_rand1
## Output ##
tensor([[-0.7198,  0.2343, -1.1689, -1.9184, -0.8556],
        [ 0.5851,  0.9229, -0.2094,  0.0444,  0.3789],
        [-1.7424,  0.0966, -0.3040, -2.2399,  0.1730],
        [ 0.9268, -1.5848,  0.3625,  0.8362, -0.4173]])

The above examples return tensors whose shapes match the given size arguments, with all elements drawn at random from the standard normal distribution. Also check the torch.rand function, which returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1).

Function 3: torch.arange

torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

The torch.arange function returns a 1-D tensor with a sequence of evenly spaced values within the given range [start, end). It requires at least one argument, end; the default values of the start and step arguments are 0 and 1 respectively.

This function is similar to the numpy.arange function.

Returns a 1-D tensor of 10 evenly spaced values with step size 1, all integers. The range excludes the end point, so the last element in this example is 9 (= 10 - 1).
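The original example was an image; it can be sketched as:

```python
import torch

# Only the end argument is given; start defaults to 0, step to 1
t = torch.arange(10)
print(t)        # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(t.dtype)  # torch.int64 -- inferred from the integer arguments
```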

Returns a 1-D tensor of evenly spaced values from 2 up to (but not including) 30. Observe that the tensor elements are rounded, since the given data type is an integer, and that the spacing between consecutive elements no longer matches the given step size.

In this example the data type is changed to float, and now the spacing between consecutive elements exactly matches the given step size of 2.5.
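The float-dtype case can be sketched as follows (the start/end values are illustrative). With a fractional step and no explicit dtype, a floating-point dtype is inferred and each element is exactly step apart:

```python
import torch

t = torch.arange(2, 30, 2.5)
print(t)
# tensor([ 2.0000,  4.5000,  7.0000,  9.5000, 12.0000, 14.5000,
#         17.0000, 19.5000, 22.0000, 24.5000, 27.0000, 29.5000])
print(t.dtype)  # torch.float32
```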

Function 4: torch.flatten

torch.flatten(input, start_dim=0, end_dim=-1) → Tensor

This function flattens the input (a multi-dimensional tensor) into a one-dimensional tensor, or flattens only a contiguous range of its dimensions. It behaves like a combination of the torch.reshape and torch.squeeze functions.

It requires at least one argument, a tensor. The start and end dimensions are optional, with default values 0 and -1 respectively.

This example flattens an input tensor of size (3, 2) into a 1-D tensor of size (6), which equals the number of elements in the input tensor.
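A sketch of this example (the values are illustrative):

```python
import torch

t = torch.tensor([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)
flat = torch.flatten(t)
print(flat)        # tensor([1, 2, 3, 4, 5, 6])
print(flat.shape)  # torch.Size([6])
```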

The first example shows the whole input tensor flattened to a 1-D tensor; however, it is also possible to flatten only specific dimensions of the input tensor, as shown in the second example.

An input tensor of size (2, 2, 3) is flattened to a size of (2, 6).
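A sketch of the partial-flatten case, keeping the first dimension intact:

```python
import torch

t = torch.arange(12).reshape(2, 2, 3)   # shape (2, 2, 3)
flat = torch.flatten(t, start_dim=1)    # flatten everything after dim 0
print(flat.shape)  # torch.Size([2, 6])
```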

Function 5: matrix multiplication (torch.mm / torch.mv)

torch.mm(input, mat2, out=None) → Tensor

torch.mv(input, vec, out=None) → Tensor

torch.mm performs a matrix multiplication of two matrices; torch.mv performs a matrix-vector product of a matrix and a vector.

These functions do not broadcast.

To perform matrix multiplication, the number of columns in matrix_1 (input) must equal the number of rows in matrix_2 (mat2).


This example returns a tensor of size (3, 2): matrix_1 is a (3 x 3) tensor and matrix_2 is a (3 x 2) tensor, so their matrix product is a (3 x 2) tensor.

This example returns a tensor of size (3): matrix_1 is a (3 x 2) tensor and vec is a 1-D tensor of size 2, so the matrix-vector product is a 1-D tensor of size 3.
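The two examples above can be sketched as follows, with random values and shapes matching the description:

```python
import torch

# Matrix x matrix: (3, 3) @ (3, 2) -> (3, 2)
m1 = torch.randn(3, 3)
m2 = torch.randn(3, 2)
print(torch.mm(m1, m2).shape)   # torch.Size([3, 2])

# Matrix x vector: (3, 2) @ (2,) -> (3,)
m3 = torch.randn(3, 2)
v = torch.randn(2)
print(torch.mv(m3, v).shape)    # torch.Size([3])
```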

Function 6: torch.argmax

torch.argmax(input, dim=None, keepdim=False) → LongTensor

This function returns the index of the maximum value of all elements in the input tensor. To get the index of the minimum value, check the torch.argmin() function.

This example returns the index of the maximum value of the input tensor. The maximum value of the input tensor is 100 and its index is 3.
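A sketch of this example, with 100 placed at index 3 as described:

```python
import torch

t = torch.tensor([10, 25, 7, 100, 42])
print(torch.argmax(t))  # tensor(3) -- 100 sits at index 3
```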

This example returns the indices of the maximum values of the input tensor along each row, and the output is a 1-D tensor.
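The row-wise case can be sketched with the dim argument (the values are illustrative):

```python
import torch

t = torch.tensor([[1, 9, 3],
                  [7, 2, 8]])
# dim=1 gives the index of the maximum within each row
print(torch.argmax(t, dim=1))  # tensor([1, 2])
```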

Function 7: torch.Tensor.item

item() → number

Returns the value of the tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist().

# Example 1 - working
t1 = torch.tensor([100])
print(t1)
t1.item()
##Output##
tensor([100])
100
# Example 2 - working
t2 = torch.tensor([[99]])
print(t2)
t2.item()
##Output##
tensor([[99]])
99

In both cases, .item() returns a standard Python number from the given tensor input, regardless of the tensor’s number of dimensions, as long as it contains exactly one element.

Conclusion

We have discussed a few basic tensor functions; with these, we can start experimenting with tensors in PyTorch. To explore more functions like these on your own, visit the official PyTorch documentation. To learn more about Deep Learning with PyTorch, visit this site.

References:

  1. https://jovian.ml/vineel369/pytorch-zero-to-gan-01-basic-tensor-operations
  2. PyTorch official documentation
  3. https://deeplizard.com/learn/video/fCVuiW9AFzY

I hope you enjoyed reading and learned something new. I’m eager to see some constructive feedback on my first blog post, and please let me know about any interesting functions you have explored. Any questions, happy to help!