5 Useful Tensor Operations in PyTorch

Original article was published on Deep Learning on Medium


This article discusses a few useful tensor operations that help in matrix manipulation, conditional operations, and reshaping techniques.

This article explains the notebook I wrote as a part of the 6-week free certification course, “Deep Learning with PyTorch: Zero to GANs” taught by Aakash N S and offered by freecodecamp.org.

Introduction to PyTorch

Source: Analytics India Magazine

PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab.

All of deep learning is computation on tensors, which are generalizations of matrices that can be indexed in more than two dimensions. Let us take a brief look at how to create tensors in PyTorch and perform some basic operations.

Importing PyTorch

The PyTorch library can be imported in a single line

Importing libraries
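The original notebook cell is not shown here, but the single-line import looks like this:

```python
# PyTorch ships as the `torch` package; one import gives access to tensors
import torch

print(torch.__version__)  # confirm the installation works
```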

Now, let’s look at some torch tensor functions.

1) torch.narrow

torch.narrow(input, dim, start, length) → Tensor

The narrow method returns a narrowed version of the original tensor, i.e., it slices the tensor along a given dimension using the dim, start, and length parameters.

Instead of slicing matrices in the traditional way using :, we can use the narrow method by defining the dim (dimension) along which to slice, the start index, and the length (width) of the slice. Here, the ending index is end = start + length - 1.

Example 1:

Here, m is a 3-D tensor with values sampled from a normal distribution with mean 0 and variance 1.

The tensor m is narrowed (sliced) along dim = 1, starting at index start = 2 and ending at start + length = 2 + 1 = 3, with the ending index exclusive. One important thing to notice is that for a 3-D tensor the dim value ranges from -3 to 2, both inclusive.
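A sketch of this example (the 3 x 3 x 3 shape is an assumption, since the original cell is not shown):

```python
import torch

m = torch.randn(3, 3, 3)  # 3-D tensor sampled from N(0, 1)

# Slice along dim=1, starting at index 2, taking 1 element;
# the ending index start + length = 3 is exclusive
sliced = torch.narrow(m, 1, 2, 1)
print(sliced.shape)  # torch.Size([3, 1, 3])

# Equivalent to traditional slicing with `:`
print(torch.equal(sliced, m[:, 2:3, :]))  # True
```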

Example 2:

The same narrow method can be called on the tensor m using the dot operator .; it works the same as mentioned above. I would suggest trying the narrow method with different dimension values to get a better idea of how the slicing works. If start + length exceeds the size of that dimension, it throws a RuntimeError.
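Both the method form and the error case can be sketched as follows (the 3 x 3 x 3 shape is assumed):

```python
import torch

m = torch.randn(3, 3, 3)

# Calling narrow via the dot operator is equivalent to torch.narrow
a = m.narrow(1, 2, 1)
b = torch.narrow(m, 1, 2, 1)
print(torch.equal(a, b))  # True

# start + length exceeding the size of the dimension raises a RuntimeError
try:
    m.narrow(1, 2, 2)  # 2 + 2 = 4 exceeds size 3 along dim 1
except RuntimeError as err:
    print("RuntimeError:", err)
```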

The narrow function can be used to slice larger matrices/tensors without having to write the index ranges directly using :.

2) torch.matmul

torch.matmul(input, other, out=None) → Tensor

The matmul method returns the matrix product of the input tensor and the other tensor, provided the dimensions are suitable for multiplication. Otherwise, it throws a RuntimeError.

Remember from basic linear algebra: to find the matrix product of two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.

Example 1:

The tensors m and n are of dimensions [3,2] and [2,2] respectively. Since the number of columns of m and rows of n match, matrix multiplication is performed.
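A sketch of this example (random values; only the [3, 2] and [2, 2] shapes come from the text):

```python
import torch

m = torch.randn(3, 2)
n = torch.randn(2, 2)

# Inner dimensions match (2 == 2), so the product is defined: [3,2] x [2,2] -> [3,2]
p = torch.matmul(m, n)
print(p.shape)  # torch.Size([3, 2])

# Swapping the order makes the inner dimensions 2 and 3, which raises a RuntimeError
try:
    torch.matmul(n, m)
except RuntimeError as err:
    print("RuntimeError:", err)
```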

Example 2:

The matrix multiplication is successful because the above-mentioned condition is met.

3) torch.where

torch.where(condition, x, y) → Tensor

The where method returns a tensor containing elements from x and y based on the condition provided. Where the condition is satisfied, the value from x is taken; otherwise, the value from y is taken.

Example 1:

The condition provided is m < 0. The condition is applied to every element of the tensor m. Where the condition is satisfied, the value from m is taken; otherwise, the value from n is taken.
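A sketch of this example with assumed values (the original tensors are not shown; here n is all zeros):

```python
import torch

m = torch.tensor([[-1.0, 2.0], [3.0, -4.0]])
n = torch.zeros(2, 2)

# Where m < 0, keep the value from m; elsewhere take the value from n
out = torch.where(m < 0, m, n)
print(out)  # negative entries of m survive; the rest become 0
```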

Example 2:

The condition m % 2 == 0 is checked on the tensor m. When a value in m is even, the value from m is taken; otherwise the value is taken from n, which is always 1.
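A sketch with assumed integer values, where n is a tensor of ones matching m in shape and datatype:

```python
import torch

m = torch.tensor([[2, 3], [4, 5]])
n = torch.ones(2, 2, dtype=torch.long)  # same shape and dtype as m

# Even values are kept from m; odd positions fall back to the 1s in n
out = torch.where(m % 2 == 0, m, n)
print(out)  # values: [[2, 1], [4, 1]]
```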

For the where method to succeed, the condition must be a boolean tensor, and the two tensors from which values are taken must have matching shapes and the same datatype; otherwise, an error occurs.

The torch.where method can be used to perform conditional operations on torch tensors and pick values accordingly, for example to apply a thresholding filter to a matrix.

4) torch.all

torch.all(input, dim, keepdim=False, out=None) → Tensor

The all method returns a boolean tensor indicating whether all of the values along the given dim of a tensor are True. dim = 1 reduces along the rows and dim = 0 along the columns.

Example 1:

The all method checks whether all of the values in each column (dim = 0) are True. If even a single value in a column is False, the output for that column is False; otherwise it is True.
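A sketch with an assumed boolean matrix (the original one is not shown):

```python
import torch

m = torch.tensor([[True, False], [True, True]])

# dim=0 reduces down each column: column 0 is all True, column 1 is not
print(m.all(dim=0))  # values: [True, False]
```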

Example 2:

Here, the all method checks whether all of the values in each row (dim = 1) of the boolean matrix m are True and outputs the result.
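The row-wise version, using the same assumed matrix:

```python
import torch

m = torch.tensor([[True, False], [True, True]])

# dim=1 reduces across each row: row 0 contains a False, row 1 is all True
print(m.all(dim=1))  # values: [False, True]
```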

For the all method to work on a tensor this way, it must be a boolean tensor; otherwise, it throws an error.

The all method is typically used with boolean tensors. Generally, data encountered in data science problems may contain NaN (missing) values. To find out whether a tensor has NaN values in it or not, we can use this method.
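One common idiom for this (a sketch, not necessarily the author's notebook code) exploits the fact that NaN is not equal to itself:

```python
import torch

t = torch.tensor([1.0, float("nan"), 3.0])

# NaN != NaN, so elementwise self-equality is False exactly at NaN positions
no_nans = (t == t).all()
print(no_nans.item())  # False, because t contains a NaN
```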

5) torch.squeeze

torch.squeeze(input, dim=None, out=None) → Tensor

The squeeze method returns a tensor with all the dimensions of input that have size 1 removed. The dim argument can be used to restrict the squeezing to a single dimension.

Example 1:

Previously, the dimensions of m are [1, 6]. After squeezing, the dimensions of size 1 are removed to output a tensor of shape [6].
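A sketch of this example (random values; only the [1, 6] shape comes from the text):

```python
import torch

m = torch.randn(1, 6)
print(m.shape)                 # torch.Size([1, 6])

# All size-1 dimensions are removed
print(torch.squeeze(m).shape)  # torch.Size([6])
```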

Example 2:

Here, the squeezing is done along the dimension 0 which results in a tensor of size [6,1] from [1,6,1].
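A sketch of the dim-restricted case (only the [1, 6, 1] shape comes from the text):

```python
import torch

m = torch.randn(1, 6, 1)

# Squeezing only dim 0 removes just that size-1 dimension: [1,6,1] -> [6,1]
print(m.squeeze(0).shape)  # torch.Size([6, 1])

# Squeezing with no dim removes every size-1 dimension: [1,6,1] -> [6]
print(m.squeeze().shape)   # torch.Size([6])
```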

The squeeze method can be used to reduce unnecessary dimensions of a tensor.


Here, in this article we discussed some basic operations that can be performed on tensors. We have seen how to slice tensors using narrow, calculate matrix products, perform conditional operations, and modify tensor sizes.


For further information, I suggest visiting the official documentation on torch.Tensor. Check out my notebook on jovian.ml containing the code for all the functions mentioned above, along with some cases where you might encounter errors.

I hope this article helped you in gaining some knowledge of PyTorch tensor operations. Connect with me on LinkedIn.