Original article was published on Deep Learning on Medium
5 Useful Tensor Operations in PyTorch
This article discusses a few useful tensor operations that help in matrix manipulation, conditional operations, and reshaping techniques.
Introduction to PyTorch
PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab.
Deep learning is, at its core, computation on tensors, which are generalizations of matrices that can be indexed in more than 2 dimensions. Let us have a brief intro to how to create tensors in PyTorch and some basic operations.
The PyTorch library can be imported in a single line
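A minimal sketch of the import (the version print is just a quick sanity check):

```python
import torch

# Confirm the library loaded correctly
print(torch.__version__)
```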
Now, let’s look at some torch tensor functions.
torch.narrow(input, dim, start, length) → Tensor
The narrow method returns a narrowed version of the original tensor, i.e., it is used to slice a tensor along a chosen dimension. Instead of slicing matrices in the traditional way using the : operator, we can use the narrow method by defining dim, the dimension along which to slice, the start index, and the length (width) of the sliced matrix. Here, the ending index would be end = start + length - 1.
m is a 3-D tensor with values sampled from a normal distribution with mean 0 and variance 1.
m is narrowed (sliced) along dimension 1, starting from index position 2 and ending at 3, with the ending index exclusive. One important thing to notice is that, for a 3-D tensor, the dimension value ranges from -3 to 2, both inclusive.
The narrow method can also be called on the tensor m itself using the dot operator, as m.narrow(dim, start, length); it works the same as mentioned above. I would suggest trying the narrow method with different dimension values to get a better idea of how the slicing works. If start + length exceeds the size of the tensor along that dimension, it throws a RuntimeError. The narrow function can be used to slice larger matrices/tensors without having to write out the index ranges for every dimension using the : operator.
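A minimal sketch of narrow in action; the tensor shape and indices here are illustrative, not the exact ones from the original notebook:

```python
import torch

# A 3-D tensor with values sampled from a standard normal distribution
m = torch.randn(3, 3, 3)

# Slice along dimension 1, starting at index 1, keeping 2 elements
sliced = torch.narrow(m, 1, 1, 2)
print(sliced.shape)  # torch.Size([3, 2, 3])

# Equivalent method-call form using the dot operator
same = m.narrow(1, 1, 2)

# Equivalent traditional slicing with the : operator
print(torch.equal(sliced, m[:, 1:3, :]))  # True
```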
torch.matmul(input, other, out=None) → Tensor
The matmul function returns the matrix product of the input tensor and the other tensor, provided their dimensions are suitable for multiplication. Otherwise, it throws a RuntimeError.
Remember from basic linear algebra: to find the matrix product of two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
Here, m and n are both of dimensions [2,2]. Since the number of columns of m equals the number of rows of n, matrix multiplication is performed.
The matrix multiplication is successful because the above-mentioned condition is met.
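A minimal sketch with two illustrative 2x2 matrices, plus the shape-mismatch failure case:

```python
import torch

# Hypothetical 2x2 matrices; columns of m equal rows of n, so matmul is valid
m = torch.tensor([[1., 2.], [3., 4.]])
n = torch.tensor([[5., 6.], [7., 8.]])

product = torch.matmul(m, n)
print(product)
# tensor([[19., 22.],
#         [43., 50.]])

# Incompatible shapes throw a RuntimeError
try:
    torch.matmul(m, torch.randn(3, 2))
except RuntimeError as e:
    print("shape mismatch:", e)
```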
torch.where(condition, x, y) → Tensor
The where method returns a tensor containing elements taken from either x or y based on the condition provided: where the condition is satisfied, the value is taken from x; elsewhere, it is taken from y.
The condition provided is m < 0, and it is applied to every element of the tensor m. Where the condition is satisfied, the value from m is taken; elsewhere, the value from n is taken.
Here, the condition m % 2 == 0 is checked on the tensor m. Where the value in m is even, the value from m is taken; elsewhere, the value from n is taken.
For the where method to succeed, the condition must evaluate to a boolean tensor, and the datatypes of the two tensors from which values are taken must be the same; otherwise, an error occurs.
torch.where can be used to perform conditional operations on torch tensors and pick values accordingly, for example when we want to apply a thresholding filter to a matrix.
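A minimal sketch of both conditions described above; the values of m and n are illustrative:

```python
import torch

m = torch.tensor([[-1, 2], [3, -4]])
n = torch.zeros_like(m)  # same dtype as m, as where requires

# Keep values of m where m < 0, otherwise take from n
neg = torch.where(m < 0, m, n)
print(neg)
# tensor([[-1,  0],
#         [ 0, -4]])

# Keep even values of m, otherwise take from n
even = torch.where(m % 2 == 0, m, n)
print(even)
# tensor([[ 0,  2],
#         [ 0, -4]])
```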
torch.all(input, dim, keepdim=False, out=None) → Tensor
The all method returns a boolean result indicating whether all of the values along the given dim of a tensor are True or not. dim = 1 reduces along the rows and dim = 0 along the columns.
Here, the all method checks whether all of the values in each column (dim = 0) are True or not; if even a single value in a column is False, the output for that column is False. Similarly, with dim = 1 the all method checks whether all the values along each row of the boolean matrix m are True, and outputs the result.
For the all method to work on a tensor, it must be a boolean tensor; otherwise, it throws an error. Hence, the all method is always used with boolean tensors. Generally, while dealing with data science problems, the data may contain NaN (missing) values. Combined with torch.isnan, this method can be used to find out whether a tensor has NaN values in it or not.
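A minimal sketch of all over a boolean matrix, plus the NaN check mentioned above; the example values are illustrative:

```python
import torch

b = torch.tensor([[True, False], [True, True]])

print(b.all())           # tensor(False) -- not every element is True
col = b.all(dim=0)       # reduce down each column
row = b.all(dim=1)       # reduce across each row
print(col)               # tensor([ True, False])
print(row)               # tensor([False,  True])

# Checking a tensor for NaN values: all elements are non-NaN only if
# (~torch.isnan(x)).all() is True
x = torch.tensor([1.0, float("nan"), 3.0])
print((~torch.isnan(x)).all())  # tensor(False) -- x contains a NaN
```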
torch.squeeze(input, dim=None, out=None) → Tensor
The squeeze method returns a tensor with all the size-1 dimensions of the input removed. The dim attribute can be used to restrict the squeezing to a single dimension.
Previously, the dimensions of the tensor were [1,6]. After squeezing, the dimensions whose size is 1 are removed, producing a tensor of shape [6]. Here, the squeezing is done along dimension 0, which results in a tensor of size [6].
squeeze method can be used to reduce unnecessary dimensions of a tensor.
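A minimal sketch using a [1,6] tensor, matching the shapes discussed above:

```python
import torch

m = torch.randn(1, 6)
print(m.shape)                         # torch.Size([1, 6])

# Remove all size-1 dimensions
print(torch.squeeze(m).shape)          # torch.Size([6])

# Squeeze only along dimension 0 (size 1, so it is removed)
print(torch.squeeze(m, dim=0).shape)   # torch.Size([6])

# Dimension 1 has size 6, so nothing is removed
print(torch.squeeze(m, dim=1).shape)   # torch.Size([1, 6])
```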
In this article, we discussed some basic operations that can be performed on tensors. We have seen how to slice tensors using narrow, calculate matrix products, perform conditional operations, and modify tensor shapes.
For further information, I suggest visiting the official documentation for torch.tensor. Check out my notebook on jovian.ml containing the code for all the functions mentioned above, along with some cases where you might encounter errors.
I hope this article helped you in gaining some knowledge of PyTorch tensor operations. Connect with me on LinkedIn.