Original article was published on Deep Learning on Medium
A Brief Look at Some of PyTorch’s Tensor Functions
This exploration is part of the free certification course “Deep Learning with PyTorch: Zero to GANs” offered by Jovian.ml. If you want to dive into the world of deep learning, this article might help you a lot, as it covers a first step towards deep learning using PyTorch.
What is Deep Learning?
Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.
Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
What is Pytorch?
PyTorch is an open-source machine learning library for Python, based on Torch, used for applications such as natural language processing. It is primarily developed by Facebook’s artificial-intelligence research group, and Uber’s “Pyro” Probabilistic programming language software is built on it.
PyTorch is comparatively easier to learn than other deep learning frameworks, because its syntax and usage feel similar to conventional Python programming. PyTorch’s documentation is also well organized and helpful for beginners. There are many other reasons for choosing PyTorch over other frameworks; refer to this page for more understanding.
What is a Pytorch Tensor?
A PyTorch Tensor is basically the same as a numpy array: it does not know anything about deep learning or computational graphs or gradients, and is just a generic n-dimensional array to be used for arbitrary numeric computation.
The biggest difference between a numpy array and a PyTorch Tensor is that a PyTorch Tensor can run on either CPU or GPU. To run operations on the GPU, just cast the Tensor to a cuda datatype.
For more details, refer to the official documentation of PyTorch.
Here comes the gist of the article:
I have chosen five tensor functions to explain. They are:
1 — torch.item()
2 — torch.add()
3 — torch.argmin()
4 — torch.Tensor.new_zeros()
5 — torch.equal()
item() → number
The item() function works only for tensors with exactly one element, and it returns that element as a standard Python number. This operation is not differentiable.
Here, the tensor contains only one value, so when item() is called, that single element is returned. The function works not just for int dtypes but also for float dtypes.
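The code cells from the original notebook did not survive; the following is a minimal sketch of item() in use, with values of my own choosing:

```python
import torch

# item() extracts the single value of a one-element tensor
# as a plain Python number (float or int, depending on dtype).
x = torch.tensor([3.5])
print(x.item())  # 3.5

y = torch.tensor(7)
print(y.item())  # 7
```

Calling item() on a tensor with more than one element raises a RuntimeError, since there is no single value to extract.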
torch.add(input, other, out=None)
Adds the scalar other to each element of input and returns a new resulting tensor. If input is of type FloatTensor or DoubleTensor, other must be a real number; otherwise it should be an integer.
- input (Tensor) — the input tensor.
- other (Number) — the number to be added to each element of input.
The add() function can also add two tensors; they may both be of the same size, or one of them may be a scalar value.
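Since the original examples were stripped, here is an illustrative sketch of both uses of torch.add (the values are my own):

```python
import torch

a = torch.tensor([1., 2., 3.])

# Adding a scalar: the value is added to every element.
print(torch.add(a, 10))  # tensor([11., 12., 13.])

# Adding two tensors of the same shape, element-wise.
b = torch.tensor([0.5, 0.5, 0.5])
print(torch.add(a, b))   # tensor([1.5000, 2.5000, 3.5000])
```

The same operations can be written with the `+` operator (`a + 10`, `a + b`), which calls into the same addition routine.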
torch.argmin(input) → LongTensor
Returns the index of the minimum value of all elements in the input tensor.
- input (Tensor) — the input tensor.
Here, the function returned the index of the minimum value among the elements of the input tensor along the given dimension.
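The original example is missing; a minimal sketch (with my own values) showing both the flattened and per-dimension behavior of argmin:

```python
import torch

t = torch.tensor([[4., 1.],
                  [2., 3.]])

# Without a dim argument, argmin treats the tensor as flattened:
# the flattened values are [4., 1., 2., 3.], whose minimum sits at index 1.
print(torch.argmin(t))         # tensor(1)

# With dim=1, it returns the index of the minimum within each row.
print(torch.argmin(t, dim=1))  # tensor([1, 0])
```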
new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor
- size (int…) — a list, tuple, or torch.Size of integers defining the shape of the output tensor.
- dtype (torch.dtype, optional) — the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
- device (torch.device, optional) — the desired device of returned tensor. Default: if None, same torch.device as this tensor.
- requires_grad (bool, optional) — If autograd should record operations on the returned tensor. Default: False.
This example shows how a new tensor of the requested shape is filled with zeros, inheriting its properties from the existing tensor.
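Because the original code cell is missing, here is a small sketch of new_zeros (the shapes and dtype are my own choices) showing how the new tensor inherits dtype and device from the tensor it is called on:

```python
import torch

x = torch.tensor([1.5, 2.5], dtype=torch.float64)

# new_zeros creates a zero-filled tensor of the given shape,
# taking dtype and device from x unless overridden.
z = x.new_zeros((2, 3))
print(z.shape)  # torch.Size([2, 3])
print(z.dtype)  # torch.float64 (inherited from x)
```

This is convenient when you need a scratch tensor that is guaranteed to live on the same device and use the same dtype as an existing one.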
torch.equal(input, other) → bool
Returns True if two tensors have the same size and elements, False otherwise.
- input (Tensor) — the input tensor.
- other (Tensor) — the other input tensor.
Here, equal() returned False since the two tensors are of different sizes.
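A minimal sketch of torch.equal (with my own example values), including the different-size case described above:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2, 3])
c = torch.tensor([1, 2])

print(torch.equal(a, b))  # True  -- same size and same elements
print(torch.equal(a, c))  # False -- sizes differ
```

Note that torch.equal returns a single Python bool, unlike the element-wise comparison `a == b`, which returns a tensor of booleans.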
For more explanation, you can follow the official documentation for torch.Tensor: https://pytorch.org/docs/stable/tensors.html
You can also follow the Medium articles by Jovian.ml about machine learning and deep learning.