Articles by Shahid Akhtar Khan
How to perform element-wise division on tensors in PyTorch?
To perform element-wise division on two tensors in PyTorch, we can use the torch.div() method. It divides each element of the first input tensor by the corresponding element of the second tensor. We can also divide a tensor by a scalar. A tensor can be divided by another tensor with the same or a broadcast-compatible shape; the final tensor takes the shape of the higher-dimensional tensor. For example, if we divide a 1D tensor by a 2D tensor, the final tensor will be a 2D tensor. Syntax: torch.div(input, other, *, rounding_mode=None, out=None). Parameters: ...
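A minimal sketch of the behavior described above; the tensor values are illustrative, not from the original article:

```python
import torch

# A 2D tensor divided element-wise by a 1D tensor: broadcasting
# expands the 1D tensor across the rows of the 2D tensor.
a = torch.tensor([[8., 6.], [4., 2.]])
b = torch.tensor([2., 2.])

print(torch.div(a, b))   # tensor([[4., 3.], [2., 1.]])
print(torch.div(a, 2))   # division by a scalar

# rounding_mode='floor' rounds each quotient toward negative infinity.
print(torch.div(a, b, rounding_mode='floor'))
```

The `/` operator is equivalent to `torch.div()` for tensors, so `a / b` gives the same result.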
How to perform element-wise subtraction on tensors in PyTorch?
To perform element-wise subtraction on tensors, we can use the torch.sub() method of PyTorch. The corresponding elements of the tensors are subtracted. We can subtract a scalar or a tensor from another tensor with the same or a different shape; the final tensor takes the shape of the higher-dimensional tensor, following PyTorch's broadcasting rules. Syntax: torch.sub(input, other, *, alpha=1, out=None). Parameters: input − the tensor to be subtracted from; other − the tensor or scalar to subtract; alpha − the multiplier for other (default: 1); out − the ...
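A short sketch of torch.sub(), including the alpha multiplier mentioned in the parameter list; the values are illustrative:

```python
import torch

t1 = torch.tensor([10., 20., 30.])
t2 = torch.tensor([1., 2., 3.])

print(torch.sub(t1, t2))           # tensor([ 9., 18., 27.])
print(torch.sub(t1, 5))            # subtract a scalar from every element

# alpha scales `other` before subtracting: computes t1 - alpha * t2.
print(torch.sub(t1, t2, alpha=2))  # tensor([ 8., 16., 24.])
```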
How to perform element-wise addition on tensors in PyTorch?
We can use torch.add() to perform element-wise addition on tensors in PyTorch. It adds the corresponding elements of the tensors. We can add a scalar or a tensor to another tensor, with the same or different dimensions. The dimension of the final tensor will be the same as the dimension of the higher-dimensional tensor. Steps Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already installed it. Define two or more PyTorch tensors and print them. If you want to add a scalar quantity, ...
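The steps above can be sketched as follows; the tensor values are made up for illustration:

```python
import torch

# Define two tensors of different dimensions.
t1 = torch.tensor([1., 2., 3.])                    # 1D
t2 = torch.tensor([[10., 20., 30.], [40., 50., 60.]])  # 2D

print(torch.add(t1, 5))   # add a scalar → tensor([6., 7., 8.])

# Adding a 1D tensor to a 2D tensor broadcasts the 1D tensor
# across the rows; the result is 2D.
print(torch.add(t2, t1))  # tensor([[11., 22., 33.], [41., 52., 63.]])
```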
How to resize a tensor in PyTorch?
To resize a PyTorch tensor, we use the .view() method. We can increase or decrease the number of dimensions of the tensor, but the total number of elements must match before and after the resize. Steps Import the required library. In all the following Python examples, the required Python library is torch. Make sure you have already installed it. Create a PyTorch tensor and print it. Resize the above-created tensor using .view() and assign the value to a variable. .view() does not resize the ...
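A minimal sketch of the element-count rule described above:

```python
import torch

t = torch.arange(12)      # 12 elements: tensor([0, 1, ..., 11])

print(t.view(3, 4))       # OK: 3 * 4 == 12
print(t.view(2, -1))      # -1 lets PyTorch infer the size → shape (2, 6)

# t.view(5, 3) would raise a RuntimeError: 5 * 3 != 12 elements.
```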
How to join tensors in PyTorch?
PyTorch provides two main methods to join tensors: torch.cat() and torch.stack(). The key difference is that torch.cat() concatenates tensors along an existing dimension, while torch.stack() creates a new dimension for joining. Key Differences torch.cat() concatenates tensors along an existing dimension without changing the number of dimensions. torch.stack() stacks tensors along a new dimension, increasing the tensor dimensionality by one. Using torch.cat() with 1D Tensors Let's start by concatenating 1D tensors − import torch # Create 1D tensors t1 = torch.tensor([1, 2, 3, 4]) t2 = torch.tensor([0, 3, 4, 1]) t3 = ...
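The cat/stack contrast above, completed as a runnable sketch using the same t1 and t2 values as the truncated snippet:

```python
import torch

t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([0, 3, 4, 1])

# cat joins along an existing dimension: the result is still 1D.
print(torch.cat((t1, t2)))    # tensor([1, 2, 3, 4, 0, 3, 4, 1])

# stack creates a new dimension: two 1D tensors become one 2D tensor.
print(torch.stack((t1, t2)))  # shape (2, 4)
```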
How to access the metadata of a tensor in PyTorch?
In PyTorch, tensor metadata includes essential information like size, shape, data type, and device location. The most commonly accessed metadata are the tensor's dimensions and total number of elements. Key Metadata Properties PyTorch tensors provide several ways to access metadata: .size() − Returns the dimensions as a torch.Size object .shape − Returns the same dimensions as .size() torch.numel() − Returns the total number of elements .dtype − Returns the data type .device − Returns the device (CPU/GPU) Example 1: 2D Tensor Metadata import torch # Create a 4x3 tensor T = ...
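The properties listed above can be demonstrated with a small example; the 4x3 shape follows the teaser's Example 1:

```python
import torch

T = torch.rand(4, 3)   # a 4x3 tensor of random values

print(T.size())        # torch.Size([4, 3])
print(T.shape)         # same as .size()
print(torch.numel(T))  # 12 elements in total
print(T.dtype)         # torch.float32 (the default float dtype)
print(T.device)        # cpu, unless the tensor was moved to a GPU
```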
How to convert a NumPy ndarray to a PyTorch Tensor and vice versa?
A PyTorch tensor is similar to a numpy.ndarray; the key difference is that a tensor can utilize a GPU to accelerate numeric computation. We convert a numpy.ndarray to a PyTorch tensor using the torch.from_numpy() function, and a tensor to a numpy.ndarray using the .numpy() method. Steps Import the required libraries. Here, the required libraries are torch and numpy. Create a numpy.ndarray or a PyTorch tensor. Convert the numpy.ndarray to a PyTorch tensor using the torch.from_numpy() function, or convert the PyTorch tensor to a numpy.ndarray using the .numpy() method. ...
How to access and modify the values of a Tensor in PyTorch?
We use indexing and slicing to access the values of a tensor. Indexing accesses the value of a single element of the tensor, whereas slicing accesses the values of a sequence of elements. We use the assignment operator to modify the values of a tensor: assigning to an indexed or sliced position overwrites those elements in place. Steps Import the required libraries. Here, the required library is torch. Define a PyTorch tensor. Access the value of a single element ...
How to convert an image to a PyTorch Tensor?
PyTorch tensors are n-dimensional arrays that can leverage GPU acceleration for faster computations. Converting images to tensors is essential for deep learning tasks in PyTorch, as it allows the framework to process image data efficiently on both CPU and GPU. To convert an image to a PyTorch tensor, we use transforms.ToTensor() which automatically handles scaling pixel values from [0, 255] to [0, 1] and changes the dimension order from HxWxC (Height x Width x Channels) to CxHxW (Channels x Height x Width). Method 1: Converting PIL Images The most common approach is using PIL (Python Imaging Library) ...
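transforms.ToTensor() comes from torchvision; to keep this sketch dependency-free, the same two steps (dimension reorder and [0, 1] scaling) are shown manually on a random array standing in for image pixel data:

```python
import numpy as np
import torch

# Stand-in for decoded image data: HxWxC uint8 pixels in [0, 255].
img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)

# transforms.ToTensor() performs these steps automatically:
# permute HxWxC → CxHxW, then scale uint8 [0, 255] → float [0, 1].
t = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0

print(t.shape)   # torch.Size([3, 8, 8]) → CxHxW
```

With torchvision installed, `transforms.ToTensor()(pil_image)` produces the same CxHxW float tensor directly from a PIL image.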
How to move a Torch Tensor from CPU to GPU and vice versa?
A torch tensor defined on the CPU can be moved to the GPU and vice versa. For high-dimensional tensor computation, the GPU uses the power of parallel computing to reduce the compute time. High-dimensional tensors such as images are highly computation-intensive and take too much time if run on the CPU, so we need to move such tensors to the GPU. Syntax To move a torch tensor from CPU to GPU, the following syntaxes are used − Tensor.to("cuda:0") or Tensor.to("cuda"), and Tensor.cuda(). To move a torch tensor from GPU to CPU, the following syntaxes are used − Tensor.to("cpu") and Tensor.cpu(). Let's take a couple of examples to demonstrate how a tensor can be ...