Programming Articles - Page 829 of 3363

How to move a Torch Tensor from CPU to GPU and vice versa?

Shahid Akhtar Khan
Updated on 06-Nov-2023 03:42:20

26K+ Views

A torch tensor defined on the CPU can be moved to the GPU and vice versa. For high-dimensional tensor computation, the GPU utilizes the power of parallel computing to reduce the compute time. High-dimensional tensors such as images are highly computation-intensive and take too much time if run on the CPU, so we need to move such tensors to the GPU.
Syntax
To move a torch tensor from CPU to GPU, the following syntaxes are used −
Tensor.to("cuda:0") or Tensor.to(cuda), where cuda is a torch.device object
And, Tensor.cuda()
To move a torch tensor from GPU to CPU, the following syntaxes are used −
Tensor.to("cpu")
And, Tensor.cpu()
Let's take a couple of examples to demonstrate how a tensor can be ... Read More
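As a quick illustrative sketch (the tensor values are my own, and the "cuda:0" device is only assumed to exist, hence the availability check):

import torch

t = torch.randn(2, 3)              # created on the CPU by default
if torch.cuda.is_available():      # guard so the sketch also runs on CPU-only machines
    t_gpu = t.to("cuda:0")         # or t.cuda()
    print(t_gpu.device)            # cuda:0
    t_cpu = t_gpu.to("cpu")        # or t_gpu.cpu()
    print(t_cpu.device)            # cpu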

How to get the rank of a matrix in PyTorch?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:43:25

2K+ Views

The rank of a matrix can be obtained using torch.linalg.matrix_rank(). It takes a matrix or a batch of matrices as the input and returns a tensor with the rank value(s) of the matrices. The torch.linalg module provides us with many linear algebra operations.
Syntax
torch.linalg.matrix_rank(input)
where input is the 2D tensor/matrix or a batch of matrices.
Steps
We could use the following steps to get the rank of a matrix or a batch of matrices −
Import the torch library. Make sure you have it already installed.
import torch
Create a 2D tensor/matrix or a batch of matrices and print it.
t = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
print("Tensor:", t)
Compute the rank ... Read More
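A minimal sketch using the same example matrix as the teaser (its two rows are linearly independent, so the rank is 2):

import torch

t = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
rank = torch.linalg.matrix_rank(t)
print("Rank:", rank)   # Rank: tensor(2)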

How to normalize a tensor in PyTorch?

Shahid Akhtar Khan
Updated on 31-Oct-2023 03:57:21

32K+ Views

A tensor in PyTorch can be normalized using the normalize() function provided in the torch.nn.functional module. It performs Lp normalization of a given tensor over a specified dimension and returns a tensor containing the normalized values of the elements of the original tensor. A 1D tensor can be normalized over dimension 0, whereas a 2D tensor can be normalized over both dimensions 0 and 1, i.e., column-wise or row-wise. An n-dimensional tensor can be normalized over any of the dimensions (0, 1, 2, ..., n-1).
Syntax
torch.nn.functional.normalize(input, p=2.0, dim=1)
Parameters
input – Input tensor
p – Power (exponent) value in the norm formulation
dim – Dimension over which ... Read More
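A short sketch of row-wise versus column-wise L2 normalization (the example matrix is my own, not from the full article):

import torch
import torch.nn.functional as F

t = torch.tensor([[3., 4.], [6., 8.]])
rows = F.normalize(t, p=2.0, dim=1)   # each row divided by its L2 norm
cols = F.normalize(t, p=2.0, dim=0)   # each column divided by its L2 norm
print(rows)   # tensor([[0.6000, 0.8000], [0.6000, 0.8000]])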

PyTorch – How to get the exponents of tensor elements?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:32:17

1K+ Views

To find the exponential of the elements of an input tensor, we can apply Tensor.exp() or torch.exp(input). Here, input is the input tensor for which the exponentials are computed. Both the methods return a new tensor with the exponential values of the elements of the input tensor.
Syntax
Tensor.exp()
or
torch.exp(input)
Steps
We could use the following steps to compute the exponentials of the elements of an input tensor −
Import the torch library. Make sure you have it already installed.
import torch
Create a tensor and print it.
t1 = torch.rand(4, 3)
print("Tensor:", t1)
Compute the exponential of the elements of the tensor. For this, use torch.exp(input) and optionally ... Read More
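A minimal sketch with fixed input values (chosen by me so the output is easy to check by hand):

import torch

t1 = torch.tensor([0., 1., 2.])
print(torch.exp(t1))   # tensor([1.0000, 2.7183, 7.3891])
print(t1.exp())        # same result via the Tensor method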

PyTorch – torch.log2() Method

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:28:13

547 Views

We use the torch.log2() method to compute logarithm to the base 2 of the elements of a tensor. It returns a new tensor with the logarithm values of the elements of the original input tensor. It takes a tensor as the input parameter and outputs a tensor.
Syntax
torch.log2(input)
where input is a PyTorch tensor.
It returns a new tensor with logarithm base 2 values.
Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a tensor and print it.
tensor1 = torch.rand(5, 3)
print("Tensor:", tensor1)
Compute torch.log2(input) and optionally assign this value to a new variable. Here, input is the created tensor.
logb2 = ... Read More
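A quick sketch with hand-picked values (powers of two, so the base-2 logarithms are exact):

import torch

tensor1 = torch.tensor([1., 2., 8., 0.5])
logb2 = torch.log2(tensor1)
print(logb2)   # values 0., 1., 3., -1.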

What does Tensor.detach() do in PyTorch?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:24:29

13K+ Views

Tensor.detach() is used to detach a tensor from the current computational graph. It returns a new tensor that doesn't require a gradient. When we don't need a tensor to be traced for the gradient computation, we detach the tensor from the current computational graph. We also need to detach a tensor that requires a gradient before converting it to a NumPy array, for example after moving the tensor from GPU to CPU.
Syntax
Tensor.detach()
It returns a new tensor with requires_grad set to False. The gradient with respect to this tensor will no longer be computed.
Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a PyTorch tensor with requires_grad = True and ... Read More
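A minimal sketch (the tensor and the expression y = x * x are my own illustration):

import torch

x = torch.tensor([2., 3.], requires_grad=True)
y = x * x
z = y.detach()            # shares data with y but is cut from the graph
print(y.requires_grad)    # True
print(z.requires_grad)    # False
print(z.numpy())          # allowed; y.numpy() would raise because y requires grad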

How to compute gradients in PyTorch?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:20:48

13K+ Views

To compute the gradients, a tensor must have its parameter requires_grad = True. The gradients are the same as the partial derivatives. For example, in the function y = 2*x + 1, x is a tensor with requires_grad = True. We can compute the gradients using the y.backward() function, and the gradient can be accessed using x.grad. Here, the value of x.grad is the same as the partial derivative of y with respect to x. If the tensor x is without requires_grad, then the gradient is None. We can define a function of multiple variables. Here, the variables are the PyTorch tensors.
Steps
We can use the ... Read More
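A short sketch of the y = 2*x + 1 example mentioned above (the concrete value x = 2.0 is my own choice):

import torch

x = torch.tensor(2.0, requires_grad=True)
y = 2 * x + 1
y.backward()       # compute dy/dx
print(x.grad)      # tensor(2.), the partial derivative of y with respect to x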

PyTorch – How to compute element-wise logical XOR of tensors?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:13:01

578 Views

torch.logical_xor() computes the element-wise logical XOR of the given two input tensors. In a tensor, the elements with zero values are treated as False and the non-zero elements are treated as True. It takes two tensors as input parameters and returns a tensor with the values obtained after computing the logical XOR.
Syntax
torch.logical_xor(tensor1, tensor2)
where tensor1 and tensor2 are the two input tensors.
Steps
To compute the element-wise logical XOR of the given input tensors, one could follow the steps given below −
Import the torch library. Make sure you have it already installed.
Create two tensors, tensor1 and tensor2, and print the tensors.
Compute torch.logical_xor(tensor1, tensor2) and assign the value to ... Read More
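A minimal sketch (the integer values below are my own; zero counts as False, non-zero as True):

import torch

tensor1 = torch.tensor([0, 1, 2, 0])
tensor2 = torch.tensor([0, 3, 0, 5])
out = torch.logical_xor(tensor1, tensor2)
print(out)   # tensor([False, False,  True,  True])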

How to narrow down a tensor in PyTorch?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:08:19

978 Views

torch.narrow() method is used to perform a narrow operation on a PyTorch tensor. It returns a new tensor that is a narrowed version of the original input tensor. For example, a tensor of size [4, 3] can be narrowed to a tensor of size [2, 3] or [4, 2]. We can narrow down a tensor along a single dimension at a time; we cannot narrow down both dimensions at once to a size of [2, 2]. We can also use Tensor.narrow() to narrow down a tensor.
Syntax
torch.narrow(input, dim, start, length)
Tensor.narrow(dim, start, length)
Parameters
input – It's the PyTorch tensor to narrow.
dim – It's the dimension along ... Read More
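A quick sketch of the [4, 3] example from the teaser (the arange-based tensor is my own illustration):

import torch

t = torch.arange(12).reshape(4, 3)     # size [4, 3]
rows = torch.narrow(t, 0, 1, 2)        # rows 1..2  -> size [2, 3]
cols = t.narrow(1, 0, 2)               # columns 0..1 -> size [4, 2]
print(rows.size(), cols.size())        # torch.Size([2, 3]) torch.Size([4, 2])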

How to perform a permute operation in PyTorch?

Shahid Akhtar Khan
Updated on 06-Dec-2021 11:03:59

3K+ Views

torch.permute() method is used to perform a permute operation on a PyTorch tensor. It returns a view of the input tensor with its dimensions permuted. It doesn't make a copy of the original tensor. For example, a tensor with dimensions [2, 3] can be permuted to [3, 2]. We can also permute a tensor to a new ordering of dimensions using Tensor.permute().
Syntax
torch.permute(input, dims)
Parameters
input – PyTorch tensor.
dims – Tuple of desired dimensions.
Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a PyTorch tensor and print the tensor and the size of the tensor.
t = torch.tensor([[1, 2], [3, 4], [5, 6]])
print("Tensor:", ... Read More
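A minimal sketch using the same tensor as the teaser (note that the top-level torch.permute() function assumes a reasonably recent PyTorch release; the Tensor.permute() method works the same way):

import torch

t = torch.tensor([[1, 2], [3, 4], [5, 6]])   # size [3, 2]
p = torch.permute(t, (1, 0))                 # or t.permute(1, 0); a view of size [2, 3]
print(p.size())                              # torch.Size([2, 3])
print(p)                                     # tensor([[1, 3, 5], [2, 4, 6]])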
