Found 134 Articles for PyTorch

228 Views
We use the torch.log2() method to compute the logarithm to the base 2 of the elements of a tensor. It returns a new tensor with the logarithm values of the elements of the original input tensor. It takes a tensor as the input parameter and outputs a tensor.

Syntax
torch.log2(input)
where input is a PyTorch tensor. It returns a new tensor with logarithm base 2 values.

Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a tensor and print it.
tensor1 = torch.rand(5, 3)
print("Tensor:", tensor1)
Compute torch.log2(input) and optionally assign this value to a new variable. Here, input is the created tensor.
logb2 = ... Read More
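The steps above can be sketched as a minimal, self-contained example. The tensor values here are chosen for illustration (exact powers of two, so the results are exact):

```python
import torch

# A small tensor of positive values (log2 is undefined for values <= 0)
tensor1 = torch.tensor([1.0, 2.0, 8.0])

# Element-wise base-2 logarithm; returns a new tensor
logb2 = torch.log2(tensor1)
print(logb2)  # tensor([0., 1., 3.])
```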

8K+ Views
Tensor.detach() is used to detach a tensor from the current computational graph. It returns a new tensor that doesn't require a gradient. When we don't need a tensor to be traced for gradient computation, we detach the tensor from the current computational graph. We also need to detach a tensor when we need to move the tensor from GPU to CPU.

Syntax
Tensor.detach()
It returns a new tensor without requires_grad = True. The gradient with respect to this tensor will no longer be computed.

Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a PyTorch tensor with requires_grad = True and ... Read More
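A minimal sketch of detaching a tensor from the graph; the values are illustrative:

```python
import torch

# A tensor tracked by autograd
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2          # y is part of the computational graph

z = y.detach()     # z shares data with y but is not tracked
print(z.requires_grad)  # False
```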

9K+ Views
To compute gradients, a tensor must have its parameter requires_grad = True. The gradients are the same as the partial derivatives. For example, in the function y = 2*x + 1, x is a tensor with requires_grad = True. We can compute the gradients using the y.backward() function, and the gradient can be accessed using x.grad. Here, the value of x.grad is the same as the partial derivative of y with respect to x. If the tensor x is without requires_grad, then the gradient is None. We can define a function of multiple variables; here, the variables are PyTorch tensors.

Steps
We can use the ... Read More
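The y = 2*x + 1 example from the text can be written out directly; dy/dx = 2 regardless of the value of x:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = 2 * x + 1

y.backward()      # compute dy/dx and accumulate it into x.grad
print(x.grad)     # tensor(2.)
```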

283 Views
torch.logical_xor() computes the element-wise logical XOR of the two given input tensors. In a tensor, elements with zero values are treated as False and non-zero elements are treated as True. It takes two tensors as input parameters and returns a tensor with values after computing the logical XOR.

Syntax
torch.logical_xor(tensor1, tensor2)
where tensor1 and tensor2 are the two input tensors.

Steps
To compute the element-wise logical XOR of the given input tensors, one could follow the steps given below −
Import the torch library. Make sure you have it already installed.
Create two tensors, tensor1 and tensor2, and print the tensors.
Compute torch.logical_xor(tensor1, tensor2) and assign the value to ... Read More
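The steps above can be sketched as follows; the values are chosen so that each of the four truth-table cases appears once:

```python
import torch

tensor1 = torch.tensor([0, 1, 2, 0])   # 0 -> False, non-zero -> True
tensor2 = torch.tensor([0, 0, 3, 5])

# Element-wise XOR: True where exactly one input is non-zero
result = torch.logical_xor(tensor1, tensor2)
print(result)  # tensor([False,  True, False,  True])
```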

407 Views
The torch.narrow() method is used to perform a narrowing operation on a PyTorch tensor. It returns a new tensor that is a narrowed version of the original input tensor. For example, a tensor of size [4, 3] can be narrowed to a tensor of size [2, 3] or [4, 2]. We can narrow a tensor along only a single dimension at a time; we cannot narrow both dimensions at once to a size of [2, 2]. We can also use Tensor.narrow() to narrow a tensor.

Syntax
torch.narrow(input, dim, start, length)
Tensor.narrow(dim, start, length)

Parameters
input – the PyTorch tensor to narrow.
dim – the dimension along ... Read More
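The [4, 3] example from the text, narrowed along each dimension in turn (one dimension per call):

```python
import torch

t = torch.arange(12).reshape(4, 3)      # size [4, 3]

# Keep 2 rows starting at row 1 -> size [2, 3]
rows = torch.narrow(t, 0, 1, 2)

# Keep 2 columns starting at column 0 -> size [4, 2]
cols = t.narrow(1, 0, 2)
```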

2K+ Views
The torch.permute() method is used to perform a permute operation on a PyTorch tensor. It returns a view of the input tensor with its dimensions permuted; it doesn't make a copy of the original tensor. For example, a tensor with dimensions [2, 3] can be permuted to [3, 2]. We can also permute a tensor using Tensor.permute().

Syntax
torch.permute(input, dims)

Parameters
input – a PyTorch tensor.
dims – a tuple of the desired ordering of dimensions.

Steps
Import the torch library. Make sure you have it already installed.
import torch
Create a PyTorch tensor and print the tensor and the size of the tensor.
t = torch.tensor([[1, 2], [3, 4], [5, 6]])
print("Tensor:", ... Read More
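Continuing the snippet's own tensor, a minimal sketch of swapping the two dimensions:

```python
import torch

t = torch.tensor([[1, 2], [3, 4], [5, 6]])   # size [3, 2]

# Reorder dimensions: dim 1 first, then dim 0 -> size [2, 3]
p = torch.permute(t, (1, 0))
print(p)  # tensor([[1, 3, 5], [2, 4, 6]])
```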

2K+ Views
The Tensor.expand() method is used to perform an expand operation. It expands the tensor to new dimensions along a singleton dimension. Expanding a tensor only creates a new view of the original tensor; it doesn't make a copy of the original tensor. If you set a particular dimension as -1, the tensor will not be expanded along that dimension. For example, if we have a tensor of size (3, 1), we can expand this tensor along the dimension of size 1.

Steps
To expand a tensor, one could follow the steps given below −
Import the torch library. Make sure you have already installed it.
import torch
Define a tensor ... Read More
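The (3, 1) example from the text, sketched with both an explicit size and the -1 shorthand:

```python
import torch

t = torch.tensor([[1], [2], [3]])   # size (3, 1)

# Expand the singleton dimension (dim 1) to size 4; a view, not a copy
e = t.expand(3, 4)

# -1 leaves dim 0 unchanged; equivalent to the call above
e2 = t.expand(-1, 4)
```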

3K+ Views
To create a tensor with gradients, we use the extra parameter requires_grad = True while creating the tensor. requires_grad is a flag that controls whether a tensor requires a gradient or not. Only floating-point and complex dtype tensors can require gradients. If requires_grad is False, then the tensor is the same as a tensor created without the requires_grad parameter.

Syntax
torch.tensor(value, requires_grad = True)

Parameters
value – the tensor data, user-defined or randomly generated.
requires_grad – a flag; if True, the tensor is included in the gradient computation.

Output
It returns a tensor with requires_grad set to True.

Steps
Import the required library. The required library is torch.
Define a tensor with requires_grad = True.
Display the ... Read More
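The steps above in a minimal sketch; note the floating-point dtype, since integer tensors cannot require gradients:

```python
import torch

# requires_grad=True marks the tensor for gradient computation
t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(t.requires_grad)  # True
```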

137 Views
The element-wise remainder when a tensor is divided by another tensor is computed using the torch.remainder() method. We can also apply torch.fmod() to find the remainder. The difference between these two methods is that in torch.remainder(), when the sign of the result is different from the sign of the divisor, the divisor is added to the result, whereas in torch.fmod() it is not.

Syntax
torch.remainder(input, other)
torch.fmod(input, other)

Parameters
input – a PyTorch tensor or a scalar, the dividend.
other – a PyTorch tensor or a scalar, the divisor.

Output
It returns a tensor of element-wise remainder values.

Steps
Import the torch library.
Define the tensors, the dividend and the ... Read More
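A minimal sketch that makes the sign difference between the two methods visible; the operands are chosen so the dividend and divisor have opposite signs:

```python
import torch

a = torch.tensor([-3.0, 3.0])   # dividends
b = torch.tensor([2.0, -2.0])   # divisors

r = torch.remainder(a, b)   # result takes the sign of the divisor
f = torch.fmod(a, b)        # result takes the sign of the dividend
print(r)  # tensor([ 1., -1.])
print(f)  # tensor([-1.,  1.])
```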

609 Views
torch.linalg.svd() computes the singular value decomposition (SVD) of a matrix or a batch of matrices. The singular value decomposition is returned as a named tuple (U, S, Vh). U and Vh are orthogonal for a real input matrix and unitary for a complex input matrix. Vh is the transpose of V when V is real-valued and the conjugate transpose when V is complex. S is always real-valued, even when the input is complex.

Syntax
U, S, Vh = torch.linalg.svd(A, full_matrices=True)

Parameters
A – a PyTorch tensor (a matrix or a batch of matrices).
full_matrices – if True, the output is a full SVD; otherwise, a reduced SVD. Default is True.

Output
It returns a named tuple ... Read More
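A minimal sketch for a real 4x3 matrix, checking the full-SVD output shapes and reconstructing A from the factors (the input values are random and purely illustrative):

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 3)

# Full SVD: U is 4x4, S holds min(4, 3) = 3 singular values, Vh is 3x3
U, S, Vh = torch.linalg.svd(A, full_matrices=True)

# Reconstruct A using the first 3 columns of U
recon = U[:, :3] @ torch.diag(S) @ Vh
```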