How to apply rectified linear unit function element-wise in PyTorch?
To apply the rectified linear unit (ReLU) function element-wise on an input tensor, we use torch.nn.ReLU(). It computes ReLU(x) = max(0, x) for each element x, i.e., it replaces all the negative elements in the input tensor with 0 (zero) and leaves the non-negative elements unchanged. It supports only real-valued input tensors. ReLU is commonly used as an activation function in neural networks.
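If you don't need a module object, the same operation is also available through PyTorch's functional API, torch.nn.functional.relu(). A minimal sketch (the tensor values below are arbitrary, chosen only for illustration) −

import torch
import torch.nn.functional as F

# An arbitrary tensor containing both negative and non-negative values
x = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])

# Functional form; equivalent to torch.nn.ReLU()(x)
y = F.relu(x)
print(y)   # tensor([0.0000, 0.0000, 0.0000, 1.5000, 3.0000])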
Syntax
relu = torch.nn.ReLU()
output = relu(input)
Steps
You could use the following steps to apply the rectified linear unit (ReLU) function element-wise −
Import the required library. In all the following examples, the required Python library is torch. Make sure you have already installed it.
import torch
import torch.nn as nn
Define an input tensor and print it.
input = torch.randn(2, 3)
print("Input Tensor:\n", input)
Define a ReLU function relu using torch.nn.ReLU().
relu = torch.nn.ReLU()
Apply the above-defined ReLU function relu to the input tensor, and optionally assign the output to a new variable.
output = relu(input)
Print the tensor containing ReLU function values.
print("ReLU Tensor:
",output)
Let's take a couple of examples to have a better understanding of how it works.
Example 1
# Import the required libraries
import torch
import torch.nn as nn

# Define a ReLU function
relu = torch.nn.ReLU()

# Define an input tensor
input = torch.tensor([[-1., 8., 1., 13., 9.],
   [ 0., 1., 0., 5., -5.],
   [ 3., -5., 8., -1., 5.],
   [ 0., 3., -1., 13., 12.]])
print("Input Tensor:\n", input)
print("Size of Input Tensor:\n", input.size())

# Compute the rectified linear unit (ReLU) function element-wise
output = relu(input)
print("ReLU Tensor:\n", output)
print("Size of ReLU Tensor:\n", output.size())
Output
Input Tensor:
 tensor([[-1., 8., 1., 13., 9.],
   [ 0., 1., 0., 5., -5.],
   [ 3., -5., 8., -1., 5.],
   [ 0., 3., -1., 13., 12.]])
Size of Input Tensor:
 torch.Size([4, 5])
ReLU Tensor:
 tensor([[ 0., 8., 1., 13., 9.],
   [ 0., 1., 0., 5., 0.],
   [ 3., 0., 8., 0., 5.],
   [ 0., 3., 0., 13., 12.]])
Size of ReLU Tensor:
 torch.Size([4, 5])
In the above example, notice that the negative elements in the input tensor are replaced with zero in the output tensor.
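Because ReLU is simply max(0, x), you can cross-check the result with torch.clamp() using a lower bound of zero; the two produce identical tensors. A small sketch of this check, reusing two rows of the input above −

import torch
import torch.nn as nn

relu = nn.ReLU()
input = torch.tensor([[-1., 8., 1., 13., 9.],
   [ 0., 1., 0., 5., -5.]])

# clamp(min=0) performs the same element-wise operation as ReLU
print(torch.equal(relu(input), input.clamp(min=0)))   # True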
Example 2
# Import the required libraries
import torch
import torch.nn as nn

# Define a ReLU function with inplace=True
relu = torch.nn.ReLU(inplace=True)

# Define an input tensor
input = torch.randn(4, 5)
print("Input Tensor:\n", input)
print("Size of Input Tensor:\n", input.size())

# Compute the rectified linear unit (ReLU) function element-wise
output = relu(input)
print("ReLU Tensor:\n", output)
print("Size of ReLU Tensor:\n", output.size())
Output
Input Tensor:
 tensor([[ 0.4217, 0.4151, 1.3292, -1.3835, -0.0086],
   [-0.7693, -1.7736, -0.3401, -0.7179, -0.0196],
   [ 1.0918, -0.9426, 2.1496, -0.4809, -1.2254],
   [-0.3198, -0.2231, 1.2043, 1.1222, 0.7905]])
Size of Input Tensor:
 torch.Size([4, 5])
ReLU Tensor:
 tensor([[0.4217, 0.4151, 1.3292, 0.0000, 0.0000],
   [0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
   [1.0918, 0.0000, 2.1496, 0.0000, 0.0000],
   [0.0000, 0.0000, 1.2043, 1.1222, 0.7905]])
Size of ReLU Tensor:
 torch.Size([4, 5])
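Notice that this example constructs the module with inplace=True. In that mode, ReLU overwrites the input tensor's memory instead of allocating a new tensor, so after the call the input itself holds the rectified values. This saves memory, but avoid it when you still need the original values, for example for other computations or autograd. A minimal sketch of the difference −

import torch
import torch.nn as nn

x = torch.tensor([-1.0, 2.0, -3.0])
relu_inplace = nn.ReLU(inplace=True)
y = relu_inplace(x)

# x itself has been modified; y shares the same underlying storage
print(x)   # tensor([0., 2., 0.])
print(y.data_ptr() == x.data_ptr())   # True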
