How to measure the mean absolute error (MAE) in PyTorch?
Mean absolute error is computed as the mean of the absolute differences between the input and target (predicted and actual) values. To compute the mean absolute error in PyTorch, we apply the L1Loss() function provided by the torch.nn module. It creates a criterion that measures the mean absolute error.
Both the actual and predicted values are torch tensors with the same number of elements. The tensors may have any number of dimensions. This criterion returns a scalar tensor. It is one of the loss functions provided by the torch.nn module; loss functions are used to optimize a deep neural network by minimizing the loss.
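As a quick sanity check of the definition above, the following sketch computes the MAE directly from the absolute differences and compares it against the L1Loss() criterion (the tensor values here are made up for illustration):

```python
import torch
import torch.nn as nn

# fixed tensors so the result is reproducible
input = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
target = torch.tensor([[1.5, 2.0], [2.0, 6.0]])

# MAE computed directly from its definition:
# mean of the absolute differences
manual_mae = (input - target).abs().mean()

# MAE computed with the L1Loss criterion
criterion = nn.L1Loss()
loss = criterion(input, target)

print(manual_mae)   # tensor(0.8750)
print(loss)         # tensor(0.8750)
```

Both computations produce the same scalar tensor, confirming that L1Loss() is exactly the mean of the element-wise absolute differences.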
Syntax
torch.nn.L1Loss()
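L1Loss() also accepts a reduction argument ('mean' by default) that controls how the element-wise absolute differences are aggregated. A short sketch of the three options, using small made-up tensors:

```python
import torch
import torch.nn as nn

input = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([2.0, 2.0, 5.0])
# element-wise absolute differences: 1, 0, 2

mean_loss = nn.L1Loss()(input, target)                   # mean of the differences
sum_loss = nn.L1Loss(reduction='sum')(input, target)     # sum of the differences
per_element = nn.L1Loss(reduction='none')(input, target) # no reduction at all

print(mean_loss)    # tensor(1.)
print(sum_loss)     # tensor(3.)
print(per_element)  # tensor([1., 0., 2.])
```

With reduction='none', the returned tensor has the same shape as the inputs instead of being a scalar.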
Steps
To measure the mean absolute error, one could follow the steps given below.
Import the required libraries. In all the following examples, the required Python library is torch, and the loss function comes from its torch.nn module. Make sure you have already installed torch.
import torch
import torch.nn as nn
Create the input and target tensors and print them.
input = torch.randn(3, 4)
target = torch.randn(3, 4)
Create a criterion to measure the mean absolute error.
mae = nn.L1Loss()
Compute the mean absolute error (loss) and print it.
output = mae(input, target)
print("MAE loss:", output)
Example 1
In this Python code, we compute the mean absolute error loss between 2-dimensional input and target tensors.
# Import the required libraries
import torch
import torch.nn as nn

# define the input and target tensors
input = torch.randn(3, 4)
target = torch.randn(3, 4)

# print input and target tensors
print("Input Tensor:\n", input)
print("Target Tensor:\n", target)

# create a criterion to measure the mean absolute error
mae = nn.L1Loss()

# compute the loss (mean absolute error)
output = mae(input, target)
# output.backward()
print("MAE loss:", output)
Output
Input Tensor:
 tensor([[-0.3743, -1.3795,  0.7910, -0.8501],
        [-0.4872,  0.3542, -1.1613,  0.2766],
        [-0.0343,  0.6158,  1.5640, -1.5776]])
Target Tensor:
 tensor([[-0.1976, -0.5571,  0.0576, -0.6701],
        [ 0.3859, -0.4046, -1.3166,  0.0288],
        [ 0.7254,  0.5169,  0.2227,  0.9585]])
MAE loss: tensor(0.7236)
Notice that the input and target tensors are 2D tensors, whereas the mean absolute error is a scalar value.
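Because the loss is returned as a zero-dimensional tensor, the plain Python number can be extracted with .item() when needed, for example for logging. A minimal sketch:

```python
import torch
import torch.nn as nn

input = torch.randn(3, 4)
target = torch.randn(3, 4)

output = nn.L1Loss()(input, target)
print(output.dim())    # 0 -- a zero-dimensional (scalar) tensor
value = output.item()  # extract the value as a plain Python float
print(type(value))
```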
Example 2
In this example, we again compute the mean absolute error loss between 2-dimensional input and target tensors. Here, the input tensor is created with requires_grad=True so that we can also calculate the gradients of the loss with respect to the input values.
# Import the required libraries
import torch
import torch.nn as nn

# define the input and target tensors
input = torch.randn(3, 4, requires_grad = True)
target = torch.randn(3, 4)

# print input and target tensors
print("Input Tensor:\n", input)
print("Target Tensor:\n", target)

# create a criterion to measure the mean absolute error
loss = nn.L1Loss()

# compute the loss (mean absolute error)
output = loss(input, target)
output.backward()
print("MAE loss:", output)
print("input.grad:\n", input.grad)
Output
Input Tensor:
 tensor([[-1.5325,  0.9718,  0.8848, -2.3685],
        [ 0.1574, -0.5296, -0.1587, -0.6423],
        [-1.9586,  0.6249, -1.1507, -1.7188]], requires_grad=True)
Target Tensor:
 tensor([[-0.2213, -1.0928,  0.1864,  0.6496],
        [ 0.9031,  1.3741, -0.9058, -2.0849],
        [-0.7316, -0.9297, -1.4479,  0.9797]])
MAE loss: tensor(1.4757, grad_fn=<L1LossBackward>)
input.grad:
 tensor([[-0.0833,  0.0833,  0.0833, -0.0833],
        [-0.0833, -0.0833,  0.0833,  0.0833],
        [-0.0833,  0.0833,  0.0833, -0.0833]])
Notice that in the above output, the mean absolute error tensor carries a grad function (L1LossBackward), and the gradient tensor has the same size as the input tensor.
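The gradient values above are no accident: for MAE, the derivative of the loss with respect to each input element is sign(input − target) divided by the number of elements N (here 1/12 ≈ 0.0833). The following sketch, with made-up fixed tensors, checks this against autograd:

```python
import torch
import torch.nn as nn

input = torch.tensor([[1.0, 5.0], [2.0, 0.0]], requires_grad=True)
target = torch.tensor([[3.0, 1.0], [2.5, -1.0]])

loss = nn.L1Loss()(input, target)
loss.backward()

# d(MAE)/d(input) = sign(input - target) / N, where N = number of elements
expected = torch.sign(input.detach() - target) / input.numel()

print(input.grad)  # tensor([[-0.2500,  0.2500], [-0.2500,  0.2500]])
print(expected)    # tensor([[-0.2500,  0.2500], [-0.2500,  0.2500]])
```

Every gradient entry has magnitude 1/N; only its sign depends on whether the prediction is above or below the target.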