Difference Between Tensor and Variable in PyTorch

PyTorch is an open-source Python library used for machine learning, computer vision, and deep learning. It is an excellent library for building neural networks, performing complex computations, and running gradient-based optimization via automatic differentiation.

Developed by Facebook's AI Research lab (FAIR), it gained popularity due to its dynamic computation graphs, which are built and can change at runtime. This set it apart when it was released in 2016, since most frameworks at the time relied on static graphs that had to be fully defined before execution.

There are two main data structures to understand in PyTorch: Tensor and Variable. Let's explore their differences and evolution.

What is a Tensor?

A Tensor is an n-dimensional array used in mathematical operations. Tensors can represent scalars, vectors, matrices, and higher-dimensional arrays, and they offer functionality similar to NumPy arrays or TensorFlow tensors.

PyTorch tensors support automatic differentiation, which is instrumental for backpropagation in neural networks. They are the primary data structure for storing and manipulating data in PyTorch.
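As a quick sketch of these ideas, tensors of different ranks can be built directly from Python lists, and they interoperate with NumPy arrays −

```python
import torch
import numpy as np

# A scalar (0-d), a vector (1-d), and a matrix (2-d) tensor
scalar = torch.tensor(3.14)
vector = torch.tensor([1.0, 2.0, 3.0])
matrix = torch.tensor([[1, 2], [3, 4]])
print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2

# Tensors can share memory with NumPy arrays
arr = np.array([10, 20, 30])
from_numpy = torch.from_numpy(arr)
print(from_numpy)
```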

Example

Here's how to create a basic tensor −

import torch

data = [1, 2, 3, 4, 5]
tensor = torch.tensor(data)
print(tensor)
print(f"Shape: {tensor.shape}")
print(f"Data type: {tensor.dtype}")
Output

tensor([1, 2, 3, 4, 5])
Shape: torch.Size([5])
Data type: torch.int64

What is a Variable?

Variables were the primary wrapper around tensors in older PyTorch versions (before 0.4.0). They were used to wrap tensors and provide automatic differentiation and computational graph tracking. Variables were essential for gradient-based optimization algorithms.

However, after PyTorch 0.4.0, Variables were deprecated. Their functionalities were merged directly into Tensors, eliminating the need for a separate wrapper class.

Example (Legacy Code)

This example shows how Variables were used in older PyTorch versions −

import torch
from torch.autograd import Variable

data = [1, 2, 3, 4, 5]
var = Variable(torch.tensor(data, dtype=torch.float32), requires_grad=True)
print(var)
Output

tensor([1., 2., 3., 4., 5.], requires_grad=True)
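Note that in current PyTorch releases the Variable constructor still exists for backward compatibility, but it is only a thin shim: wrapping a tensor simply returns an ordinary Tensor. A quick way to verify this −

```python
import torch
from torch.autograd import Variable

t = torch.tensor([1.0, 2.0, 3.0])
v = Variable(t, requires_grad=True)

# The "Variable" is actually just a Tensor in disguise
print(type(v))                       # <class 'torch.Tensor'>
print(isinstance(v, torch.Tensor))   # True
print(v.requires_grad)               # True
```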

Modern Tensor with Gradient Computation

In modern PyTorch, tensors can directly handle gradient computation without Variables −

import torch

# Create tensors with gradient computation enabled
x = torch.tensor(6.0, requires_grad=True)
y = torch.tensor(9.0, requires_grad=True)

# Perform computation
result = x + y
print(f"Result: {result}")

# Compute gradients
result.backward()
print(f"Gradient of x: {x.grad}")
print(f"Gradient of y: {y.grad}")
Output

Result: tensor(15., grad_fn=<AddBackward0>)
Gradient of x: tensor(1.)
Gradient of y: tensor(1.)
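Addition always yields gradients of 1, so a multiplication makes the chain rule more visible: for result = x * y, the gradient of result with respect to x is y, and vice versa −

```python
import torch

x = torch.tensor(6.0, requires_grad=True)
y = torch.tensor(9.0, requires_grad=True)

# d(x*y)/dx = y and d(x*y)/dy = x
result = x * y
result.backward()

print(x.grad)  # tensor(9.)
print(y.grad)  # tensor(6.)
```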

GPU Acceleration with Tensors

Modern tensors can be easily moved to a GPU for faster computation −

import torch

# Check if CUDA is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using device: {device}")

# Create tensors
t1 = torch.tensor(6.0)
t2 = torch.tensor(9.0)

# Move to GPU if available
t1 = t1.to(device)
t2 = t2.to(device)

result = t1 + t2
print(f"Result: {result}")
print(f"Device: {result.device}")
Output

Using device: cpu
Result: tensor(15.)
Device: cpu
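As an alternative to moving tensors with .to(), a tensor can be allocated directly on the target device at creation time, which avoids a separate transfer step −

```python
import torch

# Pick the best available device, then allocate on it directly
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = torch.tensor([1.0, 2.0, 3.0], device=device)

print(t.device)
```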

Key Differences Summary

Aspect           | Variable (Deprecated)  | Modern Tensor
PyTorch Version  | < 0.4.0                | >= 0.4.0
Purpose          | Wrapper around tensors | Direct data structure
Gradient Support | Required wrapper       | Built-in with requires_grad
Current Status   | Deprecated             | Actively used

Conclusion

Variables were deprecated in PyTorch 0.4.0, and their functionality was merged into Tensors. Modern PyTorch uses tensors with requires_grad=True for automatic differentiation, making the API simpler and more intuitive.

Updated on: 2026-03-27T11:54:02+05:30
