How to Copy a Tensor in PyTorch?

PyTorch is a popular Python library for machine learning and deep learning, developed by Meta AI (formerly Facebook AI). When working with tensors, you often need to create copies for different purposes. PyTorch provides several methods to copy tensors, each with distinct behaviors regarding memory sharing and gradient tracking.

Using clone() Method

The clone() method creates a deep copy of a tensor that preserves the computational graph. This is ideal when you need an independent copy that still supports gradient computation:

import torch

# Original tensor
original_tensor = torch.tensor([11, 12, 13])

# Create a deep copy using clone()
copied_tensor = original_tensor.clone()

# Modify the copied tensor
copied_tensor[0] = 20

print("Original Tensor:", original_tensor)
print("Copied Tensor:", copied_tensor)
Output

Original Tensor: tensor([11, 12, 13])
Copied Tensor: tensor([20, 12, 13])

Time Complexity: O(n), where n is the number of elements in the tensor.

Space Complexity: O(n), as a new memory block is allocated.
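Because clone() keeps the copy in the computational graph, gradients computed on the clone flow back to the original tensor, while the data itself lives in separate storage. A minimal sketch (variable names are illustrative):

```python
import torch

# Leaf tensor that tracks gradients
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# clone() allocates new storage but keeps the autograd link
y = x.clone()

# Backpropagating through the clone populates x.grad
y.sum().backward()
print(x.grad)                          # tensor([1., 1., 1.])
print(x.data_ptr() == y.data_ptr())    # False: separate storage
```

This is why clone() is the right choice mid-training: you get an independent buffer without cutting the gradient path.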

Using detach() Method

The detach() method creates a tensor that shares data with the original but removes it from the computational graph. This is useful for inference when you don't need gradient tracking:

import torch

# Tensor with gradients enabled
original_tensor = torch.tensor([11.0, 12.0, 13.0], requires_grad=True)

# Create a lightweight copy using detach()
copied_tensor = original_tensor.detach()

# Modify copied tensor (affects original since they share memory)
copied_tensor[0] = 20

print("Original Tensor:", original_tensor)
print("Copied Tensor:", copied_tensor)
Output

Original Tensor: tensor([20., 12., 13.], requires_grad=True)
Copied Tensor: tensor([20., 12., 13.])

Time Complexity: O(1), as no new memory is allocated.

Space Complexity: O(1), as the copied tensor shares memory with the original.
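You can verify both properties directly: the detached tensor points at the same underlying storage, and its requires_grad flag is dropped. A quick check (names are illustrative):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
d = x.detach()

# Same underlying storage: no data was copied
print(x.data_ptr() == d.data_ptr())    # True: shared memory
# Detached from the graph: no gradient tracking
print(d.requires_grad)                 # False
```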

Using copy.deepcopy() Method

The copy.deepcopy() method creates completely independent copies, including nested structures. This is useful for complex data structures containing multiple tensors:

import torch
import copy

# Dictionary containing tensors
original_data = {
    'tensor_a': torch.tensor([11, 12, 13]),
    'tensor_b': torch.tensor([14, 15, 16])
}

# Create a deep copy using deepcopy()
copied_data = copy.deepcopy(original_data)

# Modify the copied data
copied_data['tensor_a'][0] = 20

print("Original Data:", original_data)
print("Copied Data:", copied_data)
Output

Original Data: {'tensor_a': tensor([11, 12, 13]), 'tensor_b': tensor([14, 15, 16])}
Copied Data: {'tensor_a': tensor([20, 12, 13]), 'tensor_b': tensor([14, 15, 16])}

Time Complexity: O(n), where n is the total number of elements across all tensors.

Space Complexity: O(n), as new memory blocks are created for each tensor.
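Unlike detach(), copy.deepcopy() preserves the requires_grad flag while still allocating fresh storage for each tensor. A quick check (variable names are illustrative):

```python
import torch
import copy

t = torch.tensor([1.0, 2.0], requires_grad=True)
c = copy.deepcopy(t)

# The gradient flag survives the deep copy
print(c.requires_grad)                 # True
# But the data lives in new, independent storage
print(t.data_ptr() == c.data_ptr())    # False
```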

Comparison

Method       Memory Sharing   Gradient Tracking   Best Use Case
clone()      No               Preserved           Deep learning with gradients
detach()     Yes              Removed             Inference and analysis
deepcopy()   No               Preserved           Complex nested structures
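When you need a copy that is both independent in memory and free of the computational graph, the first two methods are commonly chained as detach().clone(). A sketch of this pattern (names are illustrative):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Independent storage, no gradient tracking
snapshot = x.detach().clone()
snapshot[0] = 99.0                     # safe: does not touch x

print(x)                               # original is unchanged
print(snapshot.requires_grad)          # False
```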

Conclusion

Use clone() for independent copies with gradient support, detach() for memory-efficient inference copies, and deepcopy() for complex nested tensor structures. Choose the method based on your memory and gradient requirements.

Updated on: 2026-03-27T16:48:10+05:30
