# How to perform in-place operations in PyTorch?

In-place operations directly change the content of a tensor without making a copy of it. Since no copy of the input is created, they reduce memory usage, which is especially helpful when working with high-dimensional data, as it leaves more GPU memory free.

In PyTorch, in-place operations are always suffixed with an underscore ("_"), such as add_(), mul_(), etc.
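As a quick illustration of this naming convention, the snippet below (a minimal sketch, not part of the original examples) contrasts add(), which returns a new tensor, with add_(), which modifies its receiver:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])

# out-of-place: add() returns a new tensor; t is unchanged
u = t.add(10)

# in-place: add_() updates t itself and returns it
t.add_(10)

print(t)  # tensor([11., 12., 13.])
print(torch.equal(t, u))  # True: both now hold the same values
```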

## Steps

To perform an in-place operation, one could follow the steps given below −

• Import the required library. The required library is torch.

• Define/create tensors on which in-place operation is to be performed.

• Perform both normal and in-place operations to see the clear difference between them.

• Display the tensors obtained in normal and in-place operations.

## Example 1

The following Python program highlights the difference between a normal addition and an in-place addition. In the in-place addition, the value of the first operand "x" is changed, while in the normal addition it remains unchanged.

# import the required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("x =", x.item())
print("y =", y.item())

# normal addition returns a new tensor; x is left unchanged
z = x.add(y)

# in-place addition modifies x itself
x.add_(y)
print("In-place Addition x:", x.item())

## Output

x = 4
y = 3
In-place Addition x: 7

In the above program, two tensors x and y are added. In the normal addition operation, the value of x is not changed, but in the in-place addition operation, it is.

## Example 2

The following Python program shows how the normal addition and in-place addition operations are different in terms of memory allocation.

# import the required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("id(x)=", id(x))

# in-place addition returns the same tensor object, so z is x
z = x.add_(y)
print("In-place Addition id(z):", id(z))
## Output

id(x)= 63366656
In-place Addition id(z): 63366656

Since the in-place addition returns the same tensor object, id(z) equals id(x), confirming that no new tensor was allocated. (The exact id values will differ from run to run.)
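Because id() only compares Python object identity, another way to confirm that no new storage was allocated is Tensor.data_ptr(), which returns the address of a tensor's underlying buffer. The sketch below is an extra check beyond the original example:

```python
import torch

x = torch.tensor([4.0])
y = torch.tensor([3.0])

# normal addition allocates new storage for the result
z1 = x.add(y)
print(x.data_ptr() == z1.data_ptr())  # False: separate buffers

# in-place addition writes into x's storage and returns x itself
z2 = x.add_(y)
print(x.data_ptr() == z2.data_ptr())  # True: same buffer
print(z2 is x)                        # True: same Python object
```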