# How to perform in-place operations in PyTorch?

**In-place** operations directly change the contents of a tensor without making
a copy of it. Because no copy of the input is created, they reduce memory usage,
which is especially helpful when working with high-dimensional data or limited
GPU memory.

In PyTorch, **in-place** operations are always post-fixed with an underscore ("_"), like add_(), mul_(), etc.
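
As a minimal sketch (not part of the original tutorial), a few other in-place operations that follow the same underscore convention can be chained on one tensor:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])

# Each in-place op (suffixed with "_") modifies t directly, without a copy
t.add_(5.0)   # t becomes [6., 7., 8.]
t.mul_(2.0)   # t becomes [12., 14., 16.]
print(t)      # tensor([12., 14., 16.])
```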

## Steps

To perform an in-place operation, one could follow the steps given below −

- Import the required library. The required library is torch.
- Define/create the tensors on which the **in-place** operation is to be performed.
- Perform both the **normal** and **in-place** operations to see the clear difference between them.
- Display the tensors obtained from the normal and **in-place** operations.

## Example 1

The following Python program highlights the difference between a normal addition and an **in-place** addition. In the in-place addition, the value of the first operand "x" is changed, while in the **normal addition** it remains unchanged.

```python
# import required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("x=", x.item())
print("y=", y.item())

# Normal addition
z = x.add(y)
print("Normal Addition x:", x.item())

# In-place addition
z = x.add_(y)
print("In-place Addition x:", x.item())
```

## Output

```
x= 4
y= 3
Normal Addition x: 4
In-place Addition x: 7
```

In the above program, two tensors x and y are added. The normal addition operation does not change the value of x, but the in-place addition operation does.
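
As a related aside (not in the original example), Python's augmented assignment `x += y` on a tensor also performs an in-place addition, so it behaves like add_() rather than creating a new tensor:

```python
import torch

x = torch.tensor(4)
y = torch.tensor(3)

before = id(x)
x += y                   # for tensors, += adds in place
print(x.item())          # 7
print(id(x) == before)   # True: x was modified in place, not replaced
```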

## Example 2

The following Python program shows how the normal addition and in-place addition operations are different in terms of memory allocation.

```python
# import required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("id(x)=", id(x))

# Normal addition
z = x.add(y)
print("Normal Addition id(z):", id(z))

# In-place addition
z = x.add_(y)
print("In-place Addition id(z):", id(z))
```

## Output

```
id(x)= 63366656
Normal Addition id(z): 63366080
In-place Addition id(z): 63366656
```

In the above program, the normal operation allocates a new memory location for "z", whereas the **in-place** operation does not allocate new memory — it returns the same tensor "x", so the two ids match. Note that the exact id values will differ from run to run; only the equality matters.
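
As a supplementary check (not in the original program), the same distinction can be seen by comparing the tensor objects directly: a normal operation returns a new tensor, while an in-place operation returns the very same tensor object:

```python
import torch

x = torch.tensor(4)
y = torch.tensor(3)

z1 = x.add(y)    # normal: a new tensor is created
z2 = x.add_(y)   # in-place: x itself is returned

print(z1 is x)   # False - z1 lives in newly allocated memory
print(z2 is x)   # True - z2 and x are the same object
```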
