How to join tensors in PyTorch?

PyTorch provides two main methods to join tensors: torch.cat() and torch.stack(). The key difference is that torch.cat() concatenates tensors along an existing dimension, while torch.stack() creates a new dimension for joining.

Key Differences

  • torch.cat() concatenates tensors along an existing dimension; the inputs must match in every dimension except the one being concatenated, and the result keeps the same number of dimensions.

  • torch.stack() stacks tensors along a new dimension; all inputs must have exactly the same shape, and the result has one more dimension than the inputs.
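Before diving in, a quick sketch of the shape rules may help (the tensor names and shapes here are illustrative):

```python
import torch

a = torch.ones(2, 3)
b = torch.ones(2, 3)

# cat joins along an existing dimension: sizes add up on that dimension
print(torch.cat((a, b), dim=0).shape)    # torch.Size([4, 3])

# stack inserts a brand-new dimension: the result gains one dimension
print(torch.stack((a, b), dim=0).shape)  # torch.Size([2, 2, 3])
```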

Using torch.cat() with 1D Tensors

Let's start by concatenating 1D tensors:

import torch

# Create 1D tensors
t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([0, 3, 4, 1])
t3 = torch.tensor([4, 3, 2, 5])

print("Original tensors:")
print("t1:", t1)
print("t2:", t2)
print("t3:", t3)

# Concatenate tensors
result = torch.cat((t1, t2, t3))
print("\nConcatenated tensor:")
print("Result:", result)

Output:
Original tensors:
t1: tensor([1, 2, 3, 4])
t2: tensor([0, 3, 4, 1])
t3: tensor([4, 3, 2, 5])

Concatenated tensor:
Result: tensor([1, 2, 3, 4, 0, 3, 4, 1, 4, 3, 2, 5])
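For 1D tensors the concatenated dimension is the only dimension, so the inputs may even have different lengths. A short sketch (values chosen for illustration):

```python
import torch

short = torch.tensor([7, 8])
longer = torch.tensor([9, 10, 11])

# Only the concatenation dimension may differ in size, and for 1D
# tensors that is the sole dimension, so any lengths are allowed.
print(torch.cat((short, longer)))  # tensor([ 7,  8,  9, 10, 11])
```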

Using torch.cat() with 2D Tensors

For 2D tensors, we can concatenate along different dimensions:

import torch

# Create 2D tensors
t1 = torch.tensor([[1, 2], [3, 4]])
t2 = torch.tensor([[0, 3], [4, 1]])
t3 = torch.tensor([[4, 3], [2, 5]])

print("Original tensors:")
print("t1:\n", t1)
print("t2:\n", t2)
print("t3:\n", t3)

# Concatenate along dimension 0 (rows)
result_dim0 = torch.cat((t1, t2, t3), dim=0)
print("\nConcatenating along dimension 0:")
print(result_dim0)

# Concatenate along dimension -1 (columns)
result_dim_neg1 = torch.cat((t1, t2, t3), dim=-1)
print("\nConcatenating along dimension -1:")
print(result_dim_neg1)

Output:
Original tensors:
t1:
 tensor([[1, 2],
        [3, 4]])
t2:
 tensor([[0, 3],
        [4, 1]])
t3:
 tensor([[4, 3],
        [2, 5]])

Concatenating along dimension 0:
tensor([[1, 2],
        [3, 4],
        [0, 3],
        [4, 1],
        [4, 3],
        [2, 5]])

Concatenating along dimension -1:
tensor([[1, 2, 0, 3, 4, 3],
        [3, 4, 4, 1, 2, 5]])
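If the non-concatenated dimensions do not match, torch.cat() raises a RuntimeError. A minimal sketch (shapes chosen purely for illustration):

```python
import torch

a = torch.ones(2, 2)
b = torch.ones(3, 3)

try:
    # dim=0 concatenation requires matching sizes in dim 1 (2 vs. 3 here)
    torch.cat((a, b), dim=0)
except RuntimeError as err:
    print("cat failed:", err)
```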

Using torch.stack() with 1D Tensors

Stacking creates a new dimension, turning 1D tensors into a 2D tensor:

import torch

# Create 1D tensors
t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([0, 3, 4, 1])
t3 = torch.tensor([4, 3, 2, 5])

print("Original tensors:")
print("t1:", t1)
print("t2:", t2)
print("t3:", t3)

# Stack along dimension 0
result_dim0 = torch.stack((t1, t2, t3), dim=0)
print("\nStacking along dimension 0:")
print(result_dim0)

# Stack along dimension -1
result_dim_neg1 = torch.stack((t1, t2, t3), dim=-1)
print("\nStacking along dimension -1:")
print(result_dim_neg1)

Output:
Original tensors:
t1: tensor([1, 2, 3, 4])
t2: tensor([0, 3, 4, 1])
t3: tensor([4, 3, 2, 5])

Stacking along dimension 0:
tensor([[1, 2, 3, 4],
        [0, 3, 4, 1],
        [4, 3, 2, 5]])

Stacking along dimension -1:
tensor([[1, 0, 4],
        [2, 3, 3],
        [3, 4, 2],
        [4, 1, 5]])
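Two relationships worth noting: for 1D inputs, stacking along dim=-1 is the transpose of stacking along dim=0, and torch.stack() behaves like unsqueezing each input and concatenating along the new dimension. A sketch using the same tensors as above:

```python
import torch

t1 = torch.tensor([1, 2, 3, 4])
t2 = torch.tensor([0, 3, 4, 1])

s0 = torch.stack((t1, t2), dim=0)
s1 = torch.stack((t1, t2), dim=-1)

# Stacking 1D tensors along dim=-1 transposes the dim=0 result
print(torch.equal(s1, s0.t()))  # True

# stack is equivalent to unsqueeze + cat along the new dimension
manual = torch.cat((t1.unsqueeze(0), t2.unsqueeze(0)), dim=0)
print(torch.equal(s0, manual))  # True
```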

Using torch.stack() with 2D Tensors

Stacking 2D tensors creates a 3D tensor:

import torch

# Create 2D tensors
t1 = torch.tensor([[1, 2], [3, 4]])
t2 = torch.tensor([[0, 3], [4, 1]])
t3 = torch.tensor([[4, 3], [2, 5]])

print("Original tensors:")
print("t1:\n", t1)
print("t2:\n", t2)
print("t3:\n", t3)

# Stack along dimension 0
result_dim0 = torch.stack((t1, t2, t3), dim=0)
print("\nStacking along dimension 0:")
print("Shape:", result_dim0.shape)
print(result_dim0)

# Stack along dimension -1
result_dim_neg1 = torch.stack((t1, t2, t3), dim=-1)
print("\nStacking along dimension -1:")
print("Shape:", result_dim_neg1.shape)
print(result_dim_neg1)

Output:
Original tensors:
t1:
 tensor([[1, 2],
        [3, 4]])
t2:
 tensor([[0, 3],
        [4, 1]])
t3:
 tensor([[4, 3],
        [2, 5]])

Stacking along dimension 0:
Shape: torch.Size([3, 2, 2])
tensor([[[1, 2],
         [3, 4]],

        [[0, 3],
         [4, 1]],

        [[4, 3],
         [2, 5]]])

Stacking along dimension -1:
Shape: torch.Size([2, 2, 3])
tensor([[[1, 0, 4],
         [2, 3, 3]],

        [[3, 4, 2],
         [4, 1, 5]]])

Comparison

Method          Dimension Change     Best For
torch.cat()     No change            Combining tensors along existing dimensions
torch.stack()   Adds one dimension   Creating batches or grouping tensors

Conclusion

Use torch.cat() to concatenate tensors along existing dimensions without changing dimensionality. Use torch.stack() to create a new dimension and stack tensors, which is particularly useful for batch operations in deep learning.

Updated on: 2026-03-26T18:39:50+05:30
