Remove duplicates from a List in C#

Removing duplicates from a List<T> is a common task when working with collections in C#. There are several ways to accomplish this, with Distinct() being the most straightforward: it uses LINQ to filter out repeated elements using the type's default equality comparer.

Syntax

Following is the syntax for using Distinct() to remove duplicates −

List<T> uniqueList = originalList.Distinct().ToList();

For custom equality comparison −

List<T> uniqueList = originalList.Distinct(comparer).ToList();
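For instance, to treat strings that differ only in letter casing as duplicates, you can pass the built-in StringComparer.OrdinalIgnoreCase, which implements IEqualityComparer<string> (a minimal sketch) −

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Demo {
   public static void Main() {
      List<string> names = new List<string> {"alice", "Alice", "BOB", "bob"};

      // StringComparer.OrdinalIgnoreCase implements IEqualityComparer<string>,
      // so "alice" and "Alice" count as the same element; the first
      // occurrence of each is kept
      List<string> uniqueNames = names.Distinct(StringComparer.OrdinalIgnoreCase).ToList();

      Console.WriteLine(string.Join(", ", uniqueNames)); // alice, BOB
   }
}
```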

Using Distinct() Method

The Distinct() method from LINQ is the simplest way to remove duplicates from a list. It returns an IEnumerable<T> containing unique elements −

using System;
using System.Collections.Generic;
using System.Linq;

public class Demo {
   public static void Main() {
      List<int> numbers = new List<int> {10, 20, 30, 40, 50, 30, 40, 50};
      
      Console.WriteLine("Original List:");
      foreach (int num in numbers) {
         Console.WriteLine(num);
      }
      
      // Remove duplicates using Distinct()
      List<int> uniqueNumbers = numbers.Distinct().ToList();
      
      Console.WriteLine("\nList after removing duplicates:");
      foreach (int num in uniqueNumbers) {
         Console.WriteLine(num);
      }
   }
}

The output of the above code is −

Original List:
10
20
30
40
50
30
40
50

List after removing duplicates:
10
20
30
40
50

Using HashSet for Better Performance

For large collections, constructing a HashSet<T> is an efficient way to remove duplicates: the set rejects repeated elements as it is built, and each membership check is O(1) on average −

using System;
using System.Collections.Generic;
using System.Linq;

public class Demo {
   public static void Main() {
      List<string> fruits = new List<string> {"Apple", "Banana", "Orange", "Apple", "Grape", "Banana"};
      
      Console.WriteLine("Original List:");
      Console.WriteLine(string.Join(", ", fruits));
      
      // Remove duplicates using HashSet
      HashSet<string> uniqueSet = new HashSet<string>(fruits);
      List<string> uniqueFruits = uniqueSet.ToList();
      
      Console.WriteLine("\nUnique fruits using HashSet:");
      Console.WriteLine(string.Join(", ", uniqueFruits));
   }
}

The output of the above code is −

Original List:
Apple, Banana, Orange, Apple, Grape, Banana

Unique fruits using HashSet:
Apple, Banana, Orange, Grape
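Note that HashSet<T> does not formally guarantee iteration order. If you need the deduplicated list to preserve the order of first occurrence, you can exploit the fact that HashSet<T>.Add returns false for an element that is already present (a minimal sketch) −

```csharp
using System;
using System.Collections.Generic;

public class Demo {
   public static void Main() {
      List<string> fruits = new List<string> {"Apple", "Banana", "Orange", "Apple", "Grape", "Banana"};

      // HashSet<T>.Add returns false when the item is already in the set,
      // so this loop keeps only the first occurrence of each element
      // while preserving the original order
      var seen = new HashSet<string>();
      List<string> uniqueFruits = new List<string>();
      foreach (string fruit in fruits) {
         if (seen.Add(fruit)) {
            uniqueFruits.Add(fruit);
         }
      }

      Console.WriteLine(string.Join(", ", uniqueFruits)); // Apple, Banana, Orange, Grape
   }
}
```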

Using GroupBy for Complex Objects

When working with custom objects, you can use GroupBy() to remove duplicates based on specific properties −

using System;
using System.Collections.Generic;
using System.Linq;

public class Person {
   public string Name { get; set; }
   public int Age { get; set; }
   
   public override string ToString() {
      return $"{Name} ({Age})";
   }
}

public class Demo {
   public static void Main() {
      List<Person> people = new List<Person> {
         new Person { Name = "John", Age = 25 },
         new Person { Name = "Jane", Age = 30 },
         new Person { Name = "John", Age = 25 },
         new Person { Name = "Bob", Age = 35 }
      };
      
      Console.WriteLine("Original List:");
      foreach (var person in people) {
         Console.WriteLine(person);
      }
      
      // Remove duplicates based on Name property
      var uniquePeople = people.GroupBy(p => p.Name)
                              .Select(g => g.First())
                              .ToList();
      
      Console.WriteLine("\nUnique people by Name:");
      foreach (var person in uniquePeople) {
         Console.WriteLine(person);
      }
   }
}

The output of the above code is −

Original List:
John (25)
Jane (30)
John (25)
Bob (35)

Unique people by Name:
John (25)
Jane (30)
Bob (35)
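If your project targets .NET 6 or later, LINQ's DistinctBy() expresses the same idea more directly than GroupBy(): it keeps the first element for each distinct key (a sketch assuming .NET 6+) −

```csharp
// Requires .NET 6 or later, where Enumerable.DistinctBy was added
using System;
using System.Collections.Generic;
using System.Linq;

public class Person {
   public string Name { get; set; }
   public int Age { get; set; }
}

public class Demo {
   public static void Main() {
      List<Person> people = new List<Person> {
         new Person { Name = "John", Age = 25 },
         new Person { Name = "Jane", Age = 30 },
         new Person { Name = "John", Age = 25 }
      };

      // DistinctBy keeps the first element seen for each distinct key
      var uniquePeople = people.DistinctBy(p => p.Name).ToList();

      foreach (var person in uniquePeople) {
         Console.WriteLine($"{person.Name} ({person.Age})"); // John (25), Jane (30)
      }
   }
}
```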

Comparison of Methods

Method       Performance                            Best Use Case
Distinct()   Good for small to medium collections   Simple types with default equality
HashSet<T>   Excellent for large collections        When performance is critical
GroupBy()    Good                                   Complex objects with custom criteria

Conclusion

The Distinct() method is the most straightforward approach for removing duplicates from a list in C#. For better performance with large collections, consider using HashSet<T>, and for complex objects, use GroupBy() to remove duplicates based on specific properties.

Updated on: 2026-03-17T07:04:35+05:30
