Program to find duplicate elements and delete last occurrence of them in Python
When working with lists in Python, we sometimes need to find the duplicate elements and remove their last occurrences. Solving this requires tracking the total frequency of each element and recognizing, while iterating, when we have reached the final occurrence of a duplicated value.
So, if the input is like [10, 30, 40, 10, 30, 50], then the output will be [10, 30, 40, 50].
Algorithm Steps
To solve this problem, we follow these steps:
- Count the frequency of each element in the list
- Track how many times we've seen each element while iterating
- When the seen count equals the total frequency and it's greater than 1, remove that element
- Continue until all duplicates' last occurrences are removed
Implementation
class Solution:
    def solve(self, nums):
        seen = {}
        frequency = {}
        # Count frequency of each element
        for i in range(len(nums)):
            if nums[i] not in frequency:
                frequency[nums[i]] = 1
            else:
                frequency[nums[i]] += 1
        i = 0
        while i < len(nums):
            total_count = frequency[nums[i]]
            # Track how many times we've seen this element
            if nums[i] not in seen:
                seen[nums[i]] = 1
            else:
                seen[nums[i]] += 1
            # If this is the last occurrence of a duplicate, remove it
            if total_count == seen[nums[i]] and total_count > 1:
                nums.pop(i)
                i -= 1
            i += 1
        return nums
# Test the solution
ob = Solution()
result = ob.solve([10, 30, 40, 10, 30, 50])
print(result)
Output
[10, 30, 40, 50]
How It Works
The algorithm uses two dictionaries:
- frequency: Stores the total count of each element in the original list
- seen: Tracks how many times we've encountered each element during iteration
When seen[element] equals frequency[element] and the frequency is greater than 1, we know this is the last occurrence of a duplicate element, so we remove it.
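To make the detection step concrete, here is a small helper (not part of the original solution; the function name is my own) that reports the indices at which the removal condition fires for the sample input:

```python
from collections import Counter

def last_duplicate_indices(nums):
    """Return indices of the last occurrence of each duplicated element."""
    frequency = Counter(nums)
    seen = {}
    indices = []
    for i, num in enumerate(nums):
        seen[num] = seen.get(num, 0) + 1
        # Condition fires exactly at the final occurrence of a duplicate
        if seen[num] == frequency[num] and frequency[num] > 1:
            indices.append(i)
    return indices

print(last_duplicate_indices([10, 30, 40, 10, 30, 50]))  # [3, 4]
```

For the sample list, the condition fires at index 3 (last 10) and index 4 (last 30), which are exactly the positions the main algorithm removes.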
Alternative Approach Using Collections
from collections import Counter

def remove_last_duplicates(nums):
    frequency = Counter(nums)
    seen = {}
    result = []
    for num in nums:
        seen[num] = seen.get(num, 0) + 1
        # Keep element if it's not the last occurrence or not a duplicate
        if seen[num] != frequency[num] or frequency[num] == 1:
            result.append(num)
    return result
# Test the alternative solution
data = [10, 30, 40, 10, 30, 50]
result = remove_last_duplicates(data)
print(result)
Output
[10, 30, 40, 50]
Conclusion
Both approaches remove the last occurrence of each duplicate element by comparing a running "seen" count against the total frequency. The Counter-based version runs in O(n) time; the in-place version can degrade to O(n²) in the worst case, because each nums.pop(i) shifts the remaining elements. Space complexity for both is O(k), where k is the number of unique elements.
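A related variant, sketched here as an extra illustration (the function name is my own), scans the list right to left: the first time a duplicated value is met in that direction is its last occurrence in the original order, so skipping it once achieves the same result without a running seen count.

```python
from collections import Counter

def remove_last_duplicates_reversed(nums):
    frequency = Counter(nums)
    skipped = set()
    result = []
    # Scanning right to left, the first encounter with a duplicated
    # value corresponds to its last occurrence in left-to-right order
    for num in reversed(nums):
        if frequency[num] > 1 and num not in skipped:
            skipped.add(num)  # drop this (last) occurrence
        else:
            result.append(num)
    result.reverse()
    return result

print(remove_last_duplicates_reversed([10, 30, 40, 10, 30, 50]))  # [10, 30, 40, 50]
```

This keeps the O(n) time of the Counter-based approach while avoiding the per-element bookkeeping dictionary.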
