LRU Cache implementation using Doubly Linked Lists
LRU (Least Recently Used) Cache is a caching algorithm that evicts the least recently accessed item when the cache reaches its capacity. It maintains items in order of their usage, with the most recently used at the front and the least recently used at the back. A doubly linked list combined with a hash map provides an efficient implementation with O(1) time complexity for both get and put operations.
How LRU Cache Works
The LRU cache maintains two key data structures:
Doubly Linked List: stores cache items in order of usage, with the head as the most recent and the tail as the least recent
Hash Map: provides O(1) access to any node in the linked list by key
When an item is accessed or inserted, it moves to the front of the list. When the cache is full and a new item is added, the item at the tail (least recently used) is removed.
Example Implementation
Here's the pseudocode for the LRU Cache operations:

```text
class LRUCache:
    constructor(capacity):
        this.capacity = capacity
        this.hashMap = new HashMap()
        this.head = new DummyNode()
        this.tail = new DummyNode()
        this.head.next = this.tail
        this.tail.prev = this.head

    get(key):
        if key exists in hashMap:
            node = hashMap[key]
            moveToHead(node)
            return node.value
        return -1

    put(key, value):
        if key exists in hashMap:
            node = hashMap[key]
            node.value = value
            moveToHead(node)
        else:
            newNode = new Node(key, value)
            if hashMap.size >= capacity:
                removeLRU()
            addToHead(newNode)
            hashMap[key] = newNode

    moveToHead(node):
        removeNode(node)
        addToHead(node)

    removeNode(node):
        node.prev.next = node.next
        node.next.prev = node.prev

    addToHead(node):
        node.prev = head
        node.next = head.next
        head.next.prev = node
        head.next = node

    removeLRU():
        lru = tail.prev          // real node just before the dummy tail
        removeNode(lru)
        hashMap.remove(lru.key)
```
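The pseudocode above can be translated directly into a runnable sketch. The version below is in Python; the class and attribute names are my own choices, not part of the original, but the structure (dummy head/tail sentinels, a dict for O(1) lookup) follows the pseudocode:

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    """LRU cache: dict for O(1) lookup, doubly linked list for usage order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}            # key -> Node
        self.head = Node()       # dummy head: most recently used side
        self.tail = Node()       # dummy tail: least recently used side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # Unlink node from the list in O(1).
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_head(self, node):
        # Insert node right after the dummy head (most recent position).
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key in self.map:
            node = self.map[key]
            self._remove(node)          # touching a key moves it to the front
            self._add_to_head(node)
            return node.value
        return -1

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_to_head(node)
        else:
            if len(self.map) >= self.capacity:
                lru = self.tail.prev    # least recently used real node
                self._remove(lru)
                del self.map[lru.key]
            node = Node(key, value)
            self._add_to_head(node)
            self.map[key] = node
```

Storing the key inside each node is essential: on eviction, the tail node's key is needed to delete the matching hash map entry.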
Step-by-Step Example
Consider an LRU Cache with capacity 2. Let's trace through operations:
| Operation | Action | Cache State | Result |
|---|---|---|---|
| put(1, "A") | Insert new | [1:"A"] | Cache: 1:"A" |
| put(2, "B") | Insert new | [2:"B", 1:"A"] | Cache: 2:"B" → 1:"A" |
| get(1) | Move to front | [1:"A", 2:"B"] | Return "A" |
| put(3, "C") | Evict LRU (2) | [3:"C", 1:"A"] | Cache: 3:"C" → 1:"A" |
| get(2) | Not found | [3:"C", 1:"A"] | Return -1 |
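The same trace can be reproduced with far less code using Python's `collections.OrderedDict`, which internally maintains a doubly linked list over its entries. This is a sketch of an alternative design, not the article's implementation:

```python
from collections import OrderedDict


class LRUCache:
    """LRU cache backed by OrderedDict; the first entry is least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark key as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict the least recently used entry
        self.data[key] = value


# Reproduce the trace from the table above (capacity 2):
cache = LRUCache(2)
cache.put(1, "A")
cache.put(2, "B")
print(cache.get(1))   # "A" (key 1 becomes most recent)
cache.put(3, "C")     # cache is full, so key 2 is evicted
print(cache.get(2))   # -1
```

`move_to_end` and `popitem(last=False)` are both O(1), so this preserves the constant-time guarantees while delegating the pointer bookkeeping to the standard library.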
Advantages
O(1) Time Complexity: both get and put operations execute in constant time
Efficient Eviction: stale entries are removed promptly with only pointer updates
Locality of Reference: takes advantage of temporal locality in data access patterns
Predictable Performance: consistent eviction behavior regardless of access patterns
Disadvantages
Memory Overhead: requires additional memory for pointers and the hash map structure
Implementation Complexity: more complex than simple caching strategies like FIFO
Poor Performance on Sequential Access: not optimal for sequential or scan-heavy workloads
No Frequency Consideration: may evict frequently used items that haven't been accessed recently
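In practice, many standard libraries already ship an LRU cache so you rarely need to implement one by hand. Python, for example, provides `functools.lru_cache` as a memoization decorator:

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def fib(n):
    """Memoized Fibonacci: repeated subcalls are served from the LRU cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)


print(fib(30))            # 832040, computed in linear rather than exponential calls
print(fib.cache_info())   # CacheInfo(hits=..., misses=..., maxsize=128, currsize=...)
```

When the number of distinct arguments exceeds `maxsize`, the least recently used entry is evicted, exactly as described above.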
Conclusion
An LRU cache built on a doubly linked list provides an efficient caching solution with O(1) operations for both insertion and retrieval. The combination of a hash map for fast lookup and a doubly linked list for maintaining access order makes it well suited to applications that need predictable cache behavior and efficient memory management.
