Least Frequently Used (LFU) Implementation


A memory cache is a software or hardware component that holds frequently accessed data in a convenient location. A cache is used to improve system performance by reducing the time and cost of accessing data. The cache's capacity determines how much data it can store. When the cache is full, some data must be evicted to make room for new data.

A cache relies on a replacement policy to decide which data to evict. The Least Frequently Used (LFU) cache replacement policy is one such policy. Every item in an LFU cache has a usage count, which records the number of times the item has been accessed. When the cache is full, the item with the lowest usage count is removed. This approach rests on the premise that items used infrequently in the past will also be used infrequently in the future.

An LFU cache can be implemented using a data structure that allows for quick item lookup, insertion, and deletion. A dictionary (or hash table) is an excellent choice for this task. Here's how to implement an LFU cache using a dictionary in Python:

Python

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}   # Dictionary to store the items in the cache
        self.counts = {}  # Dictionary to store the usage count of each item

    def get(self, key: int) -> int:
        if key in self.cache:
            # Increment the usage count of the item
            self.counts[key] += 1
            return self.cache[key]
        else:
            return -1

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return

        if key in self.cache:
            # Update the value of the item and increment its usage count
            self.cache[key] = value
            self.counts[key] += 1
        else:
            if len(self.cache) >= self.capacity:
                # Evict an item with the lowest usage count
                min_count = min(self.counts.values())
                items = [k for k, v in self.counts.items() if v == min_count]
                del self.cache[items[0]]
                del self.counts[items[0]]
            # Add the new item to the cache with a usage count of 1
            self.cache[key] = value
            self.counts[key] = 1
 
cache = LFUCache(2)  # Create an LFU cache with capacity 2
cache.put(1, 1)  # Add key 1 with value 1 to the cache
cache.put(2, 2)  # Add key 2 with value 2 to the cache
print(cache.get(1))  # Output: 1
cache.put(3, 3)  # Add key 3 with value 3 to the cache, evicting key 2
print(cache.get(2))  # Output: -1 (key 2 is no longer in the cache)
print(cache.get(3))  # Output: 3
cache.put(4, 4)  # Add key 4 with value 4 to the cache, evicting key 1
print(cache.get(1))  # Output: -1 (key 1 is no longer in the cache)
print(cache.get(3))  # Output: 3
print(cache.get(4))  # Output: 4

Output

1
-1
3
-1
3
4

__init__ method

The '__init__' method creates two dictionaries: 'self.cache' for keeping the items in the cache and 'self.counts' for keeping the usage count of each item.

get method

The 'get' method searches the cache for the specified key; if the key is found, it increments the item's usage count and returns its value. If the key cannot be found, the method returns '-1'.

put method

The 'put' method adds or updates a key-value pair in the cache. If the key already exists in the cache, the item's value is updated and its usage count is incremented. If the key does not yet exist, the key-value pair is added, and if the cache is full, an item with the lowest usage count is evicted first. The 'min' function is used to find the lowest count in 'self.counts', a list comprehension then collects all keys with that count, and the first key in that list is removed from both dictionaries, as the sketch below shows.
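To make the eviction step easier to follow, here is a minimal sketch of the same min-count scan run in isolation; the names 'cache' and 'counts' mirror the attributes above, and the sample values are hypothetical.

cache = {1: 'a', 2: 'b', 3: 'c'}   # mirrors self.cache
counts = {1: 3, 2: 1, 3: 1}        # mirrors self.counts

min_count = min(counts.values())  # lowest usage count: 1
items = [k for k, v in counts.items() if v == min_count]  # keys with that count: [2, 3]
del cache[items[0]]   # ties are broken by dictionary (insertion) order, so key 2 goes
del counts[items[0]]
print(cache)   # {1: 'a', 3: 'c'}
print(counts)  # {1: 3, 3: 1}

Note that when several keys share the lowest count, this scheme simply evicts whichever comes first in dictionary order; the tie-breaker is arbitrary in this implementation.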

This implementation is simple and works well for small caches. An LFU cache uses usage-count-based eviction rules to decide which items are removed when the storage space is full: when the cache is full and a new item must be added, the item with the lowest usage count is evicted. This keeps the most frequently used items in the cache, which can lead to higher cache hit rates and better performance.

The implementation above maintains the cache and its usage counts with two hash tables (dictionaries); finding the minimum count on eviction requires a linear scan, so 'put' is O(n) in the worst case. A more scalable LFU cache can pair the hash table with a priority queue (min-heap) or with frequency-bucket lists so that the least frequently used item can be located quickly.
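As an illustration of that idea, here is a minimal heap-based sketch, not part of the original implementation; the class name 'HeapLFUCache' and its internals are assumptions introduced for this example. Each heap entry stores a (count, push order, key) tuple, and entries whose count no longer matches 'self.counts' are treated as stale and skipped lazily during eviction.

import heapq
import itertools

class HeapLFUCache:
    # A hypothetical LFU cache pairing a hash table with a min-heap.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}                 # key -> value
        self.counts = {}                # key -> current usage count
        self.heap = []                  # (count, order, key) entries, possibly stale
        self.order = itertools.count()  # tie-breaker: push order

    def _push(self, key: int) -> None:
        # Record the key's current count; older heap entries for it become stale.
        heapq.heappush(self.heap, (self.counts[key], next(self.order), key))

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.counts[key] += 1
        self._push(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return
        if key in self.cache:
            self.cache[key] = value
            self.counts[key] += 1
            self._push(key)
            return
        if len(self.cache) >= self.capacity:
            # Pop until a non-stale entry appears; that key has the lowest count.
            while self.heap:
                count, _, k = heapq.heappop(self.heap)
                if k in self.counts and self.counts[k] == count:
                    del self.cache[k]
                    del self.counts[k]
                    break
        self.cache[key] = value
        self.counts[key] = 1
        self._push(key)

With this design, 'get' and 'put' run in O(log n) amortized time instead of requiring the O(n) scan above, at the cost of keeping stale heap entries around until they are popped. Production-grade LFU caches often go further and use frequency-bucket linked lists to reach O(1) operations.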

Conclusion

LFU cache is a caching technique that evicts the least frequently used items from the cache's memory when it becomes full. It is useful in situations where resources are limited and their use must be optimized.

The LFU cache is an efficient way of handling limited resources when caching is required. It can be used for a variety of purposes, including web content caching, database caching, and file system caching. However, maintaining usage counts and enforcing the eviction policy adds computational overhead, and its effectiveness depends on the workload's data access patterns.
