
Distributed Key-Value Store
Certification: Intermediate Level
Points: 15
Design and implement a distributed key-value store that supports the basic operations get, put, and delete. The system should shard data across multiple nodes, replicate each key for fault tolerance, and keep replicas consistent.
Example 1
- Input:
  kvs = DistributedKeyValueStore(nodes=3, replicas=2)
  kvs.put("user1", {"name": "John", "age": 30})
  kvs.put("user2", {"name": "Alice", "age": 25})
  result = kvs.get("user1")
- Output: {"name": "John", "age": 30}
- Explanation:
- Step 1: Create a distributed key-value store with 3 nodes and replication factor of 2.
- Step 2: Store key "user1" with value {"name": "John", "age": 30}.
- Step 3: Store key "user2" with value {"name": "Alice", "age": 25}.
- Step 4: Retrieve the value for key "user1".
- Step 5: The system returns {"name": "John", "age": 30} from the appropriate node.
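The steps above can be modeled with a minimal in-memory sketch. The class and method names follow the example; the sharding scheme (hash each key to a primary node, then place replicas on the next nodes) is an assumption for illustration, not a required design:

```python
import hashlib

class DistributedKeyValueStore:
    """Minimal in-memory sketch: hash-based sharding plus simple replication."""

    def __init__(self, nodes, replicas):
        self.num_nodes = nodes
        self.replicas = replicas
        self.storage = [dict() for _ in range(nodes)]  # one dict per node

    def _replica_nodes(self, key):
        # Hash the key to a primary node; replicas go on the successor nodes.
        primary = int(hashlib.sha256(key.encode()).hexdigest(), 16) % self.num_nodes
        return [(primary + i) % self.num_nodes for i in range(self.replicas)]

    def put(self, key, value):
        # Write the value to every replica node.
        for node in self._replica_nodes(key):
            self.storage[node][key] = value

    def get(self, key):
        # Read from the first replica that holds the key.
        for node in self._replica_nodes(key):
            if key in self.storage[node]:
                return self.storage[node][key]
        return None

    def delete(self, key):
        # Remove the key from all replicas.
        for node in self._replica_nodes(key):
            self.storage[node].pop(key, None)
```

Each operation touches at most `replicas` nodes, so get/put stay O(1) on average with respect to the number of stored keys.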
Example 2
- Input:
  kvs = DistributedKeyValueStore(nodes=5, replicas=3)
  kvs.put("product1", {"name": "Laptop", "price": 999.99})
  kvs.get("product1")
  kvs.simulate_node_failure(2)  # Node 2 fails
  result = kvs.get("product1")
- Output: {"name": "Laptop", "price": 999.99}
- Explanation:
- Step 1: Create a distributed key-value store with 5 nodes and replication factor of 3.
- Step 2: Store key "product1" with value {"name": "Laptop", "price": 999.99}.
- Step 3: Retrieve the value for key "product1" successfully.
- Step 4: Simulate a failure of node 2.
- Step 5: Retrieve the value for key "product1" again.
- Step 6: Despite node 2 failing, the system can still retrieve the data from another replica.
- Step 7: The coordinator transparently redirects the request to a healthy node with a replica.
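The failover behavior in this example can be sketched by tracking which nodes are alive and skipping failed nodes on reads. The class name `FaultTolerantKVStore` and the alive-set mechanism are illustrative assumptions; `simulate_node_failure` matches the example's API:

```python
import hashlib

class FaultTolerantKVStore:
    """Sketch of replica fallback: reads skip nodes marked as failed."""

    def __init__(self, nodes, replicas):
        self.num_nodes = nodes
        self.replicas = replicas
        self.alive = set(range(nodes))          # node ids currently healthy
        self.storage = [dict() for _ in range(nodes)]

    def _replica_nodes(self, key):
        primary = int(hashlib.sha256(key.encode()).hexdigest(), 16) % self.num_nodes
        return [(primary + i) % self.num_nodes for i in range(self.replicas)]

    def put(self, key, value):
        # Write to every healthy replica node.
        for node in self._replica_nodes(key):
            if node in self.alive:
                self.storage[node][key] = value

    def get(self, key):
        # Try each replica in order; transparently skip failed nodes.
        for node in self._replica_nodes(key):
            if node in self.alive and key in self.storage[node]:
                return self.storage[node][key]
        return None

    def simulate_node_failure(self, node_id):
        # Mark a node as down; its data remains reachable via other replicas.
        self.alive.discard(node_id)
```

With a replication factor of 3, any single node failure still leaves at least two replicas reachable, so the get after the failure succeeds.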
Constraints
- 1 ≤ number of nodes ≤ 100
- 1 ≤ replication factor ≤ number of nodes
- Keys and values can be of any serializable type
- Time Complexity: O(1) average case for get/put operations
- Space Complexity: O(n) where n is the total number of key-value pairs
Solution Hints
- Use consistent hashing to distribute keys across nodes
- Implement a replication strategy (e.g., chain replication or primary-backup)
- Add health checks to detect node failures
- Use vector clocks or similar mechanisms for handling conflicts
- Implement a coordinator that knows the cluster topology
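The first hint, consistent hashing, can be sketched as a ring of virtual nodes: each physical node is hashed to many ring positions, and a key's replicas are the first distinct nodes found clockwise from the key's hash. The class name, the vnode count of 100, and the use of MD5 are illustrative assumptions:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Sketch of a consistent-hash ring with virtual nodes."""

    def __init__(self, node_ids, vnodes=100):
        # Each physical node owns `vnodes` points on the ring, which
        # smooths the key distribution and limits remapping on changes.
        self.ring = []  # sorted list of (hash, node_id)
        self.nodes = set(node_ids)
        for node in node_ids:
            for v in range(vnodes):
                self.ring.append((self._hash(f"{node}#{v}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def preference_list(self, key, n):
        """Return the first n distinct nodes clockwise from the key's hash."""
        if not self.ring:
            return []
        limit = min(n, len(self.nodes))
        idx = bisect.bisect(self.ring, (self._hash(key), ""))
        result = []
        i = idx
        while len(result) < limit:
            _, node = self.ring[i % len(self.ring)]
            if node not in result:
                result.append(node)
            i += 1
        return result
```

A coordinator can use the preference list both to route writes to all replicas and to fall back to the next healthy node on reads. When a node joins or leaves, only the keys mapped to its ring positions move, rather than rehashing the whole keyspace as modulo sharding would require.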