Program to check the number of requests that will be processed with given conditions in Python
When building web applications, we often need to implement rate limiting to prevent abuse. This problem demonstrates how to process requests with both per-user and global rate limits within a 60-second sliding window.
We have a list of requests where each contains [uid, time_sec] representing a user ID and timestamp. We need to enforce two constraints: u (maximum requests per user in 60 seconds) and g (maximum global requests in 60 seconds). Requests at the same timestamp are processed by lowest user ID first.
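The tie-breaking rule (same timestamp, lower user ID first) maps directly onto a compound sort key. A minimal sketch with illustrative data:

```python
# Sort by timestamp first, then by user ID to break ties
requests = [[2, 5], [1, 5], [3, 4]]
requests.sort(key=lambda r: (r[1], r[0]))
print(requests)  # [[3, 4], [1, 5], [2, 5]]
```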
Example
Given requests = [[0, 1], [1, 2], [1, 3]], u = 1, g = 5:
- User 0 requests at time 1 → Allowed (first request)
- User 1 requests at time 2 → Allowed (first request)
- User 1 requests at time 3 → Denied (exceeds per-user limit of 1)
Result: 2 requests processed successfully.
Algorithm
We use a sliding window technique with deques to track recent requests:
- Sort requests by timestamp, then by user ID
- Use deques to track timestamps within the 60-second window
- For each request, clean expired timestamps and check limits
- Accept request if both user and global limits allow
Implementation
from collections import defaultdict, deque

class RateLimiter:
    def count_processed_requests(self, requests, u, g):
        # Track recent request timestamps per user and globally
        user_requests = defaultdict(deque)
        global_requests = deque()
        window_time = 60
        requests.sort(key=lambda x: (x[1], x[0]))  # Sort by time, then uid
        processed_count = 0
        for uid, time in requests:
            # Remove expired global requests
            while global_requests and global_requests[0] + window_time <= time:
                global_requests.popleft()
            # Remove expired user requests
            while user_requests[uid] and user_requests[uid][0] + window_time <= time:
                user_requests[uid].popleft()
            # Check if request can be processed
            if len(global_requests) < g and len(user_requests[uid]) < u:
                user_requests[uid].append(time)
                global_requests.append(time)
                processed_count += 1
        return processed_count
# Test the implementation
limiter = RateLimiter()
requests = [[0, 1], [1, 2], [1, 3]]
u = 1 # Max requests per user in 60 seconds
g = 5 # Max global requests in 60 seconds
result = limiter.count_processed_requests(requests, u, g)
print(f"Processed requests: {result}")
Processed requests: 2
How It Works
The algorithm maintains two sliding windows:
- Global window: Tracks all processed requests in the last 60 seconds
- Per-user windows: Track each user's requests in the last 60 seconds
For each incoming request, we:
- Remove expired timestamps from both windows
- Check if processing this request would exceed either limit
- If within limits, add timestamp to both windows and increment counter
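Neither example in this article actually exercises the expiry step, since all timestamps fall inside one 60-second window. The sketch below (a standalone function with the same logic as the class above; names are illustrative) shows a request that is denied inside the window but allowed once an old timestamp ages out:

```python
from collections import defaultdict, deque

def count_processed(requests, u, g, window=60):
    # Same sliding-window logic as RateLimiter.count_processed_requests
    user_q, global_q = defaultdict(deque), deque()
    processed = 0
    for uid, t in sorted(requests, key=lambda r: (r[1], r[0])):
        # Drop timestamps that have left the window
        while global_q and global_q[0] + window <= t:
            global_q.popleft()
        while user_q[uid] and user_q[uid][0] + window <= t:
            user_q[uid].popleft()
        if len(global_q) < g and len(user_q[uid]) < u:
            user_q[uid].append(t)
            global_q.append(t)
            processed += 1
    return processed

# User 0's second request at t=30 is still inside the window of t=1 -> denied,
# but the third at t=61 arrives after t=1 expires (1 + 60 <= 61) -> allowed.
print(count_processed([[0, 1], [0, 30], [0, 61]], u=1, g=5))  # 2
```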
Alternative Example
# Example with stricter limits
limiter = RateLimiter()
requests = [[1, 10], [1, 20], [2, 25], [1, 30], [2, 35]]
u = 2 # Max 2 requests per user
g = 3 # Max 3 requests globally
result = limiter.count_processed_requests(requests, u, g)
print(f"Processed requests: {result}")
# Let's trace through the processing
print("\nStep-by-step processing:")
print("Request [1,10]: Processed (user:1/2, global:1/3)")
print("Request [1,20]: Processed (user:2/2, global:2/3)")
print("Request [2,25]: Processed (user:1/2, global:3/3)")
print("Request [1,30]: Denied (user limit reached)")
print("Request [2,35]: Denied (global limit reached)")
Processed requests: 3

Step-by-step processing:
Request [1,10]: Processed (user:1/2, global:1/3)
Request [1,20]: Processed (user:2/2, global:2/3)
Request [2,25]: Processed (user:1/2, global:3/3)
Request [1,30]: Denied (user limit reached)
Request [2,35]: Denied (global limit reached)
Time Complexity
Time: O(n log n), dominated by the sort. Each timestamp is appended to and removed from a deque at most once, so all window cleanups together take amortized O(n).
Space: O(n) for storing active requests in the sliding windows.
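The amortized bound on cleanup can be checked empirically. A small sketch (random data, illustrative only) counting popleft calls against the number of requests:

```python
from collections import deque
import random

# Each timestamp is appended once and popped at most once, so the total
# number of popleft calls never exceeds the number of requests.
window = 60
dq = deque()
pops = 0
times = sorted(random.randrange(10_000) for _ in range(1_000))
for t in times:
    while dq and dq[0] + window <= t:
        dq.popleft()
        pops += 1
    dq.append(t)
print(pops <= len(times))  # True
```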
Conclusion
This sliding window approach efficiently handles rate limiting by maintaining separate deques for global and per-user request tracking. The algorithm ensures both individual user limits and system-wide limits are respected while processing requests in chronological order.
