Problem
Several rate limiting implementations still use in-memory Maps that grow indefinitely:
- Stock reservation rate limiting
- Product cache rate limiting
- Session-based rate limiting
- Any other in-memory rate limiting maps
These cause memory leaks with 200+ concurrent users.
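For context, a minimal sketch of the leaky pattern described above (the function name, limit, and window values are hypothetical, not taken from the codebase):

```typescript
// Process-local Map keyed by client: entries are added for every new user
// but never evicted, so memory grows with the number of distinct clients
// and counts are not shared between server instances.
const requestCounts = new Map<string, { count: number; windowStart: number }>();

export function isRateLimited(clientKey: string, limit = 100, windowMs = 60_000): boolean {
  const now = Date.now();
  const entry = requestCounts.get(clientKey);

  if (!entry || now - entry.windowStart > windowMs) {
    // Start a new window for this key; stale keys from inactive users stay in the Map forever.
    requestCounts.set(clientKey, { count: 1, windowStart: now });
    return false;
  }

  entry.count += 1;
  return entry.count > limit;
}
```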
Impact
- Memory usage grows with user count
- Rate limits don't work across server instances
- Memory pressure on application servers
- Inconsistent rate limiting behavior
Solution
Audit and replace all remaining in-memory rate limiting Maps with Redis-based solutions:
- Use the existing cacheService for all rate limiting (see the sketch after this list)
- Implement proper TTL for automatic cleanup
- Ensure distributed rate limiting across instances
- Add proper error handling and fallbacks
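A minimal sketch of the intended replacement, shown with plain ioredis because the exact cacheService API isn't documented in this issue; in the codebase these calls would go through the existing cacheService wrapper, and the key prefix, window, and limit values are assumptions:

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

const WINDOW_SECONDS = 60; // assumed fixed-window length
const MAX_REQUESTS = 100;  // assumed per-window limit

export async function isRateLimited(clientKey: string): Promise<boolean> {
  const key = `ratelimit:${clientKey}`;
  try {
    // INCR + EXPIRE implements a fixed-window counter: Redis removes the key
    // via TTL (automatic cleanup) and every instance shares the same count
    // (distributed rate limiting), so no process-local state accumulates.
    const count = await redis.incr(key);
    if (count === 1) {
      await redis.expire(key, WINDOW_SECONDS);
    }
    return count > MAX_REQUESTS;
  } catch (err) {
    // Fallback: if Redis is unreachable, fail open rather than blocking traffic.
    console.error("Rate limiter falling back to no-op (Redis unavailable)", err);
    return false;
  }
}
```

Whether to fail open or fail closed on Redis errors is a policy decision; failing open is shown here only as one option for the "error handling and fallbacks" item.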
Files to audit
- All API routes with rate limiting
- Session management code
- Cache implementations
- Any other in-memory storage
Acceptance Criteria
- Audit all in-memory rate limiting Maps
- Replace with Redis-based solutions
- Implement proper TTL and cleanup
- Test distributed rate limiting
- Verify no memory leaks under load (a rough check is sketched after this list)
- Ensure rate limits work across instances
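As a rough way to exercise the last two criteria, a hypothetical smoke test; the module path, key count, and use of --expose-gc are all assumptions:

```typescript
// Hammer the limiter with many distinct client keys (the worst case for an
// in-memory Map) and confirm the process heap stays roughly flat, since the
// counters now live in Redis with a TTL rather than in the Node.js process.
import { isRateLimited } from "./rateLimiter"; // assumed module path

async function main() {
  const before = process.memoryUsage().heapUsed;

  for (let i = 0; i < 50_000; i++) {
    await isRateLimited(`user-${i}`);
  }

  global.gc?.(); // run with `node --expose-gc` for a more stable reading
  const after = process.memoryUsage().heapUsed;
  console.log(`Heap growth: ${((after - before) / 1024 / 1024).toFixed(1)} MB`);
}

main().catch(console.error);
```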