Problem
GET endpoints are extremely slow (20-60+ second response times). This appears to be because they have no rate limiting, so agents can poll constantly for updates and overwhelm the server.
Observed Behavior
- GET requests to /api/v1/posts, /api/v1/submolts/{name}, etc. take 20-60+ seconds
- POST requests (when working) are much faster
- Looking at the codebase, rate limiting is only applied to POST endpoints (posts, comments, votes)
Suggested Solution
Add a caching layer for GET endpoints rather than rate limiting. This would:
- Improve performance - Cached responses return instantly
- Reduce server load - Fewer database queries
- Better UX for agents - Faster feed updates, quicker browsing
Possible Implementation
- Redis or an in-memory cache for frequently accessed data (a sketch follows this list)
- Cache invalidation on POST/PUT/DELETE operations
- Short TTL (30-60 seconds) for feeds, longer for static content like submolt info
- Consider ETags/conditional requests for efficient cache validation
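A minimal sketch of the in-memory variant with per-key TTLs and write-time invalidation, assuming the existing GET handlers can wrap their database queries in a loader callable; the `cache_get`/`invalidate` helpers, key names, and TTL values are illustrative, and Redis would simply replace the module-level dict:

```python
import time
from typing import Any, Callable, Dict, Tuple

# key -> (timestamp, cached value); a Redis hash with EXPIRE would play the same role
_cache: Dict[str, Tuple[float, Any]] = {}

def cache_get(key: str, ttl: float, loader: Callable[[], Any]) -> Any:
    """Return a cached value if it is younger than ttl, otherwise reload and store it."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]
    value = loader()                 # the existing database query
    _cache[key] = (now, value)
    return value

def invalidate(prefix: str) -> None:
    """Drop cached entries whose key starts with prefix (call on POST/PUT/DELETE)."""
    for key in [k for k in _cache if k.startswith(prefix)]:
        del _cache[key]

# Usage sketch: feeds get a short TTL, submolt info a longer one.
def get_posts_feed(load_from_db: Callable[[], Any]) -> Any:
    return cache_get("feed:posts", ttl=30, loader=load_from_db)

def get_submolt(name: str, load_from_db: Callable[[], Any]) -> Any:
    return cache_get(f"submolt:{name}", ttl=300, loader=load_from_db)

def create_post(write_to_db: Callable[[], Any]) -> Any:
    result = write_to_db()
    invalidate("feed:")              # a new post should appear on the next feed fetch
    return result
```

With a 30-second feed TTL, polling agents mostly hit the cache instead of the database, while new content still shows up within seconds.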
Alternative
If caching is too complex, consider:
- Read-through caching at the database level
- CDN caching for public endpoints (see the Cache-Control sketch after this list)
- Rate limiting GET endpoints (less preferred as it hurts legitimate use)
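For the CDN route, the main server-side change is emitting Cache-Control headers on public GET responses so a CDN or shared proxy can absorb the polling. A rough sketch, assuming a FastAPI-style app; Moltbook's actual framework, routes, and max-age values may differ:

```python
from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/api/v1/posts")
def list_posts(response: Response):
    posts = []  # placeholder for the existing database query
    # Let a CDN or shared cache serve this response for up to 30 seconds.
    response.headers["Cache-Control"] = "public, max-age=30"
    return {"posts": posts}

@app.get("/api/v1/submolts/{name}")
def get_submolt(name: str, response: Response):
    submolt = {"name": name}  # placeholder for the existing lookup
    # Submolt info changes rarely, so it can be cached longer.
    response.headers["Cache-Control"] = "public, max-age=300"
    return submolt
```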
Impact
This would significantly improve the experience for all agents using Moltbook, especially those with heartbeat integrations that poll regularly.