Performance: Add caching layer for GET endpoints #10

@devin-ai-integration

Description

Problem

GET endpoints are extremely slow (20-60+ second response times). This appears to be because GET endpoints have no rate limiting, so agents constantly poll for updates and overwhelm the server.

Observed Behavior

  • GET requests to /api/v1/posts, /api/v1/submolts/{name}, etc. take 20-60+ seconds
  • POST requests (when working) are much faster
  • From a look at the codebase, rate limiting is applied only to POST endpoints (posts, comments, votes)

Suggested Solution

Add a caching layer for GET endpoints rather than rate limiting them. This would:

  1. Improve performance - Cached responses return instantly
  2. Reduce server load - Fewer database queries
  3. Better UX for agents - Faster feed updates, quicker browsing

Possible Implementation

  • Redis or in-memory cache for frequently accessed data (a rough sketch follows this list)
  • Cache invalidation on POST/PUT/DELETE operations
  • Short TTL (30-60 seconds) for feeds, longer for static content like submolt info
  • Consider ETags/conditional requests for efficient cache validation
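
As a rough illustration of the TTL + invalidation idea, assuming the backend is Python (the handler names, key scheme, and DB helpers below are hypothetical placeholders, not Moltbook's actual code):

```python
import time
from typing import Any, Callable, Dict, Tuple

_cache: Dict[str, Tuple[float, Any]] = {}  # key -> (expires_at, value)

def cached(key_fn: Callable[..., str], ttl: float):
    """Cache a GET handler's result for `ttl` seconds."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            key = key_fn(*args, **kwargs)
            hit = _cache.get(key)
            if hit and hit[0] > time.monotonic():
                return hit[1]  # still fresh: skip the database entirely
            value = fn(*args, **kwargs)
            _cache[key] = (time.monotonic() + ttl, value)
            return value
        return wrapper
    return decorator

def invalidate(prefix: str):
    """Drop cached entries whose key starts with `prefix` (called on writes)."""
    for key in [k for k in _cache if k.startswith(prefix)]:
        del _cache[key]

def fetch_posts_from_db(submolt: str):
    ...  # placeholder for the real feed query

def save_post_to_db(submolt: str, body: dict):
    ...  # placeholder for the real insert

@cached(key_fn=lambda submolt: f"posts:{submolt}", ttl=30)  # short TTL for feeds
def get_posts(submolt: str):
    return fetch_posts_from_db(submolt)

def create_post(submolt: str, body: dict):
    save_post_to_db(submolt, body)
    invalidate(f"posts:{submolt}")  # POST invalidates the cached feed
```

The same pattern maps onto Redis (SETEX with a TTL, DEL on writes) if the cache needs to be shared across multiple server processes.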

Alternative

If caching is too complex, consider:

  • Read-through caching at the database level
  • CDN caching for public endpoints (see the Cache-Control/ETag sketch after this list)
  • Rate limiting GET endpoints (less preferred as it hurts legitimate use)
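
For the CDN route (and the ETag/conditional-request idea above), here is a hedged sketch of what the response side could look like, again assuming Python; the function names are placeholders and the hash-based ETag is just one possible scheme:

```python
import hashlib
from typing import Optional, Tuple

def make_etag(body: bytes) -> str:
    # Weakly derive an ETag from the response body.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def conditional_response(body: bytes, if_none_match: Optional[str]) -> Tuple[int, dict, bytes]:
    """Return (status, headers, body); 304 with an empty body when the ETag matches."""
    etag = make_etag(body)
    headers = {
        "ETag": etag,
        # Let a CDN or proxy cache the response briefly and serve slightly
        # stale content while it revalidates in the background.
        "Cache-Control": "public, max-age=30, stale-while-revalidate=60",
    }
    if if_none_match == etag:
        return 304, headers, b""  # client/CDN copy is still valid
    return 200, headers, body
```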

Impact

This would significantly improve the experience for all agents using Moltbook, especially those with heartbeat integrations that poll regularly.
