
http: avoid copying large reply bodies#109

Draft
l0rinc wants to merge 2 commits into master from l0rinc/http-no-copy-large-replies

Conversation


@l0rinc l0rinc commented Jan 27, 2026

Fixes bitcoin#31041

Problem

Serving very large JSON-RPC batch responses can spike peak memory usage because the full reply body is copied into libevent’s output buffer.
This can contribute to out-of-memory termination on low-memory systems when an indexer (or other client) requests large batches.

Fix

This PR reduces peak memory by adding HTTPRequest::WriteReply(int, std::string&&) and using evbuffer_add_reference() so large reply bodies can be sent without an extra full copy (with a safe fallback to copying if referencing fails).
It also adds a functional test that constructs a large batch reply and asserts the large-reply path switches from "copied" (baseline) to "referenced" after the change.

Add a functional test that creates a large JSON-RPC batch reply (using repeated getblock verbosity=0 calls) and asserts the HTTP server logs that the large response body was copied into the libevent output buffer.

This establishes baseline behavior related to bitcoin#31041 (memory pressure from large batch replies) before switching the reply path to avoid the extra copy.
Add `HTTPRequest::WriteReply(int, std::string&&)` that stores the reply body and uses `evbuffer_add_reference()` so libevent can send large responses without an extra full copy.

This reduces peak memory when serving large RPC/REST replies (e.g. the JSON-RPC batch responses implicated in bitcoin#31041). Update the functional test to assert that the referenced-reply path is used for large batch replies.


Development

Successfully merging this pull request may close these issues.

Crash upon RPC v1 connection in v28.0.0

1 participant