observe dynamic memory usage #5
base: master
Conversation
- added here instead of memusage to prevent a circular dependency later
since the addrs to send get worked on & removed from the list, add the size of the queue instead of mostly returning empty
Force-pushed from 90e268a to 90e8233
mzumsande left a comment:
Nice! Just had a first look, will review in more detail / run the patch next week.
Not sure how important, but maybe m_node_states should be accounted for too (in particular vBlocksInFlight)?
src/net_processing.cpp (outdated)
does m_headers_sync->m_header_commitments need special treatment?
thanks! I missed that field (comments on my attempt to understand it below)
    PeerRef peer = GetPeerRef(pfrom->GetId());
    if (peer == nullptr) return false;

    peer->inaccessible_dyn_memusage = (
I don't completely understand this - why is this memory inaccessible, and why do you query m_addrs_to_send and m_addr_known here, and not in PeerDynamicMemoryUsage?
Since the m_addrs_to_send work queue fluctuates frequently, the idea was to capture it at a point in time where it's more representative. Generally, ProcessMessages fills it up and then SendMessages services / clears it, so at a random point in time it's most likely empty. But when entering ProcessMessages for a peer, it might already have some addresses on it because of RelayAddress calls made while handling other peers.
Does that help? Def not the most robust solution, but I think it should be slightly better than querying it directly from PeerDynamicMemoryUsage.
Force-pushed from 90e8233 to 03dbb36
Yup def, it's on my list, but it's a good sign that you're catching these :) I don't yet have calculations for any of the dynamic memory usage on
I think it's only used during the new headerssync phase right after startup (see PR 25717). You could start a node on mainnet / testnet / signet with an empty datadir - the default logging will show you the progress. I would expect the memory usage to slowly grow in the first phase (when we download headers for the first time) and reach its maximum in the second phase (the re-download). After all headers are synced successfully it should be back to zero.
I reviewed the branch again, and it looks good, all calculations make sense to me! I tested the headerssync calculations with a fresh headers sync on signet, only adding a log for the fields
Yes, that makes sense, I get 4144 for
Also, headerssync is something we only do with a single peer at a time (at least usually; if it's too slow we would add more peers) as part of IBD, and the memory is cleared after it has finished, so it shouldn't be too important for blocksonly scaling.
patch to observe memory usage of different types of bitcoin connections

exposes RPC endpoint getpeermemoryinfo to list current memory usage for each peer

currently not included:
- TxRequestTracker
- V2Transport
- PeerManagerImpl things that grow in proportion to peers