Conversation
kasperisager left a comment
I assume we're still fine with the constraint that uv_now() is only updated once per tick?
> maybe if we trace how many times …

What's interesting isn't necessarily the number of times it's called but rather the possible drift it causes. If a tick is much longer than a single millisecond, the cached timestamp can drift noticeably. If the code can tolerate that drift up to some reasonable limit then all should be well.
… saved during fast recovery (force-pushed from 19d7b6b to 2fc8f17)
The algorithm on main sends packets faster, probably because the token-bucket algorithm we use on main adapts immediately when a new bandwidth or RTT sample adjusts the pacing rate: the tokens start accumulating at the new rate right away. This algorithm waits until the already scheduled send time has passed before the new rate takes effect. Still, this can probably be solved while remaining simpler than main: when we adjust the rate, check the state of the pacer. If a packet is scheduled (that is, stream->next_send_ts is in the future), recompute the deadline using the new rate.
This changes the pacing code. The original strategy was to track the number of bytes we were allowed to send in stream->tb_available, which was incremented on a timer and decremented when sending data. This PR instead tracks the next time we may send a packet in stream->next_send_ts; each time we send a packet we advance stream->next_send_ts according to the pacing rate.
The updated code is simpler, avoids a bug where stream->pacing_bytes_per_ms could previously be truncated to zero at a very low delivery rate, and avoids a burst when a new write is enqueued on an idle connection.
This PR also fixes a bug where the clamped cwnd from the BBR PROBE_RTT phase could be wrongly saved during fast recovery.