
overhaul pacing code#260

Open
jthomas43 wants to merge 1 commit into holepunchto:main from jthomas43:deficit_traffic_shaper

Conversation

@jthomas43
Contributor

This changes the pacing code. The original strategy was to track the number of bytes we are allowed to send in stream->tb_available, which was incremented on a timer and decremented when sending data. This PR instead tracks the next time we may send a packet in stream->next_send_ts; each time we send a packet we advance stream->next_send_ts according to the pacing rate.

The updated code is simpler, avoids a bug where stream->pacing_bytes_per_ms could previously be truncated to zero at very low delivery rates, and avoids a burst when a new write is enqueued on an idle connection.
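As a rough illustration of the timestamp-based scheme (a hypothetical sketch, not the actual udx code; pacer_t, pacer_can_send, and pacer_on_send are made-up names), pacing by next-send time needs only a comparison and an addition, rounding up avoids the zero-rate truncation, and clamping the base to now avoids the idle-burst problem:

```c
#include <stdint.h>

/* Hypothetical sketch of next_send_ts-style pacing; names are illustrative. */
typedef struct {
  uint64_t next_send_ts; /* earliest time (ms) the next packet may be sent */
} pacer_t;

static int
pacer_can_send (pacer_t *p, uint64_t now_ms) {
  return now_ms >= p->next_send_ts;
}

static void
pacer_on_send (pacer_t *p, uint64_t now_ms, uint64_t packet_bytes, uint64_t rate_bytes_per_sec) {
  /* serialization time of this packet at the pacing rate, rounded up so a
     very low rate cannot truncate the gap to zero */
  uint64_t gap_ms = (packet_bytes * 1000 + rate_bytes_per_sec - 1) / rate_bytes_per_sec;

  /* advance from now rather than a stale next_send_ts, so an idle
     connection does not bank credit and release it as a burst */
  uint64_t base = p->next_send_ts > now_ms ? p->next_send_ts : now_ms;
  p->next_send_ts = base + gap_ms;
}
```

A token bucket would instead need the timer-driven refill that caused the truncation bug in the first place.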

This PR also fixes a bug where the clamped cwnd from the BBR PROBE_RTT phase could be wrongly saved during fast recovery.

Contributor

@kasperisager left a comment


I assume we're still fine with the constraint that uv_now() is only updated once per tick?

@telamon
Contributor

telamon commented Sep 26, 2025

I assume we're still fine with the constraint that uv_now() is only updated once per tick?

Maybe if we trace how many times uv_now() gets called per tick, it'd be easier to answer.

@kasperisager
Contributor

kasperisager commented Sep 26, 2025

What's interesting isn't necessarily the number of times it's called but rather the possible drift it causes. If a tick is much longer than a single millisecond then uv_now() will start to drift.

If the code can tolerate that drift up to some reasonable limit then all should be well.
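One way to see that the drift is tolerable: a cached clock that lags real time (as a once-per-tick uv_now() can during a long tick) can only defer a send past its due time, never release one early. A trivial self-contained sketch (illustrative names, not the actual code):

```c
#include <stdint.h>

/* A next_send_ts check: a stale now_ms can only make this return 0
   (defer), never 1 (send early), so clock drift within a tick delays
   pacing but cannot cause a burst. */
static int
can_send (uint64_t next_send_ts, uint64_t now_ms) {
  return now_ms >= next_send_ts;
}
```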

@jthomas43 force-pushed the deficit_traffic_shaper branch from 19d7b6b to 2fc8f17 on October 7, 2025 22:20
@jthomas43 closed this Oct 10, 2025
@jthomas43 reopened this Oct 10, 2025
@jthomas43
Contributor Author

jthomas43 commented Oct 14, 2025

The algorithm on main sends packets faster, probably because the token-bucket algorithm we use on main adapts immediately when a new bandwidth or RTT sample adjusts the pacing rate: tokens start accumulating at the new rate right away.

This algorithm waits until a packet is sent at stream->next_send_ts before the new rate is applied when scheduling the packet after it. If next_send_ts is 1 second in the future and an ACK arrives showing we have bandwidth to send N packets per RTT, we still have to wait a whole second before we start sending at that rate.

Still, this can probably be solved while remaining simpler than main: when we adjust the rate, check the state of the pacer, and if a packet is scheduled (next_send_ts is in the future), re-schedule next_send_ts according to the new pacing rate. This could be done using the time at which next_send_ts was scheduled, retroactively scheduling the packet at the rate we've just discovered; or by crediting the time spent waiting so far at the original pacing rate against the time we still have to wait at the new rate.
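The retroactive variant could look roughly like this (a hypothetical sketch, not the actual udx implementation; it assumes the time at which next_send_ts was last set is recorded as scheduled_at_ms):

```c
#include <stdint.h>

/* Re-schedule a pending send after a pacing-rate change by retroactively
   applying the new per-packet gap from the original schedule point.
   Illustrative only; all names are made up. */
static uint64_t
pacer_reschedule (uint64_t scheduled_at_ms, /* when next_send_ts was set */
                  uint64_t now_ms,
                  uint64_t next_send_ts,
                  uint64_t new_gap_ms) {    /* gap implied by the new rate */
  if (next_send_ts <= now_ms)
    return next_send_ts; /* nothing pending, nothing to move */

  if (now_ms - scheduled_at_ms >= new_gap_ms)
    return now_ms; /* already waited long enough at the new rate: send now */

  return scheduled_at_ms + new_gap_ms; /* retroactively apply the new rate */
}
```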



3 participants