
perf: adjust mux config #418

Merged · 3 commits merged into dev on Jan 25, 2024

Conversation

@sinui0 (Member) commented on Jan 24, 2024

This PR adjusts the multiplexer config to increase the receive window. This will reduce the effects of throttling as latency increases and should give us a theoretical throughput of ~300 Mbps per stream at 60 ms latency instead of ~33 Mbps.
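As a rough sanity check (assuming yamux's standard 256 KiB default receive window, which is not stated in this PR): per-stream throughput is bounded by roughly receive_window / round-trip time, and 256 KiB / 0.06 s ≈ 4.4 MiB/s ≈ 35 Mbps, which lines up with the ~33 Mbps baseline; enlarging the window raises that ceiling proportionally.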

The trade-off here is that it increases the memory allocated to buffers. Due to this, I capped the max streams to 40 to prevent DoS (our protocol uses a static number of streams, 33 atm). The existing default configuration provides a max of 8192 streams, which works out to ~2Gb maximum allocation if they were all opened. With the new settings the maximum allocation is 640Mb.

Newer versions of the yamux crate scale the window dynamically based on latency and the number of open streams, but their API has breaking changes that we would need to address if we want to bump the version later.
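For illustration, here is a minimal sketch of the kind of adjustment described above, assuming the pre-0.12 `yamux` crate API (`Config::set_receive_window`, `set_max_buffer_size`, `set_max_num_streams`). The window and buffer sizes are placeholder values, not the ones merged in this PR; only the stream cap of 40 comes from the description.

```rust
use yamux::Config;

/// Build a multiplexer config with a larger per-stream receive window
/// and a capped stream count (illustrative values only).
fn mux_config() -> Config {
    let mut cfg = Config::default();
    // A larger receive window raises the per-stream bandwidth-delay ceiling,
    // so throughput degrades less as round-trip latency grows.
    cfg.set_receive_window(8 * 1024 * 1024);
    // The read buffer must be able to hold a full window of unread data.
    cfg.set_max_buffer_size(8 * 1024 * 1024);
    // Cap concurrent streams so the worst-case buffer allocation stays bounded.
    cfg.set_max_num_streams(40);
    cfg
}
```

Worst-case memory then scales as max_num_streams × per-stream buffer size, which is the trade-off described in the paragraphs above.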

@th4s (Member) left a comment:
👍

@yuroitaki (Member) left a comment:

LGTM!

@sinui0 merged commit ee17919 into dev on Jan 25, 2024 (15 checks passed)
@sinui0 deleted the perf/adjust-mux-config branch on January 25, 2024 at 21:13