perf(iroh-quinn): Add a mutable buffer so small writes are less expensive #249
base: fix-send-buffer
Conversation
Force-pushed from 1c77118 to f7df1e1
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##    fix-send-buffer     #249      +/-  ##
===================================================
- Coverage     76.48%   76.30%    -0.19%
===================================================
  Files            83       83
  Lines         23250    23266       +16
===================================================
- Hits          17783    17752       -31
- Misses         5467     5514       +47
extend_from_slice will *not* use the existing buffer, but just reallocate; reserve will move existing data to the front before allocating
not necessary after all!
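The distinction above can be mimicked with a stdlib-only sketch (this is not the real bytes API; `MiniBuf`, its fields, and the growth policy are illustrative): `reserve` first tries to reclaim the already-consumed prefix by moving live data to the front, and only grows the allocation when that isn't enough.

```rust
/// Illustrative buffer with a consumed-prefix offset, mimicking the
/// reserve-moves-data-to-front behavior described above.
struct MiniBuf {
    data: Vec<u8>,
    start: usize, // bytes before `start` were already consumed
}

impl MiniBuf {
    fn new() -> Self {
        Self { data: Vec::new(), start: 0 }
    }

    fn len(&self) -> usize {
        self.data.len() - self.start
    }

    fn advance(&mut self, n: usize) {
        self.start += n;
    }

    fn reserve(&mut self, additional: usize) {
        let live = self.len();
        if self.start > 0 && self.data.capacity() - live >= additional {
            // Reclaim the consumed prefix: move live data to the front
            // instead of reallocating.
            self.data.copy_within(self.start.., 0);
            self.data.truncate(live);
            self.start = 0;
        } else {
            // Not enough reclaimable room: grow the allocation.
            self.data.reserve(additional);
        }
    }

    fn extend_from_slice(&mut self, src: &[u8]) {
        self.reserve(src.len());
        self.data.extend_from_slice(src);
    }
}

fn main() {
    let mut buf = MiniBuf::new();
    buf.extend_from_slice(&[0u8; 64]);
    buf.advance(64); // consume everything
    let cap_before = buf.data.capacity();
    buf.extend_from_slice(&[1u8; 32]); // front reclaimed, no realloc
    assert_eq!(buf.data.capacity(), cap_before);
    assert_eq!(buf.len(), 32);
}
```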
Documentation for this PR has been generated and is available at: https://n0-computer.github.io/quinn/pr/249/docs/iroh_quinn/
Last updated: 2025-12-16T12:42:19Z
flub left a comment:
Anything needed here to unblock this work?
    offset: u64,
    /// Buffered data segments
    segments: VecDeque<Bytes>,
    /// Total size of `buffered_segments`
`buffered_segments` doesn't exist, can we fix this up?
And also turn it into a real doc link so it doesn't end up being wrong so easily again. Our CI checks these things now.
    }
    // the rest has to be in the last segment
    self.last_segment.advance(n);
    // shrink segments if we have a lot of unused capacity
Should the last_segment ever be shrunk?
I guess with the default MTUD probing config you go up to 1452 bytes per QUIC packet. A few of those bytes will be taken up by the header and AEAD tag.
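As a back-of-the-envelope check: the 1452-byte figure comes from the comment above, while the 8-byte DCID, 4-byte packet number, and 16-byte AEAD tag are assumptions about a typical 1-RTT short-header packet, not values taken from this codebase.

```rust
fn main() {
    // All sizes below are illustrative assumptions for a 1-RTT packet.
    let packet_size = 1452; // post-MTUD packet size, from the comment above
    let short_header = 1 + 8 + 4; // first byte + 8-byte DCID + 4-byte packet number
    let aead_tag = 16; // AES-GCM / ChaCha20-Poly1305 tag length
    let payload = packet_size - short_header - aead_tag;
    println!("usable payload per packet: ~{payload} bytes");
}
```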
Description
Coalesces small writes, while keeping large writes as separate slices.
E.g. you have iroh-blobs writing a bunch of hash pairs and then an entire 16 KiB chunk group. The writes will look like this:
In this branch the SendBuffer won't store each [u8; 64] as a separate Bytes but will append them to the last_segment of the SendBuffer until that reaches the threshold of MAX_COMBINE (currently 1024).
This means that the actual segment buffer `segments: VecDeque<Bytes>` contains fewer elements, so all those linear scans we do aren't that expensive. If we get lucky and get polled quickly enough, for small writes the data will never make it out of last_segment but will be copied right from there.
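A minimal sketch of the coalescing rule described above, using `Vec<u8>` where the real code uses `Bytes`/`BytesMut`; apart from `segments`, `last_segment`, and `MAX_COMBINE`, the names and the exact flush policy are illustrative, not the PR's implementation.

```rust
use std::collections::VecDeque;

/// Illustrative threshold; the PR currently uses MAX_COMBINE = 1024.
const MAX_COMBINE: usize = 1024;

/// Simplified stand-in for the SendBuffer described above.
struct SendBufferSketch {
    segments: VecDeque<Vec<u8>>,
    last_segment: Vec<u8>,
}

impl SendBufferSketch {
    fn new() -> Self {
        Self { segments: VecDeque::new(), last_segment: Vec::new() }
    }

    fn write(&mut self, data: &[u8]) {
        if self.last_segment.len() + data.len() <= MAX_COMBINE {
            // Small write: coalesce into the mutable tail buffer.
            self.last_segment.extend_from_slice(data);
        } else {
            // Tail is full (or the write is large): flush the tail as a
            // finished segment first.
            if !self.last_segment.is_empty() {
                self.segments.push_back(std::mem::take(&mut self.last_segment));
            }
            if data.len() > MAX_COMBINE {
                // Large write: keep it as its own segment, no extra copy
                // into the tail.
                self.segments.push_back(data.to_vec());
            } else {
                self.last_segment.extend_from_slice(data);
            }
        }
    }
}

fn main() {
    let mut buf = SendBufferSketch::new();
    // Eight 64-byte hash pairs followed by a 16 KiB chunk group, as in
    // the iroh-blobs example above.
    for _ in 0..8 {
        buf.write(&[0u8; 64]);
    }
    buf.write(&[0u8; 16 * 1024]);
    // The eight small writes coalesced into one flushed tail segment,
    // plus one segment for the big write: 2 segments instead of 9.
    assert_eq!(buf.segments.len(), 2);
}
```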
The next step would be to make sure the Bytes that are currently created when using AsyncWrite::write are not created. But that's for the next PR.
Todo: basic tests and proptests for SendBufferData
Breaking Changes
None
Notes & open questions
Q: Value for MAX_COMBINE? 1024 seems reasonable. This shouldn't be much more than what will be requested in poll_transmit.