Preemptively close chunks before big messages to isolate them #1291
base: main
Conversation
I think this makes sense and is a good improvement for most users. I'm inclined to believe it should be possible to disable this behavior with a writer option. I also think it changes the documented behavior of the …
Sounds good, I updated the comment for the `chunkSize` option and added a `noHugeMessageChunk` option to disable this behavior if needed.
This makes me wonder if we should instead close any chunk when writing a message if the current chunk size + new message size > chunkSize? That is, why check whether a message itself is larger than a whole chunk, rather than just closing chunks when they are about to become larger than chunkSize (rather than when they are already larger than chunkSize)?
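The two chunking policies being contrasted here can be sketched as follows (an illustrative sketch only, not the actual mcap writer code; the function names and the use of plain message sizes are hypothetical):

```python
def split_floor(sizes: list[int], chunk_size: int) -> list[list[int]]:
    """Soft floor: a chunk is closed only *after* it reaches chunk_size,
    so a big message can land in a chunk already holding small ones."""
    chunks, cur, total = [], [], 0
    for s in sizes:
        cur.append(s)
        total += s
        if total >= chunk_size:
            chunks.append(cur)
            cur, total = [], 0
    if cur:
        chunks.append(cur)
    return chunks


def split_ceiling(sizes: list[int], chunk_size: int) -> list[list[int]]:
    """Soft ceiling: close the current chunk *before* a message that would
    push it past chunk_size, so big messages end up isolated."""
    chunks, cur, total = [], [], 0
    for s in sizes:
        if cur and total + s > chunk_size:
            chunks.append(cur)
            cur, total = [], 0
        cur.append(s)
        total += s
        if total >= chunk_size:
            chunks.append(cur)
            cur, total = [], 0
    if cur:
        chunks.append(cur)
    return chunks
```

With a 100-byte chunk size and message sizes `[10, 10, 500, 10]`, the floor policy produces a mixed chunk `[10, 10, 500]`, while the ceiling policy isolates the big message into its own chunk: `[10, 10]`, `[500]`, `[10]`.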
FYI to @jhurliman in case you have any opinions on this change.
This seems like good default behavior; I'm not sure it needs a configurable option? EDIT: My personal inclination is toward @jtbandes' idea: bump the major version and change the behavior so that the chunk size is a (soft) ceiling instead of a floor.
…iling, rather than soft floor
Ok, I did that. I removed the new unit test since it became identical to the existing one that I had to modify. I'll let you worry about bumping the major version with your release process.
Initially I was worried that this might mean we end up writing Channel/Schema records into a different chunk from the Messages themselves. Upon reviewing the spec, it sounds like that is actually fine. Just calling this out in case anyone remembers differently :)
Changelog
Close chunks when big messages are written to isolate them, leading to much better read performance when going over small-message channels since those isolated chunks can be skipped over entirely.
Docs
None
Description
A very common situation is to have a few topics with very large messages (images, point clouds, etc.) and many topics with very small (and frequent) messages (state, transforms, statuses, etc.). When all of those are written into a single ("merged") mcap file, chunks typically end up containing a series of small messages followed by one big message that fills the rest of the chunk and "closes" it. The result is that nearly all chunks are "mixed" (i.e., messages from all channels appear in each chunk). During playback / reading, some client implementations (e.g., C++) have logic to skip over chunks that contain no messages of interest (on the topics being read). That logic is completely thwarted when most chunks are mixed in this pathological pattern.
So, this PR adds a very simple change. When an incoming message is very large (larger than the chunk size), instead of writing that message to the current chunk (mixing it with prior smaller messages), the writer closes the current chunk and writes the big message to a fresh chunk (which is closed immediately afterwards). The end result is that big-message channels get isolated into their own series of chunks, enabling readers to skip over them without loading those chunks into RAM or decompressing them.
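The writer-side behavior described above can be sketched roughly as follows (a minimal illustration under the final "soft ceiling" semantics; `ChunkedWriter` and its members are hypothetical names, not the real mcap writer API):

```python
class ChunkedWriter:
    """Minimal sketch of a chunked writer that closes the current chunk
    before a message that would overflow it, so messages larger than the
    chunk size end up isolated in their own chunks."""

    def __init__(self, chunk_size: int) -> None:
        self.chunk_size = chunk_size
        self.current_chunk: list[bytes] = []
        self.current_size = 0
        self.closed_chunks: list[list[bytes]] = []

    def _close_chunk(self) -> None:
        # Finalize the in-progress chunk, if it holds anything.
        if self.current_chunk:
            self.closed_chunks.append(self.current_chunk)
            self.current_chunk = []
            self.current_size = 0

    def write_message(self, payload: bytes) -> None:
        # Soft ceiling: if adding this message would push the chunk past
        # the limit, close the current chunk first so the new (possibly
        # huge) message starts a fresh chunk.
        if self.current_size > 0 and self.current_size + len(payload) > self.chunk_size:
            self._close_chunk()
        self.current_chunk.append(payload)
        self.current_size += len(payload)
        # A message at or above the chunk size fills and closes its own chunk.
        if self.current_size >= self.chunk_size:
            self._close_chunk()
```

For example, with a 100-byte chunk size, writing three 30-byte messages and then one 500-byte message yields two closed chunks: one containing only the small messages, and one containing only the big message.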
In theory (though I have not observed this), compression could also perform better on the isolated channels, where it matters most. What I have seen (see below) is that incompressible data (such as protobuf wire format) no longer gets uselessly compressed along with the larger "bytes" payloads (images, point clouds, etc.). This does not affect the resulting file size but does speed up reading. (BTW, it would be nice to be able to configure the minimum compression size, which is currently hardcoded to 1 KB in the writer.)
The only slight drawback that I can see is getting some smaller / partial chunks. I guess there might be some rather unrealistic pathological cases (like interleaving every big and small message) where this could lead to a noticeable difference in file size (bigger chunk index). If you want a writer option to disable that behavior, that's fine, but I believe this should be the default.
I ran some tests on some simple mcap files with mixed channels; the results are shown below.
Running `mcap-cli info` on the recorded mcap file (2.7 GB): [collapsed output]
Running `time mcap-cli cat --topic [redacted_topic_4]` on the recorded mcap file (2.7 GB): [collapsed output]
Running `mcap-cli info` on a recorded mcap file (2.7 GB): [collapsed output]
Running `time mcap-cli cat --topic [redacted_topic_4]` on the recorded mcap file (2.7 GB): [collapsed output]