JpegDecoder: Optimize HuffmanScanDecoder #1607
I'm assuming the worst offender is this section here?
ImageSharp/src/ImageSharp/Formats/Jpeg/Components/Decoder/HuffmanScanBuffer.cs, lines 140 to 172 in 82bca55
I've considered something before, using SSE to mask/compare 64 bits at a time there for a happy path, which should save us a lot of time, but preloading the full MCU and using a fast/slow path for the whole thing is a new one. It would be amazing if we could pull that off! If that's something you could find time for at some point, it would be most welcome. BTW, if you're looking for some real stinkers to optimize, take a look here: #1476 (comment). The Emit is a real stinker just now. I'm considering writing a buffered writer as a sticky plaster for some of the short writes, but I reckon someone with your kind of mind could look at it and find many better ways to improve it.
Yep, that'd be the one. I can't think of a way to SIMDify huffman decoding, since you're parsing an unknown number of bits per symbol, but I'll study up on the algorithm and give it some thought. I can definitely see how the stream writes would be an issue in the encoder. Could probably handle that the same way, with a happy path that works in a local 64 byte buffer and writes it out all at once on completion. I assume as well that if you're encoding, you can avoid the degenerate cases where the compressed data is larger, but maybe that can still happen when you work with a pre-defined dictionary. I have some reading to do...
Just a note in case anyone else wants to look at this before I get time. I forgot the values being encoded/decoded here are quantized DCT coefficients, so they're 16-bit (10-bit actually, but you know...). That makes the 1:1 compression case 128 bytes for the MCU. Additionally, the ITU spec (Rec. T.81, Annex C) explicitly limits huffman code lengths to 16 bits, meaning the absolute worst case would be 256 bytes for the MCU. So this is doable both on the decoder and encoder side.
@br3aker Would you consider this issue still relevant?
Oh sorry, I absolutely missed your message. I'd say 'yes'; I have a couple of WIP local branches which benefit from various huffman-related optimizations, but they are not so major: ~5-7% in my current local branch. Maybe we can squeeze a bit more from it - who knows :) Hopefully next year I'll be able to spend time on these optimizations - I really want to push the JPEG codec to be at least as fast as its competitors.
@br3aker you're a performance machine! |
Prerequisites
DEBUG and RELEASE mode

Description
To pair with #1597, I have identified some opportunities for improvement in baseline scan decoding performance (haven't looked at progressive yet, but the same ideas might apply).
I had a look through the asm from HuffmanScanDecoder.DecodeBlockBaseline(), and while there are some small codegen issues that are easy to fix without refactoring, I was only able to knock about 70 bytes off the code -- out of a total method size of 2400 bytes including inlined callees -- which was barely measurable in the full decode benchmarks.

The bulk of that full method is made up of calls to BufferedReadStream.ReadByte(), which is inlined a total of 12 times into the outer method. BufferedReadStream itself is quite clever in that it allows ReadByte() to be devirtualized and inlined, but each byte read still comes with significant overhead.

ImageSharp/src/ImageSharp/IO/BufferedReadStream.cs, lines 129 to 148 in 82bca55

When inlined into the caller, it compiles to x64 asm in which the movsxd could be eliminated by using a native-sized readBufferIndex, but it would still be a huge amount of overhead for fetching a single byte of data.

Additionally, the huffman decoder has checks and recovery built in for handling bad data, which add overhead -- not necessarily to execution in this case, since they're unlikely to be hit, but they do add to code size.
I would think (and I'm about to show my ignorance of JPEG huffman coding, so correct me if I'm wrong) that it should be possible to build a happy path through the decoder that depends on eagerly peeking a number of bytes (one that should suffice for 99+% of all blocks) from the stream into a stack buffer and working through those without the need for individual byte loads. If that's something like 64 bytes, since that would be 1:1 compression, it would be easy to satisfy. If it needs to handle a few degenerate cases where the compressed data is larger than the original, maybe double that?
The current resilient implementation could serve as a backstop: if the happy path doesn't work for some reason, it could bail and restart that block with the slow path. This shouldn't happen often, and only in cases where perf becomes secondary to security and correctness.
I don't have the time to work on restructuring the code just yet, but does that sound reasonable, @JimBobSquarePants?