This repository has been archived by the owner on Dec 29, 2022. It is now read-only.
Hi,
The code to compute 8-bit non-sRGB output doesn't seem to follow the method outlined in the ASTC spec:
```cpp
const int c = (c0 * (64 - weight) + c1 * weight + 32) / 64;
// TODO(google): Handle conversion to sRGB or FP16 per C.2.19.
const int quantized = ((c * 255) + 32767) / 65536;
assert(quantized < 256);
```
The spec (Section 23.19) says: "If sRGB conversion is not enabled and the decoding mode is decode_unorm8, then the top 8 bits of the interpolation result for the R, G, B and A channels are used as the final result."
The difference will be very slight, and I'm not sure if this actually causes any issues at all because the scale factors are so similar. I'm pointing it out here because there's a comment, and because I noticed that the implementation deviates from the spec in a way that could break bit-exact decoding.
I have verified that the checked-in code (which scales by 255 and divides by 65536) is definitely not producing correct results, by comparing this decoder's output against another decoder's. The fix is simple:
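A sketch of the spec-conformant quantization for the non-sRGB decode_unorm8 path (my wording of the change, not a tested patch against this repo):

```cpp
#include <cassert>

// Take the top 8 bits of the 16-bit interpolation result, per the spec's
// "top 8 bits ... are used as the final result" wording, instead of
// scaling by 255 and dividing by 65536.
int QuantizeUnorm8(int c /* 16-bit interpolation result */) {
  const int quantized = c >> 8;
  assert(quantized < 256);
  return quantized;
}
```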
Also, this decoder needs an "sRGB" flag that the user can pass in, because it impacts how the 8-bit endpoints are scaled to 16-bits before the interpolation. Without this you can't correctly validate the codec when sRGB is enabled:
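To illustrate why the flag matters (based on my reading of C.2.19; the function and parameter names here are hypothetical): the 8-bit endpoint components are expanded to 16 bits differently before weight application depending on whether sRGB conversion is enabled:

```cpp
// Expand an 8-bit endpoint component to 16 bits before interpolation.
// Per my reading of spec section C.2.19: with sRGB enabled the low byte
// is set to 0x80, while in linear mode the endpoint value is replicated.
int ExpandEndpoint(int c8, bool srgb) {
  return srgb ? ((c8 << 8) | 0x80) : ((c8 << 8) | c8);
}
```

So the same encoded endpoint interpolates to different 16-bit values in the two modes, and a decoder without the flag cannot reproduce sRGB output bit-exactly.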