Using MultiGzDecoder for file with garbage after gzip data #396
Thanks for bringing this up! Could you provide a test file along with a test program that shows the current behaviour? With such an example, it would be possible to set up a test case and eventually find a solution.
Sure. Here is a test file I created with the following commands:
Running gzcat on the file:
A short program that demonstrates the issue:

```rust
use flate2::read::MultiGzDecoder;
use std::fs::File;
use std::io::{BufReader, Read, Write};

fn main() {
    let file = File::open("hello.gz").unwrap();
    let mut buf_reader = BufReader::new(MultiGzDecoder::new(file));
    let mut buf = [0_u8; 100];
    loop {
        match buf_reader.read(&mut buf) {
            Ok(0) => break,
            Ok(n) => {
                println!("read {} bytes:", n);
                std::io::stdout().write_all(&buf[0..n]).unwrap();
                std::io::stdout().flush().unwrap();
            }
            Err(e) => {
                println!("{:?}", e);
                break;
            }
        }
    }
}
```

And the output:
Thanks a lot for your help!

**Thoughts**

It looks like the question here is of […]. Another guess is that if the trailing bytes are longer, the error message will be a different, maybe more specific, one. After all, the magic signature seems to be 2 bytes long, so we'd run out of data before even reading it.

**Next Steps**

I think it would be worth setting up a couple of test cases with varying lengths of trailing garbage to get an idea of how it can be detected at the application level right now. From there, we might be able to figure out how to optionally adjust or enhance the API in a backward-compatible manner to make detecting this situation easier.
To do this […]. However, if the trailing garbage can be modelled as random data, there is a 1-in-65,536 chance that random garbage will be incorrectly interpreted as a valid gzip member and an error will be returned. Is that OK? It depends on the caller's requirements.

We could check more data. The next byte is always 8, which improves the odds, but still gives no guarantee (1 in 16,777,216). Is that OK? It depends on the caller's requirements.

Ultimately the best guarantee is to read the entire gzip member and return the decompressed data only if decoding succeeds. This requires arbitrary amounts of buffering. What is the correct balance of correctness vs. buffering? It depends on the caller's requirements.

Rather than have […]. Note that, when I say that the caller should decide, that does not rule out it using another third-party crate that wraps […].
I have a gzip JSON file that I did not create, which I am parsing and transforming using flate2 and serde_json. When I run my code over the unzipped file, everything is fine. When I run it on the gzipped file, it throws an unexpected end of file error. I am trying to figure out what is going on.
My working assumption is that the file, which is a multi-member gzip file, has some extra garbage after the end of the last member; and indeed, when I use gzcat to uncompress it, it does say "trailing garbage ignored". The section about multi-member files in the introduction says "If a file contains non-gzip data after the gzip data, MultiGzDecoder will emit an error after decoding the gzip data. This behavior matches the gzip, gunzip, and zcat command line tools."
I would like some way of decoding such a file without an error being returned, and just having any trailing garbage be ignored. What would be the simplest way of doing this?