OpenTelemetry flaky test #916
The integration test covering OpenTelemetry has been flaky during some runs of the GH CI.
This is the error reported:

Comments
We cannot reproduce the flaky test, so we've introduced more debugging information to better understand what's going on. Moving this out of the 'todo' column; we will wait for the failure to happen again.
I can see this error when I run the tests locally:

This is getting annoying; let's try to sort this out in time for 1.22.
Bumping the otel collector testcontainer to the latest version (0.119.0 at the time of writing):

```
2025-02-17T17:10:08.612342Z ERROR opentelemetry_sdk: name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: Connection reset by peer (os error 104)\" })" Failed during the export process
2025-02-17T17:10:08.613382Z ERROR opentelemetry_sdk: name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
2025-02-17T17:10:12.114413Z ERROR opentelemetry_sdk: name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
test test_otel has been running for over 60 seconds
2025-02-17T17:10:52.115248Z ERROR opentelemetry_sdk: name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
2025-02-17T17:11:52.115397Z ERROR opentelemetry_sdk: name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:938:14:
called `Result::unwrap()` on an `Err` value: Error("EOF while parsing a value", line: 1, column: 0)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

failures:
    test_otel

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 21 filtered out; finished in 137.64s

error: test failed, to rerun pass `--test integration_test`
```

We need to bump to this version before proceeding with the bug fix.
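Pinning the collector testcontainer to that explicit tag might look roughly like the sketch below. This is only an illustration: the image name, the helper function, and the exact testcontainers API used here are assumptions, not the repository's actual test setup.

```rust
// Hypothetical sketch, not the project's real code: pin the OpenTelemetry
// Collector testcontainer to an explicit tag so every run exercises the
// version discussed above instead of a floating "latest".
use testcontainers::GenericImage;

// Assumed image name and tag; adjust to whatever the integration tests actually use.
const OTEL_COLLECTOR_IMAGE: &str = "otel/opentelemetry-collector";
const OTEL_COLLECTOR_TAG: &str = "0.119.0";

fn otel_collector_image() -> GenericImage {
    GenericImage::new(OTEL_COLLECTOR_IMAGE, OTEL_COLLECTOR_TAG)
}
```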
Just sharing something that I've already said to some members of the team during one of our calls... I believe there is some concurrency issue in our tests. I found a way to simulate the issue by running two tests on a single thread: `cargo test --workspace -- test_otel test_detect_certificate_rotation --nocapture --test-threads 1`. If the number of threads is changed from 1 to 2, the tests pass. If we ensure that the

The traces always work. I'm wondering if the issue is caused by the meter provider being global; depending on the order of execution, it will or will not be properly configured.
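To illustrate the hypothesis, here is a stand-alone sketch (not the project's code, and not the opentelemetry API): once a process-wide global is initialized by whichever test gets there first, later configuration attempts are silently ignored, so the outcome depends on execution order.

```rust
use std::sync::OnceLock;

// Stand-in for a global provider: the first caller wins, later callers are ignored.
static METER_ENDPOINT: OnceLock<String> = OnceLock::new();

fn configure_metrics(endpoint: &str) -> &'static str {
    METER_ENDPOINT.get_or_init(|| endpoint.to_string()).as_str()
}

fn main() {
    // Simulates two tests configuring metrics in the same process, one after the other.
    let first = configure_metrics("https://collector-for-test-a:4317");
    let second = configure_metrics("https://collector-for-test-b:4317");

    // The second configuration is silently dropped, so whichever "test" runs
    // first determines where everything is exported.
    assert_eq!(first, second);
    println!("both calls see: {first}");
}
```

Isolating the test (or making the provider per-test rather than global) would remove that ordering dependence.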
This might be it, good catch! Since opentelemetry relies on global state to configure metrics and tracing, a simple fix could be isolating the test behind a feature.

Cargo.toml:

```toml
[features]
otel_tests = []
```

Integration test:

```rust
#[test]
#[cfg(feature = "otel_tests")]
fn test_otel() {
    ....
}
```

```
cargo test --features otel_tests
```
Yes, I agree that would be a fix. But I think we should improve this a little further some day: because the traces and metrics are global, if one test enables the debug log level, all the tests afterwards will pollute the output or cause other issues. That's what I was trying to do for a while, but maybe it is a premature optimization for now.