
OpenTelemetry flaky test #916

Closed
flavio opened this issue Sep 20, 2024 · 7 comments · Fixed by #1097

flavio (Member) commented Sep 20, 2024

The integration test covering OpenTelemetry has been flaky during some runs of the GH CI.

This is the error reported:

---- test_otel stdout ----
Cannot connect to Kubernetes cluster: Failed to infer configuration: failed to infer config: in-cluster: (failed to read an incluster environment variable: environment variable not found), kubeconfig: (failed to read kubeconfig from '"/home/runner/.kube/config"': No such file or directory (os error 2))
Cannot connect to Kubernetes, context aware policies will not work properly
thread 'test_otel' panicked at tests/integration_test.rs:798:5:
assertion failed: metrics["metrics"].as_array().unwrap().iter().any(|m|
        { m["name"] == "kubewarden_policy_evaluation_latency_milliseconds" })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
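
For context, a minimal sketch of what the failing assertion checks, assuming the test parses the collector's JSON output with serde_json (the read_metrics_output helper below is illustrative, not the actual code in integration_test.rs):

use serde_json::Value;

// Illustrative helper: parse whatever the collector wrote to its metrics
// output. An empty or truncated file here would surface as a JSON parsing
// error instead of a clean assertion failure.
fn read_metrics_output(raw: &str) -> Value {
    serde_json::from_str(raw).expect("cannot parse the collector's metrics output")
}

// The assertion from the panic message: the latency histogram must be present
// among the exported metrics.
fn assert_latency_metric_present(metrics: &Value) {
    assert!(metrics["metrics"]
        .as_array()
        .unwrap()
        .iter()
        .any(|m| m["name"] == "kubewarden_policy_evaluation_latency_milliseconds"));
}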

flavio (Member, Author) commented Sep 25, 2024

We cannot reproduce the flaky test; we've introduced more debugging information to better understand what's going on.

Moving this out of the 'todo' column; we will wait for the failure to happen again.

@flavio flavio removed this from the 1.17 milestone Sep 25, 2024

jvanz (Member) commented Sep 30, 2024

I can see this error when I run the tests locally:

test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:815:5:
metrics_output_json: {
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "unknown_service"
            }
          },
          {
            "key": "telemetry.sdk.name",
            "value": {
              "stringValue": "opentelemetry"
            }
          },
          {
            "key": "telemetry.sdk.version",
            "value": {
              "stringValue": "0.24.1"
            }
          },
          {
            "key": "telemetry.sdk.language",
            "value": {
              "stringValue": "rust"
            }
          }
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "kubewarden_policy_evaluations_total",
              "sum": {
                "aggregationTemporality": 2,
                "dataPoints": [
                  {
                    "asInt": "1",
                    "attributes": [
                      {
                        "key": "initialization_error",
                        "value": {
                          "stringValue": "bootstrap failure: wrong_url: Error while downloading policy 'wrong_url' from ghcr.io/kubewarden/tests/not_existing:v0.1.0: Fail to interact with OCI registry: Registry error: url https://ghcr.io/v2/kubewarden/tests/not_existing/manifests/v0.1.0, envelope: OCI API errors: [OCI API error: manifest unknown]"
                        }
                      },
                      {
                        "key": "policy_name",
                        "value": {
                          "stringValue": "wrong_url"
                        }
                      }
                    ],
                    "startTimeUnixNano": "1727700657372706354",
                    "timeUnixNano": "1727700657528442954"
                  }
                ],
                "isMonotonic": true
              }
            }
          ],
          "scope": {
            "name": "kubewarden"
          }
        }
      ]
    }
  ]
}
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 18 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 115.27s

error: test failed, to rerun pass `--test integration_test`
make: *** [Makefile:24: test] Error 101

@flavio flavio moved this to Todo in Kubewarden Feb 14, 2025
@flavio flavio added this to the 1.22 milestone Feb 14, 2025

flavio (Member, Author) commented Feb 14, 2025

This is getting annoying; let's try to sort this out in time for 1.22.

fabriziosestito (Contributor) commented Feb 17, 2025

Bumping the OTel collector testcontainer to the latest version (0.119.0 at the time of writing) produces a different set of errors:

2025-02-17T17:10:08.612342Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: Connection reset by peer (os error 104)\" })" Failed during the export process
2025-02-17T17:10:08.613382Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
2025-02-17T17:10:12.114413Z ERROR opentelemetry_sdk:  name="BatchSpanProcessor.Flush.ExportError" reason="ExportFailed(Status { code: Unavailable, message: \", detailed error message: tls handshake eof\" })" Failed during the export process
test test_otel has been running for over 60 seconds
2025-02-17T17:10:52.115248Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
2025-02-17T17:11:52.115397Z ERROR opentelemetry_sdk:  name="PeriodicReader.ExportFailed" Failed to export metrics reason="Metrics exporter otlp failed with the grpc server returns error (The service is currently unavailable): , detailed error message: tls handshake eof"
test test_otel ... FAILED

failures:

---- test_otel stdout ----
thread 'test_otel' panicked at tests/integration_test.rs:938:14:
called `Result::unwrap()` on an `Err` value: Error("EOF while parsing a value", line: 1, column: 0)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    test_otel

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 21 filtered out; finished in 137.64s

error: test failed, to rerun pass `--test integration_test`

We need to bump to this version before proceeding with the bug fix.
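
A hedged sketch of what pinning the collector version could look like, assuming the suite starts the collector via the testcontainers crate's GenericImage (the helper and image names below are illustrative; the real test may use a different image or additional builder calls):

use testcontainers::GenericImage;

// Illustrative: pin the collector image to the version under test
// (0.119.0 at the time of writing) instead of relying on an older tag.
fn otel_collector_image() -> GenericImage {
    GenericImage::new("otel/opentelemetry-collector", "0.119.0")
}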

jvanz (Member) commented Feb 24, 2025

Just sharing something that I've already said to some members of the team during one of our calls: I believe there is a concurrency issue in our tests. I found a way to reproduce the issue by running two tests on a single thread:

cargo test --workspace -- test_otel test_detect_certificate_rotation --nocapture --test-threads 1

If the number of threads is changed from 1 to 2, the tests pass. If we ensure that the test_otel test runs first, the tests also pass. When the debug log level is enabled, it's possible to see that the metric reader has no metrics to export:

2025-02-24T21:23:24.410887Z DEBUG opentelemetry_sdk:  name="PeriodicReader.ExportTriggered" Export message received.
2025-02-24T21:23:24.410942Z DEBUG opentelemetry_sdk:  name="MeterProviderInvokingObservableCallbacks" count=0
2025-02-24T21:23:24.410969Z DEBUG opentelemetry_sdk:  name="PeriodicReaderWorker.NoMetricsToExport"

The traces always work. I'm wondering if the issue is caused by the fact that the meter provider is global; depending on the order of execution, it would be properly configured or not.
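
To illustrate the suspicion, a minimal sketch of how global state could explain the behaviour, assuming opentelemetry 0.24 (the SDK version from the logs above) and that the code under test obtains its meter through opentelemetry::global; the function names below are illustrative, not the actual helpers in the code base:

use opentelemetry::global;

// Illustrative: both tests run in the same process, so they share
// OpenTelemetry's process-wide global state. Whichever test configures the
// global meter provider last wins.
fn configure_metrics_for_test_otel() {
    // test_otel is expected to install an SDK meter provider that exports to
    // the collector container, e.g.:
    // global::set_meter_provider(sdk_meter_provider);
}

fn record_policy_evaluation() {
    // The code under test asks the *global* provider for its meter. If another
    // test left a no-op provider installed, nothing is recorded and the reader
    // logs "PeriodicReaderWorker.NoMetricsToExport".
    let meter = global::meter("kubewarden");
    let counter = meter
        .u64_counter("kubewarden_policy_evaluations_total")
        .init();
    counter.add(1, &[]);
}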

fabriziosestito (Contributor) commented

@jvanz

> I'm wondering if the issue is caused by the fact that the meter provider is global; depending on the order of execution, it would be properly configured or not.

This might be it, good catch!

Since OpenTelemetry relies on global state to configure metrics and tracing, a simple fix could be isolating the test behind a Cargo feature.

Cargo.toml:

[features]
otel_tests = []

Integration test:

#[test]
#[cfg(feature = "otel_tests")]
fn test_otel() {
    // ...
}

cargo test --features otel_tests

jvanz (Member) commented Feb 25, 2025

Yes, I agree that would be a fix. But I think we should improve this a little further some day: because the traces and metrics are global, if one enables the debug log level, all the tests afterward will pollute the output or cause other issues. That's what I was trying to address for a while, but maybe it is a premature optimization for now.

@jvanz jvanz moved this from In Progress to Pending review in Kubewarden Feb 25, 2025
@github-project-automation github-project-automation bot moved this from Pending review to Done in Kubewarden Feb 26, 2025