
Trait Reply is not implemented for PrometheusResponse [E0277] #181

Closed
marvin-hansen opened this issue Sep 3, 2024 · 4 comments

Comments

@marvin-hansen
Contributor

autometrics = { version = "2.0.0", features = ["prometheus-exporter"] }
warp = "0.3"
Rust = 1.80

When configuring autometrics with warp, the compiler throws an error stating that the trait Reply is not implemented for PrometheusResponse.

Repro:

use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    prometheus_exporter::init();

    let routes = warp::get()
        .and(warp::path("metrics"))
        .and(warp::path::end())
        .map(|| prometheus_exporter::encode_http_response());

    let signal = shutdown_utils::signal_handler("http server");
    let (_, http_server) =
        warp::serve(routes).bind_with_graceful_shutdown(([0, 0, 0, 0], 8083), signal);

    let http_handle = tokio::spawn(http_server);

    match tokio::try_join!(http_handle) {
        Ok(_) => {}
        Err(e) => {
            println!("Failed to start server: {:?}", e);
        }
    }

    Ok(())
}

When I wrap the Prometheus exporter in a warp Reply, only the string encoding works; the code then compiles, but without exporting metrics.

async fn metrics_handler() -> Result<impl warp::Reply, warp::Rejection> {
    match autometrics::prometheus_exporter::encode_to_string() {
        Ok(metrics) => Ok(warp::reply::json(&metrics)),
        Err(_) => Err(warp::reject::not_found()),
    }
}
    let get_metrics = warp::get()
        .and(warp::path(metrics_uri.clone()))
        .and(warp::path::end())
        .and_then(metrics_handler);

    let routes = get_metrics;

    let signal = shutdown_utils::signal_handler("http server");
    let (_, http_server) = warp::serve(routes).bind_with_graceful_shutdown(http_addr, signal);

    // https://github.com/hyperium/tonic/discussions/740
    let http_handle = tokio::spawn(http_server);

    match tokio::try_join!(http_handle) {
        Ok(_) => {}
        Err(e) => {
            println!("Failed to start server: {:?}", e);
        }
    }

    Ok(())

I am a bit puzzled here. Is warp not supported?

@mellowagain
Member

mellowagain commented Sep 5, 2024

Hey! We upgraded autometrics in version 1.0.1 to use http 1.0; warp, however, still uses http 0.2. As far as I can see, they're planning an upgrade to hyper 1.0 (which also ships with http 1.0), but that still doesn't seem fully ready yet: seanmonstar/warp#1088

You've got multiple ways to work around this:

  1. Instead of directly replying with the PrometheusResponse, encode the metrics to a String and return that. You were very close to doing that already, but you replied with warp::reply::json. The exported metrics are not in JSON format; you can instead return the String directly:
#[autometrics]
async fn metrics_handler() -> Result<impl warp::Reply, warp::Rejection> {
    match autometrics::prometheus_exporter::encode_to_string() {
        Ok(metrics) => Ok(metrics),
        Err(_) => Err(warp::reject::not_found()),
    }
}
  2. You can downgrade to autometrics 1.0.0, which still uses http 0.2. Then you can directly respond with the PrometheusResponse:
# note the `=` at the beginning of the version pinning it to 1.0.0, as 1.0.1 upgraded the `http` version
autometrics = { version = "=1.0.0", features = ["prometheus-exporter"] }
let routes = warp::get()
    .and(warp::path("metrics"))
    .and(warp::path::end())
    .map(|| prometheus_exporter::encode_http_response());

We personally recommend the first variant so that you can take advantage of all the latest features and don't have to consult outdated documentation. Once warp has upgraded to hyper 1.0 (and thus also http 1.0), this workaround won't be needed anymore and you can directly return the PrometheusResponse again.

@marvin-hansen
Contributor Author

@mellowagain

Thank you; I understand that it comes down to warp lagging in adopting http 1.0.

When I do the recommended workaround with the updated handler, I only see the metrics exporter in the dashboard and nothing else.

I've instrumented a gRPC service according to the code example in the repo, and I was supposed to see all the endpoints of the implemented tonic trait, but somehow that doesn't happen. Also, the assigned API SLOs don't show up.

Maybe it would be a good idea to update the gRPC example to version 2, as it is still pinned to v1.0.0.

Has the configuration changed, or is there something else I have to know to make autometrics work with tonic gRPC?

@mellowagain
Member

> @mellowagain
>
> Thank you; I understand that it comes down to warp lagging in adopting http 1.0.
>
> When I do the recommended workaround with the updated handler, I only see the metrics exporter in the dashboard and nothing else.
>
> I've instrumented a gRPC service according to the code example in the repo, and I was supposed to see all the endpoints of the implemented tonic trait, but somehow that doesn't happen. Also, the assigned API SLOs don't show up.
>
> Maybe it would be a good idea to update the gRPC example to version 2, as it is still pinned to v1.0.0.
>
> Has the configuration changed, or is there something else I have to know to make autometrics work with tonic gRPC?

Do you have a repo or reproducible example I could take a look at? I'll update the grpc-http example to use autometrics 2.0.

@marvin-hansen
Contributor Author

@mellowagain Thank you so much for your help.

However, I had to remove autometrics from my project due to multiple issues in a complex multi-target cross-compilation setup with Bazel. In all fairness, autometrics got along surprisingly well, but when the number of compile targets doubled and multiple C compilers were used for the underlying dependencies, things fell apart for no obvious reason, and all issues were gone once autometrics was removed. I don't think it has anything to do with the Rust code in autometrics, but rather with some legacy C network code deep down in some sub-sub-dependencies.

I am not filing an issue for that because the setup is so far above and beyond the normal range that it is not worth fixing.

Closing this for good.
