By default, an Actix server starts with multiple worker threads. When configured as shown in the example, this produces rather unexpected behavior for a newcomer: each worker maintains its own Prometheus metrics, so each call to `/metrics` may be answered by a different worker. As a result, counters can appear to decrease.
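For context, here is a minimal sketch of the kind of setup that triggers this, with the middleware built inside the `HttpServer::new()` factory closure so that every worker ends up with its own default registry (the namespace and bind address here are just placeholders):

```rust
use actix_web::{App, HttpServer};
use actix_web_prom::PrometheusMetricsBuilder;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        // The factory closure runs once per worker, so each worker
        // builds its own middleware backed by its own fresh registry.
        let prometheus = PrometheusMetricsBuilder::new("actix")
            .endpoint("/metrics")
            .build()
            .unwrap();
        App::new().wrap(prometheus)
    })
    .bind(("127.0.0.1", 8080))? // placeholder address
    .run()
    .await
}
```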
Since I didn't want the performance hit of using just one worker, I solved this by passing clones of a Prometheus registry initialized outside of my `HttpServer::new()` closure (which should be safe, since it's just an `Arc<RwLock<_>>`), along with an `Arc<AtomicUsize>` that I used to track which worker was being initialized (`worker_id`). I then used the `worker_id` as a const label, so each metric reports which worker it comes from. This solves the decreasing counters issue, but it dramatically increases the number of metrics reported (8 workers = 8x more metrics).
```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

use actix_web::{App, HttpServer};
use actix_web_prom::PrometheusMetricsBuilder;

// One shared registry, created before the server so all workers use it.
let prometheus_registry = prometheus::Registry::new();
let worker_id = Arc::new(AtomicUsize::new(0));

HttpServer::new(move || {
    // The factory closure runs once per worker; grab a unique id.
    let id = worker_id.fetch_add(1, Ordering::SeqCst);
    let mut labels = HashMap::new();
    labels.insert("worker_id".to_string(), id.to_string());
    let prometheus = PrometheusMetricsBuilder::new("actix")
        .endpoint("/metrics")
        .registry(prometheus_registry.clone())
        .const_labels(labels)
        .build()
        .unwrap();
    App::new().wrap(prometheus)
});
```
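With 8 workers, a scrape then returns one time series per worker for every metric. Illustratively, assuming the middleware's default request counter under the `actix` namespace, the output looks something like:

```
actix_http_requests_total{endpoint="/",method="GET",status="200",worker_id="0"} 3
actix_http_requests_total{endpoint="/",method="GET",status="200",worker_id="1"} 5
...
actix_http_requests_total{endpoint="/",method="GET",status="200",worker_id="7"} 2
```

The per-worker series can still be summed on the Prometheus side, but the series count grows linearly with the worker count.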
There are probably better ways to do some of those things, but that's how I got around it.
I'm not sure what the proper solution is, but I wanted to share this in case anyone else runs into this behavior.