Autometrics provides a macro that makes it easy to instrument any function with the most useful metrics: request rate, error rate, and latency. It then uses the instrumented function names to generate powerful Prometheus queries to help you identify and debug issues in production.
- ✨ `#[autometrics]` macro instruments any function or `impl` block to track the most useful metrics
- 💡 Writes Prometheus queries so you can understand the data generated without knowing PromQL
- 🔗 Injects links to live Prometheus charts directly into each function's doc comments
- 🔍 Identify commits that introduced errors or increased latency
- 🚨 Define alerts using SLO best practices directly in your source code
- 📊 Grafana dashboards work out of the box to visualize the performance of instrumented functions & SLOs
- ⚙️ Configurable metric collection library (`opentelemetry`, `prometheus`, `prometheus-client`, or `metrics`)
- 📍 Attach exemplars to connect metrics with traces
- ⚡ Minimal runtime overhead
See autometrics.dev for more details on the ideas behind autometrics.
```rust
use autometrics::autometrics;

#[autometrics]
pub async fn create_user() {
    // Now this function has metrics! 📈
}
```
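As the feature list notes, the macro can also be applied to a whole `impl` block to instrument every method inside it. A minimal sketch (the `Database` type and its method are hypothetical, shown only to illustrate where the attribute goes):

```rust
use autometrics::autometrics;

struct Database;

// Applying the macro to the impl block instruments each method in it
#[autometrics]
impl Database {
    pub async fn save_user(&self) {
        // ...
    }
}
```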
**See an example of a PromQL query generated by Autometrics**

If your eyes glaze over when you see this, don't worry! Autometrics writes complex queries like this so you don't have to!
```
(
  sum by (function, module, commit, version) (
    rate({__name__=~"function_calls(_count)?(_total)?",function="create_user",result="error"}[5m])
    * on (instance, job) group_left (version, commit)
    last_over_time(build_info[1s])
  )
)
/
(
  sum by (function, module, commit, version) (
    rate({__name__=~"function_calls(_count)?(_total)?",function="create_user"}[5m])
    * on (instance, job) group_left (version, commit)
    last_over_time(build_info[1s])
  )
)
```
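The `result="error"` label selected in the numerator of that query comes from the instrumented function's return type: for functions returning a `Result`, autometrics records each call as `ok` or `error`. A sketch of what that looks like (the error type here is hypothetical):

```rust
use autometrics::autometrics;

#[derive(Debug)]
struct CreateUserError;

// Each call is labeled result="ok" or result="error"
// based on the Result variant the function returns
#[autometrics]
pub async fn create_user() -> Result<(), CreateUserError> {
    // ...
    Ok(())
}
```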
Here is a demo of jumping from function docs to live Prometheus charts:
- Add `autometrics` to your project:

  ```sh
  cargo add autometrics --features=prometheus-exporter
  ```
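  This adds a dependency entry along these lines to your `Cargo.toml` (the version shown is a placeholder for whatever `cargo add` resolves, not a pinned recommendation):

  ```toml
  [dependencies]
  autometrics = { version = "*", features = ["prometheus-exporter"] }
  ```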
- Instrument your functions with the `#[autometrics]` macro

  **Tip: Adding autometrics to all functions using the `tracing::instrument` macro**

  You can use a search and replace to add autometrics to all functions instrumented with `tracing::instrument`.

  Replace:

  ```rust
  #[instrument]
  ```

  With:

  ```rust
  #[instrument]
  #[autometrics]
  ```

  And then let Rust Analyzer tell you which files you need to add `use autometrics::autometrics` to the top of.

  **Tip: Adding autometrics to all `pub` functions (not necessarily recommended 😅)**

  You can use a search and replace to add autometrics to all public functions. Yes, this is a bit nuts.

  Use a regular expression search to replace:

  ```
  (pub (?:async)? fn.*)
  ```

  With:

  ```
  #[autometrics] $1
  ```

  And then let Rust Analyzer tell you which files you need to add `use autometrics::autometrics` to the top of.

- Export the metrics for Prometheus
  **For projects not currently using Prometheus metrics**

  Autometrics includes optional functions to help collect and prepare metrics to be collected by Prometheus.

  In your `main` function, initialize the `prometheus_exporter`:

  ```rust
  pub fn main() {
      prometheus_exporter::init();
      // ...
  }
  ```

  And create a route on your API (probably mounted under `/metrics`) that returns the following:

  ```rust
  use autometrics::prometheus_exporter::{self, PrometheusResponse};

  /// Export metrics for Prometheus to scrape
  pub fn get_metrics() -> PrometheusResponse {
      prometheus_exporter::encode_http_response()
  }
  ```
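  For example, with a framework such as axum, the handler above might be wired up as follows. This is a sketch, not the only supported setup: axum is an assumption here, and whether `PrometheusResponse` plugs directly into your framework's handler signature can depend on the framework and feature flags you use.

  ```rust
  use autometrics::prometheus_exporter::{self, PrometheusResponse};
  use axum::{routing::get, Router};

  /// Export metrics for Prometheus to scrape
  pub fn get_metrics() -> PrometheusResponse {
      prometheus_exporter::encode_http_response()
  }

  // Sketch: mount the handler on a /metrics route
  pub fn app() -> Router {
      Router::new().route("/metrics", get(|| async { get_metrics() }))
  }
  ```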
  **For projects already using custom Prometheus metrics**

  Configure `autometrics` to use the same underlying metrics library you use, with the appropriate feature flag: `opentelemetry`, `prometheus`, `prometheus-client`, or `metrics`.

  ```toml
  [dependencies]
  autometrics = { version = "*", features = ["prometheus"], default-features = false }
  ```

  The `autometrics` metrics will be produced alongside yours.

  **Note:** You must ensure that you are using the exact same version of the metrics library as `autometrics`. If not, the `autometrics` metrics will not appear in your exported metrics, because Cargo will include both versions of the crate and the global statics used for the metrics registry will be different.

  You do not need to use the Prometheus exporter functions this library provides (you can leave out the `prometheus-exporter` feature flag), and you do not need a separate endpoint for autometrics' metrics.

- Configure Prometheus to scrape your metrics endpoint
- (Optional) If you have Grafana, import the Autometrics dashboards for an overview and detailed view of the function metrics
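For the scrape step, a minimal `prometheus.yml` might look like the following. The job name, port, and scrape interval here are illustrative assumptions; adjust them to wherever your `/metrics` route is actually served.

```yaml
scrape_configs:
  - job_name: my-app
    # Keep the interval comfortably shorter than the rate() windows you query with
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:3000"]
```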
To see autometrics in action:
- Install Prometheus locally
- Run the complete example:

  ```sh
  cargo run -p example-full-api
  ```

- Hover over the function names to see the generated query links (like in the demo above) and view the Prometheus charts
Using each of the following metrics libraries, tracking metrics with the `autometrics` macro adds approximately:

- `prometheus`: 140-150 nanoseconds
- `prometheus-client`: 150-250 nanoseconds
- `metrics`: 550-650 nanoseconds
- `opentelemetry`: 550-750 nanoseconds

These were calculated on a 2021 MacBook Pro with the M1 Max chip and 64 GB of RAM.
To run the benchmarks yourself, run the following command, replacing `BACKEND` with the metrics library of your choice:

```sh
cargo bench --features prometheus-exporter,BACKEND
```
Issues, feature suggestions, and pull requests are very welcome!
If you are interested in getting involved:
- Join the conversation on Discord
- Ask questions and share ideas in the GitHub Discussions
- Take a look at the overall Autometrics Project Roadmap