
Conversation

@o7si (Contributor) commented Nov 19, 2025

The /metrics endpoint returns Prometheus-format text that is incorrectly JSON-escaped, causing the output to be wrapped in double quotes, which prevents Prometheus from parsing it correctly.

Related code snippets:

res->ok(prometheus.str());

void ok(const json & response_data) {
    status = 200;
    data = safe_json_to_str(response_data);
}
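
For illustration, the quoting comes from the implicit std::string -> json conversion that happens when a plain string is passed to ok(const json &). A minimal standalone sketch, assuming nlohmann::json, which matches the json type used in the snippet above:

#include <iostream>
#include <string>
#include <nlohmann/json.hpp>

int main() {
    // A multi-line Prometheus exposition body, as produced by /metrics.
    std::string metrics = "# HELP llamacpp:prompt_tokens_total ...\nllamacpp:prompt_tokens_total 30\n";
    nlohmann::json j = metrics;       // implicit conversion: the text becomes a JSON string value
    std::cout << j.dump() << "\n";    // prints one double-quoted line with \n escapes, not raw text
    return 0;
}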

I added an ok() overload with a std::string parameter to fix this issue.

Related issue:

Note: I've checked other calls to ok() and they all pass JSON types, so this issue won't occur elsewhere.

Comment on lines 4445 to 4448
void ok(const std::string & response_data) {
    status = 200;
    data = response_data;
}
@ngxson (Collaborator)


I don't think it's a safe solution, as a std::string can also be a valid json.
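
To illustrate the concern: a std::string is itself convertible to json, so with both overloads in place the parameter type no longer tells the server whether the caller wants raw text or a JSON-escaped body. Hypothetical call sites, for illustration only:

res->ok(json{{"status", "ok"}});   // ok(const json &): serialized and escaped via safe_json_to_str
res->ok(prometheus.str());         // ok(const std::string &): body sent as raw text
res->ok(result.dump());            // also a std::string, but here the caller means JSON;
                                   // nothing at the call site distinguishes the two intents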

Instead, you can simply set the status and data directly in handle_metrics without using the ok() helper.
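
A minimal sketch of that suggestion, reusing only the members already shown in the snippets above (everything else is illustrative):

// inside handle_metrics, once the Prometheus text has been written:
std::stringstream prometheus;
// ... append the metric lines ...
res->status = 200;               // set the response fields directly
res->data   = prometheus.str();  // raw Prometheus exposition text, no JSON serialization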

@o7si (Contributor, Author)


Hi @ngxson, thank you for the guidance!

After thinking about it carefully, I agree that the situation you described does exist: a std::string can also be a valid json, so the overload is not a safe solution.

I noticed that all handlers set the HTTP response by calling either error() or ok(). To keep the handler functions consistent, I originally avoided setting the fields directly like this:

res->status = 200;
res->data = prometheus.str();

My original thinking was:

- For JSON-type return values, call this signature: ok(const json & response_data)
- For string-type return values, call this signature: ok(const std::string & response_data)

So I'm wondering which approach would be better. Should I simply use:

res->status = 200;
res->data = prometheus.str();

Or should I define specific methods like the following (see the sketch at the end of this comment):

void ok_json(const json & response_data);
void ok_text(const std::string & response_data);
void ok_html(const std::string & response_data);

Or is there another approach you'd recommend?

I want to maintain consistency with the existing codebase while properly handling different response content types.
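
For illustration only, such explicit helpers could look roughly like the sketch below; the content_type field is an assumption made for this sketch and does not appear in the snippets above:

// Hypothetical helpers, one per body type; content_type is assumed here, not confirmed.
void ok_json(const json & response_data) {
    status       = 200;
    content_type = "application/json; charset=utf-8";   // assumed field and value
    data         = safe_json_to_str(response_data);
}

void ok_text(const std::string & response_data) {
    status       = 200;
    content_type = "text/plain; charset=utf-8";          // assumed field and value
    data         = response_data;                        // body is sent verbatim
}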

@o7si (Contributor, Author) commented Nov 21, 2025

Hi, I chose the simplest approach:

res->status = 200;
res->data = prometheus.str();

When requesting the endpoint via curl, the output is as follows:

curl http://127.0.0.1:8080/metrics
# HELP llamacpp:prompt_tokens_total Number of prompt tokens processed.
# TYPE llamacpp:prompt_tokens_total counter
llamacpp:prompt_tokens_total 30
# HELP llamacpp:prompt_seconds_total Prompt process time
# TYPE llamacpp:prompt_seconds_total counter
llamacpp:prompt_seconds_total 0.152
# HELP llamacpp:tokens_predicted_total Number of generation tokens processed.
# TYPE llamacpp:tokens_predicted_total counter
llamacpp:tokens_predicted_total 10
# HELP llamacpp:tokens_predicted_seconds_total Predict process time
# TYPE llamacpp:tokens_predicted_seconds_total counter
llamacpp:tokens_predicted_seconds_total 0.299
# HELP llamacpp:n_decode_total Total number of llama_decode() calls
# TYPE llamacpp:n_decode_total counter
llamacpp:n_decode_total 10
# HELP llamacpp:n_tokens_max Largest observed n_tokens.
# TYPE llamacpp:n_tokens_max counter
llamacpp:n_tokens_max 39
# HELP llamacpp:n_busy_slots_per_decode Average number of busy slots per llama_decode() call
# TYPE llamacpp:n_busy_slots_per_decode counter
llamacpp:n_busy_slots_per_decode 1
# HELP llamacpp:prompt_tokens_seconds Average prompt throughput in tokens/s.
# TYPE llamacpp:prompt_tokens_seconds gauge
llamacpp:prompt_tokens_seconds 197.368
# HELP llamacpp:predicted_tokens_seconds Average generation throughput in tokens/s.
# TYPE llamacpp:predicted_tokens_seconds gauge
llamacpp:predicted_tokens_seconds 33.4448
# HELP llamacpp:requests_processing Number of requests processing.
# TYPE llamacpp:requests_processing gauge
llamacpp:requests_processing 0
# HELP llamacpp:requests_deferred Number of requests deferred.
# TYPE llamacpp:requests_deferred gauge
llamacpp:requests_deferred 0

