## What
Both `llm.api::chat()` and `llm.api::agent()` return a `usage` list with token counts, but no `cost` field. Surface a numeric `usage$cost` (in USD) when the provider exposes pricing info or llm.api can compute it from a per-model price table.
## Why
corteza 0.6.6 now displays per-subagent cumulative cost in `/agents` (see cornball-ai/corteza#79). The accumulator already handles `usage$cost` if it's present:
```r
if (!is.null(usage$cost) && !is.na(usage$cost)) {
  info$cumulative_cost <- prev + as.numeric(usage$cost)
}
```
But the installed llm.api never sets it, so `/agents` shows `?` for cost across all providers. Once llm.api surfaces a numeric `cost`, corteza picks it up with zero downstream change.
## Current shape
`agent()` (Anthropic-style names):
```r
usage = list(
  input_tokens = ...,
  output_tokens = ...,
  total_tokens = ...
)
```
`chat()` (OpenAI-style names — also inconsistent with `agent()`):
```r
usage = list(
  prompt_tokens = ...,
  completion_tokens = ...,
  total_tokens = ...
)
```
Worth normalizing the field names while at it, but that's a separate concern from cost.
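Though normalization is a separate concern, the mapping itself would be tiny. A hypothetical sketch (`normalize_usage` is not existing llm.api code):

```r
# Map OpenAI-style usage names onto the Anthropic-style names agent() uses.
# Leaves already-normalized lists untouched.
normalize_usage <- function(usage) {
  if (!is.null(usage$prompt_tokens) && is.null(usage$input_tokens)) {
    usage$input_tokens      <- usage$prompt_tokens
    usage$output_tokens     <- usage$completion_tokens
    usage$prompt_tokens     <- NULL
    usage$completion_tokens <- NULL
  }
  usage
}
```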
## Proposed shape
```r
usage = list(
  input_tokens = ...,
  output_tokens = ...,
  total_tokens = ...,
  cost = 0.0012  # USD; NA when not available
)
```
## Implementation sketch
- Anthropic / OpenAI: their HTTP responses include token counts; cost can be computed from a per-model $/M-input and $/M-output price table baked into llm.api. The table will drift; document where to update it.
- Moonshot: no cost in the response. Either omit (NA) or compute from the same price table.
- Ollama: local inference; cost = 0 (genuinely free, which is semantically different from "no pricing known").
A simple `compute_cost(provider, model, input_tokens, output_tokens)` helper would do it. Return NA when no pricing entry exists rather than 0, so consumers can distinguish "free" from "unknown".
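A minimal sketch of that helper, assuming a hand-maintained price table (model names and prices below are illustrative placeholders, not real llm.api data):

```r
# Illustrative pricing in USD per million tokens. Placeholder values only;
# real prices drift and the table must be kept current.
.price_table <- list(
  "claude-sonnet" = list(input = 3.00, output = 15.00),
  "gpt-4o"        = list(input = 2.50, output = 10.00)
)

compute_cost <- function(provider, model, input_tokens, output_tokens) {
  if (identical(provider, "ollama")) {
    return(0)  # local inference: genuinely free
  }
  entry <- .price_table[[model]]
  if (is.null(entry)) {
    return(NA_real_)  # no pricing entry: "unknown", not "free"
  }
  (input_tokens * entry$input + output_tokens * entry$output) / 1e6
}
```

Returning `NA_real_` (rather than plain `NA`, which is logical) keeps `usage$cost` numeric either way, so downstream `as.numeric()` calls stay no-ops.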
## Acceptance
- `usage$cost` is present in the return of `agent()` and `chat()`.
- Numeric when the provider/model has a price entry; NA otherwise.
- corteza's `/agents` shows a real $ figure for Anthropic/OpenAI subagents instead of `?`.