Add configurable metric selection and multi-metric display in Evaluation UI #162

@Aki-07

Description

The evaluation UI currently supports only ROUGE, but the backend (adk-python) already accepts multiple metrics through RunEvalRequest and exposes the available ones via /metrics-info. This issue proposes UI support for dynamically selecting metrics (e.g., ROUGE, BERTScore, LLM-as-judge, path accuracy) and displaying multiple evaluation results.

No backend changes are required; only frontend updates are needed to:

  • Fetch available metrics from /metrics-info
  • Add a metric selection dropdown in the evaluation panel
  • Include selected metrics in the /eval-sets/{eval_set_id}/run request payload
  • Render multiple metric results dynamically in the results view
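As a rough sketch of the first and third steps, the frontend could fetch the metric list and fold the user's selection into the run payload. All names below (`MetricInfo`, `buildRunRequest`, the `evalIds`/`evalMetrics`/`metricName` field names) are assumptions for illustration, not the actual adk-web or adk-python schema:

```typescript
// Hypothetical shape of one entry returned by GET /metrics-info.
interface MetricInfo {
  metricName: string;       // e.g. "rouge_1", "bert_score" (assumed field name)
  description?: string;
}

// Hypothetical shape of the POST /eval-sets/{eval_set_id}/run body.
interface RunEvalPayload {
  evalIds: string[];
  evalMetrics: { metricName: string }[];
}

// Fetch the available metrics so the dropdown can be populated dynamically.
async function fetchAvailableMetrics(baseUrl: string): Promise<MetricInfo[]> {
  const res = await fetch(`${baseUrl}/metrics-info`);
  if (!res.ok) throw new Error(`metrics-info request failed: ${res.status}`);
  return (await res.json()) as MetricInfo[];
}

// Build the run request body, including every metric the user selected
// instead of a hard-coded ROUGE entry.
function buildRunRequest(
  evalIds: string[],
  selectedMetrics: string[],
): RunEvalPayload {
  return {
    evalIds,
    evalMetrics: selectedMetrics.map((m) => ({ metricName: m })),
  };
}
```

The results view would then iterate over the metrics present in each response entry rather than assuming a single ROUGE score.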
