Query performance benchmark #16

@jiekun

Description

Is your feature request related to a problem? Please describe

In previous internal benchmarks, query performance was compared only roughly: data was ingested manually for several days, and query duration differences were judged subjectively by navigating different pages of the Jaeger frontend. Although VictoriaTraces performed comparably to ClickHouse and significantly outperformed Elasticsearch, these results were neither accurate nor quantifiable.

Describe the solution you'd like

It would be good to design a re-runnable benchmark that measures query duration for different scenarios:

  1. Filter traces by condition.
  2. Filter traces by trace_id.

Given that the time range filter significantly impacts performance, the benchmark should cover queries across large time ranges, short time ranges, fresh data, old data, and more. A rough sketch of such a harness is outlined below.
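A minimal sketch of what such a re-runnable benchmark could look like, written in Go against a Jaeger-compatible HTTP query API. The base URL, path prefix, service name, trace ID, scenario names, and time ranges below are illustrative assumptions rather than anything specified in this issue; reporting percentiles over repeated requests is one way to make the results quantifiable and comparable across backends.

```go
// benchmark_queries.go: a minimal, re-runnable query-latency benchmark sketch.
// Assumptions (not from the issue): the target exposes a Jaeger-compatible
// HTTP query API under -base-url (e.g. /api/traces?service=...&start=...&end=...
// and /api/traces/{traceID}); the service name and trace ID are placeholders.
package main

import (
	"flag"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"sort"
	"time"
)

type scenario struct {
	name string
	path string // request path + query string, relative to the base URL
}

func buildScenarios(now time.Time) []scenario {
	us := func(t time.Time) string { return fmt.Sprintf("%d", t.UnixMicro()) }
	q := func(svc string, start, end time.Time) string {
		v := url.Values{}
		v.Set("service", svc)
		v.Set("start", us(start))
		v.Set("end", us(end))
		v.Set("limit", "20")
		return "/api/traces?" + v.Encode()
	}
	const svc = "example-service" // placeholder service name
	return []scenario{
		{"filter/short-range/fresh", q(svc, now.Add(-15*time.Minute), now)},
		{"filter/long-range/fresh", q(svc, now.Add(-24*time.Hour), now)},
		{"filter/short-range/old", q(svc, now.Add(-7*24*time.Hour), now.Add(-7*24*time.Hour+15*time.Minute))},
		{"filter/long-range/old", q(svc, now.Add(-8*24*time.Hour), now.Add(-7*24*time.Hour))},
		{"trace-id-lookup", "/api/traces/0123456789abcdef0123456789abcdef"}, // placeholder trace ID
	}
}

func main() {
	baseURL := flag.String("base-url", "http://localhost:16686", "Jaeger-compatible query endpoint")
	repeats := flag.Int("repeats", 20, "requests per scenario")
	flag.Parse()

	client := &http.Client{Timeout: 60 * time.Second}
	for _, sc := range buildScenarios(time.Now()) {
		durations := make([]time.Duration, 0, *repeats)
		for i := 0; i < *repeats; i++ {
			start := time.Now()
			resp, err := client.Get(*baseURL + sc.path)
			if err != nil {
				fmt.Printf("%-28s request error: %v\n", sc.name, err)
				continue
			}
			// Read the full response so the measurement includes transfer time.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			durations = append(durations, time.Since(start))
		}
		if len(durations) == 0 {
			continue
		}
		sort.Slice(durations, func(i, j int) bool { return durations[i] < durations[j] })
		p50 := durations[len(durations)/2]
		p99 := durations[(len(durations)*99)/100]
		fmt.Printf("%-28s p50=%v p99=%v (n=%d)\n", sc.name, p50, p99, len(durations))
	}
}
```

The `-base-url` default and the exact API path prefix would need to be adjusted to the deployment under test (VictoriaTraces, ClickHouse-backed Jaeger, or Elasticsearch-backed Jaeger), so the same scenario list can be replayed against each backend and against fresh vs. aged data.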

Describe alternatives you've considered

No response

Additional information

No response

Metadata


    Labels

    documentation (Improvements or additions to documentation)
