@chandlerc
Contributor

Even with --benchmark_dry_run, the benchmarks that use batching still do one batch at a minimum as that's inherent to how batching works in the benchmark framework.

This means that the minimum batch size can (and in practice does) trigger timeouts by forcing 1k iterations in the test run that is just trying to ensure the benchmark doesn't crash in some way.

Reduce the minimum size to 128 instead of 1k for this benchmark, which should put it (much) further from any timeout limit. It also still seems perfectly effective for getting good benchmark data -- I think the original value was set much too aggressively.

@chandlerc chandlerc requested a review from a team as a code owner November 26, 2025 08:48
@chandlerc chandlerc requested review from josh11b and removed request for a team November 26, 2025 08:48
