Add keep option to distinct nvbench #16497
Conversation
.add_int64_axis("NumRows", {10'000, 100'000, 1'000'000, 10'000'000});
.add_string_axis("keep", {"any", "first", "last", "none"})
.add_int64_axis("cardinality",
                {100, 1'000, 10'000, 100'000, 1'000'000, 10'000'000, 100'000'000, 1'000'000'000})
We can just decrease the default values in the axis.
One non-blocking suggestion; otherwise LGTM.
if (cardinality > num_rows) {
  state.skip("cardinality > num_rows");
  return;
}
I would prefer to omit this skipping condition. I recognize that we can't have 1M distinct elements in 1K rows, but this condition adds a lot of friction when sweeping NumRows for the high-cardinality case. It forces me to run a full factorial of NumRows and cardinality values and then filter the outputs for the highest unskipped cardinality for each NumRows.
I'll rewrite this logic. Thanks for the feedback!
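One way to rewrite the logic without skipping, as a minimal sketch: clamp the cardinality to the row count so every (NumRows, cardinality) combination runs with a valid configuration. The helper name `effective_cardinality` is hypothetical and not from the PR; this is only an illustration of the clamping idea, not the actual change.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper: rather than calling state.skip() when cardinality
// exceeds num_rows, clamp it so the sweep produces a result for every
// axis combination. The highest meaningful cardinality is num_rows itself.
std::int64_t effective_cardinality(std::int64_t cardinality, std::int64_t num_rows)
{
  return std::min(cardinality, num_rows);
}
```

With this approach, a sweep over NumRows at the largest cardinality axis value simply degenerates to the all-distinct case instead of being skipped.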
Hmm. @GregoryKimball I reviewed the NVBench docs and I don't see a way to filter out certain jobs except by skipping them. https://github.com/NVIDIA/nvbench/blob/main/docs/benchmarks.md#beware-combinatorial-explosion-is-lurking
We might be able to use a string axis like {"100,100", "100,1000", ..., "1000000000,1000000000"} and parse each value into a (NumRows, cardinality) pair, but that would be hard to maintain.
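For reference, the string-axis workaround mentioned above could be parsed with a small helper like the following. This is a hedged sketch of the idea being discussed, not code from the PR; the function name `parse_axis_pair` is made up for illustration.

```cpp
#include <cstdint>
#include <string>
#include <utility>

// Hypothetical parser for a combined "num_rows,cardinality" string axis
// value, e.g. "100,1000" -> {100, 1000}. This avoids the full factorial of
// two int64 axes, at the cost of hand-maintaining the value list.
std::pair<std::int64_t, std::int64_t> parse_axis_pair(std::string const& value)
{
  auto const comma = value.find(',');
  return {std::stoll(value.substr(0, comma)),
          std::stoll(value.substr(comma + 1))};
}
```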
> I don't see a way to filter out certain jobs except by skipping them.
NVIDIA/nvbench#80 can solve this issue but the PR has been stalled for a while.
Co-authored-by: Yunsong Wang <[email protected]>
Thanks guys, here is a performance snapshot of these benchmarks on A100. I would like to merge this update ASAP and then have @srinivasyadav18 pull the changes into #16484. I'm noticing that these throughput numbers on A100 are about 10x lower than what @srinivasyadav18 posted for H100. This is another reason I'm interested in running wider benchmarks on #16484.
@srinivasyadav18 Can you approve?
…inct-benchmark-keep
CI is still failing due to styling issues.
Otherwise LGTM! Thanks!
Just fixed the CI style check. I'll merge this when CI passes.
/merge
Description
This PR adopts some work from @srinivasyadav18 with additional modifications. This is meant to complement #16484.
Checklist