forked from mlcommons/inference
OpenShift AI Caikit+TGIS MLPerf Inference Implementation for Llama2-70b #1
Open
Maxusmusti wants to merge 75 commits into master from api-server
Conversation
Maxusmusti force-pushed the api-server branch from 6809071 to 86e3b72 on January 22, 2024 at 17:14
… compliance tests (mlcommons#1576)
* Fix offline_min_samples in submission checker and mlcommons#1569
* Removed mlperf.conf from llama2 directory to avoid confusion
* Update submission_checker.py
* Fixes for 4.0
* Cleanup compliance dir check for models without compliance tests
Maxusmusti force-pushed the api-server branch from 6a43fa5 to 15a8805 on January 25, 2024 at 19:24
…emote 'compliance/sources_checksums.json' (mlcommons#1582)
Co-authored-by: mlcommons-bot <null>
Co-authored-by: Miro <[email protected]>
…'compliance/check.py' (mlcommons#1587)
Co-authored-by: mlcommons-bot <null>
* Ignore trailing whitespace lines in spl.txt files.
* Remove fix from sync'ed power_checker.py.
* Reformat according to black.
…mlcommons#1591)
* Add support to dump 10 compliance images during accuracy run for SDXL
* Fix typo
* Dump caption.txt in the same path
…_log_sampling_target is enabled (mlcommons#1599)
* Fix loadgen token metrics latency constraints
* Update perf constraints check for token metrics
* Add equal issue mode for LLM models
* Add sample length check to test06
* Remove spaces in token metrics recommendation
* Add important item to Llama readme
* Fix bug: number of tokens logged before computing them
* Fix typo: lenght -> length
* Enable equal issue mode for LLM benchmarks
* Reduce min_query_count to 1 for server/MS/SS
* Remove scenario
* Remove min_query_count so the default is used; revoke padding change for equal issue offline
* Pad min_queries, not samples_per_query, for non-offline
* Add documentation to the sample equal issue mode
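The padding change in the commit above (pad min_queries rather than samples_per_query for non-offline scenarios) amounts to rounding the query count up to a multiple of the loaded sample count, so that every sample is issued an equal number of times. A minimal sketch of that idea follows; the function name and parameters are illustrative, not LoadGen's actual API:

```python
def pad_to_equal_issue(min_queries: int, num_samples: int) -> int:
    """Round min_queries up to the next multiple of num_samples.

    In equal-issue mode each loaded sample should be issued the same
    number of times, so the minimum query count is padded up to a
    whole number of passes over the sample set.
    """
    remainder = min_queries % num_samples
    if remainder == 0:
        return min_queries
    return min_queries + (num_samples - remainder)

# e.g. 100 minimum queries over a 24-sample set pads to 120 (5 full passes)
```

For the Offline scenario, padding samples_per_query instead would change the size of the single large query, which is why the commit restricts the min_queries padding to non-offline scenarios.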
Co-authored-by: Miro <[email protected]>
* Update README.md: No longer need custom fork as the relevant changes are in the inference repository
* Update dataset.py
Co-authored-by: Miro <[email protected]>
Co-authored-by: Miro <[email protected]>
…and dlrmv2 models (mlcommons#1604)
* Update README.md: Add CM commands to download Stable Diffusion models
* Update README.md
* Update README.md
* Turn equal issue mode off for Llama2 TEST06
* Add TEST06 to the output dir
* Fix submission checker and TEST06 for Llama2
* Remove redundant line
* Move test_dir check
Maxusmusti force-pushed the api-server branch from cb93b59 to 258c9c6 on February 12, 2024 at 14:52
…UNet) (mlcommons#1624)
Currently, 3D-UNet is the only workload using equal-issue mode in the Offline scenario. A recent code change to LLM equal-issue mode caused the 3D-UNet accuracy run to issue more than one query, bloating the accuracy log and failing the accuracy-checking script. This change fixes the problem described above.
Maxusmusti force-pushed the api-server branch from c5157a9 to 53dd0be on February 12, 2024 at 20:12
* Hotfix: DLRMv2 Audit TEST01 fallback failure
DLRMv2 Audit TEST01 may take the fallback route, which the accuracy check script (accuracy-dlrm.py) did not expect: it always requires the entire sample set to be in the accuracy log, while Audit TEST01 may generate only a subset. This fixes the Audit TEST01 failure described above.
* Typo fix
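The fix above changes the accuracy check from requiring the full sample set to tolerating a subset, as the TEST01 fallback route produces. A minimal sketch of that relaxed check, assuming hypothetical names (`check_accuracy_log` and its arguments are not the actual accuracy-dlrm.py interface):

```python
def check_accuracy_log(logged_ids, total_samples):
    """Validate an accuracy log that may contain only a subset of samples.

    Rejects sample ids outside the expected range, but no longer requires
    every sample to appear (the TEST01 fallback logs a subset only).
    """
    expected = set(range(total_samples))
    logged = set(logged_ids)
    unknown = logged - expected
    if unknown:
        raise ValueError(f"unexpected sample ids in log: {sorted(unknown)[:5]}")
    # The stricter pre-fix behavior would also assert logged == expected here,
    # which fails on a TEST01 fallback run.
    return len(logged)
```

The accuracy metric is then computed only over the samples actually present in the log, rather than assuming full coverage.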
Maxusmusti force-pushed the api-server branch from 6ebfa88 to 230d495 on February 27, 2024 at 19:52
No description provided.