Restrict CLA check to MLC
Fix submission checker version

Added results updater GH action

Updated results summary

Results on system test

Updated results summary

Update update-results.yml

Updated results summary

Added results updater GH action

Updated results summary

Added results updater GH action

Updated results summary

Use incremental dbversion

Use incremental dbversion

Updated results summary

Delete summary_results.json

Updated results summary

Update update-results.yml

Updated results summary

Update publish.yml

Updated results summary

Update update-results.yml

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Update update-results.yml

Updated results summary

Update mkdocs.yml

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Results from self hosted Github actions - NVIDIARTX4090

Updated results summary

Updated results summary

Update update-results.yml

Updated results summary

Update update-results.yml

Updated results summary

Update update-results.yml

Updated results summary

Update update-results.yml

Updated results summary
arjunsuresh committed Oct 7, 2024
1 parent a527ebd commit 6165aea
Showing 182 changed files with 8,298 additions and 16 deletions.
1 change: 1 addition & 0 deletions .github/workflows/cla.yml
@@ -8,6 +8,7 @@ on:

 jobs:
   cla-check:
+    if: github.repository_owner == 'mlcommons'
     runs-on: ubuntu-latest
     steps:
       - name: "MLCommons CLA bot check"
1 change: 0 additions & 1 deletion .github/workflows/publish.yml
@@ -7,7 +7,6 @@ on:
  push:
    branches:
      - mlperf-inference-results-scc24
      - main
      - docs

jobs:
@@ -28,4 +28,4 @@ jobs:
         python3 -m pip install cm4mlops
     - name: Run MLPerf Inference Submission Checker
       run: |
-        cm run script --tags=run,mlperf,inference,submission,checker,_short-run --adr.submission-checker-src.tags=_repo.https://github.com/gateoverflow/inference --version=r4.1 --quiet --extra_args=" --skip-extra-files-in-root-check" --submission_dir=./
+        cm run script --tags=run,mlperf,inference,submission,checker,_short-run --adr.submission-checker-src.tags=_repo.https://github.com/gateoverflow/inference --src_version=v4.1 --quiet --extra_args=" --skip-extra-files-in-root-check" --submission_dir=./
51 changes: 51 additions & 0 deletions .github/workflows/update-results.yml
@@ -0,0 +1,51 @@
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: MLPerf inference results updater


on:
  push:
    branches: [ "main", "mlperf-inference-results-scc24" ]

jobs:
  build:

    runs-on: ubuntu-latest
    env:
      CM_INDEX: "on"
    strategy:
      fail-fast: false
      matrix:
        python-version: [ "3.10" ]

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python3 -m pip install cm4mlops
      - name: Run MLPerf Inference Submission Checker and generate results summary
        run: |
          cm run script --tags=run,mlperf,inference,submission,checker,_short-run --src_version=v4.1 --adr.submission-checker-src.tags=_repo.https://github.com/gateoverflow/inference,_branch.improve_result_generation --quiet --extra_args=" --skip-extra-files-in-root-check" --submission_dir=./ > >(tee -a out.txt) 2> >(tee -a checker_log.txt >&2)
          cm run script --tags=convert,from-csv,to-md --csv_file=summary.csv --md_file=README.md
          USER="arjunsuresh"
          [email protected]
          git config --global user.name "$USER"
          git config --global user.email "$EMAIL"
          #git remote set-url origin https://x-access-token:${{ secrets.GITHUB_TOKEN_TOKEN }}@github.com/${{ github.repository }}
          git add summary*
          echo -e 'Please download [summary.xlsx](summary.xlsx) to view the most recent results. \n ```' > temp
          tail -n 16 checker_log.txt >> temp
          echo -e '\n```\n' >> temp
          cat temp | cat - README.md > temp1
          head -n 100 temp1 > README.md
          git add README.md
          git diff-index --quiet HEAD || (git commit -am "Updated results summary" && git push origin)
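The final `run:` step above packs several shell idioms into one block. Here is a standalone sketch of just the README-update part; the file names (`checker_log.txt`, `README.md`), the 16-line tail, and the 100-line cap come from the workflow, while the sample input contents are invented for the demo:

```shell
#!/usr/bin/env bash
# Sketch of the workflow's README-update idiom: prepend a download banner plus
# the tail of the checker log, then cap the file so repeated CI runs cannot
# grow the README without bound. Demo inputs stand in for real CI artifacts.
set -euo pipefail
cd "$(mktemp -d)"

# stand-ins for the files the real workflow produces
printf 'SUMMARY: submission looks OK\n' > checker_log.txt
printf 'old results table\n' > README.md

fence=$(printf '\140\140\140')   # three backticks, built indirectly
printf 'Please download [summary.xlsx](summary.xlsx) to view the most recent results.\n%s\n' "$fence" > temp
tail -n 16 checker_log.txt >> temp   # last 16 lines of the checker log
printf '\n%s\n\n' "$fence" >> temp
cat temp README.md > temp1           # banner block on top of the previous README
head -n 100 temp1 > README.md        # keep at most 100 lines
```

The trailing `git diff-index --quiet HEAD || (git commit -am "Updated results summary" && git push origin)` line in the workflow then commits and pushes only when the working tree actually differs from HEAD, so no-op runs do not create empty commits.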
32 changes: 32 additions & 0 deletions README.md
@@ -0,0 +1,32 @@
Please download [summary.xlsx](summary.xlsx) to view the most recent results.
```
[2024-10-07 21:35:17,615 submission_checker1.py:2936 INFO] Results=9, NoResults=0, Power Results=0
[2024-10-07 21:35:17,615 submission_checker1.py:2943 INFO] ---
[2024-10-07 21:35:17,615 submission_checker1.py:2944 INFO] Closed Results=0, Closed Power Results=0
[2024-10-07 21:35:17,615 submission_checker1.py:2949 INFO] Open Results=9, Open Power Results=0
[2024-10-07 21:35:17,615 submission_checker1.py:2954 INFO] Network Results=0, Network Power Results=0
[2024-10-07 21:35:17,615 submission_checker1.py:2959 INFO] ---
[2024-10-07 21:35:17,615 submission_checker1.py:2961 INFO] Systems=8, Power Systems=0
[2024-10-07 21:35:17,616 submission_checker1.py:2962 INFO] Closed Systems=0, Closed Power Systems=0
[2024-10-07 21:35:17,616 submission_checker1.py:2967 INFO] Open Systems=8, Open Power Systems=0
[2024-10-07 21:35:17,616 submission_checker1.py:2972 INFO] Network Systems=0, Network Power Systems=0
[2024-10-07 21:35:17,616 submission_checker1.py:2977 INFO] ---
[2024-10-07 21:35:17,616 submission_checker1.py:2982 INFO] SUMMARY: submission looks OK
INFO:root: ! call "postprocess" from /home/runner/CM/repos/mlcommons@cm4mlops/script/run-mlperf-inference-submission-checker/customize.py
```

| | Organization | Availability | Division | SystemType | SystemName | Platform | Model | MlperfModel | Scenario | Result | Accuracy | number_of_nodes | host_processor_model_name | host_processors_per_node | host_processor_core_count | accelerator_model_name | accelerators_per_node | Location | framework | operating_system | notes | compliance | errors | version | inferred | has_power | Units | weight_data_types |
|---:|:---------------|:---------------|:-----------|:-------------|:-------------|:-------------------------------------------------------|:--------------------|:--------------------|:-----------|----------:|:--------------------------------------------------------------|------------------:|:----------------------------|---------------------------:|----------------------------:|:-------------------------|------------------------:|:----------------------------------------------------------------------------------------------------------|:---------------|:------------------------------------------------|:----------------------------------|-------------:|---------:|:----------|-----------:|:------------|:----------|:--------------------|
| 0 | MLCommons | available | open | datacenter | 48ed6105bd85 | 48ed6105bd85-nvidia-gpu-TensorRT-scc24-main | stable-diffusion-xl | stable-diffusion-xl | Offline | 1.13292 | CLIP_SCORE: 15.586050063371658 FID_SCORE: 236.8087101317688 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/48ed6105bd85-nvidia-gpu-TensorRT-scc24-main/stable-diffusion-xl/offline | TensorRT | Ubuntu 20.04 (linux-6.2.0-39-generic-glibc2.31) | Automated by MLCommons CM v2.3.6. | 1 | 0 | v4.1 | 0 | False | Samples/s | int8 |
| 1 | MLCommons | available | open | datacenter | e8dbfdd7ca14 | e8dbfdd7ca14-nvidia-gpu-TensorRT-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 1.13976 | CLIP_SCORE: 15.617164582014084 FID_SCORE: 233.28573786792805 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/e8dbfdd7ca14-nvidia-gpu-TensorRT-scc24-base/stable-diffusion-xl/offline | TensorRT | Ubuntu 20.04 (linux-6.2.0-39-generic-glibc2.31) | Automated by MLCommons CM v2.3.9. | 1 | 0 | v4.1 | 0 | False | Samples/s | int8 |
| 2 | MLCommons | available | open | datacenter | 48ed6105bd85 | 48ed6105bd85-nvidia-gpu-TensorRT-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 1.13598 | CLIP_SCORE: 15.586050063371658 FID_SCORE: 236.8087101317688 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/48ed6105bd85-nvidia-gpu-TensorRT-scc24-base/stable-diffusion-xl/offline | TensorRT | Ubuntu 20.04 (linux-6.2.0-39-generic-glibc2.31) | Automated by MLCommons CM v2.3.6. | 1 | 0 | v4.1 | 0 | False | Samples/s | int8 |
| 3 | MLCommons | available | open | datacenter | 13fce262fb79 | 13fce262fb79-reference-gpu-pytorch_v2.4.1-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 0.375843 | CLIP_SCORE: 15.18544016778469 FID_SCORE: 235.69504308101006 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/13fce262fb79-reference-gpu-pytorch_v2.4.1-scc24-base/stable-diffusion-xl/offline | pytorch v2.4.1 | Ubuntu 22.04 (linux-6.2.0-39-generic-glibc2.35) | Automated by MLCommons CM v2.3.9. | 1 | 0 | v4.1 | 0 | False | Samples/s | fp32 |
| 4 | MLCommons | available | open | edge | gh_action | gh_action-reference-gpu-pytorch_v2.4.1-default_config | gptj-99 | gptj-99 | Offline | 52.9478 | nan | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/gh_action-reference-gpu-pytorch_v2.4.1-default_config/gptj-99/offline | pytorch v2.4.1 | Ubuntu 22.04 (linux-6.2.0-39-generic-glibc2.35) | Automated by MLCommons CM v2.3.4. | 1 | 0 | v4.1 | 0 | False | Tokens/s | fp32 |
| 5 | MLCommons | available | open | edge | gh_action | gh_action-reference-gpu-pytorch_v2.4.1-default_config | stable-diffusion-xl | stable-diffusion-xl | Offline | 0.345721 | CLIP_SCORE: 15.18544016778469 FID_SCORE: 235.69504308101006 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/gh_action-reference-gpu-pytorch_v2.4.1-default_config/stable-diffusion-xl/offline | pytorch v2.4.1 | Ubuntu 22.04 (linux-6.2.0-39-generic-glibc2.35) | Automated by MLCommons CM v2.3.4. | 1 | 0 | v4.1 | 0 | False | Samples/s | fp32 |
| 6 | MLCommons | available | open | datacenter | 48ed6105bd85 | 48ed6105bd85-reference-gpu-pytorch_v2.1.0a0-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 0.373636 | CLIP_SCORE: 15.236237794160843 FID_SCORE: 238.78369342212613 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/48ed6105bd85-reference-gpu-pytorch_v2.1.0a0-scc24-base/stable-diffusion-xl/offline | TensorRT | Ubuntu 20.04 (linux-6.2.0-39-generic-glibc2.31) | Automated by MLCommons CM v2.3.6. | 1 | 0 | v4.1 | 0 | False | Samples/s | fp32 |
| 7 | MLCommons | available | open | datacenter | f9ac88850adc | f9ac88850adc-reference-gpu-pytorch_v2.4.1-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 0.376944 | CLIP_SCORE: 15.18544016778469 FID_SCORE: 235.69504308101006 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/f9ac88850adc-reference-gpu-pytorch_v2.4.1-scc24-base/stable-diffusion-xl/offline | pytorch v2.4.1 | Ubuntu 22.04 (linux-6.2.0-39-generic-glibc2.35) | Automated by MLCommons CM v2.3.9. | 1 | 0 | v4.1 | 0 | False | Samples/s | fp32 |
| 8 | MLCommons | available | open | datacenter | 3b07702db56d | 3b07702db56d-reference-gpu-pytorch_v2.4.1-scc24-base | stable-diffusion-xl | stable-diffusion-xl | Offline | 0.374549 | CLIP_SCORE: 15.18544016778469 FID_SCORE: 235.69504308101006 | 1 | Intel(R) Xeon(R) w7-2495X | 1 | 24 | NVIDIA GeForce RTX 4090 | 1 | open/MLCommons/results/3b07702db56d-reference-gpu-pytorch_v2.4.1-scc24-base/stable-diffusion-xl/offline | pytorch v2.4.1 | Ubuntu 22.04 (linux-6.2.0-39-generic-glibc2.35) | Automated by MLCommons CM v2.3.9. | 1 | 0 | v4.1 | 0 | False | Samples/s | fp32 |
1 change: 1 addition & 0 deletions dbversion
@@ -0,0 +1 @@
100
6 changes: 4 additions & 2 deletions docinit.sh
@@ -15,14 +15,16 @@ fi
 repo_owner=${INFERENCE_RESULTS_REPO_OWNER:-mlcommons}
 repo_branch=${INFERENCE_RESULTS_REPO_BRANCH:-main}
 repo_name=${INFERENCE_RESULTS_REPO_NAME:-inference_results_${INFERENCE_RESULTS_VERSION}}

+ver_num=$(cat dbversion)
+let ver_num++
+echo "ver_num=$ver_num" > dbversion
 if [ ! -e docs/javascripts/config.js ]; then
   if [ -n "${INFERENCE_RESULTS_VERSION}" ]; then
     echo "const results_version=\"${INFERENCE_RESULTS_VERSION}\";" > docs/javascripts/config.js;
     echo "var repo_owner=\"${repo_owner}\";" >> docs/javascripts/config.js;
     echo "var repo_branch=\"${repo_branch}\";" >> docs/javascripts/config.js;
     echo "var repo_name=\"${repo_name}\";" >> docs/javascripts/config.js;
-    ver_num=`echo ${INFERENCE_RESULTS_VERSION} | tr -cd '0-9'`
+    #ver_num=`echo ${INFERENCE_RESULTS_VERSION} | tr -cd '0-9'`
     echo "const dbVersion =\"${ver_num}\";" >> docs/javascripts/config.js;
   else
     echo "Please export INFERENCE_RESULTS_VERSION=v4.1 or the corresponding version";
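The lines added to `docinit.sh` replace the old version-string parsing with a counter read from the new `dbversion` file. A minimal standalone sketch of that read-bump-write cycle (seed value `100` as in the committed file; note that the committed script writes `ver_num=$ver_num` back into the file, which a later bare `cat dbversion` would not parse as a plain number, so this sketch writes just the number):

```shell
#!/usr/bin/env bash
# Sketch of the incremental dbversion counter: read, bump, write back.
set -euo pipefail
cd "$(mktemp -d)"

echo 100 > dbversion         # seed, as in the committed dbversion file
ver_num=$(cat dbversion)     # current counter
let ver_num++                # bash arithmetic post-increment
echo "$ver_num" > dbversion  # write the bare number so the next read works
cat dbversion                # → 101
```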
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -1,5 +1,5 @@
 site_name: MLPerf Inference Results Comparison
-repo_url: https://github.com/mlcommons/inference_results_v4.0
+repo_url: https://github.com/mlcommons/cm4mlperf-inference
 theme:
   name: material
   logo: img/logo_v2.svg
1 change: 1 addition & 0 deletions open/MLCommons/code/gptj-99/README.md
@@ -0,0 +1 @@
TBD
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy | Throughput | Latency (in ms) |
|---------------------|------------|-----------------------|--------------|-------------------|
| stable-diffusion-xl | offline | (15.18544, 235.69504) | 0.376 | - |
@@ -0,0 +1,7 @@
{
  "starting_weights_filename": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "no"
}
@@ -0,0 +1,94 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.0.1

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo gateoverflow@cm4mlops --checkout=cfdb5f7dd34057f75d35fa296becaeae4aadf532

cm run script \
--tags=app,mlperf,inference,generic,_reference,_sdxl,_pytorch,_cuda,_test,_r4.1-dev_default,_float16,_offline \
--quiet=true \
--env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes \
--env.CM_QUIET=yes \
--env.CM_MLPERF_IMPLEMENTATION=reference \
--env.CM_MLPERF_MODEL=sdxl \
--env.CM_MLPERF_RUN_STYLE=test \
--env.CM_MLPERF_BACKEND=pytorch \
--env.CM_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter \
--env.CM_MLPERF_CLEAN_ALL=True \
--env.CM_MLPERF_DEVICE=cuda \
--env.CM_MLPERF_USE_DOCKER=True \
--env.CM_MLPERF_MODEL_PRECISION=float16 \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_LOADGEN_SCENARIO=Offline \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.CM_MLPERF_INFERENCE_VERSION=4.1-dev \
--env.CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r4.1-dev_default \
--env.CM_MLPERF_SUBMISSION_GENERATION_STYLE=short \
--env.CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX4=scc24-base \
--env.CM_MLPERF_LOADGEN_ALL_MODES=yes \
--env.CM_MLPERF_LAST_RELEASE=v4.0 \
--env.CM_TMP_CURRENT_PATH=/home/arjun/actions-runner/_work/cm4mlops/cm4mlops \
--env.CM_TMP_PIP_VERSION_STRING= \
--env.CM_CLEAN_EXTRA_CACHE_RM_TAGS=scc24-main \
--env.CM_MODEL=sdxl \
--env.CM_MLPERF_LOADGEN_COMPLIANCE=no \
--env.CM_MLPERF_CLEAN_SUBMISSION_DIR=yes \
--env.CM_RERUN=yes \
--env.CM_MLPERF_LOADGEN_EXTRA_OPTIONS= \
--env.CM_MLPERF_LOADGEN_MODE=performance \
--env.CM_MLPERF_LOADGEN_SCENARIOS,=Offline \
--env.CM_MLPERF_LOADGEN_MODES,=performance,accuracy \
--env.CM_OUTPUT_FOLDER_NAME=test_results \
--add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
--add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
--add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
--add_deps_recursive.submission-checker.tags=_short-run \
--add_deps_recursive.coco2014-preprocessed.tags=_size.50,_with-sample-ids \
--add_deps_recursive.coco2014-dataset.tags=_size.50,_with-sample-ids \
--add_deps_recursive.nvidia-preprocess-data.extra_cache_tags=scc24-base \
--v=False \
--print_env=False \
--print_deps=False \
--dump_version_info=True \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.SDXL_CHECKPOINT_PATH=/home/cmuser/CM/repos/local/cache/6be1f30ecbde4c4e/stable_diffusion_fp16
```
*Note: to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
re-pull gateoverflow@cm4mlops without the pinned checkout and clean the CM cache as follows:*

```bash
cm rm repo gateoverflow@cm4mlops
cm pull repo gateoverflow@cm4mlops
cm rm cache -f

```

## Results

Platform: 13fce262fb79-reference-gpu-pytorch_v2.4.1-scc24-base

Model Precision: fp32

### Accuracy Results
`CLIP_SCORE`: `15.18544`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`
`FID_SCORE`: `235.69504`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`

### Performance Results
`Samples per second`: `0.375843`
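As a quick illustration (not part of the submission tooling), the closed-division windows quoted above can be checked with a small helper; the threshold numbers are copied from this report:

```shell
#!/usr/bin/env bash
# Hypothetical range check against the closed-division accuracy windows
# quoted above; this open-division run falls outside both.
set -euo pipefail

in_range() {  # in_range VALUE LO HI -> prints yes/no
  awk -v v="$1" -v lo="$2" -v hi="$3" 'BEGIN { print (v >= lo && v <= hi) ? "yes" : "no" }'
}

in_range 15.18544 31.68632 31.81332    # CLIP_SCORE  -> prints no
in_range 235.69504 23.01086 23.95008   # FID_SCORE   -> prints no
```

Open-division submissions are not held to these closed-division windows, which is presumably why the submission checker still reports the run as OK.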