Results from self-hosted GitHub Actions - NVIDIA RTX 4090
arjunsuresh committed Oct 10, 2024
1 parent 3a8ef93 commit 7554d5b
Showing 24 changed files with 1,385 additions and 0 deletions.
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy (CLIP, FID) | Throughput (samples/s) | Latency (in ms) |
|---------------------|----------|-----------------------|------------------------|-----------------|
| stable-diffusion-xl | offline  | (15.18544, 235.69504) | 0.374                  | -               |
@@ -0,0 +1,94 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.0.1

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo gateoverflow@cm4mlops --checkout=50431589d01cd60060a0d1be3c1f8193d99c6144

cm run script \
--tags=app,mlperf,inference,generic,_reference,_sdxl,_pytorch,_cuda,_test,_r4.1-dev_default,_float16,_offline \
--quiet=true \
--env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes \
--env.CM_QUIET=yes \
--env.CM_MLPERF_IMPLEMENTATION=reference \
--env.CM_MLPERF_MODEL=sdxl \
--env.CM_MLPERF_RUN_STYLE=test \
--env.CM_MLPERF_BACKEND=pytorch \
--env.CM_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter \
--env.CM_MLPERF_CLEAN_ALL=True \
--env.CM_MLPERF_DEVICE=cuda \
--env.CM_MLPERF_USE_DOCKER=True \
--env.CM_MLPERF_MODEL_PRECISION=float16 \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_LOADGEN_SCENARIO=Offline \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.CM_MLPERF_INFERENCE_VERSION=4.1-dev \
--env.CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r4.1-dev_default \
--env.CM_MLPERF_SUBMISSION_GENERATION_STYLE=short \
--env.CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX4=scc24-base \
--env.CM_DOCKER_IMAGE_NAME=scc24-reference \
--env.CM_MLPERF_LOADGEN_ALL_MODES=yes \
--env.CM_MLPERF_LAST_RELEASE=v4.0 \
--env.CM_TMP_CURRENT_PATH=/home/arjun/actions-runner/_work/cm4mlops/cm4mlops \
--env.CM_TMP_PIP_VERSION_STRING= \
--env.CM_MODEL=sdxl \
--env.CM_MLPERF_LOADGEN_COMPLIANCE=no \
--env.CM_MLPERF_CLEAN_SUBMISSION_DIR=yes \
--env.CM_RERUN=yes \
--env.CM_MLPERF_LOADGEN_EXTRA_OPTIONS= \
--env.CM_MLPERF_LOADGEN_MODE=performance \
--env.CM_MLPERF_LOADGEN_SCENARIOS,=Offline \
--env.CM_MLPERF_LOADGEN_MODES,=performance,accuracy \
--env.CM_OUTPUT_FOLDER_NAME=test_results \
--add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
--add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
--add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
--add_deps_recursive.submission-checker.tags=_short-run \
--add_deps_recursive.coco2014-preprocessed.tags=_size.50,_with-sample-ids \
--add_deps_recursive.coco2014-dataset.tags=_size.50,_with-sample-ids \
--add_deps_recursive.nvidia-preprocess-data.extra_cache_tags=scc24-base \
--v=False \
--print_env=False \
--print_deps=False \
--dump_version_info=True \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.SDXL_CHECKPOINT_PATH=/home/cmuser/CM/repos/local/cache/6be1f30ecbde4c4e/stable_diffusion_fp16
```
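
After the run finishes, the raw LoadGen output can be checked directly. The sketch below is illustrative only: it assumes the standard LoadGen file name `mlperf_log_summary.txt` and that results land under the `OUTPUT_BASE_DIR` passed above; the exact directory layout is created by CM and may differ.

```bash
# Illustrative sketch: find LoadGen summaries under the results directory
# given via OUTPUT_BASE_DIR above and print the Offline throughput line.
find /home/arjun/scc_gh_action_results -name mlperf_log_summary.txt \
  -exec grep -H "Samples per second" {} \;
```
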
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply re-pull gateoverflow@cm4mlops without the `--checkout` option and clean the CM cache as follows:*

```bash
cm rm repo gateoverflow@cm4mlops
cm pull repo gateoverflow@cm4mlops
cm rm cache -f

```

## Results

Platform: 41485dfb4f36-reference-gpu-pytorch_v2.4.1-scc24-base_cu124

Model Precision: fp32

### Accuracy Results
`CLIP_SCORE`: `15.18544`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`
`FID_SCORE`: `235.69504`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`
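
As a quick sanity check on the bounds quoted above (this is not the official submission checker), the measured score can be compared against the closed-division range; the values below are copied from this report.

```bash
# Illustrative sketch: compare the measured CLIP score against the
# closed-division bounds quoted above (not the official checker).
clip_score=15.18544
awk -v s="$clip_score" 'BEGIN { exit !(s >= 31.68632 && s <= 31.81332) }' \
  && echo "CLIP_SCORE within closed-division bounds" \
  || echo "CLIP_SCORE outside closed-division bounds"
```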

### Performance Results
`Samples per second`: `0.373501`
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy (CLIP, FID) | Throughput (samples/s) | Latency (in ms) |
|---------------------|----------|-----------------------|------------------------|-----------------|
| stable-diffusion-xl | offline  | (15.47816, 232.17873) | 1.134                  | -               |
@@ -0,0 +1,96 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.2.0-39-generic-x86_64-with-glibc2.29
* CPU version: x86_64
* Python version: 3.8.10 (default, Sep 11 2024, 16:02:53) [GCC 9.4.0]
* MLCommons CM version: 3.0.2

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo gateoverflow@cm4mlops --checkout=50431589d01cd60060a0d1be3c1f8193d99c6144

cm run script \
--tags=app,mlperf,inference,generic,_nvidia,_sdxl,_tensorrt,_test,_r4.1-dev_default,_float16,_offline \
--quiet=true \
--env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes \
--env.CM_QUIET=yes \
--env.CM_MLPERF_IMPLEMENTATION=nvidia \
--env.CM_MLPERF_MODEL=sdxl \
--env.CM_MLPERF_RUN_STYLE=test \
--env.CM_MLPERF_BACKEND=tensorrt \
--env.CM_MLPERF_SUBMISSION_SYSTEM_TYPE=datacenter \
--env.CM_MLPERF_CLEAN_ALL=True \
--env.CM_MLPERF_DEVICE= \
--env.CM_MLPERF_USE_DOCKER=True \
--env.CM_MLPERF_MODEL_PRECISION=float16 \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_LOADGEN_SCENARIO=Offline \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.CM_MLPERF_INFERENCE_VERSION=4.1-dev \
--env.CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r4.1-dev_default \
--env.CM_MLPERF_SUBMISSION_GENERATION_STYLE=short \
--env.CM_MLPERF_SUT_NAME_RUN_CONFIG_SUFFIX4=scc24-base \
--env.CM_DOCKER_IMAGE_NAME=scc24-nvidia \
--env.CM_MLPERF_LOADGEN_ALL_MODES=yes \
--env.CM_MLPERF_LAST_RELEASE=v4.0 \
--env.CM_TMP_CURRENT_PATH=/home/arjun/actions-runner/_work/cm4mlops/cm4mlops \
--env.CM_TMP_PIP_VERSION_STRING= \
--env.CM_MODEL=sdxl \
--env.CM_MLPERF_LOADGEN_COMPLIANCE=no \
--env.CM_MLPERF_CLEAN_SUBMISSION_DIR=yes \
--env.CM_RERUN=yes \
--env.CM_MLPERF_LOADGEN_EXTRA_OPTIONS= \
--env.CM_MLPERF_LOADGEN_MODE=performance \
--env.CM_MLPERF_LOADGEN_SCENARIOS,=Offline \
--env.CM_MLPERF_LOADGEN_MODES,=performance,accuracy \
--env.CM_OUTPUT_FOLDER_NAME=test_results \
--add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
--add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
--add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
--add_deps_recursive.submission-checker.tags=_short-run \
--add_deps_recursive.coco2014-preprocessed.tags=_size.50,_with-sample-ids \
--add_deps_recursive.coco2014-dataset.tags=_size.50,_with-sample-ids \
--add_deps_recursive.nvidia-preprocess-data.extra_cache_tags=scc24-base \
--v=False \
--print_env=False \
--print_deps=False \
--dump_version_info=True \
--env.OUTPUT_BASE_DIR=/home/arjun/scc_gh_action_results \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/scc_gh_action_submissions \
--env.SDXL_CHECKPOINT_PATH=/home/cmuser/CM/repos/local/cache/6be1f30ecbde4c4e/stable_diffusion_fp16 \
--env.MLPERF_SCRATCH_PATH=/home/cmuser/CM/repos/local/cache/e066920512fd47b7
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply re-pull gateoverflow@cm4mlops without the `--checkout` option and clean the CM cache as follows:*

```bash
cm rm repo gateoverflow@cm4mlops
cm pull repo gateoverflow@cm4mlops
cm rm cache -f

```

## Results

Platform: 97c3ec750d0e-nvidia-gpu-TensorRT-scc24-base

Model Precision: int8

### Accuracy Results
`CLIP_SCORE`: `15.47816`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`
`FID_SCORE`: `232.17873`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`

### Performance Results
`Samples per second`: `1.13424`
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy (CLIP, FID) | Throughput (samples/s) | Latency (in ms) |
|---------------------|----------|-----------------------|------------------------|-----------------|
| stable-diffusion-xl | offline  | (15.18544, 235.69504) | 0.346                  | -               |
@@ -0,0 +1,102 @@
This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).

*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
* CPU version: x86_64
* Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
* MLCommons CM version: 3.0.1

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U cmind

cm rm cache -f

cm pull repo gateoverflow@cm4mlops --checkout=52f2ebb1436d6e255d60a3dc0acf99e4f24f492d

cm run script \
--tags=app,mlperf,inference,generic,_reference,_sdxl,_pytorch,_cuda,_test,_r4.1-dev_default,_float16,_offline \
--quiet=true \
--env.CM_MLPERF_MODEL_SDXL_DOWNLOAD_TO_HOST=yes \
--env.CM_QUIET=yes \
--env.CM_MLPERF_IMPLEMENTATION=reference \
--env.CM_MLPERF_MODEL=sdxl \
--env.CM_MLPERF_RUN_STYLE=test \
--env.CM_MLPERF_BACKEND=pytorch \
--env.CM_MLPERF_CLEAN_ALL=True \
--env.CM_MLPERF_DEVICE=cuda \
--env.CM_MLPERF_USE_DOCKER=True \
--env.CM_HW_NAME=gh_action \
--env.CM_MLPERF_MODEL_PRECISION=float16 \
--env.OUTPUT_BASE_DIR=/home/arjun/gh_action_results \
--env.CM_MLPERF_LOADGEN_SCENARIO=Offline \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/gh_action_submissions \
--env.CM_MLPERF_SUBMITTER=MLCommons \
--env.CM_MLPERF_LOADGEN_TARGET_QPS=1 \
--env.CM_TEST_QUERY_COUNT=1 \
--env.CM_MLPERF_LOADGEN_COMPLIANCE=no \
--env.CM_MLPERF_SUBMISSION_RUN=yes \
--env.CM_RUN_MLPERF_ACCURACY=on \
--env.CM_RUN_SUBMISSION_CHECKER=yes \
--env.CM_TAR_SUBMISSION_DIR=yes \
--env.CM_MLPERF_SUBMISSION_GENERATION_STYLE=short \
--env.CM_MLPERF_LOADGEN_ALL_MODES=yes \
--env.CM_MLPERF_INFERENCE_VERSION=4.1-dev \
--env.CM_RUN_MLPERF_INFERENCE_APP_DEFAULTS=r4.1-dev_default \
--env.CM_MLPERF_LAST_RELEASE=v4.0 \
--env.CM_TMP_CURRENT_PATH=/home/arjun/actions-runner/_work/cm4mlops/cm4mlops \
--env.CM_TMP_PIP_VERSION_STRING= \
--env.CM_MODEL=sdxl \
--env.CM_MLPERF_CLEAN_SUBMISSION_DIR=yes \
--env.CM_RERUN=yes \
--env.CM_MLPERF_LOADGEN_EXTRA_OPTIONS= \
--env.CM_MLPERF_LOADGEN_MODE=performance \
--env.CM_MLPERF_LOADGEN_SCENARIOS,=Offline \
--env.CM_MLPERF_LOADGEN_MODES,=performance,accuracy \
--env.CM_OUTPUT_FOLDER_NAME=test_results \
--add_deps_recursive.compiler.tags=gcc \
--add_deps_recursive.submission-checker.tags=_short-run \
--add_deps_recursive.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
--add_deps_recursive.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
--add_deps_recursive.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
--adr.compiler.tags=gcc \
--adr.submission-checker.tags=_short-run \
--adr.get-mlperf-inference-results-dir.tags=_version.r4_1-dev \
--adr.get-mlperf-inference-submission-dir.tags=_version.r4_1-dev \
--adr.mlperf-inference-nvidia-scratch-space.tags=_version.r4_1-dev \
--v=False \
--print_env=False \
--print_deps=False \
--dump_version_info=True \
--env.OUTPUT_BASE_DIR=/home/arjun/gh_action_results \
--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/gh_action_submissions \
--env.SDXL_CHECKPOINT_PATH=/home/cmuser/CM/repos/local/cache/6be1f30ecbde4c4e/stable_diffusion_fp16
```
*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
you should simply re-pull gateoverflow@cm4mlops without the `--checkout` option and clean the CM cache as follows:*

```bash
cm rm repo gateoverflow@cm4mlops
cm pull repo gateoverflow@cm4mlops
cm rm cache -f

```

## Results

Platform: gh_action-reference-gpu-pytorch_v2.4.1-cu124

Model Precision: fp32

### Accuracy Results
`CLIP_SCORE`: `15.18544`, Required accuracy for closed division `>= 31.68632` and `<= 31.81332`
`FID_SCORE`: `235.69504`, Required accuracy for closed division `>= 23.01086` and `<= 23.95008`

### Performance Results
`Samples per second`: `0.345763`