Auto-merge updates from auto-update branch

mlcommons-bot committed Feb 7, 2025
2 parents 60d16b0 + 6d79f16 commit 6ac1fcb
Showing 75 changed files with 58,652 additions and 0 deletions.
1 change: 1 addition & 0 deletions open/MLCommons/code/bert-99/README.md
@@ -0,0 +1 @@
TBD
1 change: 1 addition & 0 deletions open/MLCommons/code/retinanet/README.md
@@ -0,0 +1 @@
TBD
1 change: 1 addition & 0 deletions open/MLCommons/code/rgat/README.md
@@ -0,0 +1 @@
TBD
@@ -0,0 +1,3 @@
| Model | Scenario | Accuracy | Throughput | Latency (in ms) |
|-----------|------------|------------|--------------|-------------------|
| retinanet | offline | 49.593 | 0.431 | - |
@@ -0,0 +1,44 @@
*Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.8.0-1020-azure-x86_64-with-glibc2.34
* CPU version: x86_64
* Python version: 3.8.18 (default, Dec 12 2024, 19:15:30) [GCC 13.2.0]
* MLC version: unknown

## CM Run Command

See [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U mlcflow

mlc rm cache -f

mlc pull repo anandhu-eng@mlperf-automations --checkout=89d56a9917bae940aa71a9eef3f297e64480f8a1
```
*Note: to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf, reload anandhu-eng@mlperf-automations without the `--checkout` option and clean the MLC cache as follows:*

```bash
mlc rm repo anandhu-eng@mlperf-automations
mlc pull repo anandhu-eng@mlperf-automations
mlc rm cache -f
```

## Results

Platform: default-mlcommons_cpp-cpu-onnxruntime-default_config

Model Precision: fp32

### Accuracy Results
`mAP`: `49.593` (required accuracy for the closed division: `>= 37.1745`)

### Performance Results
`Samples per second`: `0.431199`
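
The closed-division requirement quoted above is 99% of the reference RetinaNet accuracy (37.55 mAP; 0.99 × 37.55 = 37.1745). A minimal sketch of that pass/fail check in Python, using the measured value from this run:

```python
# Compare the measured mAP against the MLPerf closed-division threshold,
# which is 99% of the reference model's accuracy (37.55 mAP for RetinaNet).
REFERENCE_MAP = 37.55
THRESHOLD = 0.99 * REFERENCE_MAP  # 37.1745

measured_map = 49.593  # accuracy result from this (10-sample test) run

status = "PASS" if measured_map >= THRESHOLD else "FAIL"
print(f"{status}: mAP {measured_map} vs. required >= {THRESHOLD:.4f}")
```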
@@ -0,0 +1,12 @@
MLPerf Conf path: /home/runner/MLC/repos/local/cache/get-git-repo_953b9e7b/inference/mlperf.conf
User Conf path: /home/runner/MLC/repos/anandhu-eng@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/2552893f156943c6a926b9215bcfc11b.conf
Dataset Preprocessed path: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_a8c5398f
Dataset List filepath: /home/runner/MLC/repos/local/cache/get-preprocessed-dataset-openimages_a8c5398f/annotations/openimages-mlperf.json
Scenario: Offline
Mode: AccuracyOnly
Batch size: 1
Query count override: 0
Performance sample count override in application: 0
loaded openimages with 10 samples
starting benchmark
loading samples to ram with total sample size: 10
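
For reference, LoadGen reads these settings from `mlperf.conf` and the generated `user.conf` in a simple `model.Scenario.key = value` format. Below is a hypothetical sketch of a user.conf matching this 10-sample Offline accuracy test; the actual generated file lives at the temporary path above, and the values here are illustrative assumptions only:

```
# Hypothetical user.conf for a short Offline test run; illustrative values only.
retinanet.Offline.target_qps = 1.0
retinanet.Offline.min_query_count = 10
retinanet.Offline.max_query_count = 10
retinanet.Offline.min_duration = 0
```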
@@ -0,0 +1,26 @@
{
  "MLC_HOST_CPU_WRITE_PROTECT_SUPPORT": "yes",
  "MLC_HOST_CPU_MICROCODE": "0xffffffff",
  "MLC_HOST_CPU_FPU_SUPPORT": "yes",
  "MLC_HOST_CPU_FPU_EXCEPTION_SUPPORT": "yes",
  "MLC_HOST_CPU_BUGS": "sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass srso",
  "MLC_HOST_CPU_TLB_SIZE": "2560 4K pages",
  "MLC_HOST_CPU_CFLUSH_SIZE": "64",
  "MLC_HOST_CPU_ARCHITECTURE": "x86_64",
  "MLC_HOST_CPU_TOTAL_CORES": "4",
  "MLC_HOST_CPU_ON_LINE_CPUS_LIST": "0-3",
  "MLC_HOST_CPU_VENDOR_ID": "AuthenticAMD",
  "MLC_HOST_CPU_MODEL_NAME": "AMD EPYC 7763 64-Core Processor",
  "MLC_HOST_CPU_FAMILY": "25",
  "MLC_HOST_CPU_THREADS_PER_CORE": "2",
  "MLC_HOST_CPU_PHYSICAL_CORES_PER_SOCKET": "2",
  "MLC_HOST_CPU_SOCKETS": "1",
  "MLC_HOST_CPU_L1D_CACHE_SIZE": "64 KiB (2 instances)",
  "MLC_HOST_CPU_L1I_CACHE_SIZE": "64 KiB (2 instances)",
  "MLC_HOST_CPU_L2_CACHE_SIZE": "1 MiB (2 instances)",
  "MLC_HOST_CPU_L3_CACHE_SIZE": "32 MiB (1 instance)",
  "MLC_HOST_CPU_NUMA_NODES": "1",
  "MLC_HOST_CPU_TOTAL_LOGICAL_CORES": "4",
  "MLC_HOST_MEMORY_CAPACITY": "16G",
  "MLC_HOST_DISK_CAPACITY": "159G"
}
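
These fields are populated by MLC's host detection scripts. A minimal illustrative sketch (not the actual detect-cpu implementation) of deriving a few of them from `lscpu` on Linux:

```python
import subprocess

# Parse `lscpu` output into "Field: value" pairs (Linux only).
lscpu = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
fields = {}
for line in lscpu.splitlines():
    if ":" in line:
        key, value = line.split(":", 1)
        fields[key.strip()] = value.strip()

# Map a few lscpu fields onto the MLC_HOST_CPU_* naming used above.
host_cpu = {
    "MLC_HOST_CPU_ARCHITECTURE": fields.get("Architecture"),
    "MLC_HOST_CPU_VENDOR_ID": fields.get("Vendor ID"),
    "MLC_HOST_CPU_MODEL_NAME": fields.get("Model name"),
    "MLC_HOST_CPU_TOTAL_CORES": fields.get("CPU(s)"),
    "MLC_HOST_CPU_THREADS_PER_CORE": fields.get("Thread(s) per core"),
}
print(host_cpu)
```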
@@ -0,0 +1,7 @@
{
  "starting_weights_filename": "resnext50_32x4d_fpn.onnx",
  "retraining": "no",
  "input_data_types": "fp32",
  "weight_data_types": "fp32",
  "weight_transformations": "no"
}
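
The fp32 input and weight data types recorded here can be verified directly against the ONNX file; a minimal sketch with onnxruntime, assuming the model file has been downloaded locally:

```python
import onnxruntime as ort

# Open the RetinaNet ONNX model and print its input tensors' names,
# shapes, and dtypes; for this submission they should report
# tensor(float), i.e. fp32.
sess = ort.InferenceSession("resnext50_32x4d_fpn.onnx",
                            providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)
```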
@@ -0,0 +1,66 @@
graph TD
app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.0-dev_default,_offline_) --> detect,os
app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.0-dev_default,_offline_) --> get,sys-utils-cm
app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.0-dev_default,_offline_) --> get,python
get-mlperf-inference-src,4b57186581024797 --> detect,os
get-mlperf-inference-src,4b57186581024797 --> get,python3
get-mlperf-inference-src,4b57186581024797 --> get,git,repo,_branch.master,_repo.https://github.com/mlcommons/inference
app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.0-dev_default,_offline_) --> get,mlcommons,inference,src
get-mlperf-inference-utils,e341e5f86d8342e5 --> get,mlperf,inference,src
app-mlperf-inference,d775cac873ee4231_(_cpp,_retinanet,_onnxruntime,_cpu,_test,_r5.0-dev_default,_offline_) --> get,mlperf,inference,utils
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> detect,cpu
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,sys-utils-cm
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> detect,os
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,python3
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,mlcommons,inference,src
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,compiler,gcc
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,cmake
get-generic-python-lib,94b62a682bc44791_(_package.wheel_) --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
get-generic-python-lib,94b62a682bc44791_(_package.wheel_) --> detect,cpu
get-generic-python-lib,94b62a682bc44791_(_package.wheel_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_pip_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_package.wheel_) --> get,generic-python-lib,_pip
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,generic-python-lib,_package.wheel
get-generic-python-lib,94b62a682bc44791_(_pip_) --> get,python3
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,generic-python-lib,_pip
get-generic-python-lib,94b62a682bc44791_(_package.pybind11_) --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
get-generic-python-lib,94b62a682bc44791_(_package.pybind11_) --> detect,cpu
get-generic-python-lib,94b62a682bc44791_(_package.pybind11_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_pip_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_package.pybind11_) --> get,generic-python-lib,_pip
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,generic-python-lib,_package.pybind11
get-generic-python-lib,94b62a682bc44791_(_package.setuptools_) --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
get-generic-python-lib,94b62a682bc44791_(_package.setuptools_) --> detect,cpu
get-generic-python-lib,94b62a682bc44791_(_package.setuptools_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_pip_) --> get,python3
get-generic-python-lib,94b62a682bc44791_(_package.setuptools_) --> get,generic-python-lib,_pip
get-mlperf-inference-loadgen,64c3d98d0ba04950 --> get,generic-python-lib,_package.setuptools
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,loadgen
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,mlcommons,inference,src
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,lib,onnxruntime,lang-cpp,_cpu
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,dataset,preprocessed,openimages,_validation,_NCHW,_50
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> get,ml-model,retinanet,_onnx,_fp32
generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,os
detect-cpu,586c8a43320142f7 --> detect,os
generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,cpu
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,python
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,mlcommons,inference,src
get-mlperf-inference-sut-configs,c2fbf72009e2445b --> get,cache,dir,_name.mlperf-inference-sut-configs
generate-mlperf-inference-user-conf,3af4475745964b93 --> get,sut,configs
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> generate,user-conf,mlperf,inference
detect-cpu,586c8a43320142f7 --> detect,os
compile-program,c05042ba005a4bfa --> detect,cpu
compile-program,c05042ba005a4bfa --> get,compiler,gcc
detect-cpu,586c8a43320142f7 --> detect,os
get-compiler-flags,31be8b74a69742f8 --> detect,cpu
compile-program,c05042ba005a4bfa --> get,compiler-flags
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> compile,cpp-program
detect-cpu,586c8a43320142f7 --> detect,os
benchmark-program,19f369ef47084895 --> detect,cpu
benchmark-program-mlperf,cfff0132a8aa4018 --> benchmark-program,program
app-mlperf-inference-mlcommons-cpp,bf62405e6c7a44bf_(_retinanet,_onnxruntime,_offline,_cpu_) --> benchmark-mlperf