Commit fc780b9 (parent bbc0607)

Results from GH action on NVIDIA_RTX4090x1

83 files changed, +21032 −0 lines
---

TBD
---

*Check the [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*

## Host platform

* OS version: Linux-6.8.0-51-generic-x86_64-with-glibc2.29
* CPU version: x86_64
* Python version: 3.8.10 (default, Jan 17 2025, 14:40:23) [GCC 9.4.0]
* MLC version: unknown

## CM Run Command

See the [CM installation guide](https://docs.mlcommons.org/inference/install/).

```bash
pip install -U mlcflow

mlc rm cache -f

mlc pull repo mlcommons@mlperf-automations --checkout=02683cf5e8beb0cc5baaf27802daafc08fe42e67
```

*Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf, you should simply reload mlcommons@mlperf-automations without the checkout and clean the MLC cache as follows:*

```bash
mlc rm repo mlcommons@mlperf-automations
mlc pull repo mlcommons@mlperf-automations
mlc rm cache -f
```

## Results

Platform: RTX4090x1-nvidia-gpu-TensorRT-default_config

Model Precision: int8

### Accuracy Results

`acc`: `76.064`; required accuracy for the closed division: `>= 75.6954`

### Performance Results

`Samples per query`: `498402.0`
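The closed-division cutoff above is 99% of the FP32 reference top-1 accuracy for ResNet50 (76.46%, implied by the stated `75.6954` threshold, since 0.99 × 76.46 = 75.6954). A small sketch of the arithmetic, with the correct/total counts taken from the accuracy log further below:

```python
# Sanity-check the accuracy numbers reported above.
# reference_top1 (76.46) is inferred from the stated closed-division
# cutoff (75.6954 = 0.99 * 76.46); good/total come from the harness log.
reference_top1 = 76.46
threshold = 0.99 * reference_top1          # closed division allows 99% of reference
good, total = 38032, 50000                 # from accuracy-imagenet.py output
measured = 100.0 * good / total

assert abs(threshold - 75.6954) < 1e-6
assert abs(measured - 76.064) < 1e-6
print(f"measured={measured:.3f}%, required>={threshold:.4f}, pass={measured >= threshold}")
```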
---

{
  "starting_weights_filename": "https://armi.in/files/resnet50_v1.onnx",
  "retraining": "no",
  "input_data_types": "int8",
  "weight_data_types": "int8",
  "weight_transformations": "no"
}
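The model-description fields above can be sanity-checked before packaging a submission. A minimal sketch: the "required keys" set simply mirrors the fields shown here, not an official MLPerf schema:

```python
import json

# JSON copied verbatim from the model description above.
raw = """{
  "starting_weights_filename": "https://armi.in/files/resnet50_v1.onnx",
  "retraining": "no",
  "input_data_types": "int8",
  "weight_data_types": "int8",
  "weight_transformations": "no"
}"""

meta = json.loads(raw)
# Required-key set mirrors the fields above; hypothetical check, not a schema.
required = {"starting_weights_filename", "retraining", "input_data_types",
            "weight_data_types", "weight_transformations"}
missing = required - meta.keys()
assert not missing, f"missing keys: {missing}"
print("model description OK; weights:", meta["starting_weights_filename"])
```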
---

[2025-01-31 10:59:07,739 main.py:229 INFO] Detected system ID: KnownSystem.Nvidia_0755fa269fb6
[2025-01-31 10:59:07,917 generate_conf_files.py:107 INFO] Generated measurements/ entries for Nvidia_0755fa269fb6_TRT/resnet50/MultiStream
[2025-01-31 10:59:07,917 __init__.py:46 INFO] Running command: ./build/bin/harness_default --logfile_outdir="/mlc-mount/home/arjun/gh_action_results/valid_results/RTX4090x1-nvidia_original-gpu-tensorrt-vdefault-default_config/resnet50/multistream/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=2048 --test_mode="AccuracyOnly" --gpu_copy_streams=1 --gpu_inference_streams=1 --use_deque_limit=true --gpu_batch_size=8 --map_path="data_maps/imagenet/val_map.txt" --mlperf_conf_path="/home/mlcuser/MLC/repos/local/cache/get-git-repo_02ea1bfc/inference/mlperf.conf" --tensor_path="build/preprocessed_data/imagenet/ResNet50/int8_linear" --use_graphs=true --user_conf_path="/home/mlcuser/MLC/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/c2a191ab699445b2a36400ed97b48c34.conf" --gpu_engines="./build/engines/Nvidia_0755fa269fb6/resnet50/MultiStream/resnet50-MultiStream-gpu-b8-int8.lwis_k_99_MaxP.plan" --max_dlas=0 --scenario MultiStream --model resnet50
[2025-01-31 10:59:07,917 __init__.py:53 INFO] Overriding Environment
benchmark : Benchmark.ResNet50
buffer_manager_thread_count : 0
data_dir : /home/mlcuser/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_fe95ede4/data
disable_beta1_smallk : True
gpu_batch_size : 8
gpu_copy_streams : 1
gpu_inference_streams : 1
input_dtype : int8
input_format : linear
log_dir : /home/mlcuser/MLC/repos/local/cache/get-git-repo_e7fa5107/repo/closed/NVIDIA/build/logs/2025.01.31-10.59.06
map_path : data_maps/imagenet/val_map.txt
mlperf_conf_path : /home/mlcuser/MLC/repos/local/cache/get-git-repo_02ea1bfc/inference/mlperf.conf
multi_stream_expected_latency_ns : 0
multi_stream_samples_per_query : 8
multi_stream_target_latency_percentile : 99
precision : int8
preprocessed_data_dir : /home/mlcuser/MLC/repos/local/cache/get-mlperf-inference-nvidia-scratch-space_fe95ede4/preprocessed_data
scenario : Scenario.MultiStream
system : SystemConfiguration(host_cpu_conf=CPUConfiguration(layout={CPU(name='AMD Ryzen 9 7950X 16-Core Processor', architecture=<CPUArchitecture.x86_64: AliasedName(name='x86_64', aliases=(), patterns=())>, core_count=16, threads_per_core=2): 1}), host_mem_conf=MemoryConfiguration(host_memory_capacity=Memory(quantity=131.080068, byte_suffix=<ByteSuffix.GB: (1000, 3)>, _num_bytes=131080068000), comparison_tolerance=0.05), accelerator_conf=AcceleratorConfiguration(layout=defaultdict(<class 'int'>, {GPU(name='NVIDIA GeForce RTX 4090', accelerator_type=<AcceleratorType.Discrete: AliasedName(name='Discrete', aliases=(), patterns=())>, vram=Memory(quantity=23.98828125, byte_suffix=<ByteSuffix.GiB: (1024, 3)>, _num_bytes=25757220864), max_power_limit=450.0, pci_id='0x268410DE', compute_sm=89): 1})), numa_conf=None, system_id='Nvidia_0755fa269fb6')
tensor_path : build/preprocessed_data/imagenet/ResNet50/int8_linear
test_mode : AccuracyOnly
use_deque_limit : True
use_graphs : True
user_conf_path : /home/mlcuser/MLC/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/c2a191ab699445b2a36400ed97b48c34.conf
system_id : Nvidia_0755fa269fb6
config_name : Nvidia_0755fa269fb6_resnet50_MultiStream
workload_setting : WorkloadSetting(HarnessType.LWIS, AccuracyTarget.k_99, PowerSetting.MaxP)
optimization_level : plugin-enabled
num_profiles : 1
config_ver : lwis_k_99_MaxP
accuracy_level : 99%
inference_server : lwis
skip_file_checks : False
power_limit : None
cpu_freq : None
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: /home/mlcuser/MLC/repos/local/cache/get-git-repo_02ea1bfc/inference/mlperf.conf
[I] user.conf path: /home/mlcuser/MLC/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/c2a191ab699445b2a36400ed97b48c34.conf
Creating QSL.
Finished Creating QSL.
Setting up SUT.
[I] [TRT] Loaded engine size: 26 MiB
[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +7, GPU +10, now: CPU 76, GPU 844 (MiB)
[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 77, GPU 854 (MiB)
[I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +24, now: CPU 0, GPU 24 (MiB)
[I] Device:0.GPU: [0] ./build/engines/Nvidia_0755fa269fb6/resnet50/MultiStream/resnet50-MultiStream-gpu-b8-int8.lwis_k_99_MaxP.plan has been successfully loaded.
[E] [TRT] 3: [runtime.cpp::~Runtime::401] Error Code 3: API Usage Error (Parameter check failed at: runtime/rt/runtime.cpp::~Runtime::401, condition: mEngineCounter.use_count() == 1 Destroying a runtime before destroying deserialized engines created by the runtime leads to undefined behavior.)
[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +10, now: CPU 50, GPU 846 (MiB)
[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 51, GPU 854 (MiB)
[I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +17, now: CPU 0, GPU 41 (MiB)
[I] Start creating CUDA graphs
[I] Capture 8 CUDA graphs
[I] Finish creating CUDA graphs
[I] Creating batcher thread: 0 EnableBatcherThreadPerDevice: false
Finished setting up SUT.
Starting warmup. Running for a minimum of 5 seconds.
Finished warmup. Ran for 5.0242s.
Starting running actual test.

No warnings encountered during test.

No errors encountered during test.
Finished running actual test.
Device Device:0.GPU processed:
  6250 batches of size 8
  Memcpy Calls: 0
  PerSampleCudaMemcpy Calls: 0
  BatchedCudaMemcpy Calls: 6250
&&&& PASSED Default_Harness # ./build/bin/harness_default
[2025-01-31 10:59:48,208 run_harness.py:166 INFO] Result: Accuracy run detected.
[2025-01-31 10:59:48,208 __init__.py:46 INFO] Running command: python3 /home/mlcuser/MLC/repos/local/cache/get-git-repo_e7fa5107/repo/closed/NVIDIA/build/inference/vision/classification_and_detection/tools/accuracy-imagenet.py --mlperf-accuracy-file /mlc-mount/home/arjun/gh_action_results/valid_results/RTX4090x1-nvidia_original-gpu-tensorrt-vdefault-default_config/resnet50/multistream/accuracy/mlperf_log_accuracy.json --imagenet-val-file data_maps/imagenet/val_map.txt --dtype int32
accuracy=76.064%, good=38032, total=50000

======================== Result summaries: ========================
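The per-device counters at the end of the harness log are internally consistent: 6250 batches of size 8 cover exactly the 50000-image ImageNet validation set that the accuracy script reports. A quick check, with all numbers copied from the log:

```python
# Numbers copied from the harness log above.
batches, batch_size = 6250, 8              # "6250 batches of size 8"
good, total = 38032, 50000                 # "accuracy=76.064%, good=38032, total=50000"

samples_processed = batches * batch_size
assert samples_processed == total == 50000 # full ImageNet val set, no samples dropped
print(f"{samples_processed} samples, accuracy {100 * good / total:.3f}%")
```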
---

{
  "MLC_HOST_CPU_WRITE_PROTECT_SUPPORT": "yes",
  "MLC_HOST_CPU_MICROCODE": "0xa601206",
  "MLC_HOST_CPU_FPU_SUPPORT": "yes",
  "MLC_HOST_CPU_FPU_EXCEPTION_SUPPORT": "yes",
  "MLC_HOST_CPU_BUGS": "sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass srso",
  "MLC_HOST_CPU_TLB_SIZE": "3584 4K pages",
  "MLC_HOST_CPU_CFLUSH_SIZE": "64",
  "MLC_HOST_CPU_ARCHITECTURE": "x86_64",
  "MLC_HOST_CPU_TOTAL_CORES": "32",
  "MLC_HOST_CPU_ON_LINE_CPUS_LIST": "0-31",
  "MLC_HOST_CPU_THREADS_PER_CORE": "2",
  "MLC_HOST_CPU_PHYSICAL_CORES_PER_SOCKET": "16",
  "MLC_HOST_CPU_SOCKETS": "1",
  "MLC_HOST_CPU_NUMA_NODES": "1",
  "MLC_HOST_CPU_VENDOR_ID": "AuthenticAMD",
  "MLC_HOST_CPU_FAMILY": "25",
  "MLC_HOST_CPU_MODEL_NAME": "AMD Ryzen 9 7950X 16-Core Processor",
  "MLC_HOST_CPU_MAX_MHZ": "5881.0000",
  "MLC_HOST_CPU_L1D_CACHE_SIZE": "512 KiB",
  "MLC_HOST_CPU_L1I_CACHE_SIZE": "512 KiB",
  "MLC_HOST_CPU_L2_CACHE_SIZE": "16 MiB",
  "MLC_HOST_CPU_L3_CACHE_SIZE": "64 MiB",
  "MLC_HOST_CPU_TOTAL_LOGICAL_CORES": "32",
  "MLC_HOST_MEMORY_CAPACITY": "128G",
  "MLC_HOST_DISK_CAPACITY": "6.8T"
}
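The detected CPU topology above is self-consistent: 1 socket × 16 physical cores × 2 SMT threads gives the 32 logical CPUs listed as on-line. A small check with the values copied from the JSON:

```python
# Topology values copied from the detected-host JSON above.
sockets = 1              # MLC_HOST_CPU_SOCKETS
cores_per_socket = 16    # MLC_HOST_CPU_PHYSICAL_CORES_PER_SOCKET
threads_per_core = 2     # MLC_HOST_CPU_THREADS_PER_CORE

logical = sockets * cores_per_socket * threads_per_core
assert logical == 32                       # MLC_HOST_CPU_TOTAL_LOGICAL_CORES
assert f"0-{logical - 1}" == "0-31"        # MLC_HOST_CPU_ON_LINE_CPUS_LIST
print(f"{logical} logical CPUs ({cores_per_socket} cores x {threads_per_core} SMT, {sockets} socket)")
```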
---

graph TD
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> detect,os
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,sys-utils-cm
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,python
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,mlcommons,inference,src
	pull-git-repo,c23132ed65c4421d --> detect,os
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> pull,git,repo
	get-mlperf-inference-utils,e341e5f86d8342e5 --> get,mlperf,inference,src
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,mlperf,inference,utils
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,dataset-aux,imagenet-aux
	get-cuda-devices,7a3ede4d3558427a_(_with-pycuda_) --> get,cuda,_toolkit
	get-cuda-devices,7a3ede4d3558427a_(_with-pycuda_) --> get,python3
	get-generic-python-lib,94b62a682bc44791_(_package.pycuda_) --> get,python3
	get-cuda-devices,7a3ede4d3558427a_(_with-pycuda_) --> get,generic-python-lib,_package.pycuda
	get-generic-python-lib,94b62a682bc44791_(_package.numpy_) --> get,python3
	get-cuda-devices,7a3ede4d3558427a_(_with-pycuda_) --> get,generic-python-lib,_package.numpy
	app-mlperf-inference,d775cac873ee4231_(_nvidia,_resnet50,_tensorrt,_cuda,_valid,_r5.0-dev_default,_multistream_) --> get,cuda-devices,_with-pycuda
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> detect,os
	detect-cpu,586c8a43320142f7 --> detect,os
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> detect,cpu
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,sys-utils-cm
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,mlperf,inference,nvidia,scratch,space,_version.5.0-dev
	get-generic-python-lib,94b62a682bc44791_(_mlperf_logging_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,generic-python-lib,_mlperf_logging
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,dataset,original,imagenet,_full
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,ml-model,resnet50,_fp32,_onnx,_opset-8
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,mlcommons,inference,src
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,nvidia,mlperf,inference,common-code,_mlcommons
	pull-git-repo,c23132ed65c4421d --> detect,os
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> pull,git,repo
	generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,os
	detect-cpu,586c8a43320142f7 --> detect,os
	generate-mlperf-inference-user-conf,3af4475745964b93 --> detect,cpu
	generate-mlperf-inference-user-conf,3af4475745964b93 --> get,python
	generate-mlperf-inference-user-conf,3af4475745964b93 --> get,mlcommons,inference,src
	get-mlperf-inference-sut-configs,c2fbf72009e2445b --> get,cache,dir,_name.mlperf-inference-sut-configs
	generate-mlperf-inference-user-conf,3af4475745964b93 --> get,sut,configs
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> generate,user-conf,mlperf,inference
	get-generic-python-lib,94b62a682bc44791_(_package.pycuda_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,generic-python-lib,_package.pycuda
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,cuda,_cudnn
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,tensorrt
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> build,nvidia,inference,server,_mlcommons
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> detect,os
	detect-cpu,586c8a43320142f7 --> detect,os
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> detect,cpu
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,sys-utils-cm
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,mlperf,inference,nvidia,scratch,space,_version.5.0-dev
	get-generic-python-lib,94b62a682bc44791_(_mlperf_logging_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,generic-python-lib,_mlperf_logging
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,dataset,original,imagenet,_full
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,ml-model,resnet50,_fp32,_onnx,_opset-8
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,mlcommons,inference,src
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,nvidia,mlperf,inference,common-code,_mlcommons
	pull-git-repo,c23132ed65c4421d --> detect,os
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> pull,git,repo
	get-generic-python-lib,94b62a682bc44791_(_package.pycuda_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,generic-python-lib,_package.pycuda
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,cuda,_cudnn
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,tensorrt
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> build,nvidia,inference,server,_mlcommons
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> reproduce,mlperf,inference,nvidia,harness,_preprocess_data,_resnet50,_cuda,_tensorrt,_v4.1-dev
	get-generic-python-lib,94b62a682bc44791_(_onnx-graphsurgeon_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,generic-python-lib,_onnx-graphsurgeon
	get-generic-python-lib,94b62a682bc44791_(_package.onnx_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> get,generic-python-lib,_package.onnx
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8_) --> save,mlperf,inference,state
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> reproduce,mlperf,inference,nvidia,harness,_build_engine,_resnet50,_cuda,_multistream,_tensorrt,_v4.1-dev,_batch_size.8
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> reproduce,mlperf,inference,nvidia,harness,_preprocess_data,_resnet50,_cuda,_tensorrt,_v4.1-dev
	get-generic-python-lib,94b62a682bc44791_(_onnx-graphsurgeon_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,generic-python-lib,_onnx-graphsurgeon
	get-generic-python-lib,94b62a682bc44791_(_package.onnx_) --> get,python3
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> get,generic-python-lib,_package.onnx
	detect-cpu,586c8a43320142f7 --> detect,os
	benchmark-program,19f369ef47084895 --> detect,cpu
	benchmark-program-mlperf,cfff0132a8aa4018 --> benchmark-program,program
	app-mlperf-inference-nvidia,bc3b17fb430f4732_(_run_harness,_resnet50,_cuda,_multistream,_tensorrt,_rtx_4090_) --> benchmark-mlperf
