
Commit 4ca2a51

Auto-merge updates from auto-update branch

2 parents 3abd32c + 325e7ad

29 files changed, +960 −971 lines


open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/README.md

+6 −11

````diff
@@ -1,5 +1,3 @@
-This experiment is generated using the [MLCommons Collective Mind automation framework (CM)](https://github.com/mlcommons/cm4mlops).
-
 *Check [CM MLPerf docs](https://docs.mlcommons.org/inference) for more details.*
 
 ## Host platform
@@ -19,7 +17,7 @@ pip install -U cmind
 
 cm rm cache -f
 
-cm pull repo mlcommons@mlperf-automations --checkout=c52956b27fa8d06ec8db53f885e1f05021e379e9
+cm pull repo mlcommons@mlperf-automations --checkout=48ea6b46a7606d1c5d74909e94d5599dbe7ff9e1
 
 cm run script \
 --tags=app,mlperf,inference,generic,_nvidia,_bert-99.9,_tensorrt,_cuda,_valid,_r4.1-dev_default,_offline \
@@ -41,8 +39,8 @@ cm run script \
 --env.CM_RUN_MLPERF_SUBMISSION_PREPROCESSOR=yes \
 --env.CM_MLPERF_INFERENCE_PULL_CODE_CHANGES=yes \
 --env.CM_MLPERF_INFERENCE_PULL_SRC_CHANGES=yes \
---env.OUTPUT_BASE_DIR=/home/arjun/gh_action_results \
---env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/home/arjun/gh_action_submissions \
+--env.OUTPUT_BASE_DIR=/cm-mount/home/arjun/gh_action_results \
+--env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/cm-mount/home/arjun/gh_action_submissions \
 --env.CM_MLPERF_SUBMITTER=MLCommons \
 --env.CM_USE_DATASET_FROM_HOST=yes \
 --env.CM_USE_MODEL_FROM_HOST=yes \
@@ -71,7 +69,7 @@ cm run script \
 --env.CM_DOCKER_REUSE_EXISTING_CONTAINER=yes \
 --env.CM_DOCKER_DETACHED_MODE=yes \
 --env.CM_MLPERF_INFERENCE_RESULTS_DIR_=/home/arjun/gh_action_results/valid_results \
---env.CM_DOCKER_CONTAINER_ID=0b3b13aa449e \
+--env.CM_DOCKER_CONTAINER_ID=10cc24c1e5c3 \
 --env.CM_MLPERF_LOADGEN_COMPLIANCE_TEST=TEST01 \
 --add_deps_recursive.compiler.tags=gcc \
 --add_deps_recursive.coco2014-original.tags=_full \
@@ -104,10 +102,7 @@ cm run script \
 --v=False \
 --print_env=False \
 --print_deps=False \
---dump_version_info=True \
---env.OUTPUT_BASE_DIR=/cm-mount/home/arjun/gh_action_results \
---env.CM_MLPERF_INFERENCE_SUBMISSION_DIR=/cm-mount/home/arjun/gh_action_submissions \
---env.MLPERF_SCRATCH_PATH=/home/cmuser/CM/repos/local/cache/4db00c74da1e44c8
+--dump_version_info=True
 ```
 *Note that if you want to use the [latest automation recipes](https://docs.mlcommons.org/inference) for MLPerf (CM scripts),
 you should simply reload mlcommons@mlperf-automations without checkout and clean CM cache as follows:*
@@ -129,4 +124,4 @@ Model Precision: fp16
 `F1`: `90.88324`, Required accuracy for closed division `>= 90.78313`
 
 ### Performance Results
-`Samples per second`: `3348.97`
+`Samples per second`: `3339.26`
````
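The accuracy and throughput figures in this README diff can be sanity-checked with a few lines of Python. This is an illustrative sketch, not part of the CM workflow; all numbers are copied from the diff above.

```python
# Values copied from the README diff above (not computed here).
f1 = 90.88324           # measured F1 from the accuracy run
required_f1 = 90.78313  # closed-division requirement for bert-99.9

# The run passes the closed-division accuracy check.
assert f1 >= required_f1, "accuracy run would fail the closed division"

old_qps = 3348.97       # Samples per second before this commit
new_qps = 3339.26       # Samples per second after this commit
delta_pct = (new_qps - old_qps) / old_qps * 100

print(f"F1 margin: {f1 - required_f1:.5f}")        # ~0.10011
print(f"Throughput change: {delta_pct:+.2f}%")     # ~-0.29%
```

So the new result still clears the accuracy bar comfortably, and the throughput moved by roughly −0.29%, well within typical run-to-run noise.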

open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy_console.out

+39 −39

```diff
@@ -1,7 +1,7 @@
-[2024-12-28 21:43:12,494 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
-[2024-12-28 21:43:13,048 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99.9/Offline
-[2024-12-28 21:43:13,048 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/11daf5a55e5449f4/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/c6913ddf30654865abebcb8dbfb0abd9.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan" --scenario Offline --model bert
-[2024-12-28 21:43:13,048 __init__.py:53 INFO] Overriding Environment
+[2024-12-31 21:45:34,506 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
+[2024-12-31 21:45:35,062 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99.9/Offline
+[2024-12-31 21:45:35,063 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/fe9ab9b6109f4c4b/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/aa5ddd5ef2924d75a64fe1514566f612.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan" --scenario Offline --model bert
+[2024-12-31 21:45:35,063 __init__.py:53 INFO] Overriding Environment
 benchmark : Benchmark.BERT
 buffer_manager_thread_count : 0
 coalesced_tensor : True
@@ -11,8 +11,8 @@ gpu_copy_streams : 2
 gpu_inference_streams : 2
 input_dtype : int32
 input_format : linear
-log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.28-21.43.11
-mlperf_conf_path : /home/cmuser/CM/repos/local/cache/11daf5a55e5449f4/inference/mlperf.conf
+log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.31-21.45.33
+mlperf_conf_path : /home/cmuser/CM/repos/local/cache/fe9ab9b6109f4c4b/inference/mlperf.conf
 offline_expected_qps : 0.0
 precision : fp16
 preprocessed_data_dir : /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/preprocessed_data
@@ -21,7 +21,7 @@ system : SystemConfiguration(host_cpu_conf=CPUConfiguration(layout={CPU(name='In
 tensor_path : build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy
 test_mode : AccuracyOnly
 use_graphs : False
-user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/c6913ddf30654865abebcb8dbfb0abd9.conf
+user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/aa5ddd5ef2924d75a64fe1514566f612.conf
 system_id : RTX4090x2
 config_name : RTX4090x2_bert_Offline
 workload_setting : WorkloadSetting(HarnessType.Custom, AccuracyTarget.k_99_9, PowerSetting.MaxP)
@@ -34,8 +34,8 @@ skip_file_checks : True
 power_limit : None
 cpu_freq : None
 &&&& RUNNING BERT_HARNESS # ./build/bin/harness_bert
-I1228 21:43:13.096694 20264 main_bert.cc:163] Found 2 GPUs
-I1228 21:43:13.218209 20264 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan
+I1231 21:45:35.110177 20264 main_bert.cc:163] Found 2 GPUs
+I1231 21:45:35.231178 20264 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan
 [I] [TRT] Loaded engine size: 700 MiB
 [I] [TRT] Loaded engine size: 700 MiB
 [W] [TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
@@ -45,53 +45,53 @@ I1228 21:43:13.218209 20264 bert_server.cc:147] Engine Path: ./build/engines/RTX
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +10, now: CPU 1018, GPU 1255 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 1019, GPU 1265 (MiB)
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +1, GPU +576, now: CPU 1, GPU 1152 (MiB)
-I1228 21:43:13.877281 20264 bert_server.cc:208] Engines Creation Completed
-I1228 21:43:13.902434 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1228 21:43:13.902443 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1228 21:43:13.902449 20264 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+I1231 21:45:35.893620 20264 bert_server.cc:208] Engines Creation Completed
+I1231 21:45:35.927969 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1231 21:45:35.927980 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1231 21:45:35.927986 20264 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 319, GPU 2859 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 320, GPU 2867 (MiB)
-I1228 21:43:13.967170 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1228 21:43:13.967195 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1231 21:45:35.992913 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1231 21:45:35.992939 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 1, GPU 1152 (MiB)
-I1228 21:43:13.967973 20264 bert_core_vs.cc:476] Setup complete
-I1228 21:43:13.968132 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1228 21:43:13.968134 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1228 21:43:13.968137 20264 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+I1231 21:45:35.993705 20264 bert_core_vs.cc:476] Setup complete
+I1231 21:45:35.993885 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1231 21:45:35.993891 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1231 21:45:35.993894 20264 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 443, GPU 2603 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 443, GPU 2611 (MiB)
-I1228 21:43:14.032351 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1228 21:43:14.032366 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1231 21:45:36.061340 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1231 21:45:36.061358 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +0, now: CPU 2, GPU 1152 (MiB)
-I1228 21:43:14.033135 20264 bert_core_vs.cc:476] Setup complete
-I1228 21:43:14.033304 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1228 21:43:14.033309 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1228 21:43:14.033313 20264 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+I1231 21:45:36.062158 20264 bert_core_vs.cc:476] Setup complete
+I1231 21:45:36.062332 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1231 21:45:36.062336 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1231 21:45:36.062340 20264 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 566, GPU 4345 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 566, GPU 4355 (MiB)
-I1228 21:43:14.097095 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 567, GPU 4355 (MiB)
+I1231 21:45:36.127660 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +0, now: CPU 3, GPU 1152 (MiB)
-I1228 21:43:14.097440 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1228 21:43:14.098248 20264 bert_core_vs.cc:476] Setup complete
-I1228 21:43:14.098415 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1228 21:43:14.098419 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1228 21:43:14.098423 20264 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+I1231 21:45:36.128007 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1231 21:45:36.128804 20264 bert_core_vs.cc:476] Setup complete
+I1231 21:45:36.128968 20264 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1231 21:45:36.128973 20264 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1231 21:45:36.128975 20264 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 690, GPU 4089 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 690, GPU 4099 (MiB)
-I1228 21:43:14.162550 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+I1231 21:45:36.193878 20264 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 3, GPU 1152 (MiB)
-I1228 21:43:14.162904 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1228 21:43:14.163702 20264 bert_core_vs.cc:476] Setup complete
-I1228 21:43:15.365103 20264 main_bert.cc:184] Starting running actual test.
-I1228 21:43:18.679199 20264 main_bert.cc:190] Finished running actual test.
+I1231 21:45:36.194209 20264 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1231 21:45:36.195005 20264 bert_core_vs.cc:476] Setup complete
+I1231 21:45:37.395401 20264 main_bert.cc:184] Starting running actual test.
+I1231 21:45:40.743121 20264 main_bert.cc:190] Finished running actual test.
 
 No warnings encountered during test.
 
 No errors encountered during test.
-[2024-12-28 21:43:18,912 run_harness.py:166 INFO] Result: Accuracy run detected.
-[2024-12-28 21:43:18,912 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/predictions.json --output_dtype float16
+[2024-12-31 21:45:40,987 run_harness.py:166 INFO] Result: Accuracy run detected.
+[2024-12-31 21:45:40,987 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/predictions.json --output_dtype float16
 {"exact_match": 83.67076631977294, "f1": 90.8832407068292}
 Reading examples...
 Loading cached features from 'eval_features.pickle'...
```
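The final accuracy figures in this console output are emitted as a single JSON line (`{"exact_match": ..., "f1": ...}`). A minimal sketch of extracting them from such a log; the inlined text reproduces the relevant lines from the output above, and the parsing heuristic (first line that looks like a JSON object) is an assumption, not part of the harness:

```python
import json

# Excerpt copied from the accuracy_console.out diff above.
log_text = """No errors encountered during test.
{"exact_match": 83.67076631977294, "f1": 90.8832407068292}
Reading examples..."""

result = None
for line in log_text.splitlines():
    line = line.strip()
    # Heuristic: the accuracy summary is the only bare JSON object line.
    if line.startswith("{") and line.endswith("}"):
        result = json.loads(line)
        break

print(f"exact_match={result['exact_match']:.2f}  f1={result['f1']:.5f}")
```

Rounded to five decimals, the parsed `f1` matches the `90.88324` reported in the README's accuracy section.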
