Commit 5204e61

Auto-merge updates from auto-update branch

Parents: 83d5f7b + 9c3a45a

File tree: 26 files changed, +799 −797 lines

open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/README.md (+3, −3)

@@ -19,7 +19,7 @@ pip install -U cmind
 cm rm cache -f
-cm pull repo mlcommons@mlperf-automations --checkout=467517e4a572872046058e394a0d83512cfff38b
+cm pull repo mlcommons@mlperf-automations --checkout=c52956b27fa8d06ec8db53f885e1f05021e379e9
 cm run script \
 --tags=app,mlperf,inference,generic,_nvidia,_bert-99,_tensorrt,_cuda,_valid,_r4.1-dev_default,_offline \
@@ -71,7 +71,7 @@ cm run script \
 --env.CM_DOCKER_REUSE_EXISTING_CONTAINER=yes \
 --env.CM_DOCKER_DETACHED_MODE=yes \
 --env.CM_MLPERF_INFERENCE_RESULTS_DIR_=/home/arjun/gh_action_results/valid_results \
---env.CM_DOCKER_CONTAINER_ID=0ea02743d854 \
+--env.CM_DOCKER_CONTAINER_ID=6733602d12a8 \
 --env.CM_MLPERF_LOADGEN_COMPLIANCE_TEST=TEST01 \
 --add_deps_recursive.compiler.tags=gcc \
 --add_deps_recursive.coco2014-original.tags=_full \
@@ -129,4 +129,4 @@ Model Precision: int8
 `F1`: `90.15674`, Required accuracy for closed division `>= 89.96526`
 
 ### Performance Results
-`Samples per second`: `8277.86`
+`Samples per second`: `8237.36`
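The `>= 89.96526` threshold in the README diff follows the MLPerf "99%" rule: bert-99 must reach at least 99% of the reference model's F1 score (90.874 on SQuAD v1.1; that reference value is stated here as background, not taken from the diff). A minimal sketch of the accuracy gate, together with the throughput change this commit records:

```python
# Accuracy gate for MLPerf bert-99 (closed division), using values from the
# README diff above. REFERENCE_F1 is the standard MLPerf BERT/SQuAD v1.1
# reference accuracy — an assumption from outside this diff.
REFERENCE_F1 = 90.874
threshold = REFERENCE_F1 * 0.99          # 89.96526, matching the README
measured_f1 = 90.15674                   # F1 reported in the diff
accuracy_ok = measured_f1 >= threshold

# Throughput change recorded by this commit: 8277.86 -> 8237.36 samples/s.
old_qps, new_qps = 8277.86, 8237.36
delta_pct = (new_qps - old_qps) / old_qps * 100  # roughly -0.49%

print(accuracy_ok, round(threshold, 5), round(delta_pct, 2))
```

The sub-1% throughput difference between the two runs is consistent with ordinary run-to-run variation rather than a regression.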

open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy_console.out (+38, −38)

@@ -1,7 +1,7 @@
-[2024-12-27 20:25:43,536 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
-[2024-12-27 20:25:44,069 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99/Offline
-[2024-12-27 20:25:44,069 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/6c5d0d8c0f4f47c1/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/3ef477688d004a39a48da5ba31ae9c98.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-int8_S_384_B_256_P_2_vs.custom_k_99_MaxP.plan" --scenario Offline --model bert
-[2024-12-27 20:25:44,069 __init__.py:53 INFO] Overriding Environment
+[2024-12-28 20:41:21,631 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
+[2024-12-28 20:41:22,190 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99/Offline
+[2024-12-28 20:41:22,190 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/10b872089277481d/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/733a046654da43acb8b34def0e921432.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-int8_S_384_B_256_P_2_vs.custom_k_99_MaxP.plan" --scenario Offline --model bert
+[2024-12-28 20:41:22,190 __init__.py:53 INFO] Overriding Environment
 benchmark : Benchmark.BERT
 buffer_manager_thread_count : 0
 coalesced_tensor : True
@@ -11,8 +11,8 @@ gpu_copy_streams : 2
 gpu_inference_streams : 2
 input_dtype : int32
 input_format : linear
-log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.27-20.25.42
-mlperf_conf_path : /home/cmuser/CM/repos/local/cache/6c5d0d8c0f4f47c1/inference/mlperf.conf
+log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.28-20.41.20
+mlperf_conf_path : /home/cmuser/CM/repos/local/cache/10b872089277481d/inference/mlperf.conf
 offline_expected_qps : 0.0
 precision : int8
 preprocessed_data_dir : /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/preprocessed_data
@@ -21,7 +21,7 @@ system : SystemConfiguration(host_cpu_conf=CPUConfiguration(layout={CPU(name='In
 tensor_path : build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy
 test_mode : AccuracyOnly
 use_graphs : False
-user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/3ef477688d004a39a48da5ba31ae9c98.conf
+user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/733a046654da43acb8b34def0e921432.conf
 system_id : RTX4090x2
 config_name : RTX4090x2_bert_Offline
 workload_setting : WorkloadSetting(HarnessType.Custom, AccuracyTarget.k_99, PowerSetting.MaxP)
@@ -34,8 +34,8 @@ skip_file_checks : True
 power_limit : None
 cpu_freq : None
 &&&& RUNNING BERT_HARNESS # ./build/bin/harness_bert
-I1227 20:25:44.119817 20262 main_bert.cc:163] Found 2 GPUs
-I1227 20:25:44.249424 20262 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-int8_S_384_B_256_P_2_vs.custom_k_99_MaxP.plan
+I1228 20:41:22.237586 20263 main_bert.cc:163] Found 2 GPUs
+I1228 20:41:22.367789 20263 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-int8_S_384_B_256_P_2_vs.custom_k_99_MaxP.plan
 [I] [TRT] Loaded engine size: 414 MiB
 [I] [TRT] Loaded engine size: 414 MiB
 [W] [TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
@@ -45,53 +45,53 @@ I1227 20:25:44.249424 20262 bert_server.cc:147] Engine Path: ./build/engines/RTX
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +8, now: CPU 737, GPU 969 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 738, GPU 979 (MiB)
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +1, GPU +291, now: CPU 1, GPU 581 (MiB)
-I1227 20:25:44.739281 20262 bert_server.cc:208] Engines Creation Completed
-I1227 20:25:44.759653 20262 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
-I1227 20:25:44.759660 20262 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1227 20:25:44.759663 20262 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+I1228 20:41:22.845306 20263 bert_server.cc:208] Engines Creation Completed
+I1228 20:41:22.863188 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
+I1228 20:41:22.863198 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1228 20:41:22.863202 20263 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 324, GPU 1901 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 324, GPU 1909 (MiB)
-I1227 20:25:44.826346 20262 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1227 20:25:44.826371 20262 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1228 20:41:22.928879 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1228 20:41:22.928902 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 1, GPU 581 (MiB)
-I1227 20:25:44.827178 20262 bert_core_vs.cc:476] Setup complete
-I1227 20:25:44.827324 20262 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
-I1227 20:25:44.827328 20262 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1227 20:25:44.827332 20262 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+I1228 20:41:22.929729 20263 bert_core_vs.cc:476] Setup complete
+I1228 20:41:22.929889 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
+I1228 20:41:22.929893 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1228 20:41:22.929896 20263 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 447, GPU 1645 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 447, GPU 1653 (MiB)
-I1227 20:25:44.893194 20262 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1227 20:25:44.893208 20262 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1228 20:41:22.995138 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1228 20:41:22.995153 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 1, GPU 581 (MiB)
-I1227 20:25:44.894070 20262 bert_core_vs.cc:476] Setup complete
-I1227 20:25:44.894234 20262 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
-I1227 20:25:44.894239 20262 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1227 20:25:44.894243 20262 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+I1228 20:41:22.995955 20263 bert_core_vs.cc:476] Setup complete
+I1228 20:41:22.996120 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
+I1228 20:41:22.996124 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1228 20:41:22.996127 20263 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 570, GPU 2715 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 570, GPU 2725 (MiB)
-I1227 20:25:44.957968 20262 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+I1228 20:41:23.060681 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +0, now: CPU 2, GPU 581 (MiB)
-I1227 20:25:44.958278 20262 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1227 20:25:44.959084 20262 bert_core_vs.cc:476] Setup complete
-I1227 20:25:44.959231 20262 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
-I1227 20:25:44.959236 20262 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1227 20:25:44.959239 20262 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+I1228 20:41:23.061033 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1228 20:41:23.061902 20263 bert_core_vs.cc:476] Setup complete
+I1228 20:41:23.062060 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 704644608
+I1228 20:41:23.062063 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1228 20:41:23.062067 20263 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 693, GPU 2459 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 693, GPU 2469 (MiB)
-I1227 20:25:45.022871 20262 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+I1228 20:41:23.127297 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 2, GPU 581 (MiB)
-I1227 20:25:45.023195 20262 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1227 20:25:45.024029 20262 bert_core_vs.cc:476] Setup complete
-I1227 20:25:45.478945 20262 main_bert.cc:184] Starting running actual test.
-I1227 20:25:46.866901 20262 main_bert.cc:190] Finished running actual test.
+I1228 20:41:23.127616 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1228 20:41:23.128468 20263 bert_core_vs.cc:476] Setup complete
+I1228 20:41:23.583055 20263 main_bert.cc:184] Starting running actual test.
+I1228 20:41:24.961525 20263 main_bert.cc:190] Finished running actual test.
 
 No warnings encountered during test.
 
 No errors encountered during test.
-[2024-12-27 20:25:47,087 run_harness.py:166 INFO] Result: Accuracy run detected.
-[2024-12-27 20:25:47,088 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy/predictions.json --output_dtype float16
+[2024-12-28 20:41:25,182 run_harness.py:166 INFO] Result: Accuracy run detected.
+[2024-12-28 20:41:25,182 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99/offline/accuracy/predictions.json --output_dtype float16
 {"exact_match": 82.81929990539263, "f1": 90.15673510616978}
 Reading examples...
 Loading cached features from 'eval_features.pickle'...
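The `{"exact_match": ..., "f1": ...}` line in the console output is plain JSON emitted by accuracy-squad.py, so the scores can be pulled out of a saved log mechanically. A minimal sketch, using the summary line copied from the log above:

```python
import json

# Summary line emitted by accuracy-squad.py, copied from the console log above.
line = '{"exact_match": 82.81929990539263, "f1": 90.15673510616978}'
scores = json.loads(line)

# Round the F1 the way the README reports it (5 decimal places).
f1 = round(scores["f1"], 5)              # 90.15674, as in the README
em = round(scores["exact_match"], 5)
print(f1, em)
```

In a full log-scraping script, one would scan each line and keep the first that parses as a JSON object with both keys, rather than hard-coding the line.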
