Commit 577207b

Auto-merge updates from auto-update branch
2 parents 83faa8a + 8ca8352

File tree: 29 files changed (+960, -961 lines)


open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/README.md (+3, -3)

@@ -19,7 +19,7 @@ pip install -U cmind
 
 cm rm cache -f
 
-cm pull repo mlcommons@mlperf-automations --checkout=a90475d2de72bf0622cebe8d5ca8eb8c9d872fbd
+cm pull repo mlcommons@mlperf-automations --checkout=467517e4a572872046058e394a0d83512cfff38b
 
 cm run script \
 --tags=app,mlperf,inference,generic,_nvidia,_bert-99.9,_tensorrt,_cuda,_valid,_r4.1-dev_default,_offline \
@@ -71,7 +71,7 @@ cm run script \
 --env.CM_DOCKER_REUSE_EXISTING_CONTAINER=yes \
 --env.CM_DOCKER_DETACHED_MODE=yes \
 --env.CM_MLPERF_INFERENCE_RESULTS_DIR_=/home/arjun/gh_action_results/valid_results \
---env.CM_DOCKER_CONTAINER_ID=dcd2f7571ddb \
+--env.CM_DOCKER_CONTAINER_ID=d7e43b0ba70a \
 --env.CM_MLPERF_LOADGEN_COMPLIANCE_TEST=TEST01 \
 --add_deps_recursive.compiler.tags=gcc \
 --add_deps_recursive.coco2014-original.tags=_full \
@@ -129,4 +129,4 @@ Model Precision: fp16
 `F1`: `90.88324`, Required accuracy for closed division `>= 90.78313`
 
 ### Performance Results
-`Samples per second`: `3335.53`
+`Samples per second`: `3360.15`
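The README's accuracy section reports an F1 of 90.88324 against a closed-division floor of 90.78313. That floor is 99.9% of the FP32 reference F1; a minimal sketch of the gate, assuming the 90.874 reference value from the MLPerf BERT task (the reference itself is not stated in this diff):

```python
# Hypothetical re-check of the closed-division accuracy gate for bert-99.9.
# Assumption: 90.874 is the FP32 reference F1 for MLPerf BERT on SQuAD v1.1.
REFERENCE_F1 = 90.874
TARGET_RATIO = 0.999  # the "99.9" in bert-99.9 means 99.9% of the reference

required_f1 = REFERENCE_F1 * TARGET_RATIO   # ~90.78313, matching the README
measured_f1 = 90.88324                      # F1 reported by this run

print(f"required: {required_f1:.5f}, measured: {measured_f1}")
print("PASS" if measured_f1 >= required_f1 else "FAIL")
```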

open/MLCommons/measurements/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy_console.out (+48, -48)

@@ -1,7 +1,7 @@
-[2024-12-24 21:34:11,306 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
-[2024-12-24 21:34:11,827 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99.9/Offline
-[2024-12-24 21:34:11,828 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/a4a1d6d93e5c47cf/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/b3ac50e3c5aa43b3826ed3ad51524f28.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan" --scenario Offline --model bert
-[2024-12-24 21:34:11,828 __init__.py:53 INFO] Overriding Environment
+[2024-12-27 21:28:18,054 main.py:229 INFO] Detected system ID: KnownSystem.RTX4090x2
+[2024-12-27 21:28:18,579 generate_conf_files.py:107 INFO] Generated measurements/ entries for RTX4090x2_TRT/bert-99.9/Offline
+[2024-12-27 21:28:18,579 __init__.py:46 INFO] Running command: ./build/bin/harness_bert --logfile_outdir="/cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy" --logfile_prefix="mlperf_log_" --performance_sample_count=10833 --test_mode="AccuracyOnly" --gpu_batch_size=256 --mlperf_conf_path="/home/cmuser/CM/repos/local/cache/13da9a9a9e4e460f/inference/mlperf.conf" --tensor_path="build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy" --use_graphs=false --user_conf_path="/home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/1e993d10f7d444a99c5cc2482bfdc85c.conf" --gpu_inference_streams=2 --gpu_copy_streams=2 --gpu_engines="./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan" --scenario Offline --model bert
+[2024-12-27 21:28:18,579 __init__.py:53 INFO] Overriding Environment
 benchmark : Benchmark.BERT
 buffer_manager_thread_count : 0
 coalesced_tensor : True
@@ -11,8 +11,8 @@ gpu_copy_streams : 2
 gpu_inference_streams : 2
 input_dtype : int32
 input_format : linear
-log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.24-21.34.10
-mlperf_conf_path : /home/cmuser/CM/repos/local/cache/a4a1d6d93e5c47cf/inference/mlperf.conf
+log_dir : /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/logs/2024.12.27-21.28.16
+mlperf_conf_path : /home/cmuser/CM/repos/local/cache/13da9a9a9e4e460f/inference/mlperf.conf
 offline_expected_qps : 0.0
 precision : fp16
 preprocessed_data_dir : /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/preprocessed_data
@@ -21,7 +21,7 @@ system : SystemConfiguration(host_cpu_conf=CPUConfiguration(layout={CPU(name='In
 tensor_path : build/preprocessed_data/squad_tokenized/input_ids.npy,build/preprocessed_data/squad_tokenized/segment_ids.npy,build/preprocessed_data/squad_tokenized/input_mask.npy
 test_mode : AccuracyOnly
 use_graphs : False
-user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/b3ac50e3c5aa43b3826ed3ad51524f28.conf
+user_conf_path : /home/cmuser/CM/repos/mlcommons@mlperf-automations/script/generate-mlperf-inference-user-conf/tmp/1e993d10f7d444a99c5cc2482bfdc85c.conf
 system_id : RTX4090x2
 config_name : RTX4090x2_bert_Offline
 workload_setting : WorkloadSetting(HarnessType.Custom, AccuracyTarget.k_99_9, PowerSetting.MaxP)
@@ -34,64 +34,64 @@ skip_file_checks : True
 power_limit : None
 cpu_freq : None
 &&&& RUNNING BERT_HARNESS # ./build/bin/harness_bert
-I1224 21:34:11.880457 20261 main_bert.cc:163] Found 2 GPUs
-I1224 21:34:11.998015 20261 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan
-[I] [TRT] Loaded engine size: 700 MiB
-[I] [TRT] Loaded engine size: 700 MiB
+I1227 21:28:18.626047 20263 main_bert.cc:163] Found 2 GPUs
+I1227 21:28:18.748579 20263 bert_server.cc:147] Engine Path: ./build/engines/RTX4090x2/bert/Offline/bert-Offline-gpu-fp16_S_384_B_256_P_2_vs.custom_k_99_9_MaxP.plan
+[I] [TRT] Loaded engine size: 699 MiB
+[I] [TRT] Loaded engine size: 699 MiB
 [W] [TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +7, GPU +10, now: CPU 1008, GPU 1511 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +2, GPU +10, now: CPU 1010, GPU 1521 (MiB)
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 1009, GPU 1521 (MiB)
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +1152, now: CPU 0, GPU 1152 (MiB)
-[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +10, now: CPU 1018, GPU 1254 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 1019, GPU 1264 (MiB)
+[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +6, GPU +10, now: CPU 1017, GPU 1256 (MiB)
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 1018, GPU 1266 (MiB)
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +1, GPU +576, now: CPU 1, GPU 1152 (MiB)
-I1224 21:34:12.646703 20261 bert_server.cc:208] Engines Creation Completed
-I1224 21:34:12.680385 20261 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1224 21:34:12.680392 20261 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1224 21:34:12.680397 20261 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+I1227 21:28:19.399394 20263 bert_server.cc:208] Engines Creation Completed
+I1227 21:28:19.432389 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1227 21:28:19.432394 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1227 21:28:19.432399 20263 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 319, GPU 2859 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 320, GPU 2867 (MiB)
-I1224 21:34:12.747864 20261 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1224 21:34:12.747892 20261 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 319, GPU 2867 (MiB)
+I1227 21:28:19.498804 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1227 21:28:19.498829 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 1, GPU 1152 (MiB)
-I1224 21:34:12.748744 20261 bert_core_vs.cc:476] Setup complete
-I1224 21:34:12.748906 20261 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1224 21:34:12.748910 20261 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1224 21:34:12.748914 20261 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
-[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 443, GPU 2602 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 443, GPU 2610 (MiB)
-I1224 21:34:12.813745 20261 bert_core_vs.cc:426] Setting Opt.Prof. to 0
-I1224 21:34:12.813759 20261 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1227 21:28:19.499617 20263 bert_core_vs.cc:476] Setup complete
+I1227 21:28:19.499763 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1227 21:28:19.499768 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1227 21:28:19.499770 20263 bert_core_vs.cc:415] Engine - Profile 0 maxDims 98304 Bmax=256 Smax=384
+[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 442, GPU 2604 (MiB)
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 442, GPU 2612 (MiB)
+I1227 21:28:19.565945 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 0
+I1227 21:28:19.565958 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +0, now: CPU 2, GPU 1152 (MiB)
-I1224 21:34:12.814572 20261 bert_core_vs.cc:476] Setup complete
-I1224 21:34:12.814747 20261 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1224 21:34:12.814751 20261 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1224 21:34:12.814755 20261 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+I1227 21:28:19.566730 20263 bert_core_vs.cc:476] Setup complete
+I1227 21:28:19.566908 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1227 21:28:19.566911 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1227 21:28:19.566915 20263 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
 [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 566, GPU 4345 (MiB)
 [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 566, GPU 4355 (MiB)
-I1224 21:34:12.879891 20261 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+I1227 21:28:19.632443 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +1, GPU +0, now: CPU 3, GPU 1152 (MiB)
-I1224 21:34:12.880244 20261 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1224 21:34:12.881059 20261 bert_core_vs.cc:476] Setup complete
-I1224 21:34:12.881232 20261 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
-I1224 21:34:12.881237 20261 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
-I1224 21:34:12.881240 20261 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
-[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 690, GPU 4088 (MiB)
-[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 690, GPU 4098 (MiB)
-I1224 21:34:12.944664 20261 bert_core_vs.cc:426] Setting Opt.Prof. to 1
+I1227 21:28:19.632763 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1227 21:28:19.633572 20263 bert_core_vs.cc:476] Setup complete
+I1227 21:28:19.633745 20263 bert_core_vs.cc:385] Engine - Device Memory requirements: 1409287680
+I1227 21:28:19.633749 20263 bert_core_vs.cc:393] Engine - Number of Optimization Profiles: 2
+I1227 21:28:19.633752 20263 bert_core_vs.cc:415] Engine - Profile 1 maxDims 98304 Bmax=256 Smax=384
+[I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 689, GPU 4090 (MiB)
+[I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 689, GPU 4100 (MiB)
+I1227 21:28:19.699026 20263 bert_core_vs.cc:426] Setting Opt.Prof. to 1
 [I] [TRT] Could not set default profile 0 for execution context. Profile index must be set explicitly.
 [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +0, now: CPU 3, GPU 1152 (MiB)
-I1224 21:34:12.945001 20261 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
-I1224 21:34:12.945799 20261 bert_core_vs.cc:476] Setup complete
-I1224 21:34:14.149075 20261 main_bert.cc:184] Starting running actual test.
-I1224 21:34:17.479260 20261 main_bert.cc:190] Finished running actual test.
+I1227 21:28:19.699366 20263 bert_core_vs.cc:444] Context creation complete. Max supported batchSize: 256
+I1227 21:28:19.700189 20263 bert_core_vs.cc:476] Setup complete
+I1227 21:28:20.895964 20263 main_bert.cc:184] Starting running actual test.
+I1227 21:28:24.215916 20263 main_bert.cc:190] Finished running actual test.
 
 No warnings encountered during test.
 
 No errors encountered during test.
-[2024-12-24 21:34:17,717 run_harness.py:166 INFO] Result: Accuracy run detected.
-[2024-12-24 21:34:17,717 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/predictions.json --output_dtype float16
+[2024-12-27 21:28:24,446 run_harness.py:166 INFO] Result: Accuracy run detected.
+[2024-12-27 21:28:24,446 __init__.py:46 INFO] Running command: PYTHONPATH=code/bert/tensorrt/helpers python3 /home/cmuser/CM/repos/local/cache/94a57f78972843c6/repo/closed/NVIDIA/build/inference/language/bert/accuracy-squad.py --log_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/mlperf_log_accuracy.json --vocab_file build/models/bert/vocab.txt --val_data /home/cmuser/CM/repos/local/cache/4db00c74da1e44c8/data/squad/dev-v1.1.json --out_file /cm-mount/home/arjun/gh_action_results/valid_results/RTX4090x2-nvidia_original-gpu-tensorrt-vdefault-default_config/bert-99.9/offline/accuracy/predictions.json --output_dtype float16
 {"exact_match": 83.67076631977294, "f1": 90.8832407068292}
 Reading examples...
 Loading cached features from 'eval_features.pickle'...
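The README diff in this commit moves the Offline result from 3335.53 to 3360.15 samples per second; a quick sketch of the relative change between the two runs:

```python
# Relative throughput change between the two runs recorded in this commit.
old_qps = 3335.53   # samples/s before the update
new_qps = 3360.15   # samples/s after the update

change_pct = (new_qps - old_qps) / old_qps * 100
print(f"{change_pct:+.2f}%")  # roughly +0.74%
```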
