test: TC for Metric P0 nv_load_time per model #7697
base: main
Conversation
Added support for OTel context propagation --------- Co-authored-by: Markus Hennerbichler <[email protected]> Co-authored-by: Ryan McCormick <[email protected]>
This validates the change made to ../core with respect to how model configuration mtime is handled.
* Run all cases with shm probe * Warmup test and then run multiple iterations * Log free shared memory on enter/exit of probe * Add shm probe to all tests * Add debug_str to shm_util * Refactor ensemble_io test, modify probe to check for growth rather than inequality * Improve stability of bls_tensor_lifecycle gpu memory tests * Add more visibility into failing model/case in python_unittest helper * [FIXME] Skip probe on certain subtests for now * [FIXME] Remove shm probe from test_restart on unhealthy stub * Start clean server run for each bls test case * Don't exit early on failure so logs can be properly collected * Restore bls test logic * Fix shm size compare * Print region name that leaked * Remove special handling on unittest * Remove debug str * Add enter and exit delay to shm leak probe --------- Co-authored-by: Ryan McCormick <[email protected]>
* Update trace_summery script * Remove GRPC_WAITREAD and Overhead
* Add gsutil cp retry helper function * Add max retry to GCS upload * Use simple sequential upload
* Handle empty output * Add test case for 0 dimension output * Fix up number of tests
* tensorrt-llm benchmarking test
…und (#6834) * Update miniconda version * Install pytest for different py version * Install pytest
* Add test for shutdown while loading * Fix intermittent failure on test_model_config_overwrite
Adding OpenTelemetry Batch Span Processor --------- Co-authored-by: Theo Clark <[email protected]> Co-authored-by: Ryan McCormick <[email protected]>
* Support Double-Type Infer/Response Parameters
* Base Python Backend Support for Windows
* Add unit test reports to L0_dlpack_multi_gpu * Add unit test reports to L0_warmup
* Add response statistics * Add L0_response_statistics * Enable http vs grpc statistics comparison * Add docs for response statistics protocol * Add more comments for response statistics test * Remove model name from config * Improve docs wordings * [Continue] Improve docs wordings * [Continue] Add more comments for response statistics test * [Continue 2] Improve docs wordings * Fix typo * Remove mentioning decoupled from docs * [Continue 3] Improve docs wordings * [Continue 4] Improve docs wordings Co-authored-by: Ryan McCormick <[email protected]> --------- Co-authored-by: Ryan McCormick <[email protected]>
* Switch to Python model for busyop test * Clean up * Address comment * Remove unused import
* Add cancellation into response statistics * Add test for response statistics cancel * Remove debugging print * Use is None comparison * Fix docs * Use default args None * Refactor RegisterModelStatistics()
…er each model reload (#7735)
Co-authored-by: Ryan McCormick <[email protected]>
Co-authored-by: Misha Chornyi <[email protected]>
#### Load Time Per-Model
The *Model Load Duration* reflects the time, in seconds, taken to load a model from storage into GPU/CPU memory.
```
# HELP nv_model_load_duration_secs Model load time in seconds
```
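For reference, gauge exposition for this metric on the Triton /metrics endpoint would typically look like the sample below; the TYPE line, label names, and value are illustrative assumptions rather than output captured from this PR.
```
# HELP nv_model_load_duration_secs Model load time in seconds
# TYPE nv_model_load_duration_secs gauge
nv_model_load_duration_secs{model="simple",version="1"} 2.35
```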
Do we need a sample output for a gauge metric?
Co-authored-by: Ryan McCormick <[email protected]>
…ublished containers (#7759)
qa/L0_metrics/test.sh (outdated)
```
# Test 3 for explicit mode UNLOAD
python3 -m pytest --junitxml="general_metrics_test.test_metrics_load_time_explicit_unload.report.xml" $CLIENT_PY::TestGeneralMetrics::test_metrics_load_time_explicit_unload >> $CLIENT_LOG 2>&1
kill_server
set -e

# Test 4 for explicit mode LOAD and UNLOAD with multiple versions
set +e
CLIENT_PY="./general_metrics_test.py"
```
Remove
print(f"Model '{model_name}' loaded successfully.") | ||
else: | ||
except AssertionError: |
Do we want the test to pass if it fails to load the model? If not, you should remove the try...except.
Yes, that's the expected behaviour.
Models should load and unload; otherwise the test should fail, since subsequent metrics would be incorrect.
If a load or unload failure will cause the test to fail anyway, why not let it fail at the HTTP response code check instead of the metrics check? That way people can more easily identify the root cause of a job failure.
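A minimal sketch of the kind of check the reviewer is suggesting, using Triton's model-repository HTTP load endpoint; the URL, port, and helper name are assumptions for illustration, not code from this PR.
```
import requests

BASE_URL = "http://localhost:8000"  # assumed default Triton HTTP port


def load_model_or_fail(model_name):
    """Load a model and fail immediately on a non-200 HTTP response."""
    # POST /v2/repository/models/<model>/load triggers an explicit-mode load.
    response = requests.post(f"{BASE_URL}/v2/repository/models/{model_name}/load")
    # Failing here, at the HTTP response code check, makes the root cause of a
    # load failure obvious instead of surfacing later as a wrong metric value.
    assert response.status_code == 200, (
        f"Failed to load model '{model_name}': {response.status_code} {response.text}"
    )
```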
…am metric buckets (#7752)
How come the core PR was merged way before this one finished? We currently have no ongoing tests for the merged feature on our nightly pipelines in core, right?
It was approved in parallel, a couple of days apart.
I was unable to get a CI run passing due to other build issues, and then @yinggeh added more comments after it was approved, hence the delay.
Yes, I will get this in ASAP after the trtllm code freeze.
Force-pushed from 4764717 to 4ab4c00
What does the PR do?
Adds a test case for the per-model load time metric.
Checklist
<commit_type>: <Title>
Commit Type:
Check the conventional commit type box here and add the label to the GitHub PR.
Related PRs:
Core: triton-inference-server/core#397
Where should the reviewer start?
qa/L0_metrics/general_metrics_test.py
Test plan:
Added tests for the per-model load time metric, covering explicit-mode load and unload cases (a rough sketch of the kind of check involved follows).
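As a hedged sketch of how such a check could be written (not the actual test code from qa/L0_metrics/general_metrics_test.py), assuming the default metrics port and that the metric carries a model label:
```
import requests

METRICS_URL = "http://localhost:8002/metrics"  # assumed default Triton metrics port


def get_model_load_duration(model_name):
    """Return the reported load duration in seconds for a model, or None if absent."""
    response = requests.get(METRICS_URL)
    response.raise_for_status()
    for line in response.text.splitlines():
        if line.startswith("nv_model_load_duration_secs") and f'model="{model_name}"' in line:
            # Prometheus exposition lines end with the sample value.
            return float(line.rsplit(" ", 1)[-1])
    return None


duration = get_model_load_duration("simple")  # "simple" is a placeholder model name
assert duration is not None and duration > 0.0
```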
Background
Improve metrics in Triton