test: TC for Metric P0 nv_load_time per model #7697

Open · wants to merge 3,469 commits into base: main

Conversation

@indrajit96 (Contributor) commented on Oct 14, 2024

What does the PR do?

Adds a test case for the per-model load time metric.

Checklist

  • PR title reflects the change and is of format <commit_type>: <Title>
  • Changes are described in the pull request.
  • Related issues are referenced.
  • Populated the GitHub labels field.
  • Added test plan and verified test passes.
  • Verified that the PR passes existing CI.
  • Verified copyright is correct on all changed files.
  • Added succinct git squash message before merging ref.
  • All template sections are filled out.
  • Optional: Additional screenshots for behavior/output changes with before/after.

Commit Type:

Check the conventional commit type box here and add the label to the GitHub PR.

  • build
  • ci
  • docs
  • feat
  • fix
  • perf
  • refactor
  • revert
  • style
  • test

Related PRs:

Core: triton-inference-server/core#397

Where should the reviewer start?

qa/L0_metrics/general_metrics_test.py

Test plan:

Added tests for the following (a minimal sketch of one such check follows the list):

  1. Normal Mode Model Load
  2. Explicit Model Load
  3. Explicit Model Unload
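
A minimal sketch of what one of these checks could look like, assuming a Triton server with HTTP on port 8000 and metrics on port 8002; the model name `example_model` is a hypothetical placeholder, and the actual tests live in qa/L0_metrics/general_metrics_test.py:

```
# Hypothetical sketch; the real tests are in qa/L0_metrics/general_metrics_test.py.
import re
import unittest

import requests
import tritonclient.http as httpclient

METRICS_URL = "http://localhost:8002/metrics"  # default Triton metrics endpoint
MODEL = "example_model"  # hypothetical model name


class TestLoadTimeMetric(unittest.TestCase):
    def _get_load_duration(self, model_name):
        # Scrape the Prometheus text endpoint and extract this model's gauge value.
        metrics = requests.get(METRICS_URL).text
        match = re.search(
            rf'nv_model_load_duration_secs{{.*model="{model_name}".*}} (\S+)', metrics
        )
        return float(match.group(1)) if match else None

    def test_metrics_load_time_explicit_load(self):
        # Explicit-mode load; load_model() raises InferenceServerException on failure.
        client = httpclient.InferenceServerClient(url="localhost:8000")
        client.load_model(MODEL)
        duration = self._get_load_duration(MODEL)
        self.assertIsNotNone(duration)
        self.assertGreater(duration, 0.0)


if __name__ == "__main__":
    unittest.main()
```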

Background

Improve metrics in Triton

oandreeva-nv and others added 30 commits January 17, 2024 10:32
Added support for OTel context propagation

---------

Co-authored-by: Markus Hennerbichler <[email protected]>
Co-authored-by: Ryan McCormick <[email protected]>
This validates the change made to ../core wrt how model configuration mtime is handled.
* Run all cases with shm probe

* Warmup test and then run multiple iterations

* Log free shared memory on enter/exit of probe

* Add shm probe to all tests

* Add debug_str to shm_util

* Refactor ensemble_io test, modify probe to check for growth rather than inequality

* Improve stability of bls_tensor_lifecycle gpu memory tests

* Add more visibility into failing model/case in python_unittest helper

* [FIXME] Skip probe on certain subtests for now

* [FIXME] Remove shm probe from test_restart on unhealthy stub

* Start clean server run for each bls test case

* Don't exit early on failure so logs can be properly collected

* Restore bls test logic

* Fix shm size compare

* Print region name that leaked

* Remove special handling on unittest

* Remove debug str

* Add enter and exit delay to shm leak probe

---------

Co-authored-by: Ryan McCormick <[email protected]>
* Update trace_summery script

* Remove GRPC_WAITREAD and Overhead
* Add gsutil cp retry helper function

* Add max retry to GCS upload

* Use simple sequential upload
* Handle empty output

* Add test case for 0 dimension output

* Fix up number of tests
* tensorrt-llm benchmarking test
* Update README and versions for 2.42.0 / 24.01 (#6789)

* Update versions

* Update README and versions for 2.42.0 / 24.01

* Fix documentation generation (#6801)

* Set version of Sphinx to 5.0

* Set version to 5.0.0

* Update README.md and versions post 24.01
…und (#6834)

* Update miniconda version

* Install pytest for different py version

* Install pytest
* Add test for shutdown while loading

* Fix intermittent failure on test_model_config_overwrite
Adding OpenTelemetry Batch Span Processor
---------

Co-authored-by: Theo Clark <[email protected]>
Co-authored-by: Ryan McCormick <[email protected]>
* Support Double-Type Infer/Response Parameters
* Base Python Backend Support for Windows
* Add unit test reports to L0_dlpack_multi_gpu

* Add unit test reports to L0_warmup
* Add response statistics

* Add L0_response_statistics

* Enable http vs grpc statistics comparison

* Add docs for response statistics protocol

* Add more comments for response statistics test

* Remove model name from config

* Improve docs wordings

* [Continue] Improve docs wordings

* [Continue] Add more comments for response statistics test

* [Continue 2] Improve docs wordings

* Fix typo

* Remove mentioning decoupled from docs

* [Continue 3] Improve docs wordings

* [Continue 4] Improve docs wordings

Co-authored-by: Ryan McCormick <[email protected]>

---------

Co-authored-by: Ryan McCormick <[email protected]>
* Switch to Python model for busyop test

* Clean up

* Address comment

* Remove unused import
* Add cancellation into response statistics

* Add test for response statistics cancel

* Remove debugging print

* Use is None comparison

* Fix docs

* Use default args None

* Refactor RegisterModelStatistics()
#### Load Time Per-Model
The *Model Load Duration* metric reports the time, in seconds, taken to load a model from storage into CPU/GPU memory.
```
# HELP nv_model_load_duration_secs Model load time in seconds
```
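
For illustration, a gauge exposed this way would appear in the Prometheus /metrics output roughly as follows; the TYPE line, model name, version label, and value shown here are hypothetical placeholders rather than output captured from this PR:

```
# HELP nv_model_load_duration_secs Model load time in seconds
# TYPE nv_model_load_duration_secs gauge
nv_model_load_duration_secs{model="example_model",version="1"} 2.34
```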
Reviewer comment:

Do we need a sample output for a gauge metric?

Resolved review threads on qa/L0_metrics/general_metrics_test.py and qa/L0_metrics/test.sh.
```
# Test 3 for explicit mode UNLOAD
python3 -m pytest --junitxml="general_metrics_test.test_metrics_load_time_explicit_unload.report.xml" $CLIENT_PY::TestGeneralMetrics::test_metrics_load_time_explicit_unload >> $CLIENT_LOG 2>&1
kill_server
set -e

# Test 4 for explicit mode LOAD and UNLOAD with multiple versions
set +e
CLIENT_PY="./general_metrics_test.py"
```
Reviewer comment:

Remove

print(f"Model '{model_name}' loaded successfully.")
else:
except AssertionError:
Reviewer comment:

Do we want the test to pass if it failed to load the model? If not, you should remove the try...except.

Author reply (@indrajit96):

Yes, that's the expected behaviour.
Models should load and unload; otherwise the test should fail, since the subsequent metric checks would be incorrect.

Reviewer reply:

If a load or unload failure will make the test fail anyway, why not let it fail at the HTTP response code check instead of the metrics check? That way people can more easily identify the root cause of a job failure.
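
A minimal sketch of that suggestion, assuming Triton's standard HTTP model-repository endpoint on port 8000; the model name is a hypothetical placeholder:

```
import requests

TRITON_URL = "http://localhost:8000"  # default Triton HTTP endpoint
MODEL = "example_model"               # hypothetical model name


def load_model_or_fail(model_name):
    # Fail on the HTTP response code first, so a load failure is reported directly
    # instead of surfacing later as a confusing metrics mismatch.
    response = requests.post(f"{TRITON_URL}/v2/repository/models/{model_name}/load")
    assert response.status_code == 200, (
        f"Failed to load '{model_name}': {response.status_code} {response.text}"
    )


load_model_or_fail(MODEL)
# Only after the load is confirmed, check nv_model_load_duration_secs in /metrics.
```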

Reviewer comment:

How come the core PR was merged way before this one finished? We currently have no ongoing tests for the merged feature on our nightly pipelines in core, right?

Author reply (@indrajit96):

It was approved in parallel, a couple of days apart.
I was unable to get CI passing due to other build issues, and then @yinggeh added more comments after it was approved, hence the delay.
Yes, I will get this in ASAP after the TRT-LLM code freeze.

@pvijayakrish force-pushed the ibhosale_metrics_google branch from 4764717 to 4ab4c00 on January 15, 2025 17:13