The NVIDIA Triton Inference Server provides a robust, configurable solution for deploying and managing AI models. The Triton Model Navigator automates the process of deploying a model on the Triton Inference Server: it selects the most promising model format and configuration, matches the provided constraints, and helps optimize performance.
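As a sketch of how such an automated run is typically driven, a navigator configuration names the source model and states the deployment constraints the tool should satisfy. The exact CLI flags and config fields should be verified against this repository's documentation; the model name, path, and constraint values below are illustrative assumptions, not a definitive schema:

```yaml
# Illustrative Model Navigator configuration (field names and values are
# assumptions for this sketch; consult the repository docs for the exact schema).
model_name: resnet50
model_path: model-store/resnet50/1/model.pt

# Deployment constraints the navigator should match while searching
# model formats and Triton configurations:
max_latency_ms: 100
min_throughput: 500
```

With a file like this, the navigator can search conversion targets (e.g. TensorRT, ONNX) and Triton configuration variants, then report the candidates that meet the stated latency and throughput bounds.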
Forked from triton-inference-server/model_navigator
mayani-nv/model_navigator