- The code style convention is enforced by clang-format. See the Developer Guide for instructions on how to ensure your contributions conform (a sketch of checking formatting locally is included after this list). In general, please follow the existing conventions in the relevant file, submodule, module, and project when you add new code or when you extend or fix existing functionality.
- Avoid introducing unnecessary complexity into existing code so that maintainability and readability are preserved.
- Try to keep pull requests (PRs) as concise as possible:
  - Avoid committing commented-out code.
  - Wherever possible, each PR should address a single concern. If there are several otherwise-unrelated things that should be fixed to reach a desired endpoint, it is perfectly fine to open several PRs and state in the description which PR depends on another PR. The more complex the changes in a single PR, the more time it will take to review.
- Make sure that the build log is clean, meaning no warnings or errors should be present.
- Make sure all `L0_*` tests pass:
  - In the `qa/` directory, there are basic sanity tests scripted in directories named `L0_...`. See the Testing section in the Developer Guide for instructions on running these tests (a sketch for locating these directories follows this list).
- Triton Inference Server's default build assumes recent versions of dependencies (CUDA, TensorFlow, PyTorch, TensorRT, etc.). Contributions that add compatibility with older versions of those dependencies will be considered, but NVIDIA cannot guarantee that all possible build configurations work, are not broken by future contributions, and retain highest performance.
- Make sure that you can contribute your work to open source (no license and/or patent conflict is introduced by your code). You need to complete the CLA described below before your PR can be merged.
- Thanks in advance for your patience as we review your contributions; we do appreciate them!
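
For the clang-format item above, the following is a minimal sketch of how one might check formatting locally before opening a PR. It is not part of the project's tooling: it assumes `clang-format` is on your PATH, that the `.clang-format` file at the repository root defines the project style, and that `origin/main` is the branch to diff against. The Developer Guide remains the authoritative reference.

```python
#!/usr/bin/env python3
"""Hypothetical helper: run clang-format over C/C++ files changed on the current branch.

Sketch only; assumes clang-format is installed and that the repository's
.clang-format file at the root defines the project style.
"""
import subprocess
import sys

# C/C++ extensions that clang-format should handle.
EXTENSIONS = (".h", ".hpp", ".cc", ".cpp", ".cu", ".cuh")


def changed_files(base="origin/main"):
    """Return paths of C/C++ files that differ from the base branch (assumed to be origin/main)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        check=True, capture_output=True, text=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(EXTENSIONS)]


def main():
    files = changed_files()
    if not files:
        print("No C/C++ changes to format.")
        return 0
    # --style=file picks up the .clang-format at the repository root;
    # -i rewrites the files in place.
    subprocess.run(["clang-format", "--style=file", "-i", *files], check=True)
    print(f"Formatted {len(files)} file(s); review the diff before committing.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Running such a script from the repository root and then inspecting `git diff` is usually enough to confirm that only formatting changed.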
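For the `L0_*` item, the sketch below only shows how the naming convention can be used to enumerate the sanity-test directories under `qa/`. How each test is actually launched (environment, containers, entry-point scripts) is described in the Testing section of the Developer Guide and is not prescribed here.

```python
#!/usr/bin/env python3
"""Hypothetical helper: list the L0_* sanity-test directories under qa/."""
from pathlib import Path


def find_l0_tests(qa_dir="qa"):
    """Return every directory under qa/ whose name starts with L0_."""
    root = Path(qa_dir)
    return sorted(p for p in root.glob("L0_*") if p.is_dir())


if __name__ == "__main__":
    for test_dir in find_l0_tests():
        # Assumption: each L0_* directory carries its own driver script;
        # check the directory contents and the Developer Guide before
        # running anything.
        print(test_dir)
```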
Triton requires that all contributors (or their corporate entity) send a signed copy of the Contributor License Agreement to [email protected].