- missing support for stateful models (e.g., time-series models)
- missing support for models that do not support batching
- no verification of conversion results for the TF -> ONNX and TorchScript -> ONNX conversions (a manual output comparison is sketched below)
- only a single optimization profile can be defined for TensorRT (see the optimization-profile sketch below)
- no support for TF-TRT conversion (a standalone TF-TRT conversion is sketched below)
- no custom ops support
- Triton Inference Server keeps running in the background when the profiling process is interrupted by the user (a manual cleanup sketch is given below)
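
As a workaround for the missing conversion verification, outputs of the source and converted models can be compared by hand. The sketch below assumes a TorchScript model `model.pt` converted to `model.onnx`, a single input with the shape shown, and a model that returns a single tensor; the file names, shape, and tolerance are illustrative, not tool defaults.

```python
# Manual verification sketch: compare TorchScript outputs with ONNX Runtime outputs.
# Paths, the input shape, and the tolerance are assumptions, not Model Navigator defaults.
import numpy as np
import onnxruntime as ort
import torch

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape

# Run the original TorchScript model.
ts_model = torch.jit.load("model.pt").eval()
with torch.no_grad():
    ts_output = ts_model(torch.from_numpy(dummy_input)).numpy()

# Run the converted ONNX model.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
onnx_output = sess.run(None, {input_name: dummy_input})[0]

# The converted model should match the source within a small tolerance.
assert np.allclose(ts_output, onnx_output, atol=1e-4), "conversion outputs diverge"
```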
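
For context on the single-profile limitation: the TensorRT builder API itself accepts several optimization profiles, each covering a different dynamic-shape range, so additional profiles currently have to be attached outside the tool. A minimal sketch, where the tensor name `input__0` and the shape ranges are assumptions:

```python
# Sketch of attaching more than one optimization profile with the TensorRT Python API.
# The tensor name "input__0" and the shape ranges are assumptions for illustration.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Profile for small batches: min, opt, max shapes.
small = builder.create_optimization_profile()
small.set_shape("input__0", (1, 3, 224, 224), (4, 3, 224, 224), (8, 3, 224, 224))
config.add_optimization_profile(small)

# Profile for large batches: min, opt, max shapes.
large = builder.create_optimization_profile()
large.set_shape("input__0", (8, 3, 224, 224), (16, 3, 224, 224), (32, 3, 224, 224))
config.add_optimization_profile(large)
# ... the network definition and engine build would follow here.
```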
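
TF-TRT conversion is not handled by the tool, but it can be run separately with TensorFlow's own converter. A minimal sketch, assuming a SavedModel directory named `model.savedmodel` (the paths are illustrative):

```python
# Standalone TF-TRT conversion sketch; the directories are assumptions for illustration.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="model.savedmodel")
converter.convert()                      # replace supported subgraphs with TRT ops
converter.save("model_trt.savedmodel")   # write the converted SavedModel
```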
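
Until the interrupted-profile cleanup is fixed, a leftover `tritonserver` process has to be stopped by hand. A sketch using `psutil`; matching by process name is an assumption, and when the server runs inside a container the container should be stopped instead:

```python
# Cleanup sketch: terminate a tritonserver process left behind after an interrupted run.
# Matching by process name is an assumption; adjust if the server runs inside a container.
import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "tritonserver":
        proc.terminate()        # ask the server to shut down gracefully
        proc.wait(timeout=30)   # fall back to proc.kill() if this times out
```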