Using Bert/Roberta with "tensorrtllm" backend directly? (no Python lib like tensorrt-llm package) #368
Comments
Currently, the backend only supports decoder models.
Thank you a lot @byshiue for your answer. Our use cases for encoder models are RAG-related. FWIW, on A10 GPUs we got a 2.2x speedup at batch 64 / seqlen 430 (on average) compared to PyTorch FP16 for reranking (cross-encoder), and, for our data, a 3.1x speedup on indexing (bi-encoder setup).
@pommedeterresautee Why don't you use TensorRT for the embedding model instead of TensorRT-LLM?
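For context, building a plain TensorRT engine for an encoder-only model typically starts from an ONNX export. Below is a minimal sketch of that route, assuming a hypothetical ONNX file, tensor names and shape ranges (none of these come from this issue):

```python
# Sketch: build a plain TensorRT engine from an ONNX export of the embedding model.
# "model.onnx", "input_ids"/"attention_mask" and the shape ranges are assumptions.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # match the PyTorch FP16 baseline

# Dynamic shapes: batch up to 64, sequence length up to 512 (illustrative values)
profile = builder.create_optimization_profile()
for name in ("input_ids", "attention_mask"):
    profile.set_shape(name, (1, 1), (32, 256), (64, 512))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```

The resulting `model.plan` can then be served with Triton's regular TensorRT backend, which does not require the tensorrt-llm Python package.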
@byshiue Can't we just use chained models (ensemble) for any encoder-decoder model? I mean, the encoder's output serves as the input for the decoder, and I guess this also applies to the cross-attention layer? What constraints prevent us from using an encoder-decoder model here? Thanks in advance.
@robosina It is not supported yet; that is not the same as saying it cannot be supported.
Hi @byshiue, is sequence classification with T5 models not supported yet?
I'd love to see this feature - is there anywhere I can track it?
@pommedeterresautee did you notice speed-ups when comparing TensorRT-LLM vs TensorRT (from
On large batches, yes, but we are using custom code to reach peak performance.
System Info
Who can help?
As it is not obvious whether this is a doc issue or a feature request:
@ncomly-nvidia @juney-nvidia
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
I have compiled a RoBERTa model for classification with tensorrt-llm. Accuracy is good, and so is performance.
It follows the code from the examples folder of the tensorrt-llm repo.
If I follow the recipe from NVIDIA/TensorRT-LLM#778, Triton serves the model with the expected performance.
However, this PR relies on the tensorrt-llm Python package, which means either a custom Python environment that is quite slow to load, or a custom image. If possible, I would prefer to use the vanilla image for maintenance reasons.
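For reference, that PR-style setup roughly amounts to a Triton python-backend `model.py` that loads the engine through the tensorrt-llm runtime. A minimal, hypothetical sketch follows; the engine path, tensor names, dtype and the "logits" output name are assumptions, and the actual code in the PR may differ:

```python
# Rough sketch of a Triton "python" backend model.py running a TensorRT-LLM
# BERT/RoBERTa classification engine. Engine path and tensor/output names
# ("/engines/model-ce/rank0.engine", "input_ids", "input_lengths", "logits")
# are assumptions, not copied from the PR.
import torch
import tensorrt as trt
import triton_python_backend_utils as pb_utils
from tensorrt_llm.runtime import Session, TensorInfo


class TritonPythonModel:
    def initialize(self, args):
        with open("/engines/model-ce/rank0.engine", "rb") as f:
            self.session = Session.from_serialized_engine(f.read())
        self.stream = torch.cuda.Stream()

    def execute(self, requests):
        responses = []
        for request in requests:
            input_ids = torch.from_numpy(
                pb_utils.get_input_tensor_by_name(request, "input_ids").as_numpy()
            ).int().cuda()
            input_lengths = torch.from_numpy(
                pb_utils.get_input_tensor_by_name(request, "input_lengths").as_numpy()
            ).int().cuda()

            inputs = {"input_ids": input_ids, "input_lengths": input_lengths}
            # Ask the engine for the output shapes given these input shapes.
            output_info = self.session.infer_shapes([
                TensorInfo("input_ids", trt.DataType.INT32, input_ids.shape),
                TensorInfo("input_lengths", trt.DataType.INT32, input_lengths.shape),
            ])
            # Assuming an FP16 engine; the real code maps the TRT dtype instead.
            outputs = {
                t.name: torch.empty(tuple(t.shape), dtype=torch.float16, device="cuda")
                for t in output_info
            }
            self.session.run(inputs, outputs, self.stream.cuda_stream)
            torch.cuda.synchronize()

            out = pb_utils.Tensor("logits", outputs["logits"].float().cpu().numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```

This is exactly the kind of custom Python environment and image I would like to avoid by using the tensorrtllm backend directly.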
I tried to use the tensorrtllm backend directly, but it crashed whatever I tried. The /engines/model-ce/config.json contains:
However it crashes (see below).
Is it even possible to use this backend for a BERT-like model?
With FasterTransformer development stopped, and the vanilla TensorRT example of BERT deployment being 2 years old, the tensorrt-llm option seems to be the most up-to-date one for NLP models.
Expected behavior
It prints the IP and the port and serves the model.
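For illustration, once the server is up, the model would be queried with a standard Triton client along these lines (the model name "model-ce" and the tensor/output names are assumptions):

```python
# Hypothetical client call against the served model; model name and tensor
# names ("model-ce", "input_ids", "input_lengths", "logits") are assumptions.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

input_ids = np.zeros((1, 430), dtype=np.int32)   # tokenized text goes here
input_lengths = np.array([[430]], dtype=np.int32)

inputs = [
    httpclient.InferInput("input_ids", list(input_ids.shape), "INT32"),
    httpclient.InferInput("input_lengths", list(input_lengths.shape), "INT32"),
]
inputs[0].set_data_from_numpy(input_ids)
inputs[1].set_data_from_numpy(input_lengths)

result = client.infer("model-ce", inputs)
print(result.as_numpy("logits"))
```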
Actual behavior
Trying to load the server produces these logs:
Additional notes
N/A