Does TensorRT-LLM support serving a 4-bit quantized Unsloth Llama model? #2472
Labels: quantization (Issue about lower bit quantization, including int8, int4, fp8), question (Further information is requested), triaged (Issue has been triaged by maintainers)
We want to deploy https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit, a 4-bit quantized version of the Llama-3.2-1B model produced with bitsandbytes. Can we deploy this using the TensorRT-LLM backend? If so, is there any documentation we can refer to?
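For context, here is a sketch of one possible workaround, on the assumption that TensorRT-LLM's converters expect full-precision (or TensorRT-LLM-quantized) checkpoints rather than bitsandbytes NF4 weights: start from the full-precision base model and let TensorRT-LLM's own tooling do the 4-bit quantization. The model id `meta-llama/Llama-3.2-1B-Instruct` and the output path below are assumptions for illustration, not a confirmed recipe.

```python
# Sketch (not an official recipe): export a plain fp16 checkpoint that
# TensorRT-LLM's converters understand, instead of the bnb-4bit weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the unsloth bnb-4bit repo is derived from this base model.
base = "meta-llama/Llama-3.2-1B-Instruct"

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Save an unquantized checkpoint; TensorRT-LLM can then re-quantize it
# (e.g., to a 4-bit format such as INT4 AWQ) with its own tooling.
model.save_pretrained("llama-3.2-1b-instruct-fp16")
tokenizer.save_pretrained("llama-3.2-1b-instruct-fp16")
```

From there, the quantization example script in the TensorRT-LLM repo (e.g., `examples/quantization/quantize.py` with a 4-bit format like `--qformat int4_awq`) followed by `trtllm-build` should produce a servable engine; check the quantization README for your TensorRT-LLM version, since the exact flags change between releases.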