I built my model with bfloat16 as the LoRA data type, but in https://github.com/triton-inference-server/tensorrtllm_backend/blob/v0.16.0/all_models/inflight_batcher_llm/tensorrt_llm/1/model.py, lora_weights is float16.
I have to change line 271 in this file from:
if lora_weights is not None:
    kwargs["weights"] = from_numpy(lora_weights).squeeze()
to:
if lora_weights is not None:
    kwargs["weights"] = from_numpy(lora_weights).squeeze().to(torch.bfloat16)
Could you change this code so that it works for both float16 and bfloat16?
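A dtype-agnostic fix would cast the weights to whatever dtype the engine was built with, instead of hard-coding either precision. Below is a minimal sketch; `prepare_lora_weights` and its `target_dtype` parameter are hypothetical names, and the real backend would derive the dtype from the engine/model config rather than take it as an argument:

import numpy as np
import torch

# Hypothetical sketch: cast LoRA weights to the dtype the engine was built
# with, so the same code path serves both float16 and bfloat16 engines.
def prepare_lora_weights(lora_weights, target_dtype):
    if lora_weights is None:
        return None
    # numpy has no bfloat16, so the cast has to happen on the torch tensor
    # after from_numpy, not on the numpy array itself.
    return torch.from_numpy(lora_weights).squeeze().to(target_dtype)

# Example: float16 weights from the request, cast for a bfloat16 engine.
weights_np = np.random.rand(1, 8, 16).astype(np.float16)
print(prepare_lora_weights(weights_np, torch.bfloat16).dtype)  # torch.bfloat16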