As before, we will base our choice on https://huggingface.co/microsoft/Phi-3-vision-128k-instruct/tree/main and quantize the model to Q4_x. Some articles state that quantization hurts the accuracy of multimodal models, but since we are still at the experiment stage, it should be OK.
And after we migrate the base model to a multimodal one, the basic requirements for the local environment will also need to be upgraded.
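For reference, a minimal sketch of what the Q4_x quantization step could look like using llama.cpp's GGUF tooling. This is an assumption, not a confirmed workflow: the `convert_hf_to_gguf.py` script and `llama-quantize` binary names match recent llama.cpp releases but have changed across versions, and Phi-3-vision's image projector typically needs a separate conversion step that is omitted here.

```python
# Sketch: convert Phi-3-vision weights to GGUF and quantize to a Q4 variant.
# Assumes a local llama.cpp checkout; tool names/paths vary by version, and
# the vision projector (mmproj) needs its own conversion, not shown here.
import subprocess
from pathlib import Path

from huggingface_hub import snapshot_download

LLAMA_CPP = Path("./llama.cpp")  # assumption: path to a built llama.cpp checkout
QUANT_TYPE = "Q4_K_M"            # one of the Q4_x quantization variants

# 1. Fetch the original checkpoint from the Hub.
model_dir = snapshot_download("microsoft/Phi-3-vision-128k-instruct")

# 2. Convert the HF checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", str(LLAMA_CPP / "convert_hf_to_gguf.py"), model_dir,
     "--outfile", "phi3v-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 3. Quantize the f16 GGUF down to the chosen Q4 variant.
subprocess.run(
    [str(LLAMA_CPP / "llama-quantize"), "phi3v-f16.gguf",
     f"phi3v-{QUANT_TYPE}.gguf", QUANT_TYPE],
    check=True,
)
```

Q4_K_M is used above only as a representative Q4_x choice; any of the Q4 variants would slot into the same pipeline, with the usual size/accuracy trade-off between them.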
Currently, it's absolutely enough for us to begin our multimodal journey.