I checked the log and pytorch_model_distill.pt is the checkpoint picked up during processing. However, the latency is the same as with the ema checkpoint: 51 s on an A100. Is this normal? Is there an argument I haven't set correctly to unlock the efficient (distilled) inference path?
The output quality is slightly lower than with the ema checkpoint, so I assume the distilled model really is being used. But why does the latency stay the same?
I downloaded the checkpoint from here: https://hf-mirror.com/Tencent-Hunyuan/Distillation-v1.1/tree/main
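For context, here is a minimal timing sketch of the comparison I have in mind, assuming a diffusers-style HunyuanDiTPipeline and a hypothetical repo id (not the exact script I ran):

```python
import time
import torch
from diffusers import HunyuanDiTPipeline  # assumes a recent diffusers release

# Hypothetical repo id for illustration only; the distilled weights I actually
# use come from https://hf-mirror.com/Tencent-Hunyuan/Distillation-v1.1/tree/main
pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers",  # assumption, not my real path
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of a cat"

def timed_run(num_steps: int) -> float:
    """Run one generation and return wall-clock seconds."""
    torch.cuda.synchronize()
    start = time.time()
    pipe(prompt, num_inference_steps=num_steps)
    torch.cuda.synchronize()
    return time.time() - start

# My understanding: swapping in the distilled checkpoint alone should not change
# latency if the sampler still runs the same number of steps; the speedup would
# only appear once the step count is reduced as well.
print("100 steps:", timed_run(100))
print(" 25 steps:", timed_run(25))
```

Is a reduced step count (or some equivalent argument) what I am missing?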