Can it be used in SD1.5, and can it be combined with other acceleration methods such as ByteDance/Hyper-SD? #1
Comments
MixDQ supports SD1.5. By using the lcm_lora.yaml config, you can quantize SD1.5-like models (e.g., Dreamlike) with LCM-LoRA. Our quantization code is independent of the timestep-wise acceleration method: by substituting the sdxl-turbo model ID in the config, it is compatible with HyperSD.
Can it be used directly on SD1.5 or SDXL? What I mean is using W8A8 to accelerate normal 20-step inference, without LCM-LoRA or SDXL-Turbo.
Yes, it can be used directly. Just follow the example sdxl.yaml in our configs. For the SD1.5 model, you could remove the LoRA-related configs in
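For readers unfamiliar with the term: W8A8 means quantizing both weights and activations to 8 bits. A minimal, self-contained sketch of symmetric per-tensor int8 fake quantization (not MixDQ's actual code, just the underlying idea):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    if scale == 0.0:  # all-zero tensor edge case
        scale = 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float values."""
    return [v * scale for v in q]

# Toy weight tensor; real W8A8 applies this to every layer's weights
# and (with calibrated ranges) to its activations as well.
weights = [0.5, -1.0, 0.25, 0.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Per-element rounding error is bounded by scale / 2.
```

With 8-bit codes both the weight storage and the matmul arithmetic can use int8 kernels, which is where the speedup at ordinary 20-step inference comes from.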
If I want to run normal SDXL 20-step inference with the pipeline from https://huggingface.co/nics-efc/MixDQ/tree/main, what should I do?
The pipeline from https://huggingface.co/nics-efc/MixDQ/tree/main seems to be compatible only with LCM-LoRA and SDXL-Turbo.
It seems I need to generate calibration data and run the post-training quantization (PTQ) process.
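For context on what "generate calibration data" involves: PTQ typically runs a handful of prompts through the full-precision model and records each layer's activation ranges, from which quantization parameters are derived. A toy sketch of min-max range collection (illustrative only, not MixDQ's observer):

```python
class MinMaxObserver:
    """Track the running min/max of activations seen during calibration."""
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def observe(self, batch):
        """Update the running range with one batch of activation values."""
        self.lo = min(self.lo, min(batch))
        self.hi = max(self.hi, max(batch))

    def scale_zero_point(self, bits=8):
        """Asymmetric unsigned quantization parameters from the observed range."""
        qmax = 2 ** bits - 1
        scale = (self.hi - self.lo) / qmax
        zero_point = round(-self.lo / scale)
        return scale, zero_point

obs = MinMaxObserver()
# Stand-in lists for activations produced by real calibration prompts.
for batch in ([0.1, 0.9], [-0.2, 0.5]):
    obs.observe(batch)
scale, zp = obs.scale_zero_point()
```

MixDQ's actual PTQ uses its own calibration scripts and mixed-precision bit allocation; the point here is only that calibration is cheap (a forward pass over a small dataset), so redoing it for a standard 20-step SDXL setup is feasible.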