Strange Results From W2A8 Model #7
Comments
I also have the same problem.
@ThisisBillhe Sorry to bother you, but I still cannot reproduce the W2A8 result. Is there any way to fix this?
Hi, I will look into this when I am available; I am working on another project right now.
Thanks for the quick reply! Yes, I successfully trained the W4A4 model and the results look good. But for W2A8, I really don't know why the results are so strange.
What about using more steps and more epochs during training, e.g., 250 ddim_steps?
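(For context, a small sketch of what raising ddim_steps changes: the sampler denoises over a denser subsequence of the training timesteps. The uniform spacing and the 1000-step training schedule below are common defaults for ImageNet latent-diffusion models, assumed here rather than taken from this repo.)

```python
# Sketch only: uniform DDIM timestep selection, a common default.
# num_train_steps = 1000 is an assumption, not confirmed by this thread.
import numpy as np

def ddim_timesteps(num_train_steps: int, ddim_steps: int) -> np.ndarray:
    """Evenly spaced subsequence of training timesteps used at sampling time."""
    stride = num_train_steps // ddim_steps
    return np.arange(0, num_train_steps, stride)

print(len(ddim_timesteps(1000, 20)))   # 20 denoising steps (current setting)
print(len(ddim_timesteps(1000, 250)))  # 250 denoising steps (suggested)
```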
The best result I got was from training directly with 20 ddim_steps for 800 epochs, which gives FID 21 and sFID 12, still far from the paper's results. When I double the number of training epochs, training crashes at epoch 1200. I must be doing something wrong; any idea how to reproduce the paper's results?
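(As an aside, FID numbers like these can be sanity-checked locally. The sketch below uses torchmetrics, which is an assumption; the thread does not say which FID implementation was used, and sFID requires a different feature layer or tool.)

```python
# Hypothetical FID check with torchmetrics (requires torchmetrics[image]).
# Replace the random uint8 tensors with reference and generated samples.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # standard Inception-v3 pool features
real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)  # stand-in data
fake = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)  # stand-in data
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # lower is better
```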
Hi,
I ran the W2A8 ImageNet finetuning with the provided script by directly setting n_bits_w = 2 and n_bits_a = 8.
However, it produces unexpected results in the W2A8 setting. Could you please advise whether any specific hyperparameters or configurations in the default code need adjustment to address this problem?
Here are my results.
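(For readers unfamiliar with the notation, the sketch below illustrates what the W2A8 setting means: 2-bit weights and 8-bit activations. It is a generic symmetric fake-quantizer written for this thread, not the repo's actual implementation, and uniform_fake_quant is a hypothetical helper.)

```python
# Sketch only: generic symmetric uniform fake-quantization, not this repo's code.
import torch

def uniform_fake_quant(x: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Symmetric uniform fake-quantization of x to n_bits integer levels."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantize back to float ("fake" quantization)

n_bits_w, n_bits_a = 2, 8  # the W2A8 setting discussed in this thread

w = torch.randn(64, 64)   # stand-in weight tensor
a = torch.randn(16, 64)   # stand-in activation tensor
w_q = uniform_fake_quant(w, n_bits_w)  # 2-bit weights: at most 4 levels
a_q = uniform_fake_quant(a, n_bits_a)  # 8-bit activations: up to 256 levels
print(w_q.unique().numel(), a_q.unique().numel())
```

The gap between these two granularities is one intuition for why W4A4 trains fine while W2A8 is much harder to finetune.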