Some weights of the model checkpoint were not used when initializing UNet3DConditionModel:
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Has anyone else had similar issues? I believe it has to do with the LoRA training, because I only notice this behavior on models created while also training the new webui LoRA. The most recent model did not use the LoRAs and had no such issues.
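In case it helps, here is a minimal sketch (not the repo's actual inference.py; the path and dtype are placeholders) of the kind of diffusers loading call that can emit this warning when the saved checkpoint folder carries extra parameters, such as LoRA weights written alongside the UNet:

```python
# Minimal sketch, not the repo's inference.py. Path and dtype are placeholders.
# Loading a fine-tuned text-to-video model saved in the standard diffusers
# layout; if the checkpoint contains extra parameters (e.g. stray LoRA weights),
# the loader warns that some weights "were not used when initializing" the model.
import torch
from diffusers import TextToVideoSDPipeline

pipe = TextToVideoSDPipeline.from_pretrained(
    "./outputs/my_finetuned_model",  # placeholder path
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
```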
Hello. I cannot reproduce this issue. I would check to see that your model path is correct. If it is, then could you please post the following? (You can remove any personally identifiable information).
- The config.json in your model's directory.
- The log leading up to this point (the one you have in your post).
I'm not able to get to the YAML or log file at the moment, but maybe you will notice something here? The error message occurred when loading the model for inference using inference.py.
I believe the error message could be related to this, since it's a similar error message, but in mine it lists many of the layers in the model.
Since disabling the LoRA training, I haven't had issues with that error message. It could be a glitch caused by the version of the software I used, but I wanted to raise it to confirm whether something is truly going on. Is anyone else able to reproduce the error message with this model? Or is there something wrong in the model's configuration that could be easily fixed so that I could use the model?
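If it helps anyone debugging this, below is a hedged diagnostic sketch (the directory and weight file names are assumptions following the usual diffusers layout, not taken from this issue) that compares the keys stored in the saved unet checkpoint with the keys a freshly constructed UNet3DConditionModel expects. Leftover LoRA parameters would show up under "unused", which is exactly what the load warning is reporting.

```python
# Hedged diagnostic sketch; model_dir is a placeholder and the weight file
# names assume the usual diffusers layout (they may differ by version).
import os
import torch
from diffusers import UNet3DConditionModel
from safetensors.torch import load_file

model_dir = "./outputs/my_finetuned_model/unet"  # placeholder path

# Build an empty model from the saved config so we know which keys it expects.
config = UNet3DConditionModel.load_config(model_dir)
model = UNet3DConditionModel.from_config(config)
expected = set(model.state_dict().keys())

# Load the raw checkpoint tensors without instantiating anything.
st_path = os.path.join(model_dir, "diffusion_pytorch_model.safetensors")
bin_path = os.path.join(model_dir, "diffusion_pytorch_model.bin")
saved = load_file(st_path) if os.path.exists(st_path) else torch.load(bin_path, map_location="cpu")

unused = sorted(set(saved) - expected)   # keys in the file the model never uses
missing = sorted(expected - set(saved))  # keys the model wants but the file lacks
print(f"{len(unused)} unused keys, e.g.:", unused[:10])
print(f"{len(missing)} missing keys, e.g.:", missing[:10])
```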