SD3 DreamBooth fine-tuning fails with an error about missing pp-peft #837

Open
knoka812 opened this issue Nov 26, 2024 · 4 comments
@knoka812

Yesterday, when I ran DreamBooth fine-tuning with SD3, it kept failing with this error:

If your task is similar to the task the model of the checkpoint was trained on, you can already use T5EncoderModel for predictions without further training.
All model checkpoint weights were used when initializing AutoencoderKL.
All the weights of AutoencoderKL were initialized from the model checkpoint at stabilityai/stable-diffusion-3-medium-diffusers.
If your task is similar to the task the model of the checkpoint was trained on, you can already use AutoencoderKL for predictions without further training.
All model checkpoint weights were used when initializing SD3Transformer2DModel.
All the weights of SD3Transformer2DModel were initialized from the model checkpoint at stabilityai/stable-diffusion-3-medium-diffusers.
If your task is similar to the task the model of the checkpoint was trained on, you can already use SD3Transformer2DModel for predictions without further training.
Traceback (most recent call last):
File "/home/aistudio/PaddleMIX/ppdiffusers/examples/dreambooth/train_dreambooth_lora_sd3.py", line 1571, in <module>
main(args)
File "/home/aistudio/PaddleMIX/ppdiffusers/examples/dreambooth/train_dreambooth_lora_sd3.py", line 1058, in main
transformer.add_adapter(transformer_lora_config)
File "/home/aistudio/.local/lib/python3.10/site-packages/ppdiffusers/models/modeling_utils.py", line 349, in add_adapter
check_peft_version(min_version=MIN_PEFT_VERSION)
File "/home/aistudio/.local/lib/python3.10/site-packages/ppdiffusers/utils/peft_utils.py", line 259, in check_peft_version
raise ValueError("PP-PEFT is not installed. Please install it with 'pip install -U ppdiffusers'")
ValueError: PP-PEFT is not installed. Please install it with 'pip install -U ppdiffusers'

But I checked beforehand and ppdiffusers is installed, version 0.29; restarting the kernel doesn't help either.
Was pp-peft removed in 0.29?
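
A quick way to rule out a stale install is to confirm, inside the same kernel, which ppdiffusers build is actually imported and whether the PEFT backend flag is set. This is only a hedged diagnostic sketch: it assumes ppdiffusers exposes `__version__` the way diffusers does, and USE_PEFT_BACKEND is the environment variable mentioned later in this thread.

```python
# Hedged diagnostic sketch: confirm the ppdiffusers build and the PEFT backend flag.
# Assumption: ppdiffusers exposes __version__ (as diffusers does); USE_PEFT_BACKEND
# is the environment variable referenced later in this thread.
import os

import ppdiffusers

print("ppdiffusers version:", ppdiffusers.__version__)
print("USE_PEFT_BACKEND =", os.environ.get("USE_PEFT_BACKEND", "<not set>"))
```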

@knoka812 (Author)

Switching to SDXL works fine.

@knoka812 (Author)

That one is solved, but now it fails with the error below. The environment is the A100 40G one.

File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1532, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/ppdiffusers/transformers/clip/modeling.py", line 382, in forward
hidden_states, attn_weights = self.self_attn(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1532, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/ppdiffusers/transformers/clip/modeling.py", line 307, in forward
attn_weights = F.softmax(attn_weights, axis=-1)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/functional/activation.py", line 1247, in softmax
return _C_ops.softmax(outs_cast, axis)
OSError: (External) CUDNN error(4), CUDNN_STATUS_INTERNAL_ERROR.
[Hint: 'CUDNN_STATUS_INTERNAL_ERROR'. An internal cuDNN operation failed. ] (at ../paddle/phi/backends/gpu/gpu_resources.cc:315)

luyao-cv assigned westfish and unassigned JunnYu on Nov 29, 2024
@westfish (Contributor)

As noted in the documentation, you need to set the environment variable: export USE_PEFT_BACKEND=True
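
For completeness, the same flag can also be enabled from Python instead of the shell. A minimal sketch, assuming the variable is read when ppdiffusers is imported, so it must be set before any ppdiffusers import (and before launching train_dreambooth_lora_sd3.py):

```python
# Minimal sketch: enable the PEFT backend from Python instead of `export`.
# Assumption: ppdiffusers reads USE_PEFT_BACKEND from the environment at import
# time, so the variable must be set before ppdiffusers is imported.
import os

os.environ["USE_PEFT_BACKEND"] = "True"

import ppdiffusers  # imported only after the flag is set

print("USE_PEFT_BACKEND =", os.environ["USE_PEFT_BACKEND"])
```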

@knoka812 (Author)

> As noted in the documentation, you need to set the environment variable: export USE_PEFT_BACKEND=True

Yes, I did set that, but even after setting it there is still an error:
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1532, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/ppdiffusers/transformers/clip/modeling.py", line 382, in forward
hidden_states, attn_weights = self.self_attn(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/layer/layers.py", line 1532, in call
return self.forward(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/ppdiffusers/transformers/clip/modeling.py", line 307, in forward
attn_weights = F.softmax(attn_weights, axis=-1)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/nn/functional/activation.py", line 1247, in softmax
return _C_ops.softmax(outs_cast, axis)
OSError: (External) CUDNN error(4), CUDNN_STATUS_INTERNAL_ERROR.
[Hint: 'CUDNN_STATUS_INTERNAL_ERROR'. An internal cuDNN operation failed. ] (at ../paddle/phi/backends/gpu/gpu_resources.cc:315)
@westfish
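
CUDNN_STATUS_INTERNAL_ERROR is often an environment issue (a Paddle build that does not match the installed CUDA/cuDNN runtime, or the GPU running out of memory) rather than a bug in the training script. Below is a hedged diagnostic sketch using public PaddlePaddle APIs (the version-reporting helpers are assumed to be available in recent Paddle releases) to confirm that the build and the device actually work before re-running training:

```python
# Hedged diagnostic sketch for the CUDNN_STATUS_INTERNAL_ERROR above: verify that
# the Paddle build matches the CUDA/cuDNN runtime and that a small GPU op runs.
import paddle

print("paddle:", paddle.__version__)
print("compiled with CUDA:", paddle.is_compiled_with_cuda())
print("CUDA:", paddle.version.cuda(), "| cuDNN:", paddle.version.cudnn())
print("device:", paddle.device.get_device())

# Built-in installation check; fails loudly if the GPU/cuDNN setup is broken.
paddle.utils.run_check()

# A tiny softmax on the current device, mirroring the op that failed in the traceback.
x = paddle.randn([2, 8], dtype="float32")
print(paddle.nn.functional.softmax(x, axis=-1).shape)
```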
