Error running job: 'NoneType' object is not callable #243
Comments
Fixed it by uninstalling diffusers, reinstalling it, and going back to an older commit.
I got this error too, and reinstalled with
I think the issue is caused by diffusers-0.33.0.dev0. Reinstalling diffusers with pip install diffusers fixes it.
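A minimal sketch of that workaround from the command line, assuming a standard pip-managed environment (the exact version pip resolves to may vary by date):

```
# Replace the broken dev build with the current stable release from PyPI
pip uninstall -y diffusers
pip install diffusers
```

You can confirm which version ended up installed with `pip show diffusers`.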
OK
I uninstalled and reinstalled it, yet it's still showing the same thing. What should I do? I am using AI Toolkit on RunPod; until yesterday everything was alright, I don't know what happened today.
It's still not working in Colab.
Works - thank you hero :-)
The workaround worked for me as well, but what needs to happen for the issue to be fixed at the source?
Thank you for explaining and for the fix. I am not sure how I will know / how I can tell when the proposed change is merged into the main code; I am not very proficient with GitHub.
In the Colab notebook, this issue is resolved by uninstalling and reinstalling diffusers. However, since this causes the session to restart and the variables to be lost, do not re-run the first cell afterwards; otherwise the old non-functional diffusers will be reinstalled. Instead, restart from the cell where the HF tokens are set.
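For reference, a rough sketch of what such a Colab cell could look like, assuming the notebook installs packages with `!pip` (the cell layout is taken from the comment above, not verified against the actual notebook):

```
# Run in its own Colab cell; the runtime may prompt to restart afterwards.
# After restarting, skip the first install cell and continue from the HF token cell.
!pip uninstall -y diffusers
!pip install diffusers
```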
Downgrading to diffusers 0.32.2 also resolved this error for me on RunPod today. I uninstalled and reinstalled it with pip from PyPI as others recommended. Thanks very much.
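If you want to pin the exact version mentioned above rather than take whatever PyPI currently serves, a sketch:

```
# Pin the known-good release reported in this thread
pip uninstall -y diffusers
pip install diffusers==0.32.2
```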
@AfterHAL Thank you, saved me twice.
I think it's been fixed at the source; out of the box it now installs diffusers 0.32.2.
Fixed.
This is for bugs only
Did you already ask in the discord?
Yes
You verified that this is a bug and not a feature request or question by asking in the discord?
Yes/No
Result:
========================================
Traceback (most recent call last):
File "E:\ai-toolkit\run.py", line 90, in
main()
File "E:\ai-toolkit\run.py", line 86, in main
raise e
File "E:\ai-toolkit\run.py", line 78, in main
job.run()
File "E:\ai-toolkit\jobs\ExtensionJob.py", line 22, in run
process.run()
File "E:\ai-toolkit\jobs\process\BaseSDTrainProcess.py", line 1853, in run
loss_dict = self.hook_train_loop(batch_list)
File "E:\ai-toolkit\extensions_built_in\sd_trainer\SDTrainer.py", line 1659, in hook_train_loop
loss = self.train_single_accumulation(batch)
File "E:\ai-toolkit\extensions_built_in\sd_trainer\SDTrainer.py", line 1606, in train_single_accumulation
noise_pred = self.predict_noise(
File "E:\ai-toolkit\extensions_built_in\sd_trainer\SDTrainer.py", line 968, in predict_noise
return self.sd.predict_noise(
File "E:\ai-toolkit\toolkit\stable_diffusion_model.py", line 1871, in predict_noise
noise_pred = self.unet(
File "E:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\ai-toolkit\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "E:\ai-toolkit\venv\lib\site-packages\diffusers\models\transformers\transformer_flux.py", line 520, in forward
encoder_hidden_states, hidden_states = self._gradient_checkpointing_func(
TypeError: 'NoneType' object is not callable
my_first_flux_lora_v1: 0%| | 0/2000 [00:00<?, ?it/s]
The error comes from turning gradient checkpointing on (this is on Windows).
Turning gradient checkpointing off instead makes it run out of memory:
Caching latents to disk: 100%|███████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Generating baseline samples before training
my_first_flux_lora_v1: 0%| | 0/2000 [00:01<?, ?it/s, lr: 1.0e-04 loss: 5.966e-01]Error running job: CUDA out of memory. Tried to allocate 216.00 MiB. GPU 0 has a total capacity of 79.54 GiB of which 120.81 MiB is free. Of the allocated memory 70.52 GiB is allocated by PyTorch, and 1.05 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
========================================
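On the out-of-memory path: the error message itself suggests trying PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True. A sketch of setting it in the shell before launching the job; note this only helps when the failure comes from fragmentation of reserved-but-unallocated memory, not from a genuine capacity shortfall:

```
# Windows (cmd), in the same shell that starts the training run:
set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Linux/macOS shells use export instead:
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```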