
Runtime error using cpu instead of cuda #32

Closed
LazyCat420 opened this issue Apr 5, 2023 · 5 comments

Comments

@LazyCat420

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 1
----> 1 model = get_kandinsky2(
      2     'cuda',
      3     task_type='text2img',
      4     cache_dir='/tmp/kandinsky2',
      5     model_version='2.1',
      6     use_flash_attention=False
      7 )

File f:\AI_Scripts\Kandinsky 2.1\2.1\Kandisky\lib\site-packages\kandinsky2\__init__.py:179, in get_kandinsky2(device, task_type, cache_dir, use_auth_token, model_version, use_flash_attention)
    172 model = get_kandinsky2_0(
    173     device,
    174     task_type=task_type,
    175     cache_dir=cache_dir,
    176     use_auth_token=use_auth_token,
    177 )
    178 elif model_version == "2.1":
--> 179 model = get_kandinsky2_1(
    180     device,
    181     task_type=task_type,
    182     cache_dir=cache_dir,
    183     use_auth_token=use_auth_token,
    184     use_flash_attention=use_flash_attention,
...
    170 'to map your storages to the CPU.')
    171 device_count = torch.cuda.device_count()
    172 if device >= device_count:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

I tried switching fp16 to False with the workaround from another thread, but wasn't able to get it to work.
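For reference, the RuntimeError itself points at the usual fix: select the device at runtime rather than hard-coding `'cuda'`, and remap GPU-saved checkpoints with `map_location`. A minimal hedged sketch (the `checkpoint.pt` file name below is hypothetical, not from this repo):

```python
# Pick CUDA only when it is actually available, otherwise fall back to CPU.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # illustrative fallback so the sketch runs without torch
    device = "cpu"

print(device)

# On a CPU-only machine, a checkpoint saved on a GPU must be remapped, e.g.:
#   state = torch.load("checkpoint.pt", map_location=torch.device(device))
```

The resulting `device` string can then be passed to `get_kandinsky2(device, ...)` in place of the hard-coded `'cuda'` from the traceback above.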

@Blucknote

You can try #25 (comment).
I haven't tried it myself, but I will.

@LazyCat420
Author

I just tried that method and nothing changed...I decided to redo the entire env just in case I did something wrong on my end. If it ends up working I'll let you know.

@Blucknote

> I just tried that method and nothing changed... I decided to redo the entire env just in case I did something wrong on my end.

It was unnecessary to reinstall the whole env; there are options like `git stash` or `git reset --hard origin/master` that reset changes in the code but do not affect the venv.

> If it ends up working I'll let you know.

Thanks
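A minimal sketch of the `git stash` route mentioned above. The repo, file name, and contents here are illustrative; run the `stash`/`reset` command inside the actual Kandinsky checkout:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you

echo "original" > unet.py            # pretend this is the pristine source
git add unet.py && git commit -qm "baseline"

echo "hacked" > unet.py              # a local experiment you want to undo
git stash                            # or: git reset --hard origin/master
cat unet.py                          # back to the committed version
```

`git stash` keeps the local edits retrievable (`git stash pop`), while `git reset --hard origin/master` throws them away; either way the venv directory is untouched because it is not tracked by git.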

@LazyCat420
Author

So I changed the code based on the instructions from

#26

Just to confirm: I changed conv.py, located in lib\site-packages\torch\nn\modules\conv.py,

from

`return F.conv2d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)`

to

`return F.conv2d(input.float(), weight, bias, self.stride, self.padding, self.dilation, self.groups)`

then I changed unet.py under the model folder, from

`self.use_fp16 = use_fp16`

to

`self.use_fp16 = False  # use_fp16`

I didn't see any difference; I'm still getting the same error about the CPU, and I've run out of ideas at this point. Also, there are two unet.py files and I tried changing both, but I assume we aren't using the for_onnx folder.
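Instead of hand-editing the assignment as described above, the flag could be gated on CUDA availability. A hedged sketch (assumption: unet.py exposes a `use_fp16` flag as the message describes; half-precision convolutions are not supported on CPU, which is why fp16 must be off there):

```python
# Gate fp16 on whether a CUDA device actually exists.
try:
    import torch
    cuda_available = torch.cuda.is_available()
except ImportError:  # illustrative fallback so the sketch runs without torch
    cuda_available = False

use_fp16 = cuda_available  # enable fp16 only when a GPU is present
print(use_fp16)
```

This keeps the GPU fast path intact while falling back to fp32 on CPU-only machines, so the same edited file works in both environments.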

@LazyCat420
Author

> I just tried that method and nothing changed... I decided to redo the entire env just in case I did something wrong on my end.
>
> It was unnecessary to reinstall whole env, there was an options like git stash or git reset --hard origin/master which reset changes in code but does not affect venv.
>
> If it ends up working I'll let you know.
>
> Thanks

I ran the bat file from this GitHub and got it to work. I think it was just missing some dependencies.

#33 (comment)
