
GPU error: Demons #39

Open
jaykumar16 opened this issue Jun 18, 2021 · 0 comments

I have been following the example code provided for Demons registration and am running into issues with GPU usage.

My device is set to cuda:0:

print(device)
cuda:0
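For reference, this is the kind of check I used to confirm the device (a plain-PyTorch sketch, independent of airlab; the tensor names are illustrative, not from the example code):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Illustrative tensors standing in for the fixed/moving images;
# both are created directly on the chosen device.
tensors = {
    "fixed": torch.zeros(4, 4, device=device),
    "moving": torch.zeros(4, 4, device=device),
}

# Every tensor involved in the registration should report the same device.
for name, t in tensors.items():
    print(name, t.device)
```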

I am getting an error when calling:

 registration.start()
~/.local/lib/python3.8/site-packages/airlab/registration/registration.py in start(self, EarlyStopping, StopPatience)
    138             if self._verbose:
    139                 print(str(iter_index) + " ", end='', flush=True)
--> 140             loss = self._optimizer.step(self._closure)
    141             if EarlyStopping:
    142                 if loss < self.loss:

~/.local/lib/python3.8/site-packages/torch/optim/optimizer.py in wrapper(*args, **kwargs)
     87                 profile_name = "Optimizer.step#{}.step".format(obj.__class__.__name__)
     88                 with torch.autograd.profiler.record_function(profile_name):
---> 89                     return func(*args, **kwargs)
     90             return wrapper
     91 

~/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     25         def decorate_context(*args, **kwargs):
     26             with self.__class__():
---> 27                 return func(*args, **kwargs)
     28         return cast(F, decorate_context)
     29 

~/.local/lib/python3.8/site-packages/torch/optim/adam.py in step(self, closure)
     64         if closure is not None:
     65             with torch.enable_grad():
---> 66                 loss = closure()
     67 
     68         for group in self.param_groups:

~/.local/lib/python3.8/site-packages/airlab/registration/registration.py in _closure(self)
    100         loss_names = []
    101         for image_loss in self._image_loss:
--> 102              lossList.append(image_loss(displacement))
    103              loss_names.append(image_loss.name)
    104 

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~/.local/lib/python3.8/site-packages/airlab/loss/pairwise.py in forward(self, displacement)
    126 
    127         # compute displacement field
--> 128         displacement = self._grid + displacement
    129 
    130         # compute current mask

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
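For what it's worth, here is a minimal plain-PyTorch sketch of the same failure mode and a possible workaround: the error suggests `self._grid` lives on the CPU while `displacement` lives on the GPU, so aligning the grid with the displacement's device before the sum avoids the mismatch (the function name is mine, not airlab's):

```python
import torch

def add_displacement(grid: torch.Tensor, displacement: torch.Tensor) -> torch.Tensor:
    # Move the grid to whatever device the displacement lives on
    # (a no-op when they already match), then add the two fields.
    grid = grid.to(displacement.device)
    return grid + displacement

# The grid defaults to the CPU, mimicking the mismatch in the traceback.
grid = torch.zeros(2, 3)

# The displacement lives on the GPU when one is available.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
displacement = torch.ones(2, 3, device=device)

# Without the .to(...) call above, grid + displacement would raise the
# same "Expected all tensors to be on the same device" RuntimeError
# whenever device is cuda:0.
out = add_displacement(grid, displacement)
print(out.device)
```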

Could you please help me solve this issue?

Thank you.
