
An error on Colab Full Nerf #31

Open
lqy93 opened this issue Jun 24, 2021 · 3 comments

Comments

@lqy93

lqy93 commented Jun 24, 2021

When I run the Colab script for the full NeRF version, the following error occurs:

RuntimeError                              Traceback (most recent call last)
<ipython-input-26-1d7c06d853cb> in <module>()
    119     loss = coarse_loss + fine_loss
    120     print(coarse_loss.shape,fine_loss.shape)
--> 121     loss.backward()
    122     optimizer.step()
    123     optimizer.zero_grad()

1 frames
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    253                 create_graph=create_graph,
    254                 inputs=inputs)
--> 255         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    256 
    257     def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    147     Variable._execution_engine.run_backward(
    148         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    150 
    151 

**RuntimeError: Function AddmmBackward returned an invalid gradient at index 2 - got [72, 128] but expected shape compatible with [66, 128]**

I didn't change any code. How can I solve this problem? Or do I need to set some configuration?
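
For context, the 66 in the error message matches the width of a positional-encoded sample under what appear to be the notebook's encoding settings (num_encoding_fn_xyz=6, num_encoding_fn_dir=4, raw inputs included). The sketch below is only illustrative arithmetic, not code from the notebook, and those values are assumptions:

def encoded_width(num_encoding_fns, include_input=True, dims=3):
    # Each of the `dims` coordinates contributes sin/cos terms at every
    # frequency, plus the raw coordinate when include_input is True.
    return dims * ((1 if include_input else 0) + 2 * num_encoding_fns)

xyz_width = encoded_width(6)   # assumed num_encoding_fn_xyz = 6 -> 39
dir_width = encoded_width(4)   # assumed num_encoding_fn_dir = 4 -> 27
print(xyz_width + dir_width)   # 66, the width autograd expected above

A gradient arriving with 72 rows instead of 66 suggests that the encoded input reaching that Linear layer and the width the layer was built for disagree, which is what the fix below addresses.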

@curryJ

curryJ commented Dec 15, 2021

Did you manage to resolve this issue?
I didn't change any code either, but I get a different error:
[screenshot of the error]

@rajatthakur

Uncommenting ReplicateNeRFModel (so that it is used instead of VeryTinyNeRFModel) fixed this issue for me:

# Initialize a coarse-resolution model.
model_coarse = ReplicateNeRFModel(
    hidden_size=128,
    num_encoding_fn_xyz=num_encoding_fn_xyz,
    num_encoding_fn_dir=num_encoding_fn_dir,
    include_input_xyz=include_input_xyz,
    include_input_dir=include_input_dir
)
# model_coarse = VeryTinyNeRFModel()
model_coarse.to(device)

# Initialize a fine-resolution model, if specified.
model_fine = ReplicateNeRFModel(
    hidden_size=128,
    num_encoding_fn_xyz=num_encoding_fn_xyz,
    num_encoding_fn_dir=num_encoding_fn_dir,
    include_input_xyz=include_input_xyz,
    include_input_dir=include_input_dir
)
# model_fine = VeryTinyNeRFModel()
model_fine.to(device)
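
If I understand it correctly, this works because ReplicateNeRFModel is constructed from the same num_encoding_fn_* and include_input_* values the notebook uses to positionally encode the ray samples, so its first Linear layer is sized to match the encoded batch; the commented-out VeryTinyNeRFModel fixes its input width independently of those settings, which seems to be what produces the 66-vs-72 mismatch above.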

@jdiazram

I had the same error as @curryJ, but @rajatthakur's solution works for me now. Best

@Hawaiii mentioned this issue Sep 11, 2022