
Wrong behavior of modulated convolutions line 71-72 #25

Open

LozaKiN opened this issue Dec 8, 2021 · 0 comments

Comments

LozaKiN commented Dec 8, 2021

Hi,
First, thank you for your work!
I tried to train a network on my side, and it looks like some modulated convolutions are not behaving as intended because of the following code (lines 71-72 of dyhead.py):

```python
temp_fea.append(F.upsample_bilinear(self.DyConv[0](x[feature_names[level + 1]], **conv_args),
                                    size=[feature.size(2), feature.size(3)]))
```
When this line runs, the modulated conv receives an input that is four times smaller than the offset and mask (half the size on both the H and W dimensions), because the offset and mask in conv_args were computed at the current level's resolution, while the conv is applied to the level + 1 feature map before any upsampling.
Since there is no assert on the shapes of the inputs, the code runs without error, but what gets computed is not what you expect: the offset and the mask are flattened, and only the first quarter of each buffer is actually used.
This causes a large spatial shift in the output of the modulated convolution.
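
For illustration, here is a minimal sketch of the mismatch, using torchvision's deform_conv2d as a stand-in for the repo's custom modulated conv kernel (the shapes are hypothetical, chosen only to mirror two adjacent pyramid levels). torchvision happens to validate the offset shape and raises, whereas a kernel without that check runs anyway on misaligned buffers:

```python
import torch
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

# Hypothetical shapes mirroring the issue: the current level's feature map
# is (N, C, H, W); the level + 1 feature map is half that resolution.
N, C, H, W = 1, 16, 8, 8
feature = torch.randn(N, C, H, W)           # stands in for x[feature_names[level]]
coarse = torch.randn(N, C, H // 2, W // 2)  # stands in for x[feature_names[level + 1]]

# The offset and mask (conv_args) are computed from `feature`, so they cover
# H x W positions (3x3 kernel: 2*3*3 = 18 offset channels, 9 mask channels).
offset = torch.randn(N, 18, H, W)
mask = torch.sigmoid(torch.randn(N, 9, H, W))
weight = torch.randn(C, C, 3, 3)

# Buggy order: convolve the half-resolution input with the full-resolution
# offset/mask. torchvision checks the shapes and raises; a kernel without
# that check silently reads only the first quarter of the flattened buffers.
try:
    deform_conv2d(coarse, offset, weight, padding=1, mask=mask)
except RuntimeError as e:
    print("shape mismatch:", e)

# Proposed order: upsample the input first, so it matches the offset/mask.
up = F.interpolate(coarse, size=(H, W), mode="bilinear", align_corners=False)
out = deform_conv2d(up, offset, weight, padding=1, mask=mask)
print(out.shape)  # torch.Size([1, 16, 8, 8])
```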
To "fix" the issue, I think that the upsample_bilinear() should be applied on x[featurenames[level + 1]] and not the output of the layer.
Hope it helps.
