
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 12, 1]) #69

Open

HuangLLL123 opened this issue on Jan 14, 2024 · 7 comments

@HuangLLL123

HuangLLL123 commented Jan 14, 2024

The error is always reported, on different samples each time.

Has anyone encountered the same problem before?

[WeChat screenshot 1]

[WeChat screenshot 2]
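For context, this ValueError is raised by PyTorch's batch-norm layers when they receive only one value per channel in training mode, which is exactly what a `[1, 12, 1]` input amounts to. A minimal sketch reproducing the message in the title:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(12)      # 12 channels, matching the [1, 12, 1] input
bn.train()                   # the check only fires in training mode

x = torch.randn(1, 12, 1)    # batch of 1, spatial length 1:
                             # 1 * 1 = one value per channel
bn(x)                        # ValueError: Expected more than 1 value per
                             # channel when training, got input size
                             # torch.Size([1, 12, 1])
```

The usual triggers are a batch that ends up with a single element (e.g. the last incomplete batch of an epoch) or an upstream filtering step that leaves only one item, which is consistent with the batch-size observation later in this thread.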

@HuangLLL123 closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 14, 2024
@HuangLLL123 reopened this on Jan 14, 2024
@vacant-ztz

Hello, have you solved the problem? And how?

@HuangLLL123
Author

> Hello, have you solved the problem? And how?

No. How about you now?

@vacant-ztz

I have solved this problem. After checking, I found that because I am using a self-built dataset, the image size is not the same as KITTI's (mine is 1920×1080, KITTI's is 1270×400), and I had not modified the input and output sizes when generating the depth map, so the generated pseudo point cloud did not correspond to my images. The specific solution is to modify the depth-map generation code to output 1920×1080, and to modify the w, h parameters in sfd_head.py at about line 505 (set them slightly larger than the size of the input image).
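To illustrate why mismatched w, h bounds matter (the variable names below are hypothetical, not the actual SFD code): if projected pseudo-cloud points are clipped to KITTI-sized image bounds, most points from a 1920×1080 image are discarded, and downstream layers can be left with almost nothing per RoI:

```python
import numpy as np

# Hypothetical sketch: pseudo-cloud points projected to pixel
# coordinates (u = column, v = row) of a 1920x1080 self-built image.
rng = np.random.default_rng(0)
u = rng.uniform(0, 1920, size=10_000)
v = rng.uniform(0, 1080, size=10_000)

# KITTI-sized bounds of the kind baked into the head: points outside
# them are thrown away.
w, h = 1400, 400
keep = (u < w) & (v < h)
print(f"points kept: {keep.sum()} / {len(u)}")  # only about 27% survive

# Bounds slightly larger than the actual 1920x1080 input keep everything.
w, h = 2000, 1120
keep = (u < w) & (v < h)
print(f"points kept: {keep.sum()} / {len(u)}")  # all 10000
```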

@HuangLLL123
Author

> I have solved this problem. After checking, I found that because I am using a self-built dataset, the image size is not the same as KITTI's (mine is 1920×1080, KITTI's is 1270×400), and I had not modified the input and output sizes when generating the depth map, so the generated pseudo point cloud did not correspond to my images. The specific solution is to modify the depth-map generation code to output 1920×1080, and to modify the w, h parameters in sfd_head.py at about line 505 (set them slightly larger than the size of the input image).

I am using a self-built dataset too. I changed the values of w and h (1280×960) according to your suggestion, but the problem still exists. I also found that increasing the batch size alleviates the problem, but does not eliminate it. Could you please tell me how you modified the code to generate the depth map at 1920×1080? Or do you have any other suggestions? @vacant-ztz

@vacant-ztz

If you want to modify the output size of the depth map, you first need to modify the values of oheight, owidth, and cwidth in SFD-TWISE-main/dataloaders/kitti_loader.py, making sure they are divisible by 16. After that, you need to change the size of the pred_dep tensor in evaluate.py to the size of your output image. @HuangLLL123
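A small helper for the divisible-by-16 constraint mentioned above (the helper itself is illustrative; oheight/owidth/cwidth are the variable names from the comment, and cwidth's role as a crop width is an assumption):

```python
def round_up_to_multiple(x, m=16):
    """Round x up to the nearest multiple of m."""
    return ((x + m - 1) // m) * m

# For a 1920x1080 self-built dataset:
owidth  = round_up_to_multiple(1920)  # 1920, already divisible by 16
oheight = round_up_to_multiple(1080)  # 1088, padded up from 1080
cwidth  = owidth                      # assumed crop width; adjust as needed
print(owidth, oheight, cwidth)
```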

@HuangLLL123
Author

Thank you very much. I have also solved my problem with your method, but I have encountered a new problem when using my self-built dataset:

File "/home/tianran/workdir/SFD/pcdet/models/roi_heads/target_assigner/proposal_target_layer.py", line 162, in subsample_rois
    raise NotImplementedError
NotImplementedError
maxoverlaps:(min=nan, max=nan)
ERROR: FG=0, BG=0

I have tried many of the methods mentioned in other issues, such as normalizing the point cloud features and reducing the learning rate, but the problem has not been completely solved. Have you encountered this problem while using a self-built dataset? Could you please tell me your solution? @vacant-ztz
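Since maxoverlaps prints as NaN, the IoU inputs are already non-finite before subsample_rois runs, so no foreground or background RoIs can be matched. A hedged debugging sketch (variable names are illustrative, not the actual SFD internals) for locating where the NaNs first appear:

```python
import torch

def check_finite(name, t):
    """Print a warning if a tensor contains NaN/Inf values."""
    if not torch.isfinite(t).all():
        bad = (~torch.isfinite(t)).sum().item()
        print(f"{name}: {bad} non-finite values, shape {tuple(t.shape)}")

# Call this on the tensors feeding the target assigner, e.g. the RoI
# boxes and GT boxes whose IoU becomes max_overlaps:
rois = torch.tensor([[0.0, 0.0, 0.0, float('nan'), 1.0, 1.0, 0.0]])
check_finite("rois", rois)  # prints: rois: 1 non-finite values, shape (1, 7)
```

NaNs in the boxes usually trace back to exploding losses or unnormalized inputs, which is consistent with the mitigations already tried above (normalizing features, lowering the learning rate).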

@Zixiu99

Zixiu99 commented Sep 5, 2024

@HuangLLL123

> maxoverlaps:(min=nan, max=nan)
> ERROR: FG=0, BG=0

Hi, I'm experiencing the same problem on a self-built dataset. Have you solved it?
