ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 12, 1]) #69
Hello, have you solved the problem? If so, how?
No, how about you?
I have solved this problem. After checking, I found that because I am using a self-built dataset, the image size is not the same as KITTI's (mine is 1920*1080, KITTI's is 1270*400), and I did not modify the input and output sizes when generating the depth map, so the generated pseudo point cloud did not correspond to my images. The specific solution is to modify the depth-map generation code to output 1920*1080, and to modify the w, h parameters in sfd_head.py at about line 505 (set them slightly larger than the size of the input image).
I am using a self-built dataset too. I changed the values of w and h (1280*960) according to your suggestion, but the problem still exists. I also found that increasing the batch size alleviates the problem, though it does not eliminate it. Could you please tell me how you modified the code to generate the depth map at 1920*1080? Or do you have any other suggestions? @vacant-ztz
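For context (this is not from the thread itself): the ValueError in the issue title is raised by PyTorch's BatchNorm layers in training mode whenever a batch provides only one value per channel to normalize, which is why increasing the batch size alleviates it. A minimal reproduction:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(12)
bn.train()  # the check only fires in training mode

# 1 sample, 12 channels, length 1 -> exactly one value per channel
x = torch.randn(1, 12, 1)
try:
    bn(x)
except ValueError as e:
    # "Expected more than 1 value per channel when training,
    #  got input size torch.Size([1, 12, 1])"
    print(e)
```

In eval mode (`bn.eval()`) the same input passes, because the layer then uses its running statistics instead of the batch statistics.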
If you want to modify the output size of the depth map: first, modify the values of oheight, owidth, and cwidth in SFD-TWISE-main/dataloaders/kitti_loader.py and make sure they are divisible by 16; then, modify the size of the pred_dep tensor in evaluate.py to match the size of your output image. @HuangLLL123
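A quick way to enforce the divisible-by-16 constraint mentioned above when picking oheight/owidth/cwidth for a custom resolution (the helper below is illustrative, not part of the repository):

```python
def round_up_to_multiple(x, m=16):
    """Round x up to the nearest multiple of m."""
    return ((x + m - 1) // m) * m

# e.g. for a 1920x1080 image:
print(round_up_to_multiple(1920))  # 1920 (already divisible by 16)
print(round_up_to_multiple(1080))  # 1088 (1080 is not divisible by 16)
```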
Thank you very much. I have also solved my problem with your method, but I have encountered a new problem when using my self-built dataset. The problem is as follows:
@HuangLLL123 |
The error is always reported on different samples.
Has anyone encountered the same problem before?
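Not part of the original thread, but one common reason this error appears on varying samples is that the final DataLoader batch of an epoch has size 1 (when the dataset size is not divisible by the batch size) and hits a BatchNorm layer; dropping the incomplete last batch is a typical workaround. A minimal sketch with an illustrative toy dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 9 samples with batch_size=4 would leave a trailing batch of size 1,
# which makes BatchNorm fail in training mode.
ds = TensorDataset(torch.randn(9, 3))

dl = DataLoader(ds, batch_size=4, drop_last=True)  # discard the size-1 remainder
print([batch[0].shape[0] for batch in dl])  # [4, 4]
```

With shuffling enabled, the sample that ends up alone in the last batch changes every epoch, which matches the symptom of the error occurring on different samples each run.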