Why 352*1216? #69
Comments
Could you please explain your question more specifically? I wonder if you mean that you resize the images/GT during the inference stage but get inferior performance?
@zhyever Thanks for your reply. I have some questions.
So I'm wondering why the metrics are best only in that one case? Thank you very much!
Hi, I'm sorry for this late reply.
@zhyever Thanks for your reply. I think the garg_crop is a trick, not something universal. And I can understand image -> NYUCrop -> pred depth -> evaluation, but the most interesting thing is why it is 352*1216 instead of other values (maybe 320*1184). In other words, when we get a new dataset, we don't know how large the crop should be.
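For reference, here is a minimal sketch of that crop -> predict -> evaluate flow, assuming a dense prediction the same size as the cropped input and evaluating only on valid GT pixels. The `predict_depth` placeholder and the NYU-style crop bounds (rows 45:471, cols 41:601, commonly used as the eigen crop) are illustrative assumptions, not code from this repo.

```python
import numpy as np

# Sketch: crop the image/GT, run the network on the crop,
# then compute metrics only where the GT is valid (> 0).
def eval_sample(predict_depth, image, gt, crop):
    top, bottom, left, right = crop
    image_c = image[top:bottom, left:right]
    gt_c = gt[top:bottom, left:right]
    pred = predict_depth(image_c)          # dense depth map, same H x W as the crop
    valid = gt_c > 0                       # GT is sparse/partial: evaluate valid pixels only
    abs_rel = np.mean(np.abs(pred[valid] - gt_c[valid]) / gt_c[valid])
    return abs_rel

# Toy usage with a constant "prediction" so the snippet runs on its own.
img = np.random.rand(480, 640, 3)
gt = np.zeros((480, 640))
gt[45:471, 41:601] = 2.0                   # pretend only this region has valid GT
print(eval_sample(lambda x: np.full(x.shape[:2], 2.5), img, gt, (45, 471, 41, 601)))
```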
I think the raw resolution is better if there are no invalid GT values lying in the fringe of the images. If we have to crop, I guess we can calculate some error statistics (I have to say that this is hard, but I recommend visualizing the GT maps of KITTI or NYU; you can see that the invalid fringe is so universal that you can easily set the crop size) and then select relatively accurate areas (as large as possible). Another consideration is the requirement of the model: many models require inputs with certain resolutions (multiples of 2, 4, 8, ...). You may also take that into account when setting the crop size.
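A minimal sketch of those two points, assuming a KITTI-like raw resolution of roughly 375 x 1242 (it varies slightly per drive) and a network that wants both sides divisible by 32: under those assumptions the largest crop that fits is exactly 352 x 1216. The helper names below are illustrative, not from this repo.

```python
import numpy as np

def largest_divisible_crop(h, w, factor=32):
    """Largest (crop_h, crop_w) <= (h, w) with both sides divisible by `factor`."""
    return (h // factor) * factor, (w // factor) * factor

def fringe_valid_ratio(gt, crop_h, crop_w):
    """Fraction of valid (> 0) GT pixels inside a bottom-centered crop.

    KITTI GT is sparse and the top of the frame has no LiDAR returns,
    so the crop is usually taken from the bottom of the image.
    """
    h, w = gt.shape
    top = h - crop_h                       # keep the bottom rows
    left = (w - crop_w) // 2               # center horizontally
    patch = gt[top:top + crop_h, left:left + crop_w]
    return np.count_nonzero(patch > 0) / patch.size

h, w = 375, 1242
crop_h, crop_w = largest_divisible_crop(h, w, factor=32)
print(crop_h, crop_w)                      # -> 352 1216

# Synthetic sparse "GT" at KITTI-like resolution: only the lower part is valid.
gt = np.zeros((h, w), dtype=np.float32)
gt[120:, :] = np.random.rand(h - 120, w)
print(f"valid ratio in crop: {fringe_valid_ratio(gt, crop_h, crop_w):.3f}")
```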
@zhyever Thanks for sharing. The model performs best only when the input is 352*1216, and the performance with other sizes becomes worse. Could you tell me why the input size is 352*1216? Thank you very much!