Hi! Thanks so much for sharing the code of this amazing work!
I am trying to train some other models on your dataset. I noticed that in the data pre-processing step, you flip the depth map here (the `[::-1, ::-1]` operation). However, when I visualize the color and depth inputs, I find that they are no longer aligned after the depth is flipped. Is there a particular reason for applying this flip?
Also, could you provide the names of the semantic labels in your dataset? Thank you!