
Where do you use mono_depth? #91

Open
mks0601 opened this issue Dec 4, 2023 · 7 comments

Comments

@mks0601

mks0601 commented Dec 4, 2023

Hi, thanks for your amazing work.
I was wondering where you use the depth maps from mono_depth.
I can't find it in either of your preprocessing files: 1) https://github.com/apple/ml-neuman/blob/main/preprocess/export_alignment.py and 2) https://github.com/apple/ml-neuman/blob/main/preprocess/optimize_smpl.py.
Could you clarify where and how you use the depth maps from mono_depth?

@jiangwei221

The depth values are used for regularizing the background NeRF; you can find it inside the dataloader:

depths_list = [] # MVS/fused depth values
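For context, a minimal sketch of what such a depth regularization might look like. This is a hypothetical illustration, not the actual NeuMan loss: the function name and signature are assumptions, and the real training code compares depths derived from the NeRF's rendered samples.

```python
import numpy as np

def depth_regularization_loss(rendered_depth, target_depth, valid_mask):
    """Hypothetical sketch: penalize the deviation of the NeRF-rendered
    depth from the target (MVS/fused) depth on valid background pixels."""
    diff = np.abs(rendered_depth - target_depth)
    return diff[valid_mask].mean()

# Toy usage with synthetic values:
rendered = np.array([1.0, 2.0, 3.0])
target = np.array([1.5, 2.0, 2.0])
valid = np.array([True, True, False])  # last pixel has no reliable depth
loss = depth_regularization_loss(rendered, target, valid)
```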

@mks0601

mks0601 commented Dec 4, 2023

Hi, I backtraced your code, and it seems you're not using mono_depth, only depth_maps.

  1. The depth data is read with this code (https://github.com/apple/ml-neuman/blob/0149d258b2afe6ef65c91557bba9f874675871e4/train.py#L37C26-L37C26).
  2. read_data_to_ram can be found here:
    def read_data_to_ram(self, data_list=['image']):
  3. read_data_to_ram calls the read_image_to_ram and read_depth_to_ram functions for each cap, where cap is an instance of this class (
    class NeuManCapture(captures_module.RigRGBDPinholeCapture):
    )
  4. read_image_to_ram first reads mono_depth with this code (
    return self.captured_image.read_image_to_ram() + self.captured_mask.read_image_to_ram() + self.captured_mono_depth.read_depth_to_ram()
    )
  5. But then the depth data is overwritten by read_depth_to_ram again from step 3, where read_depth_to_ram calls this function (
    def read_depth(self):
    ).
  6. Since captured_depth is defined with depth_path, not mono_depth_path, the mono_depth data from step 4 is overwritten by the depth data from step 5. FYI, depth_path refers to the MVS depth data.

Could you check whether I'm right? Thanks!
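The read sequence in steps 3 to 5 can be sketched as follows. This is a toy illustration with hypothetical names, not the actual NeuMan code; it only shows the order in which the different depth sources are loaded:

```python
class CaptureSketch:
    """Toy model of the read sequence in steps 3-5 (hypothetical names)."""

    def __init__(self):
        self.ram = []

    def read_image_to_ram(self):
        # Step 4: image, mask, and mono depth are read together.
        self.ram += ["image", "mask", "mono_depth"]

    def read_depth_to_ram(self):
        # Step 5: the MVS depth (from depth_path) is read afterwards.
        self.ram += ["mvs_depth"]

cap = CaptureSketch()
cap.read_image_to_ram()
cap.read_depth_to_ram()
# cap.ram now holds both depth sources:
# ["image", "mask", "mono_depth", "mvs_depth"]
```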

@jiangwei221

Sorry for the overcomplicated pipeline... it was inherited from an SfM project.
What happens with the MVS depth and the monocular depth is that we use the monocular depth to fill the holes in the MVS depth maps. See:

@property
def fused_depth_map(self):
    if self._fused_depth_map is None:
        valid_mask = (self.depth_map > 0) & (self.mask == 0)
        x = self.mono_depth_map[valid_mask]
        y = self.depth_map[valid_mask]
        res = scipy.stats.linregress(x, y)
        self._fused_depth_map = self.depth_map.copy()
        self._fused_depth_map[~valid_mask] = self.mono_depth_map[~valid_mask] * res.slope + res.intercept
    return self._fused_depth_map

Then the fused depth map is used to regularize the background NeRF:
depths = (cap.fused_depth_map[coords[:, 1], coords[:, 0]]).astype(np.float32)
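For readers following along, here is a small self-contained toy run of the same fusion logic, using synthetic depth values (not from the dataset): fit a linear map from monocular depth to MVS depth on valid background pixels, then fill the MVS holes with the rescaled monocular depth.

```python
import numpy as np
from scipy import stats

# Synthetic example: 0.0 marks a hole in the MVS depth map.
mvs = np.array([2.0, 4.0, 0.0, 6.0])    # MVS depth (metric scale)
mono = np.array([1.0, 2.0, 2.5, 3.0])   # monocular depth (arbitrary scale)
mask = np.array([0, 0, 0, 0])           # 0 = background pixel

# Fit scale/offset where both depths are available on the background.
valid = (mvs > 0) & (mask == 0)
res = stats.linregress(mono[valid], mvs[valid])

# Fill the holes with the rescaled monocular depth.
fused = mvs.copy()
fused[~valid] = mono[~valid] * res.slope + res.intercept
# The hole at index 2 is filled: 2.5 * 2.0 + 0.0 = 5.0
```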

@mks0601

mks0601 commented Dec 4, 2023

Great, thanks! Now I get it. BTW, are you using other geometry data, such as DensePose and keypoints, when training the NeRF (not for preprocessing)? It seems NeuManCapture loads them but doesn't use them.

@jiangwei221

We didn't use DensePose/keypoints during the NeRF training stage, IIRC. You can double-check by setting them to None manually.

@mks0601

mks0601 commented Dec 5, 2023

Awesome, thanks!

@mks0601

mks0601 commented Dec 5, 2023

Hi jiangwei221, two follow-up questions.

  1. Why do you use mono_depth only for the background? Why not use it for the foreground as well?
  2. Why do you fit the scale and translation only on the human area, like this (
    res = scipy.stats.linregress(x, y)
    )? Why not use the background area for the fit?

Thanks in advance!
