
How to render image from an out-of-room camera location (3D-Front dataset) #1165

Open · xiaomianzhou opened this issue Dec 5, 2024 · 1 comment
Labels: first answer provided, question (Question, not yet a bug ;))

Comments

@xiaomianzhou

Describe the issue

I can render images from an in-room point normally.
But when I increase the height of the camera to get a top-down view, the rendered image comes out black.
What is the proper setting to render images from such an out-of-room camera location?

Minimal code example

import blenderproc as bproc
import argparse
import os
import numpy as np
import h5py
import cv2


def check_name(name):
    for category_name in ["chair", "sofa", "table", "bed"]:
        if category_name in name.lower():
            return True
    return False

def get_img_from_hdf5(hdf5_path, output_path): 
    file_name = os.path.basename(hdf5_path).split('.')[0]
    with h5py.File(hdf5_path, 'r') as file:
        data_keys = file.keys()
        if 'colors' in data_keys:
            colors_data = file['colors'][()]
            # BlenderProc stores colors in RGB order; convert to BGR for cv2.imwrite
            if colors_data.ndim == 3:
                cv2.imwrite(f'{output_path}/{file_name}_colors.png', cv2.cvtColor(colors_data, cv2.COLOR_RGB2BGR))
            elif colors_data.ndim == 4 and colors_data.shape[0] == 2:
                cv2.imwrite(f'{output_path}/{file_name}_colors_left.png', cv2.cvtColor(colors_data[0, :, :, :], cv2.COLOR_RGB2BGR))
                cv2.imwrite(f'{output_path}/{file_name}_colors_right.png', cv2.cvtColor(colors_data[1, :, :, :], cv2.COLOR_RGB2BGR))
        if 'depth' in data_keys:
            depth_img = file['depth'][()]
            cv2.imwrite(f'{output_path}/{file_name}_depth_img.tiff', depth_img)
    
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("front", nargs='?', default="/home/ai/data_one/zjz/3D-FRONT/ffc7ddf1-827b-4154-9eee-5f8b497d0a12.json", help="Path to the 3D front file")
    parser.add_argument("future_folder", nargs='?', default="/home/ai/data_one/zjz/3D-FUTURE-model/",help="Path to the 3D Future Model folder.")
    parser.add_argument("front_3D_texture_path", nargs='?', default="/home/ai/data_one/zjz/3D-FRONT-texture/",help="Path to the 3D FRONT texture folder.")
    parser.add_argument("output_dir",nargs='?',  default="./output",help="Path to where the data should be saved")
    args = parser.parse_args()

    if not os.path.exists(args.front) or not os.path.exists(args.future_folder):
        raise Exception("The 3D-FRONT json file or the 3D-FUTURE model folder does not exist!")

    bproc.init()
    mapping_file = bproc.utility.resolve_resource(os.path.join("front_3D", "3D_front_mapping.csv"))
    mapping = bproc.utility.LabelIdMapping.from_csv(mapping_file)

    # set the light bounces
    bproc.renderer.set_light_bounces(diffuse_bounces=200, glossy_bounces=200, max_bounces=200,
                                    transmission_bounces=200, transparent_max_bounces=200)
    
    # load the front 3D objects
    loaded_objects = bproc.loader.load_front3d(
        json_path=args.front,
        future_model_path=args.future_folder,
        front_3D_texture_path=args.front_3D_texture_path,
        label_mapping=mapping
    )
    
    # Init sampler for sampling locations inside the loaded front3D house
    # point_sampler = bproc.sampler.Front3DPointInRoomSampler(loaded_objects)
    
    # Init bvh tree containing all mesh objects
    bvh_tree = bproc.object.create_bvh_tree_multi_objects([o for o in loaded_objects if isinstance(o, bproc.types.MeshObject)])
    
    # define the camera intrinsics
    K = np.array([
        [432, 0., 320],
        [0., 432, 240],
        [0., 0., 1.]
    ])
    bproc.camera.set_intrinsics_from_K_matrix(K, 640, 480)
    # bproc.camera.set_stereo_parameters(interocular_distance=0.03, convergence_mode="PARALLEL", convergence_distance=0.00001)
    
    height = 2  # with height = 5 (above the ceiling) the rendered image comes out black
    location = np.array([-2.7, 0.7, height], dtype=np.float32)
    
    
    # np.random.uniform([0, 0, 0], [0, 0, 0]) always yields zeros; use fixed Euler angles directly.
    # With zero rotation the camera looks straight down -Z, i.e. the intended top-down view.
    rotation = np.array([0.0, 0.0, 0.0])
    cam2world_matrix = bproc.math.build_transformation_mat(location, rotation)
    bproc.camera.add_camera_pose(cam2world_matrix)
    
    # bproc.renderer.enable_depth_output(activate_antialiasing=False)
    bproc.material.add_alpha_channel_to_textures(blurry_edges=True)
    # bproc.renderer.toggle_stereo(True)
    
    # render the whole pipeline
    data = bproc.renderer.render()

    # write the data to a .hdf5 container
    bproc.writer.write_hdf5(args.output_dir, data)
    
    get_img_from_hdf5('./output/0.hdf5', './output')

Files required to run the code

No response

Expected behavior

Render images from out-of-room camera locations.

BlenderProc version

Blender 4.2.1 LTS (hash 396f546c9d82 built 2024-08-19 23:32:23)

@xiaomianzhou xiaomianzhou added the question Question, not yet a bug ;) label Dec 5, 2024
@cornerfarmer (Member)

Hey @xiaomianzhou,

the renderings from outside are probably black because there is no light source outside and the room lights are concealed by the ceiling.
Did you try removing the ceiling first?

I would recommend using the debugging mode (https://github.com/DLR-RM/BlenderProc?tab=readme-ov-file#debugging-in-the-blender-gui) to see what your scene looks like.
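A minimal sketch of the suggested fix, assuming the ceiling meshes carry "ceiling" in their names (common in 3D-FRONT scenes, but worth verifying in the debug GUI first). The `is_ceiling` helper and the sun-light parameters are illustrative additions, not part of the original script:

```python
# Sketch: hide ceiling meshes and add an outside light before rendering.
# The name-matching heuristic below is an assumption about 3D-FRONT object naming.

def is_ceiling(name: str) -> bool:
    """Heuristic: treat any object whose name contains 'ceiling' as a ceiling."""
    return "ceiling" in name.lower()

# In the script above, after load_front3d (not executed here):
# for obj in loaded_objects:
#     if isinstance(obj, bproc.types.MeshObject) and is_ceiling(obj.get_name()):
#         obj.hide(True)  # exclude from rendering instead of deleting
#
# With the ceiling gone, a directional light keeps the top-down view lit:
# light = bproc.types.Light()
# light.set_type("SUN")
# light.set_energy(5)
```

To inspect the scene as recommended above, the debug mode can be started from the command line with `blenderproc debug <script.py>` instead of `blenderproc run <script.py>`.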
