
Issues with projecting pixel to point #3161

Open
shrutichakraborty opened this issue Jul 11, 2024 · 5 comments
Comments

@shrutichakraborty

shrutichakraborty commented Jul 11, 2024


Required Info
- Camera Model: D435i
- Firmware Version: (Open RealSense Viewer --> Click info)
- Operating System & Version: Linux (Ubuntu 22.04)
- Language: python
- Segment: Robot
- ROS Distro: Humble

Issue Description

Hello, I have a RealSense D435i attached to a robot end effector, streaming at a resolution of 1280x720. I run an object detection network and use the camera's internal functions to convert the detected objects' center positions from pixels to 3D coordinates. This was working fine until recently, when the pixel-to-camera-coordinates function started giving completely wrong output, but only in the first few frames. I use the first 10 frames to get the positions of the detected objects and then servo my robot to those positions, so this is causing big problems in my application.
What concerns me is that this started happening only recently. Can anyone think of reasons that could cause such an issue? I am using the camera's internal functions for getting the intrinsics:

import threading

import numpy as np
import pyrealsense2 as rs


class frame_grabber():
    def __init__(self, turn_on_camera=True) -> None:
        self.pipeline = rs.pipeline()
        self.config = rs.config()
        self.pipeline_wrapper = rs.pipeline_wrapper(self.pipeline)
        self.pipeline_profile = self.config.resolve(self.pipeline_wrapper)
        self.device = self.pipeline_profile.get_device()
        self.device_product_line = str(self.device.get_info(rs.camera_info.product_line))
        self.camera_is_on = False
        self.turn_on_camera = turn_on_camera  # was never stored, causing an AttributeError below
        self.config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 6)

        if self.device_product_line == 'L500':
            self.config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
        else:
            self.config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 6)

        # Display thread (started below once streaming is on)
        p1 = threading.Thread(target=self.show_frame)
        self.frames = None
        self.depth_frame = None
        self.color_frame = None
        if self.turn_on_camera:
            self.start_camera()
            p1.start()  # was `p1.start` (missing parentheses), so the thread never ran

    def start_camera(self): 
        self.cfg = self.pipeline.start(self.config)
        self.profile = self.pipeline.get_active_profile()

        self.depth_sensor = self.profile.get_device().first_depth_sensor()
        self.depth_scale = self.depth_sensor.get_depth_scale()
        self.align_to = rs.stream.color
        self.align = rs.align(self.align_to)

        self.depth_profile = rs.video_stream_profile(self.profile.get_stream(rs.stream.color))
        self.depth_intrinsics = self.depth_profile.get_intrinsics()
        self.custom_calib_intrinsics = self.getCameraIntrinsics()
        self.camera_is_on = True 

        #### Defining realsense filters 

        self.spatial = rs.spatial_filter()
        # spatial.set_option(rs.option.filter_magnitude,5)
        # spatial.set_option(rs.option.holes_fill,3)

        self.temporal = rs.temporal_filter()

        self.hole_filling = rs.hole_filling_filter()

To convert pixel to 3D point I use :

def convert_pixel_to_point(self, pixel_list: list, depth=None):
    self.intrinsics = self.getCameraIntrinsics()
    r = []
    for point in pixel_list:
        u = np.round(point[0]).astype("int")
        v = np.round(point[1]).astype("int")
        pix = (u, v)
        d = depth
        if d is None:
            # Look the depth up per point; assigning back to `depth` would
            # reuse the first point's depth for every subsequent point.
            d_f = self.depth_frame.as_depth_frame()
            d = d_f.get_distance(pix[0], pix[1])
        result = rs.rs2_deproject_pixel_to_point(self.intrinsics, [pix[0], pix[1]], d)
        r.append([result])
    return r
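For intrinsics with zero distortion coefficients, rs2_deproject_pixel_to_point reduces to the standard pinhole back-projection, which can be sketched in plain NumPy as a sanity check against the SDK's output. The intrinsic values below are made-up placeholders for illustration, not real D435i calibration data:

```python
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, ppx, ppy):
    """Back-project pixel (u, v) with depth in metres to a 3D camera-frame point.

    Pinhole model with no distortion; matches rs2_deproject_pixel_to_point
    only when the intrinsics' distortion coefficients are all zero.
    """
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return np.array([x, y, depth_m])

# Hypothetical intrinsics, roughly 1280x720-shaped, for illustration only
fx, fy, ppx, ppy = 900.0, 900.0, 640.0, 360.0

# A pixel at the principal point maps straight down the optical axis
point = deproject_pixel(640.0, 360.0, 0.5, fx, fy, ppx, ppy)
print(point)
```

If the depth passed in is wrong (e.g. from an unsettled first frame), the whole 3D point scales with it, which is consistent with seeing errors of more than 1 m in the output.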

Where the intrinsics are :

def getCameraIntrinsics(self):
    # Note: these are the *color* stream intrinsics, so they are only valid
    # for depth values read from frames aligned to the color stream.
    depth_profile = rs.video_stream_profile(self.profile.get_stream(rs.stream.color))
    intrinsics = depth_profile.get_intrinsics()
    return intrinsics

As I said, this code was working for me until recently. Can anyone think of what might have led rs.rs2_deproject_pixel_to_point to return incorrect results (it gives a depth of more than 1 m)? Could it be due to a change in the camera intrinsics? If so, what could have led to this change, and would I need to custom-calibrate the intrinsics? Since the results are wrong for the first few frames only, does anyone have ideas (algorithms or post-processing methods) to bypass this issue?
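One generic way to suppress bad readings from the first frames is a robust combination of the per-pixel depth samples collected over those 10 frames. This is a plain-NumPy sketch, not a RealSense API; the 0.5 m inlier threshold and the sample values are illustrative assumptions:

```python
import numpy as np

def robust_depth(depth_readings_m, max_jump=0.5):
    """Combine per-frame depth readings for one pixel into a single estimate.

    The median is insensitive to a few wildly wrong early frames; readings
    further than `max_jump` metres from the median are discarded before
    averaging. Threshold and scheme are assumptions for illustration.
    """
    readings = np.asarray(depth_readings_m, dtype=float)
    med = np.median(readings)
    inliers = readings[np.abs(readings - med) <= max_jump]
    return float(inliers.mean())

# First two frames are spurious (over 1 m off); the rest agree around 0.42 m
print(robust_depth([1.9, 1.7, 0.42, 0.41, 0.43, 0.42]))
```

Applying something like this to the depth value before calling rs2_deproject_pixel_to_point would keep a couple of bad startup frames from steering the robot, at the cost of needing several frames per measurement.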

Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 11, 2024

Hi @shrutichakraborty Does your complete Python script have a wait_for_frames() instruction anywhere in it? If it does, then you can place a for i in range(5): instruction directly before the wait_for_frames() line to skip the first several frames:

for i in range(5):
    pipe.wait_for_frames()

A reason for skipping the first several frames is that auto-exposure may not settle down until after those initial frames, which can result in the first frames having bad exposure. This does not occur if auto-exposure is disabled and a fixed manual exposure value is used.


If you try the above method in your script and do not see improvement, and you have access to the RealSense Viewer tool, then please try resetting your camera to its factory-new default calibration using the instructions at IntelRealSense/librealsense#10182 (comment)

@MartyG-RealSense
Collaborator

Hi @shrutichakraborty Do you require further assistance with this case, please? Thanks!

@shrutichakraborty
Author

Hi @MartyG-RealSense I have found that this error only occurs when the camera is in certain orientations and not others, which is quite strange. Any reason why this might happen?

@MartyG-RealSense
Collaborator

RealSense cameras will operate best when facing straight ahead, rotated 90 degrees straight up / down or diagonally tilted forward / back 30 degrees.

If the camera is tilted forward / back more than 30 degrees then the chance of problems occurring in the depth stream may increase.

@MartyG-RealSense
Collaborator

Hi @shrutichakraborty Do you have an update about this case that you can provide, please? Thanks!
