pyrealsense2 depth post-processing takes 0.3 seconds per frame #10995

Closed
MartinPedersenpp opened this issue Oct 14, 2022 · 6 comments

Comments

@MartinPedersenpp

Required Info
Camera Model: D435
Firmware Version: 05.12.15.50
Operating System & Version: Windows 10
Platform: PC
SDK Version: 2.49.0
Language: Python
Segment: others

Issue Description

I have a program using pyrealsense2 where I set up the camera and use multiple aligned depth frames to detect objects.
I have noticed that the post-processing of each depth frame takes around 0.3 seconds.
With the RealSense Viewer I can easily get around 30 FPS, so I am guessing the issue is with my pyrealsense2 setup.
Pipeline setup:

            self.profile = self.pipeline.start(self.config)
            profile = self.profile.get_stream(rs.stream.color) # Fetch stream profile for color stream
            intr = profile.as_video_stream_profile().get_intrinsics() # Downcast to video_stream_profile and fetch intrinsics
            roisensor = self.profile.get_device().first_roi_sensor()
            roi = roisensor.get_region_of_interest()
            roi.min_x, roi.max_x, roi.min_y, roi.max_y = 362, 533, 157, 276
            roisensor.set_region_of_interest(roi)
            roi = roisensor.get_region_of_interest()
            depth_sensor, color_sensor, *_ = self.profile.get_device().query_sensors()
            import json
            # Load custom preset file and set options for the color and depth sensors
            with open("custom.json", "r") as f:
                json_str = json.dumps(json.load(f)) # json.dumps emits valid JSON; str() would produce single quotes
            dev = self.profile.get_device()
            adv_mode = rs.rs400_advanced_mode(dev)
            adv_mode.load_json(json_str)
            depth_sensor.set_option(rs.option.enable_auto_exposure, 1)
            depth_sensor.set_option(rs.option.laser_power, 210)
            depth_multiplier = 6
            depth_sensor.set_option(rs.option.depth_units, 0.001000 / depth_multiplier)
            depth_sensor.set_option(rs.option.emitter_always_on, 1.0)
            color_sensor.set_option(rs.option.enable_auto_exposure, self.camera_config.enable_auto_exposure)
            ### Set gamma to reduce overexposure due to interior lighting or direct light from windows
            color_sensor.set_option(rs.option.gamma, 500) # default 300, range 100-500; lower in bright lighting
            color_sensor.set_option(rs.option.saturation, 64) # default 64
            color_sensor.set_option(rs.option.sharpness, 50) # default 50
            color_sensor.set_option(rs.option.backlight_compensation, 0)
            color_sensor.set_option(rs.option.enable_auto_white_balance, 1)
            preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
            # Setting up filters for the depth sensor
            self.threshold_filter = rs.threshold_filter(0.3, 0.7)
            self.temp_filter = rs.temporal_filter(0.1, 80.0, 6)
            # spatial_filter args: 1st = alpha, 2nd = delta, 3rd = magnitude,
            # 4th = hole filling (0 = none, 1 = 2px, 2 = 4px, 3 = 8px, 4 = 16px, 5 = unlimited)
            self.spat_filter = rs.spatial_filter(0.40, 40.0, 4.0, 1.0) # test settings
            # self.dec_filter = rs.decimation_filter(1.0)
            self.hole_filter = rs.hole_filling_filter(2)

Capturing a frame:

            start = time.time()
            # Create the disparity transforms once; building them inside the
            # per-frame function would allocate new processing blocks every call.
            depth_to_disparity = rs.disparity_transform(True)
            disparity_to_depth = rs.disparity_transform(False)
            def filter_depth_data(depth_frame):
                depth_frame = self.threshold_filter.process(depth_frame)
                depth_frame = depth_to_disparity.process(depth_frame)
                depth_frame = self.spat_filter.process(depth_frame)
                depth_frame = self.temp_filter.process(depth_frame)
                depth_frame = disparity_to_depth.process(depth_frame)
                return depth_frame
            align = rs.align(rs.stream.color) # could likewise be created once and reused
            color_sensor = self.profile.get_device().query_sensors()[1] # unused in this snippet
            frameset = self.pipeline.wait_for_frames()
            frameset = align.process(frameset)
            color_frame = frameset.get_color_frame()
            depth_frame = frameset.get_depth_frame()
            midstart = time.time()
            depth_frame = filter_depth_data(depth_frame)
            print(f'Filtering frames took: {time.time() - midstart}')
            depth_data = np.array(depth_frame.get_data())
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 14, 2022

Hi @MartinPedersenpp. The Hole-Filling post-processing filter is normally disabled by default in the RealSense Viewer (one of the only filters that is default-disabled), so enabling it in your Python script may be adding processing that the Viewer does not do. Post-processing filters are calculated on the computer's CPU rather than the camera hardware, so the more filters that are applied, the greater the processing burden placed on the CPU.

You may be able to remove the Hole-Filling filter, as the Spatial filter provides hole filling.

I also note that you are apparently setting the Temporal filter's smooth alpha to '0.1'. The default is 0.4. Reducing smooth alpha to 0.1, whilst having the effect of stabilizing depth fluctuations, will delay the updating of the depth image. This can be demonstrated by moving the camera around and observing how the depth image slowly transitions from one state to another instead of updating immediately.
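A minimal sketch of that suggestion, reusing the filter settings from the snippet above (the function names here are illustrative, not from the original code): drop the separate hole-filling filter, let the spatial filter's hole_fill argument handle small holes, and move smooth alpha back toward the 0.4 default.

    import pyrealsense2 as rs

    def make_depth_filters():
        # Leaner chain: the spatial filter's 4th argument (hole_fill=1, i.e. 2px)
        # already performs hole filling, so the standalone filter is dropped.
        return [
            rs.threshold_filter(0.3, 0.7),
            rs.disparity_transform(True),             # depth -> disparity
            rs.spatial_filter(0.40, 40.0, 4.0, 1.0),  # alpha, delta, magnitude, hole_fill
            rs.temporal_filter(0.4, 80.0, 6),         # smooth alpha back at the 0.4 default
            rs.disparity_transform(False),            # disparity -> depth
        ]

    def filter_depth(depth_frame, filters):
        for f in filters:
            depth_frame = f.process(depth_frame)
        return depth_frame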

@MartinPedersenpp
Author

@MartyG-RealSense
I think the issue is, of course, that the processing happens on the CPU and not on the camera.
Is there any way to utilize the camera hardware with pyrealsense2?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 14, 2022

The camera hardware is not able to assist with the processing of post-processing filters or with pyrealsense2 in general, unfortunately.

Alignment may also be a significant factor in your Python application's lag compared to the Viewer. The Viewer does not perform alignment between depth and color in its 2D mode (though it can map color onto depth in 3D pointcloud mode). Like post-processing, alignment is also calculated on the CPU.

If C++ were being used, there would be the option to offload alignment processing from the computer's CPU onto its GPU with the SDK's GLSL processing blocks, as described at #3654, but that involves modifying C++ rs2:: type instructions, so I have never seen it successfully used with Python projects.

If your Windows computer has an Nvidia graphics chip or graphics card, it is possible that the SDK's CUDA support could be enabled in order to offload CPU processing of alignment onto the Nvidia GPU, as GLSL does. This would mean, though, that if the program were run on computers without Nvidia graphics, alignment would not benefit from that acceleration. The SDK's CUDA support is almost always used on Nvidia Jetson computing boards, so there are few examples of enabling that support on an Nvidia-equipped PC.

@MartinPedersenpp
Author

@MartyG-RealSense
The alignment is fast compared to the filtering, around 0.02 seconds.
I guess I will have to see whether I can find a sweet spot between post-processing and the desired performance.
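One way to hunt for that sweet spot is to time each filter in isolation; a rough sketch (filter settings copied from the snippets above, stream configuration left at the defaults):

    import time
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()  # default config enables a depth stream
    filters = [
        ("threshold",    rs.threshold_filter(0.3, 0.7)),
        ("to_disparity", rs.disparity_transform(True)),
        ("spatial",      rs.spatial_filter(0.40, 40.0, 4.0, 1.0)),
        ("temporal",     rs.temporal_filter(0.1, 80.0, 6)),
        ("to_depth",     rs.disparity_transform(False)),
        ("hole_filling", rs.hole_filling_filter(2)),
    ]
    totals = {name: 0.0 for name, _ in filters}
    n_frames = 30  # average over several frames; the first ones warm up caches
    try:
        for _ in range(n_frames):
            frame = pipeline.wait_for_frames().get_depth_frame()
            for name, f in filters:
                t0 = time.time()
                frame = f.process(frame)
                totals[name] += time.time() - t0
        for name, total in totals.items():
            print(f"{name}: {total / n_frames:.4f}s per frame")
    finally:
        pipeline.stop()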

@MartyG-RealSense
Collaborator

Thanks very much @MartinPedersenpp

Disabling the Spatial filter and keeping the Hole-Filling filter enabled may provide improved performance; a RealSense team member states at #4468 (comment) that "from my experience spatial filter is the one taking the most time and giving least quality improvement, so you might decide to drop it".
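Applied to the earlier capture code, that suggestion would look roughly like this (attribute names follow the original snippet; this is the opposite trade-off to the sketch further up):

    import pyrealsense2 as rs

    # Same filters as the original setup, minus the spatial filter; the
    # standalone hole-filling filter stays enabled to compensate.
    threshold_filter = rs.threshold_filter(0.3, 0.7)
    depth_to_disparity = rs.disparity_transform(True)
    temp_filter = rs.temporal_filter(0.1, 80.0, 6)
    disparity_to_depth = rs.disparity_transform(False)
    hole_filter = rs.hole_filling_filter(2)

    def filter_depth_data(depth_frame):
        depth_frame = threshold_filter.process(depth_frame)
        depth_frame = depth_to_disparity.process(depth_frame)
        # Spatial filter dropped: costliest step, least quality gain (per #4468).
        depth_frame = temp_filter.process(depth_frame)
        depth_frame = disparity_to_depth.process(depth_frame)
        return hole_filter.process(depth_frame)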

@MartinPedersenpp
Author

@MartyG-RealSense
Wow, indeed: disabling the spatial filter removed 0.15 seconds per frame. I will check my generated point clouds / meshes to see what kind of difference this makes.
