High speed alignment of depth and color frames #8362

Closed
fisakhan opened this issue Feb 14, 2021 · 22 comments


Required Info
Camera Model: D435
Firmware Version: 05.11.01.100
Operating System & Version: Linux (Ubuntu 18)
Kernel Version (Linux Only): 5.4.0-65-generic
Platform: PC
SDK Version: 2.x
Language: Python
Segment: others

Issue Description

During fast movement of an object, the RealSense D435 fails to accurately align color and depth frames. With the Python code below, I can successfully align color and depth frames for a static or slowly moving object. However, for a fast-moving object (camera fixed), the color frame appears to lag behind the faster depth frame. The first picture shows aligned frames while the object is static. The next three pictures show upward, downward and left movements, respectively. It is apparent that the display/processing of the color frame is slow or lagging. How can I solve this alignment problem?
[Four screenshots: aligned frames with a static object, then misaligned frames during upward, downward and left movement]

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
res_cols = 640
res_rows = 480
config = rs.config()
config.enable_stream(rs.stream.depth, res_cols, res_rows, rs.format.z16, 30)
config.enable_stream(rs.stream.color, res_cols, res_rows, rs.format.bgr8, 30)

profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
clipping_distance_in_meters = 1 #1 meter
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)

        aligned_depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame is a 640x480 depth image
        color_frame = aligned_frames.get_color_frame()

        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        
        # Remove background - Set pixels further than clipping_distance to grey
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        images = bg_removed
        cv2.imshow('Align Example', images)
        
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()
MartyG-RealSense (Collaborator) commented Feb 14, 2021

Hi @fisakhan Please try setting the color FPS to 60 instead of 30. This may help to reduce blurring on the RGB image, because the RGB sensor on the D435 has a slower rolling shutter than the fast global shutter of its depth sensor.

The recent D455 camera model is the first RealSense camera to have a fast global shutter on both the RGB and depth sensors.
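
For reference, the suggested change would look like this in the script above (a minimal sketch; 640x480 at 60 FPS is a supported mode for both sensors on the D435, and keeping the two streams at the same rate helps wait_for_frames stay in sync):

import pyrealsense2 as rs

config = rs.config()
# Raising the FPS shortens the maximum time auto-exposure can use,
# which reduces motion blur on the RGB image.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 60)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 60)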

sam598 commented Feb 14, 2021

It looks like the depth image has a much shorter exposure time than the color image, which has significant motion blur.

Based on your code I'm assuming you are using auto exposure. If you need the frames to be precisely aligned I would recommend manually setting the exposure of both the color and depth cameras to the same duration.

Also keep in mind that the exposure values for color and depth cameras are at different scales in the SDK:
#8243

sam598 commented Feb 14, 2021

@MartyG-RealSense the difference between rolling and global shutters is not speed. Rolling shutters expose line by line and can sometimes cause a skew effect with fast movement, whereas global shutters expose the entire sensor at the same time. Neither has an effect on motion blur, which is determined by exposure time.

@MartyG-RealSense (Collaborator)

@sam598 You are not incorrect, but increasing the FPS is also a valid method of reducing blurring. The subject is discussed further in the links below.

#2461
#3554

sam598 commented Feb 14, 2021

@MartyG-RealSense I guess within the context of the SDK that makes sense. Increasing the framerate limits the maximum time the auto exposure will expose for.

But a frame with an exposure of 16ms at 60fps has the exact same amount of motion blur as a frame with an exposure of 16ms at 30fps. If someone also has resolution and data bandwidth requirements, running at 60fps may not be possible.

Also setting the framerate to limit the auto exposure duration still does not guarantee that the color and depth cameras start and stop their exposures at the same time, which looks to be the main issue with the example photos.
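
One way to check the timing on a given setup is to read the per-frame metadata. A minimal sketch, assuming the firmware and OS expose these metadata fields (on Linux some fields require the patched kernel module):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    for frame in (frames.get_depth_frame(), frames.get_color_frame()):
        name = frame.get_profile().stream_name()
        # Hardware capture timestamp, if supported on this platform.
        if frame.supports_frame_metadata(rs.frame_metadata_value.sensor_timestamp):
            print(name, 'sensor timestamp:',
                  frame.get_frame_metadata(rs.frame_metadata_value.sensor_timestamp))
        # Exposure actually used for this frame.
        if frame.supports_frame_metadata(rs.frame_metadata_value.actual_exposure):
            print(name, 'actual exposure:',
                  frame.get_frame_metadata(rs.frame_metadata_value.actual_exposure))
finally:
    pipeline.stop()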

@fisakhan (Author)

Thanks @MartyG-RealSense and @sam598 for your responses. First, I don't think the rolling or global shutter is the problem; I found this problem with the D415 as well. Second, blur on the color image is not a problem for me, but alignment is (I need nearly perfect alignment). As suggested by @sam598, exposure might be the reason. Let me play with the exposure and let's see if that helps.

@fisakhan (Author)

sensor.get_option(rs.option.exposure) is 8500.0 for the depth sensor and 166.0 for the color sensor. Should I increase 166 to 8500 for the color sensor?

MartyG-RealSense (Collaborator) commented Feb 15, 2021

Hi @fisakhan The exposure settings for the depth and color sensors have different scales (large values for depth and small values for color). So I would not recommend setting the color exposure to 8500.

The image below illustrates color exposure at 166 (upper) and 8500 (lower):

[Screenshot: color image at exposure 166 (upper) and 8500 (lower)]

@fisakhan (Author)

@MartyG-RealSense yes, increasing the exposure increases the blur and whitish effect but doesn't solve the problem. Increasing the exposure above 500 makes the problem even worse. Setting the FPS to 30 or 60 also doesn't work.

MartyG-RealSense (Collaborator) commented Feb 15, 2021

Could you please test whether enforcing a constant FPS improves your image? You can test this in the RealSense Viewer program by enabling 'Enable Auto-Exposure' in the Stereo Module options and disabling 'Auto-Exposure Priority' in the RGB Camera options, under the Controls sub-section.

[Screenshot: RealSense Viewer with 'Enable Auto-Exposure' on for the Stereo Module and 'Auto-Exposure Priority' off for the RGB Camera]
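
The equivalent settings can also be applied from Python. A sketch, assuming the RGB sensor reports its name as 'RGB Camera' (as it does on the D400 series):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
device = profile.get_device()

depth_sensor = device.first_depth_sensor()
color_sensor = next(s for s in device.query_sensors()
                    if s.get_info(rs.camera_info.name) == 'RGB Camera')

# Stereo Module: keep auto-exposure enabled.
depth_sensor.set_option(rs.option.enable_auto_exposure, 1)
# RGB Camera: disabling auto-exposure priority makes the sensor hold a
# constant FPS instead of lengthening exposure in low light.
color_sensor.set_option(rs.option.auto_exposure_priority, 0)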

sam598 commented Feb 15, 2021

Should I increase 166 to 8500 for the color sensor?

@fisakhan As I mentioned before, the exposure values for the color and depth cameras are at different scales in the SDK:
#8243

The depth camera exposure is set in microseconds, and for some reason the color camera exposure is set in hundreds of microseconds. So if you wanted both exposures to take the same amount of time, it would be 8500 for the depth sensor and 85 for the color sensor. That is why increasing the color exposure time looked worse.

Please keep in mind that this is only if you want the exposures to be the exact same amount of time. If you want to follow @MartyG-RealSense's suggestion and control auto exposure using the frame rate, you MUST NOT set the exposure time manually, as this will disable auto exposure.
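
Putting the scales together, a sketch of setting matched manual exposures (again, writing the exposure option disables auto-exposure on that sensor, so this is only for the matched-exposure approach; the 'RGB Camera' sensor name is the D400-series convention):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
device = profile.get_device()

depth_sensor = device.first_depth_sensor()
color_sensor = next(s for s in device.query_sensors()
                    if s.get_info(rs.camera_info.name) == 'RGB Camera')

# Depth exposure is in microseconds...
depth_sensor.set_option(rs.option.exposure, 8500)
# ...while color exposure is in units of roughly 100 microseconds,
# so 85 here targets the same 8.5 ms window.
color_sensor.set_option(rs.option.exposure, 85)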

@fisakhan (Author)

@MartyG-RealSense's suggestion doesn't work. An exposure of 85 for the color sensor makes the color image black, and I can't process the color image now.
[Screenshot: black color image at exposure 85]

sam598 commented Feb 15, 2021

@fisakhan an exposure of 85 was meant as an example of what the equivalent exposure would be. For example, you could also set the depth camera exposure to 16600 and the color camera exposure to 166.

@MartyG-RealSense (Collaborator)

Hi @fisakhan Do you require further assistance with this case, please? Thanks!

@fisakhan (Author)

I'm still struggling with that problem. Changing FPS or exposure didn't solve it.

@MartyG-RealSense (Collaborator)

@fisakhan Were you able to test the earlier suggestion of enforcing a constant FPS?

#8362 (comment)

@fisakhan (Author)

I tested at 30 and 15 FPS, but it doesn't work.

MartyG-RealSense (Collaborator) commented Feb 26, 2021

Hi @fisakhan, I went over your script carefully again. I see that it is essentially the align_depth2color example script with the comments removed and a couple of edits at the end, so it may be useful to focus on the edits in your own script.

  1. You have used cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE), whereas the original uses:

cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)

  2. This block of code from the original script is removed:
# Render images:
#   depth align to color on left
#   depth on right
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
images = np.hstack((bg_removed, depth_colormap))

And this single line is inserted underneath the cv2.namedWindow instruction:

images = bg_removed


If the original align_depth2color example script is able to function without blurring during fast motion, this would suggest that the problem is occurring within one of the sections that was modified.

@fisakhan (Author)

Thanks @MartyG-RealSense. After running the original align_depth2color example without any changes, I can see a bit of improvement for low-to-medium movement of the object, but fast motion still introduces some blur and alignment problems.

MartyG-RealSense (Collaborator) commented Feb 27, 2021

As long as the RGB sensor on the D435 has a slower rolling shutter while the depth sensor has a fast global shutter available, achieving depth-to-color alignment with fast motion may be problematic.

The YouTube video linked to below, of a D435 attached to a vehicle moving at full speed, demonstrates the capture speed that is supported when using the depth stream only.

https://www.youtube.com/watch?v=OwJmCyAn3JQ

Perhaps you could try aligning the depth image to the infrared image, since infrared and depth originate from the same sensor.

#5093

Point 2 of the section of Intel's camera tuning guide linked to below provides more information about the benefits of using the left infrared sensor due to its natural alignment with depth.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-use-the-left-color-camera
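
A sketch of that approach: the left infrared stream comes from the same imager that depth is computed from, so the two images are already pixel-aligned and no rs.align step is needed.

import pyrealsense2 as rs
import numpy as np

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Stream index 1 selects the left infrared imager (the depth reference).
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)

pipeline = rs.pipeline()
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    ir = np.asanyarray(frames.get_infrared_frame(1).get_data())
    # depth[y, x] and ir[y, x] correspond to the same scene point.
finally:
    pipeline.stop()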

Alternatively, drop use of color and have a pure depth image if the color element is not vital to the image.

As alignment is a CPU-intensive operation, you could also check in Ubuntu whether there is high CPU usage during alignment with the htop command.

https://en.wikipedia.org/wiki/Htop

If your Ubuntu computer is a desktop machine and does not already have an Nvidia GPU, you could add one; librealsense could then take advantage of it to accelerate alignment operations via CUDA support.

@MartyG-RealSense (Collaborator)

Hi @fisakhan Do you require further assistance with this case, please?

@MartyG-RealSense (Collaborator)

Case closed due to no further comments received.
