D435 Aligning Depth Frame to RGB Frame #6708
If you need to create your own frameset, the discussion in the link below may be useful guidance.

If you are aligning depth to color with 2D frames (not a point cloud), the process may be relatively straightforward. The SDK's rs-align sample program offers a pre-made example of how to do so: https://github.com/IntelRealSense/librealsense/tree/master/examples/align

In the code of that example, to define the stream configurations that you want, you would change this line:

cfg.enable_stream(RS2_STREAM_DEPTH);

to this:

cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);

The first two numbers in the brackets define the resolution (e.g. '1280, 720' = 1280x720) and the final number sets the FPS (e.g. '30' = 30 FPS). For the D435 model though, bear in mind that the optimal depth resolution is 848x480, whilst for the D415 model the optimal depth resolution is 1280x720.

If however you want to map color texture onto a 3D point cloud, the SDK also has an example program called rs-pointcloud: https://dev.intelrealsense.com/docs/rs-pointcloud

You can also map color texture onto a 3D point cloud in the RealSense Viewer.
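As a hedged sketch of those config changes in context (the 1280x720 @ 30 FPS Z16/RGB8 profiles are example values, not the only valid ones; this is a minimal outline rather than the full rs-align sample, and it needs a connected camera to run):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Request explicit stream profiles instead of the defaults.
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);

    rs2::pipeline pipe;
    pipe.start(cfg);

    // Align depth frames into the color stream's viewpoint, as rs-align does.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    rs2::frameset frames  = pipe.wait_for_frames();
    rs2::frameset aligned = align_to_color.process(frames);

    rs2::depth_frame aligned_depth = aligned.get_depth_frame();
    rs2::video_frame color         = aligned.get_color_frame();
    return 0;
}
```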
It sounds as though you require a custom coded solution, as you wish to use raw data that has not been adjusted ('rectified'). This could be difficult though. The Y16 IR format provides unrectified data (the other stream formats are rectified by the Vision Processor D4 hardware inside the camera after the data is captured).

The IR sensor can also provide a color image, though on the D435 models it is monochrome, whilst the D415 model can provide a color image from its left IR sensor. So I wonder if you could align an unrectified Y16 color image from the left IR sensor of a D415 with the depth image. Even that would be complicated though, as described in the comment in a discussion linked to below (please read the comment beneath it too). This may not be a helpful method if you need to use a D435 or if the program needs to work with any model of 400 Series depth camera.

BTW, as an interesting side-note, the SDK's compatibility wrapper for the LabVIEW software has a sample program that can look at the raw camera images that are input into the camera ASIC before they are calibrated and rectified.
Hi @MartyG-RealSense, thank you for the links and information. My intended pipeline involves a Jetson receiving frames from the D435, then streaming those frames to a processing server. The processing server has access to the raw data buffer (just a void *), and the width, height, and bytes-per-pixel of the depth and color frames.
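For the raw-buffer situation described above, one possible route is the SDK's software-device facility (demonstrated in the rs-software-device example), which wraps externally supplied buffers into frames that rs2::align can consume. The sketch below is hedged: the intrinsics, extrinsics, stream parameters, and matcher choice are placeholders that must be replaced with the camera's real calibration, and the exact fields of rs2_software_video_frame have varied between SDK versions, so check against your installed headers:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>

// Sketch: wrap one depth buffer and one RGB8 buffer (void* + width/height)
// into a synchronized frameset via rs2::software_device.
rs2::frameset make_frameset(void* depth_data, void* color_data,
                            int w, int h, double timestamp)
{
    // Placeholder pinhole intrinsics -- substitute the real calibration.
    rs2_intrinsics intr = { w, h, w / 2.f, h / 2.f, 600.f, 600.f,
                            RS2_DISTORTION_BROWN_CONRADY, { 0, 0, 0, 0, 0 } };

    rs2::software_device dev;
    auto depth_sensor = dev.add_sensor("Depth");
    auto color_sensor = dev.add_sensor("Color");

    auto depth_profile = depth_sensor.add_video_stream(
        { RS2_STREAM_DEPTH, 0, 0, w, h, 30, 2, RS2_FORMAT_Z16, intr });
    auto color_profile = color_sensor.add_video_stream(
        { RS2_STREAM_COLOR, 0, 1, w, h, 30, 3, RS2_FORMAT_RGB8, intr });

    // Placeholder identity extrinsics -- substitute the real depth-to-color
    // transform, otherwise alignment output will be wrong.
    depth_profile.register_extrinsics_to(
        color_profile, { { 1, 0, 0, 0, 1, 0, 0, 0, 1 }, { 0, 0, 0 } });

    dev.create_matcher(RS2_MATCHER_DEFAULT);
    rs2::syncer sync;
    depth_sensor.open(depth_profile);
    color_sensor.open(color_profile);
    depth_sensor.start(sync);
    color_sensor.start(sync);

    // Inject the raw buffers; the empty deleter assumes the caller owns them.
    depth_sensor.on_video_frame({ depth_data, [](void*) {}, w * 2, 2,
                                  timestamp, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME,
                                  0, depth_profile.get() });
    color_sensor.on_video_frame({ color_data, [](void*) {}, w * 3, 3,
                                  timestamp, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME,
                                  0, color_profile.get() });

    return sync.wait_for_frames();
}
```

The returned frameset could then be passed to an rs2::align processing block in the usual way.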
Alternatively, I have seen code that performs alignment by directly calling the
The hardware setup that you are describing sounds similar to a RealSense paper that Intel recently published about creating an open-source ethernet network. In that paper, they used Raspberry Pi boards to demonstrate the concept. The camera was connected to the Pi and the Pi was connected by ethernet to a central computer.
Hi @matthewha123 Do you still require assistance with this case please, or can it be closed? Thanks!
Thanks very much for providing an update!
I want to align a depth frame (stream profile 76, Z16 1280x720 @ 30Hz) to an RGB8 frame (RGB8 1280x720 @ 30Hz).

I've seen on https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20 that I will need to make calls to rs2_project_point_to_pixel(...), etc. to achieve this. However, the depth stream profile has Brown Conrady as its distortion model. In the source code for rs2_deproject_pixel_to_point, I've noticed that RS2_DISTORTION_BROWN_CONRADY is not a handled case. Does rs2_deproject_pixel_to_point support that distortion model?

I am also interested in running frame alignment using processing blocks, as described in the Frame Alignment section here: https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20. However, the frames I would like to align are not in rs2::frame format. Instead, I have access to the frame's raw data buffer, width, height, and bytes_per_pixel. How might I directly construct an rs2::frameset from this data to then use in the processing block?
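To make the projection math concrete, here is a self-contained sketch of the pinhole deprojection and projection that rs2_deproject_pixel_to_point and rs2_project_point_to_pixel perform. It deliberately covers only the undistorted (RS2_DISTORTION_NONE) case, which sidesteps the Brown-Conrady question above; the `intrinsics` struct and function names are hypothetical stand-ins, not the SDK's types:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal stand-in for rs2_intrinsics: only the pinhole
// terms, with no distortion coefficients.
struct intrinsics {
    float ppx, ppy; // principal point (pixels)
    float fx, fy;   // focal lengths (pixels)
};

// Back-project pixel (u, v) at the given depth (meters) into a 3D point
// in the camera coordinate frame -- the RS2_DISTORTION_NONE branch of
// rs2_deproject_pixel_to_point, in spirit.
void deproject_pixel_to_point(float point[3], const intrinsics& in,
                              float u, float v, float depth)
{
    float x = (u - in.ppx) / in.fx;
    float y = (v - in.ppy) / in.fy;
    point[0] = depth * x;
    point[1] = depth * y;
    point[2] = depth;
}

// The inverse operation: project a 3D point back to pixel coordinates
// (cf. rs2_project_point_to_pixel, again without distortion).
void project_point_to_pixel(float pixel[2], const intrinsics& in,
                            const float point[3])
{
    float x = point[0] / point[2];
    float y = point[1] / point[2];
    pixel[0] = x * in.fx + in.ppx;
    pixel[1] = y * in.fy + in.ppy;
}
```

Aligning depth to color by hand chains these: deproject each depth pixel with the depth intrinsics, transform by the depth-to-color extrinsics, then project with the color intrinsics, which is what the rs2::align processing block does internally.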