D435 Aligning Depth Frame to RGB Frame #6708

Closed
matthewha123 opened this issue Jun 26, 2020 · 5 comments
matthewha123 commented Jun 26, 2020


Required Info:
- Camera Model: D435
- Firmware Version: 5.12.5.0
- Operating System & Version: Ubuntu 18.04.4
- Kernel Version (Linux Only): (e.g. 4.14.13)
- Platform: PC/Raspberry Pi/NVIDIA Jetson/etc.
- SDK Version: 2.32.1
- Language: C++
- Segment: others

I want to align a depth frame (stream profile 76, Z16 1280x720 @ 30 Hz) to an RGB8 frame (RGB8 1280x720 @ 30 Hz).

I've seen at https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20
that I will need to make calls to rs2_project_point_to_pixel(...) and related functions to achieve this.
However, the depth stream profile reports Brown-Conrady as its distortion model.

In the source code for rs2_deproject_pixel_to_point, I've noticed that RS2_DISTORTION_BROWN_CONRADY is not a handled case. Does rs2_deproject_pixel_to_point support that distortion model?

I am also interested in running frame alignment using processing blocks, as described in the Frame Alignment section here: https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20

However, the frames I would like to align are not in rs2::frame format. Instead, I have access to each frame's raw data buffer, width, height, and bytes per pixel. How might I construct an rs2::frameset directly from this data to then use in the processing block?

MartyG-RealSense commented Jun 27, 2020

If you need to create your own frameset, the discussion in the link below may provide useful guidance.

#5847

If you are aligning depth to color with 2D frames (not a point cloud), the process may be relatively straightforward. The SDK's Align sample program offers a pre-made example of how to do so.

https://github.com/IntelRealSense/librealsense/tree/master/examples/align

In the code of that example, to define the stream configurations that you want, you would change these lines:

cfg.enable_stream(RS2_STREAM_DEPTH);
cfg.enable_stream(RS2_STREAM_COLOR);

To these lines:

cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_BGR8, 30);

The first two numbers in the parentheses define the resolution (e.g. '1280, 720' = 1280x720) and the final number sets the FPS (e.g. '30' = 30 FPS).

For the D435 model though, bear in mind that the optimal depth resolution is 848x480, whilst for the D415 model the optimal depth resolution is 1280x720.
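
To put those settings in context, here is a minimal sketch of the whole depth-to-color flow, along the lines of the rs-align example (display and error handling omitted, variable names illustrative):

#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_BGR8, 30);
    pipe.start(cfg);

    // rs2::align re-projects depth pixels onto the color sensor's viewport
    rs2::align align_to_color(RS2_STREAM_COLOR);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::frameset aligned = align_to_color.process(frames);

        rs2::depth_frame depth = aligned.get_depth_frame();
        rs2::video_frame color = aligned.get_color_frame();
        // depth is now pixel-aligned to color: depth.get_distance(x, y)
        // returns the distance in meters at color pixel (x, y)
    }
}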


If, however, you want to map color texture onto a 3D point cloud, the SDK also has an example program called rs-pointcloud:

https://dev.intelrealsense.com/docs/rs-pointcloud
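
The core of that example is the rs2::pointcloud processing block; a minimal sketch of the texture-mapping step (pipeline setup omitted, function name illustrative) would be:

#include <librealsense2/rs.hpp>

// Map the color frame onto the point cloud, as rs-pointcloud does
rs2::points textured_points(rs2::pipeline& pipe, rs2::pointcloud& pc)
{
    rs2::frameset frames = pipe.wait_for_frames();
    pc.map_to(frames.get_color_frame());           // RGB becomes the texture source
    return pc.calculate(frames.get_depth_frame()); // vertices + UV texture coordinates
}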

You can also map color texture to a 3D point cloud in the RealSense Viewer. To do so:

  1. Switch to '3D' mode to view camera information as a point cloud.

  2. Activate both the Depth and RGB streams. This makes drop-down menus appear at the top of the Viewer window.

  3. Left-click on the drop-down menu called Texture Source and select the Color option to map the RGB onto the point cloud.


It sounds as though you require a custom coded solution, as you wish to use raw data that has not been adjusted ('rectified'). That could be difficult. The Y16 IR format provides unrectified data (the other stream formats are rectified by the Vision Processor D4 hardware inside the camera after the data is captured).

The IR sensor can also provide a color image, though on the D435 it is monochrome, whilst the D415 can provide a color image from its left IR sensor. So I wonder whether you could align an unrectified Y16 color image from the left IR sensor of a D415 with the depth image. Even that would be complicated, though, as described in the comment linked below (please also read the comment beneath it).

#5062 (comment)

This may not be a helpful method, though, if you need to use a D435 or the program needs to work with any model of 400 Series depth camera.


BTW, as an interesting side note, the SDK's compatibility wrapper for the LabVIEW software has a sample program that can view the raw camera images that are input into the camera's ASIC before they are calibrated and rectified.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/labview#realsense-hello-world-left-right-unrectified

matthewha123 commented Jun 29, 2020

Hi @MartyG-RealSense, thank you for the links and information.
However, I do not have access to the rs2::frames produced by the D435, as I am not running the depth alignment on the same device that is connected to the RealSense cameras.

My intended pipeline involves a Jetson receiving frames from the D435 and then streaming those frames to a processing server. The processing server has access to each frame's raw data buffer (just a void *) and the width, height, and bytes per pixel of the depth and color frames.

Is there a way I could also manually create rs2::frames from that data? Never mind, I see here: https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device
how I might accomplish that.
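
For reference, here is a minimal sketch of that approach for a single depth stream, assuming the intrinsics are streamed from the Jetson alongside the raw buffers (the intrinsics values below are placeholders):

#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>
#include <cstdint>
#include <vector>

int main()
{
    const int W = 1280, H = 720, BPP = 2; // Z16 is 2 bytes per pixel

    rs2::software_device dev;
    auto depth_sensor = dev.add_sensor("Depth");

    // Placeholder intrinsics - use the real values received from the camera
    rs2_intrinsics intr = { W, H, W / 2.f, H / 2.f, 640.f, 640.f,
                            RS2_DISTORTION_BROWN_CONRADY, { 0, 0, 0, 0, 0 } };
    auto depth_stream = depth_sensor.add_video_stream(
        { RS2_STREAM_DEPTH, 0, 0, W, H, 30, BPP, RS2_FORMAT_Z16, intr });

    rs2::syncer sync;
    depth_sensor.open(depth_stream);
    depth_sensor.start(sync);

    std::vector<uint8_t> buffer(W * H * BPP); // stands in for the received void*
    int frame_number = 0;

    depth_sensor.on_video_frame({ buffer.data(), // raw pixel buffer
                                  [](void*) {},  // no-op deleter (buffer owned elsewhere)
                                  W * BPP, BPP,  // stride and bytes per pixel
                                  (rs2_time_t)0, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME,
                                  ++frame_number, depth_stream });

    rs2::frameset fs = sync.wait_for_frames(); // usable with rs2::align etc.
}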

Alternatively, I have seen code that performs alignment by directly calling the rs2_deproject_... and rs2_project_... functions, and I was wondering whether you knew if the distortion model I mentioned is supported by those functions.
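
For context, the manual route would look roughly like the sketch below, using the helpers in librealsense2/rsutil.h and assuming the intrinsics and extrinsics are available on the server (e.g. streamed alongside the raw buffers):

#include <librealsense2/rsutil.h>
#include <cstdint>

// Sketch: map one depth pixel into the color image. depth_scale is the
// value from rs2::depth_sensor::get_depth_scale() (~0.001 for a D435).
void depth_pixel_to_color_pixel(float color_pixel[2],
                                int x, int y,
                                uint16_t raw_z16, float depth_scale,
                                const rs2_intrinsics& depth_intrin,
                                const rs2_intrinsics& color_intrin,
                                const rs2_extrinsics& depth_to_color)
{
    float depth_pixel[2] = { (float)x, (float)y };
    float point_in_depth[3], point_in_color[3];

    // 2D depth pixel -> 3D point in the depth camera's coordinate space
    rs2_deproject_pixel_to_point(point_in_depth, &depth_intrin, depth_pixel,
                                 raw_z16 * depth_scale);
    // 3D point in depth space -> 3D point in color space
    rs2_transform_point_to_point(point_in_color, &depth_to_color, point_in_depth);
    // 3D point in color space -> 2D pixel in the color image
    rs2_project_point_to_pixel(color_pixel, &color_intrin, point_in_color);
}

As I understand it, the D400 depth streams usually report all-zero distortion coefficients, in which case the deprojection reduces to the plain pinhole model regardless of which distortion enum is named.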

MartyG-RealSense commented

The hardware setup that you are describing sounds similar to a RealSense paper that Intel recently published about creating an open-source Ethernet network. In that paper, Raspberry Pi boards were used to demonstrate the concept: the camera was connected to the Pi, and the Pi was connected by Ethernet to a central computer.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras


MartyG-RealSense commented

Hi @matthewha123, do you still require assistance with this case, or can it be closed? Thanks!

MartyG-RealSense commented

Thanks very much for providing an update!
