2D pixel and 3D pointcloud coordinates mismatch #3180
By the way, unfortunately I cannot use the RealSense SDK to directly call it.
Hi @HRItdy It is more usual for depth_image_proc to be used with the RealSense ROS1 wrapper to obtain a pointcloud, rather than with the ROS2 wrapper. The 'rs_rgbd.launch' ROS1 launch file that publishes its pointcloud to depth_image_proc sets align_depth to true, so it may be worth trying to enable alignment in your ROS2 launch.

If you are publishing the RealSense topics with the rs_launch.py launch file then you can enable align_depth in the launch instruction. This will have the same effect as calling depth_image_proc/register.
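For reference, the alignment switch can be set directly in the launch instruction. A hedged example, since the parameter spelling differs between wrapper generations (`align_depth.enable` in the 4.x ROS2 wrapper, plain `align_depth` in the ROS1 wrapper):

```bash
# ROS2 wrapper (4.x releases):
ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true

# ROS1 wrapper:
roslaunch realsense2_camera rs_camera.launch align_depth:=true
```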
Thanks for your prompt response, @MartyG-RealSense! In this case my mask should overlap the leftmost ball, but in the converted pointcloud the corresponding points come out somewhere strange (circled). I suspect some transformation is not correct. Do you have an off-the-shelf package or demo that shows the projection from 2D to 3D coordinates? Any suggestion is appreciated!
In the specification listing at the top of this discussion your ROS wrapper version is listed as {4.51.1, 4.54.1, etc..}, but I see that none of the information in that box has been edited from its defaults. So can you confirm whether you are actually using the ROS1 wrapper and the rs_rgbd launch file, please? Thanks!
Oh sorry, I forgot to change this... I edited several fields, but because the RealSense is connected remotely, I will update the info once I have access to the remote machine. Yes, I'm using the ROS1 wrapper and the rs_rgbd launch file. The content of the launch file is:
If you are using 640x480 resolution then it may be worth removing the Decimation filter. This filter 'downsamples' the depth resolution to half of the one that has been set, so the filter will reduce the pixel resolution of the depth image from 640x480 to 320x240.
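For illustration, the behaviour described here is the SDK's decimation post-processing filter; a minimal pyrealsense2 sketch (the ROS wrapper's filter wraps this, so the snippet is only for understanding, not part of the launch file):

```python
# At its default magnitude of 2, the decimation filter halves each image
# dimension, turning a 640x480 depth frame into roughly 320x240.
import pyrealsense2 as rs

decimation = rs.decimation_filter()
decimation.set_option(rs.option.filter_magnitude, 2)  # downsampling factor
# Given a depth_frame obtained from a running pipeline:
# smaller_frame = decimation.process(depth_frame)
```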
Thanks for the reminder! I have removed this filter, but the segmented point cloud is still wrong... Do you have any suggestion on how to precisely project 2D pixel coordinates to 3D? Thanks!
The ROS1 wrapper has a Python node script called 'show_center_depth.py' that converts 2D coordinates into a 3D depth value. A node script is launched from the ROS terminal after the ROS wrapper has been launched. This is perhaps not what you have in mind though if you would prefer to do everything within the launch file instead of using an external script.

In regard to your mention of depth_image_proc/register, 'registration' means to align depth and color images together. As align_depth is already doing this, you could try either not using depth_image_proc/register or setting align_depth to false to see what happens.
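For later readers, the conversion the script performs looks roughly like the sketch below (a sketch, not the official script verbatim; it assumes pyrealsense2 is installed and a plumb_bob CameraInfo):

```python
# Build rs2.intrinsics from a sensor_msgs/CameraInfo message, then
# deproject a 2D pixel plus its depth value into a 3D point.
import pyrealsense2 as rs2

def intrinsics_from_camera_info(camera_info):
    intr = rs2.intrinsics()
    intr.width = camera_info.width
    intr.height = camera_info.height
    intr.fx = camera_info.K[0]   # K is row-major 3x3: [fx 0 cx; 0 fy cy; 0 0 1]
    intr.fy = camera_info.K[4]
    intr.ppx = camera_info.K[2]
    intr.ppy = camera_info.K[5]
    intr.model = rs2.distortion.brown_conrady  # ROS 'plumb_bob'
    intr.coeffs = list(camera_info.D)
    return intr

def pixel_to_point(intr, u, v, depth):
    # depth is in the units of the depth image (millimetres for the
    # wrapper's 16UC1 images); the returned [X, Y, Z] uses the same units.
    return rs2.rs2_deproject_pixel_to_point(intr, [u, v], depth)
```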
Thanks! I will try this script ASAP. An external script works for me! By the way, I'm very new to RGBD cameras, so I would like to ask: after we do the depth-to-color alignment, the depth image is overlaid on the color image, right? Then if I want to get the 3D coordinate of one 2D pixel, say [224, 125], can I get it by substituting the depth topic in the script with the aligned one?
Thanks!
When align_depth is enabled, the depth image is matched to the color image coordinates; the depth image's field of view is resized to match the color image's field of view. If a D435 type RealSense camera is being used (D435, D435i, D435f, etc.) then this alignment causes the outer edges of the depth image to be excluded from the aligned image, because the D435 type cameras have a smaller field of view on the color sensor and so cannot see as much of the scene as the depth sensor. The D455 type cameras have wide, almost identical fields of view on both sensors, so the amount of edge information cut off from the aligned image is minimal.

Yes, I would recommend switching to the aligned topic, as show_center_depth.py does not use the color stream in the script's default state.
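A minimal sketch of what that topic switch looks like in ROS1 (topic names assume the wrapper's default 'camera' namespace with align_depth enabled):

```python
# Subscribe to the aligned depth image and its camera info.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image

bridge = CvBridge()

def depth_cb(msg):
    depth = bridge.imgmsg_to_cv2(msg, msg.encoding)  # 16UC1, millimetres
    rospy.loginfo('centre depth: %s mm', depth[depth.shape[0] // 2, depth.shape[1] // 2])

def info_cb(msg):
    rospy.loginfo_once('aligned intrinsics frame: %s', msg.header.frame_id)

rospy.init_node('aligned_depth_listener')
rospy.Subscriber('/camera/aligned_depth_to_color/image_raw', Image, depth_cb)
rospy.Subscriber('/camera/aligned_depth_to_color/camera_info', CameraInfo, info_cb)
rospy.spin()
```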
Hi @MartyG-RealSense! Really, thanks for your explanation! I tried the show_center_depth.py script, and the original code had some issues, so I slightly revised it as:
And I have changed the subscribed topic:
Unfortunately, in the published result the segmented pointcloud still doesn't coincide with the ground truth: the upper part is the pointcloud segmented according to the mask. The mask is correct on the color frame. And in the code I tried both.
Are you able to move the black equipment at the bottom of the color image out of the camera's view? Black or dark gray objects are difficult for a camera to obtain depth information from because they absorb light, and the result is an area of empty black without any depth values in the approximate area where the black / dark gray surface is. This depth-empty area could be confusing the camera given its close proximity to the separated ball. The black area only looks as though it has depth because it is shaped like the black object. For example, a black USB cable will appear as a cable-shaped area on the depth image, but there is no data in that area.
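A practical consequence for the segmentation, as a hedged numpy sketch: RealSense encodes "no depth" as zero, so mask pixels that fall on the black equipment should be dropped before deprojection (the array names here are stand-ins):

```python
import numpy as np

depth_image = np.zeros((480, 640), dtype=np.uint16)  # stand-in aligned depth
mask = np.zeros((480, 640), dtype=bool)              # stand-in segmentation mask

valid = depth_image > 0                # 0 means "no depth" on RealSense
usable = np.logical_and(mask, valid)   # drop mask pixels without depth data
pixels = np.argwhere(usable)           # (row, col) pairs safe to deproject
```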
Thanks for your reply! @MartyG-RealSense Unfortunately I cannot move the gripper out of the camera's view, but I think I found the reason why the segmented points do not line up with the entire pointcloud: the RealSense pointcloud seems to be published in a different frame from camera_link. But there is still one problem: why is there a deviation between the segmented part and the entire pointcloud? Is there any function I can use to eliminate this? Thanks!
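For the frame part, a hedged ROS1 sketch of moving a 3D point between camera frames with tf2 (the frame names are assumptions based on the wrapper's defaults, not taken from the thread):

```python
# Transform a deprojected point into another camera frame with tf2.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped with tf2
from geometry_msgs.msg import PointStamped

rospy.init_node('frame_transform_example')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(1.0)  # give the listener time to fill the buffer

pt = PointStamped()
pt.header.frame_id = 'camera_color_optical_frame'  # frame of the aligned depth
pt.header.stamp = rospy.Time(0)                    # use latest transform
pt.point.x, pt.point.y, pt.point.z = 0.1, 0.0, 0.5

out = buf.transform(pt, 'camera_link', rospy.Duration(1.0))
print(out.point)
```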
Is it the black holes on the image that you are having problems with? If it is, and you are using ROS1 Noetic, then you could try applying the hole_filling filter to fill in the black holes.

If the deviation that you describe relates to something other than the black holes then please let me know.
Hi @MartyG-RealSense Sorry, I didn't make it clear. In this image, the red part is the segmented-out portion of the pointcloud, generated by projecting the 2D pixels corresponding to the yellow part into 3D pointcloud coordinates. So the red part should overlap with the yellow part, but for now there is a deviation between them. The code I used is similar to the one you suggested:

Any suggestion is appreciated!
camera_link corresponds to the left infrared sensor of the camera, which is the origin point of depth. When depth is aligned to color though, the origin of depth changes from the centerpoint of the left infrared sensor to the centerpoint of the RGB sensor, which is horizontally offset from the position of the left IR sensor on the front of the camera.

So in this situation, when projecting 2D pixels to 3D points, aligned intrinsics or color intrinsics are used instead of depth intrinsics. If your manual adjustment involves intrinsics then it may be worth checking whether your adjustment code is using depth intrinsics instead of color or aligned intrinsics.
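A quick hedged check that follows from this (default topic names assumed): with alignment enabled, the aligned stream's CameraInfo should match the color stream's, and that is the one to feed into the 2D-to-3D projection rather than /camera/depth/camera_info:

```python
# Confirm the aligned depth stream shares the color stream's intrinsics.
import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node('intrinsics_check')
color = rospy.wait_for_message('/camera/color/camera_info', CameraInfo)
aligned = rospy.wait_for_message('/camera/aligned_depth_to_color/camera_info', CameraInfo)
print('same intrinsics:', color.K == aligned.K)
print('aligned frame:', aligned.header.frame_id)  # expect the color optical frame
```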
Are you able to access the RealSense Viewer tool? If you are then please next try resetting the calibration of your camera to its factory-new default settings using the instructions at IntelRealSense/librealsense#10182 (comment) in order to eliminate the possibility that your camera sensors have become mis-calibrated. This could occur if there is a physical shock to the camera, such as a hard knock, a drop to the ground or severe vibration.
Hi @HRItdy Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Hi @MartyG-RealSense. Really, thanks for your help. I have solved the 2D-3D mismatch problem and I want to record it here for anyone who may have the same issue when practicing with ROS instead of the pyrealsense2 library:
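(The recorded steps did not survive in the thread, so as a stand-in here is a sketch consistent with the conclusions above; it is not @HRItdy's verbatim code, and the topic and frame names are the wrapper's defaults: enable align_depth, read the aligned depth plus its CameraInfo, deproject mask pixels with the aligned/color intrinsics, and keep the resulting points in the aligned frame_id.)

```python
# Sketch of the end-to-end 2D -> 3D pipeline the thread converges on.
import numpy as np
import pyrealsense2 as rs2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image

bridge = CvBridge()

def deproject_mask(depth_msg, info_msg, mask):
    # Build intrinsics from the ALIGNED (color-matched) CameraInfo.
    intr = rs2.intrinsics()
    intr.width, intr.height = info_msg.width, info_msg.height
    intr.fx, intr.fy = info_msg.K[0], info_msg.K[4]
    intr.ppx, intr.ppy = info_msg.K[2], info_msg.K[5]
    intr.model = rs2.distortion.brown_conrady
    intr.coeffs = list(info_msg.D)

    depth = bridge.imgmsg_to_cv2(depth_msg, depth_msg.encoding)  # 16UC1, mm
    points = []
    for v, u in np.argwhere(np.logical_and(mask, depth > 0)):
        x, y, z = rs2.rs2_deproject_pixel_to_point(
            intr, [int(u), int(v)], float(depth[v, u]))
        points.append((x / 1000.0, y / 1000.0, z / 1000.0))  # mm -> metres
    return points  # valid in the frame info_msg.header.frame_id

rospy.init_node('mask_to_points')
info = rospy.wait_for_message('/camera/aligned_depth_to_color/camera_info', CameraInfo)
depth_msg = rospy.wait_for_message('/camera/aligned_depth_to_color/image_raw', Image)
mask = np.zeros((info.height, info.width), dtype=bool)  # stand-in segmentation mask
print('segmented points:', len(deproject_mask(depth_msg, info, mask)))
```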
You are very welcome, @HRItdy - thanks so much for the update about your success and for sharing your solution! |
Issue Description
Hi! I'm using a RealSense to do some object detection work. Basically, I use an object detection model to get the 2D coordinates of the object mask in the color image, and then project them into the pointcloud to get the corresponding 3D coordinates.
I used depth_image_proc/register and depth_image_proc/point_cloud_xyzrgb to do the alignment between color and depth, and the result is quite good:

But after I get the mask and find the 3D coordinates, the segmented pointcloud deviates from the one I want (it should be the leftmost red ball, but the output is the one circled).

My code is like the following. Is there an extra transformation between the original color image and the aligned color image? Thanks!