The width of pointcloud does not match the rgb image size #2937
Hi @quangmta Do you get a pointcloud closer to expectations if you add pointcloud.allow_no_texture_points:=true to your launch instruction, please?
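For reference, the full launch instruction being suggested would look something like the following. This is a sketch that assumes the wrapper's standard rs_launch.py launch file; your package or launch-file name may differ.

```shell
# Enable the pointcloud filter and allow points without color texture
ros2 launch realsense2_camera rs_launch.py \
    pointcloud.enable:=true \
    pointcloud.allow_no_texture_points:=true
```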
hi @MartyG-RealSense. Thank you for answering so quickly. When I add 'pointcloud.allow_no_texture_points:=true', the point cloud seems to be aligned to the depth image. At the top of the point cloud the colors are filled in automatically (predicted), which looks incorrect. You can see this in the picture. The width of the pointcloud increases to 60266, 59276, and so on; it is not fixed. I want to create a pointcloud with correct color (aligned to the rgb image) and a fixed width equal to the size of the rgb picture.
Which RealSense camera model are you using, please? If it is one of the 'D435' type models such as D435, D435i or D435f then the depth sensor's field of view is larger than the color sensor's. This means that the outer regions of the original depth image get cut off when a depth-RGB aligned pointcloud is created, because the color sensor cannot see all of the areas of a scene that the depth sensor can.
I'm using a D435i. Yes, I agree with you: the missing area is filled in and its color seems incorrect. So I don't use pointcloud.allow_no_texture_points:=true.
The ROS wrapper aligns depth to color, meaning that some of the depth detail gets cut off due to aligning to the color sensor's smaller field of view on the D435 type camera models. Whilst color to depth alignment is possible in the librealsense SDK (which stretches the color image to the size of the larger depth image), the wrapper does not support color to depth alignment unfortunately.
Thanks for the help. I'll try another way. If something works out, I'll tell you.
Hi @quangmta Do you have an update about this case that you can provide, please? Thanks!
Hi @MartyG-RealSense I just sorted the pointcloud by x to get the right horizontal order
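The sorting approach described in the comment above might be sketched like this. This is a minimal example using synthetic points; with a real PointCloud2 message the x/y/z fields would first have to be unpacked (for example with sensor_msgs_py.point_cloud2.read_points):

```python
import numpy as np

# Synthetic Nx3 point cloud (x, y, z) in arbitrary order,
# standing in for points unpacked from a PointCloud2 message.
points = np.array([
    [ 0.5, 0.1, 1.2],
    [-0.3, 0.0, 1.1],
    [ 0.1, 0.2, 1.3],
])

# Sort the points by their x coordinate to restore a
# left-to-right horizontal order.
ordered = points[np.argsort(points[:, 0])]

print(ordered[:, 0])  # x values are now ascending
```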
It's excellent news that you were successful, @quangmta - thanks very much for the update!
@MartyG-RealSense Thank you very much for the help
Hi @MartyG-RealSense I'm using a D435i, but the depth error is very large, about 10% at 2 m and 20% at 4 m. I want to align the pointcloud from the camera with the laser scan from a Hokuyo URG-04LX-UG01 2D LIDAR. Have you done this before, or do you have any ideas?
Aligning RealSense and non-RealSense devices is complicated but possible, like in the case at IntelRealSense/librealsense#10363 where a RealSense camera was aligned with a thermal camera. The method uses the RealSense SDK's software-device interface, which is best used with the C++ language (which may not suit your needs if you are using ROS). I note that you have a Jetson Nano. A ROS project at the link below for using a RealSense camera and a Hokuyo sensor together with Jetson may therefore be of interest.
The wavy depth resembles a phenomenon known as "ghost noise". In IntelRealSense/librealsense#4553 and IntelRealSense/librealsense#3741 (comment), possible remedies suggested by Intel RealSense team members include performing a calibration and applying post-processing filters.
Hi @quangmta Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense I still haven't resolved it. Now I want to try reprocessing the depth map instead of the point cloud. Thanks for your dedication!
Hi @quangmta Do you need advice about using post-processing filters in ROS, please?
Hi @MartyG-RealSense. Yes. Could you tell me how to create a point cloud from a depth map, please? I want to post-process the depth map and then create a point cloud from it.
The filters in the ROS2 wrapper have parameter names such as disparity_filter - convert depth to disparity before applying other filters and back. Below is an example of a ROS2 launch instruction that enables the decimation filter and the pointcloud.
For other types of post-processing filter, change decimation_filter.enable:=true to another filter name, such as spatial_filter.enable:=true or temporal_filter.enable:=true.
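The launch instruction referred to above would be of roughly this shape (a sketch assuming the wrapper's standard rs_launch.py launch file):

```shell
# Enable the pointcloud and the decimation post-processing filter
ros2 launch realsense2_camera rs_launch.py \
    pointcloud.enable:=true \
    decimation_filter.enable:=true
```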
Hi @MartyG-RealSense But how can I use my manual filter? Thanks, and Happy New Year!
Are you referring to the resolutions for depth and color that you manually defined? If so, then when a depth + color 'textured pointcloud' is generated, color data will automatically be mapped onto the depth coordinates to generate a pointcloud where depth is aligned to color. If you have a D435 or D455 type of camera (such as D435 / D435i or D455 / D455f) then you should be able to set both depth and color to a resolution of 424x240.

On the D435 type of camera though, the field of view of the color sensor is smaller than the depth field of view. This means that when depth and color are aligned, some of the image will be cut off at its outer edges. You can obtain a full pointcloud by adding the parameter pointcloud.allow_no_texture_points:=true to your launch instruction to fill the regions beyond the cut-off area with zeros. You cannot add color texture to the area filled with zeros, though, because the RGB sensor cannot see as much of the scene as the depth sensor can, so color data for the outer regions of the depth image does not exist. On camera models where the depth and color fields of view are approximately the same size, such as the D455, the amount of edge area cut off in an aligned image is much smaller.
I want to subscribe to the topics for the depth map and rgb images, change something using my own methods, and then create a pointcloud from that. I want to ask how to create a point cloud from a depth map. Thanks.
Including pointcloud.enable:=true in your launch instruction converts the depth and color data into a point cloud published on the topic /camera/depth/color/points.
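Creating a point cloud from a depth image manually, as asked above, can be sketched with standard pinhole back-projection. The intrinsic values below are made-up placeholders; the real ones would come from the camera's camera_info topic for the stream the depth is aligned to.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to an (H*W, 3) point array.

    Pixels with zero depth produce points at the origin; filter them
    out afterwards if needed. Points are emitted in row-major order,
    one per pixel, so the result keeps the image's pixel ordering.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical intrinsics for a 320x180 color stream.
fx, fy, cx, cy = 230.0, 230.0, 160.0, 90.0
depth = np.full((180, 320), 2.0)          # synthetic flat wall 2 m away
points = depth_to_points(depth, fx, fy, cx, cy)
print(points.shape)  # (57600, 3) -- one point per pixel
```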
Ok. Thank you!
Hi @quangmta Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense I think I don't need it now. You can close this issue. Thank you for the help!
You are very welcome. Thanks very much for the update!
Hello
I run this project in ROS with the config
depth_module.profile:=424x240x6
and rgb_camera.profile:=320x180x30
When I echo the topic /camera/depth/color/points, the height is 1 and the width is about 43117, 43661, 43700, ....
When the pointcloud is created from the rgb image and the depth image, its width should be 320x180 = 57600. I guess there are some filters applied to this pointcloud.
Can I get the pointcloud with the original size? For example, in my case that is 57600x1, or better still 320x180. I want to subscribe to this pointcloud topic, and I need the original size to determine which point of the pointcloud corresponds to which pixel in the rgb image.
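Assuming the cloud were organized with one point per pixel in row-major order (which is what the question asks for), the pixel-to-point correspondence would reduce to simple index arithmetic. The widths below match the 320x180 rgb profile mentioned above; the helper names are hypothetical.

```python
# Hypothetical row-major mapping between an RGB pixel (u, v) and the
# index of the corresponding point in an organized 320x180 cloud.
W, H = 320, 180

def pixel_to_index(u, v, width=W):
    """Index of the point for pixel column u, row v."""
    return v * width + u

def index_to_pixel(i, width=W):
    """Pixel (u, v) for the point at flat index i."""
    return i % width, i // width

print(pixel_to_index(10, 3))   # 970
print(index_to_pixel(970))     # (10, 3)
```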
My environment is:
Jetson nano, Ubuntu 20.04, RealSense SDK 2.51.1, Wrapper 4.51.1
Thank you!