I used rs2_deproject_pixel_to_point to generate a point cloud, and it differs from what I see in the RealSense Viewer. As the photo below shows, the point cloud produced by the function, judging by the shape of the can opener, appears rotated 180 degrees around the image's x-axis compared with the RealSense Viewer. Also, around the boundaries of the can, my point cloud contains a wide margin with incorrect depth values. My questions are:

1. Is there any way to convert the point cloud to match what the RealSense Viewer shows, other than flipping the signs of the y and z values?
2. Is there any way to fix the background around the can appearing flush with the can, even though I used an aligned depth frame and color frame?
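Regarding question 1: rotating a point cloud 180 degrees about the x-axis is mathematically the same operation as negating the y and z coordinates, so any fix will be equivalent to that sign flip (whether applied to the points or to the viewer's camera matrix). A minimal numpy sketch, with made-up intrinsics and hypothetical helper names, showing pinhole deprojection (what rs2_deproject_pixel_to_point computes when the distortion model is none) and the equivalence:

```python
import numpy as np

def deproject_pixel_to_point(intrin, pixel, depth_m):
    """Pinhole deprojection with no distortion, mirroring what
    rs2_deproject_pixel_to_point does for the 'none' distortion model."""
    fx, fy, ppx, ppy = intrin  # hypothetical tuple layout for this sketch
    x = (pixel[0] - ppx) / fx
    y = (pixel[1] - ppy) / fy
    return np.array([depth_m * x, depth_m * y, depth_m])

def rotate_180_about_x(points):
    """A 180-degree rotation about the x axis; note the rotation
    matrix is diag(1, -1, -1), i.e. it just negates y and z."""
    R = np.diag([1.0, -1.0, -1.0])
    return points @ R.T

# Made-up intrinsics (fx, fy, ppx, ppy) for illustration only.
intrin = (615.0, 615.0, 320.0, 240.0)
p = deproject_pixel_to_point(intrin, (400.0, 300.0), 0.5)
flipped = rotate_180_about_x(p[None, :])[0]
# The rotated point is exactly (x, -y, -z).
assert np.allclose(flipped, [p[0], -p[1], -p[2]])
```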
Hi @WRWA. The two main techniques for generating an RGB-textured pointcloud with the RealSense SDK are to use align_to together with rs2_deproject_pixel_to_point (as you did), or to use pc.calculate and pc.map_to to map depth and RGB together.
It is my understanding that the Viewer uses pc.calculate for its pointcloud generation, as demonstrated by the SDK's C++ example program rs-pointcloud.
Using rs2_deproject_pixel_to_point in combination with alignment can be less accurate than the pc.calculate method because the alignment process can introduce a small amount of inaccuracy. #4612 (comment) has an example of Python code for using pc.calculate.
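For reference, a sketch of the pc.calculate / pc.map_to approach is below. It assumes the pyrealsense2 package and a connected camera, so it is wrapped in a function rather than run directly; the function name is hypothetical.

```python
def textured_pointcloud_via_pc_calculate():
    """Sketch of the pc.calculate / pc.map_to pointcloud method.
    Assumes pyrealsense2 is installed and a RealSense camera is attached."""
    import pyrealsense2 as rs
    import numpy as np

    pipe = rs.pipeline()
    pipe.start()
    try:
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()

        pc = rs.pointcloud()
        pc.map_to(color)              # attach RGB texture coordinates
        points = pc.calculate(depth)  # deproject the whole depth frame

        # N x 3 vertices in meters, and N x 2 texture coordinates
        verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
        return verts, tex
    finally:
        pipe.stop()
```

Because pc.calculate deprojects the raw depth frame directly, it avoids the resampling step that align_to performs, which is where the small inaccuracy mentioned above comes from.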
If a pointcloud is 'wavy' as in your images - similar to #1375 - then moving the camera further away from the object can help to reduce the waviness. Performing a calibration of the camera can also help.
The table under the tray may not be rendering well because of its color. As a general physics principle (not specific to RealSense), dark grey or black surfaces absorb light, which makes it more difficult for depth cameras to read depth information from them. The darker the shade, the more light is absorbed and the less depth detail the camera can obtain. Casting a strong light source onto a black surface can help to bring out depth detail, though the extra illumination could reduce the quality of the tray area of the image by increasing reflections from the metal.