Converting numpy depth_map back into pyrealsense2 depth frame #5784
Comments
Following.
Interested in an answer too!
Hi,
@LMurphy99 Sorry for the late response. I have submitted your request to the engineering team and will let you know of any updates. Thanks!
Following.
Are there any updates on this? Thanks!
@adityashrm21 Sorry, this enhancement is still in progress. I will let you know of any updates. Thanks!
Following.
Hi everyone, the BufData object creation question has been answered by a RealSense team member in the link below. This case will remain open, though, due to the ongoing enhancement request associated with the original question.
@MartyG-RealSense Are there any updates on this? Thanks.
Hi all, any updates on this feature to convert a numpy array back into a RealSense depth frame, in order to use the built-in functions of pyrealsense2? Thanks.
Hi @DeepakChandra2612 There is no further information to report about the official feature request to Intel, though it remains active. I am also not aware of any further progress that RealSense users have made themselves on numpy-to-rs2::frame conversion beyond the #2551 link already mentioned.
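For reference, the approach discussed around #2551 builds on the SDK's `software_device` API, which synthesizes frames from user-supplied buffers. The following is an untested sketch that assumes the Python bindings mirror the C++ software-device example; the field names (`vs.fmt`, `f.pixels`, `add_read_only_option`, etc.) are taken from that example and may differ between pyrealsense2 versions, so treat every SDK call here as an assumption to verify against your installed version.

```python
import numpy as np

try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; the sketch below is illustrative only


def numpy_to_depth_frame(np_depth, fx, fy, ppx, ppy, depth_units=0.001):
    """Wrap a 2-D uint16 numpy depth map in a synthetic SDK frame.

    Untested sketch: attribute names follow the C++ software-device
    example and may need adjusting for your pyrealsense2 version.
    """
    h, w = np_depth.shape

    dev = rs.software_device()
    sensor = dev.add_sensor("Depth")

    # Intrinsics are placeholders for whatever your real stream reported.
    intr = rs.intrinsics()
    intr.width, intr.height = w, h
    intr.fx, intr.fy, intr.ppx, intr.ppy = fx, fy, ppx, ppy
    intr.model = rs.distortion.brown_conrady
    intr.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]

    vs = rs.video_stream()
    vs.type, vs.index, vs.uid = rs.stream.depth, 0, 0
    vs.width, vs.height, vs.fps, vs.bpp = w, h, 30, 2  # z16 = 2 bytes/px
    vs.fmt = rs.format.z16
    vs.intrinsics = intr
    profile = sensor.add_video_stream(vs)
    sensor.add_read_only_option(rs.option.depth_units, depth_units)

    queue = rs.frame_queue(1)
    sensor.open(profile)
    sensor.start(queue)

    f = rs.software_video_frame()
    f.pixels = np_depth        # assumption: accepts a buffer object
    f.stride = w * 2           # bytes per row for z16
    f.bpp = 2
    f.frame_number = 0
    f.timestamp = 0.0
    f.domain = rs.timestamp_domain.system_time
    f.profile = profile.as_video_stream_profile()
    sensor.on_video_frame(f)   # inject the buffer as a frame

    return queue.wait_for_frame()
```

If this works on your version, the returned frame should be accepted by `rs.pointcloud().calculate()`, which takes any `rs2::frame`.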
I am trying to extract a region of the depth map and convert it into a point cloud. To do so, I cropped the depth map to the ROI I wanted, which converted the depth frame into a numpy array. I now want to convert this numpy array back into a depth frame so that I can extract a point cloud from the cropped region. How can I do that? Here is the code (imports and the `device_product_line` lookup restored from the standard pyrealsense2 example this snippet follows):

```python
import numpy as np
import open3d as o3d
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
config = rs.config()

pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
found_rgb = False

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
if device_product_line == 'L500':
    pass  # L500-specific stream settings elided in the original post

profile = pipeline.start(config)
clipping_distance_in_meters = 1  # 1 meter
pcd_depth_object = o3d.geometry.PointCloud()
```
Here, depth_object is the cropped depth frame, which is in numpy array format. I need to convert it back into a RealSense depth frame in order to extract a point cloud from it.
Hi @sanjaiiv04 Converting numpy to rs2::frame remains an unsolved problem, and unfortunately no further advice is available.
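Since the conversion itself is unsupported, one workaround avoids it entirely: skip `rs.pointcloud()` and deproject the cropped numpy depth map to 3-D points directly, using the pinhole model that `rs2_deproject_pixel_to_point` applies when distortion is zero. This is a minimal sketch, not an official SDK method; the intrinsics (`fx`, `fy`, `ppx`, `ppy`) and depth scale below are hypothetical placeholders that would, in practice, come from the stream profile's `get_intrinsics()` and the depth sensor's `get_depth_scale()`.

```python
import numpy as np


def deproject_roi(depth_roi, depth_scale, fx, fy, ppx, ppy, x0=0, y0=0):
    """Convert a cropped z16 depth map (numpy) to an N x 3 point cloud.

    x0, y0 are the ROI's offset within the full image, so pixel
    coordinates still match the intrinsics of the original frame.
    """
    h, w = depth_roi.shape
    # Pixel coordinate grids in the *original* image's coordinate frame.
    u, v = np.meshgrid(np.arange(w) + x0, np.arange(h) + y0)
    z = depth_roi.astype(np.float32) * depth_scale   # raw units -> metres
    x = (u - ppx) * z / fx
    y = (v - ppy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop invalid (0) depth


# Tiny synthetic ROI; one pixel has 0 (invalid) depth and is dropped.
depth_roi = np.array([[1000, 0], [2000, 1500]], dtype=np.uint16)
pts = deproject_roi(depth_roi, depth_scale=0.001, fx=600.0, fy=600.0,
                    ppx=320.0, ppy=240.0, x0=200, y0=100)
# pts.shape == (3, 3): three valid pixels remain
```

The resulting N x 3 array can be handed straight to Open3D via `o3d.utility.Vector3dVector(pts)`, which matches the `pcd_depth_object` usage above.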
It's mind-blowing that 4 years later you haven't added support for this, unbelievable |
I am waiting for the answer in 2024...
Same. Following.
Issue Description
I have been working on pre-processing depth maps before converting them to point clouds. This involved converting depth_frame.get_data() into a numpy array and doing some processing (e.g., setting all pixels of no interest to NaN). I then wanted to use rs.pointcloud().calculate() to compute the point cloud with my altered depth map as input, but I am getting the following error:
```
TypeError: calculate(): incompatible function arguments. The following argument types are supported:
    1. (self: pyrealsense2.pyrealsense2.pointcloud, depth: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.points
```
I was wondering how I would go about converting my numpy depth map into a format that the function can accept.
Attached is the code of note.
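The attached code is not reproduced here. As a hedged sketch of one workaround (not an official SDK answer): a z16 depth frame is unsigned 16-bit and cannot represent NaN, so mark unwanted pixels with 0, which the SDK already treats as invalid depth. My understanding is that `np.asanyarray(depth_frame.get_data())` yields a writable view into the frame's own buffer, so in-place edits are then visible to `rs.pointcloud().calculate(depth_frame)` with no conversion needed; verify this against your pyrealsense2 version. The snippet below demonstrates only the masking step, on a synthetic array, with the pyrealsense2 calls left as comments.

```python
import numpy as np

# In the real pipeline this array would be a *view* into the frame buffer:
#   depth_image = np.asanyarray(depth_frame.get_data())
# Here a synthetic z16 depth map stands in for it.
depth_image = np.full((480, 640), 1000, dtype=np.uint16)  # 1000 raw units

# z16 has no NaN: mark "no data" with 0, which the SDK treats as invalid.
mask = np.zeros_like(depth_image, dtype=bool)
mask[100:300, 200:400] = True          # keep only this ROI (hypothetical)
depth_image[~mask] = 0

# If depth_image were the view above, depth_frame is now modified in place
# and can be passed to rs.pointcloud().calculate(depth_frame) as usual.
```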
