Converting numpy depth_map back into pyrealsense2 depth frame #5784

Open
LMurphy99 opened this issue Feb 4, 2020 · 18 comments

@LMurphy99

Required Info
Camera Model: D435i
Firmware Version: N/A
Operating System & Version: Windows 10
Kernel Version (Linux Only): N/A
Platform: PC
SDK Version: N/A
Language: Python
Segment: N/A

Issue Description

I have been working on pre-processing depth maps before converting them to point clouds. This involved converting depth_frame.get_data() into a numpy array and doing some processing (e.g., setting all pixels of no interest to NaN). I then wanted to use the rs.pointcloud().calculate() function to compute my point cloud, with my altered depth map as input. However, I'm getting the following error:

"TypeError: calculate(): incompatible function arguments. The following argument types are supported:
1. (self: pyrealsense2.pyrealsense2.pointcloud, depth: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.points"

I was wondering how I would go about converting my numpy depth map into a format that the function can accept.

Attached is the relevant code.
[Screenshot: Capture]
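A workaround often suggested in RealSense threads (hedged: behaviour reported by users, not an official API guarantee): the array returned by np.asanyarray(depth_frame.get_data()) is a view into the frame's own buffer, so editing it in place modifies the frame itself, and the original depth_frame can then be passed to calculate(). Note that z16 depth is uint16 and cannot hold NaN; 0 is the SDK's "no data" value. The helper name below is hypothetical.

```python
import numpy as np
# pyrealsense2 is only needed with a live camera; the masking itself is pure numpy
# import pyrealsense2 as rs

def mask_depth_in_place(depth_view, keep_mask):
    """Invalidate unwanted pixels directly in the depth buffer.

    `depth_view` is assumed to be np.asanyarray(depth_frame.get_data()),
    a view into the frame's memory (buffer protocol), so writes here are
    reported to alter the frame itself. z16 depth cannot hold NaN, so 0
    ("no data") is used to mark pixels of no interest instead.
    """
    depth_view[~keep_mask] = 0
    return depth_view

# Hypothetical live usage:
# depth = np.asanyarray(depth_frame.get_data())
# mask_depth_in_place(depth, depth < clipping_distance)
# points = rs.pointcloud().calculate(depth_frame)  # frame now carries the edits
```

Because no data is copied, this sidesteps the numpy-to-frame conversion entirely for masking-style pre-processing.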

@zapaishchykova

following

@rcroset

rcroset commented Apr 14, 2020

Interested in an answer too!

@r-ratajczak

Hi,
I am also trying to convert the numpy array back to BufData in order to reuse the modified data. I tried casting, memoryview, and creating new BufData objects, but without any success. Is there any way to recreate a BufData object? The environment returns an error that there is no defined constructor.

@RealSenseCustomerSupport
Collaborator


@LMurphy99 Sorry for the late response. We have submitted your requirement to the engineering team and will let you know of any updates. Thanks!

@jts3k

jts3k commented Aug 8, 2020

following

@adityashrm21

Are there any updates on this? Thanks!

@RealSenseSupport
Collaborator

@adityashrm21 Sorry, this enhancement is still being worked on. We will let you know of any updates. Thanks!

@tsaizhenling

following

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 22, 2021

Hi everyone,

The BufData object creation question has been answered by a RealSense team member in the link below.

#8394 (comment)

This case will remain open, though, due to the ongoing enhancement feature request associated with the original question.
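Since z16 frames are uint16, a float/NaN-bearing depth map has to be re-encoded into device units before any hand-off back to the SDK. The helper below does that conversion; the commented portion sketches injecting such a buffer through rs.software_device, based on librealsense's Python software-device example (field names may differ between SDK versions, so treat them as assumptions to verify against your pyrealsense2 build).

```python
import numpy as np

def to_z16(depth_m, depth_scale=0.001):
    """Re-encode a float depth map in metres into the z16 device encoding.

    NaN (pixels of no interest) becomes 0, the SDK's "invalid depth"
    value; values are rounded and clipped into the uint16 range.
    """
    dev_units = np.nan_to_num(depth_m, nan=0.0) / depth_scale
    return np.clip(np.rint(dev_units), 0, np.iinfo(np.uint16).max).astype(np.uint16)

# Hypothetical injection via a software device (names per the librealsense
# Python software-device example; verify against your SDK version):
# import pyrealsense2 as rs
# sw_dev = rs.software_device()
# depth_sensor = sw_dev.add_sensor("Depth")
# vs = rs.video_stream()
# vs.type, vs.fmt = rs.stream.depth, rs.format.z16
# vs.width, vs.height, vs.fps, vs.bpp = 640, 480, 30, 2
# vs.intrinsics = intrinsics_depth  # taken from the real camera's profile
# profile = depth_sensor.add_video_stream(vs)
```

The depth_scale default of 0.001 (1 mm per unit) matches most D400-series cameras; query depth_sensor.get_depth_scale() rather than assuming it.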

@kailkuhn

kailkuhn commented Mar 7, 2021

@MartyG-RealSense are there any updates to this? thanks

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 7, 2021

Hi @kailkuhn If you are asking about the official feature request for numpy depth map conversion, there are no updates to report.

In regard to exploration of numpy conversion by RealSense users, the link below may provide you with some further useful insights.

#2551

@DeepakChandra2612

Hi All,

Any updates on this feature to convert a numpy array back into a RealSense depth frame, so as to use the built-in functions of pyrealsense2?

Thanks

@MartyG-RealSense
Collaborator

Hi @DeepakChandra2612 There is no further information to report about the official feature request to Intel, though it remains active. I am also not aware of any further progress that RealSense users have made themselves on numpy to rs2::frame conversion beyond the #2551 link already mentioned.

@sanjaiiv04

I am trying to take a region of the depth map and convert it into a point cloud. To do so, I cropped the depth map to get the ROI I wanted, and in doing so the depth frame was converted into a numpy array. I want to convert this numpy array back into depth frame format so that I can extract a point cloud from the cropped region. How can I do that? Here is the code:

import pyrealsense2 as rs
import numpy as np
import cv2
import open3d as o3d
# `model` is assumed to be a pre-loaded segmentation model (e.g. YOLO)

pipeline = rs.pipeline()
config = rs.config()
colorizer = rs.colorizer()
dec_filter = rs.decimation_filter()   # Decimation - reduces depth frame density
spat_filter = rs.spatial_filter()     # Spatial - edge-preserving spatial smoothing
temp_filter = rs.temporal_filter()

pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)
pc = rs.pointcloud()
points = rs.points()
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale
intrinsics_depth = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
intrinsics_color = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

pcd_depth_object = o3d.geometry.PointCloud()
try:
    # Wait for a coherent pair of frames: depth and color
    frames = pipeline.wait_for_frames()
    colorized = colorizer.process(frames)

    # Align the RGB and depth frames
    align = rs.align(rs.stream.color)
    frames = align.process(frames)

    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    # Optionally apply post-processing filters:
    # depth_frame = dec_filter.process(depth_frame)
    # depth_frame = spat_filter.process(depth_frame)
    # depth_frame = temp_filter.process(depth_frame)

    # Convert images to numpy arrays
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())

    # Create a binary mask of the same size as the depth map for cropping
    mask_image = np.zeros_like(depth_image, dtype=np.uint8)

    # Object IDs for filtering the desired classes for detection
    desired_class_ids = [0]

    # Segmentation
    results = model.predict(source=color_image, stream=True)
    for person in results:
        indexes = [index for index, item in enumerate(person.boxes.cls) if item in desired_class_ids]
        mask = person.masks[indexes].xy  # mask coordinates for each detected object
        for i in range(len(mask)):  # iterate through each mask
            act_mask = mask[i].tolist()  # the mask data is a list of coordinates
            # Convert the mask data into a list of contours
            mask_contours = [np.array(act_mask, dtype=np.int32)]
            # Fill mask_image with white pixels inside the mask contours
            cv2.fillPoly(mask_image, mask_contours, color=255)
            # cv2.fillPoly(color_image, mask_contours, color=(0, 0, 255))

    # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.05), cv2.COLORMAP_JET)
    depth_object = cv2.bitwise_and(depth_image, depth_image, mask=mask_image)
finally:
    pipeline.stop()

Here, depth_object is the cropped depth map, which is in numpy array format. I need to convert it into the RealSense depth frame format to extract a point cloud from it.
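One way to avoid the conversion entirely is to deproject the masked numpy depth map yourself, replicating the SDK's pinhole (no-distortion) deprojection math. The sketch below assumes fx/fy/ppx/ppy come from the intrinsics_depth object obtained in the snippet above; the function name is hypothetical.

```python
import numpy as np

def depth_to_points(depth_z16, fx, fy, ppx, ppy, depth_scale=0.001):
    """Deproject a masked z16 depth map to an Nx3 point cloud in metres.

    Mirrors rs2_deproject_pixel_to_point for the no-distortion pinhole
    model; pixels with depth 0 (outside the mask) are dropped.
    """
    h, w = depth_z16.shape
    z = depth_z16.astype(np.float32) * depth_scale
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - ppx) / fx * z
    y = (v - ppy) / fy * z
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only valid (nonzero-depth) pixels

# Hypothetical usage with the snippet above:
# pts = depth_to_points(depth_object, intrinsics_depth.fx, intrinsics_depth.fy,
#                       intrinsics_depth.ppx, intrinsics_depth.ppy, depth_scale)
# pcd_depth_object.points = o3d.utility.Vector3dVector(pts)
```

Since bitwise_and already zeroes pixels outside the mask, this produces points only for the segmented region; note it ignores lens distortion, which the SDK deprojection would account for on models that use it.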

@MartyG-RealSense
Collaborator

Hi @sanjaiiv04 Converting numpy to rs2::frame remains an unsolved problem where there is no further advice available, unfortunately.

@charlielito


It's mind-blowing that 4 years later you haven't added support for this, unbelievable

@mxshen

mxshen commented Jun 18, 2024

I am waiting for the answer in 2024....

@Ali-AliAli-Ali

Ali-AliAli-Ali commented Nov 29, 2024

Same. Following.
