
Convert Realsense poincloud in Open3D pointcloud #473

Closed
baptiste-mnh opened this issue Jul 25, 2018 · 28 comments · Fixed by #622

@baptiste-mnh

baptiste-mnh commented Jul 25, 2018

Hi!
I'm currently experimenting with real-time alignment on a RealSense D400 using the Python wrapper.
I can successfully align my point clouds by saving them to PLY and then opening them with the 'read_point_cloud()' function.
But the open/write time is too long, and of course that's not the proper way to do it.
So I tried converting my RealSense point clouds to NumPy and then loading them into Open3D, but it looks like the NumPy formats don't match.

Here is my code :

    def get_pcd_stream(self, device_i):
        while True:
            dev = self._stream_devices[device_i]
            frames = dev.pipeline.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            color_frame = frames.get_color_frame()
            if not depth_frame or not color_frame:
                continue
            dev.pc.map_to(color_frame)
            points = dev.pc.calculate(depth_frame)

            pcd_load=read_point_cloud("/home/baptistemnh/Pictures/rs/seq0c0_rs_pcd__732612060188.pcd")
            xyz_load = np.asarray(pcd_load.points)

            vtx = np.asarray(points.get_vertices())
            tex = np.asarray(points.get_texture_coordinates())

Here is what I get from the debugger:

[screenshot: debugger view comparing vtx and xyz_load]
We can see that the format between vtx and xyz_load isn't the same.

What I tried

while True:
            dev = self._stream_devices[0]
            frames = dev.pipeline.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            color_frame = frames.get_color_frame()
            if not depth_frame or not color_frame:
                continue

            # Tell pointcloud object to map to this color frame
            dev.pc.map_to(color_frame)

            # Generate the pointcloud and texture mappings
            points = dev.pc.calculate(depth_frame)

            pcd_load = read_point_cloud("/home/baptiste-mnh/Pictures/rs/seq0c0_rs_pcd__732612060188.pcd")
            xyz_load = np.asarray(pcd_load.points)

            vtx = np.asarray(points.get_vertices())
            tex = np.asarray(points.get_texture_coordinates())

            pcd_convert = PointCloud()
            pcd_convert.points = Vector3dVector(vtx.tolist())
            draw_geometries([pcd_convert])

It works, but it's way too slow...
So is there an efficient way to use the RealSense point clouds with the Open3D library?

Thanks!

@syncle
Contributor

syncle commented Jul 25, 2018

How about making an RGBD image first and then making a point cloud from it? For example:

intrinsic = read_pinhole_camera_intrinsic("real_sense_intrinsic")
while True:
     depth_frame_open3d = Image(depth_frame)
     color_frame_open3d = Image(color_frame)
     rgbd_image = create_rgbd_image_from_color_and_depth(
          color_frame_open3d, depth_frame_open3d)
     pcd = create_point_cloud_from_rgbd_image(rgbd_image, intrinsic)
     draw_geometries([pcd])

This will remove the point cloud I/O. I actually haven't tested depth_frame_open3d = Image(depth_frame); it is just a quick suggestion. You may need to copy the raw frame buffers.
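For intuition, the create_point_cloud_from_rgbd_image step boils down to standard pinhole back-projection. Here is a NumPy sketch of that model (not Open3D's actual code; the uint16 millimetre depth and the 1000.0 scale factor are assumptions):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into an Nx3 point array in the camera frame.

    Pixels with zero depth are skipped, which is why the resulting point
    count can differ from width * height.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale   # raw units -> metres
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```

The real function additionally samples the color image per pixel; the geometry part is the same projective model.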

@baptiste-mnh
Author

baptiste-mnh commented Jul 25, 2018

Hi,
Yes, that's what I'm doing for now, but with this method there is a small offset between the point cloud and the texture (if I capture my face with a wall behind me, the very left of my face gets the color of the wall). This doesn't happen when I use the point cloud from the RealSense SDK.
Maybe there is a way to better handle the creation of the point cloud from depth and color using the Open3D lib?

@syncle
Contributor

syncle commented Jul 25, 2018

I see. How about using librealsense to transform the depth image into the color image domain? There is an interesting example for this: https://github.com/IntelRealSense/librealsense/tree/master/examples/align

@baptiste-mnh
Author

baptiste-mnh commented Jul 26, 2018

I'll try the equivalent with their Python wrapper, thanks!

I can't move the object: each time I call poll_events it resets the view point. Is there a way to save this view point and re-apply it?

With this code :

    def visualisation_thread(self):
        vis = Visualizer()
        vis.create_window('Aligned Column', width=1280, height=720)
        pointcloud = PointCloud()
        while True:
            vis.add_geometry(pointcloud)
            pointcloud.clear()
            print("polling")
            # Get the intrinsics of the realsense device
            frames_devices = self.deviceManager.poll_frames()
            intrinsics_devices = self.deviceManager.get_device_intrinsics(frames_devices)
            extrinsics_devices = self.deviceManager.get_depth_to_color_extrinsics(frames_devices)
            for serial, frameset in frames_devices.items():
                try:
                    assert (rs.stream.depth in frameset and rs.stream.color in frameset)
                    depth = frameset[rs.stream.depth]
                    color = frameset[rs.stream.color]

                    intrinsic = intrinsics_devices[serial][rs.stream.depth]
                    # filtered_depth_frame = post_process_depth_frame(depth, temporal_smooth_alpha=0.1,
                    #                                                 temporal_smooth_delta=80)
                    img_color = Image(np.array(color.get_data()))
                    img_depth = Image(np.array(depth.get_data()))
                    rgbd = create_rgbd_image_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False)
                    pinhole_camera_intrinsic = PinholeCameraIntrinsic(intrinsic.width, intrinsic.height, intrinsic.fx,
                                                                      intrinsic.fy, intrinsic.ppx, intrinsic.ppy)
                    pcd_down = voxel_down_sample(create_point_cloud_from_rgbd_image(rgbd, pinhole_camera_intrinsic), 0.01)
                    pointcloud += pcd_down
                except Exception as e:
                    print("Error in frameset work : {0}".format(e))
                    continue
            vis.update_geometry()
            vis.reset_view_point(False)
            vis.poll_events()
            vis.update_renderer()

@baptiste-mnh
Author

To test the most efficient way to show the point cloud from the RealSense in real time, I applied this solution and it works: https://github.com/erleben/pySoRo#adding-two-dimensional-data-protocols

So I just have to do :

from datetime import datetime
import pyrealsense2 as rs
import numpy as np
from open3d import *


# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
#  different resolutions of color and depth streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)

# Start streaming
profile = pipeline.start(config)


# Streaming loop
try:
    vis = Visualizer()
    vis.create_window("Tests")
    pcd = PointCloud()
    while True:
        dt0 = datetime.now()
        vis.add_geometry(pcd)
        pcd.clear()
        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        depth = frames.get_depth_frame()
        if not color or not depth:
            continue
        pc = rs.pointcloud()
        pc.map_to(color)
        points = pc.calculate(depth)
        vtx = np.asanyarray(points.get_vertices_2d())
        pcd.points = Vector3dVector(vtx)
        vis.update_geometry()
        vis.poll_events()
        vis.update_renderer()
        process_time = datetime.now() - dt0
        print("FPS = {0}".format(1/process_time.total_seconds()))

finally:
    pipeline.stop()

But it's still slow, due to the Vector3dVector(vtx) part.
So I'm closing the issue, since that problem is already covered in #403.

Thanks a lot for your help and your time!

@syncle
Contributor

syncle commented Jul 26, 2018

Thanks for sharing your valuable script with us. The remaining bottleneck is clearly #403.

@baptiste-mnh
Author

baptiste-mnh commented Jul 26, 2018

Those new functions to get vertices and texture coordinates in 2D are in the RealSense SDK development branch.

Do you have any clue why I can't rotate the point cloud in the viewer with my mouse? (The keyboard works.)

@syncle
Contributor

syncle commented Jul 26, 2018

Umm, it is because non-blocking rendering is not correctly linked to the mouse callback functions. I will have a look at how it can be fixed.

@syncle
Contributor

syncle commented Jul 26, 2018

Aha. I found a very simple solution. Please remove vis.reset_view_point(False). It resets the view point every iteration, and therefore ignores user mouse input.

@Fred3D-tech

It is obvious the remaining bottleneck is #403.

Is there a way to remove this bottleneck? It looks like it was working well and fast in previous versions of Open3D.

@baptiste-mnh
Author

@syncle I removed it and it still doesn't work.

@baptiste-mnh
Author

baptiste-mnh commented Jul 27, 2018

I resolved the mouse bug with this code, by adding the geometry to the viewer only once :) :

def visualisation_thread(self):
     geometrie_added = False
     vis = Visualizer()
     vis.create_window('Aligned Column', width=1280, height=720)
     pointcloud = PointCloud()
     while True:
         dt0 = datetime.now()
         pointcloud.clear()
         print("polling")
         frames_devices = self.deviceManager.poll_frames()
         intrinsics_devices = self.deviceManager.get_device_intrinsics(frames_devices)
         # extrinsics_devices = self.deviceManager.get_depth_to_color_extrinsics(frames_devices)
         for serial, frameset in frames_devices.items():
             try:
                 assert (rs.stream.depth in frameset and rs.stream.color in frameset)
                 depth = frameset[rs.stream.depth]
                 color = frameset[rs.stream.color]
                 # filtered_depth_frame = post_process_depth_frame(depth, temporal_smooth_alpha=0.1,
                 #                                                 temporal_smooth_delta=80)

                 # pcd =  self.get_pcd(depth, color)
                 pcd = self.get_pcd_from_open3D(depth, color, intrinsics_devices[serial][rs.stream.depth])

                 camera = self.cameraRig.get_camera(serial)
                 if camera:
                     pcd.transform(camera.transformation_matrix)
                 pointcloud += pcd
             except Exception as e:
                 print("Error in frameset work : {0}".format(e))
                 continue
         if not geometrie_added:
             vis.add_geometry(pointcloud)
             geometrie_added = True
         vis.update_geometry()
         vis.poll_events()
         # vis.update_renderer()
         process_time = datetime.now() - dt0
         fps = 1.0 / process_time.total_seconds()
         print("Fps : {0}".format(fps))

Is there a way to check whether I'm currently clicking on the window, so I could disable the refresh of the viewer? It's very laggy to move the point cloud while the window is refreshing.

For those who need the deviceManager in my code, I use this file : https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/box_dimensioner_multicam/realsense_device_manager.py

@Fred3D-tech

Fred3D-tech commented Jul 27, 2018

The problem is still the "bottleneck", which is in fact a complete jam. The inefficiency of Vector3dVector makes Open3D unusable for us. We will have to investigate other solutions.

@LLDavid

LLDavid commented Oct 19, 2018

> [quoted @baptiste-mnh's script from the comment above]

@baptiste-mnh Hi, I am trying the script from your comment with a RealSense D415. My script:

    vis = Visualizer()
    vis.create_window('PCD', width=1280, height=720)
    pointcloud = PointCloud()
    while True:
        vis.add_geometry(pointcloud)
        dt0=datetime.now()
        # Wait for the next set of frames from the camera
        frames_1 = pipeline_1.wait_for_frames()
        # Align depth and color frame
        depth_1 = frames_1.get_depth_frame()
        color_1 = frames_1.get_color_frame()
        if not depth_1 or not color_1: #or not depth_2 or not color_2:
            continue
        # Create RGBD
        color_raw =  Image(np.array(color_1.get_data()))
        depth_raw =  Image(np.array(depth_1.get_data()))
        rgbd_image = create_rgbd_image_from_color_and_depth(color_raw, depth_raw, convert_rgb_to_intensity=False)
        # Get intrinsic
        tt_1 = profile_1.get_stream(rs.stream.depth)
        intr_1 = tt_1.as_video_stream_profile().get_intrinsics()
        pinhole_camera_intrinsic = PinholeCameraIntrinsic(intr_1.width, intr_1.height, intr_1.fx,
                                                          intr_1.fy, intr_1.ppx, intr_1.ppy)
        # Create Point cloud from rgbd
        pcd_1 = create_point_cloud_from_rgbd_image(rgbd_image,pinhole_camera_intrinsic)
        pcd_1.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
        #draw_geometries([pcd_1])
        pointcloud += pcd_1
        vis.update_geometry()
        #vis.reset_view_point(False)
        vis.poll_events()
        vis.update_renderer()
        process_time = datetime.now() - dt0
        print("FPS: "+str(1/process_time.total_seconds()))
finally:
    pipeline_1.stop()

As expected, it is slow. But on my machine, the FPS actually keeps dropping:
[screenshot: console output showing the FPS decreasing over time]
It seems pretty good for the first few frames (10 FPS is what I expect). Do you know what causes the issue?
Thanks

@syncle
Contributor

syncle commented Oct 19, 2018

I looked at the code and noticed this part:

pointcloud = PointCloud()
while True:
    :
    vis.add_geometry(pointcloud)
    pointcloud += pcd_1
    :

It accumulates the point cloud into pointcloud on every frame, so pointcloud becomes very large.
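The growth is easy to see in isolation. Here is a sketch with a plain NumPy array standing in for the point cloud (the frame size of 1000 points is hypothetical, just to illustrate the accumulation):

```python
import numpy as np

frame = np.zeros((1000, 3))      # one frame's worth of points
accumulated = np.empty((0, 3))   # plays the role of `pointcloud`
sizes = []
for _ in range(5):
    # Equivalent of `pointcloud += pcd_1` inside the loop:
    accumulated = np.vstack([accumulated, frame])
    sizes.append(len(accumulated))
# The combined cloud grows linearly, so every redraw processes more points.
```

Clearing the cloud each frame (pointcloud.clear()) keeps the size constant instead.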

@LLDavid

LLDavid commented Oct 19, 2018

I changed it to

pointcloud = pcd_1

It is better, but the FPS is still dropping:
[screenshot: console output showing the FPS still decreasing]

@baptiste-mnh
Author

Hi,
I think the error comes from vis.add_geometry(pointcloud).
The geometry added to vis grows too big; that's why in my code I did this:

if not geometrie_added:
             vis.add_geometry(pointcloud)
             geometrie_added = True

So I add the pcd to vis only once (the pcd can then still be modified).

@LLDavid

LLDavid commented Oct 19, 2018

@baptiste-mnh Thanks! You are right. I simply added pointcloud.clear() at the start of the while loop. It seems the if is unnecessary.

@baptiste-mnh
Author

I did it this way because I got strange behavior when I added the geometry outside the while loop :)

@ljc19800331

Hey guys,

I have just finished the algorithm and it works. Thanks for the information.

@LLDavid I am using the D415 with your code, and my FPS at 1280x720 is around 10, and at 640x480 around 20. Did you get similar results? I think it is still a little bit slow. Thanks.

@syncle
Contributor

syncle commented Oct 19, 2018

Thanks for the active discussion!

I also tried the scripts by @baptiste-mnh and @LLDavid with a D415 camera. Overall, it looks great to me. I think @LLDavid's suggestion can work around the Vector3dVector issue that is the main concern in this thread.

@ljc19800331: I just profiled the script and found that create_rgbd_image_from_color_and_depth is lagging. It is probably because of the dynamic memory allocation in vector.push_back in these lines:
https://github.com/IntelVCL/Open3D/blob/ef042a6c50e54c4bb56ccb4657a745c9b38c1b7f/src/Core/Geometry/PointCloudFactory.cpp#L89-L91
Let me try to fix this and share updates. :)

@syncle
Contributor

syncle commented Oct 19, 2018

@ljc19800331 After switching to static allocation, I was able to get a slight gain: 30 FPS at 640x480 and 15 FPS at 1280x720 on my Mac.
@baptiste-mnh and @LLDavid: Let me make a PR to add this script in example/Python/. This would be great for Open3D users. :)

@LLDavid

LLDavid commented Oct 20, 2018

Glad to be helpful
@ljc19800331 I guess it is related to your CPU. Mine is an i7-8750H, and the FPS is 30 on average.

@ljc19800331

ljc19800331 commented Oct 20, 2018

@LLDavid Yes, it may be, since my laptop's CPU is an i5. Thanks for your answer.

I tested the code with an SR300 and found the following:

Convert:
depth_1 = frames_1.get_depth_frame()
color_1 = frames_1.get_color_frame()

To:
frames = pipe.wait_for_frames()
align_to = rs.stream.color
align = rs.align(align_to)
aligned_frames = align.process(frames)
color = aligned_frames.get_color_frame()
depth = aligned_frames.get_depth_frame()
color_raw = open3d.Image(np.asanyarray(color.get_data()))
depth_raw = open3d.Image(np.asanyarray(depth.get_data()))

may be better. I didn't see any problem with the old code on the D415, but found some color-alignment problems on the SR300.

Also, I am wondering if there is a better solution for this problem, since I have seen real-time point cloud visualization with an earlier version of the Python RealSense wrapper in https://github.com/toinsson/pyrealsense/blob/master/examples/show_vtk.py. That is another demo of using the real-time point cloud. However, I tried multiple times and failed to get the data quickly enough to feed into the vtk object.

Hello @syncle, thanks for your answer. I think Open3D has its own way of manipulating the RGBD object and visualization, so it seems different from the real-time point cloud I saw in realsense-viewer and the example shown here: https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud. Is it because the rendering method is different? Thanks.

@syncle
Contributor

syncle commented Oct 20, 2018

@ljc19800331: Open3D is built independently, so it might have some overhead in the binding functions. We are providing the RealSense Python example for users' convenience. Feel free to point out any issues you find :)

@Ayush2009

I did the same thing as stated. My depth and color frames are 640x360 images, but when I converted them into an Open3D point cloud, the number of points is more than 640x360. Can someone tell me how to access the specific point in the Open3D point cloud that corresponds to a specific pixel in the color image used to create it?

Here is my code:

pcd = PointCloud()
while True:
    dt0 = datetime.now()
    pcd.clear()

    # Get frameset of color and depth
    frames = pipeline.wait_for_frames()
    # frames.get_depth_frame() is a 640x360 depth image

    # Align the depth frame to color frame
    aligned_frames = align.process(frames)

    # Get aligned frames
    aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
    color_frame = aligned_frames.get_color_frame()

    # Validate that both frames are valid
    if not aligned_depth_frame or not color_frame:
        continue

    img_depth = Image(np.asanyarray(aligned_depth_frame.get_data()))
    img_color = Image(np.asanyarray(color_frame.get_data()))
    rgbd = create_rgbd_image_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False)
    pinhole_camera_intrinsic = PinholeCameraIntrinsic(intrinsic.width, intrinsic.height, intrinsic.fx, intrinsic.fy, intrinsic.ppx, intrinsic.ppy)
    pcd = create_point_cloud_from_rgbd_image(rgbd, pinhole_camera_intrinsic)
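One possible answer to the question above: if the point cloud was generated by back-projecting the depth image in row-major order while skipping zero-depth pixels (which would also explain the point count differing from width * height), then a pixel's index in the point list is the number of valid pixels before it. A sketch under that assumption (not verified against Open3D's source):

```python
import numpy as np

def pixel_to_point_index(depth, u, v):
    """Index into the point list for pixel (u, v), or -1 if its depth is zero.

    Assumes points were generated in row-major pixel order, skipping
    zero-depth pixels.
    """
    valid = depth.ravel() > 0
    flat = v * depth.shape[1] + u
    if not valid[flat]:
        return -1
    # Number of valid pixels before this one == index in the point list.
    return int(np.count_nonzero(valid[:flat]))
```

For example, with a depth image whose first pixel is zero, the pixel at (u=1, v=0) maps to point index 0.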

@richipower

richipower commented Jul 3, 2019

hi @baptiste-mnh, I just tried your example with the new Open3D (0.7) and pyrealsense2 (2.24) versions available through pip. I changed the line
vtx = np.asanyarray(points.get_vertices_2d()) to vtx = np.asanyarray(points.get_vertices()), because the 2D variant is not available anymore. In the end I receive vtx as a void96 structured type in which all the (x, y, z) points are stored. Have you found a fast way to build pcd.points via Vector3dVector? Passing the void array directly does not work.

My solution is:

vtx = np.asanyarray(points.get_vertices())
pts = [(pt[0], pt[1], pt[2]) for pt in vtx]
pcd.points = Vector3dVector(pts)

But of course that is not efficient; I am getting FPS = 0.95. Does anyone have an idea?
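For what it's worth, a structured float32 buffer like this can usually be reinterpreted without a Python-level loop. A sketch, assuming the records are packed (x, y, z) float32 as get_vertices() returns:

```python
import numpy as np

def vertices_to_xyz(vtx):
    """Reinterpret a structured (x, y, z) float32 record array as an Nx3 array.

    This is a buffer view plus reshape, so it avoids the per-point Python
    loop that makes the list-comprehension approach slow.
    """
    return np.asanyarray(vtx).view(np.float32).reshape(-1, 3)
```

The result could then be handed to Vector3dVector directly, e.g. pcd.points = Vector3dVector(vertices_to_xyz(vtx)) (untested against this exact pyrealsense2 version).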

I modified my version, but I found that if I just assign pointcloud = pcd, the cloud does not update. I needed to explicitly copy the points (pointcloud.points = pcd.points) and colors. Does anyone know why update_geometry and update_renderer do not update the cloud otherwise?

my script looks like:

from datetime import datetime
import pyrealsense2 as rs
import numpy as np
import cv2
from open3d import *

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)
align = rs.align(rs.stream.color)

vis = Visualizer()
vis.create_window('PCD', width=1280, height=720)
pointcloud = PointCloud()
geom_added = False
    
while True:
    dt0=datetime.now()
    frames = pipeline.wait_for_frames()
    frames = align.process(frames)
    profile = frames.get_profile()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    if not depth_frame or not color_frame:
        continue
    
    # Convert images to numpy arrays
    depth_image = np.asanyarray(depth_frame.get_data())
    color_image = np.asanyarray(color_frame.get_data())
    
    img_depth = Image(depth_image)
    img_color = Image(color_image)
    rgbd = create_rgbd_image_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False)
    
    intrinsics = profile.as_video_stream_profile().get_intrinsics()
    pinhole_camera_intrinsic = PinholeCameraIntrinsic(intrinsics.width, intrinsics.height, intrinsics.fx, intrinsics.fy, intrinsics.ppx, intrinsics.ppy)
    pcd = create_point_cloud_from_rgbd_image(rgbd, pinhole_camera_intrinsic)
    pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
    pointcloud.points = pcd.points
    pointcloud.colors = pcd.colors
    
    if geom_added == False:
        vis.add_geometry(pointcloud)
        geom_added = True
    
    vis.update_geometry()
    vis.poll_events()
    vis.update_renderer()
    
    cv2.imshow('bgr', color_image)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break

    process_time = datetime.now() - dt0
    print("FPS: "+str(1/process_time.total_seconds()))
    
    

pipeline.stop()
cv2.destroyAllWindows()
vis.destroy_window()
del vis



thanks in advance!

@Chris45215

Considering how useful this information is, and the fact that it has been referenced so many times, can the code for this be added to the official Open3D examples? What is the best way to make that request/suggestion?

Otherwise, the examples only show how to save a point cloud from the RealSense and then display it later, and the code for that is substantially different. And the examples Intel provides only produce a point cloud with the 'colorizer', which gives false-color images rather than the real camera color, and again is a very different workflow.
