Convert Realsense pointcloud to Open3D pointcloud #473
How about making an RGBD image first and then creating a point cloud from it? This would remove the point cloud I/O. I actually haven't tested it.
Hi,
I see. How about using librealsense to transform the depth image into the color image domain? There is an interesting example for this: https://github.com/IntelRealSense/librealsense/tree/master/examples/align
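For reference, the Python wrapper exposes the same alignment as that C++ example through `rs.align`. A sketch (the helper name is mine, and it needs a connected camera to actually run, which is why the `pyrealsense2` import lives inside the function):

```python
def get_aligned_frames(pipeline):
    """Wait for one frameset and align the depth frame to the color stream.

    Requires a connected RealSense camera; the import is local so the
    module can be loaded without one.
    """
    import pyrealsense2 as rs
    align = rs.align(rs.stream.color)   # align depth into the color domain
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth = aligned.get_depth_frame()   # now registered to color pixels
    color = aligned.get_color_frame()
    return depth, color
```

In a real loop you would create the `rs.align` object once outside the loop rather than per frame, since constructing it is not free.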
I'll try to use the equivalent with their Python wrapper, thanks! I can't move the object: each time I call poll_events it resets the viewpoint. Is there a way to save this viewpoint and re-apply it? With this code:

```python
def visualisation_thread(self):
    vis = Visualizer()
    vis.create_window('Aligned Column', width=1280, height=720)
    pointcloud = PointCloud()
    while True:
        vis.add_geometry(pointcloud)
        pointcloud.clear()
        print("polling")
        # Get the intrinsics of the realsense device
        frames_devices = self.deviceManager.poll_frames()
        intrinsics_devices = self.deviceManager.get_device_intrinsics(frames_devices)
        extrinsics_devices = self.deviceManager.get_depth_to_color_extrinsics(frames_devices)
        for serial, frameset in frames_devices.items():
            try:
                assert (rs.stream.depth in frameset and rs.stream.color in frameset)
                depth = frameset[rs.stream.depth]
                color = frameset[rs.stream.color]
                intrinsic = intrinsics_devices[serial][rs.stream.depth]
                # filtered_depth_frame = post_process_depth_frame(depth, temporal_smooth_alpha=0.1,
                #                                                 temporal_smooth_delta=80)
                img_color = Image(np.array(color.get_data()))
                img_depth = Image(np.array(depth.get_data()))
                rgbd = create_rgbd_image_from_color_and_depth(img_color, img_depth,
                                                              convert_rgb_to_intensity=False)
                pinhole_camera_intrinsic = PinholeCameraIntrinsic(intrinsic.width, intrinsic.height,
                                                                  intrinsic.fx, intrinsic.fy,
                                                                  intrinsic.ppx, intrinsic.ppy)
                pcd_down = voxel_down_sample(
                    create_point_cloud_from_rgbd_image(rgbd, pinhole_camera_intrinsic), 0.01)
                pointcloud += pcd_down
            except Exception as e:
                print("Error in frameset work: {0}".format(e))
                continue
        vis.update_geometry()
        vis.reset_view_point(False)
        vis.poll_events()
        vis.update_renderer()
```
To find the most efficient way to show the point cloud from the Realsense in real time, I applied this solution and it works: https://github.com/erleben/pySoRo#adding-two-dimensional-data-protocols So I just have to do:

```python
from datetime import datetime
import pyrealsense2 as rs
import numpy as np
from open3d import *

# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
# different resolutions of color and depth streams
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)

# Start streaming
profile = pipeline.start(config)

# Streaming loop
try:
    vis = Visualizer()
    vis.create_window("Tests")
    pcd = PointCloud()
    while True:
        dt0 = datetime.now()
        vis.add_geometry(pcd)
        pcd.clear()
        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        depth = frames.get_depth_frame()
        if not color or not depth:
            continue
        pc = rs.pointcloud()
        pc.map_to(color)
        points = pc.calculate(depth)
        # get_vertices_2d() comes from the SDK development branch
        vtx = np.asanyarray(points.get_vertices_2d())
        pcd.points = Vector3dVector(vtx)
        vis.update_geometry()
        vis.poll_events()
        vis.update_renderer()
        process_time = datetime.now() - dt0
        print("FPS = {0}".format(1 / process_time.total_seconds()))
finally:
    pipeline.stop()
```

But it is still slow due to the Vector3dVector(vtx) part. Thanks a lot for your help and your time!
Thanks for sharing your valuable script with us. It is obvious the remaining bottleneck is #403.
Those new functions to get vertices and texture coordinates in 2D are in the Realsense SDK development branch. Do you have any clue why I can't rotate the point cloud in the viewer with my mouse? (The keyboard is working.)
Hmm, it is because non-blocking rendering is not correctly linked to the mouse callback functions. I will have a look at how it can be fixed.
Aha, I found a very simple solution. Please remove
Is there a way to remove this bottleneck? It looks like it worked well and fast in previous versions of Open3D.
@syncle I removed it and it still doesn't work.
I resolved the mouse bug with this code, by adding the geometry to the viewer only once :)

```python
def visualisation_thread(self):
    geometrie_added = False
    vis = Visualizer()
    vis.create_window('Aligned Column', width=1280, height=720)
    pointcloud = PointCloud()
    while True:
        dt0 = datetime.now()
        pointcloud.clear()
        print("polling")
        frames_devices = self.deviceManager.poll_frames()
        intrinsics_devices = self.deviceManager.get_device_intrinsics(frames_devices)
        # extrinsics_devices = self.deviceManager.get_depth_to_color_extrinsics(frames_devices)
        for serial, frameset in frames_devices.items():
            try:
                assert (rs.stream.depth in frameset and rs.stream.color in frameset)
                depth = frameset[rs.stream.depth]
                color = frameset[rs.stream.color]
                # filtered_depth_frame = post_process_depth_frame(depth, temporal_smooth_alpha=0.1,
                #                                                 temporal_smooth_delta=80)
                # pcd = self.get_pcd(depth, color)
                pcd = self.get_pcd_from_open3D(depth, color, intrinsics_devices[serial][rs.stream.depth])
                camera = self.cameraRig.get_camera(serial)
                if camera:
                    pcd.transform(camera.transformation_matrix)
                pointcloud += pcd
            except Exception as e:
                print("Error in frameset work: {0}".format(e))
                continue
        if not geometrie_added:
            vis.add_geometry(pointcloud)
            geometrie_added = True
        vis.update_geometry()
        vis.poll_events()
        # vis.update_renderer()
        process_time = datetime.now() - dt0
        fps = 1.0 / process_time.total_seconds()
        print("Fps: {0}".format(fps))
```

Is there a way to check whether I'm currently clicking on the window, so I could disable the refresh of the viewer? It is very laggy to move the point cloud while the window is refreshing. For those who need the deviceManager in my code, I use this file: https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/box_dimensioner_multicam/realsense_device_manager.py
The problem is still the "bottleneck", which is in fact a complete jam. The inefficiency of Vector3dVector makes Open3D unusable for us. We will have to investigate other solutions.
@baptiste-mnh Hi, I am trying the script from your comment with a RealSense D415. As expected, it is slow, but on my machine the FPS actually keeps dropping over time.
I looked at the code and noticed the part that accumulates the point cloud.
Hi, so I add the pcd to the vis variable only once (the pcd can then be modified).
@baptiste-mnh Thanks! You are right. I simply add
I did this in my code because I had strange behavior when I was adding the geometry outside the while loop :)
Hey guys, I have just finished the algorithm and it works. Thanks for the information. @LLDavid I am using the D415 with your code, and my FPS is around 10 at 1280x720 and around 20 at 640x480. Did you get similar results? I think it is still a little bit slow. Thanks.
Thanks for the active discussion! I also tried the scripts by @baptiste-mnh and @LLDavid with a D415 camera. Overall, it looks great to me. I think @LLDavid's suggestion can work around it. @ljc19800331: I just profiled the script and found
@ljc19800331 After static assignment, I was able to get a slight gain: 30 FPS at 640x480 and 15 FPS at 1280x720 on my Mac.
Glad to be helpful
@LLDavid Yes, it may be, since my laptop's CPU is an i5. Thanks for your answer. I am testing the code with the SR300 and found that converting … to … may be better. I didn't see any problem on the D415 with the old code, but I found some problems with the color alignment on the SR300. Also, I am wondering if there is a better solution for this problem, since I have seen real-time point cloud visualization code using an earlier version of the Python RealSense wrapper: https://github.com/toinsson/pyrealsense/blob/master/examples/show_vtk.py. This is another demo of real-time point cloud use. However, I tried multiple times but failed to get the data quickly enough to feed into the vtk object.

Hello @syncle, thanks for your answer. I think Open3D has its own way of manipulating the RGBD object and its visualization, so it seems different from the real-time point cloud I saw in realsense-viewer and in the example shown here: https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud. Is it because the rendering method is different? Thanks.
@ljc19800331: Open3D is built independently, so it might have some overhead in the binding functions. We provide the RealSense Python example for user convenience. Feel free to point out any issue you find :)
I did the same thing as stated. My depth and color frames are 640x360 images, but when I convert them into an Open3D point cloud, the number of points is more than 640x360. Can someone tell me how to access the specific point in the Open3D point cloud that corresponds to a specific pixel in the color image used to create it? Here is my code:

```python
pcd = PointCloud()
```
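If the conversion keeps exactly one point per pixel, the points are laid out in row-major order and the mapping is plain index arithmetic. A sketch, with a helper name of my own choosing; note that Open3D's RGBD-to-pointcloud conversion drops pixels with invalid depth, in which case a validity mask must be tracked instead:

```python
import numpy as np

def pixel_to_index(u, v, width):
    """Row-major mapping from pixel (u = column, v = row) to point index.

    Only valid if the point cloud kept one point per pixel of the image.
    """
    return v * width + u

# Demo on a synthetic "point cloud" with one point per pixel of a 4x3 image.
width, height = 4, 3
points = np.arange(width * height * 3, dtype=float).reshape(-1, 3)

idx = pixel_to_index(u=2, v=1, width=width)
print(idx)          # 6
print(points[idx])  # the 3D point for pixel (2, 1)
```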
Hi @baptiste-mnh, I just tried your example with the new Open3D (0.7) and pyrealsense2 (2.24) versions available through pip. I changed the line … My solution is … but of course it is not efficient; I am getting FPS = 0.95. Does anyone have an idea? I modified my version, but I found that if I leave the … My script looks like:
Considering how useful this information is, and how many times it has been referenced, could the code for this be added to the official Open3D examples? What is the best way to make that request? Otherwise, the examples only show how to save a point cloud from the Realsense and display it later, and that code is substantially different. And the examples Intel provides only give a point cloud via the 'colorizer', which produces false-color images rather than the real camera color, and again is a very different workflow.
Hi!
I'm currently trying to experiment with real-time alignment using a Realsense D400 and the Python wrapper.
I successfully align my point clouds by saving them as PLY files and then opening them with the 'read_point_cloud()' function.
But the open/write time is too long, and of course this is not the proper way to do it.
So I tried to convert my Realsense point clouds to numpy and then load them into Open3D, but it looks like it's not the same numpy format.
Here is my code:
Here is what I get from the debugger:
We can see that the format of vtx and xyz_load isn't the same.
What I tried:
It works, but it's way too long...
So, is there an efficient way to use Realsense point clouds in the Open3D library?
Thanks!
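On the format mismatch described above: `points.get_vertices()` returns a numpy structured array with three float32 fields, while Open3D's Vector3dVector expects a plain (N, 3) array. Reinterpreting the buffer with a view is far faster than converting element by element. A sketch; the structured dtype below mimics what the wrapper returns, so no camera is needed to run it:

```python
import numpy as np

# Mimic the structured array returned by pyrealsense2's points.get_vertices():
# N records of three float32 fields.
vtx = np.zeros(5, dtype=[('f0', '<f4'), ('f1', '<f4'), ('f2', '<f4')])
vtx['f0'] = 1.0  # fill the x field with dummy values

# Fast path: reinterpret the underlying buffer instead of copying per element.
xyz = np.asanyarray(vtx).view(np.float32).reshape(-1, 3)
print(xyz.shape)  # (5, 3)

# Open3D then accepts it directly, e.g.:
# pcd.points = Vector3dVector(xyz)
```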