
3D reconstruction using 4 realsense cameras #11941

Closed
Kishor-Ramesh opened this issue Jun 26, 2023 · 35 comments

Comments

@Kishor-Ramesh


Required Info
- Camera Model: D400
- Firmware Version: 2.54
- Operating System & Version: Windows 11
- Kernel Version (Linux only): n/a
- Platform: PC
- SDK Version: 2.0
- Language: Python

Issue Description

Hi, I am working with depth cameras and want to perform 3D reconstruction using multiple RealSense cameras. Does the SDK offer any algorithms for reconstruction? If not, do you have any suggestions or a pipeline for achieving this objective?
Thanks!

@MartyG-RealSense
Collaborator

Hi @Kishor-Ramesh The RealSense SDK does not have built-in support for multiple-camera 3D reconstruction.

If you are able to use commercial software, then the RealSense-compatible 3D scanning tool RecFusion Pro supports multiple RealSense cameras, covering the 400 Series models D415, D435, D435i and D455.

https://www.recfusion.net/products/

Some non-commercial options for multiple camera scanning are discussed at #9832 (comment)

@Kishor-Ramesh
Author

Kishor-Ramesh commented Jun 28, 2023

Hi @MartyG-RealSense, apologies for the delayed response.

I have a question on calibration. My setup uses 4 cameras on my study table. I place a checkerboard pattern in the center, capture an image of it from each camera and perform calibration, but the process is a bit time consuming. Can you suggest any improvements, or an alternate calibration procedure for obtaining the intrinsic and extrinsic parameters? Thank you.

@MartyG-RealSense
Collaborator

The Python example box_dimensioner_multicam.py automatically calibrates together the positions of multiple cameras placed around an empty checkerboard when the program is run. After auto-calibration is completed, the program asks the user to place an object on the checkerboard so that its volume can be measured.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam

@Kishor-Ramesh
Author

Kishor-Ramesh commented Jun 28, 2023

@MartyG-RealSense
so in this example, there is no need to move the pattern or the cameras around (the conventional approach) to capture different images, correct?

@MartyG-RealSense
Collaborator

Correct, the cameras and checkerboard can be in fixed positions and only the object placed on the checkerboard is moved.

@Kishor-Ramesh
Author

@MartyG-RealSense
Based on my search, the pipeline to achieve this is: capture aligned images from each camera, convert them to point clouds, then stitch and register the clouds.
This is currently the approach I am going with. If you think any changes should be made, please suggest them; and if my approach is right and you have any supporting links, that would be really appreciated. Thank you.
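As a sketch of the first step in that pipeline (aligned depth frames → per-camera point clouds), the math below is the standard pinhole deprojection that the SDK's rs2_deproject_pixel_to_point performs per pixel for an undistorted depth stream; the intrinsics values (fx, fy, ppx, ppy) are placeholders you would read from the camera's stream profile, so treat this as an illustration rather than a drop-in replacement for the SDK call.

```python
import numpy as np

def deproject_depth(depth_m, fx, fy, ppx, ppy):
    """Pinhole deprojection of a depth image (meters) into an N x 3
    point cloud -- the same math rs2_deproject_pixel_to_point applies
    per pixel for an undistorted depth stream."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - ppx) * depth_m / fx
    y = (v - ppy) * depth_m / fy
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop zero-depth (invalid) pixels
```

With pyrealsense2 you would read fx, fy, ppx and ppy from `depth_frame.profile.as_video_stream_profile().get_intrinsics()` after aligning depth to color with `rs.align(rs.stream.color)`.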

@MartyG-RealSense
Collaborator

Your described approach is correct and commonly used.

In Open3D, generating multiple point clouds and stitching them together is handled by ICP registration, as discussed at #9590
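For intuition about what ICP registration is doing, here is a minimal NumPy sketch of the rigid-alignment solve (Kabsch/SVD) at the core of each point-to-point ICP iteration; Open3D's registration_icp repeats this after finding correspondences. This is an illustration of the math, not a substitute for Open3D.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch/SVD solve: the rigid (R, t) minimizing ||R @ p + t - q||
    over corresponding N x 3 point sets src and dst -- the inner step
    of each point-to-point ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Full ICP alternates this solve with nearest-neighbor correspondence search until the alignment error stops improving.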

@Kishor-Ramesh
Author

Thank you @MartyG-RealSense
I really appreciate the quick response; you've been really helpful. Looking forward to seeing the results of my work.

@Kishor-Ramesh
Author

hi @MartyG-RealSense,
The point cloud I am getting has its axes flipped about the optical axis.

This is the RealSense Viewer:
[screenshot: RealSense Viewer point cloud]

And this is the point cloud I have:

[screenshot: flipped point cloud]

@MartyG-RealSense
Collaborator

Is the flipped image generated with Open3D please?

@Kishor-Ramesh
Author

Not Open3D; that's the opencv_pointcloud_viewer.py example from the SDK.

@MartyG-RealSense
Collaborator

opencv_pointcloud_viewer.py has controls for rotating the pointcloud by holding the left mouse button and dragging the mouse. Are you able to achieve an image that resembles the RealSense Viewer one if you rotate the cloud with the mouse?

@Kishor-Ramesh
Author

@MartyG-RealSense I can obtain an image that resembles the Viewer one, but even then the coordinates are different, right?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 2, 2023

The Python point cloud will be closer to the one in the Viewer if all post-processing filters are disabled in the Viewer. The Viewer applies a range of filters by default, whereas a script written by a RealSense user has no filters or colorization settings applied unless the user deliberately programs them into the script.

@MartyG-RealSense
Collaborator

Hi @Kishor-Ramesh Do you require further assistance with this case, please? Thanks!

@Kishor-Ramesh
Author

Hi @MartyG-RealSense, I am still working on the project and experimenting.

@MartyG-RealSense
Collaborator

Okay, thanks very much for the update!

@MartyG-RealSense
Collaborator

If you are using pyrealsense2, then a method called an affine transform can be used to set all the point clouds to the same position and rotation in 3D space. The RealSense SDK function rs2_transform_point_to_point applies an affine transform. Information about this function in regards to Python can be found at #5583 (comment)
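As a sketch of what that means in practice: once calibration gives you a rotation R and translation t for a camera, every point from that camera can be mapped into the common frame. rs2_transform_point_to_point does this for one point at a time; the NumPy version below vectorizes it over a whole cloud, and the cloud/transform names are illustrative.

```python
import numpy as np

def transform_cloud(points, R, t):
    """Apply an affine (rigid) transform to an N x 3 point cloud:
    p' = R @ p + t for every point -- the whole-cloud equivalent of
    calling rs2_transform_point_to_point per point."""
    return points @ np.asarray(R).T + np.asarray(t)

# Hypothetical usage: merge camera 1's cloud into camera 0's frame,
# assuming (R_10, t_10) maps camera-1 coordinates to camera-0 coordinates:
#   merged = np.vstack([cloud0, transform_cloud(cloud1, R_10, t_10)])
```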

@MartyG-RealSense
Collaborator

Yes, set all pointclouds to the same rotation and position.

@MartyG-RealSense
Collaborator

#8333 may be a helpful reference about affine transforms. It explains the procedure better than I can, as my knowledge of affine transforms is admittedly limited.

Another method of multiple camera pointcloud construction that could be considered is to do it with ROS using Intel's guide for ROS1 at the link below.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

@MartyG-RealSense
Collaborator

Thanks very much for the update, @Kishor-Ramesh - good luck!

@Kishor-Ramesh
Author

@MartyG-RealSense what does the following code do?

depth_to_color_extrinsics = depth_frame.profile.get_extrinsics_to(color_frame.profile)

Does it give the extrinsic parameters of the camera, or the translation vector between the stereo and RGB sensors?

@MartyG-RealSense
Collaborator

It is retrieving the extrinsics (translation and rotation) between the stereo depth sensor and the RGB color sensor.
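A small sketch of how those extrinsics are typically consumed: pyrealsense2's rs2_extrinsics object stores the rotation as 9 floats in column-major order plus a 3-element translation in meters, which can be packed into a 4x4 homogeneous matrix for use with the rest of a NumPy pipeline. The helper name below is illustrative.

```python
import numpy as np

def extrinsics_to_matrix(rotation, translation):
    """Pack the rotation (9 floats, column-major, as rs2_extrinsics
    stores it) and translation (3 floats, meters) returned by
    get_extrinsics_to() into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(rotation).reshape(3, 3).T  # column-major -> row-major
    T[:3, 3] = translation
    return T
```

With a real camera you would pass `depth_to_color_extrinsics.rotation` and `depth_to_color_extrinsics.translation` into this helper.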

@MartyG-RealSense
Collaborator

ArUco is certainly a valid method of camera calibration. You can find OpenCV and Python information resources for ArUco by googling for the search term aruco opencv python

There is an ArUco tutorial for OpenCV with Python code here:

https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/

@MartyG-RealSense
Collaborator

Hi @Kishor-Ramesh Do you require further assistance with this case, please? Thanks!

@Kishor-Ramesh
Author

@MartyG-RealSense Nothing from my side for now. Thanks for the support. In case I do need any assistance, I can reopen the issue.

@MartyG-RealSense
Collaborator

Thanks very much @Kishor-Ramesh for the update!

@Kishor-Ramesh Kishor-Ramesh reopened this Aug 4, 2023
@Kishor-Ramesh
Author

Kishor-Ramesh commented Aug 4, 2023

@MartyG-RealSense In a point cloud, if I have an object placed on the table, is there a way for me to extract only the object? I have seen segmentation functions in PCL, but I want to know if there are any other user-friendly segmentation options that can help achieve this objective of mine.

@MartyG-RealSense
Collaborator

If the camera is mounted a fixed distance above the table, pointing downwards, then you could set a maximum depth sensing distance with the post-processing Threshold Filter so that the camera ignores the tabletop surface (which is farther away) but captures the object on top of the table. Information about using Python to set up and apply the filter can be found at #5964 (comment)
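On the SDK side this is rs.threshold_filter with its min/max distance options applied to the depth frame; purely for illustration, the equivalent operation on a depth image already converted to meters looks like this (the distance values are example numbers, not recommendations):

```python
import numpy as np

def threshold_depth(depth_m, min_d=0.3, max_d=1.0):
    """Zero out depth readings outside [min_d, max_d] meters --
    the same effect as the SDK's threshold (min/max distance) filter,
    applied to a depth image in meters."""
    out = depth_m.copy()
    out[(out < min_d) | (out > max_d)] = 0.0
    return out
```

Choosing max_d just below the camera-to-tabletop distance keeps the object's points while discarding the table's.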

@Kishor-Ramesh
Author

@MartyG-RealSense The camera is placed on the table, but its view is not perpendicular to the table; it's resting on the small tripod that comes in the RealSense box.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 5, 2023

If the table had a black surface (either a black top or a black cover placed upon it), then the camera would not be able to read the tabletop but could depth-sense the object, so long as the object is not dark grey or black.

The multiple-camera RealSense Python example box_dimensioner_multicam.py, which can support 4 cameras, could also help here: it generates a combined point cloud of a box placed upon a chessboard image on the tabletop, using data from 4 cameras arranged around the chessboard, and identifies the box while ignoring the chessboard.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam

[screenshot: box_dimensioner_multicam result]

@MartyG-RealSense
Collaborator

Hi @Kishor-Ramesh Do you require further assistance with this case, please? Thanks!

@Kishor-Ramesh
Author

Hi @MartyG-RealSense. I was able to get a complete reconstructed model. Thanks for all the support!

@MartyG-RealSense
Collaborator

That's excellent news. Thanks very much for the update about your success!

@kejifuli9988

Hello, could you please share your code? I have been working on 3D reconstruction recently.
