3D reconstruction using 4 realsense cameras #11941
Hi @Kishor-Ryouta The RealSense SDK does not have built-in support for multiple-camera 3D reconstruction. If you are able to use commercial software, the RealSense-compatible 3D scanning tool RecFusion Pro supports multiple RealSense cameras, including the 400 Series models D415, D435, D435i and D455. https://www.recfusion.net/products/ Some non-commercial options for multiple-camera scanning are discussed at #9832 (comment)
Hi @MartyG-RealSense, apologies for the delayed response. I have a question on calibration. My setup is 4 cameras on my study table. I place a checkerboard pattern in the center, capture that image from each camera and perform calibration, but the process is a bit time-consuming. Can you provide any suggestions / improvements, or an alternate calibration procedure for obtaining the intrinsic and extrinsic parameters? Thank you.
The Python example box_dimensioner_multicam.py automatically calibrates together the positions of multiple cameras placed around an empty checkerboard when the program is run. After auto-calibration completes, the program asks the user to place an object on the checkerboard so that its volume can be measured.
@MartyG-RealSense |
Correct, the cameras and checkerboard can be in fixed positions and only the object placed on the checkerboard is moved. |
@MartyG-RealSense |
Your described approach is correct and commonly used. In Open3D, generating multiple pointclouds and stitching them together is described as ICP registration, as discussed at #9590
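For background on what ICP registration does: it alternates between matching nearest-neighbour point pairs and solving for the rigid transform that best aligns them. In Open3D the whole loop is `o3d.pipelines.registration.registration_icp`; the inner alignment step (with correspondences held fixed) has a closed-form SVD solution, sketched here in plain NumPy purely as an illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch method).

    src, dst: Nx3 corresponding points.
    Returns (R, t) such that R @ src[i] + t ~= dst[i].
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Full ICP wraps this in a loop: find nearest neighbours between the two clouds, solve for (R, t), apply it, and repeat until the alignment error stops improving.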
Thank you @MartyG-RealSense |
Hi @MartyG-RealSense, and this is the point cloud I have
Is the flipped image generated with Open3D please? |
Not Open3D, that's the 'pointcloud_viewer.py' example from the SDK.
opencv_pointcloud_viewer.py has controls for rotating the pointcloud by holding the left mouse button and dragging the mouse. Are you able to achieve an image that resembles the RealSense Viewer one if you rotate the cloud with the mouse? |
@MartyG-RealSense I can obtain an image that resembles the Viewer's, but even then the coordinates are different, right?
The Python pointcloud will be closer to the one in the Viewer if all post-processing filters are disabled in the Viewer. The Viewer applies a range of filters by default, whilst a script written by a RealSense user will have no filters or colorization settings applied unless the user deliberately programs them into the script.
Hi @Kishor-Ryouta Do you require further assistance with this case, please? Thanks! |
hi! @MartyG-RealSense i am still working on the project and experimenting. |
Okay, thanks very much for the update! |
If you are using pyrealsense2 then a method called an affine transform can be used to set all the pointclouds to the same position and rotation in 3D space. An instruction called rs2_transform_point_to_point in the RealSense SDK is an affine transform. Information about this function in regards to Python can be found at #5583 (comment) |
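In practice an affine transform for this purpose is usually carried around as a 4x4 homogeneous matrix built from a rotation and a translation. A minimal NumPy sketch of that bookkeeping (the function names here are illustrative, not pyrealsense2 API):

```python
import numpy as np

def make_affine(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_affine(T, points):
    """Apply a 4x4 affine transform to an Nx3 pointcloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # Nx4 homogeneous
    return (homog @ T.T)[:, :3]
```

With one transform `T_i` per camera (from calibration), the merged cloud is simply the concatenation `np.vstack([apply_affine(T_i, cloud_i) for each camera])`, which puts every pointcloud into the same position and rotation in 3D space.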
Yes, set all pointclouds to the same rotation and position. |
#8333 may be a helpful reference about affine transform. It can explain about the procedure better than I can as my knowledge of affine transform is admittedly limited. Another method of multiple camera pointcloud construction that could be considered is to do it with ROS using Intel's guide for ROS1 at the link below. https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/ |
Thanks very much for the update, @Kishor-Ryouta - good luck! |
@MartyG-RealSense what does the following code do? depth_to_color_extrinsics = depth_frame.profile.get_extrinsics_to(color_frame.profile) Does it give the extrinsic parameters of the camera, or the translation vector between the stereo and RGB cameras?
It is retrieving the extrinsics (translation and rotation) between the stereo depth sensor and the RGB color sensor. |
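For reference, the returned extrinsics object carries a 9-element rotation list (column-major, as librealsense stores it) and a 3-element translation in metres, and rs2_transform_point_to_point applies them as rotation-then-translation. A standalone sketch of that arithmetic, written from my reading of the SDK source, so verify against your SDK version:

```python
def transform_point(rotation, translation, point):
    """Map a 3D point from one sensor's frame to another's.

    rotation:    9-element column-major rotation list (as pyrealsense2
                 reports extrinsics.rotation)
    translation: 3-element translation list in metres
    point:       [x, y, z] in the source sensor's frame
    """
    r, t, p = rotation, translation, point
    return [
        r[0] * p[0] + r[3] * p[1] + r[6] * p[2] + t[0],
        r[1] * p[0] + r[4] * p[1] + r[7] * p[2] + t[1],
        r[2] * p[0] + r[5] * p[1] + r[8] * p[2] + t[2],
    ]
```

So for the depth-to-color case the translation component is the physical offset between the stereo depth sensor and the RGB sensor, and the rotation corrects for any small angular misalignment between them.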
ArUco is certainly a valid method of camera calibration. You can find OpenCV and Python information resources for ArUco by googling for the search term 'aruco opencv python'. There is an ArUco tutorial for OpenCV with Python code here: https://www.learnopencv.com/augmented-reality-using-aruco-markers-in-opencv-c-python/
Hi @Kishor-Ryouta Do you require further assistance with this case, please? Thanks! |
@MartyG-RealSense nothing from my side for now. thanks for the support. In case i do need any assistance i can reopen the issue. |
Thanks very much @Kishor-Ryouta for the update! |
@MartyG-RealSense In a point cloud, if I have an object placed on the table, is there a way for me to extract only the object? I have seen segmentation functions in PCL, but I want to know if there are any other user-friendly segmentation options that can help to achieve this objective of mine.
If the camera is mounted a fixed distance above the table, pointing downwards, then you could set a minimum depth sensing distance with the post-processing Threshold Filter so that the camera ignores the tabletop surface but captures the object on top of the table. Information about using Python to set up and apply the filter can be found at #5964 (comment)
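The effect of the Threshold Filter on a depth image can be sketched in plain NumPy; this is only an illustration of what the filter does to the data, since the real rs.threshold_filter operates on rs.depth_frame objects inside the SDK pipeline rather than on arrays.

```python
import numpy as np

def threshold_depth(depth_m, min_dist, max_dist):
    """Zero out depth values outside [min_dist, max_dist] (metres),
    mimicking the SDK's Threshold post-processing filter.

    depth_m: 2D array of depth values in metres (0 = no data).
    Returns a new array; the input is left untouched.
    """
    out = depth_m.copy()
    out[(out < min_dist) | (out > max_dist)] = 0.0
    return out
```

Pixels zeroed this way are treated as "no depth", so a tabletop nearer than min_dist (or a background wall beyond max_dist) simply disappears from the pointcloud while the object in between survives.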
@MartyG-RealSense The camera is placed on the table, but its view is not perpendicular to the table; it's resting on the small tripod that comes in the RealSense box.
If the table had a black surface (either a black top or a black cover placed upon it), then the camera would not be able to read the table top but could depth-sense the object, so long as the object is not dark grey or black. The multiple-camera RealSense Python example box_dimensioner_multicam.py, which can support 4 cameras, could also generate a combined pointcloud of a box placed on a chessboard image on the tabletop, using data from 4 cameras arranged around the chessboard, and identify the box while ignoring the chessboard.
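Another user-friendly route, if you bring the cloud into Open3D, is pcd.segment_plane, which RANSAC-fits the dominant plane (the tabletop) so you can keep only the outliers (the object). The underlying idea can be sketched in NumPy; this small standalone RANSAC is illustrative, not the Open3D implementation.

```python
import numpy as np

def segment_dominant_plane(points, dist_thresh=0.01, iters=200, seed=0):
    """RANSAC-fit the dominant plane in an Nx3 cloud.

    Returns a boolean inlier mask (e.g. the tabletop points);
    points[~mask] is then everything off the plane, i.e. the object.
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Fit a candidate plane through 3 random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        # Keep the plane explaining the most points within the threshold.
        dist = np.abs((points - p0) @ normal)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

This works from any viewing angle, so it suits your tilted tripod setup: unlike the black-surface trick, the plane is removed geometrically rather than by relying on the material's reflectivity.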
Hi @Kishor-Ryouta Do you require further assistance with this case, please? Thanks! |
Hi @MartyG-RealSense . I was able to get a complete reconstructed model. thanks for all the support! |
That's excellent news. Thanks very much for the update about your success! |
Hello, could you please share your code? I have been working on 3D reconstruction recently |
Issue Description
Hi, I am working with depth cameras and want to perform 3D reconstruction using multiple RealSense cameras. Does the SDK offer any algorithms to perform reconstruction? If not, any suggestions or a pipeline on how to proceed to achieve this objective?
thanks!