Extracting point clouds from multiple cameras #12082
Comments
Hi @Nimaro76 RGB color cannot be synced by hardware synchronization, but the RealSense SDK has its own mechanisms for syncing depth and color. When the depth and color streams run at the same FPS, sync between them should take place automatically, though this only syncs the depth and color of that particular individual camera. You can help ensure that the depth and RGB FPS remain the same by setting auto-exposure to true and setting an RGB option called Auto-Exposure Priority to false. When auto-exposure is true and Auto-Exposure Priority is false, the SDK will try to enforce a constant FPS for both streams.

Hardware sync is not a necessity when generating point clouds from multiple cameras, though. For example, the link below is a guide to doing so in ROS without sync, combining the separate clouds from each individual camera on the same computer into a single merged cloud.

https://github.com/IntelRealSense/realsense-ros/wiki/Showcase-of-using-2-cameras

This merging of point clouds can also be done with cameras that are attached to different computers instead of all being attached to the same computer.

https://github.com/IntelRealSense/realsense-ros/wiki/showcase-of-using-3-cameras-in-2-machines
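As a minimal sketch (not an official SDK sample), those two options can be set through librealsense's C++ API using RS2_OPTION_ENABLE_AUTO_EXPOSURE and RS2_OPTION_AUTO_EXPOSURE_PRIORITY on the RGB sensor; the supports() check is a precaution, since not every sensor exposes the priority option:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Start streaming with the default configuration
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Find the RGB sensor on the active device and set the two options
    for (rs2::sensor sensor : profile.get_device().query_sensors())
    {
        if (sensor.is<rs2::color_sensor>())
        {
            // Auto-exposure on...
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1.f);

            // ...and Auto-Exposure Priority off, so the SDK
            // tries to hold the RGB FPS constant
            if (sensor.supports(RS2_OPTION_AUTO_EXPOSURE_PRIORITY))
                sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_PRIORITY, 0.f);
        }
    }
    return 0;
}
```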
Thank you for your reply.
If you have a Windows computer and are able to make use of commercial software, then the RealSense-compatible scanning tool RecFusion Pro could do that, as the Pro version supports multicam. An open-source C++ alternative would be to use the RealSense SDK's rs-kinfu example. Instead of combining point clouds from multiple cameras, you move a single camera around the scene and progressively build up a point cloud with 'frame fusion'.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu
Unfortunately, it is not in the nature of my project to use third-party software to perform this task. There are two necessary features that the algorithm should have:

With these two key features required, do we need to do hardware synchronization, or is software synchronization still a possible choice?
Hardware sync is not a necessity for point cloud merging, as demonstrated by the ROS example and by RecFusion Pro. If 4 cameras are going to be active simultaneously, though, then you will need a computer specification that can cope with the amount of computing resources that 4 cameras attached to the same machine will consume. For that number of cameras, a computing device with a CPU equivalent to an Intel i7 would be recommended.

If Global Time is true (it is true by default on 400 Series cameras), then the SDK will attempt to match up multiple cameras to a common timestamp, as described at #3909
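To illustrate the software-side approach, below is a rough sketch along the lines of the SDK's rs-multicam example (grouping frames by timestamp proximity is an assumption on my part, not something the SDK prescribes). It starts one pipeline per connected camera and prints each depth frame's timestamp, which, with Global Time enabled, is mapped to a common host clock across cameras:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>
#include <vector>

int main()
{
    rs2::context ctx;
    std::vector<rs2::pipeline> pipelines;

    // Start one pipeline per connected camera, identified by serial number
    for (rs2::device dev : ctx.query_devices())
    {
        rs2::config cfg;
        cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
        rs2::pipeline pipe(ctx);
        pipe.start(cfg);
        pipelines.push_back(pipe);
    }

    // Poll each camera; with Global Time enabled, the timestamps from
    // all cameras share the host clock, so frames from different
    // cameras can be matched by nearest timestamp
    while (true)
    {
        for (rs2::pipeline& pipe : pipelines)
        {
            rs2::frameset fs;
            if (pipe.poll_for_frames(&fs))
            {
                rs2::depth_frame depth = fs.get_depth_frame();
                std::cout << depth.get_timestamp() << " ms\n";
            }
        }
    }
}
```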
Hi @Nimaro76 Do you require further assistance with this case, please? Thanks!
Thank you, I will get in touch if I have any issues during implementation. Kind regards
You are very welcome, @Nimaro76 - thanks very much for the update. Please do feel free to contact us if you have further issues. Good luck!
Issue Description
Hello,
I am currently working on synchronizing multiple cameras to obtain point clouds from each of them while they operate simultaneously. To achieve this, I require both depth and color images from each camera. My question is whether hardware synchronization is necessary for this task, or whether it can be accomplished through software synchronization.
I came across the multicam example, but it renders each frameset whole, without the depth and color frames being held in separate variables. I plan to try separating them soon to assess the impact, as sketched below. I have also reviewed previous discussions, but they did not resolve my uncertainty about which synchronization method to choose.
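For illustration, here is a minimal single-camera sketch of what that separation might look like, using the SDK's standard rs2::pointcloud helper to compute a color-textured cloud (adapted from the pattern of the rs-pointcloud example):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::pointcloud pc;  // helper that converts depth frames to 3D points

    while (true)
    {
        // A frameset bundles all streams; its members can be
        // pulled out into separate depth and color variables
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();
        rs2::video_frame color = frames.get_color_frame();

        // Map the color stream onto the cloud, then compute the vertices
        pc.map_to(color);
        rs2::points points = pc.calculate(depth);
    }
}
```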
I would greatly appreciate your assistance and advice on this matter.
Thank you for your time and consideration.
Kind regards,
Nima Roshandel