Multi overlapping cameras mapping coordinates into same plane. #2664

Closed
inders opened this issue Nov 3, 2018 · 7 comments

inders commented Nov 3, 2018



Required Info
Camera Model: D400
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Linux (Ubuntu 14/16/17)
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: NVIDIA Jetson
SDK Version: 2
Language: C#
Segment: Robot

Issue Description


Background - I have a setup in which I have placed D415 cameras in series in a line, with some overlap between any two adjacent cameras.

Question - I need a way to transform the coordinates from each camera into real-world coordinates, i.e. into a common frame, so that a point (x1, y1, z1) in the real world from Camera1 is the same as the corresponding point (x2, y2, z2) from Camera2 in the overlapping region of Camera1 and Camera2.
Any suggestions/pointers on the theory, and on how to do it with pointers to the code, would be very helpful and highly appreciated.

Would using the following function help achieve this? (A rough sketch follows the list below.)

  1. Calibrate camera1's and camera2's extrinsics relative to each other from a common plane in the real world (how?)
  2. For a camera1 point (x, y, z) and the extrinsics of camera1 calculated in [1], get from_point
  3. For a camera2 point (x, y, z) and the extrinsics of camera2 calculated in [1], get from_point
  4. Both points should be the same in the real world.
    static void rs2_transform_point_to_point(float to_point[3], const struct rs2_extrinsics * extrin, const float from_point[3])
    librealsense/rsutil.h at 5e73f7b · IntelRealSense/librealsense · GitHub
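
For reference, a minimal sketch of steps 2-4, assuming step 1 has already produced, for each camera, an `rs2_extrinsics` that maps that camera's frame into the shared real-world frame (the calibration itself is not something the SDK does for you). The extrinsics values and input points below are placeholders:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <cstdio>

int main()
{
    // Placeholder identity extrinsics; replace with the camera->world
    // transforms produced by your own calibration (step 1).
    rs2_extrinsics cam1_to_world = { { 1, 0, 0, 0, 1, 0, 0, 0, 1 }, { 0, 0, 0 } };
    rs2_extrinsics cam2_to_world = { { 1, 0, 0, 0, 1, 0, 0, 0, 1 }, { 0, 0, 0 } };

    // Example values: the same physical feature, measured by each camera
    // in its own coordinate frame (metres).
    float p_cam1[3] = {  0.10f, 0.20f, 1.50f };
    float p_cam2[3] = { -0.30f, 0.20f, 1.55f };

    float p1_world[3], p2_world[3];
    rs2_transform_point_to_point(p1_world, &cam1_to_world, p_cam1); // step 2
    rs2_transform_point_to_point(p2_world, &cam2_to_world, p_cam2); // step 3

    // Step 4: with a good calibration, p1_world and p2_world should agree
    // (up to depth noise) for points in the overlap region.
    std::printf("cam1 -> world: %.3f %.3f %.3f\n", p1_world[0], p1_world[1], p1_world[2]);
    std::printf("cam2 -> world: %.3f %.3f %.3f\n", p2_world[0], p2_world[1], p2_world[2]);
    return 0;
}
```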

inders commented Nov 3, 2018

Additionally, to get the extrinsics of the two cameras using Vicalib, how do I set the world reference frame to a common point on the plane, rather than to one of the cameras as is done by the tool here - https://github.com/arpg/Documentation/tree/master/Calibration

When done, Vicalib produces the file cameras.xml. This contains the fields width and height for the image size, params for the intrinsic calibration parameters, and T_wc with the transformation from the world to the camera. The world reference frame is set to the first camera, so that the pose of the other cameras is given relative to the first one.
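
The underlying math for moving the world reference frame off the first camera is just a composition of rigid transforms. A sketch, assuming you can separately obtain a world -> camera1 transform (e.g. by measuring a target placed at your chosen origin), so that Vicalib's camera1 -> camera_i extrinsics can be chained into world -> camera_i; the `compose` helper below is hypothetical, and the from/to conventions must be checked against how your tool reports T_wc:

```cpp
#include <librealsense2/rs.hpp>

// Hypothetical helper: compose two rs2_extrinsics so that out(p) = b(a(p)),
// i.e. apply 'a' first, then 'b'. rs2_extrinsics stores a column-major 3x3
// rotation and a translation, and transforms points as p' = R * p + t.
rs2_extrinsics compose(const rs2_extrinsics& b, const rs2_extrinsics& a)
{
    rs2_extrinsics out{};
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
        {
            float sum = 0.f;
            for (int k = 0; k < 3; ++k)
                sum += b.rotation[k * 3 + r] * a.rotation[c * 3 + k]; // (Rb*Ra)[r][c]
            out.rotation[c * 3 + r] = sum;
        }
    for (int r = 0; r < 3; ++r)
    {
        out.translation[r] = b.translation[r];
        for (int k = 0; k < 3; ++k)
            out.translation[r] += b.rotation[k * 3 + r] * a.translation[k]; // Rb*ta + tb
    }
    return out;
}

// Usage sketch: if 'world_to_cam1' anchors camera1 to your chosen origin and
// 'cam1_to_cam2' comes from Vicalib, then
//   rs2_extrinsics world_to_cam2 = compose(cam1_to_cam2, world_to_cam1);
// expresses camera2 against the same real-world origin.
```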


dorodnic commented Nov 5, 2018

Hi @inders

It's a good question that we unfortunately don't have a good answer to yet. We did experiments on finding the relative pose using Open3D and opencv_contrib/rgbd, but nothing is yet reliable enough to publish / recommend. Using Vicalib for this seems to me like a great idea.

As for the more technical aspect of the question, you can project 2D pixels to 3D points using the methods in rsutil.h or using the pointcloud processing block (as shown in the pointcloud example). Once you have points in XYZ format, it's just a matter of multiplying them by the extrinsics matrix.
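
A minimal sketch of that flow using the rsutil.h helpers; the camera-to-world extrinsics here is an assumption that must come from your own calibration step (Vicalib or similar), not from the SDK:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>

// Map pixel (u, v) with depth 'depth_m' (metres, e.g. from
// rs2::depth_frame::get_distance) into a shared world frame.
// 'intrin' can be read from the stream profile, e.g.
// profile.as<rs2::video_stream_profile>().get_intrinsics().
void pixel_to_world(const rs2_intrinsics& intrin,
                    const rs2_extrinsics& cam_to_world, // from your calibration
                    float u, float v, float depth_m,
                    float world_point[3])
{
    const float pixel[2] = { u, v };
    float cam_point[3];

    // 2D pixel + depth -> 3D point in this camera's own frame.
    rs2_deproject_pixel_to_point(cam_point, &intrin, pixel, depth_m);

    // Camera frame -> shared world frame via the extrinsics.
    rs2_transform_point_to_point(world_point, &cam_to_world, cam_point);
}
```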

Also, there is some potentially useful information in the following white-paper:
Intel® RealSense™ Depth Module D400 Series Custom Calibration


inders commented Nov 6, 2018

Hi @dorodnic, thanks for responding. I looked through the paper you recommended and I have the following follow-up questions:

  1. In this approach the camera is moving constantly and the target is fixed. In this case, what is the reference point, i.e. the origin, against which the camera is getting calibrated?

  2. Since RealSense provides depth data, I should be OK with one camera as well, to reduce the scope of the problem. I should be able to project the image pixel coordinates back into the real world against an origin, let's say a corner of my room. How can I find the camera matrix given some 6 real-world points measured against a known origin and the corresponding 6 image points, using the RealSense SDK? Can you please point me in the right direction?
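
Not an official answer, but one standard way to do exactly this is OpenCV's solvePnP, which recovers the camera pose relative to your chosen origin from 3D world points and their matching image pixels. A sketch that builds the pinhole camera matrix from the RealSense intrinsics; the point lists are assumed to be filled in by you:

```cpp
#include <librealsense2/rs.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Recover the world->camera pose from N >= 4 (e.g. 6) correspondences:
// 'world_pts' measured against your chosen origin (e.g. a room corner, metres)
// and 'image_pts' the matching pixel coordinates in the same image.
void pose_from_correspondences(const rs2_intrinsics& intrin,
                               const std::vector<cv::Point3f>& world_pts,
                               const std::vector<cv::Point2f>& image_pts,
                               cv::Mat& R, cv::Mat& t)
{
    // Pinhole camera matrix from the RealSense intrinsics
    // (intrin can be read from the stream profile's get_intrinsics()).
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
                 intrin.fx, 0.0,       intrin.ppx,
                 0.0,       intrin.fy, intrin.ppy,
                 0.0,       0.0,       1.0);

    cv::Mat dist = cv::Mat::zeros(1, 5, CV_64F); // or intrin.coeffs if non-zero

    cv::Mat rvec, tvec;
    cv::solvePnP(world_pts, image_pts, K, dist, rvec, tvec);

    cv::Rodrigues(rvec, R); // 3x3 rotation, world -> camera
    t = tvec;               // translation, world -> camera: p_cam = R * p_world + t
}
```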


inders commented Nov 8, 2018

@dorodnic With Vicalib, can I calibrate cam1 and cam2, which have overlapping areas, so that the extrinsics keep the Z axis intact in the camera matrix?

@RealSense-Customer-Engineering

[Realsense Customer Engineering Team Comment]
Ticket being closed due to inactivity for 30+ days


dk67604 commented Oct 10, 2019

How did you calculate the extrinsic parameters for cam 1 and cam 2?

@Mohsen007

Hi @inders
I have a big problem with this. The SVD algorithm and OpenCV's estimateAffine3D are not precise at all.
Could you please let me know if you found any way...
I'm using a D435 with C#.
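
For anyone landing here: cv::estimateAffine3D fits a general 12-parameter affine transform, which has more freedom than the rigid rotation + translation relating two cameras, and that extra freedom can hurt accuracy. Below is a sketch of a rigid (Kabsch/Umeyama-style) SVD fit over matched 3D point pairs; it is written in C++/OpenCV rather than C#, and it assumes the pairs are already matched and expressed in metres:

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Rigid fit: find R, t minimizing ||R*src[i] + t - dst[i]||^2 over matched
// point pairs (src from camera1, dst from camera2). Assumes
// src.size() == dst.size() >= 3 and the pairs are not all collinear.
void rigid_fit(const std::vector<cv::Point3d>& src,
               const std::vector<cv::Point3d>& dst,
               cv::Mat& R, cv::Mat& t)
{
    const int n = static_cast<int>(src.size());
    cv::Point3d cs(0, 0, 0), cd(0, 0, 0);
    for (int i = 0; i < n; ++i) { cs += src[i]; cd += dst[i]; }
    cs *= 1.0 / n;  cd *= 1.0 / n;                // centroids

    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);     // cross-covariance
    for (int i = 0; i < n; ++i)
    {
        cv::Mat a = (cv::Mat_<double>(3, 1) << src[i].x - cs.x, src[i].y - cs.y, src[i].z - cs.z);
        cv::Mat b = (cv::Mat_<double>(3, 1) << dst[i].x - cd.x, dst[i].y - cd.y, dst[i].z - cd.z);
        H += b * a.t();
    }

    cv::Mat w, u, vt;
    cv::SVD::compute(H, w, u, vt);
    R = u * vt;
    if (cv::determinant(R) < 0)                   // correct a possible reflection
    {
        for (int r = 0; r < 3; ++r) u.at<double>(r, 2) *= -1.0;
        R = u * vt;
    }
    cv::Mat c_s = (cv::Mat_<double>(3, 1) << cs.x, cs.y, cs.z);
    cv::Mat c_d = (cv::Mat_<double>(3, 1) << cd.x, cd.y, cd.z);
    t = c_d - R * c_s;                            // dst ≈ R * src + t
}
```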
