
Calibration with camera in hand #145

Closed
aashish-tud opened this issue Aug 23, 2022 · 2 comments

@aashish-tud

Hello,

I have a 3D camera mounted on the tip of a 6-DOF manipulator. I want to calibrate the transform between the camera frame and the robot's tip. To achieve this, my plan is:

  • Place a checkerboard pattern on the table, in front of the robot.
  • Move the camera to 4-6 different poses in which the checkerboard is detected.
  • In optimization.yaml, set the frame for the manipulator group to the tip of the manipulator, and the frame for the camera to camera_frame.

However, after reading the "intended" use case in chains.h and Andy's question, it appears that the checkerboard should be attached to the end of the manipulator. Is this correct?
If so, is it still possible to calibrate the transform between the camera and the robot tip with the robot_calibration package when the camera is mounted on the robot?

@peterdavidfagan

peterdavidfagan commented May 1, 2023

@aashish-tud I am addressing a similar setup in the following issue and will post details once complete.

@mikeferguson
Owner

mikeferguson commented Nov 12, 2024

This is possible to do - the one "tricky" part is that you have to tweak the calibration steps just a bit - I created an example using my UBR-1 robot:

  • Capture pretty much as usual:
    • A checkerboard finder with the correct topics. I set "camera_sensor_name" to "gripper_camera" - these observations come from the checkerboard finder and will be in the camera optical TF frame. I set "chain_sensor_name" to "fixed" to denote that the checkerboard itself is fixed to the base link.
    • An "arm" chain (the camera is at the end of it, but robot_calibration doesn't care about that during capture).
  • For calibration:
    • I created a "gripper_camera" model of type "camera3d" - this will re-project the detected corners through the arm chain and the camera model into 3D space.
    • I created a "fixed" model of type "chain3d" - this is pretty much a dummy model, since the base link is the same as the "frame" parameter, but robot_calibration/KDL doesn't care and this works just fine.
    • Finally, there is the usual "chain3d_to_chain3d" error block - "gripper_camera_to_checkerboard" is what I call it in the config.
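Pieced together, the capture and calibration configs described above might look roughly like the sketch below. This is only an illustration reconstructed from this comment - the exact keys, topic names, and frame names (e.g. "wrist_roll_link", the point cloud topic) are assumptions and should be checked against the example configs shipped with robot_calibration:

```yaml
# capture config (sketch) - a checkerboard finder whose observations are in
# the camera optical frame, paired with a "fixed" chain sensor.
features:
  checkerboard_finder:
    type: robot_calibration::CheckerboardFinder
    topic: /gripper_camera/depth_registered/points   # assumed topic name
    camera_sensor_name: gripper_camera
    chain_sensor_name: fixed

# calibration config (sketch)
models:
  arm:
    type: chain3d
    frame: wrist_roll_link                 # assumed tip link name
  gripper_camera:
    type: camera3d
    frame: gripper_camera_optical_frame    # assumed optical frame name
  fixed:
    type: chain3d
    frame: torso_lift_link                 # same as the base link: a dummy chain
error_blocks:
  gripper_camera_to_checkerboard:
    type: chain3d_to_chain3d
    model_a: gripper_camera
    model_b: fixed
free_frames:
  checkerboard:                            # the free frame the optimizer solves for
    x: true
    y: true
    z: true
    roll: true
    pitch: true
    yaw: true
```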

This setup will then, for each set of observations:

  • Project the detected corner points through the arm kinematics and the camera projection model into 3D space.
  • Project the known corner points (basically, the actual locations of the points in the fictional "checkerboard" frame) through the dummy transformation from "torso_fixed_frame" to "torso_fixed_frame", and then use the free frame parameters to transform from "torso_lift_link" into the "checkerboard" frame.
  • Compare the two transformed sets of points to determine the re-projection error, giving the optimizer the ability to adjust the free parameters to minimize the errors.
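The core of the error block above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not robot_calibration's actual code: the "observed" points are fabricated from a known ground-truth board pose (in the real pipeline they come from the arm kinematics and camera model), and the optimizer recovers the checkerboard free-frame parameters by minimizing the point-to-point residuals:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Known corner locations in the fictional "checkerboard" frame:
# a 4x3 grid with 25 mm squares, all in the z = 0 plane.
nx, ny, square = 4, 3, 0.025
board_pts = np.array([[i * square, j * square, 0.0]
                      for j in range(ny) for i in range(nx)])

# Simulated "observed" corners in the base frame, fabricated from a
# ground-truth board pose that the optimizer does not know about.
true_rot = Rotation.from_rotvec([0.0, 0.0, 0.3])   # yaw of 0.3 rad
true_t = np.array([0.6, -0.1, 0.02])
observed = true_rot.apply(board_pts) + true_t

def residuals(params):
    # params = [x, y, z, rx, ry, rz]: the free frame from base to checkerboard.
    t, rvec = params[:3], params[3:]
    predicted = Rotation.from_rotvec(rvec).apply(board_pts) + t
    return (predicted - observed).ravel()

# Start from a complete unknown (all zeros) and let the optimizer
# find the checkerboard pose.
result = least_squares(residuals, x0=np.zeros(6))
```

After convergence, `result.x[:3]` holds the recovered translation and `result.x[3:]` the rotation vector, mirroring how the free-frame offsets can be checked with a static transform publisher after calibration.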

In the example, I'm not actually calibrating the robot - I'm just letting the optimizer find the location of the checkerboard from a completely unknown initial estimate. I ran this last week with ~10 robot poses and found that the checkerboard was correctly located (confirmed by using tf2_ros/static_transform_publisher to display the checkerboard free-frame offsets after calibration finished).
