
You should add a figure of your setup so that other people can better understand what you want to do. If I have understood correctly, you are doing the following:

• move the gripper to point to a chessboard corner
• get the corner location in the image (will be img_pts)
• get the gripper position (will be in objectPoints)
• repeat for 8 corners
• compute solvePnP

This is not correct for hand-eye calibration:

• solvePnP should be used to compute the transformation between the camera frame and the object frame
• in your case, objectPoints are expressed in some reference frame
• but for each corner in img_pts, the camera has moved
• and hand-eye calibration is a different problem from the Perspective-n-Point method (pose from 2D/3D correspondences)

### Hand-eye calibration

This topic is well covered in the literature.

Normally the setup is one of the following:

• the camera is attached to the end-effector of the robot and you want to estimate the transformation between the camera frame and the end-effector frame, T_robot_camera ("eye-in-hand")

• or the camera is static and you want to estimate the transformation between the camera frame and the robot reference frame, T_ext_camera ("eye-to-hand")

In the first case, the calibration procedure is:

• the calibration pattern is static
• you move the end-effector to capture several images on a sphere around the calibration pattern
• for each image, you have:
  • the transformation between the camera and the calibration pattern frames, T_camera_pattern (with OpenCV: findChessboardCorners() + solvePnP())
  • the transformation between the end-effector and the robot frames, T_ext_end-effector
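To make the frames concrete, the constraint that hand-eye calibration actually solves can be sketched with synthetic data (all poses below are made-up values; `X` stands for the unknown T_end-effector_camera):

```python
import numpy as np

def rot(axis, a):
    # Rotation matrix about a principal axis, for building synthetic poses.
    c, s = np.cos(a), np.sin(a)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def make_T(R, t):
    # Homogeneous 4x4 transformation from rotation + translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

inv = np.linalg.inv

# X: the unknown T_end-effector_camera we ultimately want to estimate.
X = make_T(rot('x', 0.3), [0.01, 0.02, 0.10])

# Static pattern in the robot base frame; two end-effector poses.
T_base_pattern = make_T(rot('z', 0.5), [0.40, 0.00, 0.00])
E1 = make_T(rot('y', 0.2), [0.30, 0.10, 0.20])   # T_base_end-effector, pose 1
E2 = make_T(rot('x', -0.4), [0.25, 0.15, 0.25])  # T_base_end-effector, pose 2

# What each image gives you via solvePnP: T_camera_pattern at that pose.
C1 = inv(E1 @ X) @ T_base_pattern
C2 = inv(E2 @ X) @ T_base_pattern

# Relative motions between the two stations.
A = inv(E1) @ E2        # end-effector motion (from robot kinematics)
B = C1 @ inv(C2)        # camera motion (from the two solvePnP poses)

# The hand-eye constraint A @ X == X @ B, which Tsai-style methods solve for X.
print(np.allclose(A @ X, X @ B))  # True (exact synthetic data)
```

With several such motion pairs (with sufficiently different rotation axes), X is the only unknown in A X = X B, which is exactly the problem the methods below solve.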

Then, you can estimate the transformation between the end-effector frame and the camera frame using a standard hand-eye calibration method (e.g. Tsai method).

You should be able to find code that implements the Tsai method or similar methods, for instance this one: ROS + CamOdoCal Hand Eye Calibration.

My advice is to use ViSP for the Tsai calibration, since I have already used this library: