Any way to do the hand-eye transformation in OpenCV?

I’d like to get the transformation between the camera and the end effector for an eye-in-hand robot, without specifying the position of the calibration object (e.g. a chessboard) in the world coordinate frame. Does OpenCV have a function for this?

The documentation of calibrateCamera says the returned rvecs/tvecs describe the transformation between the calibration pattern and the world coordinate space (“brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view”), but what it actually returns is the transformation between the pattern and the camera. Because of this, the transformation between the camera and the end effector cannot be computed without also knowing the pose of the pattern in the world frame.

http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#calibratecamera
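To illustrate what I mean, here is a minimal sketch in Python (synthetic data, made-up intrinsics and poses) showing that each rvec/tvec pair returned by calibrateCamera is the pattern-to-camera transform T_cam_obj, so on its own it does not connect to the robot base or end effector:

```python
# Sketch: the rvec/tvec pairs from cv2.calibrateCamera map *pattern* points
# into the *camera* frame (T_cam_obj), not into a world/base frame.
import cv2
import numpy as np

# Synthetic 6x9 chessboard with 25 mm squares (points in the board frame).
square = 0.025
objp = np.zeros((6 * 9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square

# Made-up "ground truth" camera and board poses, used only to synthesize
# image points so the example is self-contained.
K_true = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0, 0.0, 1.0]])
dist_zero = np.zeros(5)
rvecs_true = [np.array([0.1, -0.2, 0.05]),
              np.array([-0.3, 0.1, 0.0]),
              np.array([0.0, 0.25, -0.1])]
tvecs_true = [np.array([0.05, -0.02, 0.6]),
              np.array([-0.04, 0.03, 0.7]),
              np.array([0.02, 0.01, 0.5])]

obj_pts, img_pts = [], []
for r, t in zip(rvecs_true, tvecs_true):
    proj, _ = cv2.projectPoints(objp, r, t, K_true, dist_zero)
    obj_pts.append(objp)
    img_pts.append(proj.astype(np.float32))

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)

# Each (rvec, tvec) is the board-to-camera transform for that view.
R, _ = cv2.Rodrigues(rvecs[0])
T_cam_obj = np.eye(4)
T_cam_obj[:3, :3] = R
T_cam_obj[:3, 3] = tvecs[0].ravel()
print(T_cam_obj)
# To reach the end effector I would still need either the board's pose in the
# robot base frame, or a dedicated hand-eye calibration.
```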

My current understanding is that there are only two options for this: the CamOdoCal library (https://github.com/hengli/camodocal) or ViSP’s hand2eye calibration (https://github.com/lagadic/vision_visp/tree/master/visp_hand2eye_calibration).
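As far as I understand, what those libraries estimate is the solution X = T_gripper_cam of the classic AX = XB hand-eye equation, built from pairs of robot poses and pattern poses. A rough sketch of how the motion pairs are assembled (Python/NumPy, placeholder pose lists, not a solver) would look like this:

```python
# Sketch of the AX = XB setup that hand-eye libraries solve.
# T_base_gripper[i]: end-effector pose in the robot base frame (from the robot).
# T_cam_target[i]:   pattern pose in the camera frame (from calibrateCamera
#                    or solvePnP). Both lists are placeholders here.
import numpy as np

def inv(T):
    """Invert a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def motion_pairs(T_base_gripper, T_cam_target):
    """Build (A, B) pairs such that A @ X = X @ B, with X = T_gripper_cam."""
    pairs = []
    for i in range(len(T_base_gripper) - 1):
        j = i + 1
        # Relative gripper motion, expressed in the gripper frame.
        A = inv(T_base_gripper[j]) @ T_base_gripper[i]
        # Corresponding relative camera motion, from the pattern observations.
        B = T_cam_target[j] @ inv(T_cam_target[i])
        pairs.append((A, B))
    return pairs

# A solver such as Tsai-Lenz (implemented in ViSP and CamOdoCal) then recovers
# X = T_gripper_cam from several such pairs, without ever needing the
# pattern's pose in the world/base frame.
```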

I’d like to confirm this, and if anyone knows how to get the hand-eye transformation in OpenCV, please let me know.