Hello guys,
for my project I am trying to calibrate my depth camera with my robot arm (hand-eye calibration). To do this, I measured 8 corners of the chessboard and got 8 pixel coordinates together with their corresponding 3D robot coordinates (using the gripper to point exactly at each corner). Then I tried to compute the transformation matrix, but the result is way off. Where is my mistake? Thank you very much!
import cv2
import numpy as np

camera_intrinsics = sensor.ir_intrinsics    # depth/IR intrinsics (not used below)
intrinsics = sensor.color_intrinsics        # color intrinsics; intrinsics.K is the 3x3 camera matrix
# distortion coefficients (k1, k2, p1, p2, k3)
distortion = np.array([0.03056762860315778,
-0.1824835906443329,
-0.0007936359893356936,
-0.001735795279032343,
0])
# measured pixel coordinates of the 8 chessboard corners
img_pts = np.array(((237., 325.),
(242., 245.),
(248., 164.),
(318., 330.),
(322., 250.),
(328., 170.),
(398., 336.),
(403., 255.)), dtype=np.float32)
# corresponding 3D coordinates of the same corners in the robot frame
objectPoints = np.array(((-437., -491., -259.),
(-402., -457., -259.),
(-360., -421., -259.),
(-393., -534., -259.),
(-362., -504., -259.),
(-332., -476., -259.),
(-298., -511., -259.),
(-334., -546., -259.)), dtype=np.float32)
# estimate the pose of the robot-frame points relative to the camera
ret, rvec, tvec = cv2.solvePnP(objectPoints, img_pts, intrinsics.K, distortion)
rmtx, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
# assemble the 4x4 homogeneous transform (robot frame -> camera frame)
T_gripper_cam = np.eye(4)
T_gripper_cam[:3, :3] = rmtx
T_gripper_cam[:3, 3] = tvec.flatten()
T_cam_gripper = np.linalg.inv(T_gripper_cam)  # camera frame -> robot frame
print(T_cam_gripper)
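# quick sanity check on T_cam_gripper: a camera-frame point is written as a
# homogeneous column vector and multiplied from the left; the camera origin
# should then come out at the camera's position in robot coordinates
cam_origin = np.array([0., 0., 0., 1.])
print((T_cam_gripper @ cam_origin)[:3])   # equals the translation part of T_cam_gripper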
# try to map the first pixel back to its robot coordinates
p = np.array((237., 325., 0.), dtype=np.float32)
p_new = np.array([p[0], p[1], p[2], 1])
objectPoint = np.dot(p_new, T_gripper_cam)[:3]
print(objectPoint)
# should be (-437., -491., -259.)
# --> actual = [ 33.00356493 -395.87868316 -63.12553304]
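I guess I could also check the solvePnP result itself by reprojecting the robot points back into the image with cv2.projectPoints and comparing against the measured pixels, roughly like this (just a sketch, untested):

projected, _ = cv2.projectPoints(objectPoints, rvec, tvec, intrinsics.K, distortion)
projected = projected.reshape(-1, 2)
print(projected)
print("mean reprojection error [px]:", np.linalg.norm(projected - img_pts, axis=1).mean())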