
Camera calibration for face landmarks

I have a question regarding camera calibration.

I want to calibrate my laptop camera. In the OpenCV documentation, the calibrateCamera function is given as follows:

cv2.calibrateCamera(objectPoints, imagePoints, imageSize[, cameraMatrix[, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]]])

For chessboard images we take z as zero. For non-planar objects like a face, we have to include the z values as well. So should I give the objpoints with respect to camera coordinates, or with respect to the object's own coordinate system? If it is with respect to camera coordinates, then how do I find the depth? Or can I measure it and use an approximation?
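To show what I mean by z being zero, here is how I currently build objpoints for a planar chessboard target (a minimal numpy sketch; the 9x6 inner-corner pattern and 25 mm square size are just example values I picked):

```python
import numpy as np

# Example values: a hypothetical 9x6 inner-corner board, 25 mm squares.
pattern_size = (9, 6)
square_mm = 25.0

# For a planar target the object points live in the board's own
# coordinate system, so every z coordinate is simply 0.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = (np.mgrid[0:pattern_size[0], 0:pattern_size[1]]
               .T.reshape(-1, 2) * square_mm)

# One copy of objp per calibration image, paired with the detected
# corners (imagePoints), is what I would pass to cv2.calibrateCamera.
print(objp.shape)  # (54, 3)
```

My question is what replaces this z = 0 plane when the target is a face instead of a board.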

Actually, I want to know whether the extrinsic parameters from camera calibration are used to compute

1) the model matrix, or
2) the view matrix in OpenGL.

My understanding is that the model matrix converts object coordinates to world coordinates, and the view matrix converts world coordinates to camera coordinates.
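For reference, this is how I would turn OpenCV's rvec/tvec extrinsics into a 4x4 OpenGL-style view matrix if my understanding is right (a numpy-only sketch; the Rodrigues conversion is reimplemented here instead of calling cv2.Rodrigues, and the diag(1,-1,-1,1) flip is my assumption for converting OpenCV's y-down/z-forward camera axes to OpenGL's y-up/z-backward convention):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (same convention as
    cv2.Rodrigues): axis = rvec/|rvec|, angle = |rvec|."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def view_matrix_from_extrinsics(rvec, tvec):
    """[R|t] maps world points into OpenCV camera coordinates; the
    diag(1,-1,-1,1) flip then converts to OpenGL camera axes."""
    view = np.eye(4)
    view[:3, :3] = rodrigues(rvec)
    view[:3, 3] = np.asarray(tvec, dtype=float).reshape(3)
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return flip @ view

# With zero rotation and translation, only the axis flip remains.
print(view_matrix_from_extrinsics([0, 0, 0], [0, 0, 0]))
```

Is this the right way to connect the calibration extrinsics to OpenGL, or do they belong in the model matrix instead?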

Suppose an object is in front of the laptop camera. Should my values of the object points (objpoints in the calibrateCamera function of OpenCV) be with respect to the object itself, or with respect to the camera (in which case I have to find the depth)? And how do the extrinsic parameters from camera calibration relate to the model or view matrix used in OpenGL?
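To make the question concrete, here is a toy pinhole projection showing how I currently understand the pieces fitting together: the extrinsics [R|t] take object/world points into camera coordinates (like a combined model-view transform), and the intrinsic matrix K then maps camera coordinates to pixels (all numbers below are made up):

```python
import numpy as np

# Made-up intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                     # no rotation in this toy case
t = np.array([0.0, 0.0, 500.0])   # object 500 units in front of camera

x_world = np.array([0.0, 0.0, 0.0])  # a point on the object
x_cam = R @ x_world + t              # world -> camera (extrinsics)
uvw = K @ x_cam                      # camera -> image plane (intrinsics)
pixel = uvw[:2] / uvw[2]
print(pixel)  # [320. 240.] - the principal point, as expected
```

If this picture is wrong, I would appreciate a correction.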

Apologies for the long explanation.

Regards,
Srikanth