Is the rotation matrix R described in Camera Calibration the same as the rotation matrix R used in stitching?

asked 2015-04-14 19:45:44 -0500 by cris

I believe there is an inconsistency between the camera rotation matrix defined in the camera calibration module and the camera rotation matrix defined in the stitching module.

The rotation matrix described in the calibration module is the standard rotation matrix from camera extrinsics: it maps world coordinates into camera coordinates. An example: suppose there is a red dot in the world direction (-1, -1, 1), and a camera at the origin whose principal axis also points along (-1, -1, 1). Then this rotation matrix R turns the direction (-1, -1, 1) into the optical axis direction (0, 0, 1), since the red dot should appear in the middle of a photo taken by that camera.
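To make the example above concrete, here is a small numpy sketch (not OpenCV API code) that builds one such extrinsic rotation R by completing the principal-axis direction to an orthonormal basis, and checks that R sends that direction onto the optical axis. The choice of in-plane roll is arbitrary here:

```python
import numpy as np

# World-space direction of the red dot, which is also the camera's principal axis.
d = np.array([-1.0, -1.0, 1.0])
ez = d / np.linalg.norm(d)                 # camera z-axis expressed in world coords

# Complete ez to an orthonormal basis (the roll about the axis is arbitrary).
ex = np.cross([1.0, 0.0, 0.0], ez)
ex /= np.linalg.norm(ex)
ey = np.cross(ez, ex)

# Extrinsic rotation: rows are the camera axes in world coordinates,
# so R maps world directions into camera coordinates.
R = np.vstack([ex, ey, ez])

# The dot's direction lands on the optical axis.
print(np.round(R @ ez, 6))                 # -> [0. 0. 1.]
```

Note that a rotation preserves length, so it is the unit direction of (-1, -1, 1) that maps to (0, 0, 1); the vector (-1, -1, 1) itself maps to (0, 0, sqrt(3)).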

However, the rotation matrix used in stitching, CameraParams.R, appears to be the inverse of the matrix R above, based on how CameraParams.R is computed in motion_estimators.cpp. Unless I am terribly mistaken, the code treats CameraParams.R as a matrix that takes the vector (0, 0, 1) and turns it into the principal axis of the camera in question. In the example above, CameraParams.R would turn (0, 0, 1) into the direction (-1, -1, 1); in other words, it is the inverse of the calibration rotation matrix R.
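Under that reading of motion_estimators.cpp (my interpretation, not something the docs state), the two conventions differ only by a transpose, since the inverse of a rotation matrix is its transpose. A numpy sketch of the relationship:

```python
import numpy as np

# Same camera as before: principal axis along (-1, -1, 1) in world coordinates.
ez = np.array([-1.0, -1.0, 1.0])
ez /= np.linalg.norm(ez)
ex = np.cross([1.0, 0.0, 0.0], ez)
ex /= np.linalg.norm(ex)
ey = np.cross(ez, ex)
R_calib = np.vstack([ex, ey, ez])      # calibration convention: world -> camera

# Hypothesized stitching convention: camera -> world, i.e. the inverse
# (= transpose, for a rotation) of the calibration matrix.
R_stitch = R_calib.T

# It sends the optical axis (0, 0, 1) back to the world-space principal axis.
print(np.round(R_stitch @ [0.0, 0.0, 1.0], 6))   # ~ (-1, -1, 1) / sqrt(3)
```

If this is right, converting between the two conventions is just a transpose, but mixing them up silently produces a camera looking the "wrong way".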

Is this indeed the case? And if so, why do the two modules use different definitions of the rotation matrix? Why have two conventions for a rotation matrix in OpenCV?
