decomposeProjectionMatrix leads to strange rotation matrix

asked 2017-06-29 14:25:05 -0500

swiss_knight

updated 2017-06-30 08:49:39 -0500

I don't understand why this (decomposeProjectionMatrix) doesn't give the same rotation matrices as the input ones:

import cv2
import numpy as np
import math
def Rotx(angle):
    Rx = np.array([[1,                0,                0],
                   [0,  math.cos(angle), -math.sin(angle)],
                   [0, +math.sin(angle),  math.cos(angle)]])
    return Rx

def Roty(angle):
    Ry = np.array([[ math.cos(angle), 0, +math.sin(angle)],
                   [               0, 1,                0],
                   [-math.sin(angle), 0,  math.cos(angle)]])
    return Ry

def Rotz(angle):
    Rz = np.array([[ math.cos(angle), -math.sin(angle), 0],
                   [+math.sin(angle),  math.cos(angle), 0],
                   [               0,                0, 1]])
    return Rz


ax, by, cz = 10, 20, 30  # example angles in degrees

ax = math.pi*ax/180
by = math.pi*by/180
cz = math.pi*cz/180

Rx = Rotx(ax)
Ry = Roty(by)
Rz = Rotz(cz)

Pxyz = np.zeros((3,4))
Rxyz =,,Rz))

Pxyz[:,:3] = Rxyz

decomposition = cv2.decomposeProjectionMatrix(Pxyz)

Then, decomposition[3] is not equal to Rx, decomposition[4] is not equal to Ry, and decomposition[5] is not equal to Rz. But surprisingly, decomposition[1] is equal to Rxyz,
while Rdxyz =[3],[4]), decomposition[5]) is not equal to Rxyz!

Do you know why?
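A possibly relevant observation: a rotation matrix does not factor uniquely into per-axis rotations; the individual factors depend on the multiplication order the decomposition assumes, which need not match the order used to build the matrix. A standalone NumPy-only sketch (no OpenCV; the lowercase helper names are just to avoid clashing with the functions above):

```python
import numpy as np

def rotx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def roty(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Build a rotation as Rx @ Ry @ Rz, the order used above.
ax, by, cz = np.radians([10.0, 20.0, 30.0])
R = rotx(ax) @ roty(by) @ rotz(cz)

# Extract Euler angles assuming the other common order, R = Rz @ Ry @ Rx.
beta  = -np.arcsin(R[2, 0])
alpha = np.arctan2(R[2, 1], R[2, 2])
gamma = np.arctan2(R[1, 0], R[0, 0])

# The per-axis factors differ from the original ones (alpha != ax here),
# yet their product, taken in the assumed order, reproduces R exactly.
Rzyx = rotz(gamma) @ roty(beta) @ rotx(alpha)
```

So comparing the per-axis outputs element-wise with Rx, Ry, Rz is only meaningful if the decomposition's multiplication order and angle conventions match the ones used to build Rxyz.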


Another way to see this is the following. Let's retrieve translation and rotation vectors from solvePnP:

retval, rvec,  tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)

Then, let's rebuild the rotation matrix from the rotation vector:

rmat        = cv2.Rodrigues(rvec)[0]
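(cv2.Rodrigues returns a (matrix, jacobian) tuple, hence the [0].) For reference, the underlying conversion is Rodrigues' formula; a NumPy-only sketch for illustration, not OpenCV's implementation:

```python
import numpy as np

def rodrigues(rvec):
    """Convert an axis-angle vector into a 3x3 rotation matrix."""
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)   # rotation angle = vector norm
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta               # unit rotation axis
    K = np.array([[    0, -k[2],  k[1]],   # cross-product matrix [k]_x
                  [ k[2],     0, -k[0]],
                  [-k[1],  k[0],     0]])
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rodrigues([0.1, -0.2, 0.3])   # orthogonal, det +1, fixes its own axis
```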

And finally create the projection matrix as P = [ R | t ], with an extra row of [0, 0, 0, 1] to make it square:

P        = np.zeros((4,4))
P[:3,:3] = rmat
P[:3,3]  = tvec.T # need to transpose tvec in order to fit with destination shape! 
P[3,3]   = 1
print(P)
[[  6.08851883e-01   2.99048587e-01   7.34758006e-01  -4.75705058e+01]
 [  6.78339121e-01   2.83943605e-01  -6.77666634e-01  -3.24002911e+01]
 [ -4.11285086e-01   9.11013706e-01  -2.99767575e-02   2.24834560e+01]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]

If I understand correctly, this matrix (does it have a name?) brings points from the world reference frame into the camera reference frame; combined with the camera intrinsic parameter matrix, it projects them onto the image.
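This [R | t] matrix is usually called the extrinsic (or world-to-camera, or pose) matrix: on its own it maps X_cam = R · X_world + t. A small sketch with made-up values:

```python
import numpy as np

# Made-up extrinsics for illustration: 90-degree rotation about Z plus a shift.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

# The 4x4 world-to-camera transform, built the same way as P above.
P = np.eye(4)
P[:3, :3] = R
P[:3, 3] = t

Xw = np.array([10.0, 60.0, 0.0, 1.0])   # world point in homogeneous coordinates
Xc = P @ Xw                             # the same point in camera coordinates
# Xc[:3] equals R @ Xw[:3] + t, i.e. [-59., 12., 3.]
```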


It is easily checked by drawing the projected points on the original image:

projected_points = [,P[:3]), op) /,P[:3]), op)[2] for op in obj_pts]

where cam_mat is the camera's intrinsic parameter matrix (basically the focal lengths in the first two elements of the diagonal and the principal point coordinates in the first two elements of the third column), and obj_pts is an array of point coordinates expressed in the world reference frame, in homogeneous coordinates, for example: [ 10. , 60. , 0. , 1. ].
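A worked projection with made-up intrinsics may make this concrete (fx, fy, cx, cy and the point below are arbitrary illustration values, not from the question):

```python
import numpy as np

fx, fy = 800.0, 800.0   # focal lengths in pixels (made-up)
cx, cy = 320.0, 240.0   # principal point (made-up)
cam_mat = np.array([[fx,  0, cx],
                    [ 0, fy, cy],
                    [ 0,  0,  1]])

P34 = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])  # [R | t] with R = I, t = (0, 0, 5)
obj_pt = np.array([1.0, 2.0, 0.0, 1.0])              # homogeneous world point

uvw = cam_mat @ P34 @ obj_pt     # apply the full 3x4 projection
pixel = uvw[:2] / uvw[2]         # perspective divide: [480., 560.]
```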

Projected points may then be drawn on image:

[, (int(p[0]), int(p[1])), 10, (0, 0, 255), -1) for p in projected_points]

It works well: the projected points land close to the original points.

Inverse ...
