# decomposeProjectionMatrix leads to strange rotation matrix

I don't understand why `decomposeProjectionMatrix` doesn't give back the same rotation matrices as the input ones:

```
import cv2
import numpy as np
import math

def Rotx(angle):
    Rx = np.array([[1, 0,                0],
                   [0, math.cos(angle), -math.sin(angle)],
                   [0, math.sin(angle),  math.cos(angle)]])
    return Rx

def Roty(angle):
    Ry = np.array([[ math.cos(angle), 0, math.sin(angle)],
                   [ 0,               1, 0],
                   [-math.sin(angle), 0, math.cos(angle)]])
    return Ry

def Rotz(angle):
    Rz = np.array([[math.cos(angle), -math.sin(angle), 0],
                   [math.sin(angle),  math.cos(angle), 0],
                   [0,                0,               1]])
    return Rz

# angles in degrees, converted to radians
ax = math.pi * 22 / 180
by = math.pi * 77 / 180
cz = math.pi * 11 / 180

Rx = Rotx(ax)
Ry = Roty(by)
Rz = Rotz(cz)

# projection matrix [R | 0] with R = Rx.Ry.Rz
Pxyz = np.zeros((3, 4))
Rxyz = np.dot(Rx, np.dot(Ry, Rz))
Pxyz[:, :3] = Rxyz

decomposition = cv2.decomposeProjectionMatrix(Pxyz)
```
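For context, a single rotation matrix factors into per-axis rotations differently depending on the assumed axis order, and OpenCV's RQ-based decomposition is not documented to use the same `Rx·Ry·Rz` order as the code above. A NumPy-only sketch (no OpenCV needed) showing that the factor order changes the product even with identical angles:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, by, cz = np.deg2rad([22.0, 77.0, 11.0])

# Same three angles, two different factor orders:
R_xyz = rot_x(ax) @ rot_y(by) @ rot_z(cz)
R_zyx = rot_z(cz) @ rot_y(by) @ rot_x(ax)

# The two products are different rotations, so per-axis factors
# returned under one convention need not match those of another.
print(np.allclose(R_xyz, R_zyx))
```

So even when a decomposition reproduces the overall rotation, its per-axis factors only match the inputs if both sides use the same order convention.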

Then `decomposition[3]` is not equal to `Rx`, `decomposition[4]` is not equal to `Ry`, and `decomposition[5]` is not equal to `Rz`. But surprisingly, `decomposition[1]` is equal to `Rxyz`, while `Rdxyz = np.dot(decomposition[3], np.dot(decomposition[4], decomposition[5]))` is not equal to `Rxyz`!

Do you know why?

# Update:

Another way to see this is the following: let's retrieve translation and rotation vectors from `solvePnP`:

```
retval, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)
```

Then, let rebuild the rotation matrix from the rotation vector:

```
rmat = cv2.Rodrigues(rvec)[0]
```
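For reference, `cv2.Rodrigues` converts a rotation vector (axis direction times angle in radians) into a rotation matrix. A NumPy-only sketch of that direction of the conversion, following the textbook Rodrigues formula (this mirrors what the call computes, not OpenCV's exact implementation):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues formula)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    theta = np.linalg.norm(rvec)      # rotation angle
    if theta < 1e-12:
        return np.eye(3)              # zero rotation
    k = rvec / theta                  # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # skew (cross-product) matrix
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Example: 90 degrees about the z axis maps the x axis onto the y axis
R = rodrigues([0.0, 0.0, np.pi / 2])
```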

## Projection matrix:

And finally create the projection matrix as P = [ R | t ], with an extra row [0, 0, 0, 1] to make it square:

```
P = np.zeros((4, 4))
P[:3, :3] = rmat
P[:3, 3] = tvec.T  # transpose tvec so it fits the destination shape
P[3, 3] = 1
print(P)
# [[ 6.08851883e-01  2.99048587e-01  7.34758006e-01 -4.75705058e+01]
#  [ 6.78339121e-01  2.83943605e-01 -6.77666634e-01 -3.24002911e+01]
#  [-4.11285086e-01  9.11013706e-01 -2.99767575e-02  2.24834560e+01]
#  [ 0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00]]
```

If I understand correctly, this matrix (does it have a name?) brings points from the world reference frame to the camera reference frame, and the camera intrinsic parameter matrix then projects them onto the image.
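As an illustration, here is a minimal NumPy sketch of that world-to-camera mapping, using a made-up rotation and translation (not the values printed above):

```python
import numpy as np

# Illustrative pose: 90-degree rotation about z, translation of 5 along x
rmat = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])
tvec = np.array([5., 0., 0.])

# Homogeneous world-to-camera matrix [R | t] with a [0, 0, 0, 1] bottom row
P = np.eye(4)
P[:3, :3] = rmat
P[:3, 3] = tvec

world_pt = np.array([1., 0., 0., 1.])  # homogeneous world point
cam_pt = P @ world_pt                  # same point in the camera frame
print(cam_pt)                          # [5. 1. 0. 1.]
```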

### Checking:

It is easily checked by drawing projected points on the original image:

```
full_P = np.dot(cam_mat, P[:3])  # 3x4 full projection matrix
projected_points = np.array([np.dot(full_P, op) for op in obj_pts])
projected_points = projected_points[:, :2] / projected_points[:, 2:]  # perspective divide
```
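The whole projection can also be done in one vectorized pass. A sketch with made-up intrinsics, pose, and points (none of these values come from the post above):

```python
import numpy as np

# Made-up intrinsics and pose for illustration
cam_mat = np.array([[800., 0., 320.],
                    [0., 800., 240.],
                    [0., 0., 1.]])
P = np.eye(4)      # identity pose: camera axes aligned with the world
P[2, 3] = 100.     # push points 100 units in front of the camera

# Homogeneous object points, one per row
obj_pts = np.array([[10., 60., 0., 1.],
                    [-5., 20., 0., 1.]])

M = cam_mat @ P[:3]               # 3x4 full projection matrix
proj = (M @ obj_pts.T).T          # shape (N, 3), still homogeneous
proj = proj[:, :2] / proj[:, 2:]  # divide by the third coordinate once
print(proj)                       # [[400. 720.] [280. 400.]]
```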

where `cam_mat` is the intrinsic parameter matrix of the camera (basically, the focal lengths on the first two diagonal elements and the principal point coordinates in the first two elements of the third column), and where `obj_pts` is an array of point coordinates expressed in the world reference frame, in homogeneous coordinates, for example `[10., 60., 0., 1.]`.
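Such an intrinsic matrix looks like this, with hypothetical focal lengths and principal point:

```python
import numpy as np

fx, fy = 800.0, 800.0  # focal lengths in pixels (made-up values)
cx, cy = 320.0, 240.0  # principal point, roughly the image center

cam_mat = np.array([[fx, 0., cx],
                    [0., fy, cy],
                    [0., 0., 1.]])
```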

The projected points may then be drawn on the image (coordinates rounded to integer pixels for `cv2.circle`):

```
[cv2.circle(img, (int(round(p[0])), int(round(p[1]))), 10, (0, 0, 255), -1) for p in projected_points]
```

It works well: the projected points fall close to the original image points.