<h1>findEssentialMat or decomposeEssentialMat do not work correctly</h1>
I generated 3D points, projected them onto two cameras (dst and src) with known poses, and tried to recover the camera poses.<br>
The dst camera has no rotation and no translation, so one of the rotations returned by decomposeEssentialMat should match the src rotation.<br>
However, the rotations and the translation returned by decomposeEssentialMat are both completely wrong:
<pre>
import cv2
import numpy as np

# Six 3D points in the world frame.
objectPoints = np.float64([[-1,-1,5],[1,-1,5],[1,1,5],[-1,1,5],[0,0,0],[0,0,5]])

# src camera: rotated 1 rad about z and translated; dst camera: identity pose.
srcRot = np.float64([[0,0,1]])
srcT = np.float64([[0.5,0.5,-1]])
dstRot = np.float64([[0,0,0]])
dstT = np.float64([[0,0,0]])

# Identity intrinsics (focal length 1, principal point at the origin).
cameraMatrix = np.float64([[1,0,0],
                           [0,1,0],
                           [0,0,1]])

srcPoints = cv2.projectPoints(objectPoints,srcRot,srcT,cameraMatrix,None)[0]
dstPoints = cv2.projectPoints(objectPoints,dstRot,dstT,cameraMatrix,None)[0]
E = cv2.findEssentialMat(srcPoints,dstPoints)[0]
R1,R2,t = cv2.decomposeEssentialMat(E)
print(cv2.Rodrigues(R1)[0])
print(cv2.Rodrigues(R2)[0])
print(t)
</pre>
The result for the rotations (as Rodrigues vectors) and t:
<pre>
R1=[[-2.8672671 ]
[ 0.82984579]
[ 0.12698814]]
R2=[[ 0.84605365]
[ 2.92326821]
[-0.24527328]]
t=[[ 8.47069335e-04]
[ -3.75356183e-03]
[ -9.99992597e-01]]
</pre>
The rotations are correct only when the two cameras are at the same height, but the translation direction is always wrong. Is this a bug or my mistake?
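For what it's worth, decomposeEssentialMat returns four candidate poses (R1, R2, with +t or -t), and cv2.recoverPose (OpenCV 3.x) is supposed to select the physically valid one via a cheirality check. A minimal sketch, reusing the arrays above:
<pre>
# Sketch: let recoverPose pick among the four (R, t) candidates implied by E.
retval, R, t, mask = cv2.recoverPose(E, srcPoints, dstPoints)
print(cv2.Rodrigues(R)[0])  # compare against the known relative rotation
print(t)                    # translation is recovered up to scale only
</pre>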
Asked by Kolyan on Mon, 14 Mar 2016
<h1>decomposeProjectionMatrix leads to strange rotation matrix</h1>
I don't understand why `decomposeProjectionMatrix` doesn't give the same rotation matrices as the input ones:
<pre>
import cv2
import numpy as np
import math

def Rotx(angle):
    # Rotation about the x axis.
    return np.array([[1, 0,                0              ],
                     [0, math.cos(angle), -math.sin(angle)],
                     [0, math.sin(angle),  math.cos(angle)]])

def Roty(angle):
    # Rotation about the y axis.
    return np.array([[ math.cos(angle), 0, math.sin(angle)],
                     [ 0,               1, 0              ],
                     [-math.sin(angle), 0, math.cos(angle)]])

def Rotz(angle):
    # Rotation about the z axis.
    return np.array([[math.cos(angle), -math.sin(angle), 0],
                     [math.sin(angle),  math.cos(angle), 0],
                     [0,                0,               1]])

# Euler angles in degrees, converted to radians.
ax = math.pi * 22 / 180
by = math.pi * 77 / 180
cz = math.pi * 11 / 180

Rx = Rotx(ax)
Ry = Roty(by)
Rz = Rotz(cz)

# 3x4 projection matrix whose rotation part is Rx * Ry * Rz.
Pxyz = np.zeros((3, 4))
Rxyz = np.dot(Rx, np.dot(Ry, Rz))
Pxyz[:, :3] = Rxyz

decomposition = cv2.decomposeProjectionMatrix(Pxyz)
</pre>
Then, `decomposition[3]` is not equal to `Rx`, `decomposition[4]` is not equal to `Ry`, and `decomposition[5]` is not equal to `Rz`.
But surprisingly, `decomposition[1]` is equal to `Rxyz`,
and `Rdxyz = np.dot(decomposition[3], np.dot(decomposition[4], decomposition[5]))` is not equal to `Rxyz`!
Do you know why?
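One thing worth checking (this is only a guess on my part) is whether the three per-axis factors simply compose in a different order than Rx*Ry*Rz. A quick brute-force test over all orderings, reusing the variables above:
<pre>
# Sketch: compare every multiplication order of the per-axis matrices
# returned by decomposeProjectionMatrix against the input rotation Rxyz.
import itertools

mats = {'x': decomposition[3], 'y': decomposition[4], 'z': decomposition[5]}
for order in itertools.permutations('xyz'):
    prod = np.dot(mats[order[0]], np.dot(mats[order[1]], mats[order[2]]))
    print(order, np.allclose(prod, Rxyz, atol=1e-9))
</pre>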
<h1>Update:</h1>
Another way to see this is the following. Let's retrieve a rotation vector and a translation vector from solvePnP:
<pre>
retval, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, cam_mat, dist_coeffs, rvec, tvec, flags=cv2.SOLVEPNP_ITERATIVE)
</pre>
Then, let's rebuild the rotation matrix from the rotation vector:
<pre>
rmat = cv2.Rodrigues(rvec)[0]
</pre>
<br>
<h2>Projection matrix:</h2>
And finally, create the projection matrix as P = [ R | t ] with an extra row of [0, 0, 0, 1] to make it square:
<pre>
P = np.zeros((4,4))
P[:3,:3] = rmat
P[:3,3] = tvec.T  # transpose tvec so it fits the destination shape
P[3,3] = 1
print(P)
</pre>
<pre>
[[  6.08851883e-01   2.99048587e-01   7.34758006e-01  -4.75705058e+01]
 [  6.78339121e-01   2.83943605e-01  -6.77666634e-01  -3.24002911e+01]
 [ -4.11285086e-01   9.11013706e-01  -2.99767575e-02   2.24834560e+01]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]
</pre>
If I understand correctly, this matrix (does it have a name?), together with the camera intrinsic matrix, brings points from the world reference frame to the camera reference frame.
<h3>Checking:</h3>
It is easily checked by drawing projected points on the original image:
<pre>
M = np.dot(cam_mat, P[:3])  # 3x4: intrinsics times the extrinsic part
projected_points = [np.dot(M, op) / np.dot(M, op)[2] for op in obj_pts]
</pre>
where `cam_mat` is the camera's intrinsic matrix (basically the focal length in the first two diagonal elements and the principal point coordinates in the first two elements of the third column),
and where `obj_pts` is an array of point coordinates expressed in the world reference frame, in homogeneous coordinates, for example `[ 10. , 60. , 0. , 1. ]`.
Projected points may then be drawn on the image:
<pre>
[cv2.circle(img, (int(p[0]), int(p[1])), 10, (0,0,255), -1) for p in projected_points]
</pre>
It works well. Projected points are near the original points.
<br>
<h2>Inverse of projection matrix:</h2>
Then, here is the inverse transformation, obtained by taking the inverse of the projection matrix:
<pre>
P_inv = np.linalg.inv(P)
print(P_inv)
</pre>
<pre>
[[  6.08851883e-01   6.78339121e-01  -4.11285086e-01   6.01888871e+01]
 [  2.99048587e-01   2.83943605e-01   9.11013706e-01   2.94301145e+00]
 [  7.34758006e-01  -6.77666634e-01  -2.99767575e-02   1.36701949e+01]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00   1.00000000e+00]]
</pre>
Here we can see the camera position expressed in the world reference frame in the last column: X=60.19, Y=2.94, Z=13.67.
This is exactly the same as the following value:
<pre>
cam_pos = -np.matrix(rmat).T * np.matrix(tvec)
print(cam_pos)
</pre>
<pre>
[[ 60.18888712]
 [  2.94301145]
 [ 13.67019486]]
</pre>
In addition, the upper 3x3 part of `P_inv` is the rotation matrix that brings points from the camera reference frame back to the world reference frame, which is exactly the transpose of `rmat`, so:
<pre>
# Element-wise division of the 3x3 upper part of the inverted projection
# matrix by rmat.T; should be all ones.
print(P_inv[:3,:3] / rmat.T)
</pre>
<pre>
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
</pre>
Up to here, everything is as I expected.
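This makes sense, since the inverse of a rigid transform has the closed form [R | t]^-1 = [R^T | -R^T t]. A quick numerical check (a sketch reusing `rmat`, `tvec` and `P` from above):
<pre>
# Sketch: the closed-form inverse of [R|t; 0 1] is [R.T | -R.T*t; 0 1],
# so no general matrix inversion is actually needed here.
P_inv_closed = np.eye(4)
P_inv_closed[:3,:3] = rmat.T
P_inv_closed[:3,3] = np.dot(-rmat.T, tvec).ravel()
print(np.allclose(P_inv_closed, np.linalg.inv(P)))  # expected: True
</pre>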
<br>
<h2>The cv2.decomposeProjectionMatrix() function:</h2>
Now, let's have a look at the `cv2.decomposeProjectionMatrix()` function, which should basically do the same job.
Let's feed this function our projection matrix (it needs to be reshaped to 3x4):
<pre>
decomp = cv2.decomposeProjectionMatrix(P[:3])
</pre>
As the doc says:
`decomp[0]` should be the estimated camera matrix,
`decomp[1]` should be the rotation matrix,
`decomp[2]` should be the translation vector,
`decomp[3]` to `decomp[5]` should be the three individual rotation matrices around the camera's x, y, z axes (here comes the issue described previously, on top of this post...),
and `decomp[6]` should be the Euler angles (around the camera's x, y, z axes).
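For readability, here is the same call with its seven outputs unpacked into names (a sketch; the names are mine, the order is as listed above):
<pre>
# Same call, outputs unpacked in the documented order.
(cam_mat_est, rot_mat, trans_vect,
 rot_x, rot_y, rot_z, euler_angles) = cv2.decomposeProjectionMatrix(P[:3])
</pre>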
Let's check that:
<pre>
print(decomp[1] / rmat)
</pre>
<pre>
[[ 1.  1.  1.]
 [ 1.  1.  1.]
 [ 1.  1.  1.]]
</pre>
OK; so here we find the same rotation matrix as the **input** rotation matrix (the upper 3x3 part of our projection matrix P). This would have given exactly the same result: `print(decomp[1] / P[:3,:3])`
Next, the translation (note: it must be normalized by its last component, which is a scale factor, since it is returned in homogeneous coordinates):
<pre>
print((decomp[2][:3] / decomp[2][3]) / tvec)
</pre>
<pre>
[[-1.2652564 ]
 [-0.09083287]
 [ 0.60801128]]
</pre>
Oh oh... What's wrong? Here is something I don't understand; I would have expected [1, 1, 1], as for the rotation matrix.
Glancing at `decomp[2]`, one can see it doesn't look like the input translation vector (unlike the rotation part); rather, it looks like the camera position, and in fact, it is (!):
<pre>
print((decomp[2][:3] / decomp[2][3]) / cam_pos)
</pre>
<pre>
[[ 1.]
 [ 1.]
 [ 1.]]
</pre>
And of course, on the inverted projection matrix `P_inv` the same thing appears in the reverse direction (I would have expected the camera position here, but it's tvec!):
<pre>
decomp_inv = cv2.decomposeProjectionMatrix(P_inv[:3])
print((decomp_inv[2][:3] / decomp_inv[2][3]) / tvec)
</pre>
<pre>
[[ 1.]
 [ 1.]
 [ 1.]]
</pre>
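So it seems (this is my assumption) that the returned vector is the camera centre C in homogeneous world coordinates rather than tvec, in which case the extrinsic translation should be recoverable as t = -R*C. A quick check:
<pre>
# Sketch: if decomp[2] is the camera centre C (homogeneous, world frame),
# the translation vector of the extrinsics should be t = -R * C.
C = decomp[2][:3] / decomp[2][3]
t_rebuilt = np.dot(-decomp[1], C)
print(np.allclose(t_rebuilt, tvec))  # expected: True
</pre>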
<h2>Summary:</h2>
In short:
<pre>
print(np.hstack((rmat, tvec)) / np.hstack((decomp[1], decomp[2][:3]/decomp[2][3])))
</pre>
<pre>
[[  1.           1.           1.          -0.79035364]
 [  1.           1.           1.         -11.00923039]
 [  1.           1.           1.           1.64470633]]
</pre>
with `rmat` and `tvec` the inputs that form the projection matrix `P` given to the `cv2.decomposeProjectionMatrix()` function.
And:
<pre>
print(np.hstack((rmat.T, cam_pos)) / np.hstack((decomp_inv[1], decomp_inv[2][:3]/decomp_inv[2][3])))
</pre>
<pre>
[[ 1.          1.          1.         -1.2652564 ]
 [ 1.          1.          1.         -0.09083287]
 [ 1.          1.          1.          0.60801128]]
</pre>
with `rmat.T` and `cam_pos` the inputs that form the projection matrix `P_inv` given to the `cv2.decomposeProjectionMatrix()` function.
Shouldn't there be "ones" everywhere?!
Asked by swiss_knight on Thu, 29 Jun 2017
<h1>Turning ArUco marker in parallel with camera plane</h1>
I need to warp the image to correct its perspective distortion based on a detected marker; in other words, to make the plane the marker lies on parallel to the camera plane.
In general it works for me when I simply map the points of the perspective-distorted marker to their orthogonal positions [(Sketch)](/upfiles/14908652376747118.png) with getPerspectiveTransform() and then warpPerspective(), which warps the whole image.
The following are sample params for getPerspectiveTransform():
<pre>
src1 (100, 100) => dst1 (100, 100)
src2 (110, 190) => dst2 (100, 200)
src3 (190, 190) => dst3 (200, 200)
src4 (200, 100) => dst4 (200, 100)
</pre>
The result looks OK, but not always, so I think this approach is wrong.
My assumption is that, since I can get a pose estimate for a detected marker (which gives its relation to the camera), I should be able to calculate the required marker position (or camera position?) from the marker points and the rotation/translation vectors.
Now I'm stuck, basically not understanding the math. Could you advise? A sketch of what I have in mind follows.
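To make the idea concrete, here is a minimal sketch (all numeric values are placeholders, and the approach itself is my assumption, not something I know to be correct): project the marker's object-space corners once with the estimated pose and once with the rotation zeroed out, then warp between the two projections.
<pre>
# Sketch: pose-based fronto-parallel warp. In practice rvec/tvec would come
# from cv2.aruco.estimatePoseSingleMarkers; placeholder values are used here.
import cv2
import numpy as np

marker_len = 0.05                      # marker side length (placeholder)
half = marker_len / 2.0
obj_corners = np.float64([[-half,  half, 0], [ half,  half, 0],
                          [ half, -half, 0], [-half, -half, 0]])

camera_mat = np.float64([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # placeholder
rvec = np.float64([0.3, -0.2, 0.1])    # placeholder pose
tvec = np.float64([0.0, 0.0, 0.5])

detected, _ = cv2.projectPoints(obj_corners, rvec, tvec, camera_mat, None)
frontal, _  = cv2.projectPoints(obj_corners, np.zeros(3), tvec, camera_mat, None)

H = cv2.getPerspectiveTransform(detected.reshape(4, 2).astype(np.float32),
                                frontal.reshape(4, 2).astype(np.float32))
# warped = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
</pre>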
**UPDATE**
The following is a source image with detected markers. The white circles represent the desired marker corner positions that will be used in getPerspectiveTransform().
![source](/upfiles/14909619594485387.png)
<pre>
Source corners: [479, 335; 530, 333; 528, 363; 475, 365]
Result corners: [479, 335; 529, 335; 529, 385; 479, 385]
</pre>
The following is the result image, which is still distorted:
![result](/upfiles/14909622416010795.png)
Asked by tischenkoalex on Thu, 30 Mar 2017
<h1>How to do "decomposeHomographyMat" in OpenCV 2.4</h1>
I need to use the Euler angles from a homography.
I found a function, "decomposeHomographyMat", in OpenCV 3,
but I use OpenCV 2.4.
How can I do this?
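For reference, a minimal sketch of how such a decomposition could be done by hand, assuming the homography H maps points on the world plane Z = 0 into the image and the camera matrix K is known. This is the standard planar pose-from-homography construction, not necessarily the exact algorithm decomposeHomographyMat implements, and it works with OpenCV 2.4 since it only needs NumPy:
<pre>
import numpy as np

def pose_from_homography(H, K):
    # Undo the intrinsics; the columns are then lambda * [r1, r2, t].
    A = np.dot(np.linalg.inv(K), H)
    A = A / np.linalg.norm(A[:, 0])      # fix the unknown scale
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack((r1, r2, np.cross(r1, r2)))
    # Project onto the nearest true rotation matrix via SVD.
    U, _, Vt = np.linalg.svd(R)
    return np.dot(U, Vt), t
</pre>
Euler angles could then be read off the resulting rotation matrix, e.g. with cv2.RQDecomp3x3, which I believe exists in 2.4 as well.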
Asked by nistar on Fri, 11 Dec 2015