OpenCV Q&A Forum (answers.opencv.org), Copyright OpenCV foundation, 2012-2018.

findEssentialMat or decomposeEssentialMat do not work correctly
http://answers.opencv.org/question/90070/findessentialmat-or-decomposeessentialmat-do-not-work-correctly/

I generated 3D points, projected them onto two cameras (dst and src) with known positions, and tried to recover the camera positions. <br>
The dst camera has no rotation and no translation, so one of the rotations returned by decomposeEssentialMat should be the src rotation.<br>
However, the rotations and translation returned by decomposeEssentialMat are both completely incorrect:
<pre>
import cv2
import numpy as np

objectPoints = np.float64([[-1,-1,5],[1,-1,5],[1,1,5],[-1,1,5],[0,0,0],[0,0,5]])
srcRot = np.float64([[0,0,1]])
srcT = np.float64([[0.5,0.5,-1]])
dstRot = np.float64([[0,0,0]])
dstT = np.float64([[0,0,0]])
cameraMatrix = np.float64([[1,0,0],
                           [0,1,0],
                           [0,0,1]])
srcPoints = cv2.projectPoints(objectPoints, srcRot, srcT, cameraMatrix, None)[0]
dstPoints = cv2.projectPoints(objectPoints, dstRot, dstT, cameraMatrix, None)[0]
E = cv2.findEssentialMat(srcPoints, dstPoints)[0]
R1, R2, t = cv2.decomposeEssentialMat(E)
print(cv2.Rodrigues(R1)[0])
print(cv2.Rodrigues(R2)[0])
print(t)
</pre>
The result for R and t:
<pre>
R1=[[-2.8672671 ]
[ 0.82984579]
[ 0.12698814]]
R2=[[ 0.84605365]
[ 2.92326821]
[-0.24527328]]
t=[[ 8.47069335e-04]
[ -3.75356183e-03]
[ -9.99992597e-01]]
</pre>
The rotations are correct only when the two cameras are at the same height, but the direction is always wrong.
Is this a bug or my mistake? (asked Mon, 14 Mar 2016)

Comment by ozgunus:
I have the same problem. Does anyone know whether it is a bug? I am using the C++ code. (Tue, 21 Jun 2016)

Answer by supersolver:
You are almost there. Just a couple of things need tweaking to get it to work.
1. I recommend using more input points. The [five-point algorithm](https://pdfs.semanticscholar.org/c288/7c83751d2c36c63139e68d46516ba3038909.pdf) used in calculating the essential matrix involves finding the roots of a tenth-degree polynomial. If you were using only five input points you would get multiple possible solutions. The bare minimum to get only one solution is six points, as you have done. However, I had to double the number of points you were using to get any decent results. A little bit of depth variation would also be helpful (though not mandatory) since all your points are coplanar.
2. You need to pick a good threshold for RANSAC (the default method) to actually work. The [OpenCV documentation](https://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findessentialmat) shows that the default threshold for RANSAC is 1.0, which in my opinion is a bit large even when using pixel coordinates. If you were using pixel coordinates I would recommend using something around 0.1 pixels. However, when using normalized image coordinates as you are doing, you should pick something even smaller, like 1e-4. The threshold you are using of 1.0 in this case corresponds to 45 degrees from the optical axis. Such a large threshold will permit many possible essential matrices for which every point is an inlier and no way to distinguish between them. If you aren't sure how to pick a good threshold, try using LMEDS instead. It doesn't require a threshold and has comparable computation time.
3. One of your points is invalid: it sits at the origin of one camera view, and in the other view it is behind the camera. I assume you were trying to make the camera move backwards along the z axis with the -1. However, the direction of the translation vector is counterintuitive: it is the direction the points translate in the camera frame, not the direction the camera moves. [This post](https://stackoverflow.com/a/36213818/4307850) explains this concept in more detail. (Sat, 20 Jan 2018)
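The three fixes above can be sanity-checked with a plain numpy sketch (no OpenCV calls, so nothing here is the cv2 API): build a richer, non-coplanar point set with no point at a camera centre or behind a camera, form the ground-truth essential matrix E = [t]x R, and confirm every correspondence satisfies the epipolar constraint. The `rodrigues` helper is written out only to keep the sketch self-contained.

```python
import numpy as np

def rodrigues(rvec):
    # Rodrigues formula: rotation vector -> 3x3 rotation matrix.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Twelve points instead of six, with depth variation, none at a camera
# centre and none behind either camera (the issues flagged above).
pts = np.array([[-1, -1, 5], [1, -1, 5], [1, 1, 5], [-1, 1, 5],
                [-1, -1, 7], [1, -1, 7], [1, 1, 7], [-1, 1, 7],
                [0.5, 0.3, 6], [-0.4, 0.8, 8], [0.2, -0.6, 9], [0.7, 0.1, 4]], float)

R = rodrigues(np.array([0.0, 0.0, 1.0]))   # src camera rotation
t = np.array([0.5, 0.5, -1.0])             # src camera translation

# Project into both views (dst is the identity camera; K = I, so these
# are normalized image coordinates).
x_dst = pts / pts[:, 2:3]
cam = (R @ pts.T).T + t                    # points in the src camera frame
x_src = cam / cam[:, 2:3]

# Ground-truth essential matrix E = [t]x R; every correspondence must
# satisfy the epipolar constraint x_src^T E x_dst = 0.
tx = np.array([[0, -t[2], t[1]],
               [t[2], 0, -t[0]],
               [-t[1], t[0], 0]])
E = tx @ R
residuals = np.einsum('ij,jk,ik->i', x_src, E, x_dst)
print(np.max(np.abs(residuals)))           # ~0, floating-point noise only
```

With a point set like this, cv2.findEssentialMat(srcPoints, dstPoints, method=cv2.LMEDS) (or RANSAC with a threshold around 1e-4 for normalized coordinates, as suggested in point 2) should recover an E close to this ground truth.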
Answer by Rengao Zhou:

Please avoid testing with made-up data; there are many small details to get right, and a synthetic setup makes it easy to miss one.
In your case, the points [0,0,0] and [0,0,5] are questionable: [0,0,0] in particular lies at the dst camera's centre of projection and falls behind the src camera, so projecting it is meaningless. Besides, points that all lie on one plane can be degenerate for estimating the essential matrix. The minimum number of points for the essential matrix is five, but in practice many more are needed; the algorithm uses RANSAC to eliminate outliers.
What's more, keep in mind that the translation you recover from the essential matrix is a unit vector: "By decomposing E, you can only get the direction of the translation, so the function returns unit t.", from the documentation of [decomposeEssentialMat()](https://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=decomposeessentialmat#void%20decomposeEssentialMat(InputArray%20E,%20OutputArray%20R1,%20OutputArray%20R2,%20OutputArray%20t)). (Mon, 29 Oct 2018)
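To see why only the direction survives, here is a numpy-only sketch of the standard SVD recipe behind essential-matrix decomposition (per Hartley and Zisserman); `decompose_E` is an illustrative helper written for this sketch, not an OpenCV function:

```python
import numpy as np

def decompose_E(E):
    # SVD-based decomposition: E = U diag(1,1,0) V^T gives two candidate
    # rotations and a translation direction (the third column of U).
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                # defined only up to sign; always unit length
    return R1, R2, t

# Build E from a known pose whose translation has length 10, not 1.
t_true = np.array([6.0, 8.0, 0.0])         # |t_true| = 10
R_true = np.eye(3)
tx = np.array([[0, -t_true[2], t_true[1]],
               [t_true[2], 0, -t_true[0]],
               [-t_true[1], t_true[0], 0]])
E = tx @ R_true

R1, R2, t = decompose_E(E)
print(np.linalg.norm(t))                   # 1.0 up to floating-point error
print(np.allclose(np.abs(t), np.abs(t_true) / 10))  # True: the scale is lost
```

In practice cv2.recoverPose() wraps this decomposition plus a cheirality check to pick the one (R, t) pair out of the four combinations that places the triangulated points in front of both cameras; the absolute scale of t has to come from another source (a known baseline, an IMU, etc.).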