Ask Your Question

findEssentialMat or decomposeEssentialMat do not work correctly

asked 2016-03-14 04:24:51 -0500

Kolyan

I generated 3D points, projected them to two cameras (dst and src) with known poses, and tried to recover the camera positions.
The dst camera has no rotation or translation, so one of the rotations returned by decomposeEssentialMat should be the src rotation.
However, the rotations and translation returned by decomposeEssentialMat are both completely incorrect.

import cv2
import numpy as np

objectPoints = np.float64([[-1,-1,5],[1,-1,5],[1,1,5],[-1,1,5],[0,0,0],[0,0,5]])

srcRot = np.float64([[0,0,1]])
srcT = np.float64([[0.5,0.5,-1]])
dstRot = np.float64([[0,0,0]])
dstT = np.float64([[0,0,0]])

cameraMatrix = np.float64([[1,0,0],
                           [0,1,0],
                           [0,0,1]])

srcPoints = cv2.projectPoints(objectPoints,srcRot,srcT,cameraMatrix,None)[0]
dstPoints = cv2.projectPoints(objectPoints,dstRot,dstT,cameraMatrix,None)[0]
E = cv2.findEssentialMat(srcPoints,dstPoints)[0]

R1,R2,t = cv2.decomposeEssentialMat(E)
print(cv2.Rodrigues(R1)[0])
print(cv2.Rodrigues(R2)[0])
print(t)

The result for R and t:

R1=[[-2.8672671 ]
 [ 0.82984579]
 [ 0.12698814]]
R2=[[ 0.84605365]
 [ 2.92326821]
t=[[  8.47069335e-04]
 [ -3.75356183e-03]
 [ -9.99992597e-01]]

The rotations are correct only when the cameras are at the same height, but the direction is always wrong. Is it a bug or my mistake?



I have the same problem. Does anyone know if it is a bug? I use the C++ code.

ozgunus ( 2016-06-21 05:40:36 -0500 )

2 answers


answered 2018-01-20 22:42:01 -0500

supersolver

You are almost there. Just a couple things you need to tweak to get it to work.

  1. I recommend using more input points. The five-point algorithm used in calculating the essential matrix involves finding the roots of a tenth-degree polynomial. If you were using only five input points you would get multiple possible solutions. The bare minimum to get only one solution is six points, as you have done. However, I had to double the number of points you were using to get any decent results. A little bit of depth variation would also be helpful (though not mandatory) since all your points are coplanar.
  2. You need to pick a good threshold for RANSAC (the default method) to actually work. The OpenCV documentation shows that the default threshold for RANSAC is 1.0, which in my opinion is a bit large even when using pixel coordinates. If you were using pixel coordinates I would recommend using something around 0.1 pixels. However, when using normalized image coordinates as you are doing, you should pick something even smaller, like 1e-4. The threshold you are using of 1.0 in this case corresponds to 45 degrees from the optical axis. Such a large threshold will permit many possible essential matrices for which every point is an inlier and no way to distinguish between them. If you aren't sure how to pick a good threshold, try using LMEDS instead. It doesn't require a threshold and has comparable computation time.
  3. One of your points is invalid: it sits at the origin of one camera view, and in the other view it is behind the camera. I assume you were trying to make the camera move backwards along the z axis with the -1. However, the translation vector's direction is counterintuitive: it is the direction the points translate in the camera frame, not the direction the camera moves. This post explains the concept in more detail.

answered 2018-10-29 03:39:48 -0500

Rengao Zhou

Please avoid testing with made-up data, since there are many details to get right.

In your case, points [0,0,0] and [0,0,5] are ineffective. Neither would be properly seen by the cameras, so projecting them is meaningless. Besides, points on the same plane are not useful for calculating the essential matrix. The minimum number of points for the essential matrix is five, though usually more are needed in practice; the algorithm uses RANSAC to eliminate outliers.

What's more, keep in mind that the translation vector you get from decomposeEssentialMat() is a unit vector. "By decomposing E, you can only get the direction of the translation, so the function returns unit t." (from the documentation of decomposeEssentialMat()).
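Because only the direction survives the decomposition, any comparison against the ground truth has to normalize both vectors first (and tolerate a sign flip, since E determines t only up to sign). A small sketch; the helper name direction_matches is my own, illustrative:

```python
import numpy as np

def direction_matches(t_est, t_true, tol=1e-6):
    """Compare two translations up to scale and sign."""
    a = np.ravel(t_est) / np.linalg.norm(t_est)
    b = np.ravel(t_true) / np.linalg.norm(t_true)
    return min(np.linalg.norm(a - b), np.linalg.norm(a + b)) < tol

# Ground-truth translation from the question's setup
t_true = np.float64([0.5, 0.5, -1.0])

# Scale is irrelevant: a rescaled copy still matches
print(direction_matches(2.0 * t_true, t_true))   # prints True
```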


Question Tools



Asked: 2016-03-14 04:24:51 -0500

Seen: 5,822 times

Last updated: Oct 29 '18