RecoverPose default arguments

asked 2020-06-26 08:14:07 -0500

ShrutheeshIR

updated 2020-06-27 05:20:58 -0500

Hello. I asked this question on Stack Overflow and was directed here, as I would get a better answer.

I am trying to perform ego-motion estimation. Once I have obtained my correspondence points, I use cv2.findEssentialMat followed by cv2.recoverPose.

It is the arguments of cv2.recoverPose that I have a doubt about. When I just call cv2.recoverPose(E, kp1, kp2) without specifying the camera intrinsics, the code runs without any error.

However, looking at the function declaration of cv2.recoverPose, it has the camera intrinsics as optional parameters. What is the use of this? If I do not pass the camera intrinsics, what default values are chosen?

There were two things I noticed upon running experiments with and without passing the camera intrinsics to cv2.recoverPose. Details of the experimental setup: a camera mounted on a car (similar to the KITTI dataset, if you are familiar with it). A video sequence of 1000 frames is obtained, and I compute feature correspondences between sequential images.

  1. In the first case (no camera intrinsics passed), quite a lot of translation signs were reversed. Since my camera is mounted on a vehicle and there is always forward motion, I simply multiply my translation by -1 whenever the forward component is negative. While this is not a good approach, it works as a start.
  2. The second bizarre thing I noticed was that when I passed my camera intrinsics, far fewer translation values were negative, which should be a good sign. However, the trajectory is largely incorrect, i.e. it performs poorly compared to not passing the camera intrinsics. Upon further inspection, I noticed that the number of points that pass the cheirality check is zero or nearly zero, and I wonder why that is happening. (In the first case, where I did not pass the camera intrinsics, the number of points returned is large and comparable to the number of inliers found in the previous step of computing the essential matrix, i.e. on the order of 100s.)

Why is this happening? Any explanation would be appreciated, as I am really stuck here. Should I pass the camera intrinsics to recoverPose? If so, why am I getting incorrect values?

Here is what I think might have happened: when the camera intrinsics are not passed, I am guessing that decomposeEssentialMat gives 4 possible solutions, one of which has the wrong sign and is being picked in this case. However, that doesn't explain my observation of poorer performance when passing the camera intrinsics.

