Extracting the Essential matrix from the Fundamental matrix

asked 2019-03-04 11:56:14 -0600

HYPEREGO

updated 2019-04-03 10:08:02 -0600

Hello everybody,

today I have a question for you all. First of all, I have already searched this forum, the OpenCV forum, and so on. The answer is probably in one of those threads, but at this point I need some clarification, which is why I'm here with my question.

INTRODUCTION

I'm implementing an algorithm that recovers part of the calibration of a stereo pair, enough to rectify the images properly (to be more clear, it estimates the extrinsic parameters). Most of my pipeline is pretty standard and can be found around the web. Note that I don't want to recover the full calibration, only most of it. Since I'm working with the KITTI dataset (http://www.cvlibs.net/publications/Ge...), I assume that K_00, K_01, D_00, D_01 (the camera intrinsics, given in their calibration file) are known, so the camera matrices and the distortion coefficients are known.

I do the following:

  • Starting from the raw distorted images, apply undistortion using the intrinsics.
  • Extract feature points from the left and right images.
  • Match them using a matcher (FLANN or BFMatcher or whatever).
  • Filter the matched points with an outlier-rejection algorithm (I checked the result visually).
  • Call findFundamentalMat to retrieve the fundamental matrix (with LMedS, since most of the outliers were already filtered in the previous step); a sketch of these steps is below.
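
For reference, the sketch below is roughly what I do (with ORB, a BFMatcher, and Lowe's ratio test as placeholders; the exact detector/matcher doesn't matter, and leftUndistorted / rightUndistorted are just my names for the undistorted input images):

cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
std::vector<cv::KeyPoint> kp1, kp2;
cv::Mat desc1, desc2;
orb->detectAndCompute(leftUndistorted, cv::noArray(), kp1, desc1);
orb->detectAndCompute(rightUndistorted, cv::noArray(), kp2, desc2);

cv::BFMatcher matcher(cv::NORM_HAMMING);
std::vector<std::vector<cv::DMatch>> knn;
matcher.knnMatch(desc1, desc2, knn, 2);

std::vector<cv::Point2f> pts1, pts2;
for (const auto& m : knn)  // Lowe's ratio test as the outlier filter
    if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance) {
        pts1.push_back(kp1[m[0].queryIdx].pt);
        pts2.push_back(kp2[m[0].trainIdx].pt);
    }

// LMedS assumes less than 50% outliers, which holds after the ratio test
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_LMEDS);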

If I calculate the error of the point correspondences by checking x'^T * F * x = 0, the result seems good (less than 0.1), so I suppose everything is OK; there are plenty of examples around the web doing exactly this, so nothing new.
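
Concretely, the check I run looks like this (reusing pts1, pts2 and F from the sketch above; I take the absolute value so that residuals of opposite sign don't cancel out):

double meanErr = 0.0;
for (size_t i = 0; i < pts1.size(); ++i) {
    cv::Mat x1 = (cv::Mat_<double>(3, 1) << pts1[i].x, pts1[i].y, 1.0);
    cv::Mat x2 = (cv::Mat_<double>(3, 1) << pts2[i].x, pts2[i].y, 1.0);
    meanErr += std::abs(cv::Mat(x2.t() * F * x1).at<double>(0, 0));  // |x'^T F x|
}
meanErr /= static_cast<double>(pts1.size());
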
Since I want to rectify the images, I need the essential matrix.

THE PROBLEM

First of all, I obtain the essential matrix by simply applying formula (9.12) from the HZ book (page 257), E = K'^T F K:

cv::Mat E = K_01.t() * fundamentalMat * K_00;
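
As a sanity check (HZ, Result 9.17: a 3x3 matrix is an essential matrix if and only if two of its singular values are equal and the third is zero), I can also inspect the singular values; a sketch:

cv::SVD svd(E);
std::cout << "singular values of E: " << svd.w.t() << std::endl;
// for a valid E: w(0) ~ w(1) and w(2) ~ 0; large deviations mean
// F and the K matrices are inconsistent with each other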

I then normalize the coordinates to verify the quality of E. Given two corresponding points (matched1 and matched2), the normalization goes as follows (I obviously apply it to both sets of inliers that I've found; this is an example for one point):

cv::Mat _1 = cv::Mat(3, 1, CV_32F);
_1.at<float>(0, 0) = matched1.x;
_1.at<float>(1, 0) = matched1.y;
_1.at<float>(2, 0) = 1.f;

// note: K_00 must also be CV_32F here, otherwise the multiplication asserts at runtime
cv::Mat normalized_1 = K_00.inv() * _1;

So now I have the essential matrix and the normalized coordinates (which I can convert to Point3f or other structures if needed), so I can verify the relationship x'^T * E * x = 0 (HZ page 257, formula 9.11), iterating over all the normalized coordinates:

cv::Mat residual = normalized_2.t() * E * normalized_1;  // a 1x1 matrix
residual_value += std::abs(cv::sum(residual)[0]);        // |.|, so signed residuals don't cancel

On every execution of the algorithm the value of the fundamental matrix changes slightly, as expected (and the mean error, as mentioned above, is always around 0.01), while the essential matrix... changes a lot!
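
Could it just be the scale ambiguity? Both F and E are only defined up to scale, so maybe I should compare normalized versions instead; a sketch of what I mean (assuming E is CV_64F):

cv::Mat En = E / cv::norm(E);           // cv::norm on a Mat defaults to the Frobenius norm
if (En.at<double>(2, 2) < 0) En = -En;  // fix the overall sign so runs are comparable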

I tried to decompose the matrix using the ...


Comments

If I simply convert the point from cv::Point2i to cv::Point2f the result doesn't change, I get the same fundamental matrix.

they're converted to float32 internally, anyway.

berak ( 2019-03-04 12:26:31 -0600 )

Indeed, same as for the 8-point algorithm. So that can't be the problem in my algorithm. Regarding the SVD, do you know how it is performed in OpenCV?

HYPEREGO ( 2019-03-05 03:29:48 -0600 )

all i'm saying is: you should rule out the type problem, it's not relevant

berak ( 2019-03-05 03:32:53 -0600 )

I've edited the question, thank you for the reply :)

HYPEREGO ( 2019-03-05 03:45:14 -0600 )

Yes, the formula is correct: E = K2^T * F * K1. See also here.

Eduardo ( 2019-03-12 11:50:02 -0600 )

Hi Eduardo, thank you for the reply. I've edited my question to add more details, since today I found that the problem is not in the SVD but in the essential matrix. Any help is really appreciated, ideally with some references :) Thanks in advance to everybody who helps!

HYPEREGO ( 2019-03-12 12:23:28 -0600 )

In this answer you can find some code I wrote to play with the fundamental / essential matrix.

The idea is the following:

  • generate two viewpoints (the transformation between the left and right cameras is known)
  • estimate the fundamental matrix
  • get the essential matrix
  • recover the transformation (R and t) between the left/right cameras from the essential matrix (see the sketch below)
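
In OpenCV, that last step would be something like this (a sketch, assuming normPts1 / normPts2 hold your normalized coordinates, so the camera matrix passed to recoverPose() can be the identity):

cv::Mat R1, R2, t;
cv::decomposeEssentialMat(E, R1, R2, t);  // four candidate poses: (R1, +/-t), (R2, +/-t)

// recoverPose() resolves the ambiguity with a cheirality check
// (triangulated points must lie in front of both cameras); t is up to scale
cv::Mat R, tvec, inlierMask;
cv::recoverPose(E, normPts1, normPts2, cv::Mat::eye(3, 3, CV_64F), R, tvec, inlierMask);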
Eduardo ( 2019-03-12 15:12:44 -0600 )

Thank you again Eduardo. I've already seen your post (thanks for being so thorough), and in fact I compute the mean error of my fundamental matrix exactly as you do in your example (I copied and pasted the code, eheh): the value I get is usually under 0.1. Just now, after one execution, I got 0.05 as the mean error, sometimes 0.018. Is this measure in pixels? findEssentialMat (and most OpenCV functions in the same module) assumes that the two camera matrices are identical, which isn't my case (with real cameras they can't be the same...), so I can't use it, but luckily your function will help me. However, if I compute the essential matrix using the formula mentioned above, the matrix is very different on every execution, and this leads me to a bad solution in the end.

HYPEREGO ( 2019-03-13 05:46:23 -0600 )

A colleague pointed out a new question regarding this problem. I undistort the points first, and I've seen that another camera matrix can be provided to that function; I think that's what I'm missing, but I can't figure out which matrix I need. Also, is it correct to perform the undistortion before the feature matching? I have three books on computer vision (HZ; Kanatani et al.; Learning OpenCV) but I still don't understand which camera matrix to use, and when...

HYPEREGO ( 2019-04-03 09:59:17 -0600 )

Are you using undistortPoints()?

Looking at the equations: an image ray is projected onto the normalized camera plane, distorted according to the estimated radial and tangential distortion coefficients, and then projected onto the image plane with the focal length and the principal point.

To undistort, the reverse perspective projection is applied with cameraMatrix: the points, still distorted, are expressed in the normalized camera frame and then undistorted there. To get the points back in image coordinates, you have to pass the same camera matrix as P, in my opinion.
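
In code, the two variants look like this (a sketch; pts are your raw pixel matches, K and D the camera matrix and distortion coefficients from the calibration file):

std::vector<cv::Point2f> normPts, pixPts;

// no P: the output is undistorted AND normalized (K removed) --
// these coordinates are the ones to test against the essential matrix
cv::undistortPoints(pts, normPts, K, D);

// P = K: the output is undistorted but mapped back to pixel coordinates --
// these are what findFundamentalMat expects
cv::undistortPoints(pts, pixPts, K, D, cv::noArray(), K);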

Eduardo ( 2019-04-04 03:21:10 -0600 )

What I've done so far: load the images and undistort them using the radial/tangential distortion coefficients from the calibration. After that I compute the corresponding points and then call findFundamentalMat, using the same camera matrix used for the undistortion. But the points at the end look far too distorted, so I suppose the undistortion is applied twice. So is the correct pipeline: load images, compute corresponding points, undistort them, and call findFundamentalMat? And regarding the SVD and the singular values, do you have any suggestions?
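
The alternative pipeline I mean would be something like this (a sketch; pts1 / pts2 are matches found on the raw distorted images, and passing each K as P keeps the results in pixel coordinates, so the images themselves are never undistorted):

std::vector<cv::Point2f> pts1U, pts2U;
cv::undistortPoints(pts1, pts1U, K_00, D_00, cv::noArray(), K_00);
cv::undistortPoints(pts2, pts2U, K_01, D_01, cv::noArray(), K_01);
cv::Mat F = cv::findFundamentalMat(pts1U, pts2U, cv::FM_LMEDS);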

HYPEREGO ( 2019-04-04 04:17:59 -0600 )