
Revision history

initial version

findFundamentalMat and E extraction: input points must be float or double. Are we sure?

Hello everybody,

today I have a question for you all. I'm implementing an algorithm to recover the fundamental matrix. Most of my pipeline is pretty standard and can be found around the web:

  • Extract corresponding points from the images
  • Match them using a matcher (FLANN or BFMatcher)
  • Extract the matched points and put them in two std::vector<cv::Point2i>
  • Call findFundamentalMat to retrieve the fundamental matrix; I call it with LMedS since I do some outlier removal before that (a sketch of these steps follows this list)
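
To make the pipeline concrete, here is a minimal sketch of these steps (ORB features, a brute-force matcher and placeholder file names; this is just an illustration, not my exact code):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Placeholder file names, just for the example.
    cv::Mat img1 = cv::imread("left.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Detect and describe features (ORB chosen only for the example).
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Brute-force matching (Hamming norm for binary ORB descriptors).
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // Collect the matched coordinates. KeyPoint::pt is already a Point2f,
    // so no explicit integer-to-float conversion is needed here.
    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Estimate F with LMedS; 'mask' flags the correspondences kept as inliers.
    cv::Mat mask;
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_LMEDS, 3.0, 0.99, mask);
    std::cout << "F = " << F << std::endl;
    return 0;
}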

Now comes the question. The official documentation says that the input matching points should be float or double. In my case the points are given as integers, since each point is described by its x and y pixel values, which I assume are integers (maybe I'm wrong about that?). Converting them to float or double just to satisfy the documentation seems pointless to me, but it leads me to this question: do the points need to be normalized before I call this function?

I know that normalized point coordinates give more precision and stability in the eight-point algorithm; does this also apply to the LMedS case? Because, as far as I remember, the eight-point algorithm itself performs the normalization internally inside findFundamentalMat. If it is necessary, how can I perform the normalization?
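
If normalization turns out to be necessary, a minimal sketch of the usual Hartley-style normalization from HZ (translate the points so their centroid is at the origin, then scale them so the mean distance from the origin is sqrt(2)); the function name is a placeholder:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Hartley-style normalization. Returns the 3x3 transform T such that
// x_normalized = T * x in homogeneous coordinates.
cv::Mat normalizePoints(const std::vector<cv::Point2f>& pts,
                        std::vector<cv::Point2f>& normalized)
{
    cv::Point2f centroid(0.f, 0.f);
    for (const cv::Point2f& p : pts) centroid += p;
    centroid *= 1.f / static_cast<float>(pts.size());

    double meanDist = 0.0;
    for (const cv::Point2f& p : pts) {
        const double dx = p.x - centroid.x;
        const double dy = p.y - centroid.y;
        meanDist += std::sqrt(dx * dx + dy * dy);
    }
    meanDist /= static_cast<double>(pts.size());

    const double s = std::sqrt(2.0) / meanDist;

    normalized.clear();
    for (const cv::Point2f& p : pts)
        normalized.emplace_back(s * (p.x - centroid.x), s * (p.y - centroid.y));

    cv::Mat T = (cv::Mat_<double>(3, 3) << s, 0, -s * centroid.x,
                                           0, s, -s * centroid.y,
                                           0, 0, 1);
    return T;
}

If F is estimated from points normalized with T1 and T2, it then has to be denormalized as F = T2^T * F_norm * T1; as far as I know, OpenCV's eight-point path already performs an equivalent normalization internally, so doing it again by hand shouldn't be strictly necessary.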

Just to be more precise, I'm asking this because if I simply convert the points from cv::Point2i to cv::Point2f the result doesn't change; I get the same fundamental matrix.

Another question that I have: I want to extract the essential matrix from F. I applied all the rules, following the algorithm provided in another question HERE, which I rewrote in C++. Is SVD in OpenCV very different from the one in Python? Which flag do I have to pass to get the same behavior? Every time I get completely different and strange results, and this drives me crazy.
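
For comparison, a minimal example of calling OpenCV's SVD (the matrix values are placeholders). As far as I know, cv::SVD returns the singular values as a descending column vector and vt is already V transposed, the same convention as the third output of numpy.linalg.svd, so apart from the FULL_UV flag and possible sign flips of the singular vectors (the SVD is only unique up to sign) the two should be comparable:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Placeholder 3x3 matrix standing in for E; the values are made up.
    cv::Mat E = (cv::Mat_<double>(3, 3) <<
                  0.0, -1.0,  0.2,
                  1.0,  0.0, -0.5,
                 -0.2,  0.5,  0.0);

    // w: singular values (descending) as a column vector.
    // u, vt: E = u * diag(w) * vt, with vt already transposed.
    cv::Mat w, u, vt;
    cv::SVD::compute(E, w, u, vt, cv::SVD::FULL_UV);

    std::cout << "singular values: " << w.t() << std::endl;
    return 0;
}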

Probably I'm also using the wrong camera matrix. For the KITTI dataset I'm using the two camera matrices provided in the calibration file, K_00 and K_01, and I compute E as:

E = K_01.t() * F * K_00

Is that correct?

most recent version

Extracting the Essential matrix from the Fundamental matrix

Hello everybody,

today I have a question for you all. First of all, I've searched across this forum, the OpenCV forum, and so on. The answer is probably in one of them, but at this point I need some clarification, which is why I'm here with my question.

INTRODUCTION

I'm implementing an algorithm to recover the calibration of the cameras, able to rectify the images properly (to be clearer: estimating the extrinsic parameters). Most of my pipeline is pretty standard and can be found around the web. Obviously, I don't want to recover the full calibration, only most of it. For instance, since I'm currently working with the KITTI dataset (http://www.cvlibs.net/publications/Geiger2013IJRR.pdf), I assume that I know the values of K_00, K_01, D_00, D_01 (the camera intrinsics, given in their calibration file), so the camera matrices and the distortion coefficients are known.
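
Just to illustrate how those intrinsics could be loaded, a sketch; the helper name and the exact layout of the calibration file (a "K_00:" line with 9 numbers, a "D_00:" line with 5) are assumptions on my side, so adjust it to the real calib_cam_to_cam.txt:

#include <opencv2/core.hpp>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Read one whitespace-separated entry (e.g. "K_00:" followed by 9 numbers)
// from a KITTI-style calibration file and reshape it to the given row count.
static cv::Mat readCalibEntry(const std::string& path,
                              const std::string& key, int rows)
{
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (line.compare(0, key.size(), key) != 0) continue;
        std::istringstream ss(line.substr(key.size()));
        std::vector<double> vals;
        double v;
        while (ss >> v) vals.push_back(v);
        return cv::Mat(vals, true).reshape(1, rows);  // deep copy, rows x N/rows
    }
    return cv::Mat();  // key not found
}

// Usage (hypothetical path):
// cv::Mat K_00 = readCalibEntry("calib_cam_to_cam.txt", "K_00:", 3);  // 3x3
// cv::Mat D_00 = readCalibEntry("calib_cam_to_cam.txt", "D_00:", 1);  // 1x5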

I do the following:

  • Starting from the raw distorted images, I apply the undistortion using the intrinsics (see the sketch right after this list)
  • Extract corresponding points from the left and right images
  • Match them using a matcher (FLANN or BFMatcher or whatever)
  • Filter the matched points with an outlier rejection algorithm (I checked the result visually)
  • Call findFundamentalMat to retrieve the fundamental matrix (I call it with LMedS since I've already filtered most of the outliers in the previous step)
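
A minimal sketch of that undistortion step (the intrinsic values and file names are placeholders, not the real KITTI calibration):

#include <opencv2/opencv.hpp>

int main() {
    // Placeholder intrinsics; in practice K_00 and D_00 come from the
    // KITTI calibration file, not from these made-up numbers.
    cv::Mat K_00 = (cv::Mat_<double>(3, 3) <<
                    984.2, 0.0,   690.0,
                    0.0,   980.8, 233.2,
                    0.0,   0.0,   1.0);
    cv::Mat D_00 = (cv::Mat_<double>(1, 5) << -0.37, 0.20, 0.0, 0.0, -0.07);

    cv::Mat raw = cv::imread("left_raw.png", cv::IMREAD_GRAYSCALE);

    // Undistort the whole image before feature extraction.
    cv::Mat undistorted;
    cv::undistort(raw, undistorted, K_00, D_00);

    // Alternatively one could keep the raw images and undistort only the
    // matched points with cv::undistortPoints (which also returns normalized
    // coordinates unless a new camera matrix is passed as the P argument).
    return 0;
}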

If I compute the error of the point correspondences by applying x'^T * F * x = 0, the result looks good (less than 0.1), and I assume everything is fine, since there are plenty of examples around the web of doing exactly that, so nothing new (a sketch of that check is below).
Since I want to rectify the images, I need the essential matrix.
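
A minimal sketch of that check (the function name and the pts1/pts2 inlier containers are placeholder names, just to illustrate):

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Mean algebraic epipolar error |x'^T * F * x| over all correspondences.
// F is expected to be CV_64F, as returned by cv::findFundamentalMat.
double meanEpipolarError(const cv::Mat& F,
                         const std::vector<cv::Point2f>& pts1,
                         const std::vector<cv::Point2f>& pts2)
{
    double total = 0.0;
    for (size_t i = 0; i < pts1.size(); ++i) {
        cv::Mat x1 = (cv::Mat_<double>(3, 1) << pts1[i].x, pts1[i].y, 1.0);
        cv::Mat x2 = (cv::Mat_<double>(3, 1) << pts2[i].x, pts2[i].y, 1.0);
        cv::Mat r = x2.t() * F * x1;            // 1x1 result
        total += std::abs(r.at<double>(0, 0));
    }
    return total / static_cast<double>(pts1.size());
}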

THE PROBLEM

First of all, I obtain the essential matrix by simply applying formula (9.12) from the HZ book (page 257):

cv::Mat E = K_01.t() * fundamentalMat * K_00;

I then normalize the coordinates to verify the quality of E. Given two corresponding points (matched1 and matched2), I do the normalization as follows (obviously I apply it to the two sets of inliers that I've found; this is an example of what I do):

// Homogeneous pixel coordinate [x, y, 1]^T; its depth must match K_00
// (use CV_64F here if K_00 is stored as double).
cv::Mat _1 = cv::Mat(3, 1, CV_32F);
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;

// Normalized (calibrated) coordinate: x_hat = K^-1 * x
cv::Mat normalized_1 = (K_00.inv()) * _1;

So now I have the essential matrix and the normalized coordinates (I can eventually convert them to Point3f or other structures), so I can verify the relationship x'^T * E * x = 0 (HZ page 257, formula 9.11) (I iterate over all the normalized coordinates):

// x'^T * E * x, which should be close to zero for a good correspondence
cv::Mat residual = normalized_2.t() * E * normalized_1;
// residual is a 1x1 matrix; accumulate its value over all inliers
residual_value += cv::sum(residual)[0];

On every execution of the algorithm the value of the fundamental matrix changes slightly, as expected (but the mean error, as mentioned above, is always around 0.01), while the essential matrix... changes a lot!

I tried to decompose the matrix using the OpenCV SVD implementation (I understand it is not the best; for that reason I'll probably switch to LAPACK for this, any suggestion?) and again the constraint that the two non-zero singular values must be equal is not respected, and this drives my whole algorithm to a completely wrong estimation of the rectification.
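
For reference, a minimal sketch of inspecting the singular values and, if one wants, projecting E onto the closest valid essential matrix by replacing the two non-zero singular values with their mean (the standard trick from HZ); the helper name is a placeholder and this is not necessarily the right fix for my problem:

#include <opencv2/core.hpp>
#include <iostream>

// A valid essential matrix has two equal non-zero singular values and one
// zero singular value. This prints them and returns E projected onto the
// closest matrix satisfying the constraint (E is assumed to be CV_64F).
cv::Mat enforceEssentialConstraint(const cv::Mat& E)
{
    cv::Mat w, u, vt;
    cv::SVD::compute(E, w, u, vt);
    std::cout << "singular values of E: " << w.t() << std::endl;

    const double s = 0.5 * (w.at<double>(0) + w.at<double>(1));
    cv::Mat S = cv::Mat::zeros(3, 3, CV_64F);
    S.at<double>(0, 0) = s;
    S.at<double>(1, 1) = s;   // the third singular value is forced to zero

    return u * S * vt;        // corrected essential matrix
}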

I would also like to test this algorithm with images produced by my own cameras (I have two Allied Vision cameras), but I'm waiting for a high-quality chessboard, so the KITTI dataset is my starting point.

EDIT: one previous error was in the formula; I was computing the residual of E as x^T * E * x' = 0 instead of x'^T * E * x = 0. This is now fixed and the residual error of E looks good, but the essential matrix I get each time is still very different... And after the SVD, the two singular values don't look as similar as they should.

EDIT: these are the differing SVD singular values:

cv::SVD produces this result:

133.70399
127.47910
0.00000

while Eigen::SVD produces the following:

1.00777
0.00778
0.00000

Okay, maybe this is not an OpenCV-related problem, for sure, but any help is more than welcome.