OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers. Copyright OpenCV foundation, 2012-2018. Mon, 04 Mar 2019 11:56:14 -0600

Extracting the Essential matrix from the Fundamental matrix
http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/

Hello everybody,
today I have a question for you all.
First of all, I've searched across this forum, the OpenCV forum and so on. The answer is probably in one of those threads, but at this point I need some clarification, and that's why I'm here with my question.
**INTRODUCTION**
I'm implementing an algorithm that recovers the **calibration** of the cameras well enough to rectify the images (to be more clear, it estimates the extrinsic parameters). Most of my pipeline is pretty easy and can be found around the web. Obviously, I don't want to recover the full calibration, just most of it. For instance, since I'm currently working with the KITTI dataset (http://www.cvlibs.net/publications/Geiger2013IJRR.pdf), I assume that I know the values of **K_00**, **K_01**, **D_00**, **D_01** (the camera intrinsics, given in their calibration file), so the camera matrices and the distortion coefficients are known.
I do the following:
- Starting from the raw distorted images, I apply the undistortion using the intrinsics.
- Extract corresponding points from the **Left** and **Right** images
- Match them using a matcher (FLANN or BFMatcher or whatever)
- Filter the matched points with an outlier rejection algorithm (I checked the result visually)
- Call **findFundamentalMat** to retrieve the fundamental matrix (I call with LMedS since I've already filtered most of the outliers in the previous step)
If I calculate the error of the point correspondences by checking `x'^T * F * x = 0`, the result seems to be good (less than 0.1), and I suppose that everything is OK, since there are a lot of examples around the web doing exactly that, so nothing new.
Since I want to rectify the images, I need the essential matrix.
**THE PROBLEM**
First of all, I obtain the Essential matrix simply by applying formula (9.12) in the HZ book (page 257):
cv::Mat E = K_01.t() * fundamentalMat * K_00;
I then normalize the coordinates to verify the quality of E.
Given two corresponding points (matched1 and matched2), I do the normalization as follows (obviously I apply this to the two sets of inliers that I've found; this is an example of what I do):
cv::Mat _1 = cv::Mat(3, 1, CV_32F);
_1.at<float>(0,0) = matched1.x;
_1.at<float>(1,0) = matched1.y;
_1.at<float>(2,0) = 1;
cv::Mat normalized_1 = (K_00.inv()) * _1;
So now I have the Essential Matrix and the normalized coordinates (I can eventually convert them to Point3f or other structures), so I can verify the relationship `x'^T * E * x = 0` *(HZ page 257, formula 9.11)*, iterating over all the normalized coordinates:
cv::Mat residual = normalized_2.t() * E * normalized_1;
residual_value += cv::sum(residual)[0];
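The whole check can be sketched end-to-end with synthetic data (a numpy-only sketch; the intrinsics below are made up and are not the KITTI values): build a known pose, form E = [t]x R and F = K'^-T E K^-1, project points into both views, and verify both the pixel residual x'^T F x and the normalized residual x'^T E x.

```python
import numpy as np

# Hypothetical intrinsics standing in for K_00 / K_01 (not the KITTI values).
K0 = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
K1 = np.array([[720.0, 0.0, 310.0], [0.0, 720.0, 250.0], [0.0, 0.0, 1.0]])

# Ground-truth relative pose: small rotation about y, translation mostly along x.
angle = 0.05
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
t = np.array([1.0, 0.1, 0.0])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

E = tx @ R                                        # essential matrix
F = np.linalg.inv(K1).T @ E @ np.linalg.inv(K0)   # fundamental matrix (so E = K1^T F K0)

# Project random 3D points into both views to get exact correspondences.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
x0 = (K0 @ X.T).T
x0 /= x0[:, 2:]                                   # pixels in view 0, homogeneous
X1 = (R @ X.T).T + t
x1 = (K1 @ X1.T).T
x1 /= x1[:, 2:]                                   # pixels in view 1, homogeneous

# Pixel-coordinate epipolar residual x1^T F x0 (should be ~0).
res_F = np.einsum('ij,jk,ik->i', x1, F, x0)

# Normalize exactly as in the question: xhat = K^-1 x, then check x1hat^T E x0hat.
x0n = (np.linalg.inv(K0) @ x0.T).T
x1n = (np.linalg.inv(K1) @ x1.T).T
res_E = np.einsum('ij,jk,ik->i', x1n, E, x0n)

print(np.abs(res_F).max(), np.abs(res_E).max())   # both near machine precision
```

With noise-free correspondences both residuals vanish up to floating point, which is a useful sanity check before running the same test on real matches.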
On every execution of the algorithm, the value of the Fundamental Matrix changes **slightly**, as expected (and the mean error, as mentioned above, is always around 0.01), while the Essential Matrix... changes a lot!
I tried to decompose the matrix using the OpenCV SVD implementation (I understand it's not the best, so I'll probably switch to LAPACK for this; any suggestion?) and here again the constraint that the two singular values must be equal is not respected, and this drives my whole algorithm to a completely wrong estimate of the rectification.
I would like to test this algorithm with images produced by my own cameras as well (I have two Allied Vision cameras), but I'm waiting for a high-quality chessboard, so the KITTI dataset is my starting point.
**EDIT** One previous error was in the formula: I calculated the residual of E as `x^T * E * x' = 0` instead of `x'^T * E * x = 0`. This is now fixed and the residual error of E seems to be good, but the Essential matrix I get each time is very different... And after the SVD, the two singular values don't look as similar as they should.
**EDIT** Here are the differing SVD singular values.
cv::SVD produces this result:
>133.70399
>127.47910
>0.00000
while Eigen::SVD produces the following:
>1.00777
>0.00778
>0.00000
Okay, maybe this is not an OpenCV-related problem, but any help is more than welcome.

HYPEREGO, Mon, 04 Mar 2019 11:56:14 -0600, http://answers.opencv.org/question/209787/

Wrong rank in Fundamental Matrix
http://answers.opencv.org/question/204100/wrong-rank-in-fundamental-matrix/

Hi guys,
I'm using OpenCV for Python 3 and, based on the Mastering OpenCV book, I'm trying to compute the epipoles from many images (a Structure from Motion algorithm).
Many books say that the Fundamental Matrix has rank 2, but the OpenCV function returns a rank-3 matrix.
How can I make this right?
import cv2
import numpy as np

orb = cv2.ORB_create()
# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # ORB descriptors are binary, so Hamming distance
# Match descriptors.
matches = bf.match(des1,des2)
# Sort them in the order of their distance.
matches = sorted(matches, key = lambda x:x.distance)
pts1 = []
pts2 = []
for m in matches:
    pts1.append(kp1[m.queryIdx].pt)
    pts2.append(kp2[m.trainIdx].pt)
pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
# keep only the inliers
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]
# F is the Fundamental Matrix
From that code, the output looks like:
Processing image 0 and image 1
rank of F: 3
Processing image 0 and image 2
rank of F: 3
Processing image 0 and image 3
rank of F: 3
Processing image 0 and image 4
rank of F: 2
[...]
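For reference, the standard remedy (used by the 8-point algorithm itself) is to project the estimate onto the rank-2 manifold by zeroing its smallest singular value. A numpy-only sketch (the sample matrix below is arbitrary, just a stand-in for the F returned by findFundamentalMat):

```python
import numpy as np

def enforce_rank2(F):
    """Return the closest rank-2 matrix to F in Frobenius norm
    (zero the smallest singular value and recompose)."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

# An arbitrary full-rank 3x3 standing in for an estimated fundamental matrix.
F = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
F2 = enforce_rank2(F)
print(np.linalg.matrix_rank(F))   # 3
print(np.linalg.matrix_rank(F2))  # 2
```

Note that a numerically computed F is often "rank 3" only because its smallest singular value is tiny but nonzero, so checking the singular values is more informative than a bare rank test.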
Could someone help me? Does anyone have working code for SfM using OpenCV?
Thanks in advance.
Lucas Amparo Barbosa, Mon, 26 Nov 2018 11:04:12 -0600, http://answers.opencv.org/question/204100/

findFundamentalMatrix and SiftGPU
http://answers.opencv.org/question/185003/findfundamentalmatrix-and-siftgpu/

Hi all!
I'm trying to find the Fundamental Matrix with the *findFundamentalMat* function.
I generate keypoints (x,y) with Sift-GPU.
The matrix I generate is
0, 0, 0.6
0, 0, -0.3
-0.4, 0.2, 0
(Is it even possible that the diagonal is composed of 0's?)
If I use a *std::vector&lt;uchar&gt;* to look at outliers and inliers, it gives me only 0's (outliers), even if I change the algorithm used.
What I give to the function are two vectors containing the (x, y) coordinates of all correspondences (x, y are, for example, (540, 355)).
/*
... Use siftgpu
*/
std::vector<int(*)[2]> match_bufs; //Contain (x,y) from the 2 images that are paired
SiftGPU::SiftKeypoint & key1 = keys[match_bufs[i][0]];
SiftGPU::SiftKeypoint & key2 = keys[match_bufs[i][1]];
float x_l, y_l, x_r, y_r; //(x,y of left and right images)
x_l = key1.x; y_l = key1.y;
x_r = key2.x; y_r = key2.y;
vec1.push_back(x_l); vec1.push_back(y_l);
vec2.push_back(x_r); vec2.push_back(y_r);
std::vector<uchar> results;
int size = vec1.size();
results.resize(size);
std::vector<cv::Point2f> points1; //corrected
std::vector<cv::Point2f> points2;
for (int i = 0; i < size; i += 2) {
    points1.push_back(cv::Point2f(vec1[i], vec1[i + 1]));
    points2.push_back(cv::Point2f(vec2[i], vec2[i + 1]));
}
cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, results);
I tried to normalize the points to the range [0, 1], but that doesn't work either.
Am I missing something? Is there something I don't understand about the use of this function?
Thanks a lot!

KirbX, Mon, 19 Feb 2018 07:22:55 -0600, http://answers.opencv.org/question/185003/

Wrong Epipolar lines, No Visual sanity
http://answers.opencv.org/question/181730/wrong-epipolar-lines-no-visual-sanity/

Hi,
I've tried using the code given at https://docs.opencv.org/3.2.0/da/de9/tutorial_py_epipolar_geometry.html to find the epipolar lines, but instead of the output shown in the link, I am getting the following output.
![image description](/upfiles/15151486894383214.png)
but when changing the line `F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)` to
`F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_RANSAC)`, i.e. using the `RANSAC` algorithm instead of `LMEDS` to find the Fundamental matrix, this is the output:
![image description](/upfiles/15151489062631591.png)
When the same line is replaced with `F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_8POINT)`, i.e. using the eight-point algorithm, this is the output:
![image description](/upfiles/15151505056498559.png)
None of the above outputs is visually sane, nor anywhere close to the output given in the OpenCV documentation for finding epipolar lines. But ironically, if the same code is executed changing the fundamental-matrix algorithm in this particular sequence,
1. FM_LMEDS
2. FM_8POINT
3. FM_7POINT
4. FM_LMEDS
the most accurate results are generated. This is the output:
![image description](/upfiles/1515152387699454.png)
I thought we were supposed to get the above output in one run of any of the algorithms (with variations in the matrix values and error). Am I running the code incorrectly? What do I have to do to get correct (i.e. visually sane) epipolar lines? I am using OpenCV 3.3.0 and Python 2.7.
Looking forward to a reply.
Thank you.

salmankhhcu, Fri, 05 Jan 2018 06:00:03 -0600, http://answers.opencv.org/question/181730/

Inverse normalization in 8-point algorithm for fundamental matrix
http://answers.opencv.org/question/178961/inverse-normalization-in-8-point-algorithm-for-fundamental-matrix/

The 8-point algorithm for the Fundamental matrix normalizes the pixel points before solving the linear system of equations, and the solution is inverse-normalized to get the Fundamental matrix.
I am referring to the source at
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp
Lines 615 to 620
// apply the transformation that is inverse
// to what we used to normalize the point coordinates
Matx33d T1( scale1, 0, -scale1*m1c.x, 0, scale1, -scale1*m1c.y, 0, 0, 1 );
Matx33d T2( scale2, 0, -scale2*m2c.x, 0, scale2, -scale2*m2c.y, 0, 0, 1 );
F0 = T2.t()*F0*T1;
This appears to be the same as the normalization procedure. I am not able to understand how this is supposed to be the inverse of the normalization. Any help is appreciated.
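The identity behind that step can be checked numerically (a numpy-only sketch, not the OpenCV code itself; the scales and centroids below are arbitrary). If T1 and T2 are the normalizing transforms and F̂ is estimated from the normalized points, then F = T2^T F̂ T1 satisfies x2^T F x1 = (T2 x2)^T F̂ (T1 x1), i.e. it expresses the same epipolar constraint back in pixel coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary normalizing transforms of the same form as in fundam.cpp
# (scale about a centroid c, written as an affine 3x3).
scale1, scale2 = 0.01, 0.02
c1, c2 = (300.0, 200.0), (310.0, 210.0)
T1 = np.array([[scale1, 0, -scale1 * c1[0]],
               [0, scale1, -scale1 * c1[1]],
               [0, 0, 1]])
T2 = np.array([[scale2, 0, -scale2 * c2[0]],
               [0, scale2, -scale2 * c2[1]],
               [0, 0, 1]])

Fhat = rng.standard_normal((3, 3))   # an F estimated in normalized coordinates
F = T2.T @ Fhat @ T1                 # the "inverse normalization" step

x1 = np.append(rng.uniform(0, 640, 2), 1.0)   # a homogeneous pixel point, image 1
x2 = np.append(rng.uniform(0, 640, 2), 1.0)   # a homogeneous pixel point, image 2

lhs = x2 @ F @ x1                    # residual in pixel coordinates
rhs = (T2 @ x2) @ Fhat @ (T1 @ x1)   # residual in normalized coordinates
print(np.isclose(lhs, rhs))
```

So the code does not apply T1 and T2 again; it uses T2 transposed on the left, which is what undoes the normalization for a bilinear form like F (as opposed to a point, which would need T^-1).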
thens, Wed, 22 Nov 2017 22:00:34 -0600, http://answers.opencv.org/question/178961/

findFundamentalMat and drawMatches with mask
http://answers.opencv.org/question/176621/findfundamentalmat-and-drawmatches-with-mask/

I want to draw the inlier matches of my Fundamental Matrix to interpret the results.
I am using opencv 3.2 and C++.
The parameter mask in findFundamentalMat is used if one uses a method like RANSAC to determine the inliers, and it is of type `vector<uchar>`.
The documentation of drawMatches says about the parameter matchesMask: "Mask determining which matches are drawn. If the mask is empty, all matches are drawn."
But here only a `vector<char>` is accepted.
**Do I have to convert my vector from uchar to char and how do I do that?**
Here is my code:
vector<uchar> inliers(matched_points_1.size());
Mat F = findFundamentalMat(matched_points_1, matched_points_2, CV_FM_RANSAC, 3, 0.99, inliers);
Mat matched_points;
drawMatches(img_1, keypoints_1, img_2, keypoints_2, sym_matches, matched_points, Scalar::all(-1), Scalar::all(-1), inliers, 0);
imshow("Matched points", matched_points);
If I try it like that it says: "Wrong argument types."
If I change my inliers vector to `vector<char>` I get the error
> OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in cv::_OutputArray::create, file C:\opencv\opencv-master\modules\core\src\matrix.cpp, line 2461
Grillteller, Thu, 19 Oct 2017 07:30:55 -0500, http://answers.opencv.org/question/176621/

Check the quality of Fundamental Matrix and/or Homography
http://answers.opencv.org/question/174705/check-the-quality-of-fundamental-matrix-andor-homography/

Hi,
I computed some points in two images (OpenCV 3.2, C++) and used the methods findHomography and findFundamentalMat on these points. I get results, but how can I test whether my matrices are correct?
Is there a quality check I can do within OpenCV?
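One common numeric check for a fundamental matrix is the symmetric epipolar distance: how far (in pixels) each point lies from the epipolar line of its counterpart. A numpy-only sketch with a synthetic rectified-stereo F and synthetic correspondences standing in for real data:

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance (in pixels) per correspondence.
    pts1, pts2: (N, 2) arrays of pixel coordinates."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])
    x2 = np.hstack([pts2, ones])
    l2 = x1 @ F.T                            # epipolar lines in image 2 (rows are F @ x1)
    l1 = x2 @ F                              # epipolar lines in image 1 (rows are F^T @ x2)
    num = np.abs(np.sum(x2 * l2, axis=1))    # |x2^T F x1|
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])  # distance of x2 to its line in image 2
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])  # distance of x1 to its line in image 1
    return 0.5 * (d1 + d2)

# Rectified-stereo F: correspondences on the same row satisfy x2^T F x1 = 0.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
rng = np.random.default_rng(0)
pts1 = rng.uniform(0, 640, size=(50, 2))
pts2 = pts1 + np.column_stack([rng.uniform(5, 40, 50), np.zeros(50)])  # pure horizontal disparity

d_good = epipolar_distances(F, pts1, pts2)          # ~0 for a consistent F
d_bad = epipolar_distances(F, pts1, pts2 + rng.normal(0, 5, pts2.shape))
print(d_good.max(), d_bad.mean())
```

For a homography the analogous check is the reprojection error |x2 - H x1| (after dividing by the homogeneous coordinate); in both cases a mean error of a pixel or less over the inliers is usually a good sign.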
I would be thankful for some ideas, other posts or literature :).

Grillteller, Wed, 20 Sep 2017 03:39:41 -0500, http://answers.opencv.org/question/174705/

Why findFundamentalMat gives different results for same but different orientation of points?
http://answers.opencv.org/question/113067/why-findfundamentalmat-gives-different-results-for-same-but-different-orientation-of-points/

Sorry if the title is kind of weird; it is quite hard to phrase my problem as a question.
So, I am in the middle of a 3D reconstruction project. The pipeline is more or less the same as the standard pipeline:
1. Undistort image
2. Detect points with keypoint detector
3. Track the points across frames (optical flow)
4. Calculate the fundamental matrix
and so on. The only different part is step 2, where I use a Line Segment Detector and track the segments across frames.
So, if I were using a keypoint detector, given two frames, I would get two sets of keypoints (one set per frame). But in my situation, I have four sets of keypoints (two sets per frame, since a line has a start point and an end point).
In order to calculate the Fundamental matrix, I need to concatenate the two sets of points of each frame.
One way is by just vertically concatenate it: `np.vstack([start_point, end_point])`.
The other way is `np.hstack([start_point, end_point]).reshape(-1, 2)`, meaning the points are concatenated 'alternately', i.e.
[[start_point[0],
end_point[0],
start_point[1],
end_point[1],
...]]
Both end up with the same shape but, fair enough, they produce quite different results. From my observation, `vstack` produced a more '3D-like' result while `hstack` produced a more 'planar-like' result for the reconstruction.
The question is: why is this? And which one is supposed to be better?
Below is sample code to give a view of this question:
import numpy as np
import cv2
np.random.seed(0)
def prepare_points(pts_frame1, pts_frame2):
    # Prepare the four sets of points
    (p1_f1, p2_f1) = pts_frame1
    (p1_f2, p2_f2) = pts_frame2
    v_stacked_f1f2 = (np.vstack([p1_f1, p2_f1]), np.vstack([p1_f2, p2_f2]))
    h_stacked_f1f2 = (np.hstack([p1_f1, p2_f1]).reshape(-1, 2),
                      np.hstack([p1_f2, p2_f2]).reshape(-1, 2))
    return (v_stacked_f1f2, h_stacked_f1f2)
pts_frame1 = np.random.random_sample((60, 2)).astype("float32")
pts_frame2 = np.random.random_sample((60, 2)).astype("float32")
# Emulate the two sets of points for each frame where
# the first set is the start point, while
# the second set is the end point of a line
pts_frame1 = (pts_frame1[::2], pts_frame1[1::2])
pts_frame2 = (pts_frame2[::2], pts_frame2[1::2])
(v_stacked_f1f2, h_stacked_f1f2) = prepare_points(pts_frame1, pts_frame2)
F_vstacked = cv2.findFundamentalMat(v_stacked_f1f2[0], v_stacked_f1f2[1],
                                    cv2.FM_RANSAC, 3, 0.99)[0]
F_hstacked = cv2.findFundamentalMat(h_stacked_f1f2[0], h_stacked_f1f2[1],
                                    cv2.FM_RANSAC, 3, 0.99)[0]
print("F_vstacked:\n", F_vstacked, "\n")
print("F_hstacked:\n", F_hstacked, "\n")
# The output:
# F_vstacked:
# [[ 3.31788127 -2.24336615 -0.77866782]
# [ 0.83418839 -1.4066019 -0.92088302]
# [-2.75413748 2.27311637 1. ]]
# F_hstacked:
# [[ 7.70558741 25.29966782 -16.20835082]
# [-12.95357284 -0.54474384 14.95490469]
# [ 1.79050172 -10.40077071 1. ]]
Hilman, Mon, 14 Nov 2016 22:59:48 -0600, http://answers.opencv.org/question/113067/

Error: fundamental matrix size overgrowing to 3x9
http://answers.opencv.org/question/88418/error-fundamental-matrix-size-overgrowing-to-3x9/

I am trying to find the fundamental matrix by manually identifying 7 corresponding points in two images and using the findFundamentalMat function. However, on printing the matrix, it turns out to be 9x3. It seems to be a bug in my code, but I can't figure it out.
void mouseEventOne(int event, int x, int y, int, void* param) {
    // select points in the first image; store them in one.
}
void mouseEventTwo(int event, int x, int y, int, void* param) {
    // select points in the second image; store them in two.
    // set flag after 7 points
}
int main(int argc, char** argv) {
    setMouseCallback("ImageOne", mouseEventOne, (void *) &ptOne);
    setMouseCallback("ImageTwo", mouseEventTwo, (void *) &ptTwo);
    for (;;) {
        if (one.size() == 7 && two.size() == 7 && flag) {
            F = findFundamentalMat(one, two, CV_FM_7POINT);
            flag = false;
            cout << F << endl;
        }
        imshow("ImageOne", imageOne);
        imshow("ImageTwo", imageTwo);
        char s = (char)waitKey(30);
        switch (s) {
        case 'q':
            return 0;
        }
    }
}
An example of the F that I get:
[-1.551177784481965e-05, 2.023178297273342e-05, 0.0005375762557272776;
-1.135972383318399e-05, 3.375254552474079e-05, -0.003613682804598734;
0.008327856564262326, -0.01695104159607208, 1;
4.431840798641267e-07, -1.055708293644028e-06, -2.507080893426536e-05;
4.638855327440553e-06, 4.121047291061634e-06, -0.003793250696015148;
-0.001440166620559569, -0.000516116119040233, 1;
-2.981211382612034e-06, 3.513201920256684e-06, 9.568949524556974e-05;
1.205098346864506e-06, 1.048082229878526e-05, -0.003754710242238103;
0.0006563331661128546, -0.004043525616809762, 1]
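For context, the OpenCV documentation states that with the 7-point method the problem has up to three solutions, and findFundamentalMat returns them stacked vertically, so a 9x3 result holds three candidate 3x3 matrices (exactly what is printed above: three blocks, each ending in a row whose last entry is 1). A numpy-only sketch of splitting such a result (the placeholder values below are arbitrary):

```python
import numpy as np

# A 9x3 array standing in for the output of findFundamentalMat(..., CV_FM_7POINT)
# when the 7-point problem has three real solutions.
F_stacked = np.arange(27, dtype=float).reshape(9, 3)

# Split the vertical stack into individual 3x3 candidate matrices.
candidates = [F_stacked[3 * i:3 * i + 3] for i in range(F_stacked.shape[0] // 3)]
print(len(candidates), candidates[0].shape)
```

A usual follow-up is to keep the candidate with the smallest epipolar residual |x2^T F x1| over all correspondences (or over extra points beyond the seven used).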
rookie, Wed, 24 Feb 2016 02:55:59 -0600, http://answers.opencv.org/question/88418/

Units of Rotation and translation from Essential Matrix
http://answers.opencv.org/question/66839/units-of-rotation-and-translation-from-essential-matrix/

I obtained my Rotation matrix and translation vector using the SVD of E. I used the following code to do so:
SVD svd(E);
Matx33d W(0, -1, 0, 1, 0, 0, 0, 0, 1);
Mat_<double> R = svd.u * Mat(W) * svd.vt;
Mat_<double> t = svd.u.col(2);
Matx34d P1(R(0, 0), R(0, 1), R(0, 2), t(0), R(1, 0), R(1, 1), R(1, 2), t(1), R(2, 0), R(2, 1), R(2, 2), t(2));
I want to know: what are the units of R and t? When I calibrated the cameras I got the baseline distance in **cm**, so is the translation vector also in **cm**? Also, what are the units of R?
P.S.: I have read the following question on [Stackoverflow](https://stackoverflow.com/questions/16856177/what-will-be-the-translation-vector-unit-from-essential-matrix/16870447#16870447), but it actually confused me as no answer was accepted. I also wanted to know whether the translation vector I got is in the world frame (i.e. is it the real-world distance moved from the initial position?).
EDIT 1:
I have also observed that the translation vector is normalized, i.e. x^2 + y^2 + z^2 is nearly 1. So how do I get the actual vector?
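The observation in EDIT 1 is the key point: the translation recovered from the SVD of E is only defined up to scale, so it comes out as a unit vector, and metric scale has to be supplied from outside (a known baseline, an object of known size, GPS, etc.); R itself is unitless, being a pure rotation. A hedged numpy sketch, assuming a baseline known from calibration (the 12 cm figure and the t values are made up):

```python
import numpy as np

# Unit translation direction as recovered from the SVD of E (example values).
t_unit = np.array([0.707, 0.0, 0.707])
t_unit = t_unit / np.linalg.norm(t_unit)   # ensure ||t|| = 1, as the decomposition gives

# Hypothetical baseline length measured during stereo calibration.
baseline_cm = 12.0

# Metric translation: same direction, scaled to the known baseline,
# so its units are whatever units the baseline was measured in (cm here).
t_metric = baseline_cm * t_unit
print(np.linalg.norm(t_metric))
```

This only fixes the magnitude; the sign/direction ambiguity of the four (R, t) combinations from the SVD still has to be resolved by the cheirality check (triangulated points in front of both cameras).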
EDIT 2: I have read the following question on [Stackoverflow](https://stackoverflow.com/questions/3678317/t-and-r-estimation-from-essential-matrix?rq=1) and I think I will be implementing it. I don't know if there is a better way.

Sai Kondaveeti, Wed, 22 Jul 2015 22:11:34 -0500, http://answers.opencv.org/question/66839/

Wrong fundamental matrix results
http://answers.opencv.org/question/62697/wrong-fundamental-matrix-results/

OpenCV's findFundamentalMat gives different results from the one returned by stereoCalibrate on the same points (10 checkerboards' worth of points concatenated in a long vector). The resulting translation vector (from the SVD of E) is wrong when accounting for scale.
I tried both FM_RANSAC and FM_LMEDS. The second one gives better results in some cases, but in most cases they are both wrong. Is there some kind of restriction on the points given to the function (e.g. that they should not be coplanar, which the checkerboard corners are)?

larry37, Wed, 27 May 2015 06:10:47 -0500, http://answers.opencv.org/question/62697/

Fundamental Matrix Accuracy
http://answers.opencv.org/question/57267/fundamental-matrix-accuracy/

I am working on 3D reconstruction of a scene. I matched the features of 2 images, so I have keypoints1 and keypoints2, and I have the Fundamental matrix F and Essential matrix E. Now I have to check X'FX = 0. I know X' is a coordinate in the second image and X is a coordinate in the first image. I have a few questions; can anyone please help me?
Are X' and X the keypoints1 and keypoints2?
Is it the (x, y) coordinate of those keypoints?
The product X'FX will be in Mat format, if I am not wrong. How can this be equal to 0?
Please, can anyone help? Sorry if my question is silly.
Thanks in advance.

SUHAS, Wed, 11 Mar 2015 06:06:43 -0500, http://answers.opencv.org/question/57267/

findFundamentalMat not correctly filtering outliers
http://answers.opencv.org/question/54824/findfundamentalmat-not-correctly-filtering-outliers/

After detecting keypoints and matching them between two images, I run findFundamentalMat to estimate the Fundamental matrix and also filter the outliers. When I draw the matches using the mask I get from findFundamentalMat, there are sometimes some matches that are not filtered out even though they clearly don't fit the transform.
Here is an example of a good filtering (left image from the robot's camera, right image static):
![image description](/upfiles/14235395351249148.png)
But without moving the robot, the matches change a lot from one picture to the next (due to flickering in the light?),
and often there are one or two wrong matches left. I suspect those matches cause the inconsistency in my estimated Fundamental matrix, which can look totally different from one image to the next, even without moving the robot.
![image description](/upfiles/14235396878588863.png)
Here the yellow and blue lines clearly don't fit the model. Could they cause the fundamental matrix to go totally wrong?

Mehdi, Mon, 09 Feb 2015 21:43:12 -0600, http://answers.opencv.org/question/54824/

Strange behavior of findFundamentalMat + RANSAC
http://answers.opencv.org/question/38682/strange-behavior-of-findfundamentalmat-ransac/

I'm using findFundamentalMat + RANSAC to calculate the fundamental matrix of a stereo rig. However, it seems that it is not giving stable outputs.
For every run of the same scene, it gives wildly different outputs. The epipoles drift rapidly, and they sometimes appear inside the image. However, when both epipoles are inside the image, the two points do seem to be corresponding points.
Why is this happening?
[screenshot]
![When the epipoles are inside the images](/upfiles/14071521661962045.png)
![When the epipoles are outside the images](/upfiles/14071522871728025.png)
import numpy as np
import cv2
from libcv import video
from matplotlib import pyplot as plt

cap1 = video.create_capture(1)
cap2 = video.create_capture(2)

while True:
    ret, img1 = cap1.read()
    ret, img2 = cap2.read()
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT()
    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # FLANN parameters
    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)
    good = []
    pts1 = []
    pts2 = []
    # ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.6 * n.distance:
            good.append(m)
            pts2.append(kp2[m.trainIdx].pt)
            pts1.append(kp1[m.queryIdx].pt)
    pts1 = np.int32(pts1)
    pts2 = np.int32(pts2)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    # We select only inlier points
    pts1 = pts1[mask.ravel() == 1]
    pts2 = pts2[mask.ravel() == 1]

    def drawlines(img1, img2, lines, pts1, pts2):
        ''' img1 - image on which we draw the epilines for the points in img2
            lines - corresponding epilines '''
        r, c = img1.shape[:2]
        img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
        img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2BGR)
        for r, pt1, pt2 in zip(lines, pts1, pts2):
            color = tuple(np.random.randint(0, 255, 3).tolist())
            x0, y0 = map(int, [0, -r[2] / r[1]])
            x1, y1 = map(int, [c, -(r[2] + r[0] * c) / r[1]])
            img1 = cv2.line(img1, (x0, y0), (x1, y1), color, 1)
            img1 = cv2.circle(img1, tuple(pt1), 5, color, -1)
            img2 = cv2.circle(img2, tuple(pt2), 5, color, -1)
        return img1, img2

    # Find epilines corresponding to points in right image (second image) and
    # draw those lines on the left image
    lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
    lines1 = lines1.reshape(-1, 3)
    img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)
    # Find epilines corresponding to points in left image (first image) and
    # draw those lines on the right image
    lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    lines2 = lines2.reshape(-1, 3)
    img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)
    cv2.imshow('img6', img6)
    cv2.imshow('img5', img5)
    cv2.imshow('img4', img4)
    cv2.imshow('img3', img3)
    ch = 0xFF & cv2.waitKey(5)
    if ch == 27:
        break
yzmtf2008, Mon, 04 Aug 2014 06:38:35 -0500, http://answers.opencv.org/question/38682/

Finding fundamental matrix using RANSAC and 15 points
http://answers.opencv.org/question/34834/finding-fundamental-matrix-using-ransac-and-15-points/

I want to find the fundamental matrix using RANSAC with 10 points, and I found a bug(?), or at least a discrepancy between the documentation and the source code. https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/fundam.cpp#L710 - this code says that I must have at least 15 points to use RANSAC (otherwise it uses LMEDS even if I chose RANSAC), but http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#Mat%20findFundamentalMat%28InputArray%20points1,%20InputArray%20points2,%20int%20method,%20double%20param1,%20double%20param2,%20OutputArray%20mask%29 says that RANSAC works with >= 8 points (which seems obvious to me). Can someone explain "why 15" to me? I spent a lot of time looking for a mistake in my program.

Kuznetcov Alexey, Tue, 10 Jun 2014 16:40:55 -0500, http://answers.opencv.org/question/34834/

OpenCV Error: Assertion failed when trying to find fundamental matrix from two loaded images
http://answers.opencv.org/question/29881/opencv-error-assertion-failed-when-trying-to-find-fundamental-matrix-from-two-loaded-images/

I'm trying to write some code to estimate the fundamental matrix:
img1 = cv2.imread('glassL.jpg')
img2 = cv2.imread('glassR.jpg')
K = calibration['cameraMatrix']
F = cv2.findFundamentalMat(img1, img2, cv2.FM_RANSAC, 0.1, 0.99)
Getting the error:
OpenCV Error: Assertion failed (npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type()) in findFundamentalMat, file /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp, line 1103
Traceback (most recent call last):
File "sfm.py", line 27, in <module>
F = cv2.findFundamentalMat(img1, img2, cv2.FM_RANSAC, 0.1, 0.99)
cv2.error: /tmp/opencv-XbIS/opencv-2.4.8.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat
Any clue?

aledalgrande, Wed, 12 Mar 2014 17:07:36 -0500, http://answers.opencv.org/question/29881/

Validating the Fundamental Matrix
http://answers.opencv.org/question/29876/validating-the-funadamental-matrix/

I am calculating the fundamental matrix between two successive frames of a Kinect, and according to Hartley and Zisserman the matrix should satisfy x'Fx = 0.
I wrote up some quick code that I thought might do the job, but it's not working perfectly, as I will detail at the end of this question:
Mat RansakMask;
Mat F = findFundamentalMat(good_matches1, good_matches2, 8, 3.0, 0.99, RansakMask);
cout << "Number of matches: " << RansakMask.size().height << endl;
int matches[2];
matches[0] = 0; matches[1] = 0;
for (int j = 0; j < (good_matches1.size()); j++)
{
    if (RansakMask.at<uchar>(j, 0) != 1) continue;
    cv::Mat p1(3, 1, CV_64FC1), p2(3, 1, CV_64FC1);
    p1.at<double>(0) = good_matches1.at(j).x;
    p1.at<double>(1) = good_matches1.at(j).y;
    p1.at<double>(2) = 1.0;
    p2.at<double>(0) = good_matches2.at(j).x;
    p2.at<double>(1) = good_matches2.at(j).y;
    p2.at<double>(2) = 1.0;
    Mat m = (p1.t() * F * p2);
    int i = (abs(m.at<double>(0)) < 0.5) ? 0 : 1;
    matches[i]++;
}
cout << "Number of correct: " << matches[0] << endl << "Number of wrong: " << matches[1] << endl;
if (matches[1] > matches[0])
{
    cout << "Fundamental Mat is wrong" << endl;
}
The basic idea was: out of all the inliers, compute which were accurate (< 0.5) and which weren't. If there are more accurate ones than wrong ones, we can proceed, assuming F is relatively accurate.
However, when holding the camera still, this seems to work 9 frames out of 10, with the tenth frame usually giving an answer like (2 correct, 198 wrong) whereas every other frame is typically (198 correct, 2 wrong), etc.
When I move the camera, this turns into more like a 50% chance of being wrong. I don't want to have to throw away F every second frame, so I feel I must be missing something in this code.
Any advice would be appreciated. Thanks.

FraserT, Wed, 12 Mar 2014 12:17:09 -0500, http://answers.opencv.org/question/29876/

a simple problem
http://answers.opencv.org/question/27646/a-simple-problem/

Dear All, P = [-306.8843; -263.0437; 0] is a point in space and p = [447.3374; 487.9971] is the corresponding image observation in pixels.
If 149.2, -53.6, -56.2 are the Euler angles about the x, y and z axes respectively, T = [-28.3; -10.4; 1794.3] is the translation vector, f = 16.5621 is the focal length, c = [285.7615; 249.037] are the coordinates of the principal point, and the aspect ratio is 1,
how do I construct the camera matrix [C] that gives p = [C] * P?
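A sketch of the standard construction C = K [R | T] with the numbers given above (assumptions flagged: the Euler angles are taken in degrees and composed as Rz Ry Rx, which may not be the asker's convention, and f is assumed to already be in pixel units; if f = 16.5621 is in millimetres it must first be divided by the pixel size):

```python
import numpy as np

def euler_to_R(ax_deg, ay_deg, az_deg):
    """Rotation from Euler angles about x, y, z, composed as Rz @ Ry @ Rx.
    The composition order and degree units are assumptions."""
    ax, ay, az = np.radians([ax_deg, ay_deg, az_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

f = 16.5621                     # focal length (unit assumption: already in pixels)
cx, cy = 285.7615, 249.037      # principal point
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])   # aspect ratio 1 -> fx = fy = f

R = euler_to_R(149.2, -53.6, -56.2)
T = np.array([[-28.3], [-10.4], [1794.3]])

C = K @ np.hstack([R, T])       # 3x4 camera matrix: p ~ C * P in homogeneous coordinates
print(C.shape)

# Project the given point P; divide by the third coordinate to get pixels.
P = np.array([-306.8843, -263.0437, 0.0, 1.0])
p_h = C @ P
p = p_h[:2] / p_h[2]
print(p)
```

Whether the projected p matches the observed [447.3374; 487.9971] depends on the actual Euler convention and focal-length units, so those should be checked against the source of the calibration data.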
Your help will be greatly appreciated.
Thanks

stethorsh, Mon, 03 Feb 2014 13:34:09 -0600, http://answers.opencv.org/question/27646/

From Fundamental Matrix To Rectified Images
http://answers.opencv.org/question/27155/from-fundamental-matrix-to-rectified-images/

I have stereo photos coming from the same camera and I am trying to use them for 3D reconstruction.
To do that, I extract SURF features and calculate the Fundamental matrix. Then I get the Essential matrix, and from there, the Rotation matrix and Translation vector. Finally, I use them to obtain rectified images.
The problem is that it works only with some specific parameters.
If I set *minHessian* to *430*, I get pretty nice rectified images, but any other value gives me just a black image or some obviously wrong images.
In all cases, the fundamental matrix seems to be fine (I draw epipolar lines on both the left and right images). However, I cannot say the same about the Essential matrix, Rotation matrix and Translation vector, even though I tried all 4 possible combinations of *R* and *T*.
Here is my code. Any help or suggestion would be appreciated. Thanks!
<pre><code>
Mat img_1 = imread( "images/imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "images/imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 430;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
namedWindow( "Matches", CV_WINDOW_NORMAL );
imshow("Matches", img_matches );
waitKey(0);
//-- Step 4: calculate Fundamental Matrix
vector<Point2f>imgpts1,imgpts2;
for( unsigned int i = 0; i < matches.size(); i++ )
{
    // queryIdx is the "left" image
    imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
    // trainIdx is the "right" image
    imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
Mat F = findFundamentalMat (imgpts1, imgpts2, FM_RANSAC, 0.1, 0.99);
//-- Step 5: calculate Essential Matrix
double data[] = { 1189.46,     0.0, 805.49,
                      0.0, 1191.78, 597.44,
                      0.0,     0.0,    1.0 }; // Camera Matrix
Mat K(3, 3, CV_64F, data);
Mat_<double> E = K.t() * F * K;
//-- Step 6: calculate Rotation Matrix and Translation Vector
Matx34d P;
//decompose E
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
Mat_<double> R = svd_u * Mat(W) * svd_vt; //
Mat_<double> T = svd_u.col(2); //u3
if (!CheckCoherentRotation (R)) {
std::cout<<"resulting rotation is not coherent\n";
return 0;
}
//-- Step 7: Reprojection Matrix and rectification data
Mat R1, R2, P1_, P2_, Q;
Rect validRoi[2];
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
stereoRectify(K, D, K, D, img_1.size(), R, T, R1, R2, P1_, P2_, Q, CV_CALIB_ZERO_DISPARITY, 1, img_1.size(), &validRoi[0], &validRoi[1] );
</code></pre>gozariFri, 24 Jan 2014 08:48:12 -0600http://answers.opencv.org/question/27155/Pose estimation produces wrong translation vectorhttp://answers.opencv.org/question/18565/pose-estimation-produces-wrong-translation-vector/Hi,<br>
I'm trying to extract camera poses from a set of two images using features I extracted with BRISK. The feature points match quite brilliantly when I display them and the rotation matrix I get seems to be reasonable. The translation vector, however, is not.
I'm using the simple method of computing the fundamental matrix and the essential matrix, then computing the SVD, as presented e.g. in H&Z:
Mat fundamental_matrix =
findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);
Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;
SVD decomp (essentialMatrix, SVD::FULL_UV);
Mat W = Mat::zeros(3, 3, CV_64F);
W.at<double>(0,1) = -1;
W.at<double>(1,0) = 1;
W.at<double>(2,2) = 1;
Mat R1= decomp.u * W * decomp.vt;
Mat R2= decomp.u * W.t() * decomp.vt;
if(determinant(R1) < 0)
R1 = -1 * R1;
if(determinant(R2) < 0)
R2 = -1 * R2;
Mat trans = decomp.u.col(2);
However, the resulting translation vector is horrible, especially the z coordinate: it is usually near (0,0,1) regardless of the camera movement I performed while recording these images. Sometimes the first two coordinates seem to be kind of right, but they are far too small in comparison to the z coordinate (e.g. I moved the camera mainly in +x and the resulting vector is something like (0.2, 0, 0.98)).
Any help would be appreciated.FiredragonwebSat, 10 Aug 2013 08:37:43 -0500http://answers.opencv.org/question/18565/FundamentalMat with correspondences from a set of images?http://answers.opencv.org/question/15085/fundamentalmat-with-correspondences-from-a-set-of-images/I have a set of images of an object (with different points of view, scales, rotations, etc.)
Then I have a query image in which the object is present, and I have a set of correspondences between keypoints of the object in the query image and the training images.
Considering that the training images can differ in size, scale and/or orientation, does it make sense to compute the [fundamental mat][1] with this set of correspondences?
Or is it possible to compute a homography to correctly identify the object, considering again that we have a set of matches pointing to different training images?
[1]: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmatyes123Wed, 12 Jun 2013 13:48:49 -0500http://answers.opencv.org/question/15085/findFundamentalMat from optical flow for FOE!http://answers.opencv.org/question/14714/findfundamentalmat-from-optical-flow-for-foe/Hello,
I'm trying to find the Focus of Expansion (FOE) in a video to get the time to contact. From what I understand, I have to calculate the optical flow, get the fundamental matrix and calculate the epipole through the OpenCV built-in function for finding the epipolar lines. So far, so good. I've got the optical flow working and showing on a separate image, but I'm stuck at the findFundamentalMat function.
What kind of Mat or points do I have to give to that function as arguments when my starting point is a sequence of images from a video? I tried with two consecutive images from the video and with the images I got as a result from the optical flow calculation.
I'd be really thankful for any advice on the matter!
munismunisWed, 05 Jun 2013 11:08:55 -0500http://answers.opencv.org/question/14714/Difference between Fundamental , Essential and Homography matriceshttp://answers.opencv.org/question/11902/difference-between-fundamental-essential-and-homography-matrices/I have two images which are taken from different positions. The 2nd camera is located to the right, up and backward with respect to the 1st camera. **So I think there is a perspective transformation between the two and not just an affine transform, since the cameras are at relatively different depths** (*am I right??*) I have a few corresponding points between the two images. I think of using these corresponding points to determine the transformation of each pixel from the 1st to the 2nd image.
I am confused by the functions [findFundamentalMat][1] and [findHomography][2]. **Both return a 3x3 matrix; what is the difference between the two?**
Are there any conditions or prerequisites for using them (i.e., when should each be used)?
Which one should I use to transform points from the 1st image to the 2nd image? Do the 3x3 matrices the functions return include the rotation and translation between the two image frames?
From [wikipedia][3], I came to know that the fundamental matrix is a relation between corresponding image points. In an SO answer [here][4], it is said that the essential matrix E is required to get corresponding points... but I do not have the internal camera matrix to calculate E. I just have the two images.
How should I proceed to determine the corresponding points? Awaiting suggestions.
Thank you
[1]: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat
[2]: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography
[3]: http://en.wikipedia.org/wiki/Fundamental_matrix_%28computer_vision%29
[4]: http://stackoverflow.com/questions/10386382/use-fundamental-matrix-to-compute-coordinates-translation-using-opencv?rq=1KarthikThu, 18 Apr 2013 12:27:06 -0500http://answers.opencv.org/question/11902/How to use GenericDescriptorMatcher?http://answers.opencv.org/question/11610/how-to-use-genericdescriptormatcher/Hi
I'm trying to use the Common Interfaces of Generic Descriptor Matchers, but I have not yet discovered how to start. My problem is the following: I have two images with their respective keypoint sets, and I also know beforehand the fundamental matrix that relates them. I want to match the keypoints using the epipolar constraint (p1'\*Fundamental\*p2 = 0), in other words, generate a vector< DMatch > that relates the points satisfying the epipolar constraint. I would like to use the interfaces that OpenCV provides in order to keep my code as generic as possible. However, there are no examples of how to use these tools. Can someone point me in the right direction? Or maybe I don't need these interfaces?
Thanks in advance.
PS: Sorry for my bad English.RaulPLSun, 14 Apr 2013 20:53:49 -0500http://answers.opencv.org/question/11610/Pose extraction from multiple calibrated viewshttp://answers.opencv.org/question/11459/pose-extraction-from-multiple-calibrated-views/Hi
I have a camera and its camera matrix and distortion coefficients (I used the sample program). I took several overlapping pictures of the same scene and now I want to compute their relative positions and rotations.
How can I do this?
My idea is to place the first image at the origin coordinates looking along the z axis and then relate the image i+1 with the image i:
for(int i=1; i<images.size(); ++i){
previous=images[i-1]
current=images[i]
// Find and Match feature points with SURF
// Points in the previous image
previous_image_points= ...
// Match points in the second image
current_image_points=...
// Find fundamental matrix
Mat F = findFundamentalMat(previous_image_points, current_image_points, FM_RANSAC, 0.1,0.99)
// Essential matrix. K => Camera matrix
Mat E = K.t()*F*K
// Find the position and rotation between cameras
SVD svd(E);
Matx33d W(0,-1,0,
1,0,0,
0,0,1);
Matx33d Winv(0,1,0,
-1,0,0,
0,0,1);
Mat_<double> R = svd.u * Mat(W) * svd.vt;
Mat_<double> t = svd.u.col(2);
// Current image position and rotation matrix
Matx34d rotPos = Matx34d(R(0,0), R(0,1), R(0,2), t(0),
R(1,0), R(1,1), R(1,2), t(1),
R(2,0), R(2,1), R(2,2), t(2));
// Previous image position and rotation matrix
// In the case of image 0, it is the identity matrix
Matx34d previous_rot_pos=....
// Calculate the final position and rotation matrix.
// I know that these are 3x4 matrices and can't be multiplied,
// but I convert them to homogeneous before :)
rotPos = previous_rot_pos*rotPos
}
This code concatenates position and rotation matrices such that, in the end, their positions are all relative to the first one.
Thank you :)diegoFri, 12 Apr 2013 05:22:06 -0500http://answers.opencv.org/question/11459/Recovering relative translation/rotation from fundamental matrixhttp://answers.opencv.org/question/5997/recovering-relative-translationrotation-from-fundamental-matrix/Hello,
I am trying to do 3d reconstruction using multiple views of an object.
What I have up to now is the following:
1.) Camera Intrinsics are known, so undistort() all images.
2.) Do SIFT on all images.
3.) Match all keypoints. (results visually tested, seem very good)
4.) Use findfundamentalmat(), reprojection error is also very good, and epilines look right.
5.) Extract 4 possible solutions from f using SVD, as stated in [Wikipedia](http://en.wikipedia.org/wiki/Essential_matrix)
6.) Build a unit cube, translate along z, and reproject it to all images, relative to the first.
Also compute the epilines to the points of the cube using computed f.
The problem I have now is: by my understanding, one of the four possible solutions should make the reprojected cube lie on its matching epipolar lines from the first image. But they do not. The transformation "seems" right, but even when interactively trying different scalings of the translation, the points of the reprojected cube are never on the epipolar lines.
Here are some screenshot to illustrate the situation. The epilines seem right to me.
Image 0: (R|t) = (0|0)
![image0](http://imageshack.us/a/img690/9125/bildschirmfoto20130113u.png)
image 1 : (R|t) = computed transformation according to wikipedia (already picked the "right" solution)
![image1](http://imageshack.us/a/img194/9125/bildschirmfoto20130113u.png)
I know this seems to be a problematic topic; I found lots of threads about similar problems but no solutions... :/
please help!MajekSun, 13 Jan 2013 07:35:18 -0600http://answers.opencv.org/question/5997/findFundamentalMat + RANSAC = strange behaviorhttp://answers.opencv.org/question/3265/findfundamentalmat-ransac-strange-behavior/Hi
I have synthetic data for features; these are absolutely correct, and I can verify that.
So I have 10 feature correspondences for cv::findFundamentalMat and I use CV_FM_RANSAC.
OpenCV always kicks points out of this set and finds a completely wrong fundamental matrix. The strange thing is that 7 correspondences are left, but the RANSAC in findFundamentalMat uses the 8-point algorithm as far as I know, so I have no clue what's going on here. I also have the features in 3D and use solvePnPRansac; everything is fine there.
Even when I change the coordinates of my features, findFundamentalMat just kicks out some other 3 correspondences.
When I use the CV_FM_8POINT method the results are fine, but I have to use RANSAC because normally I have dozens of noisy features.
Anyone any idea?PitschoThu, 18 Oct 2012 04:26:29 -0500http://answers.opencv.org/question/3265/what is CV_FM_RANSAC_ONLY ?http://answers.opencv.org/question/178/what-is-cv_fm_ransac_only/Hello everybody,
I'm working with OpenCV 2.4.2 and I find that since OpenCV 2.4.1, the `findFundamentalMat` function supports 2 separate methods for the `RANSAC` algorithm, `CV_FM_RANSAC` and `CV_FM_RANSAC_ONLY`, with the same default parameters. I couldn't find any reference or details about `CV_FM_RANSAC_ONLY` in the OpenCV 2.4.1 and 2.4.2 documentation.
Anybody know what is the difference between implementation of `CV_FM_RANSAC_ONLY` and `CV_FM_RANSAC` ?
In general, which one is better?
Amin AboueeSun, 08 Jul 2012 17:29:59 -0500http://answers.opencv.org/question/178/Python findFundamentalMathttp://answers.opencv.org/question/281/python-findfundamentalmat/I keep getting the error:
Traceback (most recent call last):
File "OpenCVTest.py", line 73, in <module>
(retval,mask) = cv.findFundamentalMat(c1,c2)
cv2.error: /tmp/homebrew-opencv-2.4.2-OdWH/OpenCV-2.4.2/modules/calib3d/src/fundam.cpp:1103: error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function findFundamentalMat
Where c1 and c2 are 8 by 2 ndarrays.
It works in cv, but not in cv2. Is this a bug or am I doing something wrong?
I am using OpenCV 2.4.2 in Python 2.7.3.
Thanks!TommyThu, 12 Jul 2012 06:58:34 -0500http://answers.opencv.org/question/281/