OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/), Copyright OpenCV foundation, 2012-2018.

stereoRectifyUncalibrated() gives bad result
http://answers.opencv.org/question/191707/stereorectifyuncalibrated-gives-bad-result/
I am using cv::stereoRectifyUncalibrated() to rectify two images, but I am getting pretty bad results, including a lot of shearing. The steps I am following:
1. SURF to detect and match keypoints
2. cv::findFundamentalMat() to compute fundamental matrix
3. cv::stereoRectifyUncalibrated() to get homography matrix H1 and H2
4. cv::warpPerspective() to get the rectified images
I want to use the rectified images to compute disparity, but I can't because the rectification results are so bad. My questions:
1. Is the fundamental matrix causing the problem?
2. Or is the warpPerspective() transform responsible for this?
3. Or is there something else I need to take care of?
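On question 1, the fundamental matrix can be sanity-checked directly against the matches via the epipolar constraint x2^T F x1 = 0; large residuals mean F (or the matching) is already broken before rectification runs. A numpy sketch with synthetic data (not the asker's images):

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Return |x2^T F x1| for each match; pts1, pts2 are Nx2 pixel arrays."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])                 # homogeneous coordinates, Nx3
    x2 = np.hstack([pts2, ones])
    return np.abs(np.sum(x2 * (F @ x1.T).T, axis=1))

# Synthetic rectified pair: pure horizontal disparity, so y1 == y2.
# For that geometry the fundamental matrix is known in closed form:
F = np.array([[0.,  0.,  0.],
              [0.,  0., -1.],
              [0.,  1.,  0.]])
pts1 = np.array([[100., 50.], [200., 80.], [150., 120.]])
pts2 = pts1 + np.array([20., 0.])                # shift in x only

print(epipolar_residuals(F, pts1, pts2))         # near-zero: F fits the matches
```

If residuals on real matches are many pixels, the fix belongs at the matching/findFundamentalMat stage, not at warpPerspective.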
Below are my code and sample images of the results. I am new to OpenCV and appreciate any help.
#include <iostream>
#include <stdio.h>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/core/affine.hpp"
using namespace cv;
using namespace cv::xfeatures2d;
int main()
{
//Loading the stereo images
Mat leftImage = imread("left.jpg", IMREAD_COLOR);
Mat rightImage = imread("right.jpg", IMREAD_COLOR);
//checking if the image files opened successfully
if (leftImage.empty() || rightImage.empty())
{
std::cout << " --(!) Error reading images " << std::endl; return -1;
}
/*showing the input stereo images
namedWindow("Left image original", WINDOW_FREERATIO);
namedWindow("Right image original", WINDOW_FREERATIO);
imshow("Left image original", leftImage);
imshow("Right image original", rightImage);
*/
//::::::::::::::::::::::::::::::::::::::::::::::::
//Step 1: Detect the keypoints using SURF Detector
int minHessian = 420;
Ptr<SURF> detector = SURF::create(minHessian); //SURF::create() returns a smart pointer (cv::Ptr) to a SURF detector
std::vector<KeyPoint> keypointsLeft, keypointsRight; //vectors storing keypoints of two images
detector->detect(leftImage, keypointsLeft);
detector->detect(rightImage, keypointsRight);
//::::::::::::::::::::::::::::::::
//Step 2: Descriptors of keypoints
Mat descriptorsLeft;
Mat descriptorsRight;
detector->compute(leftImage, keypointsLeft, descriptorsLeft);
detector->compute(rightImage, keypointsRight, descriptorsRight);
//std::cout << "descriptor matrix size: " << keypointsDescriptorsLeft.rows << " by " << keypointsDescriptorsLeft.cols << std::endl;
//::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
//Step 3: matching keypoints from image right and image left
//according to their descriptors (BruteForce, Flann based approaches)
// Construction of the matcher
std::vector<cv::DMatch> matches;
static Ptr<BFMatcher> matcher = cv::BFMatcher::create();
// Match the two image descriptors
matcher->match(descriptorsLeft, descriptorsRight, matches);
//std::cout << "Number of matched points: " << matches.size() << std::endl;
//::::::::::::::::::::::::::::::::
//Step 4: find the fundamental mat
// Convert the matched keypoints into two vectors of Point2f,
// as required by cv::findFundamentalMat()
std::vector<int> pointIndexesLeft; //getting index for point2f conversion
std::vector<int> pointIndexesRight; //getting index for point2f conversion
for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
// Get the indexes of the selected matched keypoints
pointIndexesLeft.push_back(it->queryIdx);
pointIndexesRight.push_back(it->trainIdx);
}
// Convert keypoints vector into Point2f type vector
//as needed for fundamentalMat() function
std::vector<cv::Point2f> matchingPointsLeft, matchingPointsRight;
cv::KeyPoint::convert(keypointsLeft, matchingPointsLeft, pointIndexesLeft);
cv::KeyPoint::convert(keypointsRight, matchingPointsRight, pointIndexesRight);
//creating clone Mat to draw the keypoints on
Mat drawKeyLeft = leftImage.clone(), drawKeyRight = rightImage.clone();
//check by drawing the points
std::vector<cv::Point2f>::const_iterator it = matchingPointsLeft.begin();
while (it != matchingPointsLeft.end()) {
// draw a circle at each corner location
cv::circle(drawKeyLeft, *it, 3, cv::Scalar(0, 0, 0), 1);
++it;
}
it = matchingPointsRight.begin();
while (it != matchingPointsRight.end()) {
// draw a circle at each corner location
cv::circle(drawKeyRight, *it, 3, cv::Scalar(0, 0, 0), 1);
++it;
}
namedWindow("Left Image Keypoints", WINDOW_FREERATIO);
namedWindow("Right Image Keypoints", WINDOW_FREERATIO);
imshow("Left Image Keypoints", drawKeyLeft);
imshow("Right Image Keypoints", drawKeyRight);
// Compute F matrix from n>=8 matches
cv::Mat fundamental = cv::findFundamentalMat(matchingPointsLeft, // selected point2f points in first image
matchingPointsRight, // selected point2f points in second image
CV_FM_RANSAC); // RANSAC method (needs n >= 8 matches)
std::cout << std::endl << "F-Matrix " << fundamental << std::endl << std::endl;
//drawing epipolar lines
//drawEpipolarLines(matchingPointsLeft, matchingPointsRight, fundamental, leftImage, rightImage);
//:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
//Step 5: Getting Homography matrices H1 & H2 using stereoRectifyUncalibrated()
//creating homography matrices H1, H2,
//which are used to get the rectified images
cv::Mat H1(3, 3, fundamental.type()); //H1 for left image
cv::Mat H2(3, 3, fundamental.type()); //H2 for right image
cv::stereoRectifyUncalibrated(matchingPointsLeft, matchingPointsRight, fundamental, leftImage.size(), H1, H2, 2);
std::cout << "H1 matrix" << H1 << std::endl;
std::cout << std::endl << "H2 matrix" << H2 << std::endl << std::endl;
//creating Mat to hold rectified images
Mat rectifiedLeft, rectifiedRight;
//getting rectified images using final transformation matrix above
warpPerspective(leftImage, rectifiedLeft, H1, leftImage.size(), INTER_LINEAR);
warpPerspective(rightImage, rectifiedRight, H2, leftImage.size(), INTER_LINEAR);
namedWindow("Rectified Left Image", WINDOW_FREERATIO);
namedWindow("Rectified Right Image", WINDOW_FREERATIO);
imshow("Rectified Left Image", rectifiedLeft);
imshow("Rectified Right Image", rectifiedRight);
waitKey(0);
//system("pause");
return 0;
}
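A quick diagnostic for the shearing (a hedged sketch, not part of the question's code): push the four image corners through the homography returned by stereoRectifyUncalibrated and inspect the resulting quadrilateral. A strongly sheared quad means H itself (and therefore F or the matches) is at fault, not warpPerspective, which only applies H.

```python
import numpy as np

def map_corners(H, width, height):
    """Push the four image corners through a 3x3 homography."""
    corners = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    ones = np.ones((4, 1))
    mapped = (H @ np.hstack([corners, ones]).T).T
    return mapped[:, :2] / mapped[:, 2:3]    # back from homogeneous coordinates

# Identity homography: corners stay put.
print(map_corners(np.eye(3), 640, 480))

# A pure shear in x (hypothetical H, for illustration): edges slide sideways.
H_shear = np.array([[1.0, 0.5, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(map_corners(H_shear, 640, 480))
```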
![Left](/upfiles/15264889332701719.jpg)
![Right](/upfiles/1526488947756083.jpg)

(asked by H M, Wed, 16 May 2018)

Image transformation after cv2.stereoRectifyUncalibrated
http://answers.opencv.org/question/179076/imagetransformation-after-cv2stereorectifyuncalibrated/
I am using Python 2.7 and OpenCV 3.2.0 for uncalibrated image rectification and dense stereo matching.
My problem: after getting my transformation matrices from
cv2.stereoRectifyUncalibrated()
I don't know how to proceed to create my rectification maps and do the remapping. I know that in the calibrated case I could use
cv2.initUndistortRectifyMap() and cv2.remap()
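One possible workaround (a sketch of the idea, not an established OpenCV recipe): since the rectifying homography fully determines where each output pixel comes from, remap-style maps can be built directly from H by back-projecting every destination pixel through H^-1. The resulting map_x/map_y then play the role that initUndistortRectifyMap's output plays in the calibrated case.

```python
import numpy as np

def maps_from_homography(H, width, height):
    """Build map_x, map_y (float32) so that remapping with them applies H."""
    Hinv = np.linalg.inv(H)
    # Grid of destination pixel coordinates, in homogeneous form.
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    ones = np.ones_like(xs)
    dst = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T   # 3 x (w*h)
    src = Hinv @ dst                                           # back-project
    src = src[:2] / src[2]                                     # dehomogenise
    map_x = src[0].reshape(height, width).astype(np.float32)
    map_y = src[1].reshape(height, width).astype(np.float32)
    return map_x, map_y

# With H = identity the maps are just the pixel grid itself.
mx, my = maps_from_homography(np.eye(3), 4, 3)
```

These maps could then be fed to cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR), which also lets you compose H with any lens-undistortion maps in one remap pass.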
The problem is that I don't have the camera matrix that cv2.initUndistortRectifyMap() needs, so my question is: is there a workaround within OpenCV, or do I have to write the function myself?

(asked by Karido, Fri, 24 Nov 2017)

Opencv - Depth map from uncalibrated stereo system
http://answers.opencv.org/question/90742/opencv-depth-map-from-uncalibrated-stereo-system/
I am trying to get a depth map with an uncalibrated method. I can obtain the fundamental matrix from correspondences found with SIFT and cv2.findFundamentalMat. Then with cv2.stereoRectifyUncalibrated I can get the rectification homographies. Finally I can use cv2.warpPerspective to rectify and compute the disparity, but the latter does not lead to a good depth map. The values are very high, so I am wondering whether I should use warpPerspective at all, or instead compute a rotation matrix from the homography obtained with stereoRectifyUncalibrated.
Part of the code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Obtain the correspondences with SIFT
sift = cv2.SIFT()
###find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(dst1,None)
kp2, des2 = sift.detectAndCompute(dst2,None)
###FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
good = []
pts1 = []
pts2 = []
###ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.8*n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)
pts1 = np.array(pts1)
pts2 = np.array(pts2)
#Computation of the fundamental matrix
F,mask= cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)
# Obtainment of the rectification matrix and use of the warpPerspective to transform them...
pts1 = pts1[:,:][mask.ravel()==1]
pts2 = pts2[:,:][mask.ravel()==1]
pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
p1fNew = pts1.reshape((pts1.shape[0] * 2, 1))
p2fNew = pts2.reshape((pts2.shape[0] * 2, 1))
retBool ,rectmat1, rectmat2 = cv2.stereoRectifyUncalibrated(p1fNew,p2fNew,F,(2048,2048))
dst11 = cv2.warpPerspective(dst1,rectmat1,(2048,2048))
dst22 = cv2.warpPerspective(dst2,rectmat2,(2048,2048))
#calculation of the disparity
stereo = cv2.StereoBM(cv2.STEREO_BM_BASIC_PRESET,ndisparities=16*10, SADWindowSize=9)
disp = stereo.compute(dst22.astype(np.uint8), dst11.astype(np.uint8)).astype(np.float32)
plt.imshow(disp);plt.colorbar();plt.clim(0,400)#;plt.show()
plt.savefig("0gauche.png")
#plot the depth using the disparity, the focal length C1[0,0] from stereo calibration, and T[0], the distance between the cameras
plt.imshow(C1[0,0]*T[0]/(disp),cmap='hot');plt.clim(-0,500);plt.colorbar();plt.show()
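The "very high values" are typical when the depth formula above divides by near-zero disparities (occlusions, untextured regions, bad matches). A hedged sketch, with made-up focal length and baseline, that masks invalid disparities before dividing:

```python
import numpy as np

def depth_from_disparity(disp, focal_px, baseline, min_disp=1.0):
    """Z = f * B / d, with invalid (tiny or negative) disparities masked out."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.full(disp.shape, np.nan)
    valid = disp >= min_disp
    depth[valid] = focal_px * baseline / disp[valid]
    return depth

# Hypothetical numbers: 1000 px focal length, 0.1 m baseline.
d = np.array([[50.0, 0.0], [100.0, -16.0]])
print(depth_from_disparity(d, 1000.0, 0.1))
# 50 px -> 2 m and 100 px -> 1 m, while the 0 and -16 entries
# become NaN instead of blowing up to huge depths
```

StereoBM marks invalid pixels with negative disparity (scaled by 16), so masking them out, rather than dividing through, is what keeps the depth range sane.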
Here the rectified pictures with uncalibrated method (and warpPerspective) :
![image description](/upfiles/14587543753795926.png)
Here the rectified pictures with calibrated method :
![image description](/upfiles/1458754406224736.png)
I don't know why the difference between the two kinds of pictures is so large. And for the calibrated method the images don't even seem aligned, which is strange.
The disparity map of the uncalibrated method :
![image description](/upfiles/14587544201269831.png)
And the depth map is calculated with C1[0,0]*T[0]/disp, with T taken from stereoCalibrate, but the values are very high...

(asked by Bilou563, Wed, 23 Mar 2016)

Camera projection matrix from fundamental
http://answers.opencv.org/question/89418/camera-projection-matrix-from-fundamental/
I'm pretty new to OpenCV and trying to puzzle together a monocular AR application **getting structure from motion**. I've got a tracker up and running which tracks points pretty well, as the optical flow looks good. It needs to work with uncalibrated cameras.
From the point correspondences I get the fundamental matrix with findFundamentalMat, but I'm lost at how to get the camera projection matrices. Matrix math is not my strong suit, and for all my google-fu, all I can find are examples using pre-calibrated cameras.
1. Find fundamental matrix using findFundamentalMat (check!)
2. Find epilines with computeCorrespondEpilines (check!)
3. **Extract projection matrix P and P1** (????)
P is the identity matrix for the uncalibrated case, but **how do I get P1**?
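For the uncalibrated case there is a standard canonical answer (Hartley & Zisserman, "Multiple View Geometry"): take P = [I | 0] and P1 = [[e']_x F | e'], where e' is the epipole in the second image, i.e. the left null vector of F. This pair is only defined up to a projective ambiguity, but it reproduces F exactly. A numpy sketch:

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def projection_from_fundamental(F):
    """Canonical pair P = [I|0], P1 = [[e']_x F | e'] from a rank-2 F."""
    # e' is the left null vector of F: F^T e' = 0 (last right-singular vector of F^T).
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P, P1

# Rank-2 example F (the rectified-pair fundamental matrix) as a smoke test:
F = np.array([[0.,  0.,  0.],
              [0.,  0., -1.],
              [0.,  1.,  0.]])
P, P1 = projection_from_fundamental(F)
```

Any 3D point projected through this pair satisfies the epipolar constraint x1^T F x = 0 by construction; upgrading the pair to a metric reconstruction would need extra information (e.g. self-calibration), which is beyond this sketch.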
(asked by menneske, Sat, 05 Mar 2016)

StereoRectifyUncalibrated "cannot solve under-determined linear system"
http://answers.opencv.org/question/65816/stereorectifyuncalibrated-cannot-solve-under-determined-linear-system/
Hi all,
I'm using cv2.stereoRectifyUncalibrated to try and calculate the appropriate rectification transformation between two sets of artificial correspondences:
import cv2
import cv2.cv as cv
import numpy as np
pts1 = [[423, 191], # top_l
[840, 217], # top_r
[422, 352], # bot_l
[838, 377], # bot_r
[325, 437], # front_l
[744, 464], # front_r
[288, 344], # wide_l
[974, 388]] # wide_r
pts2 = [[423, 192], # top_l
[841, 166], # top_r
[422, 358], # bottom_l
[839, 330], # bottom_r
[518, 440], # front_l
[934, 417], # front_r
[287, 363], # wide_l
[973, 320]] # wide_r
pts1 = np.array(pts1, dtype='f4')
pts2 = np.array(pts2, dtype='f4')
f, mask = cv2.findFundamentalMat(pts1, pts2, cv2.cv.CV_FM_8POINT)
pts1_r = pts1.reshape((pts1.shape[0] * 2, 1))
pts2_r = pts2.reshape((pts2.shape[0] * 2, 1))
ret, H1, H2 = cv2.stereoRectifyUncalibrated(pts1_r, pts2_r, f, (1280, 720))
print ret
I've included the data initialisation just to illustrate the array structure. You'll see I've avoided the assertion error mentioned in [this](http://opencv-users.1802565.n2.nabble.com/StereoRectifyUncalibrated-not-accepting-same-array-as-FindFundamentalMat-td5149185.html) discussion by using reshape.
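For readers puzzling over that reshape: it interleaves the x and y coordinates into a single column, which happens to sidestep the assertion mentioned above. A tiny illustration with a hypothetical two-point array:

```python
import numpy as np

pts = np.array([[423., 191.],
                [840., 217.]], dtype='f4')   # two (x, y) points, shape (2, 2)
col = pts.reshape((pts.shape[0] * 2, 1))     # flatten to shape (4, 1)
print(col.ravel())                           # x1, y1, x2, y2 interleaved
```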
However, I now get the following error:
> OpenCV Error: Bad argument (The function can not solve under-determined linear systems) in solve, file /tmp/opencv20150527-4924-hjrvz/opencv-2.4.11/modules/core/src/lapack.cpp, line 1350
Out of context, the offending snippet looks like this:
int m = src.rows, m_ = m, n = src.cols, nb = _src2.cols;
...
if( m < n )
CV_Error(CV_StsBadArg, "The function can not solve under-determined linear systems" );
where m and n are the number of rows and columns of the input matrix handed to cv::solve (via cvSolve), built somewhere inside stereoRectifyUncalibrated.
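The guard itself is easy to reproduce: a linear system is under-determined when it has fewer equations (rows) than unknowns (columns). A hypothetical numpy illustration; note that numpy's lstsq, unlike the cv::solve path quoted above, does not refuse such systems and instead returns the minimum-norm solution:

```python
import numpy as np

np.random.seed(0)
A = np.random.rand(3, 5)       # 3 equations, 5 unknowns
b = np.random.rand(3)

m, n = A.shape
print(m < n)                   # True: this is the case cv::solve rejects

# numpy falls back to the minimum-norm least-squares solution:
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```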
My question is simply: what is going on here? I'm struggling to see how my artificial data could make the system being solved under-determined.

(asked by slow, Wed, 08 Jul 2015)

Is stereoRectifyUncalibrated efficient?
http://answers.opencv.org/question/233/is-stereorectifyuncalibrated-efficient/
Hello everybody,
I'm using OpenCV 2.4 to rectify images with findFundamentalMat and stereoRectifyUncalibrated. Nearly two weeks ago I came across some MATLAB rectification code and became interested in comparing the results. At first I expected OpenCV to do better than MATLAB, but after several experiments I found that the MATLAB code gives better results. Why?
I searched the internet and found that they use two different algorithms, based on two papers.
I think OpenCV uses the paper "Theory and Practice of Projective Rectification" by Richard I. Hartley, which you can find [here](http://users.cecs.anu.edu.au/~hartley/Papers/joint-epipolar/journal/joint3.pdf).
The MATLAB code, however, is based on the paper "Quasi-Euclidean Uncalibrated Epipolar Rectification" by [A. Fusiello, E. Trucco and A. Verri](http://profs.sci.univr.it/~fusiello/demo/rect/), which you can find [here](http://profs.sci.univr.it/~fusiello/papers/icpr08.pdf),
and the MATLAB source code is [here](http://profs.sci.univr.it/~fusiello/sw/RectifKitU.zip). If you look at the compRect.m file, you will notice that they use a non-linear least-squares method (the Levenberg-Marquardt algorithm) to find the extrinsic parameters (rotation matrix and focal length).
And my question:
Why doesn't OpenCV use the second method, when its results are better? If anybody has already used the second method (the MATLAB code), please share your experience.
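The Levenberg-Marquardt loop mentioned above is itself quite simple. A toy numpy sketch of the damped least-squares idea, fitting y = a*exp(b*x) rather than the paper's actual epipolar cost (the problem is made up; only the update rule mirrors what compRect.m does):

```python
import numpy as np

def lm_fit(x, y, p0, n_iter=50, lam=1e-3):
    """Levenberg-Marquardt on residuals r(p) = a*exp(b*x) - y (toy problem)."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        a, b = p
        r = a * np.exp(b * x) - y
        # Jacobian of the residuals with respect to (a, b)
        J = np.stack([np.exp(b * x), a * x * np.exp(b * x)], axis=1)
        # Damped normal equations: (J^T J + lam*I) dp = -J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
        new_p = p + dp
        new_r = new_p[0] * np.exp(new_p[1] * x) - y
        if np.sum(new_r**2) < np.sum(r**2):
            p, lam = new_p, lam * 0.5      # step reduced the error: accept, damp less
        else:
            lam *= 10.0                    # step made things worse: reject, damp more
    return p

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                  # noiseless data with a=2, b=1.5
a, b = lm_fit(x, y, p0=(1.0, 1.0))
```

The damping term lam interpolates between gradient descent (large lam, robust far from the minimum) and Gauss-Newton (small lam, fast near it), which is why LM is the default choice for small parameterisations like the rotation-plus-focal model in the MATLAB code.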
Amin AboueeTue, 10 Jul 2012 11:50:35 -0500http://answers.opencv.org/question/233/