OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Feed built Thu, 06 Sep 2018 11:07:22 -0500.

Question: findEssentialMat or decomposeEssentialMat do not work correctly
http://answers.opencv.org/question/90070/findessentialmat-or-decomposeessentialmat-do-not-work-correctly/
I generated 3D points, projected them onto 2 cameras (dst and src) with known positions, and tried to recover the camera positions.
The dst camera has no rotation or translation, so one of the rotations returned by decomposeEssentialMat should be the src rotation.
However, the rotations and translation returned by decomposeEssentialMat are both completely incorrect.
<pre>
import cv2
import numpy as np

objectPoints = np.float64([[-1,-1,5],[1,-1,5],[1,1,5],[-1,1,5],[0,0,0],[0,0,5]])
srcRot = np.float64([[0,0,1]])     # src rotation vector (Rodrigues)
srcT = np.float64([[0.5,0.5,-1]])
dstRot = np.float64([[0,0,0]])     # dst camera: no rotation
dstT = np.float64([[0,0,0]])       # dst camera: no translation
cameraMatrix = np.float64([[1,0,0],
                           [0,1,0],
                           [0,0,1]])
srcPoints = cv2.projectPoints(objectPoints, srcRot, srcT, cameraMatrix, None)[0]
dstPoints = cv2.projectPoints(objectPoints, dstRot, dstT, cameraMatrix, None)[0]
E = cv2.findEssentialMat(srcPoints, dstPoints)[0]
R1, R2, t = cv2.decomposeEssentialMat(E)
print(cv2.Rodrigues(R1)[0])
print(cv2.Rodrigues(R2)[0])
print(t)
</pre>
The result for R and t:
<pre>
R1=[[-2.8672671 ]
[ 0.82984579]
[ 0.12698814]]
R2=[[ 0.84605365]
[ 2.92326821]
[-0.24527328]]
t=[[ 8.47069335e-04]
[ -3.75356183e-03]
[ -9.99992597e-01]]
</pre>
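As a sanity check independent of OpenCV, the essential matrix for a known relative pose (R, t) can be built directly as E = [t]x R and verified against the epipolar constraint x2^T E x1 = 0. A minimal NumPy sketch, using an arbitrary example pose rather than the one in the question:

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v)
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential_from_pose(R, t):
    # E = [t]_x R relates normalized points by x2^T E x1 = 0
    return skew(t) @ R

# Example relative pose: rotation about Z, translation along X
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
E = essential_from_pose(R, t)

# Project a 3D point into both (normalized) cameras and check the constraint
X = np.array([0.2, -0.1, 5.0])
x1 = X / X[2]                      # camera 1 at the origin
Xc2 = R @ X + t                    # same point in camera 2 coordinates
x2 = Xc2 / Xc2[2]
print(abs(x2 @ E @ x1))            # ~0 up to floating point error
```

Comparing a ground-truth E built this way against what findEssentialMat returns (up to scale) helps separate a convention mismatch from an actual estimation failure.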
The rotations are correct only when the two cameras are at the same height, but the translation direction is always wrong.
Is it a bug or my mistake?
Asked by Kolyan on Mon, 14 Mar 2016 04:24:51 -0500

Question: opencv 3 essentialmatrix and recoverpose
http://answers.opencv.org/question/31421/opencv-3-essentialmatrix-and-recoverpose/
We are currently working on a project using random 3D camera positioning.
We compiled OpenCV 3.0.0 and did our best to use the functions findEssentialMat and recoverPose.
In our problem, we have two OpenGL cameras, cam1 and cam2, which observe the same 3D object.
cam1 and cam2 have the same intrinsic parameters (resolution, focal length, and principal point).
In each capture from these cameras, we can identify a set of matched points (8 points per set).
The extrinsic parameters of cam1 are known.
**The objective of our work is to determine the extrinsic parameters of cam2.**
So we use:
<pre>
float focal = 4.1f;
cv::Point2d pp(0, 0);
double prob = 0.999;
double threshold = 3.0;
int method = cv::RANSAC;
cv::Mat mask;
cv::Mat essentialMat = cv::findEssentialMat(points1, points2, focal, pp, method, prob, threshold, mask);
</pre>
Then we apply:
<pre>
cv::Mat T;
cv::Mat R;
cv::recoverPose(essentialMat, points1, points2, R, T, focal, pp, mask);
</pre>
in order to get R, the relative rotation matrix, and T, the relative translation vector.
From that, we tried to apply R and T to cam1's extrinsic parameters, without success.
Could you help us determine how to obtain cam2 translation and orientation from cam1, R and T?
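One likely pitfall (sketched below as an assumption about the question, not a confirmed diagnosis): recoverPose returns the pose of the second camera relative to the first, with t known only up to scale, so it must be composed with cam1's world-to-camera extrinsics rather than applied directly. A minimal NumPy sketch of the composition, assuming the X_cam = R * X_world + t convention:

```python
import numpy as np

# If X_cam1 = R1 @ X_world + t1 and the recovered cam1 -> cam2 transform is
# X_cam2 = R @ X_cam1 + t (with t only up to scale), then substituting gives
#   X_cam2 = (R @ R1) @ X_world + (R @ t1 + t)
def compose_extrinsics(R1, t1, R, t):
    R2 = R @ R1        # cam2 rotation w.r.t. the world
    t2 = R @ t1 + t    # cam2 translation w.r.t. the world
    return R2, t2

# Toy check with simple poses
R1 = np.eye(3); t1 = np.array([1.0, 0.0, 0.0])
R  = np.eye(3); t  = np.array([0.0, 2.0, 0.0])
R2, t2 = compose_extrinsics(R1, t1, R, t)
print(t2)   # [1. 2. 0.]
```

The scale of t has to come from an external source (known baseline, known scene size), since the essential matrix cannot provide it.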
Thanks a lot in advance.
Asked by jaystab on Tue, 08 Apr 2014 12:14:53 -0500

Question: Difference between essential matrix computed using findEssentialMat and computed using (findFundamentalMat + calibration matrix)
http://answers.opencv.org/question/198841/difference-between-essential-matrix-computed-using-findessentialmat-and-computed-using-findfundamentalmat-calibration-matrix/
Hello,
I am new to OpenCV, and I developed a simple program aiming at computing the rotation and translation between 2 successive camera frames (I use only one monocular camera). After matching the feature points, I computed the essential matrix in two different ways. The first method applies findEssentialMat to the matched points, using the calibration matrix. The second method applies findFundamentalMat to the matched points and then uses the formula E = K^T * F * K.
The resulting E matrices are really different between the two methods, and I cannot find an explanation for this.
Would you have an explanation, please?
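One thing worth ruling out first (an observation about the math, not a confirmed diagnosis of this case): both E and K^T F K are defined only up to a global scale factor, so their raw entries can look very different while encoding the same geometry. A NumPy sketch of a scale-invariant comparison:

```python
import numpy as np

# Two essential matrices are "equal" only up to scale (and sign), so compare
# them after normalizing to unit Frobenius norm, allowing a sign flip.
def same_up_to_scale(A, B, tol=1e-9):
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    return min(np.linalg.norm(A - B), np.linalg.norm(A + B)) < tol

E = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])
print(same_up_to_scale(E, -3.7 * E))   # True: only scale and sign differ
```

If the two matrices still disagree after this normalization, the difference is real and most likely comes from the estimators themselves (different algorithms, different robust-inlier sets).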
Thank you.
Asked by Amani on Thu, 06 Sep 2018 11:07:22 -0500

Question: How to compute the covariance of an inter-camera relative pose measurement?
http://answers.opencv.org/question/179919/how-to-compute-the-covariance-of-an-inter-camera-relative-pose-measurement/
If I'm doing pose estimation with a single camera using 3D-2D correspondences (e.g. the PnP algorithm), I have read that reprojecting the points can give me an estimate of the Jacobian (cv::projectPoints), which can then be used to compute an estimate of the covariance of the pose.
But if I have two cameras, and I am performing relative pose estimation between them using the fundamental/essential matrix (cv::findEssentialMat) and a subsequent decomposition of the matrix, how can I compute the covariance of the relative pose between the cameras?
Asked by saihv on Wed, 06 Dec 2017 20:23:32 -0600

Question: findEssentialMat gives different results according to the number of feature points
http://answers.opencv.org/question/147345/findessentialmat-give-different-results-according-to-the-number-of-feature-points/
Hello,
I use the findEssentialMat function on a set of feature points (~1200 points) and then use the triangulatePoints function to recover the 3D positions of those feature points. But I have a problem with findEssentialMat, because the result seems to change according to the number of points.
For example, if I use 1241 points for one frame, the result is quite good (R = 0.5,0.5,0.5 and t = 1,0,0), and if I remove only one point the result is totally different (R = 3.0,2.0,2.0 and t = 0,0,1). I tried removing other feature points, and sometimes it works and sometimes not. I don't understand why. Is there a reason for that?
<pre>
std::vector<cv::Point2d> static_feature_point_t;
std::vector<cv::Point2d> static_feature_point_tmdelta;

// read from file
cv::FileStorage fs_t("static_feature_point_t.yml", cv::FileStorage::READ);
cv::FileStorage fs_tmdelta("static_feature_point_tmdelta.yml", cv::FileStorage::READ);
cv::FileNode feature_point_t = fs_t["feature_point"];
cv::FileNode feature_point_tmdelta = fs_tmdelta["feature_point"];
read(feature_point_t, static_feature_point_t);
read(feature_point_tmdelta, static_feature_point_tmdelta);
fs_t.release();
fs_tmdelta.release();

double focal = 300.;
cv::Point2d camera_principal_point(320, 240);
cv::Mat essential_matrix = cv::findEssentialMat(static_feature_point_t, static_feature_point_tmdelta, focal, camera_principal_point, cv::LMEDS);

cv::Mat rotation, translation;
cv::recoverPose(essential_matrix, static_feature_point_t, static_feature_point_tmdelta, rotation, translation, focal, camera_principal_point);

cv::Mat rot(3, 1, CV_64F);
cv::Rodrigues(rotation, rot);
std::cout << "rotation " << rot * 180. / M_PI << std::endl;
std::cout << "translation " << translation << std::endl;
</pre>
The two lists of feature points are [here](https://drive.google.com/open?id=0B1aZtV5c4-1xOUtUcGpHOF95R00)
(I didn't find how to upload files on the forum or if it is possible)
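As background for why one point can matter so much: estimators can be very sensitive to individual high-leverage points, and randomized robust estimators like RANSAC or LMEDS add sampling variability on top of that, since removing a point changes which minimal sets get drawn. A toy NumPy analogy (a plain line fit, not the essential matrix itself) showing the leverage effect:

```python
import numpy as np

# 50 near-perfect points on y = 2x, plus one high-leverage outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + rng.normal(0, 0.01, size=x.size)
x = np.append(x, 10.0)
y = np.append(y, 0.0)   # the outlier, far outside the inlier range

def slope(x, y):
    # Ordinary least-squares slope of y against x
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

print(slope(x, y))            # dragged far from 2 by the single outlier
print(slope(x[:-1], y[:-1]))  # ~2 once that one point is removed
```

This does not explain which of your 1241 points is the culprit, but it suggests inspecting the RANSAC/LMEDS inlier mask for the two runs to see whether the inlier sets differ.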
Thanks,
Asked by mnchapel on Tue, 09 May 2017 12:30:09 -0500

Question: Question about epipolar lines and essential matrix
http://answers.opencv.org/question/115645/question-about-epipolar-lines-and-essential-matrix/
I am reading up on essential matrices and I got a little confused. Why does Ex (or x'E) give us the epipolar line, where E = [t]x R (the cross-product matrix of t times R)? Doesn't it just give the direction of x'?
Can I look at it another way? i.e., because the translation-rotation transform (t, R) aligns both frames to be parallel, the epipolar lines in the aligned frame are straight horizontal lines, and in the "original" frame they look the way they do in the image.
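The key point is that E @ x does not give a direction: it gives the coefficients (a, b, c) of the line a*u + b*v + c = 0 in the other image, and the epipolar constraint x'^T E x = 0 says x' lies on that line. A NumPy sketch with an example pose (pure sideways translation, chosen so the epipolar lines come out horizontal, matching the rectified-frames intuition):

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])      # pure sideways translation
E = skew(t) @ R

X = np.array([0.5, 0.3, 4.0])      # a 3D point
x1 = X / X[2]
x2 = R @ X + t
x2 = x2 / x2[2]

line = E @ x1                      # epipolar line coefficients in image 2
print(x2 @ line)                   # ~0: x2 lies on the line
```

For this pose, `line` is (0, -1, y1): the horizontal line v = y1, as the parallel-frames picture predicts.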
![image description](/upfiles/14804143359632302.png)
Asked by Nbb on Tue, 29 Nov 2016 04:13:24 -0600

Question: Undistort images or not before finding the Fundamental/Essential Matrix?
http://answers.opencv.org/question/114828/undistort-images-or-not-before-finding-the-fundamentalessential-matrix/
I am quite confused right now. To find the fundamental matrix and the essential matrix, my usual way is to first undistort the images before the other steps: detecting keypoints, matching them, finding the fundamental matrix, and then the essential matrix. Is this correct? Can I **not** undistort the images and still find the fundamental matrix and the essential matrix?
Another question: as for the OpenCV function `findEssentialMat`, does it operate on undistorted points, distorted points, or both?
Asked by Hilman on Sat, 26 Nov 2016 17:09:05 -0600

Question: findEssentialMat for coplanar points
http://answers.opencv.org/question/93695/findessentialmat-for-coplanar-points/
[this is a copy of a [question I just posted on StackOverflow](http://stackoverflow.com/questions/36844139/opencv-findessentialmat)]
I have come to the conclusion that OpenCV's findEssentialMat is not working properly for coplanar points. The documentation specifies that it uses Nister's 5-point algorithm, and the corresponding paper states that the algorithm works fine for coplanar points.
<pre>
#include <ctime>
#include <fstream>
#include <random>
#include <vector>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

vector<Point3f> generatePlanarPoints(int N); // defined below

int main() {
    ofstream log;
    log.open("errorLog.txt");
    srand((time(NULL) % RAND_MAX) * RAND_MAX);

    /******* camera properties *******/
    Mat camMat = Mat::eye(3, 3, CV_64F);
    Mat distCoeffs = Mat::zeros(4, 1, CV_64F);

    /******* pose 1 *******/
    Mat rVec1 = (Mat_<double>(3, 1) << 0, 0, 0);
    Mat tVec1 = (Mat_<double>(3, 1) << 0, 0, 1);

    /******* pose 2 *******/
    Mat rVec2 = (Mat_<double>(3, 1) << 0.0, 0.0, 0);
    Mat tVec2 = (Mat_<double>(3, 1) << 0.2, 0, 1); // 2nd camera pose is just pose 1 translated by 0.2 along the X axis

    int iterCount = 50;
    int N = 40;
    for (int j = 0; j < iterCount; j++)
    {
        /******* generate 3D points *******/
        vector<Point3f> points3d = generatePlanarPoints(N);

        /******* project 3D points from pose 1 *******/
        vector<Point2f> points2d1;
        projectPoints(points3d, rVec1, tVec1, camMat, distCoeffs, points2d1);

        /******* project 3D points from pose 2 *******/
        vector<Point2f> points2d2;
        projectPoints(points3d, rVec2, tVec2, camMat, distCoeffs, points2d2);

        /******* add noise to 2D points *******/
        std::default_random_engine generator;
        double noise = 1.0 / 640;
        if (noise > 0.0) {
            std::normal_distribution<double> distribution(0.0, noise);
            for (int i = 0; i < N; i++)
            {
                points2d1[i].x += distribution(generator);
                points2d1[i].y += distribution(generator);
                points2d2[i].x += distribution(generator);
                points2d2[i].y += distribution(generator);
            }
        }

        /******* find transformation from 2D - 2D correspondences *******/
        double threshold = 2.0 / 640;
        Mat essentialMat = findEssentialMat(points2d1, points2d2, 1.0, Point(0,0), RANSAC, 0.999, threshold);
        Mat estimatedRMat1, estimatedRMat2, estimatedTVec;
        decomposeEssentialMat(essentialMat, estimatedRMat1, estimatedRMat2, estimatedTVec);
        Mat estimatedRVec1, estimatedRVec2;
        Rodrigues(estimatedRMat1, estimatedRVec1);
        Rodrigues(estimatedRMat2, estimatedRVec2);
        double minError = min(norm(estimatedRVec1 - rVec2), norm(estimatedRVec2 - rVec2));
        log << minError << endl; // logging errors
    }

    log.flush();
    log.close();
    return 0;
}
</pre>
The points are generated like this:
<pre>
vector<Point3f> generatePlanarPoints(int N) {
    float span = 5.0;
    vector<Point3f> points3d;
    for (int i = 0; i < N; i++)
    {
        float x = ((float)rand() / RAND_MAX - 0.5) * span;
        float y = ((float)rand() / RAND_MAX - 0.5) * span;
        float z = 0;
        Point3f point3d(x, y, z);
        points3d.push_back(point3d);
    }
    return points3d;
}
</pre>
Excerpt from `errorLog.txt` file:
<pre>
0
0.199337
0.199337
0.199337
0.199338
0
0.199337
0
0
0.199337
0.199337
</pre>
This shows that the algorithm sometimes performs well (error == 0) and sometimes something weird happens (error == 0.199337). Is there any other explanation for this?
Obviously, the algorithm is deterministic, and the error 0.199337 appears for specific configurations of points. What that configuration is, I wasn't able to figure out.
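Independent of the estimator, the math side of the paper's claim can be confirmed directly: for noise-free coplanar points, the true E = [t]x R satisfies the epipolar constraint exactly, so planarity alone does not invalidate the model. A NumPy sketch using the same relative pose as the test above (0.2 translation along X, no rotation, plane 1 unit in front of camera 1); this checks the constraint, not OpenCV's solver:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew(t):
    # Cross-product matrix [t]_x
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Relative pose of camera 2 w.r.t. camera 1, as in the test above
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])
E = skew(t) @ R

# Coplanar points on the z = 0 world plane
pts = np.column_stack([rng.uniform(-2.5, 2.5, (40, 2)), np.zeros(40)])
res = []
for X in pts:
    X1 = X + np.array([0, 0, 1.0])   # camera 1: rvec = 0, tvec = (0, 0, 1)
    X2 = R @ X1 + t                  # camera 2
    x1, x2 = X1 / X1[2], X2 / X2[2]
    res.append(abs(x2 @ E @ x1))
print(max(res))                      # ~0 for every coplanar point
```

So if the solver fails on such data, the cause has to lie in the estimation (RANSAC sampling, the internal disambiguation between candidate solutions), not in the constraint itself.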
I also experimented with different prob and threshold parameters for findEssentialMat, and I tried using more/fewer points and different camera poses... the same thing happens.
Asked by acajic on Mon, 25 Apr 2016 10:05:49 -0500

Question: findEssentialMat gives wrong results in Android
http://answers.opencv.org/question/93026/findessentialmat-gives-wrong-results-in-android/
Hi,
I am trying to create a visual odometry app for Android using the NDK. I am using FAST features with KLT tracking and then findEssentialMat to find the essential matrix. However, it is giving erroneous results. Here's some debug output:
<pre>
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: Essential Matrix =
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: -0.000000
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: -1.655305
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: 29303136256.000000
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: -182220068263779929277612730010501644288.000000
04-17 23:23:28.313 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: 1.771581
04-17 23:23:28.314 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: 0.000000
04-17 23:23:28.314 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: -598.520691
04-17 23:23:28.314 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: 1.350371
04-17 23:23:28.314 2630-2903/com.example.shaswat.testopencv D/OCVSample::SDK: 152428905045575845543936.000000
</pre>
The essential matrix values are displayed column-wise. My code is as follows:
Java part - onCameraFrame()
<pre>
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    this.mCurr = inputFrame.rgba();
    if (this.firstRun == 1) {
        this.mCurr = inputFrame.rgba();
        this.mPrev = inputFrame.rgba();
        this.mDisplay = inputFrame.rgba();
        this.firstRun = 0;
        this.mRf = new Mat(3, 3, CvType.CV_32F);
        this.mRf.put(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        this.mTf = new Mat(3, 1, CvType.CV_32F);
        this.mTf.put(0, 0, 0, 0, 0);
    }
    this.controlFrameRate++;
    if (this.controlFrameRate < 2)
        return null;
    this.controlFrameRate = 0;
    if (this.firstRun == 0) {
        this.mDisplay = this.mPrev;
        convertNativeGray(mCurr.getNativeObjAddr(), mPrev.getNativeObjAddr(), mDisplay.getNativeObjAddr(), mRf.getNativeObjAddr(), mTf.getNativeObjAddr());
        this.mPrev = this.mCurr;
    }
    return mDisplay;
}
</pre>
Native part:
<pre>
JNIEXPORT jint JNICALL Java_com_example_shaswat_testopencv_MainActivity_convertNativeGray(JNIEnv*, jobject, jlong addrCurr, jlong addrPrev, jlong addrDisplay, jlong addrRF, jlong addrTF) {
    __android_log_print(ANDROID_LOG_DEBUG, "OCVSample::SDK", "In Native Code");
    Mat& mCurr = *(Mat*)addrCurr;
    Mat& mPrev = *(Mat*)addrPrev;
    Mat& mDisplay = *(Mat*)addrDisplay;
    Mat& R_f = *(Mat*)addrRF;
    Mat& t_f = *(Mat*)addrTF;
    int conv = 0;
    jint retVal;
    cvtColor(mCurr, mCurr, CV_RGBA2BGR);
    cvtColor(mPrev, mPrev, CV_RGBA2BGR);
    cvtColor(mDisplay, mDisplay, CV_RGBA2BGR);
    //mDisplay = cv::Mat::zeros(mCurr.rows, mCurr.cols, mCurr.type());
    Mat cameraMatrix = (Mat_<float>(3,3) << 1097.1547, 0.0, 403.9075, 0.0, 821.5675, 298.8437, 0.0, 0.0, 1.0);
    VO::featureOperations odometry(cameraMatrix, mCurr, mPrev, mDisplay, R_f, t_f, 1);
    odometry.calcOdometry(cameraMatrix, mCurr, mPrev, mDisplay, R_f, t_f, 1);
    cvtColor(mCurr, mCurr, CV_BGR2RGBA);
    cvtColor(mPrev, mPrev, CV_BGR2RGBA);
    cvtColor(mDisplay, mDisplay, CV_BGR2RGBA);
    retVal = (jint)conv;
    return retVal;
}
</pre>
calcOdometry function:
<pre>
void VO::featureOperations::calcOdometry(cv::Mat cameraMatrix, cv::Mat currImage, cv::Mat prevImage, cv::Mat& trajectory, cv::Mat& R_f, cv::Mat& t_f, int enableHomography){
    // Change these accordingly. Intrinsics
    double focal = cameraMatrix.at<float>(0,0);
    cv::Point2d pp(cameraMatrix.at<float>(0,2), cameraMatrix.at<float>(1,2));
    // recovering the pose and the essential matrix
    cv::Mat E, R, t, mask;
    std::vector<uchar> status;
    std::vector<cv::Point2f> prevFeatures;
    std::vector<cv::Point2f> currFeatures;
    prevFeatures = this->detectFeatures(prevImage);
    // For FAST
    if(this->trackFeatures(prevImage, currImage, prevFeatures, currFeatures, status)){
        if(prevFeatures.size() > 200 && currFeatures.size() > 200)
        {
            E = cv::findEssentialMat(currFeatures, prevFeatures, focal, pp, cv::RANSAC, 0.999, 1.0, mask);
            cv::recoverPose(E, currFeatures, prevFeatures, R, t, focal, pp, mask);
        }
        else
        {
            R = cv::Mat::zeros(3, 3, CV_32F);
            t = cv::Mat::zeros(3, 1, CV_32F);
        }
    }
    // a redetection is triggered in case the number of features being tracked goes below a particular threshold
    if (prevFeatures.size() < 1800) {
        prevFeatures = this->detectFeatures(prevImage);
        this->trackFeatures(prevImage, currImage, prevFeatures, currFeatures, status);
    }
}
</pre>
<pre>
std::vector<cv::Point2f> VO::featureOperations::detectFeatures(cv::Mat img){
    cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
    // Detect features on this image
    std::vector<cv::Point2f> pointsFAST;
    std::vector<cv::KeyPoint> keypoints_FAST;
    // FAST Detector
    int fast_threshold = 20;
    bool nonmaxSuppression = true;
    cv::FAST(img, keypoints_FAST, fast_threshold, nonmaxSuppression);
    cv::KeyPoint::convert(keypoints_FAST, pointsFAST, std::vector<int>());
    assert(pointsFAST.size() > 0);
    return pointsFAST;
}
</pre>
<pre>
bool VO::featureOperations::trackFeatures(cv::Mat prevImg, cv::Mat currentImg, std::vector<cv::Point2f>& points1, std::vector<cv::Point2f>& points2, std::vector<uchar>& status){
    cv::Mat prevImg_gray, currentImg_gray;
    cv::cvtColor(prevImg, prevImg_gray, CV_BGR2GRAY);
    cv::cvtColor(currentImg, currentImg_gray, CV_BGR2GRAY);
    std::vector<float> err;
    cv::Size winSize = cv::Size(21, 21);
    cv::TermCriteria termcrit = cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01);
    cv::calcOpticalFlowPyrLK(prevImg_gray, currentImg_gray, points1, points2, status, err, winSize, 3, termcrit, 0, 0.001);
    // getting rid of points for which the KLT tracking failed or which have gone outside the frame
    int indexCorrection = 0;
    for (int i = 0; i < status.size(); i++)
    {
        cv::Point2f pt = points2.at(i - indexCorrection);
        if ((status.at(i) == 0) || (pt.x < 0) || (pt.y < 0)) {
            if ((pt.x < 0) || (pt.y < 0)) {
                status.at(i) = 0;
            }
            points1.erase(points1.begin() + (i - indexCorrection));
            points2.erase(points2.begin() + (i - indexCorrection));
            indexCorrection++;
        }
    }
    if (points1.size() <= 5 || points2.size() <= 5) {
        std::cout << "Previous Features : \n" << points1 << std::endl;
        std::cout << "Current Features : \n" << points2 << std::endl;
    }
    return true;
}
</pre>
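One quick diagnostic for output like the log above: a valid essential matrix must have singular values of the form (s, s, 0), so entries around 1e38 mean something upstream (point preparation, matching, or matrix types) has already gone wrong before the decomposition. A validity check, sketched in NumPy rather than the app's C++:

```python
import numpy as np

def is_valid_essential(E, rtol=1e-6):
    # A true essential matrix has two equal singular values and one zero.
    s = np.linalg.svd(E, compute_uv=False)
    return bool(np.isclose(s[0], s[1], rtol=rtol)) and s[2] < rtol * s[0]

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

good = skew(np.array([0.3, -0.1, 1.0])) @ np.eye(3)   # E = [t]_x R by construction
bad = np.array([[0., -1.655, 2.9e10],
                [-1.8e38, 1.77, 0.],
                [-598.5, 1.35, 1.5e23]])               # shaped like the log output
print(is_valid_essential(good))   # True
print(is_valid_essential(bad))    # False
```

Running the equivalent check on the matrix in the app would confirm whether the problem is in the estimation inputs rather than in recoverPose.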
Any help regarding this would be appreciated.
Asked by Shaswat on Sun, 17 Apr 2016 13:08:06 -0500

Question: OpenCV findEssentialMat() and recoverPose() sign convention?
http://answers.opencv.org/question/79059/opencv-findessentialmat-and-recoverpose-sign-convention/
If I have two images L and R taken from the left and right cameras, respectively, and I call the essential matrix function as follows:
E = findEssentialMat(rightImagePoints, leftImagePoints, ...), and subsequently recoverPose(E, rightImagePoints, leftImagePoints)...
The sign of the translation vector I get in the end is not [-1, 0, 0] as it should be (because I am calculating the pose of camera L from camera R) but [1, 0, 0]. Can anyone explain why? I naturally assumed it would check chirality between points1 and points2 when the functions are passed points in the order points1, points2. Which camera is considered the origin in my function call?
Asked by saihv on Fri, 11 Dec 2015 23:31:23 -0600

Question: Pose estimation using PNP: Strange wrong results
http://answers.opencv.org/question/76450/pose-estimation-using-pnp-strange-wrong-results/
Hello, I am trying to use the PnP algorithm implementations in OpenCV (EPNP, ITERATIVE, etc.) to get metric pose estimates of the cameras in a two-camera pair (not a conventional stereo rig; the cameras are free to move independently of each other). My source of images currently is a robot simulator (Gazebo), where two cameras are simulated in a scene of objects. The images are almost ideal: i.e., zero distortion, no artifacts.
So to start off, this is my first pair of images.
[![enter image description here][1]][1] [![enter image description here][2]][2]
I assume the right camera as "origin". In metric world coordinates, left camera is at (1,1,1) and right is at (-1,1,1) (2m baseline along X). Using feature matching, I construct the essential matrix and thereby the R and t of the left camera w.r.t. right. This is what I get.
R in euler angles: [-0.00462468, -0.0277675, 0.0017928]
t matrix: [-0.999999598978524; -0.0002907901840156801; -0.0008470441900959029]
Which is right, because the displacement is only along the X axis in the camera frame. For the second pair, the left camera is now at (1,1,2) (moved upwards by 1m).
[![enter image description here][3]][3] [![enter image description here][4]][4]
Now the R and t of left w.r.t. right become:
R in euler angles: [0.0311084, -0.00627169, 0.00125991]
t matrix: [-0.894611301085138; -0.4468450866008623; -0.0002975759140359637]
Which again makes sense: there is no rotation, and the displacement along the Y axis is half the baseline (along X), although this t doesn't give me real metric estimates.
So in order to get metric estimates of pose in case 2, I constructed the 3D points using points from camera 1 and camera 2 in case 1 (taking the known baseline into account: which is 2m), and then ran the PNP algorithm with those 3D points and the image points from case 2. Strangely, both ITERATIVE and EPNP algorithms give me a similar and completely wrong result that looks like this:
Pose according to final PNP calculation is:
Rotation euler angles: [-9.68578, 15.922, -2.9001]
Metric translation in m: [-1.944911461358863; 0.11026997013253; 0.6083336931263812]
Am I missing something basic here? I thought this should be a relatively straightforward calculation for PnP, given that there's no distortion etc. Any comments or suggestions would be very helpful, thanks!
EDIT: Code for PNP implementation
Let's say pair 1 consists of queryImg1 and trainImg1; and pair 2 consists of queryImg2 and trainImg2 (2d vectors of points). Triangulation with pair 1 results in a vector of 3D points points3D.
1. Iterate through trainImg1 and see if the same point exists in trainImg2 (because that camera does not move)
2. If the same feature is tracked in trainImg2, find the corresponding match from queryImg2.
3. Form vectors P3D_tracked (subset of tracked 3D points), P2D_tracked (subset of tracked 2d points).
<pre>
for(int i = 0; i < (int)trainImg1.size(); i++)
{
    vector<Point2d>::iterator iter = find(trainImg2.begin(), trainImg2.end(), trainImg1[i]);
    size_t index = distance(trainImg2.begin(), iter);
    if(index != trainImg2.size())
    {
        P3D_tracked.push_back(points3D[i]);
        P2D_tracked.push_back(queryImg2[index]);
    }
}
solvePnP(P3D_tracked, P2D_tracked, K, d, rvec, tvec, false, CV_EPNP);
</pre>
For one example I ran, the original set of points had a size of 761, and the no. of tracked features in the second pair was 455.
[1]: http://i.stack.imgur.com/IEcPpm.jpg
[2]: http://i.stack.imgur.com/sfnlxm.jpg
[3]: http://i.stack.imgur.com/8k7xbm.jpg
[4]: http://i.stack.imgur.com/mZ7Mzm.jpg
Asked by saihv on Mon, 16 Nov 2015 21:27:48 -0600

Question: findEssentialMat() pose estimation: wrong translation vector in some cases
http://answers.opencv.org/question/68328/findessentialmat-pose-estimation-wrong-translation-vector-in-some-cases/
I am currently using epipolar-geometry-based pose estimation to estimate the pose of one camera w.r.t. another, with a non-zero baseline between the cameras. I am using the five-point algorithm (implemented as findEssentialMat in OpenCV) to determine the up-to-scale translation and rotation between the two cameras.
I have found two interesting problems when working with this; it would be great if someone could share their views, as I don't have a strong theoretical background in computer vision:
1. If the rotation of the camera is about the Z axis, i.e., parallel to the scene, and the translation is non-zero, the translation between camera1 and camera2 (which is along X in the real world) is wrongly estimated to be along Z. Example case: cam1 and cam2 spaced by approx. 0.5 m on the X axis, cam2 rotated clockwise by 45 deg.
Image pair
![image pair1][1]
Output:
Translation vector is [-0.02513, 0.0686, 0.9973] (wrong, should be along X)
Rotation Euler angles: [-7.71364, 6.0731, -43.7583] (correct)
2. The geometry between image1 and image2 is not the exact inverse of the geometry between image2 and image1. While the correspondence in one direction produces the correct translation, the other direction is way off (the rotation values are close, though). Example below, where camera2 was displaced along X and rotated by ~30 degrees about Y.
Image 1 to image 2
![1to2][2]
Output: Rotation [-1.578, 24.94, -0.1631] (Close) Translation [-0.0404, 0.035, 0.998] (Wrong)
Image 2 to image 1
![2to1][3]
Output: Rotation [2.82943, -30.3206, -3.32636] Translation [0.99366, -0.0513, -0.0999] (Correct)
Looks like it has no issues figuring the rotations out but the translations are a hit or miss.
As to question 1, I was initially concerned that, because the rotation is about the Z axis, the points might appear to be all coplanar. But the five-point algorithm paper specifically states: "The 5-point method is essentially unaffected by the planar degeneracy and still works".
Thank you for your time!
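On point 2, the exact relationship the two directions should satisfy can be written down: if the 1-to-2 transform is X2 = R X1 + t, the 2-to-1 transform is (R^T, -R^T t). A NumPy sketch with an example pose similar to the one described (30 degrees about Y, translation along X); how far the two estimates above deviate from this identity quantifies the estimation error:

```python
import numpy as np

def invert_pose(R, t):
    # Inverse of X2 = R @ X1 + t is X1 = R.T @ X2 - R.T @ t
    return R.T, -R.T @ t

theta = np.deg2rad(30)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])   # 30 deg about Y
t = np.array([1.0, 0.0, 0.0])

R_inv, t_inv = invert_pose(R, t)
X1 = np.array([0.4, -0.2, 3.0])
X2 = R @ X1 + t
print(np.allclose(R_inv @ X2 + t_inv, X1))   # True: round trip recovers X1
```

Note that t from the essential matrix is only defined up to scale, so the comparison between the two directions should be made on normalized translation vectors.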
[1]:http://s14.postimg.org/umezzqt7z/case1.jpg
[2]:http://s14.postimg.org/h5moo3wtr/1to2.png
[3]:http://s14.postimg.org/x55c7nsvj/2to1.png
Asked by saihv on Mon, 10 Aug 2015 21:35:12 -0500

Question: Pose estimation: solvePnP and epipolar geometry do not agree
http://answers.opencv.org/question/68149/pose-estimation-solvepnp-and-epipolar-geometry-do-not-agree/
Hi, I have a relative camera pose estimation problem where I am looking at a scene with differently oriented cameras spaced a certain distance apart. Initially, I compute the essential matrix using the 5-point algorithm and decompose it to get the R and t of camera 2 w.r.t. camera 1.
I thought it would be a good idea to check by triangulating the two sets of image points into 3D and then running solvePnP on the 3D-2D correspondences, but the result I get from solvePnP is way off. I am trying this because bundle adjustment would not work, as my scale keeps changing, so I thought this would be one way to "refine" the pose: correct me if I am wrong. Anyway, in one case I had a 45 degree rotation between camera 1 and camera 2 about the Z axis, and the epipolar geometry part gave me this answer:
Relative camera rotation is [1.46774, 4.28483, 40.4676]
Translation vector is [-0.778165583410928; -0.6242059242696293; -0.06946429947410336]
solvePnP, on the other hand..
Camera1: rvecs [0.3830144497209735; -0.5153903947692436; -0.001401186630803216]
tvecs [-1777.451836911453; -1097.111339375749; 3807.545406775675]
Euler1 [24.0615, -28.7139, -6.32776]
Camera2: rvecs [1407374883553280; 1337006420426752; 774194163884064.1] (!!)
tvecs[1.249151852575814; -4.060149502748567; -0.06899980661249146]
Euler2 [-122.805, -69.3934, 45.7056]
Something is troublingly off with the rvecs of camera2 and tvec of camera 1. My code involving the point triangulation and solvePnP looks like this:
<pre>
points1.convertTo(points1, CV_32F);
points2.convertTo(points2, CV_32F);
// Homogenize image points
points1.col(0) = (points1.col(0) - pp.x) / focal;
points2.col(0) = (points2.col(0) - pp.x) / focal;
points1.col(1) = (points1.col(1) - pp.y) / focal;
points2.col(1) = (points2.col(1) - pp.y) / focal;
points1 = points1.t(); points2 = points2.t();
cv::triangulatePoints(P1, P2, points1, points2, points3DH);
cv::Mat points3D;
convertPointsFromHomogeneous(Mat(points3DH.t()).reshape(4, 1), points3D);
cv::solvePnP(points3D, points1.t(), K, noArray(), rvec1, tvec1, 1, CV_ITERATIVE);
cv::solvePnP(points3D, points2.t(), K, noArray(), rvec2, tvec2, 1, CV_ITERATIVE);
</pre>
And then I am converting the rvecs through Rodrigues to get the Euler angles: but since rvecs and tvecs themselves seem to be wrong, I feel something's wrong with my process. Any pointers would be helpful. Thanks!
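When the rvecs themselves look suspicious (entries around 1e15 cannot be a sane rotation vector to begin with), it can help to cross-check the Rodrigues conversion independently of OpenCV. A reference NumPy sketch of the rotation-vector-to-matrix direction:

```python
import numpy as np

def rodrigues(rvec):
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2,
    # where theta = |rvec| and K is the skew matrix of the unit axis.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # 90 deg about Z maps x to y
```

If cv2.Rodrigues and this formula disagree on the same input, the input itself (its type or magnitude) is almost certainly the problem.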
Asked by saihv on Fri, 07 Aug 2015 16:28:15 -0500

Question: Epipolar geometry pose estimation: Epipolar lines look good but wrong pose
http://answers.opencv.org/question/67540/epipolar-geometry-pose-estimation-epipolar-lines-look-good-but-wrong-pose/
I am trying to use OpenCV to estimate the pose of one camera relative to another, using SIFT feature tracking, FLANN matching, and subsequent computation of the fundamental and essential matrices. After decomposing the essential matrix, I check for degenerate configurations and obtain the "right" R and t.
Problem is, they never seem to be right. I am including a couple of image pairs:
1. Image 2 taken with 45 degree rotation along the Y axis and same position w.r.t. Image 1.
<a href="http://i.imgur.com/lEsdjFn.jpg">Image pair</a>
<a href="http://i.imgur.com/hCYV2kN.jpg">Result </a>
2. Image 2 taken from approx. couple of meters away along the negative X direction, slight displacement in the negative Y direction. Approx. 45-60 degree rotation in camera pose along Y axis.
<a href="http://i.imgur.com/zO1hwh3.jpg">Image pair</a>
<a href="http://i.imgur.com/nn803lk.jpg">Result</a>
The translation vector in the second case seems to overestimate the movement in Y and underestimate the movement in X. The rotation matrices, when converted to Euler angles, give wrong results in both cases. This happens with many other datasets as well. I have tried switching the fundamental matrix computation between RANSAC, LMEDS, etc., and am now doing it with RANSAC followed by a second computation using only the inliers with the 8-point method. Changing the feature detection method does not help either. The epipolar lines seem to be correct, and the fundamental matrix satisfies x'.F.x = 0.
Am I missing something fundamental here? Given that the program understands the epipolar geometry properly, what could possibly be happening that results in a completely wrong pose? I am doing the check to make sure points lie in front of both cameras. Any thoughts/suggestions would be very helpful. Thanks!
<a href="http://pastebin.com/42PTHPP6">Code</a> for reference
Asked by saihv on Fri, 31 Jul 2015 13:35:56 -0500

Question: undistortPoints, findEssentialMat, recoverPose: What is the relation between their arguments?
http://answers.opencv.org/question/65788/undistortpoints-findessentialmat-recoverpose-what-is-the-relation-between-their-arguments/
**TL;DR**: What relation should hold between the arguments passed to `undistortPoints`, `findEssentialMat` and `recoverPose`?
I have code like the following in my program
<pre>
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
</pre>
I `undistort` the points before finding the essential matrix. The documentation states that one can pass the new camera matrix as the last argument; when it is omitted, points are in *normalized* coordinates (between -1 and 1). In that case, I would expect to pass 1 for the focal length and (0,0) for the principal point to `findEssentialMat`, as the points are normalized. So I would think this to be the way:
1. **Possibility 1** (normalize coordinates)
<pre>
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
</pre>
2. **Possibility 2** (do not normalize coordinates)
<pre><code>
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
double focal = K.at<double>(0,0);
Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
</code></pre>
However, I have found that I only get reasonable results when I tell `undistortPoints` that the old camera matrix is still valid (I guess in that case only distortion is removed) and pass arguments to `findEssentialMat` as if the points were normalized, which they are not.
Is this a bug, insufficient documentation or user error?
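On the relation itself: as I read the docs, `undistortPoints` without a new camera matrix returns coordinates multiplied by `K^-1`, and `findEssentialMat` uses its (focal, pp) arguments to apply the very same subtract-and-divide conversion internally. A minimal numpy sketch (the intrinsics are hypothetical) showing the two conversions agree:

```python
import numpy as np

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def normalize_with_K(pt):
    """What undistortPoints does (ignoring distortion) when no new
    camera matrix is passed: multiply the homogeneous point by K^-1."""
    x = np.array([pt[0], pt[1], 1.0])
    xn = np.linalg.inv(K) @ x
    return xn[:2] / xn[2]

def normalize_with_focal_pp(pt, focal, pp):
    """The conversion implied by findEssentialMat's (focal, pp) arguments."""
    return (np.asarray(pt, dtype=float) - np.asarray(pp, dtype=float)) / focal

pt = (400.0, 300.0)
a = normalize_with_K(pt)
b = normalize_with_focal_pp(pt, 800.0, (320.0, 240.0))
assert np.allclose(a, b)            # both give (0.1, 0.075)
```

So Possibility 1 (already-normalized points plus focal=1, pp=(0,0)) and Possibility 2 (pixel points plus K's focal and principal point) should in principle each be self-consistent; mixing the two conventions is what produces garbage.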
**Update**
It might be that `correctMatches` should be called with (non-normalized) image/pixel coordinates and the fundamental matrix rather than E; this may be another mistake in my computation. It can be obtained as `F = K^-T * E * K^-1`.
themightyoarfishWed, 08 Jul 2015 05:43:24 -0500http://answers.opencv.org/question/65788/
OpenCV: Essential Matrix SVD decomp
http://answers.opencv.org/question/64534/opencv-essential-matrix-svd-decomp/
Hi Folks,
I am trying to get the camera motion vector based on OpenCV optical flow. I use the C# wrapper for Unity of OpenCV 2.4.10, but it is just a wrapper. Here is the test case:
1. Calibrated my camera and have camera matrix K (3x3)
2. Use two 100%-identical images, framePrev and frameThis, as optical flow frames (meaning no motion)
3. Selected features (2d points) from both images via
<pre><code>goodFeaturesToTrack (frameThis, pointsThis, iGFFTMax, 0.05, 20);
goodFeaturesToTrack (framePrev, pointsPrev, iGFFTMax, 0.05, 20);</code></pre> so I have features pointsPrev and pointsThis<br/><br/>
4. Use <pre><code>calcOpticalFlowPyrLK (framePrev, frameThis, pointsPrev, pointsThis, status, err);</code></pre> to verify the flow for the points; then, by analyzing the status and err arrays, I make sure my pointsPrev and pointsThis are identical pairs of points in image pixel coordinates<br/><br/>
5. Select the first 8 pairs from pointsPrev and pointsThis (simply truncate the arrays), then get the fundamental matrix: <pre><code>F = Calib3d.findFundamentalMat(pointsPrev, pointsThis, Calib3d.FM_8POINT, 2, 0.99); </code></pre> When the points in all the pairs are identical (no motion), it gives me a 3x3 matrix of all zeros; I assume that is correct (or is it?)<br/><br/>
6. Then I get the essential matrix as <code>E = K'.t() * F * K</code> according to HZ 9.12; I have one camera, so K' = K.
<pre><code>gemm (K.t (),F,1,null,0,tmpMat,Core.GEMM_3_T);
gemm (tmpMat,K,1,null,0,E,Core.GEMM_3_T);</code></pre>
when F = |0|, then E = |0| as well<br/><br/>
7. Finally I apply SVD decomposition on E:
<pre><code>SVDecomp(E,W,U,VT);</code></pre></br><br/>
8. Analyzing W, U, VT output matrices, I can observe these values:
<pre><code>
W: 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000
U: -0.577, -0.408, -0.707, -0.577, -0.408, 0.707, 0.577, -0.816, 0.000
Vt: 1.000, 0.000, 0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 1.000
</code></pre>
These values look strange to me: according to the books/manuals, the camera translation vector should be U.col(2), which here is Vector3(-0.7071, 0.7071, 0), and that is not correct.
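As a baseline for what to expect, the SVD of a well-formed essential matrix E = [t]x * R has two equal singular values and one zero, and the last column of U is the translation direction up to sign (the scale is lost). A numpy sketch with an arbitrary, hypothetical rotation and translation:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
E = skew(t) @ R

U, s, Vt = np.linalg.svd(E)
# Two equal singular values and one (numerically) zero:
assert np.allclose(s[0], s[1]) and np.isclose(s[2], 0.0)
# U[:,2] spans the left null space of E, i.e. the translation direction
# up to sign:
t_dir = t / np.linalg.norm(t)
assert np.allclose(np.abs(U[:, 2] @ t_dir), 1.0)
```

For the all-zero E of the no-motion case, every singular value is 0 and U, Vt are arbitrary orthonormal bases, which is why the values observed in step 8 carry no geometric information.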
Other observations, when I test for different image frames:
<ul>
<li>U values are always between -1 and 1, which should not be a translation; they look more like sine/cosine values (again, 0.7071 is the sine or cosine of pi/4)</li>
<li>The fundamental matrix outputs are radically different for different algorithms (8POINT, 7POINT, RANSAC, LMEDS), even for the same pairs of corresponding points (features)</li>
<li>Using a different number of pairs of points (features), say 5, 7, 8, 15, or 40, with the same algorithm also radically changes the fundamental matrix output</li>
</ul>
I do really need your help, thank you in advance!
This is a copy of my question on StackOverflow:
http://stackoverflow.com/questions/30953989/opencv-essential-matrix-svd-decomp
Kind Regards, Eugene
<b>EDIT 1:</b> Additional observations
Then I tried to find the fundamental matrix for these frame points:
<pre><code>
MatOfPoint2f p1 = new MatOfPoint2f(new Point(100,100),new Point(100,200),new Point(100,300),
new Point (200,100),new Point(200,200),new Point(200,300),
new Point(300,100),new Point(300,200),new Point(300,300));
MatOfPoint2f p2 = new MatOfPoint2f(new Point(80,80),new Point(80,200),new Point(80,320),
new Point (200,80),new Point(200,200),new Point(200,320),
new Point(320,80),new Point(320,200),new Point(320,320));
</code></pre>
The points correspond to the case when the camera moves in the forward direction; all the features are center-symmetric.
When I use findFundamentalMat with 8POINT algorithm - The Fund matrix is
<pre><code>
F =  0.00000000,  0.00010236, -0.02047281,
    -0.00010236,  0.00000000,  0.02047281,
     0.02047281, -0.02047281,  0.00000000
</code></pre>
But when I use RANSAC - the result is
<pre><code>
F = 0.00000000, 0.00000000, 0.00000000,
    0.00000000, 0.00000000, 0.00000000,
    0.00000000, 0.00000000, 0.00000000
</code></pre>
Eugene BartoshSat, 20 Jun 2015 14:48:56 -0500http://answers.opencv.org/question/64534/
Singular Value Decomposition of an Essential Matrix
http://answers.opencv.org/question/55031/single-value-decomposition-of-an-essential-matrix/
I don't know whether this is directly related to OpenCV.
It might be more like a computer vision question.
Assuming you have an essential matrix E for a stereo system,
I would like to know the geometric interpretation of u, s, v' in terms of the stereo system (e.g. epipole, baseline, ...),
once you decompose E using SVD.
I tried to dig into the geometric interpretation of SVD, and it seems that u and v' are rotation matrices, while s holds the scales. However, I cannot really relate them to the geometry of the stereo system.
Thank you.
milLII3ywayThu, 12 Feb 2015 02:33:35 -0600http://answers.opencv.org/question/55031/
Estimate camera pose (extrinsic parameters) from homography / essential matrix
http://answers.opencv.org/question/38340/estimate-camera-pose-extrinsic-parameters-from-homography-essential-matrix/
I am trying to estimate the camera pose from an estimated homography as explained in chapter 9.6.2 of [Hartley & Zisserman's book](http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf).
Basically, it says the following:
Let `W=(0,-1,0; 1,0,0; 0,0,1)`. The left camera is assumed at `[I|0]`.
Given an SVD decomposition of the essential matrix `E`
SVD(E) = U*diag(1,1,0)*V'
the extrinsic camera parameters [R|t] of the right camera are one of the following four solutions:
<pre><code>
[U W V' | U*(0,0,+1)']
[U W V' | U*(0,0,-1)']
[U W'V' | U*(0,0,+1)']
[U W'V' | U*(0,0,-1)']
</code></pre>
Now I am struggling with a few main issues.
1) I only have access to an estimated homography `H`. The way I understand it, it's not exactly an essential matrix, if the singular values are not two equal values. Therefore, what I am struggling with is that instead of
SVD(H) = U * diag(1,1,0) * V'
the SVD decomposes into something like
SVD(H) = U * diag(70, 1.6, 0.0001) * V'
It's really weird that the singular values are not almost identical and are that large. My first question is why this happens and what to do about it. Normalization? Scaling?
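On the unbalanced singular values: a general 3x3 estimate (and certainly a homography) is not an essential matrix. If the estimate really is meant to be E, the standard fix is to project it onto the essential manifold by forcing the singular values to (s, s, 0). A hedged numpy sketch of that projection:

```python
import numpy as np

def project_to_essential(E_approx):
    """Replace the singular values by (s, s, 0), which gives the closest
    true essential matrix in the Frobenius norm (s = mean of the two
    largest singular values)."""
    U, sv, Vt = np.linalg.svd(E_approx)
    s = (sv[0] + sv[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

# A noisy matrix with unbalanced singular values, like the one in the question:
rng = np.random.default_rng(0)
E_noisy = rng.standard_normal((3, 3))
E_clean = project_to_essential(E_noisy)

sv = np.linalg.svd(E_clean, compute_uv=False)
assert np.allclose(sv[0], sv[1]) and np.isclose(sv[2], 0.0)
```

A homography, though, is a different object (it maps points to points, not points to epipolar lines), so decomposing H with the essential-matrix recipe is not meaningful even after such a projection; at most one can normalize H by its middle singular value.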
2) After a lot of thinking and more experimenting, I came up with the following implementation:
<pre><code>
static const Mat W = (Mat_<double>(3, 3) <<
    1,  0, 0,
    0, -1, 0,
    0,  0, 1
);
// Compute SVD of H
SVD svd(H);
cv::Mat_<double> R1 = svd.u * Mat::diag(svd.w) * svd.vt;
cv::Mat_<double> R2 = svd.u * Mat::diag(svd.w) * W * svd.vt;
cv::Mat_<double> t = svd.u.col(2);
</code></pre>
This way I get four possible solutions `[R1|t], [R1|-t], [R2|t], [R2|-t]`, which produce some sort of results.
Apparently, in W, I don't swap x/y coordinates and I don't invert the x-coordinate. Only the y-coordinate is inverted.
I believe the swap can be explained by different image coordinate systems, so column and row might be swapped in my implementation. But I can't explain why I only have to mirror and not rotate, and overall I am not sure whether the implementation is correct.
3) In theory, I think I need to triangulate a pair of matches and determine whether the 3D point lies in front of both image planes. Only one of the four solutions will satisfy that condition. However, I don't know how to determine the near plane's normal and distance from the uncalibrated projection matrix.
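On point 3): the usual test does not need the near plane at all. One triangulates a match linearly and checks that the depth (the third coordinate in each camera frame) is positive in both cameras; only the correct one of the four solutions passes for most points. A minimal numpy sketch with a hypothetical normalized camera pair at [I|0] and [I|t]:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence (HZ 12.2)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]

def in_front_of_both(P1, P2, X):
    """Cheirality test: positive depth in both (normalized) cameras,
    i.e. the z-coordinate of R*X + t is positive on each side."""
    return (P1 @ X)[2] > 0 and (P2 @ X)[2] > 0

# Toy setup: first camera at [I|0], second translated along -x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])        # a point in front of both
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
assert np.allclose(X, X_true, atol=1e-9)
assert in_front_of_both(P1, P2, X)
```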
4) This is all used in the context of image stitching. My goal is to enhance the current image stitching pipeline to support not only cameras rotating around themselves, but also translating cameras with little rotation. Some preliminary results and a follow-up question will be posted soon. (TODO)
DuffycolaTue, 29 Jul 2014 12:50:57 -0500http://answers.opencv.org/question/38340/
Decomposition of essential matrix leads to wrong rotation and translation
http://answers.opencv.org/question/30824/decomposition-of-essential-matrix-leads-to-wrong-rotation-and-translation/
Hi,
I am doing some SfM and having troubles getting R and T from the essential matrix.
Here is what I am doing in source code:
<pre><code>
Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
Core.multiply(E, cameraMatrix, E);
Mat R = new Mat();
Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);
Mat T = new Mat();
calculateRT(E, R, T);
</code></pre>
<pre><code>
private void calculateRT(Mat E, Mat R, Mat T){
    /*
    * //-- Step 6: calculate Rotation Matrix and Translation Vector
    Matx34d P;
    //decompose E
    SVD svd(E,SVD::MODIFY_A);
    Mat svd_u = svd.u;
    Mat svd_vt = svd.vt;
    Mat svd_w = svd.w;
    Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
    Mat_<double> R = svd_u * Mat(W) * svd_vt;
    Mat_<double> T = svd_u.col(2); //u3
    if (!CheckCoherentRotation (R)) {
        std::cout<<"resulting rotation is not coherent\n";
        return 0;
    }
    */
    Mat w = new Mat();
    Mat u = new Mat();
    Mat vt = new Mat();
    Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // Maybe use flags
    double[] W_Values = {0,-1,0,1,0,0,0,0,1};
    Mat W = new Mat(new Size(3,3), CvType.CV_64FC1, new Scalar(W_Values) );
    Core.multiply(u, W, R);
    Core.multiply(R, vt, R);
    T = u.col(2);
}
</code></pre>
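One thing worth double-checking in the Java code above: `Core.multiply` is a per-element product, while the comments describe the matrix product E = K'.t() * F * K (which `Core.gemm` computes). A numpy sketch of the intended computation, with toy values (the K and F below are hypothetical), showing that the two products differ:

```python
import numpy as np

# Hypothetical intrinsics and fundamental matrix.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
F = np.array([[0.0, -1e-6, 3e-4],
              [1e-6, 0.0, -5e-4],
              [-3e-4, 5e-4, 0.0]])

# Intended: a true matrix product, K^T * F * K (HZ 9.12).
E = K.T @ F @ K

# An element-wise product (what Core.multiply computes) gives a
# different, wrong result whenever the matrices are not diagonal:
E_wrong = K.T * F * K
assert not np.allclose(E, E_wrong)

# HZ 9.13 decomposition of the correctly-formed E:
U, w, Vt = np.linalg.svd(E)
W = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
R1 = U @ W @ Vt          # one of the two rotation candidates
t = U[:, 2]              # translation direction, up to sign and scale
assert np.isclose(abs(np.linalg.det(R1)), 1.0)
```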
And here are the results of all matrices during and after the calculation.
<pre><code>
Number matches: 10299
Number of good matches: 590
Number of obj_points left: 590.0
Fundamental:
[4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
0.0004532506630569588, -0.0009224427024602799, 1]
Essential:
[0.05410018455525099, 0, 0;
0, 0.8272987826496967, 0;
0, 0, 1]
U:
[0, 0, 1;
0, 0.9999999999999999, 0;
1, 0, 0]
W:
[1; 0.8272987826496967; 0.05410018455525099]
vt:
[0, 0, 1;
0, 1, 0;
1, 0, 0]
R:
[0, 0, 0;
0, 0, 0;
0, 0, 0]
T:
[1; 0; 0]
</code></pre>
And for completion here are the image I am using
left: https://drive.google.com/file/d/0Bx9OKnxaua8kXzRFNFRtMlRHSzg/edit?usp=sharing
right: https://drive.google.com/file/d/0Bx9OKnxaua8kd3hyMjN1Zll6ZkE/edit?usp=sharing
Can someone point out where something is going wrong, or what I am doing wrong?
glethienSat, 29 Mar 2014 06:52:14 -0500http://answers.opencv.org/question/30824/
From Fundamental Matrix To Rectified Images
http://answers.opencv.org/question/27155/from-fundamental-matrix-to-rectified-images/
I have stereo photos coming from the same camera and I am trying to use them for 3D reconstruction.
To do that, I extract SURF features and calculate the fundamental matrix. Then I get the essential matrix and, from there, the rotation matrix and translation vector. Finally, I use them to obtain rectified images.
The problem is that it works only with some specific parameters.
If I set *minHessian* to *430*, I get pretty nice rectified images. But any other value gives me just a black image or obviously wrong images.
In all cases, the fundamental matrix seems to be fine (I draw epipolar lines on both the left and right images). However, I cannot say the same about the essential matrix, rotation matrix, and translation vector, even though I tried all 4 possible combinations of *R* and *T*.
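A rotation candidate from the SVD decomposition can be sanity-checked before it is handed to stereoRectify; this is presumably what a helper like the `CheckCoherentRotation` referenced in the code below does. A valid rotation is orthonormal with determinant +1 (determinant -1 means a reflection). A minimal numpy sketch of such a check (the function name is illustrative):

```python
import numpy as np

def check_coherent_rotation(R, tol=1e-7):
    """Minimal validity test for a rotation matrix: orthonormal columns
    and determinant +1."""
    return (np.allclose(R.T @ R, np.eye(3), atol=tol)
            and abs(np.linalg.det(R) - 1.0) < tol)

assert check_coherent_rotation(np.eye(3))
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
assert check_coherent_rotation(Rz)
assert not check_coherent_rotation(-np.eye(3))   # reflection, det = -1
```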
Here is my code. Any help or suggestion would be appreciated. Thanks!
<pre><code>
Mat img_1 = imread( "images/imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "images/imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 430;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
namedWindow( "Matches", CV_WINDOW_NORMAL );
imshow("Matches", img_matches );
waitKey(0);
//-- Step 4: calculate Fundamental Matrix
vector<Point2f>imgpts1,imgpts2;
for( unsigned int i = 0; i<matches.size(); i++ )
{
// queryIdx is the "left" image
imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
// trainIdx is the "right" image
imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
Mat F = findFundamentalMat (imgpts1, imgpts2, FM_RANSAC, 0.1, 0.99);
//-- Step 5: calculate Essential Matrix
double data[] = {1189.46 , 0.0, 805.49,
0.0, 1191.78, 597.44,
0.0, 0.0, 1.0};//Camera Matrix
Mat K(3, 3, CV_64F, data);
Mat_<double> E = K.t() * F * K;
//-- Step 6: calculate Rotation Matrix and Translation Vector
Matx34d P;
//decompose E
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
Mat_<double> R = svd_u * Mat(W) * svd_vt; //
Mat_<double> T = svd_u.col(2); //u3
if (!CheckCoherentRotation (R)) {
std::cout<<"resulting rotation is not coherent\n";
return 0;
}
//-- Step 7: Reprojection Matrix and rectification data
Mat R1, R2, P1_, P2_, Q;
Rect validRoi[2];
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
stereoRectify(K, D, K, D, img_1.size(), R, T, R1, R2, P1_, P2_, Q, CV_CALIB_ZERO_DISPARITY, 1, img_1.size(), &validRoi[0], &validRoi[1] );
</code></pre>
gozariFri, 24 Jan 2014 08:48:12 -0600http://answers.opencv.org/question/27155/
Pose estimation produces wrong translation vector
http://answers.opencv.org/question/18565/pose-estimation-produces-wrong-translation-vector/
Hi,<br>
I'm trying to extract camera poses from a set of two images using features I extracted with BRISK. The feature points match quite brilliantly when I display them and the rotation matrix I get seems to be reasonable. The translation vector, however, is not.
I'm using the simple method of computing the fundamental matrix, then the essential matrix, and decomposing it via SVD, as presented in e.g. H&Z:
<pre><code>
Mat fundamental_matrix =
    findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);
Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;
SVD decomp (essentialMatrix, SVD::FULL_UV);
Mat W = Mat::zeros(3, 3, CV_64F);
W.at<double>(0,1) = -1;
W.at<double>(1,0) = 1;
W.at<double>(2,2) = 1;
Mat R1 = decomp.u * W * decomp.vt;
Mat R2 = decomp.u * W.t() * decomp.vt;
if(determinant(R1) < 0)
    R1 = -1 * R1;
if(determinant(R2) < 0)
    R2 = -1 * R2;
Mat trans = decomp.u.col(2);
</code></pre>
However, the resulting translation vector is horrible, especially the z coordinate: it is usually near (0,0,1) regardless of the camera movement I performed while recording these images. Sometimes the first two coordinates seem roughly right, but they are far too small in comparison to the z coordinate (e.g. I moved the camera mainly in +x and the resulting vector is something like (0.2, 0, 0.98)).
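One caveat when judging this vector: `decomp.u.col(2)` is a column of an orthogonal matrix, so it is always unit length; only its direction (up to a sign resolved by the cheirality test) is meaningful, and the magnitude of the motion is not recoverable from E alone. A numpy sketch illustrating that scaling E, or equivalently the scene, leaves the recovered direction unchanged (the pose values are hypothetical):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

R = np.eye(3)
t = np.array([0.2, 0.0, 0.98])        # mostly-forward motion, as in the question
E = skew(t) @ R

# Scaling E does not change the recovered translation direction:
for scale in (1.0, 5.0, 0.01):
    U, _, _ = np.linalg.svd(scale * E)
    t_hat = U[:, 2]
    assert np.isclose(np.linalg.norm(t_hat), 1.0)    # always unit length
    cos = abs(t_hat @ (t / np.linalg.norm(t)))
    assert np.isclose(cos, 1.0)                      # same direction up to sign
```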
Any help would be appreciated.
FiredragonwebSat, 10 Aug 2013 08:37:43 -0500http://answers.opencv.org/question/18565/
Good Calibration for Essential matrix estimation
http://answers.opencv.org/question/13550/good-calibration-for-essential-matrix-estimation/
Hello,
I think I'm having some problems with camera calibration. I'm using the provided sample calibration program with several (20) images taken with an iPhone. I get the camera intrinsic matrix K and the distortion coefficients R. I then load these matrices into another program. This program lets the user select matching features in 2 different undistorted images, from which I can compute the fundamental matrix F, and using K I can get the essential matrix E = K.t() * F * K.
Afterwards, I test both F and E against the epipolar constraint, i.e. x'*F*x=0 or x'*E*x=0, where x and x' are the corresponding points the user selected. For every matching point, the test for the fundamental matrix yields values very close to 0, while the one for the essential matrix returns values as large as 2694990. This is obviously wrong.
From this I can conclude that I must be doing something wrong. I believe the computation for E is right, so that must leave the calibration. What do I need to do for a good calibration?
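Before blaming the calibration, one common pitfall is worth ruling out: x'*E*x=0 holds for *normalized* coordinates (each point premultiplied by K^-1), not for raw pixel coordinates; with pixel coordinates the constraint to test is x'*F*x=0. A numpy sketch with synthetic geometry (the intrinsics and pose are hypothetical) showing how the residuals behave:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical calibration and relative pose.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.3, 0.2])
E = skew(t) @ R
F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)

# A correspondence consistent with this geometry.
X = np.array([0.3, 0.2, 5.0])                   # 3D point, camera-1 frame
x1 = K @ X;           x1 = x1 / x1[2]           # pixel coords, camera 1
x2 = K @ (R @ X + t); x2 = x2 / x2[2]           # pixel coords, camera 2

# F accepts pixel coordinates ...
assert abs(x2 @ F @ x1) < 1e-9
# ... E needs normalized coordinates x_hat = K^-1 * x ...
x1n, x2n = np.linalg.inv(K) @ x1, np.linalg.inv(K) @ x2
assert abs(x2n @ E @ x1n) < 1e-9
# ... and feeding pixel coordinates into E gives a huge, meaningless value.
assert abs(x2 @ E @ x1) > 1.0
```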
Thanks
diegoFri, 17 May 2013 08:39:08 -0500http://answers.opencv.org/question/13550/