
Firedragonweb's profile - activity

2017-06-13 05:34:33 -0600 received badge  Notable Question
2016-01-22 12:03:19 -0600 received badge  Popular Question
2015-07-24 03:33:15 -0600 received badge  Student
2013-08-13 17:06:04 -0600 commented answer 3D Reconstruction upto real scale

Pretty much. A few pointers, though: the singular values should be (s, s, 0), so (1.3, 1.05, 0) is a reasonably good fit. About the R: technically this is right, ignoring signs. You may well get a rotation matrix whose determinant is -1 instead of +1; in that case, multiply it by -1. Generally, if you run into problems with this approach, try determining the essential matrix with the 5-point algorithm (implemented in the very newest version of OpenCV; you will have to build it yourself). The scale is indeed impossible to obtain from this information alone, but everything is consistent up to scale: if you define, for example, the distance between the cameras as 1 unit, then everything will be measured in that unit.
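
A minimal sketch of that 5-point route, assuming an OpenCV build new enough to include findEssentialMat and recoverPose; poi1, poi2 and K are hypothetical names for the matched image points and the camera intrinsics:

// Estimate E directly with the 5-point algorithm (RANSAC), then recover
// the relative pose. poi1/poi2/K are placeholders, not code from the thread.
cv::Mat mask;
cv::Mat E = cv::findEssentialMat(poi1, poi2, K, cv::RANSAC, 0.999, 1.0, mask);
cv::Mat R, t;
// recoverPose resolves the four-fold (R, t) ambiguity with a cheirality
// check: it keeps the candidate that puts triangulated points in front
// of both cameras. t comes back as a unit vector (scale is unknowable).
cv::recoverPose(E, poi1, poi2, K, R, t, mask);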

2013-08-11 15:52:05 -0600 commented question Pose estimation produces wrong translation vector

The odometry tip was not too helpful for my current problem, but it may prove useful in the future. The new findEssentialMat (after building the current snapshot) did the trick, however: it seems that the 5-point algorithm is far better suited for pose extraction than the route via the fundamental matrix. Thanks again!

2013-08-11 06:01:43 -0600 commented question Pose estimation produces wrong translation vector

For the time being I would be satisfied if I could read in two images, extract keypoints, estimate the pose between them, and triangulate 3D points for those keypoints. There are plenty of texts covering this, but sadly the pose estimation fails as described above.

2013-08-10 19:31:38 -0600 commented question Pose estimation produces wrong translation vector

Again, thanks for the input. I did check out that five-point.cpp file. I could not yet test the findEssentialMat function, as that obviously requires the full new build, but decomposeEssentialMat and recoverPose yield exactly the same results as my approach. Which is a good thing, I guess. Or not, as it still hides where things go horribly wrong :). As for solvePnP: from what I gather, that function tries to estimate the pose from already calculated 3D object points. I, however, need the pose to calculate those in the first place, or am I mistaken here?
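
To illustrate where solvePnP would fit once a first pair has been triangulated, here is a hedged sketch; objectPoints and imagePoints are hypothetical names for already-triangulated 3D points and their 2D observations in a newer view, K for the intrinsics:

// solvePnP estimates a camera pose from known 3D points and their 2D
// projections, so it only helps from the third view onward, after the
// first pair has been triangulated. All names here are placeholders.
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, K, cv::noArray(), rvec, tvec);
cv::Mat R;
cv::Rodrigues(rvec, R);  // convert the rotation vector to a 3x3 matrix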

2013-08-10 13:58:40 -0600 commented question Pose estimation produces wrong translation vector

Thank you for your reply. That much is clear. However, the direction of that unit vector is completely wrong: there is no way I could uniformly scale a vector that points mostly along Z to reflect the actual movement, which is mostly along X.

2013-08-10 08:53:36 -0600 answered a question 3D Reconstruction upto real scale

From what I gather, you have obtained the fundamental matrix through some means of calibration? Either way, with the fundamental matrix (or the calibration rig itself) you can obtain the relative pose via decomposition of the essential matrix. Once you have that, you can use matched feature points (from a feature detector and descriptor such as SURF or BRISK) to identify which feature point in one image belongs to the same object point as a feature point in the other image.
With that information, you should be able to triangulate away; a sketch follows below.
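
A minimal sketch of that pipeline, assuming OpenCV 2.4-style APIs; image1, image2, K, and the recovered R and t are placeholders:

// Detect and describe features with BRISK, then match (Hamming distance
// is appropriate for binary descriptors).
cv::BRISK brisk;
std::vector<cv::KeyPoint> kp1, kp2;
cv::Mat desc1, desc2;
brisk(image1, cv::noArray(), kp1, desc1);
brisk(image2, cv::noArray(), kp2, desc2);

cv::BFMatcher matcher(cv::NORM_HAMMING);
std::vector<cv::DMatch> matches;
matcher.match(desc1, desc2, matches);

// Projection matrices: P1 = K [I | 0], P2 = K [R | t], with (R, t) from
// the essential-matrix decomposition (assumed available here).
cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
cv::Mat Rt;
cv::hconcat(R, t, Rt);
cv::Mat P2 = K * Rt;

// Gather the matched point pairs and triangulate.
std::vector<cv::Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); ++i) {
    pts1.push_back(kp1[matches[i].queryIdx].pt);
    pts2.push_back(kp2[matches[i].trainIdx].pt);
}
cv::Mat points4D;
cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
// points4D is 4xN homogeneous; divide each column by its 4th coordinate.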

2013-08-10 08:37:43 -0600 asked a question Pose estimation produces wrong translation vector

Hi,
I'm trying to extract the camera pose from a set of two images using features I extracted with BRISK. The feature points match quite brilliantly when I display them, and the rotation matrix I get seems reasonable. The translation vector, however, is not. I'm using the simple method of computing the fundamental matrix, then the essential matrix, and decomposing it via SVD, as presented e.g. in H&Z:

// Fundamental matrix from matched points (RANSAC; deviation is the
// reprojection threshold, 0.9 the confidence).
Mat fundamental_matrix =
        findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);
// Essential matrix from F and the intrinsics: E = K^T * F * K.
Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;
SVD decomp(essentialMatrix, SVD::FULL_UV);
// W is the 90-degree rotation about Z used in the H&Z decomposition.
Mat W = Mat::zeros(3, 3, CV_64F);
W.at<double>(0,1) = -1;
W.at<double>(1,0) =  1;
W.at<double>(2,2) =  1;
// The two rotation candidates.
Mat R1 = decomp.u * W * decomp.vt;
Mat R2 = decomp.u * W.t() * decomp.vt;
// Fix the sign so that det(R) = +1.
if (determinant(R1) < 0)
    R1 = -1 * R1;
if (determinant(R2) < 0)
    R2 = -1 * R2;
// Translation direction (up to sign and scale): last column of U.
Mat trans = decomp.u.col(2);

However, the resulting translation vector is horrible, especially the z coordinate: it is usually near (0, 0, 1) regardless of the camera movement I performed while recording the images. Sometimes the first two coordinates seem roughly right, but they are far too small compared to the z coordinate (e.g. I moved the camera mainly along +x and the resulting vector is something like (0.2, 0, 0.98)). Any help would be appreciated.
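
As noted in the comment thread above, one thing worth checking with this approach is the singular values of E. A minimal sketch of enforcing the (s, s, 0) constraint before decomposing, reusing the variable names from the snippet above:

// A valid essential matrix has singular values (s, s, 0); project the
// estimated E onto that constraint by averaging the first two values.
SVD decomp(essentialMatrix, SVD::FULL_UV);
double s = (decomp.w.at<double>(0) + decomp.w.at<double>(1)) / 2.0;
Mat D = Mat::zeros(3, 3, CV_64F);
D.at<double>(0,0) = s;
D.at<double>(1,1) = s;
Mat E_fixed = decomp.u * D * decomp.vt;
// Re-run the decomposition on E_fixed. Also note that both t and -t are
// consistent with E: the four (R, t) candidates must be disambiguated by
// triangulating a point and keeping the pose that places it in front of
// both cameras (which is what the new recoverPose does internally).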