Pose estimation produces wrong translation vector

Hi,
I'm trying to extract the relative camera pose from a pair of images, using features I extracted with BRISK. The feature points match quite well when I display them, and the rotation matrix I get seems reasonable. The translation vector, however, is not. I'm using the standard approach of computing the fundamental matrix, deriving the essential matrix from it, and decomposing that via SVD, as presented e.g. in Hartley & Zisserman (H&Z):

// Fundamental matrix from the matched BRISK keypoints (poi1, poi2)
Mat fundamental_matrix =
        findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);

// Essential matrix from the calibration: E = K^T * F * K
Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;

// SVD of E and the W matrix from H&Z
SVD decomp(essentialMatrix, SVD::FULL_UV);
Mat W = Mat::zeros(3, 3, CV_64F);
W.at<double>(0,1) = -1;
W.at<double>(1,0) =  1;
W.at<double>(2,2) =  1;

// The two candidate rotations, flipped if necessary so that det(R) = +1
Mat R1 = decomp.u * W * decomp.vt;
Mat R2 = decomp.u * W.t() * decomp.vt;
if (determinant(R1) < 0)
    R1 = -R1;
if (determinant(R2) < 0)
    R2 = -R2;

// Translation: third column of U (only defined up to sign and scale)
Mat trans = decomp.u.col(2);
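
For comparison, here is a minimal sketch of the same two-view pose recovery using OpenCV's built-in findEssentialMat and recoverPose. This assumes OpenCV 3.x is available and simply reuses poi1, poi2 and calibrationMatrix from above; recoverPose additionally runs the cheirality check to pick one of the four possible (R, t) combinations:

// Sketch only, assuming OpenCV >= 3.0 and the same matched points / intrinsics as above
Mat inlierMask;
Mat E = findEssentialMat(poi1, poi2, calibrationMatrix, RANSAC, 0.999, 1.0, inlierMask);
Mat R, t;
// recoverPose keeps the (R, t) combination that places the triangulated
// inliers in front of both cameras; t comes back as a unit vector
recoverPose(E, poi1, poi2, calibrationMatrix, R, t, inlierMask);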

However, the resulting translation vector is way off, especially the z coordinate: it is usually near (0, 0, 1) regardless of the camera movement I performed while recording the images. Sometimes the first two coordinates seem roughly right, but they are far too small in comparison to the z coordinate, e.g. I moved the camera mainly in +x and the resulting vector is something like (0.2, 0, 0.98). A rough way of quantifying that mismatch is sketched below.
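
This comparison is only illustrative (the expected +x direction is an assumption about the motion I performed, and the sign and scale of the translation are not recoverable from E anyway), but it shows how far the recovered direction is from the intended one:

// Illustrative check only: the "expected" direction (+x here) is an assumption,
// and both the sign and the overall scale of trans are inherently ambiguous
Mat expected = (Mat_<double>(3, 1) << 1, 0, 0);
Mat tUnit = trans / norm(trans);                  // direction only
double cosAngle = std::abs(expected.dot(tUnit));  // abs() because of the sign ambiguity
double angleDeg = std::acos(cosAngle) * 180.0 / CV_PI;
std::cout << "angle between expected and recovered direction: " << angleDeg << " deg" << std::endl;

Any help would be appreciated.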