# Problem with building a pose Mat from rotation and translation matrices

I have two images captured in the same scene, one with a known pose, and I need to calculate the pose of the second (query) image. I have obtained the relative camera pose from the essential matrix. Now I am computing the camera pose through matrix multiplication (here is a formula).

I am trying to build the 4x4 pose Mat from the rotation and translation matrices. My code is the following:

    Pose bestPose = poses[best_view_index];

    Mat cameraMotionMat = bestPose.buildPoseMat();
    cout << "cameraMotionMat: " << cameraMotionMat.rows << ", " << cameraMotionMat.cols << endl;

    float row_a[4] = {0.0, 0.0, 0.0, 1.0};
    Mat row = Mat::zeros(1, 4, CV_64F);
    cout << row.type() << endl;

    cameraMotionMat.push_back(row);
    // cameraMotionMat.at<float>(3, 3) = 1.0;


Earlier in the code, for each view image:

    Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);

    // Recover pose for the view image
    Mat R, t; //, mask;
    recoverPose(E, pts1, pts2, R, t, focal, pp, mask);

    Pose pose(R, t);
    poses.push_back(pose);


Initially, the method bestPose.buildPoseMat() returns a Mat of size (3, 4). I need to extend it to size (4, 4) by appending the row [0.0, 0.0, 0.0, 1.0] (a zero vector with 1 in the last position). Strangely, I get the following output when I print the resulting matrix:

    [0.9107258520121255, 0.4129580377861768, 0.006639390377046724, 0.9039011699443721;
     0.4129661348384583, -0.9107463665340377, 0.0001652925667582038, -0.4277340727282191;
     0.006115059555925467, 0.002591307168000504, -0.9999779453436902, 0.002497598952195387;
     0, 0.0078125, 0, 0]

The last row does not look like it should: it is [0, 0.0078125, 0, 0] rather than [0.0, 0.0, 0.0, 1.0]. Is this implementation correct? What could be the problem with this matrix?


## Comments

Please show what is in cameraMotionMat originally, and maybe explain how you obtained it (the steps that produced it).

( 2020-12-06 05:59:34 -0600 )

Thank you. Here is the code for the buildPoseMat method:

    struct Pose
    {
        Pose(Mat _R, vector<float> _t) : R(_R) {
            t = Mat(_t);
        }

        Mat R;
        Mat t;

        Mat buildPoseMat() //vector<float> tvec, Mat rot)
        {
            Mat rotTransMat(3, 4, CV_64F);
            hconcat(R, t, rotTransMat);
            return rotTransMat;
        }
    };


cameraMotionMat before adding the row (as returned from buildPoseMat):

    [0.9107258520121255, 0.4129580377861768, 0.006639390377046724, 0.9039011699443721;
     0.4129661348384583, -0.9107463665340377, 0.0001652925667582038, -0.4277340727282191;
     0.006115059555925467, 0.002591307168000504, -0.9999779453436902, 0.002497598952195387]

( 2020-12-06 08:20:06 -0600 )

Updated the question as well.

( 2020-12-06 08:23:18 -0600 )

@berak I solved the issue using this solution. Now I intend to calculate the camera pose of the query image from the pose of the best-matching view image and the corresponding camera transformation Mat (cameraMotionMat) by means of a Mat product, following the formula from here (p2). That could look like this:

    Mat bestViewOrigin;
    // get bestViewOrigin from file
    Mat queryOrigin = bestViewOrigin * cameraMotionMat;
    Mat queryRot = bestViewRot * cameraMotionMat;


Is this solution correct?

( 2020-12-07 03:35:30 -0600 )