triangulatePoints for 3D reconstruction

asked 2019-05-07 02:51:36 -0600 by jjja

updated 2019-05-07 02:52:36 -0600

I'm using triangulation for 3D reconstruction.

I use a single 2D camera and stored a reference image (the intrinsic parameters are the same for both views).

reference image: R1, t1

live cam: R2, t2

solvePnP(..., rv1, t1);
cv::Rodrigues(rv1, R1);

solvePnP(..., rv2, t2);
cv::Rodrigues(rv2, R2);
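
Written out in full, this step looks roughly like the sketch below (objectPoints, imagePoints1, imagePoints2 and distCoeffs are placeholder names I'm adding for illustration, not from the original code):

cv::Mat rv1, t1, rv2, t2, R1, R2;
// pose of the reference image (placeholder 3D/2D correspondences)
cv::solvePnP(objectPoints, imagePoints1, K, distCoeffs, rv1, t1);
cv::Rodrigues(rv1, R1); // rotation vector -> 3x3 rotation matrix
// pose of the live camera
cv::solvePnP(objectPoints, imagePoints2, K, distCoeffs, rv2, t2);
cv::Rodrigues(rv2, R2);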

So the relative transformation between the two cameras can be derived as follows (W is a world point; C0 and C1 are its coordinates in the reference and live camera frames):

C0 = R1 * W + t1

C1 = R2 * W + t2

W = R1.inv() * (C0 - t1)

C1 = R2 * R1.inv() * (C0 - t1) + t2

R = R2 * R1.inv()

T = t2 - R2 * R1.inv() * t1
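
In code this is just (a minimal sketch; since R1 is a rotation matrix, R1.inv() equals R1.t()):

cv::Mat R = R2 * R1.t();  // R = R2 * R1.inv()
cv::Mat T = t2 - R * t1;  // T = t2 - R2 * R1.inv() * t1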

and then

cv::Mat P0 = K * cv::Mat::eye(3, 4, CV_64F); // reference image: P0 = K * [I | 0]
cv::Mat Rt, X;
cv::hconcat(R, T, Rt);                       // Rt = [R | T]
cv::Mat P1 = K * Rt;                         // live cam: P1 = K * [R | T]
// points0, points1: matched 2D image coordinates from SIFT matching
cv::triangulatePoints(P0, P1, points0, points1, X);
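
X comes back as 4xN homogeneous coordinates, so it still has to be dehomogenized, e.g.:

cv::Mat pts3d;
// transpose to one point per row, then divide by the 4th coordinate
cv::convertPointsFromHomogeneous(X.t(), pts3d);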

This did not give the correct result.

But if I build P0 = K * [R1 | t1] and P1 = K * [R2 | t2] instead, it worked well.
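
That is, the variant that works is (sketched with the same hconcat pattern as above):

cv::Mat Rt1, Rt2;
cv::hconcat(R1, t1, Rt1); // [R1 | t1]
cv::hconcat(R2, t2, Rt2); // [R2 | t2]
cv::Mat P0 = K * Rt1;     // reference image
cv::Mat P1 = K * Rt2;     // live cam
cv::triangulatePoints(P0, P1, points0, points1, X);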

Why does the formulation using the relative pose between the cameras not work? Did I make a mistake?
