
Is cv::triangulatePoints() returning 3D points in world coordinate system?

asked 2016-12-14 08:54:28 -0600

ale_Xompi

updated 2016-12-14 08:55:26 -0600

Considering a moving camera with a fixed calibration matrix (intrinsic parameters), I am triangulating tracked feature points from two non-consecutive views. The view poses are in the camera coordinate system, and the images are undistorted before features are detected and tracked.

Can you please confirm whether the triangulated points are in the world coordinate system after applying the cv::triangulatePoints() and cv::convertPointsFromHomogeneous() functions?


2 answers


answered 2018-03-26 15:49:11 -0600

JankaSvK

It depends on the projection matrices you pass to the function.

Where to look for projection matrices?

As the OpenCV documentation suggests, you can use stereoRectify(), which needs the results of calibrateCamera() and stereoCalibrate(). As far as I know, it chooses a new coordinate system whose origin lies between the cameras.

Another way to get the projection matrices is to compute them yourself. Let's assume that the first camera position defines your world coordinate system. Each projection matrix can be computed as A * [R | t], where A is the intrinsic matrix of the camera, R the rotation matrix, and t the translation vector. So for the first position, set the rotation to the identity matrix and the translation to the zero vector [0, 0, 0]^T. For the second projection matrix, use the R, t output of stereoCalibrate().
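A minimal sketch of this construction, using NumPy in place of OpenCV's API (the intrinsic values and the second camera's pose below are made-up illustration values, not real calibration output):

```python
import numpy as np

# Hypothetical intrinsic matrix A (focal lengths / principal point are
# made-up values for illustration, not real calibration results).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# First camera defines the world frame: R = identity, t = zero vector.
R0, t0 = np.eye(3), np.zeros((3, 1))
P0 = A @ np.hstack([R0, t0])           # 3x4 projection matrix A * [I | 0]

# Second camera: R, t as stereoCalibrate() would return them
# (a made-up pose here: no rotation, 10 cm baseline along x).
R1 = np.eye(3)
t1 = np.array([[-0.1], [0.0], [0.0]])
P1 = A @ np.hstack([R1, t1])           # 3x4 projection matrix A * [R | t]
```

These two 3x4 matrices are what you would pass to cv::triangulatePoints() as projMatr1 and projMatr2.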

I am working on locating an object in stereo vision, and so far this approach gives me reasonable results. Please let me know if there is a problem somewhere.

I also recommend taking a look at this question on Stack Overflow.


answered 2016-12-14 15:33:29 -0600

Eduardo

The function cv::triangulatePoints() reconstructs the 3D points in the camera frame (the left camera frame should be the reference, but this is to be checked).

Also, to my knowledge, this function should work only for a stereo camera setup where the two image views are fronto-parallel (i.e. the images have been rectified with cv::stereoRectify()):

[Figure: rectified (fronto-parallel) stereo pair, with a 3D point P observed above the baseline between the two cameras]

One last thing: cv::triangulatePoints() needs the projection matrices of the two cameras, that is, the transformation between the left and right camera frames. In case I am wrong and it is possible to use cv::triangulatePoints() with non-fronto-parallel views, you will still need the transformation matrix between the two camera frames (the intrinsic matrix and the pairs of points alone are not sufficient).




Thank you @Eduardo for your answer. cv::triangulatePoints() does not require a fronto-parallel stereo pair (i.e. binocular); you can apply the function to any pair of cameras, given the intrinsic and extrinsic parameters of each view and the feature points in each image. Of course, there are some degenerate configurations, and forward motion is the one that generates the most uncertainty in the 3D points. However, after re-reading Hartley and Zisserman's Multiple View Geometry, the Hartley and Sturm paper "Triangulation", and doing some experiments in both MATLAB and OpenCV, I can confirm that the triangulated points are returned in the world coordinate frame. Of course, if you fix the first view as the origin (of the world as well), then the 3D points are given w.r.t. the first camera, as you said.

ale_Xompi ( 2017-01-02 10:47:56 -0600 )
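This behavior can be checked with a minimal linear (DLT) triangulation sketch in NumPy, mimicking what cv::triangulatePoints() followed by cv::convertPointsFromHomogeneous() computes; the intrinsics, pose, and test point below are made-up values. The recovered point comes back in whatever frame the projection matrices were built in, here the world frame anchored at the first camera:

```python
import numpy as np

def triangulate(P0, P1, x0, x1):
    """Linear (DLT) triangulation of one point seen in two views,
    similar in spirit to cv::triangulatePoints +
    cv::convertPointsFromHomogeneous."""
    # Each image observation contributes two rows to a homogeneous system.
    Asys = np.vstack([
        x0[0] * P0[2] - P0[0],
        x0[1] * P0[2] - P0[1],
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(Asys)
    X = Vt[-1]
    return X[:3] / X[3]                       # dehomogenize

# Two made-up views; the first camera pose defines the world frame.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.eye(3)
t = np.array([[-0.5], [0.0], [0.0]])          # world-to-camera translation
P1 = K @ np.hstack([R, t])

# Project a known world point into both views, then triangulate it back.
Xw = np.array([0.2, -0.1, 4.0, 1.0])
x0 = P0 @ Xw; x0 = x0[:2] / x0[2]
x1 = P1 @ Xw; x1 = x1[:2] / x1[2]
print(triangulate(P0, P1, x0, x1))            # ~ [0.2, -0.1, 4.0], world frame
```

The recovered coordinates match the original world point, confirming that the output lives in the frame in which the projection matrices were expressed.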

Can you please guide me on how to construct the projection matrices (P0 and P1) from stereo calibration results? I have cammat1, cammat2, R and T; R and T are with respect to the first camera.

carl777 ( 2017-11-18 03:02:44 -0600 )

Can someone please confirm in which coordinate frame the output 3D points are computed? Let's say P, as shown in the diagram above, is at the center of the baseline; would its 3D coordinates be (0, 0, Z) or (T/2, 0, Z)?

vik748 ( 2019-02-06 19:35:10 -0600 )

When your projection matrix is computed with an (R, t) that transforms world coordinates into camera coordinates, the 3D points are expressed in the world coordinate system. Likewise, when you use the (R, t) from stereo calibration, which takes the first camera as the origin, the points from triangulation are expressed in the coordinate system of the first camera.

I ran some tests; see the very last section of

mfischer-gundlach ( 2020-02-12 03:25:26 -0600 )




Asked: 2016-12-14 08:54:28 -0600

Seen: 14,678 times

Last updated: Mar 26 '18