# Is cv::triangulatePoints() returning 3D points in world coordinate system?

Considering a moving camera with a fixed calibration matrix (intrinsic parameters), I am triangulating tracked feature points from two views that are not consecutive. The view poses are expressed in the camera coordinate system, and the images are undistorted before features are detected and tracked.

Can you please confirm whether the triangulated points are in the world coordinate system after applying the cv::triangulatePoints() and cv::convertPointsFromHomogeneous() functions?



It depends on the projection matrices used to call the function.

Where to look for projection matrices?

As the OpenCV documentation suggests, you can use stereoRectify(), which needs the results of calibrateCamera() and stereoCalibrate(). As far as I know, it chooses a new coordinate system whose origin lies between the cameras.

Another way to get the projection matrices is to compute them yourself. Let's assume the first camera position defines your world coordinate system. A projection matrix can be computed as P = A * [R | T], where A is the intrinsic matrix of the camera, R is the rotation matrix, and T the translation vector. So for the first position you set R to the identity matrix and T to [0, 0, 0]^T. For the second projection matrix, use the R and T output by stereoCalibrate().
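The construction above can be sketched in plain Python (nested lists standing in for cv::Mat, no NumPy); the intrinsic values and the second camera's translation are made-up placeholders:

```python
def projection_matrix(A, R, T):
    """Build the 3x4 projection matrix P = A * [R | T].

    A: 3x3 intrinsic matrix, R: 3x3 rotation matrix, T: translation 3-vector.
    """
    # Form the 3x4 extrinsic matrix [R | T] by appending T as a fourth column.
    Rt = [R[i] + [T[i]] for i in range(3)]
    # Multiply the 3x3 intrinsics by the 3x4 extrinsics.
    return [[sum(A[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
            for i in range(3)]

# Made-up intrinsics for illustration.
A = [[700.0, 0.0, 320.0],
     [0.0, 700.0, 240.0],
     [0.0,   0.0,   1.0]]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# First camera defines the world frame: R = I, T = [0, 0, 0]^T.
P0 = projection_matrix(A, I3, [0.0, 0.0, 0.0])

# Second camera: in practice use the R, T returned by stereoCalibrate();
# here a placeholder pose with no rotation and a 0.1 m baseline along x.
P1 = projection_matrix(A, I3, [0.1, 0.0, 0.0])
```

With R = I and T = 0, P0 is just A with a zero fourth column, which is why the first camera frame and the world frame coincide.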

I am working on locating an object in stereo vision, and so far this approach gives me reasonable results. Please let me know if there is a problem somewhere.

I also recommend taking a look at this question on StackOverflow.


The function cv::triangulatePoints() computes (reconstructs) the 3D points in the camera frame (the left camera frame should be the reference, but this is to be checked).

Also, to my knowledge, this function should work only for a stereo camera setup where the two image views are fronto-parallel (i.e. the images have been rectified with cv::stereoRectify()).

One last thing: cv::triangulatePoints() needs the two projection matrices of the two cameras, which encode the transformation between the left and right camera frames. In case I am wrong and it is possible to use cv::triangulatePoints() with non-fronto-parallel views, you will still need the transformation matrix between the two camera frames (the intrinsic matrix and the pairs of points alone are not sufficient).



Thank you @Eduardo for your answer. cv::triangulatePoints() does not require a stereo pair with fronto-parallel views (i.e. a binocular rig); you can apply the function to any pair of cameras, given the intrinsic and extrinsic parameters of each view and the feature points in each image. Of course, there are some degenerate configurations, and forward motion is the one that generates the most uncertainty in the 3D points. However, after re-reading Hartley and Zisserman's Multiple View Geometry, Hartley and Sturm's paper "Triangulation", and doing some experiments in both MATLAB and OpenCV, I can confirm that the triangulated points are returned in the world coordinate frame. Of course, if you fix the first view as the origin (of the world as well), then the 3D points are given w.r.t. the first camera, as you said.
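As a small sanity check of that claim, here is a noise-free experiment in plain Python (no OpenCV, no NumPy): two non-fronto-parallel views observe a known world point, and a midpoint triangulation (a simpler scheme than the DLT that cv::triangulatePoints() implements, but equivalent in the noise-free case) recovers it in the world frame, which coincides with the first camera frame because R = I and T = 0 there. All numbers are made up for illustration.

```python
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Placeholder intrinsics, shared by both views (moving camera, fixed K).
K = [[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]]

# Second view: rotation of 0.2 rad about y and a sideways translation
# (world -> camera 2), so the views are clearly not fronto-parallel.
c, s = math.cos(0.2), math.sin(0.2)
R = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
t = [-1.0, 0.0, 0.0]

X = [0.5, 0.3, 4.0]  # ground-truth point in the world (= camera 1) frame

def project(R_, t_, X_):
    """Pinhole projection of a world point into pixel coordinates."""
    Xc = [a + b for a, b in zip(mat_vec(R_, X_), t_)]
    x = mat_vec(K, Xc)
    return (x[0] / x[2], x[1] / x[2])

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
u0 = project(I3, [0.0, 0.0, 0.0], X)
u1 = project(R, t, X)

def backproject(u):
    # Apply the inverse of the upper-triangular intrinsic matrix analytically.
    return [(u[0] - K[0][2]) / K[0][0], (u[1] - K[1][2]) / K[1][1], 1.0]

# Viewing rays in the world frame: camera 1 sits at the origin,
# camera 2 at C1 = -R^T t with ray direction R^T K^-1 [u1, 1].
d0 = backproject(u0)
C1 = [-x for x in mat_vec(transpose(R), t)]
d1 = mat_vec(transpose(R), backproject(u1))

# Midpoint triangulation: least-squares closest points on the two rays.
a, b, cc = dot(d0, d0), dot(d0, d1), dot(d1, d1)
e, f = dot(C1, d0), dot(C1, d1)
den = b * b - a * cc
s0 = (b * f - cc * e) / den
s1 = (a * f - b * e) / den
p0 = [s0 * d for d in d0]
p1 = [C1[i] + s1 * d1[i] for i in range(3)]
Xrec = [(p0[i] + p1[i]) / 2.0 for i in range(3)]
```

With exact (noise-free) correspondences the two rays intersect, so `Xrec` matches `X` up to floating-point error, confirming the world-frame result.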

( 2017-01-02 10:47:56 -0500 )

Can you please guide me on how to construct the projection matrices (P0 and P1) from the stereo calibration results? I have cammat1, cammat2, R, and T; R and T are with respect to the first camera.
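Assuming, as in OpenCV's stereoCalibrate() convention, that R and T map points from the first camera frame to the second, the construction is P0 = cammat1 · [I | 0] and P1 = cammat2 · [R | T]. A minimal sketch with placeholder calibration numbers (plain Python lists standing in for cv::Mat):

```python
def compose(K, R, T):
    """P = K * [R | T], with K and R as 3x3 nested lists and T a 3-vector."""
    Rt = [R[i] + [T[i]] for i in range(3)]
    return [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
            for i in range(3)]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Placeholder calibration results; substitute your own cammat1, cammat2, R, T.
cammat1 = [[650.0, 0.0, 310.0], [0.0, 650.0, 230.0], [0.0, 0.0, 1.0]]
cammat2 = [[655.0, 0.0, 325.0], [0.0, 655.0, 245.0], [0.0, 0.0, 1.0]]
R = I3                    # placeholder: no rotation between the cameras
T = [-0.06, 0.0, 0.0]     # placeholder: 6 cm baseline along x

P0 = compose(cammat1, I3, [0.0, 0.0, 0.0])  # first camera = world origin
P1 = compose(cammat2, R, T)
```

These P0 and P1 are exactly the two projection matrices that cv::triangulatePoints() expects, and the triangulated points come back in the first camera's frame.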

( 2017-11-18 03:02:44 -0500 )

Can someone please confirm in which coordinate frame the output 3D points are computed? Let's say P, as shown in the diagram above, is at the center of the baseline; would its 3D coordinates be (0, 0, Z) or (T/2, 0, Z)?

( 2019-02-06 19:35:10 -0500 )
