triangulatePoints() function

I don't understand what exactly projPoints1 and projPoints2 are in the triangulatePoints() function of the calib3d module. Here is the C++ API of triangulatePoints():

void triangulatePoints(InputArray projMatr1, InputArray projMatr2, InputArray projPoints1, InputArray projPoints2, OutputArray points4D)
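To make the question concrete, here is a minimal sketch of the direct variant (the function and variable names are my own), where the detected feature points are passed to triangulatePoints() as they are:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Sketch only -- the names are mine.
// P1, P2     : 3x4 projection matrices of the first and second camera
// pts1, pts2 : matched 2D features detected in the first and second image
cv::Mat triangulateMatches(const cv::Mat& P1, const cv::Mat& P2,
                           const std::vector<cv::Point2f>& pts1,
                           const std::vector<cv::Point2f>& pts2)
{
    cv::Mat points4D;                                    // 4xN homogeneous coordinates
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

    cv::Mat points3D;                                    // Euclidean 3D points
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);
    return points3D;
}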

When a point is detected in the left and right images, do we have to use the undistortPoints() function of the imgproc module to obtain ideal, distortion-free point coordinates and then pass the results to triangulatePoints() as projPoints1 and projPoints2? Or is it correct to pass the detected 2D image points (still containing lens distortion) directly to triangulatePoints()? The OpenCV documentation of triangulatePoints() only says:

projPoints1 – 2xN array of feature points in the first image
projPoints2 – 2xN array of feature points in the second image

where N is the number of features. The MATLAB Computer Vision Toolbox has a similar triangulation function, and in their example they undistort the detected feature coordinates before calling triangulate. See this example:

http://www.mathworks.com/help/vision/ref/triangulate.html

They include the following warning on that page:

The triangulate function does not account for lens distortion. You can undistort the images using the undistortImage function before detecting the points. Alternatively, you can undistort the points themselves using the undistortPoints function.

I wonder whether the same applies to OpenCV. I would be the happiest person in the world if someone could respond.
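To show the two options I am weighing, here is a sketch of the undistort-first variant (again, all names are my own; I am assuming the stereo calibration provides K1, K2, dist1, dist2, and that P1 and P2 are built from the same intrinsics, e.g. P1 = K1*[I|0] and P2 = K2*[R|t]):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Sketch only -- all names are mine.
cv::Mat undistortThenTriangulate(const cv::Mat& K1, const cv::Mat& dist1,
                                 const cv::Mat& K2, const cv::Mat& dist2,
                                 const cv::Mat& P1,  const cv::Mat& P2,
                                 const std::vector<cv::Point2f>& raw1,
                                 const std::vector<cv::Point2f>& raw2)
{
    // Passing the camera matrix as the P argument makes undistortPoints()
    // return undistorted pixel coordinates rather than normalized ones,
    // so the points stay consistent with the projection matrices above.
    std::vector<cv::Point2f> u1, u2;
    cv::undistortPoints(raw1, u1, K1, dist1, cv::noArray(), K1);
    cv::undistortPoints(raw2, u2, K2, dist2, cv::noArray(), K2);

    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, u1, u2, points4D);

    cv::Mat points3D;
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);
    return points3D;
}

Is the second version the intended usage, or does triangulatePoints() handle lens distortion internally?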