cv::triangulatePoints gives strange results
Hi,
I have images from two cameras (left and right). First I calibrate the cameras and get the camera matrices, distortion coefficients and projection matrices.
On the images I detect a marker and save its positions on the left and right image. Then I use cv::triangulatePoints, but I get really strange results (I use another library that also computes these 3D coordinates, and its results look much more plausible).
Here is my code:
int size0 = m_history.getSize(0);
int size1 = m_history.getSize(1);
if(size0 != size1)
{
    setInfo(tr("Cannot calculate triangulation: number of marker positions is different on left and right camera"));
    return false;
}
if(size0 <= 0)
{
    setInfo(tr("No marker position saved. Cannot calculate triangulation."));
    return false;
}
cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);
for(int i = 0; i < size0; i++)
{
    cv::Point2f pt1 = m_history.getOriginalPoint(0, i); // before edit, this was cv::Point, not cv::Point2f
    cv::Point2f pt2 = m_history.getOriginalPoint(1, i); // before edit, this was cv::Point, not cv::Point2f
    pointsMat1.at<double>(0,0) = pt1.y;
    pointsMat1.at<double>(1,0) = pt1.y;
    pointsMat2.at<double>(0,0) = pt2.x;
    pointsMat2.at<double>(1,0) = pt2.y;

    cv::Mat pnts3D(4, 1, CV_32F);
    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);

    CvPoint3D64f point3D;
    point3D.x = pnts3D.at<double>(0, 0);
    point3D.y = pnts3D.at<double>(1, 0);
    point3D.z = pnts3D.at<double>(2, 0);
    point3D.x = point3D.x / pnts3D.at<double>(3, 0);
    point3D.y = point3D.y / pnts3D.at<double>(3, 0);
    point3D.z = point3D.z / pnts3D.at<double>(3, 0);
    m_history.addTriangulatedPoint(point3D);
}
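(For reference, the by-hand division by the last homogeneous coordinate in the loop above is the same conversion that cv::convertPointsFromHomogeneous performs; a minimal sketch, assuming pnts3D holds the 4x1 homogeneous result:)

cv::Mat pt3D; // one 3-channel point: (X, Y, Z) after the division by w
cv::convertPointsFromHomogeneous(pnts3D.t(), pt3D);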
Where is my mistake?
Results from triangulation:
3D_X 3D_Y 3D_Z
1.20151e+24 7.70359e+23 4.41239e+24
-2.23198e+23 -3.11741e+22 -7.98741e+23
3.79697e+22 -1.45599e+22 1.31405e+23
-7.27922e+22 6.40761e+22 -2.45849e+23
7.11722e+22 -9.67023e+22 2.32894e+23
2.14094e+22 -2.26098e+22 7.11648e+22
4.95297e+22 -2.77331e+22 1.70013e+23
2.21597e+23 -1.22193e+22 7.79736e+23
-3.94602e+23 -1.85898e+23 -1.43658e+24
2.56816e+24 2.13037e+24 1.22021e+25
And from the other library (these seem quite correct):
3D_X 3D_Y 3D_Z
8.33399 4.74962 62.6119
8.33243 -1.62743 60.8669
8.40008 -8.43666 59.5643
8.47287 -15.1735 58.3184
8.52143 -21.964 57.0896
8.4673 -17.6672 57.6249
8.31039 -10.9452 58.6376
8.39497 -4.27203 60.6766
8.37444 2.56077 62.3287
5.41351 4.73769 62.7424
My projection matrices:
P1 =
4.484533e+02 0 3.073146e+02 0
0 4.484533e+02 2.473914e+02 0
0 0 1 0
P2 =
4.484533e+02 0 3.073146e+02 -8.275082e+02
0 4.484533e+02 2.473914e+02 0
0 0 1 0
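(For reference, with the stereoRectify convention P2(0,3) = Tx*f, these matrices imply a baseline of about |-8.275082e+02 / 4.484533e+02| ≈ 1.85 in the units used for calibration.)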
EDIT:
After correcting cv::Point to cv::Point2f and using cv::undistortPoints I get the following: (I pass to cv::undistortPoints only ...
Question. Is one of your cameras the origin of the coordinate system or do both of the camera translations have values?
Please explain more precisely what you are asking, because I do not quite understand (maybe because my English is not perfect).
When you create the projection matrix, you create it from the camera matrix, rvec, and tvec. What are the values of rvec and tvec for each camera?
If you need to, you can use the decomposeProjectionMatrix function to get the values.
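For example (a rough sketch; K, R and t are just illustrative output names):

cv::Mat K, R, t;
cv::decomposeProjectionMatrix(P1, K, R, t); // K: 3x3 intrinsics, R: 3x3 rotation, t: 4x1 homogeneous translation vector
// divide t by t.at<double>(3, 0) to get it in non-homogeneous form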
After camera calibration I use stereoRectify to obtain the projection matrices. In a moment I will update my post with the projection matrix values.
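Roughly like this (variable names are just for illustration, not the exact ones from my code):

cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize, R, T, R1, R2, P1, P2, Q);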
Those values look correct. Could you include the original 2D pixel values for the first point, please?
From the first image it is 432.921 x 321.005 and from the second it is 432.918 x 321.005.
Is the other library using the information you get from OpenCV, or is it calculating its own?
I tried two different methods and both give very large numbers. Are you sure those pixel values are correct? The difference is awfully small.
Lastly, you have a typo. pointsMat1 is filled with pt1.y twice.
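That is, the first element should be pt1.x:

pointsMat1.at<double>(0,0) = pt1.x;
pointsMat1.at<double>(1,0) = pt1.y;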
The other library uses only the camera matrix, the distortion coefficients, and the marker size that I set before starting the calculation. Nothing else. I corrected that typo and updated the triangulation values (I still get very large values from the triangulation).
Are you undistorting the points or the images before you pass them to triangulatePoints? If not, use the undistortPoints function and try that.
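A minimal sketch of that call (m_cameraMat1, m_distCoeffs1 and m_R1 are illustrative names; only m_projectionMat1 appears in the code above). Note that without the last two arguments the output is in normalized coordinates, while passing R1 and P1 from stereoRectify keeps it in pixel coordinates of the rectified image:

std::vector<cv::Point2f> src = { pt1 }, dst;
cv::undistortPoints(src, dst, m_cameraMat1, m_distCoeffs1, m_R1, m_projectionMat1); // m_R1: rectification rotation (R1 from stereoRectify)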
First I corrected my mistake (I changed the type of the original points pt1 and pt2 from cv::Point to cv::Point2f). I now also use undistortPoints before saving to m_history, from which I take the points for the triangulation. I have updated the post and added the results after undistortPoints and triangulation.
What are your new pixel values?
After undistortPoints? I get 0.231818 x 0.127843 for the first image and 0.2478 x 0.162624 for the second.
Do you think you could post all the information you have, including your images? Camera matrices, distortion, projection, anything else you're using. I can't tell what is going on with the information here.
I wrote an answer to my own post because it was easier than editing the first one (so as not to lose my question and the data already written). It contains a link to the images (50 pairs), everything I have from the calibration, and my code.