OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Last updated: Wed, 08 Jan 2020 09:03:37 -0600

Triangulate distance of points in 2D image?
http://answers.opencv.org/question/224591/triangulate-distance-of-points-in-2d-image/
I have three points with a known position in world space. I also have a 2D image in which these three points are visible.
Using OpenCV, is there a way I can provide the actual x, y, z world coordinates of my points and then triangulate the distance of each of these points from their x, y positions in my 2D image?
leejac354 — Wed, 08 Jan 2020 09:03:37 -0600 — http://answers.opencv.org/question/224591/

Triangulate Chessboard gives weird results
http://answers.opencv.org/question/208725/triangulate-chessboard-gives-weird-results/
Hi all,
So I'm trying to create a 3D reconstruction from a stereo camera pair using Python.
I found the intrinsic camera calibration parameters, which seem good.
Then I find the chessboard corners in both images with:
`_, C1 = cv2.findChessboardCorners(img1, (6, 9), None)`.
I undistort the found corners with:
`C1Norm = cv2.undistortPoints(C1, K1, D1)`
And use those to find the essential matrix with:
`E, mask = cv2.findEssentialMat(C1Norm, C2Norm, focal=1.00, pp=(0., 0.), method=cv2.RANSAC, prob=0.999)`.
And at last I find the rotation and translation between the cameras with:
`M, R, t, mask = cv2.recoverPose(E, C1Norm, C2Norm)`.
Now, using that, I build the projection matrices of both cameras:
`P1 = K1 * [I3 | 0]` and `P2 = K2 * [R | t]`, where K1 and K2 are the intrinsic camera matrices and I3 is the 3x3 identity matrix.
The last step should be to triangulate. I use `points4D = cv2.triangulatePoints(P1, P2, C1Trans, C2Trans)`, which gives me homogeneous 4D coordinates.
However, when I plot this, my chessboard comes out all crooked and wrong. Does anyone have an idea where something might be going wrong?
Also, I know the dimensions of the chessboard; any idea how I can turn the homogeneous coordinates of the chessboard corners into cm?
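On the last point: the pose from `findEssentialMat`/`recoverPose` is only determined up to scale, so one common way to get metric units is to rescale the reconstruction using the known square size. A minimal sketch (the helper name and the choice of two adjacent corners are illustrative, not from the question):

```python
import numpy as np

def to_metric(points4D, square_size_cm, corner_a=0, corner_b=1):
    """Convert homogeneous triangulated points to metric units.

    points4D: 4xN array as returned by cv2.triangulatePoints.
    square_size_cm: real side length of one chessboard square.
    corner_a/corner_b: indices of two adjacent chessboard corners.
    """
    # Dehomogenize: divide x, y, z by w to get Euclidean 3D points (3xN).
    pts3D = points4D[:3] / points4D[3]
    # The reconstruction is only up to scale, so measure the reconstructed
    # distance between two corners that are one square apart...
    d = np.linalg.norm(pts3D[:, corner_a] - pts3D[:, corner_b])
    # ...and rescale the whole cloud so that distance equals the known size.
    return pts3D * (square_size_cm / d)
```

`cv2.convertPointsFromHomogeneous` performs the same dehomogenization step if you prefer the built-in.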
Berrent — Mon, 11 Feb 2019 23:29:07 -0600 — http://answers.opencv.org/question/208725/

triangulation for image
http://answers.opencv.org/question/196194/triangulation-for-image/
Hi,
How can I generate a triangulation similar to the files below:
[C:\fakepath\doberman.jpg](/upfiles/15323660875937447.jpg)
[C:\fakepath\doberman_poly.png](/upfiles/15323661153705509.png)
Regards,
Mariusz
mariusz — Mon, 23 Jul 2018 12:14:26 -0500 — http://answers.opencv.org/question/196194/

Update Projection matrices for triangulation
http://answers.opencv.org/question/192173/update-projection-matrices-for-triangulation/
I have a stereo vision application that is attempting to do the following:
1. Load stereo calibration data from XML.
2. Find and match features on the left/right frames.
3. Using the calibration matrices, triangulate points.
4. Run solvePnP on the 2D/3D correspondences.
5. Using the results from solvePnP, UPDATE the projection matrices:
```cpp
// Update projection matrix with tvec and rvec
cv::Mat MatrixFunctions::UpdateProjectionMatrix(cv::Mat P, cv::Mat tvec, cv::Mat R)
{
    // decompose the calibrated matrix
    cv::Mat Kd(3, 3, CV_32F); // intrinsic parameter matrix
    cv::Mat Rd(3, 3, CV_32F); // rotation matrix
    cv::Mat Td(4, 1, CV_32F); // homogeneous translation vector
    cv::Mat newP1(3, 4, CV_32F);
    cv::decomposeProjectionMatrix(P, Kd, Rd, Td);

    // convert the homogeneous translation to Euclidean
    cv::Mat T1 = cv::Mat(3, 1, CV_32F);
    homogeneousToEuclidean(Td, T1);

    // invert the solvePnP pose
    R = R.t();
    tvec = -R * tvec;

    // add the new t and R
    T1 = T1 + (tvec / 10);
    Rd = Rd + R;

    projectionFromKRt(Kd, R, T1, newP1); // from cv::sfm
    return newP1;
}
```
by passing:

```cpp
// tvec and rvec returned from solvePnP;
// create new projection matrices for left and right using the new pose.
// P1 and P2 are the calibrated P matrices.
cv::Mat updatedP1(3, 4, CV_32F);
cv::Mat updatedP2(3, 4, CV_32F);
cv::Mat RMat;
cv::Rodrigues(rvec, RMat);
updatedP1 = MatFunc->UpdateProjectionMatrix(P1, tvec, RMat);
updatedP2 = MatFunc->UpdateProjectionMatrix(P2, tvec, RMat);
```
triangulate again.
What I want to see is that the triangulated points stay in the same world space as the camera moves. Instead, the points are warped and very wrong. Am I thinking about this wrong? How can I triangulate the points into the same place, given a known camera position?
antithing — Wed, 23 May 2018 10:07:54 -0500 — http://answers.opencv.org/question/192173/

3D point extraction
http://answers.opencv.org/question/178654/3d-point-extraction/
I want to measure the depth of an object placed 4 meters from the camera, so I increased the baseline (1.5 meters between the two cameras) to get good depth accuracy. At this baseline, stereo rectification is not working properly, so I am using the triangulation function to extract the 3D point. For that, I calculated the projection matrices as follows: `P0 = K1[I|0]` and `P1 = K2[R|T]`, where K1 and K2 are the camera matrices, and R and T are the rotation and translation of the second camera with respect to the first. In the next step, I passed all these values to the triangulation function, and it returns a 3D point.
My question is: am I doing this correctly? If so, is the obtained 3D point expressed with respect to the P0 camera?
carl777 — Sat, 18 Nov 2017 07:12:28 -0600 — http://answers.opencv.org/question/178654/

Bounding Rect float return
http://answers.opencv.org/question/106256/boounding-rect-float-return/
Hello,
Is it possible to get float or double values from `boundingRect`?
I would like to triangulate the points, and for that I need exact values, not integers.
Thanks for your help.
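`boundingRect` is defined to return an integer rectangle, but sub-pixel bounds can be computed directly from floating-point points (for example corners refined with `cornerSubPix`). A minimal sketch (the function name and the sample coordinates are made up for illustration):

```python
import numpy as np

def float_bounding_rect(points):
    """Axis-aligned bounding rectangle with float precision.

    points: Nx2 array-like of float (x, y) image coordinates.
    Returns (x, y, w, h) like cv2.boundingRect, but as floats.
    """
    pts = np.asarray(points, dtype=np.float64)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max - x_min, y_max - y_min

# Example with sub-pixel corner positions:
x, y, w, h = float_bounding_rect([(10.25, 4.5), (30.75, 4.5), (20.0, 19.5)])
# → (10.25, 4.5, 20.5, 15.0)
```

Alternatively, `cv2.minAreaRect` already returns its center and size as floats, if a rotated rectangle is acceptable.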
TiBe — Mon, 24 Oct 2016 04:38:53 -0500 — http://answers.opencv.org/question/106256/

Triangulate Points from Stereo Image
http://answers.opencv.org/question/102712/triangulate-points-from-stereo-image/
Hello guys,
I hope someone can help me with my problem :)
**That's my approach**
I have two cameras and now I want to calculate the 3D coordinates of an object. For this, I calibrate the cameras individually and then I use stereoCalibrate.
Now I have the intrinsic and extrinsic parameters and would like to triangulate.
To do that, I create the projection matrices like this:
basically one camera I can keep as `P1 = K1 * [I|0]` (identity rotation and zero translation), and the other camera is `P2 = K2 * [R|t]` (the rotation and translation between the cameras from `stereoCalibrate()`).
I hand these matrices over to the triangulation function (http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/).
**To test the triangulation**
Checkerboard: 9x6, 12 mm per square.
I use the reference checkerboard images because the depth should be constant, but the depth I get varies linearly instead.
The x and y coordinates are accurate to 0.2 mm over the entire distance.
The picture shows the values for the first two rows. As you can see, the depth is strange:
![image description](/upfiles/14744601134612977.png)
Does anyone know where my problem is?
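For reference, the Hartley–Zisserman linear triangulation that the linked post implements boils down to a small SVD problem, so it can be sanity-checked against synthetic data. A self-contained numpy sketch (all matrices and numbers below are synthetic stand-ins, not the real calibration values):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation as in Hartley & Zisserman.

    P1, P2: 3x4 projection matrices.
    x1, x2: (x, y) pixel coordinates of the same point in each image.
    Returns the Euclidean 3D point (already divided by w).
    """
    # Each image point contributes two linear constraints on homogeneous X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    # Forgetting this division by w is a classic source of skewed depth.
    return X[:3] / X[3]

# Sanity check with synthetic data:
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # K [I|0]
P2 = K @ np.hstack([np.eye(3), [[-60.], [0.], [0.]]])  # K [R|t], 60 mm baseline
Xw = np.array([10., 20., 500., 1.])                    # a point at depth 500 mm
x1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
x2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
print(triangulate_dlt(P1, P2, x1, x2))  # → roughly [10. 20. 500.]
```

If the sketch recovers constant depth on synthetic points but the real pipeline does not, the problem is likely in the inputs (e.g. mixing distorted pixel coordinates with the calibrated projection matrices) rather than in the triangulation itself.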
Thanks.
TiBe — Wed, 21 Sep 2016 07:15:31 -0500 — http://answers.opencv.org/question/102712/