OpenCV Q&A Forum (http://answers.opencv.org/questions/)

**How to find surface orientation (relative to camera) from single depth image?**
(asked by joaocandre, Mon, 14 Dec 2020, http://answers.opencv.org/question/239261/)

Supposing I have access to the image stream of a depth camera, and there is a flat surface (e.g. floor, tabletop, etc.) within the camera's FoV at all times, how could one estimate the floor's vertical and horizontal orientation (or, better yet, the rotation matrix/vector) from the camera's perspective?
I have access to the camera matrix, therefore I can select multiple points on the surface and reconstruct their 3D coordinates in the camera frame. But how do I use those coordinates to build a transformation matrix? (Mostly the rotation; the translation is irrelevant, I would just need to orient my reference frame orthogonally to the surface.)
My main limitation seems to be that I do not have the corresponding coordinates of the surface points in an object/external reference frame, therefore I can't use `cv::estimateAffine3D`, `cv::findHomography` or `cv::solvePnP`.
I have tried to estimate the plane equation using `cv::SVD`, but the resulting fit does not seem very precise, and I am not sure how to use the plane equation to obtain the affine transformation matrix.
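To make the question concrete, here is a rough sketch of the direction I am thinking of (the helper name is mine, the 3D points are assumed to be back-projected from the depth image as p = Z * K^-1 * (u, v, 1)^T, and the rotation-building part is exactly the step I am unsure about):

    #include <opencv2/core.hpp>
    #include <vector>

    // Fit a plane to 3D surface points and build a rotation whose z-axis is
    // the plane normal, expressed in the camera frame. Assumes at least three
    // non-collinear points.
    cv::Matx33d rotationFromSurfacePoints(const std::vector<cv::Point3d>& pts)
    {
        // centroid of the sampled surface points
        cv::Point3d c(0, 0, 0);
        for (const auto& p : pts) c += p;
        c *= 1.0 / static_cast<double>(pts.size());

        // stack the centered points; the right singular vector with the
        // smallest singular value is the plane normal
        cv::Mat A(static_cast<int>(pts.size()), 3, CV_64F);
        for (int i = 0; i < A.rows; ++i) {
            A.at<double>(i, 0) = pts[i].x - c.x;
            A.at<double>(i, 1) = pts[i].y - c.y;
            A.at<double>(i, 2) = pts[i].z - c.z;
        }
        cv::SVD svd(A);
        cv::Vec3d n(svd.vt.at<double>(2, 0), svd.vt.at<double>(2, 1), svd.vt.at<double>(2, 2));
        if (n[2] > 0) n *= -1.0;                          // point the normal back toward the camera

        // pick any in-plane direction as x, complete the frame with y = n x x
        cv::Vec3d x = n.cross(cv::Vec3d(0, 0, 1));
        if (cv::norm(x) < 1e-9) x = cv::Vec3d(1, 0, 0);   // surface faces the camera head-on
        x *= 1.0 / cv::norm(x);
        cv::Vec3d y = n.cross(x);

        // columns are the surface axes expressed in camera coordinates
        return cv::Matx33d(x[0], y[0], n[0],
                           x[1], y[1], n[1],
                           x[2], y[2], n[2]);
    }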
**Connection between pose estimation, epipolar geometry and depth map**
(asked by askl1278, Tue, 28 Jul 2020, http://answers.opencv.org/question/233007/)

Hi, I am an undergraduate student working on a graduate project, and a beginner to computer vision. I went through the OpenCV tutorial "Camera Calibration and 3D Reconstruction" (link below):

https://docs.opencv.org/master/d9/db7/tutorial_py_table_of_contents_calib3d.html
However, I fail to see the connection between its second part and its final part. What I understand so far is:
- The intrinsic and extrinsic parameters of a camera are required to estimate the position of the camera and of the captured object.
- To reconstruct a 3D model, multiple point clouds are needed, and to generate a point cloud a disparity map is required.

What I do not understand is:

- The importance of estimating the position of the camera or the object for computing the epilines or epipoles in either image plane (see the sketch below).
- The importance of epipolar geometry, and of finding the epilines and epipoles, for computing the disparity map.
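For context, this is roughly how I picture the epilines being computed from matched points, written as a C++ sketch (the matched point sets and the helper name are placeholders of mine):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // The fundamental matrix F relates matched points between the two views;
    // for every point in image 1 it yields the epiline a*x + b*y + c = 0 in
    // image 2 on which the corresponding point must lie.
    std::vector<cv::Vec3f> epilinesInSecondImage(const std::vector<cv::Point2f>& pts1,
                                                 const std::vector<cv::Point2f>& pts2)
    {
        cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);
        std::vector<cv::Vec3f> lines;
        cv::computeCorrespondEpilines(pts1, 1, F, lines);  // 1 = points come from image 1
        return lines;
    }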
As far as I am aware, the code below generates a disparity map:
    # in current OpenCV the factory function is cv2.StereoBM_create
    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(imgL, imgR)
and the inputs include a pair of stereo images and parameters such as minDisparity, numDisparities and blockSize, but not the position of the cameras nor the epilines/epipoles.
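As far as I can tell, the camera positions only enter earlier, at the rectification stage; here is my understanding of that pipeline as a C++ sketch (the calibration inputs and the helper name are my own placeholders):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    // The extrinsics (R, T) between the two cameras are what stereoRectify
    // uses to warp both images so that epilines become horizontal scanlines;
    // only then can StereoBM search for matches along rows, and the Q matrix
    // lets reprojectImageTo3D turn the disparity map into a point cloud.
    void disparityPipeline(const cv::Mat& K1, const cv::Mat& D1,
                           const cv::Mat& K2, const cv::Mat& D2,
                           const cv::Mat& R, const cv::Mat& T,
                           const cv::Mat& imgL, const cv::Mat& imgR)  // 8-bit grayscale
    {
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, imgL.size(), R, T, R1, R2, P1, P2, Q);

        cv::Mat map1x, map1y, map2x, map2y, rectL, rectR;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, imgL.size(), CV_32FC1, map1x, map1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, imgR.size(), CV_32FC1, map2x, map2y);
        cv::remap(imgL, rectL, map1x, map1y, cv::INTER_LINEAR);
        cv::remap(imgR, rectR, map2x, map2y, cv::INTER_LINEAR);

        cv::Ptr<cv::StereoBM> stereo = cv::StereoBM::create(16, 15);
        cv::Mat disparity, disparity32, cloud;
        stereo->compute(rectL, rectR, disparity);              // 16-bit, 4 fractional bits
        disparity.convertTo(disparity32, CV_32F, 1.0 / 16.0);
        cv::reprojectImageTo3D(disparity32, cloud, Q);         // per-pixel 3D points
    }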
Any help would be greatly appreciated.
**Relative pose between two images**
(asked by MJ, Fri, 06 Nov 2015, http://answers.opencv.org/question/75562/)

Is it possible to get a relative pose between two images?
Once I have E => (R, t) and K, shouldn't it be possible to calculate the location in the second image of a point found in the first image?
I know the scale factor is needed for a full reconstruction, but do I really need it too if I just want a relative pose?
And let's say there is only a rotation; then at least the scale factor should not matter, should it?
But why does it then appear, combined with R, in this formula?
`s2 * m'2 = R * s1 * m'1 + t`
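To make the question concrete, here is a minimal sketch of what I mean (pts1/pts2 are matched, undistorted pixel coordinates, K is the camera matrix, and all names are placeholders of mine):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Recover the relative pose between the two views. recoverPose returns t
    // with unit norm, i.e. only the direction of the translation; its true
    // magnitude (the scale factor) cannot be recovered from two images alone.
    void relativePose(const std::vector<cv::Point2f>& pts1,
                      const std::vector<cv::Point2f>& pts2,
                      const cv::Mat& K, cv::Mat& R, cv::Mat& t)
    {
        cv::Mat mask;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
        cv::recoverPose(E, pts1, pts2, K, R, t, mask);
        // Transferring a point from image 1 to image 2 still needs its depth s1:
        //   s2 * m'2 = R * (s1 * m'1) + t
        // With a pure rotation (t = 0) the depths cancel and no scale is needed:
        //   m'2 ~ R * m'1
    }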
**undistortPoints, findEssentialMat, recoverPose: What is the relation between their arguments?**
(asked by themightyoarfish, Wed, 08 Jul 2015, http://answers.opencv.org/question/65788/)

**TL;DR**: What relation should hold between the arguments passed to `undistortPoints`, `findEssentialMat` and `recoverPose`?

I have code like the following in my program:
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
Mat E = findEssentialMat(imgpts1, imgpts2, 1, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
I `undistort` the points before finding the essential matrix. The docs state that one can pass the new camera matrix as the last argument; when it is omitted, the points are returned in *normalized* coordinates (between -1 and 1). In that case, I would expect to pass 1 for the focal length and (0,0) for the principal point to `findEssentialMat`, since the points are normalized. So I would expect this to be the way:
1. **Possibility 1** (normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients);
Mat E = findEssentialMat(imgpts1, imgpts2, 1.0, Point2d(0,0), RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, 1.0, Point2d(0,0), mask);
2. **Possibility 2** (do not normalize coordinates)
Mat mask; // inlier mask
undistortPoints(imgpts1, imgpts1, K, dist_coefficients, noArray(), K);
undistortPoints(imgpts2, imgpts2, K, dist_coefficients, noArray(), K);
double focal = K.at<double>(0,0);
Point2d principalPoint(K.at<double>(0,2), K.at<double>(1,2));
Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 3, mask);
correctMatches(E, imgpts1, imgpts2, imgpts1, imgpts2);
recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
However, I have found that I only get reasonable results when I tell `undistortPoints` that the old camera matrix is still valid (I guess in that case only the distortion is removed) and pass arguments to `findEssentialMat` as if the points were normalized, which they are not.
Is this a bug, insufficient documentation or user error?
**Update**
It might be that `correctMatches` should be called with (non-normalised) image/pixel coordinates and the fundamental matrix rather than E; this may be another mistake in my computation. F can be obtained as `F = K^-T * E * K^-1`.
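For reference, this is the conversion I mean, as a sketch (the helper name is mine; K, E and the matched pixel coordinates are the ones from the snippets above):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Convert the essential matrix to the fundamental matrix, F = K^-T * E * K^-1,
    // and refine the matches in pixel coordinates.
    void refineMatchesWithF(const cv::Mat& K, const cv::Mat& E,
                            const std::vector<cv::Point2f>& imgpts1,
                            const std::vector<cv::Point2f>& imgpts2,
                            std::vector<cv::Point2f>& corrected1,
                            std::vector<cv::Point2f>& corrected2)
    {
        cv::Mat F = K.inv().t() * E * K.inv();
        cv::correctMatches(F, imgpts1, imgpts2, corrected1, corrected2);
    }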
**Pose estimation produces wrong translation vector**
(asked by Firedragonweb, Sat, 10 Aug 2013, http://answers.opencv.org/question/18565/)

Hi, I'm trying to extract camera poses from a set of two images using features I extracted with BRISK. The feature points match quite brilliantly when I display them, and the rotation matrix I get seems reasonable. The translation vector, however, is not.
I'm using the simple method of computing the fundamental matrix, then the essential matrix, then its SVD, as presented e.g. in H&Z:
    // F from matched points, then E = K^T * F * K (same camera in both views)
    Mat fundamental_matrix =
        findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);
    Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;

    // SVD of E; W is the helper matrix from H&Z, giving R = U*W*Vt or U*Wt*Vt
    SVD decomp(essentialMatrix, SVD::FULL_UV);
    Mat W = Mat::zeros(3, 3, CV_64F);
    W.at<double>(0,1) = -1;
    W.at<double>(1,0) = 1;
    W.at<double>(2,2) = 1;
    Mat R1 = decomp.u * W * decomp.vt;
    Mat R2 = decomp.u * W.t() * decomp.vt;
    if (determinant(R1) < 0)
        R1 = -1 * R1;
    if (determinant(R2) < 0)
        R2 = -1 * R2;

    // translation (up to sign and scale) is the last column of U
    Mat trans = decomp.u.col(2);
However, the resulting translation vector is horrible, especially the z coordinate: it is usually near (0, 0, 1) regardless of the camera movement I performed while recording these images. Sometimes the first two coordinates seem to be kind of right, but they are far too small in comparison to the z coordinate (e.g. I moved the camera mainly in +x and the resulting vector is something like (0.2, 0, 0.98)).
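For reference, here is a minimal sketch of the same decomposition using OpenCV's built-in routines, which also pick the (R, t) candidate that passes the cheirality check (this needs OpenCV 3.0 or later; the helper name is mine, and poi1/poi2/calibrationMatrix are the variables from above):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // findEssentialMat estimates E directly from calibrated matches, and
    // recoverPose selects the (R, t) pair that places the triangulated points
    // in front of both cameras. Note that t is returned with unit norm: it is
    // only the direction of motion, never its magnitude.
    void poseFromMatches(const std::vector<cv::Point2f>& poi1,
                         const std::vector<cv::Point2f>& poi2,
                         const cv::Mat& calibrationMatrix,
                         cv::Mat& R, cv::Mat& t)
    {
        cv::Mat mask;
        cv::Mat E = cv::findEssentialMat(poi1, poi2, calibrationMatrix,
                                         cv::RANSAC, 0.999, 1.0, mask);
        cv::recoverPose(E, poi1, poi2, calibrationMatrix, R, t, mask);
    }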
Any help would be appreciated.