OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/). Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Fri, 18 Dec 2020 07:38:58 -0600

## Problem with undistortPoints() function in pose estimation of image
http://answers.opencv.org/question/239443/problem-with-undistortpoints-function-in-pose-estimation-of-image/

I have written about my task [here](https://answers.opencv.org/question/238792/problem-with-building-pose-mat-from-rotation-and-translation-matrices/). I have a set of images with known poses, which were used for scene reconstruction, and a query image from the same space with unknown pose. I need to calculate the pose of the query image. I solved this problem using the essential matrix. Here is the code:
```cpp
Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);
// Read pose for view image
Mat R, t; //, mask;
recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
```
The only problem is that the OpenCV documentation [states](https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga13f7e34de8fa516a686a56af1196247f) that findEssentialMat assumes points1 and points2 are feature points from cameras with the same camera intrinsic matrix. That's not the case for us: the scene images and the query image can be captured by cameras with different intrinsics.
I suppose I should use the undistortPoints() function. According to the [documentation](https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e), undistortPoints() takes two important parameters: distCoeffs and cameraMatrix.
Both the scene images and the query image have associated calibration parameters (fx, fy, cx, cy).
I obtain the cameraMatrix parameter this way:

```cpp
Mat K_v = (Mat_<double>(3, 3) <<
    fx, 0, cx,
    0, fy, cy,
    0, 0, 1, CV_64F);
```
Is this correct? Moreover, I need to get distCoeffs from somewhere. How can I obtain the distortion coefficients for the scene images and the query image? Or should I solve it another way?

sigmoid90, Fri, 18 Dec 2020 07:38:58 -0600, http://answers.opencv.org/question/239443/

## Problem with building pose Mat from rotation and translation matrices
http://answers.opencv.org/question/238792/problem-with-building-pose-mat-from-rotation-and-translation-matrices/

I have two images captured in the same space (scene), one with a known pose. I need to calculate the pose of the second (query) image. I have obtained the relative camera pose using the essential matrix. Now I am computing the camera pose through matrix multiplication ([here](https://answers.opencv.org/question/31421/opencv-3-essentialmatrix-and-recoverpose/) is the formula).
I am trying to build the 4x4 pose Mat from the rotation and translation matrices. My code is the following:
```cpp
Pose bestPose = poses[best_view_index];
Mat cameraMotionMat = bestPose.buildPoseMat();
cout << "cameraMotionMat: " << cameraMotionMat.rows << ", " << cameraMotionMat.cols << endl;
float row_a[4] = {0.0, 0.0, 0.0, 1.0};
Mat row = Mat::zeros(1, 4, CV_64F);
cout << row.type() << endl;
cameraMotionMat.push_back(row);
// cameraMotionMat.at<float>(3, 3) = 1.0;
```
Earlier in the code, for each view image:
```cpp
Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);
// Read pose for view image
Mat R, t; //, mask;
recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
Pose pose(R, t);
poses.push_back(pose);
```
The method bestPose.buildPoseMat() returns a Mat of size (3, 4). I need to extend the Mat to size (4, 4) with the row [0.0, 0.0, 0.0, 1.0] (a zero vector with 1 in the last position). Strangely, I get the following output when I print the resultant matrix:
> [0.9107258520121255,
> 0.4129580377861768, 0.006639390377046724, 0.9039011699443721;
> 0.4129661348384583, -0.9107463665340377, 0.0001652925667582038, -0.4277340727282191;
> 0.006115059555925467, 0.002591307168000504, -0.9999779453436902, 0.002497598952195387;
> 0, 0.0078125, 0, 0]
The last row does not look like it should: it is [0, 0.0078125, 0, 0] rather than [0.0, 0.0, 0.0, 1.0]. Is this implementation correct? What could the problem with this matrix be?

sigmoid90, Sun, 06 Dec 2020 04:51:36 -0600, http://answers.opencv.org/question/238792/

## Row and Col problem when Mat represents 3d point cloud
http://answers.opencv.org/question/189357/row-and-col-problem-when-mat-represents-3d-point-cloud/

A Mat representing a 3D point cloud in OpenCV has size N x 3, where N is the number of points in the cloud and 3 holds the x, y, z coordinates. For example, when I use the loadPLYSimple method to load data from a PLY file I get an N x 3 Mat, and when I use the FLANN KDTREE I need to pass an N x 3 Mat as the parameter. The problem is that when I try to perform a transformation on the data, such as a rotation: if we have a 3 x 3 rotation Mat R and the points Mat pc of size N x 3, the common way is just R * pc. However, pc is N x 3, so we need to do some extra transpose work. I'm not familiar with OpenCV; I just want to know if there is a better way to do this instead of transposing each time, or maybe there is something hidden behind it that I do not understand? Thanks.

Tab, Fri, 13 Apr 2018 16:28:56 -0500, http://answers.opencv.org/question/189357/

## perspective transformation with given camera pose
http://answers.opencv.org/question/72020/perspective-transformation-with-given-camera-pose/

Hi everyone!
I'm trying to create a program that I will use to perform some tests.
In this program a 2D image is displayed in 3D space in the cv::viz window, so the user can change the camera (viewer) position and orientation.
![image description](/upfiles/1443709792833003.jpg)
After that, the program stores the camera pose and takes a snapshot of the current view (without the coordinate axes):
![image description](/upfiles/14437098062513117.jpg)
And here is the goal:
I have the **snapshot** (a perspective view of an undetermined plane, or part of the plane), the **camera pose** (especially its orientation) and the **camera parameters**. Using these given values I would like to **perform a perspective transformation to compute an orthographic view of this image** (or its visible part).
I can get the camera object and compute its projection matrix:

```cpp
camera.computeProjectionMatrix(projectionMatrix);
```

and then decompose the projection matrix:

```cpp
decomposeProjectionMatrix(subProjMatrix, cameraMatrix, rotMatrix, transVect, rotMatX, rotMatY, rotMatZ);
```
And what should I do next?
Note that I can't use chessboard corners because the image is undetermined (it may be any image), and I can't use the corner points of the image, because the user can zoom and translate the camera, so there is a possibility that no image corner point will be visible...
Thanks for any help in advance!

paws, Thu, 01 Oct 2015 09:41:43 -0500, http://answers.opencv.org/question/72020/

## Meaning of perspective transformation matrix (Q) values
http://answers.opencv.org/question/38629/meaning-of-perspective-transformation-matrix-q-values/

I did stereo camera calibration with the stereo_calib.cpp sample and got the intrinsics.yml and extrinsics.yml files, which also contain the Q matrix. What is the meaning of its values? [This answer](http://answers.opencv.org/question/4379/from-3d-point-cloud-to-disparity-map/?answer=4433#post-id-4433) shows the following items in the matrix: Cx, Cy, f, a and b. I guess f is the focal length, but I am not sure about the other values.
Also, I need the following data for my further coding:
- focal length in pixels
- principal point (u-coordinate) in pixels
- principal point (v-coordinate) in pixels
- baseline in meters
These are the parameters needed for libviso2 visual odometry.
Thanks for help in advance!

Kozuch, Sun, 03 Aug 2014 13:23:53 -0500, http://answers.opencv.org/question/38629/