OpenCV Q&A Forum RSS feed, http://answers.opencv.org/questions/. Copyright OpenCV foundation, 2012-2018.

Triangulation with Ground Plane as Origin
http://answers.opencv.org/question/238744/triangulation-with-ground-plane-as-origin/

Hello, I am working on a project where I have two calibrated cameras (c1, c2) mounted to the ceiling of my lab, and I want to triangulate points on objects that I place in the capture volume. I want my final output 3D points to be relative to a world origin placed on the ground plane (the floor of the lab). I have some questions about my process and about multiplying the necessary transformations. Here is what I have done so far...
To start, I captured an image with c1 of the ChArUco board on the ground that will act as the origin of my "world". I detect corners (cv::aruco::detectMarkers / cv::aruco::interpolateCornersCharuco) in the image taken by c1 and obtain the transformation (with cv::projectPoints) from 3D world coordinates to 3D camera coordinates.
![transform of board coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
I followed the same process of detecting corners on the ChArUco board with c2 (board in same position) and obtained the transformation that takes a point relative to the board origin to the camera origin...
![transform of board coords to camera 2 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc2%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q1. With the two transformations, and my calibrated intrinsic parameters, should I be able to pass these to cv::triangulatePoints to obtain 3D points that are relative to the ChArUco board origin?**
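For reference, cv::triangulatePoints takes 3x4 projection matrices, so with board-relative extrinsics each camera's matrix would be K·[R|t]. A minimal sketch of forming such a matrix in plain C++ with no OpenCV dependency; the array-based types and the function name are illustrative, not part of any OpenCV API:

```cpp
#include <array>
#include <cassert>

// 3x4 projection matrix type (plain arrays, no OpenCV).
using Mat34 = std::array<std::array<double, 4>, 3>;

// Form P = K * [R | t] from pinhole intrinsics K, a 3x3 rotation R,
// and a translation t. This is the matrix cv::triangulatePoints expects
// per camera when points should come out in the board (world) frame.
Mat34 projectionMatrix(const double K[3][3], const double R[3][3], const double t[3]) {
    // Build the 3x4 extrinsic block [R | t].
    double Rt[3][4];
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) Rt[i][j] = R[i][j];
        Rt[i][3] = t[i];
    }
    // Multiply K * [R | t].
    Mat34 P{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 3; ++k)
                P[i][j] += K[i][k] * Rt[k][j];
    return P;
}
```

With identity extrinsics this reduces to P = [K | 0], which is a quick sanity check on the multiplication order.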
Next, I was curious if I use cv::stereoCalibrate with my camera pair to obtain the transformation from camera 2 relative points to camera 1 relative points, could I combine this with the transform from camera 1 relative points to board relative points...to get a transform from camera 2 relative points to board relative points...
After running cv::stereoCalibrate I obtain (where c1 is the origin camera that c2 transforms to)...
![transform of camera 2 coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bc2%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bc2%7D%7D%20%5C%5C%20%7BY_%7Bc2%7D%7D%20%5C%5C%20%7BZ_%7Bc2%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q2. Should I be able to combine transforms in the following manner to get a transform that is the same (or very close) as my transform from board points to camera 2 points?**
![combined transforms](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
![combined transforms approximation](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%5Capprox%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D)
**I tried to do this and noticed that the transform obtained by detecting the ChArUco board corners is significantly different from the one obtained by combining the transformations. Should this work as I stated, or have I misunderstood something and done the math incorrectly? Here is the output I get for the two methods (translation units are meters)...**
Output from projectPoints
![](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9844968%20%26%20-0.14832049%20%26%200.09363274%20%26%20-0.7521725%5C%5C%200.01426749%20%26%20-0.46433134%20%26%20-0.88554664%20%26%201.10571043%20%5C%5C%200.17482132%20%26%200.87315373%20%26%20-0.45501656%20%26%203.89971067%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Output from combined transforms (projectPoints w/ c1 and board, and stereoCalibrate w/ c1 and c2)
![](https://latex.codecogs.com/gif.latex?%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9621638%20%26%20-0.00173254%20%26%200.01675597%20%26%20-1.03920386%5C%5C%20-0.00161398%20%26%20-0.51909025%20%26%20-0.06325754%20%26%200.02077932%20%5C%5C%20-0.01954778%20%26%20-0.07318432%20%26%20-0.49902605%20%26%201.0988982%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Looking at the transform obtained from projectPoints the translation makes sense as in the physical setup the ChArUco board is about 4m away from the camera. This makes me think the combined transform doesn't really make sense...
edit/update: Adding raw data from projectPoints and stereoCalibrate:
Sorry for the delay. Going through my code, I use estimatePoseCharucoBoard to get my transformation matrix from board coordinates to camera coordinates, sorry about that! Here are the matrices that I obtained:
**Note: Any time that a calibration board object is needed the board dimensions given are in meters. So scaling should remain the same between matrices.**
board to camera 190 from estimatePoseCharucoBoard:

```
c1^M_board =
[[ 0.99662517  0.05033606 -0.06484257 -0.88300593]
 [-0.02915834 -0.52132771 -0.85285826  0.82721859]
 [-0.07673376  0.85187071 -0.5181006   4.03620873]
 [ 0.          0.          0.          1.        ]]
```

board to camera 229 from estimatePoseCharucoBoard:

```
c2^M_board =
[[ 0.9844968  -0.14832049  0.09363274 -0.7521725 ]
 [ 0.01426749 -0.46433134 -0.88554664  1.10571043]
 [ 0.17482132  0.87315373 -0.45501656  3.89971067]
 [ 0.          0.          0.          1.        ]]
```

camera 229 to camera 190 from stereoCalibrate:

```
c1^M_c2 =
[[ 0.96542194  0.05535236  0.2547481  -1.20694685]
 [-0.03441951  0.99570816 -0.08591013  0.03888629]
 [-0.25841009  0.07417122  0.96318371  0.04002158]
 [ 0.          0.          0.          1.        ]]
```
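The Q2 consistency check is pure 4x4 rigid-transform algebra: invert c1^M_c2 and compose it with c1^M_board. A self-contained sketch of that algebra in plain C++ (no OpenCV; `Mat4` and the helper names are illustrative) that can be fed the matrices above:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Multiply two 4x4 homogeneous transforms: C = A * B.
Mat4 mul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Invert a rigid transform [R|t]: the inverse is [R^T | -R^T t].
Mat4 invertRigid(const Mat4& M) {
    Mat4 inv{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv[i][j] = M[j][i];            // R^T
    for (int i = 0; i < 3; ++i) {
        double s = 0.0;
        for (int j = 0; j < 3; ++j)
            s += M[j][i] * M[j][3];         // (R^T t)_i
        inv[i][3] = -s;
    }
    inv[3][3] = 1.0;
    return inv;
}

// Rotation about Z by `deg` degrees plus translation (tx, ty, tz),
// handy for building synthetic test poses.
Mat4 makeRz(double deg, double tx, double ty, double tz) {
    const double kPi = 3.14159265358979323846;
    double r = deg * kPi / 180.0;
    Mat4 M{};
    M[0][0] = std::cos(r); M[0][1] = -std::sin(r);
    M[1][0] = std::sin(r); M[1][1] =  std::cos(r);
    M[2][2] = 1.0;         M[3][3] = 1.0;
    M[0][3] = tx; M[1][3] = ty; M[2][3] = tz;
    return M;
}
```

With these, `mul(invertRigid(c1_M_c2), c1_M_board)` is the candidate for c2^M_board; composing a pose with its own `invertRigid` should return the identity, which is a useful unit test before trusting the combined result.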
Here is a code snippet showing how I obtain the transformation matrix to ground:

```cpp
// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);

// attempt to re-find more markers based on already detected markers
aruco::refineDetectedMarkers(image, charucoboard, corners, ids, rejected,
                             noArray(), noArray(), 10.f, 3.f, true, noArray(), detectorParams);
if (ids.size() < 1) {
    cerr << "No marker IDs found" << endl;
}

// interpolate ChArUco corners
Mat currentCharucoCorners, currentCharucoIds;
aruco::interpolateCornersCharuco(corners, ids, image, charucoboard,
                                 currentCharucoCorners, currentCharucoIds);
if (currentCharucoCorners.rows < 6) {
    cerr << "Not enough corners for calibration" << endl;
}
cout << "Corners Found: " << currentCharucoCorners.rows << endl;
cout << "Total Object Points: " << objPoints.size() << endl;

// estimate the board pose relative to the camera
aruco::estimatePoseCharucoBoard(currentCharucoCorners, currentCharucoIds, charucoboard,
                                intrinsics.cameraMatrix, intrinsics.distCoeffs,
                                rvec, tvec, false);
Rodrigues(rvec, R);
cout << "Rotation Matrix: " << R << endl;
cout << "Translation Vector: " << tvec << endl;
P = RTtoP(R, tvec);
cout << "Pose Matrix: " << P << endl;
```

ConnorM, Fri, 04 Dec 2020 10:49:29 -0600
http://answers.opencv.org/question/238744/

projectPoints functionality question
http://answers.opencv.org/question/96474/projectpoints-functionality-question/

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0 regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes on the object in the image plane.
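Under the hood, projectPoints first maps each model point through the pose, x_cam = R·x + t, and only then applies the perspective projection. A plain-C++ sketch of just that mapping step, with no OpenCV and an illustrative function name (the rotation is the Rodrigues axis-angle formula that cv::Rodrigues implements):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Rotate p about the axis-angle vector rvec (Rodrigues formula), then
// translate by tvec. This mirrors what cv::projectPoints does to each
// object point before the perspective divide.
Vec3 transformPoint(const Vec3& p, const Vec3& rvec, const Vec3& tvec) {
    double theta = std::sqrt(rvec[0]*rvec[0] + rvec[1]*rvec[1] + rvec[2]*rvec[2]);
    Vec3 out = p;
    if (theta > 1e-12) {
        Vec3 k = {rvec[0]/theta, rvec[1]/theta, rvec[2]/theta};   // unit axis
        double c = std::cos(theta), s = std::sin(theta);
        double dot = k[0]*p[0] + k[1]*p[1] + k[2]*p[2];
        Vec3 cross = {k[1]*p[2] - k[2]*p[1],
                      k[2]*p[0] - k[0]*p[2],
                      k[0]*p[1] - k[1]*p[0]};
        // Rodrigues: p*cos(theta) + (k x p)*sin(theta) + k*(k.p)*(1 - cos(theta))
        for (int i = 0; i < 3; ++i)
            out[i] = p[i]*c + cross[i]*s + k[i]*dot*(1.0 - c);
    }
    for (int i = 0; i < 3; ++i) out[i] += tvec[i];
    return out;
}
```

The returned point is the axis endpoint in camera coordinates, i.e. "rotated and translated" without any projection.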
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing the identity matrix and a zero vector for rotation and translation respectively), but the results seem way off. How can I eliminate the projection step to just get the world coordinates of the axes, once they've been rotated and translated? Thanks!

bfc_opencv, Tue, 14 Jun 2016 21:19:07 -0500
http://answers.opencv.org/question/96474/

OpenCV + OpenGL: proper camera pose using solvePnP
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/

I've got a problem with obtaining a proper camera pose from the iPad camera using OpenCV.
I'm using a custom-made 2D marker (based on the [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26)), and I want to render a 3D cube over that marker using OpenGL.
In order to obtain the camera pose I'm using the solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
```cpp
cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix],
             _userDefaultsManager.distCoeffs, rvec, tvec);
tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in the X axis is inverted

cv::Mat R;
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t();              // rotation of inverse
tvec = -R * tvec;       // translation of inverse

cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1;    // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
// fill the last row with (0, 0, 0, 1)
double *p = T.ptr<double>(3);
p[0] = p[1] = p[2] = 0;
p[3] = 1;
```
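The `R.t()` / `-R.t() * tvec` step above is the standard rigid-pose inverse: solvePnP returns world-to-camera, and in particular the camera position in world coordinates is -Rᵀt. A small dependency-free check of that formula (the function name is illustrative):

```cpp
#include <array>
#include <cassert>

// Camera center in world coordinates from a world-to-camera pose:
// x_cam = R * x_world + t, so the camera center (where x_cam = 0)
// is C = -R^T * t.
std::array<double, 3> cameraCenter(const double R[3][3], const double t[3]) {
    std::array<double, 3> C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            C[i] -= R[j][i] * t[j];   // accumulate -(R^T t)_i
    return C;
}
```

With an identity rotation the camera center is simply -t, which makes the sign convention easy to sanity-check.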
The camera matrix & dist coefficients come from the *findChessboardCorners* calibration, *imagePoints* are the manually detected corners of the marker (you can see them as a green square in the video posted below), and *markerObjectPoints* are manually hardcoded points that represent the marker corners:
```cpp
markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
markerObjectPoints.push_back(cv::Point3d( 6, -6, 0));
markerObjectPoints.push_back(cv::Point3d( 6,  6, 0));
markerObjectPoints.push_back(cv::Point3d(-6,  6, 0));
```
Because the marker is 12 cm long in the real world, I chose the same size in the model coordinates for easier debugging.
As a result I receive a 4x4 matrix T, which I'll use as the ModelView matrix in OpenGL.
Using GLKit, the drawing function looks more or less like this:
```objc
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // preparations
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);

    // set modelViewMatrix
    float mat[16] = generateOpenGLMatFromFromOpenCVMat(T);
    currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
    effect.transform.modelviewMatrix = currentModelMatrix;

    [effect prepareToDraw];
    glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
}
```
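One frequent pitfall in this step is the row-major vs. column-major layout mismatch: OpenCV's `cv::Mat` is row-major while OpenGL expects column-major `float[16]`. A hedged sketch of a converter that performs the transpose itself, so it would pair with `GLKMatrix4MakeWithArray` rather than the `...AndTranspose` variant used above (`toOpenGL` is an illustrative name, not a real API):

```cpp
#include <array>
#include <cassert>

// Convert a row-major 4x4 double matrix (OpenCV convention) into the
// column-major float[16] layout OpenGL expects. Element (row, col) of
// the source lands at index col*4 + row of the destination.
std::array<float, 16> toOpenGL(const double M[4][4]) {
    std::array<float, 16> gl{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            gl[col * 4 + row] = static_cast<float>(M[row][col]);
    return gl;
}
```

If a transpose is applied both here and by the GLKit constructor, the rotation ends up inverted while the translation column moves into the bottom row, which produces exactly the "translation looks OK, rotation is messed up" symptom.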
I'm not rotating everything by 180 degrees around the X axis (as was mentioned in the previously linked article), because it doesn't look necessary.
The problem is that it doesn't work! The translation vector looks OK, but the X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axes one by one), but nothing actually works.
What should I do? How should I properly display that 3D cube? The translation / rotation vectors that come from solvePnP look reasonable, so I guess that I can't correctly map these vectors to OpenGL matrices.

axadiw, Sat, 26 Oct 2013 17:49:13 -0500
http://answers.opencv.org/question/23089/

How to calculate the inlier points from my rotation and translation matrix?
http://answers.opencv.org/question/138651/how-to-calculate-the-inliers-points-from-my-rotation-and-translation-matrix/
If I have the point lists

```cpp
std::vector<Point3d> opoints;
std::vector<Point2d> ipoints;
```
and I have the rotation and translation matrices, how can I calculate the inlier points?
I know that cv::solvePnPRansac will calculate the inliers, rotation, and translation from the two point lists, but I need to calculate the inliers from my own rotation and translation.
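For reference, solvePnPRansac classifies a correspondence as an inlier when its reprojection error under the candidate pose is below a threshold; with a fixed R and t the same test can be applied directly. A dependency-free sketch using an undistorted pinhole model (no OpenCV; the names and the pixel threshold are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

struct Pt3 { double x, y, z; };
struct Pt2 { double u, v; };

// Return the indices of correspondences whose reprojection error under a
// fixed pose (R, t) and pinhole intrinsics (fx, fy, cx, cy) is below
// `thresh` pixels. Lens distortion is ignored in this sketch.
std::vector<int> inlierIndices(const std::vector<Pt3>& obj,
                               const std::vector<Pt2>& img,
                               const double R[3][3], const double t[3],
                               double fx, double fy, double cx, double cy,
                               double thresh) {
    std::vector<int> inliers;
    for (size_t i = 0; i < obj.size(); ++i) {
        // Transform into camera coordinates: Xc = R * X + t.
        double X = R[0][0]*obj[i].x + R[0][1]*obj[i].y + R[0][2]*obj[i].z + t[0];
        double Y = R[1][0]*obj[i].x + R[1][1]*obj[i].y + R[1][2]*obj[i].z + t[1];
        double Z = R[2][0]*obj[i].x + R[2][1]*obj[i].y + R[2][2]*obj[i].z + t[2];
        if (Z <= 0) continue;              // behind the camera, never an inlier
        double u = fx * X / Z + cx;        // pinhole projection
        double v = fy * Y / Z + cy;
        double err = std::hypot(u - img[i].u, v - img[i].v);
        if (err < thresh) inliers.push_back(static_cast<int>(i));
    }
    return inliers;
}
```

To match solvePnPRansac's behavior on real data you would also apply the distortion model before comparing, e.g. by projecting with cv::projectPoints instead of the bare pinhole formula.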
Thanks for your support.

Mohammed Omar, Fri, 07 Apr 2017 16:35:39 -0500
http://answers.opencv.org/question/138651/

Camera rotation and translation based on two images
http://answers.opencv.org/question/68023/camera-rotation-and-translation-based-on-two-images/

Hello,
I'm just starting my little project in OpenCV and I need your help :)
I would like to calculate the rotation and translation values of the camera based on two views of the same planar, square object.
I have already found functions such as: getPerspectiveTransform, decomposeEssentialMat, decomposeHomographyMat. Plenty of tools, but I'm not sure which of them to use in my case.
I have a square object of known real-world dimensions [meters]. After simple image processing I can extract pixel values of the vertices and the center of the square.
Now I would like to calculate the relative rotation and translation of the camera which led to the second of the two images:<br>
"Reference view" and "View #n"<br>
(please see below).
Any suggestions will be appreciated :)
1. Reference view:<br>
![image description](/upfiles/1438854857209.png)
<br>(center of the object is on the optical axis of camera, the camera-object distance is known)
2. View #1:<br>
![image description](/upfiles/14388548769288926.png)
3. View #2:<br>
![image description](/upfiles/14388548834324958.png)
4. View #3:<br>
![image description](/upfiles/1438854889587757.png)
Alice, Thu, 06 Aug 2015 05:40:19 -0500
http://answers.opencv.org/question/68023/

Please help me! How to compute the rotation and translation matrix?
http://answers.opencv.org/question/30079/please-help-me-how-to-compute-the-rotation-and-translation-matrix/
I have computed the corresponding coordinates from two successive images, but I do not know how to compute the rotation and translation matrices (which I would use to estimate the camera motion). Is there a function in OpenCV that could solve my problem?

shmm91, Mon, 17 Mar 2014 07:31:12 -0500
http://answers.opencv.org/question/30079/

Unit of pose vectors from solvePnP()
http://answers.opencv.org/question/13225/unit-of-pose-vectors-from-solvepnp/

I would like to ask about the solvePnP() output of rotation and translation vectors. What are their units? Are they radians and meters respectively?
Thanks in advance.
alfa_80, Sun, 12 May 2013 11:42:31 -0500
http://answers.opencv.org/question/13225/