OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers. Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Sun, 06 Dec 2020 04:51:36 -0600

## Triangulation with Ground Plane as Origin
http://answers.opencv.org/question/238744/triangulation-with-ground-plane-as-origin/

Hello, I am working on a project where I have two calibrated cameras (c1, c2) mounted to the ceiling of my lab, and I want to triangulate points on objects that I place in the capture volume. I want my final output 3D points to be relative to the world origin that is placed on the ground plane (the floor of the lab). I have some questions about my process and about multiplying the necessary transformations. Here is what I have done so far...
To start, I have captured an image with c1 of the ChArUco board on the ground that will act as the origin of my "world". I detect corners (cv::aruco::detectMarkers / cv::aruco::interpolateCornersCharuco) in the image taken by c1 and obtain the transformation (with cv::projectPoints) from 3D world coordinates to 3D camera coordinates.
![transform of board coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
I followed the same process of detecting corners on the ChArUco board with c2 (board in same position) and obtained the transformation that takes a point relative to the board origin to the camera origin...
![transform of board coords to camera 2 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc2%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q1. With the two transformations, and my calibrated intrinsic parameters, should I be able to pass these to cv::triangulatePoints to obtain 3D points that are relative to the ChArUco board origin?**
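Regarding Q1: that should work, provided each camera's 3x4 projection matrix is built as K·[R|t] with both extrinsics expressed relative to the board; cv::triangulatePoints then returns homogeneous points in board coordinates. A sketch of the underlying DLT in numpy, using synthetic intrinsics and poses (not the poster's calibration):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation, the same scheme cv::triangulatePoints uses."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # homogeneous -> (X, Y, Z, 1) in board coordinates

# Synthetic setup: intrinsics K shared by both cameras (an assumption for brevity).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0.,   0.,   1.]])
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # board -> camera 1 (toy pose)
M2 = np.hstack([np.eye(3), [[-0.5], [0.], [0.]]])    # camera 2 shifted 0.5 m in X
P1, P2 = K @ M1, K @ M2

Xw = np.array([0.1, -0.2, 3.0, 1.0])                 # a board-frame point 3 m out
x1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
x2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
Xr = triangulate_dlt(P1, P2, x1, x2)
print(Xr[:3])  # recovers [0.1, -0.2, 3.0]
```

With real data you would substitute each camera's calibrated K and the [R|t] blocks of ^{c1}M_board and ^{c2}M_board, and undistort the pixel coordinates first.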
Next, I was curious if I use cv::stereoCalibrate with my camera pair to obtain the transformation from camera 2 relative points to camera 1 relative points, could I combine this with the transform from camera 1 relative points to board relative points...to get a transform from camera 2 relative points to board relative points...
After running cv::stereoCalibrate I obtain (where c1 is the origin camera that c2 transforms to)...
![transform of camera 2 coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bc2%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bc2%7D%7D%20%5C%5C%20%7BY_%7Bc2%7D%7D%20%5C%5C%20%7BZ_%7Bc2%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q2. Should I be able to combine transforms in the following manner to get a transform that is the same as (or very close to) my transform from board points to camera 2 points?**
![combined transforms](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
![combined transforms approximation](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%5Capprox%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D)
**I tried to do this and noticed that the transform obtained by detecting the ChArUco board corners is significantly different from the one obtained by combining the transformations. Should this work as I stated, or have I misunderstood something and done the math incorrectly? Here is the output I get for the two methods (translation units are meters)...**
Output from projectPoints
![](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9844968%20%26%20-0.14832049%20%26%200.09363274%20%26%20-0.7521725%5C%5C%200.01426749%20%26%20-0.46433134%20%26%20-0.88554664%20%26%201.10571043%20%5C%5C%200.17482132%20%26%200.87315373%20%26%20-0.45501656%20%26%203.89971067%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Output from combined transforms (projectPoints w/ c1 and board, and stereoCalibrate w/ c1 and c2)
![](https://latex.codecogs.com/gif.latex?%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9621638%20%26%20-0.00173254%20%26%200.01675597%20%26%20-1.03920386%5C%5C%20-0.00161398%20%26%20-0.51909025%20%26%20-0.06325754%20%26%200.02077932%20%5C%5C%20-0.01954778%20%26%20-0.07318432%20%26%20-0.49902605%20%26%201.0988982%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Looking at the transform obtained from projectPoints, the translation makes sense, as in the physical setup the ChArUco board is about 4 m away from the camera. This makes me think the combined transform doesn't really make sense...
edit/update: Adding raw data from projectPoints and stereoCalibrate:
Sorry for the delay. Going through my code, I actually use estimatePoseCharucoBoard to get my transformation matrix from board coords to camera coords, sorry about that! Here are the matrices that I obtained:
**Note: Any time that a calibration board object is needed the board dimensions given are in meters. So scaling should remain the same between matrices.**
board to camera 190 from estimatePoseCharucoBoard -->

c1^M_board =

    [[ 0.99662517  0.05033606 -0.06484257 -0.88300593]
     [-0.02915834 -0.52132771 -0.85285826  0.82721859]
     [-0.07673376  0.85187071 -0.5181006   4.03620873]
     [ 0.          0.          0.          1.        ]]
board to camera 229 from estimatePoseCharucoBoard -->

c2^M_board =

    [[ 0.9844968  -0.14832049  0.09363274 -0.7521725 ]
     [ 0.01426749 -0.46433134 -0.88554664  1.10571043]
     [ 0.17482132  0.87315373 -0.45501656  3.89971067]
     [ 0.          0.          0.          1.        ]]
camera 229 to camera 190 from stereoCalibrate -->

c1^M_c2 =

    [[ 0.96542194  0.05535236  0.2547481  -1.20694685]
     [-0.03441951  0.99570816 -0.08591013  0.03888629]
     [-0.25841009  0.07417122  0.96318371  0.04002158]
     [ 0.          0.          0.          1.        ]]
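The Q2 composition can be checked numerically against the raw matrices posted above. A minimal numpy sketch (only arithmetic on the posted data, nothing from the poster's pipeline):

```python
import numpy as np

# Raw matrices posted above (board -> cam 190, board -> cam 229, cam 229 -> cam 190).
c1_M_board = np.array([
    [ 0.99662517,  0.05033606, -0.06484257, -0.88300593],
    [-0.02915834, -0.52132771, -0.85285826,  0.82721859],
    [-0.07673376,  0.85187071, -0.5181006 ,  4.03620873],
    [ 0.        ,  0.        ,  0.        ,  1.        ]])
c2_M_board = np.array([
    [ 0.9844968 , -0.14832049,  0.09363274, -0.7521725 ],
    [ 0.01426749, -0.46433134, -0.88554664,  1.10571043],
    [ 0.17482132,  0.87315373, -0.45501656,  3.89971067],
    [ 0.        ,  0.        ,  0.        ,  1.        ]])
c1_M_c2 = np.array([
    [ 0.96542194,  0.05535236,  0.2547481 , -1.20694685],
    [-0.03441951,  0.99570816, -0.08591013,  0.03888629],
    [-0.25841009,  0.07417122,  0.96318371,  0.04002158],
    [ 0.        ,  0.        ,  0.        ,  1.        ]])

# Q2 claims: c2_M_board ~= (c1_M_c2)^-1 . c1_M_board
c2_M_board_est = np.linalg.inv(c1_M_c2) @ c1_M_board
print(np.round(c2_M_board_est - c2_M_board, 3))   # elementwise disagreement
```

Running this, the composed transform agrees with the directly estimated c2^M_board to within a few hundredths (translation differences of a few centimetres), so the composition itself appears sound; the very different "combined" matrix quoted earlier in the question may have come from a different computation.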
Here is a code snippet showing how I obtain the transformation matrix to ground:
    // detect markers
    aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);

    // Attempt to refind more markers based on already detected markers
    aruco::refineDetectedMarkers(image, charucoboard, corners, ids, rejected,
                                 noArray(), noArray(), 10.f, 3.f, true, noArray(), detectorParams);
    if (ids.size() < 1) {
        cerr << "No marker IDs found" << endl;
    }

    // interpolate charuco corners
    Mat currentCharucoCorners, currentCharucoIds;
    aruco::interpolateCornersCharuco(corners, ids, image, charucoboard,
                                     currentCharucoCorners, currentCharucoIds);
    if (currentCharucoCorners.rows < 6) {
        cerr << "Not enough corners for calibration" << endl;
    }
    cout << "Corners Found: " << currentCharucoCorners.rows << endl;
    cout << "Total Object Points: " << objPoints.size() << endl;

    aruco::estimatePoseCharucoBoard(currentCharucoCorners, currentCharucoIds, charucoboard,
                                    intrinsics.cameraMatrix, intrinsics.distCoeffs,
                                    rvec, tvec, false);
    Rodrigues(rvec, R);
    cout << "Rotation Matrix: " << R << endl;
    cout << "Translation Vector: " << tvec << endl;
    P = RTtoP(R, tvec);
    cout << "Projection Matrix: " << P << endl;

*ConnorM, Fri, 04 Dec 2020 10:49:29 -0600*

## Problem with building pose Mat from rotation and translation matrices
http://answers.opencv.org/question/238792/problem-with-building-pose-mat-from-rotation-and-translation-matrices/

I have two images captured in the same space (scene), one with a known pose. I need to calculate the pose of the second (query) image. I have obtained the relative camera pose using the essential matrix. Now I am computing the camera pose through matrix multiplication ([here](https://answers.opencv.org/question/31421/opencv-3-essentialmatrix-and-recoverpose/) is a formula).
I am trying to build the 4x4 pose Mat from the rotation and translation matrices. My code is as follows:
    Pose bestPose = poses[best_view_index];
    Mat cameraMotionMat = bestPose.buildPoseMat();
    cout << "cameraMotionMat: " << cameraMotionMat.rows << ", " << cameraMotionMat.cols << endl;
    float row_a[4] = {0.0, 0.0, 0.0, 1.0};
    Mat row = Mat::zeros(1, 4, CV_64F);
    cout << row.type() << endl;
    cameraMotionMat.push_back(row);
    // cameraMotionMat.at<float>(3, 3) = 1.0;
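A hedged guess about the strange last row reported below: if the commented-out line was active when the output was produced, `at<float>(3, 3)` on a CV_64F matrix computes its byte offset with sizeof(float) and writes float 1.0's four bytes into the upper half of the double in column 1. On a little-endian machine that bit pattern decodes as exactly 0.0078125 (2^-7). A small Python reproduction of the byte-level effect (no OpenCV needed):

```python
import struct

# One CV_64F matrix row: four doubles, all zero (what push_back of the zeros row appends).
row = bytearray(struct.pack('<4d', 0.0, 0.0, 0.0, 0.0))

# What cameraMotionMat.at<float>(3, 3) = 1.0 does: write float 1.0's bytes at
# byte offset 3 * sizeof(float) = 12, i.e. into the middle of double column 1.
struct.pack_into('<f', row, 3 * 4, 1.0)

print(struct.unpack_from('<4d', row))   # (0.0, 0.0078125, 0.0, 0.0)
```

That matches the reported row [0, 0.0078125, 0, 0]. The type-correct fix would be `cameraMotionMat.at<double>(3, 3) = 1.0;` after the push_back, or setting the 1.0 in the CV_64F row before appending it.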
Earlier in the code, for each view image:
    Mat E = findEssentialMat(pts1, pts2, focal, pp, FM_RANSAC, F_DIST, F_CONF, mask);
    // Read pose for view image
    Mat R, t; //, mask;
    recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
    Pose pose(R, t);
    poses.push_back(pose);
Initially, the method bestPose.buildPoseMat() returns a Mat of size (3, 4). I need to extend the Mat to size (4, 4) with the row [0.0, 0.0, 0.0, 1.0] (a zero vector with 1 in the last position). Strangely, I get the following output when I print out the resultant matrix:
> [0.9107258520121255,
> 0.4129580377861768, 0.006639390377046724, 0.9039011699443721;
> 0.4129661348384583, -0.9107463665340377, 0.0001652925667582038, -0.4277340727282191;
> 0.006115059555925467, 0.002591307168000504, -0.9999779453436902, 0.002497598952195387;
> 0, 0.0078125, 0, 0]
The last row is not what it should be: [0, 0.0078125, 0, 0] rather than [0.0, 0.0, 0.0, 1.0]. Is this implementation correct? What could be the problem with this matrix?

*sigmoid90, Sun, 06 Dec 2020 04:51:36 -0600*

## How to get real world projection coordinate
http://answers.opencv.org/question/225422/how-to-get-real-world-projection-coordinate/

I have a TV screen (the dimensions of the TV are known, say width w and height h) and a camera somewhere nearby; the physical distance between the camera and the TV screen's center is known, say (Δx, Δy, Δz). The camera and TV screen might be facing in different directions; the vertical and horizontal angles that they make with each other, say θv and θh, are also known.
Now the camera has recorded the gaze of a person in terms of yaw and pitch (and roll too, but roll is not needed in this case). Also, the person's real-world distance from the camera is known: z-dist, x-dist and y-dist.
How do I project this person's gaze onto the TV's plane and determine whether the gaze intersects the TV screen, given the TV's physical dimensions, and if it does, find the relative position of the intersection on the plane?

*Kafan, Tue, 28 Jan 2020 08:25:40 -0600*

## Distance between Camera and Marker (calculate with Tvec)
http://answers.opencv.org/question/218178/distance-between-camera-and-marker-calculate-with-tvec/

So I have a marker of known size and set the world coordinate origin to the center of the marker. With solvePnP I calculated the corresponding rotation vector as well as the translation vector. Camera calibration was done in advance. When I project points given in world coordinates, they show up correctly on the display in pixel coordinates. Now I want to calculate the distance between the camera and the marker. If I understood everything correctly, the shift between the world coordinate system and the camera coordinate system is given by the translation vector. Accordingly, the distance between the marker and the camera should be the norm of the vector given by -Rvec(inverse)*Tvec. But if I do that, the distance is way too high (about 2.5x). Am I missing something here? How can I get the right distance between the camera and the marker?
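One observation on the distance question above: rotation preserves length, so norm(-R.t() * tvec) is always exactly norm(tvec), and the inversion step cannot change the distance. A factor of ~2.5 therefore suggests a unit/scale problem in the object points given to solvePnP (e.g. marker size entered in the wrong unit) rather than a math error. A quick numpy check of the norm identity, with a minimal hand-rolled Rodrigues formula and toy values:

```python
import numpy as np

def rodrigues(rvec):
    """Minimal Rodrigues formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0., -k[2], k[1]],
                  [k[2], 0., -k[0]],
                  [-k[1], k[0], 0.]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rodrigues([0.3, -1.1, 0.4])       # toy rotation vector
t = np.array([0.2, -0.7, 2.5])        # toy tvec
cam_in_world = -R.T @ t               # camera position in marker coordinates
print(np.linalg.norm(cam_in_world), np.linalg.norm(t))   # identical norms
```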
Thanks in advance

*Markus11123, Tue, 10 Sep 2019 13:18:34 -0500*

## Figure out scale and rotation of template image based on matched keypoints
http://answers.opencv.org/question/216386/figure-out-scale-and-rotation-of-template-image-based-on-matched-keypoints/

I was able to use ORB matching to match my template image in my camera feed. Then I was able to extract the matching keypoints in both the feed and the template. Now I really want to find what would be the center of my template image, but in my camera feed. I was thinking that if I could figure out how to rotate and then scale the template image to fit the matched keypoints, then I would know where the center was.
Does anyone have any advice on how to do something like that?
Thank you.

*eric_engineer, Tue, 30 Jul 2019 13:40:09 -0500*

## How to calibrate rotation and translation between camera and see through display
http://answers.opencv.org/question/214852/how-to-calibrate-rotation-and-translation-between-camera-and-see-through-display/

This is a follow-on to my question [here](https://answers.opencv.org/question/214736/translating-from-camera-space-into-see-through-display-space/). Basically I have a camera and a see-through display; they are both rigidly mounted to the same metal frame. The camera and display are about an inch and a half apart. What I want is to do the best job I can of aligning an overlay on the real world by translating what the camera sees to the display. FOV and resolution do not match.
I see that this is sort of like the stereo camera problem, where you take pictures of the same test pattern from both and then somehow come up with the translation and rotation matrix. Here I only have my eye so I was thinking of printing a checkerboard and putting it on the wall. Then displaying that checkerboard on my display. Next I'd position myself so that the display checkerboard aligned itself over the top of the one on the wall. And then I'd take a picture. At least at that point I'd know some information about what the camera and the display see at the same time.
After that I'm not sure what to do with that data :) Am I on the right track here? Should I be taking photos from different angles? And then what do I do with this data once I get it? I'm not sure on the math here. I've been trying to read about how stereo camera are calibrated, how cameras are modeled, and how you project 3D objects into 2D space but I haven't come up with the solution to this problem yet.
Thank you

*eric_engineer, Thu, 27 Jun 2019 16:01:24 -0500*

## Translating from camera space, into see through display space
http://answers.opencv.org/question/214736/translating-from-camera-space-into-see-through-display-space/

I'm new to image processing and OpenCV, so sorry if this is the wrong place for this question. I have a head-mounted camera taking video that I am doing some object detection on. Then I want to draw the outline of the detected object on my see-through eye-mounted display. Now I see some obvious problems with this. First, of course, the camera is out of x,y alignment with my eye. I don't think moving it would be too hard.
But then the field of view of the camera and my display are both different, so I'm thinking I have to crop the camera to the smaller FOV of the display. I'm wondering if I can just use the ratio of the two FOVs for this?
Am I missing something? Is there a preferred or right way to come up with this translation?
Thank you for your time!

*eric_engineer, Tue, 25 Jun 2019 14:48:25 -0500*

## projectPoints functionality question
http://answers.opencv.org/question/96474/projectpoints-functionality-question/

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0 regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes of the object in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing the identity matrix and zero matrix for rotation and translation respectively), but the results seem way off. How can I eliminate the projection step to just get the world coordinates of the axes once they've been rotated and translated? Thanks!

*bfc_opencv, Tue, 14 Jun 2016 21:19:07 -0500*

## Output view full of translated image
http://answers.opencv.org/question/205550/output-view-full-of-translated-image/

Hello. I want to translate an image and I want to have a full view of the output. To be more precise, I want to translate this
![image description](/upfiles/15451415073235399.png)
to this
![image description](/upfiles/1545141577276499.png)
How can this be done in opencv?
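With OpenCV this is cv2.warpAffine(img, M, dsize), where M = np.float32([[1, 0, tx], [0, 1, ty]]) and dsize is enlarged to (w + tx, h + ty) so the shifted image stays fully in view. A numpy-only sketch of the same idea (`translate_full` is a hypothetical helper name, no OpenCV required):

```python
import numpy as np

def translate_full(img, tx, ty):
    """Shift img by (tx, ty) pixels onto a canvas big enough to keep it all
    in view (the effect of cv2.warpAffine with an enlarged dsize)."""
    h, w = img.shape[:2]
    out = np.zeros((h + abs(ty), w + abs(tx)) + img.shape[2:], dtype=img.dtype)
    y0, x0 = max(ty, 0), max(tx, 0)
    out[y0:y0 + h, x0:x0 + w] = img
    return out

img = np.arange(12, dtype=np.uint8).reshape(3, 4)
shifted = translate_full(img, 2, 1)
print(shifted.shape)   # (4, 6)
```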
Thank you in advance

*kspanakis, Tue, 18 Dec 2018 08:01:01 -0600*

## OpenCV + OpenGL: proper camera pose using solvePnP
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/

I've got a problem with obtaining a proper camera pose from an iPad camera using OpenCV.
I'm using custom made 2D marker (based on [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26) ) - I want to render 3D cube over that marker using OpenGL.
In order to receive the camera pose I'm using the solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
    cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);
    tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted

    cv::Mat R;
    cv::Rodrigues(rvec, R); // R is 3x3
    R = R.t();              // rotation of inverse
    tvec = -R * tvec;       // translation of inverse

    cv::Mat T(4, 4, R.type()); // T is 4x4
    T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1;    // copies R into T
    T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
    double *p = T.ptr<double>(3);
    p[0] = p[1] = p[2] = 0;
    p[3] = 1;
The camera matrix & distortion coefficients come from calibration with *findChessboardCorners*, *imagePoints* are the manually detected corners of the marker (you can see them as a green square in the video posted below), and *markerObjectPoints* are manually hardcoded points that represent the marker corners:
    markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d(6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d(6, 6, 0));
    markerObjectPoints.push_back(cv::Point3d(-6, 6, 0));
Because the marker is 12 cm long in the real world, I've chosen the same size in the code for easier debugging.

As a result I'm receiving a 4x4 matrix T that I'll use as the ModelView matrix in OpenGL.

Using GLKit, the drawing function looks more or less like this:
    - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
        // preparations
        glClearColor(0.0, 0.0, 0.0, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
        effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);

        // set modelViewMatrix
        float mat[16] = generateOpenGLMatFromOpenCVMat(T);
        currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
        effect.transform.modelviewMatrix = currentModelMatrix;

        [effect prepareToDraw];
        glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
    }
I'm not rotating everything by 180 degrees around the X axis (as was mentioned in the previously linked article), because it doesn't look necessary.
The problem is that it doesn't work! Translation vector looks OK, but X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axes one by one), but nothing actually works.
What should I do? How should I properly display that 3D cube? The translation / rotation vectors that come from solvePnP look reasonable, so I guess that I can't correctly map these vectors to OpenGL matrices.

*axadiw, Sat, 26 Oct 2013 17:49:13 -0500*

## how to calculate the inliers points from my rotation and translation matrix?
http://answers.opencv.org/question/138651/how-to-calculate-the-inliers-points-from-my-rotation-and-translation-matrix/
If I have the point lists

    std::vector<Point3d> opoints;
    std::vector<Point2d> ipoints;

and I have the rotation and translation matrix, how can I calculate the inlier points?

I know that cv::solvePnPRansac will calculate the inliers, rotation, and translation from the two point lists, but I need to calculate the inliers from my own rotation and translation.
Thanks for your support

*Mohammed Omar, Fri, 07 Apr 2017 16:35:39 -0500*

## incidence angle from translation vector
http://answers.opencv.org/question/133209/incidence-angle-from-translation-vector/

Using OpenCV 3.2 for a local positioning system utilizing aruco.
Have successfully extracted 3d angles/matrices between camera and identified markers.
But I also need to calculate the angle to the camera caused by **translation** of the marker in x and y.
Using typical methods in OpenCV, it is straightforward to get this translation vector.
But I am stuck on how to straightforwardly go from the translation vector to angle of incidence.
I have searched/read widely, but have not found any commentary on this. I am not a 3d expert by any means and it is more than possible I have just overlooked something.
My own instinct at this point is to simply translate 0, 0, 0 in model coords to a camera location, and estimate the angle of incidence geometrically. There are some obvious problems with this, and I suspect it will be less than satisfactory.
So - my question is: Is there a straightforward way of going from the translation vector returned by methods like solvePnP() or estimatePoseSingleMarkers() to the angle of incidence to the camera caused by the translation?
This is NOT the angle of pose - those come (reasonably) directly from these functions - but those angles do not reflect changes in the angle of incidence due to the translation; they incorporate only the actual rotation of the detected object.
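One hedged reading of the question above: the horizontal and vertical angles at which the camera sees the marker (its bearing off the optical axis, independent of the marker's own rotation) fall straight out of the translation vector with atan2. A sketch with toy values:

```python
import numpy as np

tvec = np.array([0.4, -0.1, 2.0])   # marker position in camera coordinates (toy values)

yaw = np.degrees(np.arctan2(tvec[0], tvec[2]))    # horizontal angle off the optical axis
pitch = np.degrees(np.arctan2(tvec[1], tvec[2]))  # vertical angle off the optical axis
print(yaw, pitch)
```

Here `tvec` is the translation returned by solvePnP() or estimatePoseSingleMarkers(); the signs of yaw/pitch follow OpenCV's camera convention (x right, y down, z forward).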
Any help greatly appreciated,
Seth

*sneiman, Thu, 09 Mar 2017 15:28:07 -0600*

## How to translate the Region of interest to the Center of gravity of the image?
http://answers.opencv.org/question/123516/how-to-translate-the-region-of-interest-to-the-center-of-gravity-of-the-image/

I'm using OpenCV 3.0 with C++. I have been able to find the COG of an irregular object, but now I need to translate the ROI to the COG of the image, that is, to center it.
Any suggestions how to do it?
![image description](/upfiles/14851537705317878.png)
*Allison, Sun, 22 Jan 2017 06:53:53 -0600*

## camera rotation and translation based on two images
http://answers.opencv.org/question/68023/camera-rotation-and-translation-based-on-two-images/

Hello,
I'm just starting my little project in OpenCV and I need your help :)
I would like to calculate the rotation and translation values of the camera based on two views of the same planar, square object.
I have already found functions such as: getPerspectiveTransform, decomposeEssentialMat, decomposeHomographyMat. Plenty of tools, but I'm not sure which of them to use in my case.
I have a square object of known real-world dimensions [meters]. After simple image processing I can extract pixel values of the vertices and the center of the square.
Now I would like to calculate the relative rotation and translation of the camera which led to obtaining the second of the two images:<br>
"Reference view" and "View #n"<br>
(please see below).
Any suggestions will be appreciated :)
1. Reference view:<br>
![image description](/upfiles/1438854857209.png)
<br>(center of the object is on the optical axis of camera, the camera-object distance is known)
2. View #1:<br>
![image description](/upfiles/14388548769288926.png)
3. View #2:<br>
![image description](/upfiles/14388548834324958.png)
4. View #3:<br>
![image description](/upfiles/1438854889587757.png)
*Alice, Thu, 06 Aug 2015 05:40:19 -0500*

## Image stitching of translating images
http://answers.opencv.org/question/64120/image-stitching-of-translating-images/

Hi all,
I'm writing a program that stitches images taken by a flying drone. The problem is that the images are translating, as if the drone were acting like a "scanner". So when I calculate feature points and then a homography, it messes up my whole mosaic. Is there a way in OpenCV (or with OpenCV together with another library) to stitch together images that differ by a translation instead of a rotation?

*bjorn89, Sun, 14 Jun 2015 16:16:12 -0500*

## Shift contour in Java
http://answers.opencv.org/question/53243/shift-contoure-in-java/

I would like to shift contours to the left 10 pixels in Java, but I am not able to find out how to do it.
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(mat, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    Mat border = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC1);
    for (MatOfPoint contour : contours) { // iterate over every contour in the list
        Imgproc.drawContours(border, translate(contours), contours.indexOf(contour), new Scalar(255));
    }
I have found an implementation of a *translate(contours)* function, but I am not able to rewrite it in Java. For example, how would I reimplement:
    template<class T>
    std::vector<cv::Point> translate_contour(std::vector<T> in, int offset_x, int offset_y) {
        std::vector<cv::Point> ret_contour;
        for (int i = 0; i < in.size(); i++) {
            int x = (int)(in[i].x + offset_x + 0.5);
            int y = (int)(in[i].y + offset_y + 0.5);
            ret_contour.push_back(cv::Point(x, y));
        }
        return ret_contour;
    }

*mpele, Sat, 17 Jan 2015 17:04:58 -0600*

## stereo calibration translation vector
http://answers.opencv.org/question/22307/stereo-calibration-translation-vector/

When using the stereoCalibrate function, I am putting the left images in for camera one and the right images in for camera two (standing behind the cameras). I would assume the translation vector would be in the positive x direction (i.e. to the right), but it is not. The magnitude of the vector is correct; it's just going in the "wrong" direction. I was just curious why it's coming back negative. Thanks.

*sam.petrocelli, Fri, 11 Oct 2013 11:49:30 -0500*

## Stitching: how to get camera translation into bundle adjustment?
http://answers.opencv.org/question/12740/stitching-how-to-get-camera-translation-into-bundle-adjustment/

When examining the stitching module, it appears to only be set up for a rotating camera. In theory, can it be modified easily for a rotating and translating camera?
I think it is in the bundle adjustment that this would need to be done because in the Brown and Lowe paper which describes the stitching module it states:
> The new image is initialised with the same rotation and focal length as the image to which it best matches. Then the parameters are updated using Levenberg-Marquardt
Is this where effort should be made to make the camera rotate and translate (this is all new to me)?
Within the bundle adjustment, can I just initialise the next image with the last image's rotation combined with a best estimate of translation? Somehow that doesn't seem right to me, as I've read about cost functions etc. Do I need to do something actually inside the bundle adjustment code?
I'm quite lost here and googling hasn't turned anything up. I'm very new to this though so perhaps I'm searching with the wrong terminology. If anybody has any suggestions at all even it is just some term to try googling it would really help.
Thanks
*ricor, Wed, 01 May 2013 14:53:11 -0500*

## Rotation Vectors and Translation Vectors
http://answers.opencv.org/question/14256/rotation-vectors-and-translation-vectors/

Hi, I am using the cvCalibrateCamera2(....) function in OpenCV. One of the outputs I get is rotation_vectors, which is an Nx3 matrix. I have seen the documentation, and it says to look at the cvRodrigues2() function for further details. I have understood that the cvRodrigues2() function converts a 1x3 rotation vector to a 3x3 rotation matrix. My question is: which 1x3 rotation vector out of the N should be input to cvRodrigues2() for calculating the rotation matrix?

*sachin_rt, Thu, 30 May 2013 00:03:41 -0500*

## Question about unknown parameter - I am not a coder
http://answers.opencv.org/question/32032/question-about-unknown-parameter-i-am-not-a-coder/

Hey,
I am operating an OpenCV 3D scanner, but I am not a programmer. I would like to learn more about the calibration output files (cam_rotation_vectors.xml, cam_translation_vectors.xml, cam_extrinsic.xml, ..., as well as those for the projector). What data are presented in each of these files? For example, I got data in an 8x3 matrix in cam_translation_vectors.xml. This format does not fit anything I found documented in online tutorials.
Many thanks!

*LadyUnknowing, Sun, 20 Apr 2014 10:04:17 -0500*

## Please help me! How to compute the rotation and translation matrix?
http://answers.opencv.org/question/30079/please-help-me-how-to-compute-the-rotation-and-translation-matrix/
I have computed the corresponding coordinates from two successive images, but I do not know how to compute the rotation and translation matrix (which I would use to estimate the camera motion). Is there a function in OpenCV that could solve my problem?

*shmm91, Mon, 17 Mar 2014 07:31:12 -0500*

## how to achieve Similarity Transform ?
http://answers.opencv.org/question/19929/how-to-achieve-similarity-transform/

Hi
I have been able to transform an image using an affine transform and a perspective transform using the [affine transformation tutorial](http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html). I have also been able to rotate an image using [getRotationMatrix2D](http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#getrotationmatrix2d). But how can I translate an image?
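A similarity transform is exactly rotation + uniform scale + translation, so one hedged recipe is to take the 2x3 matrix that cv2.getRotationMatrix2D builds (rotation and scale about a center) and add the translation to its last column before passing it to cv2.warpAffine. A numpy sketch of the matrix itself (`similarity_matrix` is a hypothetical helper):

```python
import numpy as np

def similarity_matrix(center, angle_deg, scale, tx, ty):
    """2x3 similarity transform: rotate/scale about `center` (the same layout
    cv2.getRotationMatrix2D produces), then translate by (tx, ty)."""
    a = np.deg2rad(angle_deg)
    alpha, beta = scale * np.cos(a), scale * np.sin(a)
    cx, cy = center
    return np.array([
        [alpha,  beta, (1 - alpha) * cx - beta * cy + tx],
        [-beta, alpha, beta * cx + (1 - alpha) * cy + ty],
    ])

M = similarity_matrix((0, 0), 0, 1.0, 10, 5)   # pure translation
p = M @ np.array([3.0, 4.0, 1.0])              # maps (3, 4) to (13, 9)
print(p)
```

The resulting M can be passed straight to cv2.warpAffine(img, M, dsize).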
*krammer, Sun, 01 Sep 2013 10:18:35 -0500*

## Pose estimation produces wrong translation vector
http://answers.opencv.org/question/18565/pose-estimation-produces-wrong-translation-vector/

Hi,
I'm trying to extract camera poses from a set of two images using features I extracted with BRISK. The feature points match quite brilliantly when I display them and the rotation matrix I get seems to be reasonable. The translation vector, however, is not.
I'm using the simple method of computing the fundamental matrix and the essential matrix, then computing the SVD, as presented in e.g. H&Z:
    Mat fundamental_matrix =
        findFundamentalMat(poi1, poi2, FM_RANSAC, deviation, 0.9, mask);
    Mat essentialMatrix = calibrationMatrix.t() * fundamental_matrix * calibrationMatrix;
    SVD decomp(essentialMatrix, SVD::FULL_UV);

    Mat W = Mat::zeros(3, 3, CV_64F);
    W.at<double>(0,1) = -1;
    W.at<double>(1,0) = 1;
    W.at<double>(2,2) = 1;

    Mat R1 = decomp.u * W * decomp.vt;
    Mat R2 = decomp.u * W.t() * decomp.vt;
    if (determinant(R1) < 0)
        R1 = -1 * R1;
    if (determinant(R2) < 0)
        R2 = -1 * R2;
    Mat trans = decomp.u.col(2);
However, the resulting translation vector is horrible, especially the z coordinate: usually it is near (0,0,1) regardless of the camera movement I performed while recording these images. Sometimes it seems that the first two coordinates might be kind of right, but they're far too small in comparison to the z coordinate (e.g. I moved the camera mainly in +x and the resulting vector is something like (0.2, 0, 0.98)).
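Part of this is expected behaviour: the translation recovered from an essential matrix (decomp.u.col(2) in the code above) is only a unit direction; the scale is fundamentally unrecoverable from two views, so its norm is always 1 and magnitude comparisons are meaningless. A hedged numpy sketch with toy data:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

# Ground truth: camera moved mainly in +x, no rotation (toy data).
R_true = np.eye(3)
t_true = np.array([0.8, 0.1, 0.05])

E = skew(t_true) @ R_true          # essential matrix, defined only up to scale
U, S, Vt = np.linalg.svd(E)
t_est = U[:, 2]                    # decomp.u.col(2), as in the code above

# Only the *direction* of t is recoverable, up to sign; the scale is gone.
dir_true = t_true / np.linalg.norm(t_true)
print(abs(dir_true @ t_est), np.linalg.norm(t_est))   # both ~1.0
```

The remaining sign/rotation ambiguity (four (R, t) candidates) is normally resolved with a cheirality check, which is what cv::recoverPose does.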
Any help would be appreciated.

*Firedragonweb, Sat, 10 Aug 2013 08:37:43 -0500*

## Rotation Matrix calculation using cvRodrigues, calculating real world coordinates from pixel world coordinates
http://answers.opencv.org/question/14969/rotation-matrix-calculation-using-cvrodrigues-calculating-real-world-coordinates-from-pixel-world-coordinates/

Hi, I want to get the real-world (X,Y,Z) coordinates of an object from live capture from a PTZ camera.
I have found the intrinsic parameters using chessboard calibration with 15 chessboard images. I have also found the extrinsic parameters. I know the Z coordinate cannot be found using a single camera.
To do this I need to find the rotation and translation matrix. I have a doubt: for finding the rotation matrix, the cvRodrigues() function will convert a rotation vector to a 3x3 rotation matrix, but here I will have 15 rotation vectors. Which one should I use for finding the rotation matrix?
Also, I want to know: if I happen to pan or tilt my camera from the original calibration position, will I have to recalculate my rotation matrix and translation matrix, or can I use the old ones?

*sachin_rt, Tue, 11 Jun 2013 00:20:46 -0500*

## Unit of pose vectors from solvePnP()
http://answers.opencv.org/question/13225/unit-of-pose-vectors-from-solvepnp/

I would like to ask about the solvePnP() output of rotation and translation vectors. What are their units - are they radians and meters respectively?
Thanks in advance.
*alfa_80, Sun, 12 May 2013 11:42:31 -0500*