OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Transforming global coordinates to camera coordinates
http://answers.opencv.org/question/237933/

I want to get the transformation from the global coordinate system to the coordinate system of the camera image. I have a stationary camera pointing at the ground at an approximate angle of 20 degrees. I have already obtained the camera's intrinsic and distortion parameters.
My current setup is as follows. I have placed my camera at the middle of a chessboard edge (my (0,0) coordinate in the global coordinate system). I measured the distances of the chessboard intersections in mm. I then used cv::findChessboardCorners to find the aforementioned corners in the image, and cv::solvePnP to get rvec and tvec, from which I generated the transformation matrix.
Mat cameraMatrix = (Mat_<float>(3,3) <<
715.18604574311325, 0.0, 319.5,
0.0, 715.18604574311325, 239.5,
0.0, 0.0, 1.0);
Mat distCoeffs = (Mat_<float>(5,1) <<
-0.013535583817766943, 0.10657613007692497, 0.0, 0.0, -1.2272218410276732);
vector<Point2f> pointBuf;
vector<Point3f> boardPoints;
Mat rvec, tvec, R;
bool found;
//...
//code for declaring intersection coordinates (in mm) in the global coordinate system
//...
found = cv::findChessboardCorners(source, size, pointBuf);
if (found == true) {
solvePnP(boardPoints, pointBuf, cameraMatrix, distCoeffs, rvec, tvec, false);
Rodrigues(rvec,R);
R = R.t();
tvec = -R * tvec;
Mat T = cv::Mat::eye(4, 4, R.type());
T( cv::Range(0,3), cv::Range(0,3) ) = R * 1;    // copy R into the top-left 3x3 block
T( cv::Range(0,3), cv::Range(3,4) ) = tvec * 1; // copy tvec into the last column
Am I correct in assuming that if I multiply a 4-by-1 vector of global coordinates with matrix T, for instance
Mat p1 = (Mat_<float>(4, 1) << 100, 200, 0, 1); //the units are mm
I get the corresponding x,y coordinates on the image plane in pixels?
result = T*p1;
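For what it's worth, a rigid transform alone does not land in pixel coordinates; the intrinsic matrix still has to be applied afterwards. A minimal pure-Python sketch of the full world-to-pixel chain, using solvePnP's own world-to-camera convention (the rotation and translation below are made up for illustration, not taken from the question):

```python
def project(K, R, t, X):
    """World -> pixel: x_cam = R @ X + t, then perspective divide through K."""
    xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    return (K[0][0] * xc[0] / xc[2] + K[0][2],
            K[1][1] * xc[1] / xc[2] + K[1][2])

K = [[715.186, 0.0, 319.5], [0.0, 715.186, 239.5], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # made-up pose:
t = [0.0, 0.0, 1000.0]                                   # camera 1000 mm away
print(project(K, R, t, [100.0, 200.0, 0.0]))  # -> about (391.0, 382.5)
```

Note the convention: solvePnP's (R, t) already maps world points into the camera frame, so inverting it first (as the code above does) produces the camera-to-world transform instead.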
The results I get with the current code are wrong; I just don't know if I missed something, got the units wrong, or if my code is just completely wrong.

lipa1242, Tue, 17 Nov 2020 11:41:28 -0600
http://answers.opencv.org/question/237933/

How to get an rvec replacement from quaternion or euler values
http://answers.opencv.org/question/237900/

Hey, I am using an ArUco library but want to replace the `rvec` from `solvePnP` with data from a precise IMU. I understand that `rvec` is a compact Rodrigues notation, with `theta = sqrt(a^2 + b^2 + c^2)` and axis `v = rvec/theta = [a/theta, b/theta, c/theta]`, but cannot find a solution for my problem.
How to convert quaternion or Euler angles to a rotation vector that can replace a `rvec`?
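Not ArUco-specific, but the quaternion direction of the conversion can be sketched in a few lines under the usual convention (vector direction = rotation axis, magnitude = angle in radians); the function name and the unit-quaternion input here are illustrative:

```python
import math

def quat_to_rvec(w, x, y, z):
    """Unit quaternion (w, x, y, z) -> Rodrigues rotation vector."""
    n = math.sqrt(w*w + x*x + y*y + z*z)   # normalize defensively
    w, x, y, z = w/n, x/n, y/n, z/n
    theta = 2.0 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(0.0, 1.0 - w*w))     # sin(theta/2)
    if s < 1e-9:                           # no rotation: axis is arbitrary
        return [0.0, 0.0, 0.0]
    return [theta * x / s, theta * y / s, theta * z / s]

# 90 degrees about z: q = (cos 45deg, 0, 0, sin 45deg)
print(quat_to_rvec(math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4)))
# -> approximately [0, 0, pi/2]
```

For Euler angles, build the rotation matrix first (in whatever axis order your IMU uses) and convert the matrix to an rvec; the result only matches solvePnP's rvec if both are expressed in the same camera/world frame.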
ArkadiuszN, Mon, 16 Nov 2020 09:42:46 -0600
http://answers.opencv.org/question/237900/

ArUco orientation using the function aruco.estimatePoseSingleMarkers()
http://answers.opencv.org/question/215377/

Hi everyone!
I'm trying to program a Python app that determines the position and orientation of an ArUco marker. I calibrated the camera and used *aruco.estimatePoseSingleMarkers*, which returns the translation and rotation vectors.
The translation vector works fine but I don't understand how the rotation vector works. I took some pictures to illustrate my problem with the "roll rotation":
Here the rotation vector is approximately [in degrees]: [180 0 0]
![image description](/upfiles/15626850347225475.png)
Here the rotation vector is approximately [in degrees]: [123 -126 0]
![image description](/upfiles/15626851829885092.png)
And here the rotation vector is approximately [in degrees]: [0 -180 0]
![image description](/upfiles/15626853815019584.png)
And I don't see the logic in these angles. I've tried the other two rotations (pitch and yaw) and they also appear "random". So if you have an explanation I would be very happy :)

lamaa, Tue, 09 Jul 2019 10:28:00 -0500
http://answers.opencv.org/question/215377/

Convert yaw, pitch and roll values to rVec for projectPoints
http://answers.opencv.org/question/214216/

I'm trying to take a set of images and use projectPoints to take a real-world Lat/Lng/Alt and draw markers on the images if the marker has a valid x,y within the image. I have the lat, lng, alt of the image along with the yaw, pitch and roll values from the camera that took the image. My YPR values are in the following format:
- Yaw being the general orientation of the camera when on a horizontal plane: toward north=0, toward east = 90°, south=180°, west=270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being if the camera is tilted left or right when in your hands: +45° = tilted 45° in a clockwise rotation when you grab the camera, thus +90° (and -90°) would be the angle needed for a portrait picture for example, etc.
I have been stuck for a few days on how to take the yaw, pitch and roll values and get a valid rVec value for the projectPoints function.
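The two halves of the problem can be sketched in plain Python: build a rotation matrix from the three angles, then invert Rodrigues to get an rvec. The axis assignments below are purely illustrative; mapping compass yaw, nose pitch, and roll onto the camera's x/y/z axes is exactly the setup-dependent part, so treat this as a template rather than the answer:

```python
import math

def rot(axis, a):
    """Elementary rotation matrix about the named axis (illustrative mapping)."""
    c, s = math.cos(a), math.sin(a)
    return {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
            "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
            "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matrix_to_rvec(R):
    """Inverse Rodrigues (does not handle the theta = pi edge case)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    if theta < 1e-9:
        return [0.0, 0.0, 0.0]
    k = theta / (2.0 * math.sin(theta))
    return [k * (R[2][1] - R[1][2]), k * (R[0][2] - R[2][0]), k * (R[1][0] - R[0][1])]

# e.g. yaw 90deg, pitch 0, roll 0, with yaw mapped to the z axis here:
R = matmul(rot("z", math.pi/2), matmul(rot("y", 0.0), rot("x", 0.0)))
print(matrix_to_rvec(R))  # -> approximately [0, 0, pi/2]
```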
Thanks for any help.

CVPilot, Wed, 12 Jun 2019 14:18:12 -0500
http://answers.opencv.org/question/214216/

Relative rotation between Aruco markers
http://answers.opencv.org/question/198487/

I want to find the relative rotation angles between two ArUco markers, using Python and cv2. I'm referring to my markers as the "test" marker and the "reference" marker.
I have successfully retrieved the pose of the markers using cv2.aruco.estimatePoseSingleMarkers. This gives me a "test_rvec" for the test marker and a "ref_rvec" for the reference marker.
As I understand it, rvec (same format as used by cv2.solvePnP, which I believe aruco uses under the covers) is the rotation of the marker relative to the camera. So, to get the rotation of the test marker relative to the reference marker, I do:
R_ref_to_cam = cv2.Rodrigues(ref_rvec)[0] #reference to camera
R_test_to_cam = cv2.Rodrigues(test_rvec)[0] #test to camera
R_cam_to_ref = np.transpose(R_ref_to_cam) #inverse of reference to camera
R_test_to_ref = np.matmul(R_test_to_cam,R_cam_to_ref) #test to reference
Then I use cv2.decomposeProjectionMatrix to compute the euler angles of the resulting matrix (R_test_to_ref).
In testing, with both markers flat on my desk and with the same orientation, with the camera pointed straight down, I get X=0, Y=0, Z=0 as expected, since the relative orientation between the markers is zero.
However, if I rotate one marker 90 degrees in the "z" direction (still keeping it flat on my desk), I get X=30, Y=30, Z=90. I would expect to see two of the axes report as 90 degrees and the third (rotational) axis report 0 degrees. What am I doing wrong?
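One thing worth sanity-checking with plain 3x3 matrices (all rotations below are made up): if v_cam = R_test_to_cam @ v_test and v_ref = R_cam_to_ref @ v_cam, then the chain is R_test_to_ref = R_cam_to_ref @ R_test_to_cam, with the camera-to-reference factor on the left:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# ground truth: test marker rotated 90 deg about x relative to the reference
R_true = rot_x(math.radians(90))
R_ref_to_cam = rot_z(math.radians(30))             # made-up camera attitude
R_test_to_cam = matmul(R_ref_to_cam, R_true)
R_cam_to_ref = transpose(R_ref_to_cam)
R_recovered = matmul(R_cam_to_ref, R_test_to_cam)  # recovers R_true
```

Swapping the factors gives Rz(30) Rx(90) Rz(-30) instead, which is a different rotation, so the multiplication order is not interchangeable here.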
EDIT: it seems that the 30 degrees I was seeing was the angle of the camera relative to my desk (at least it was non-zero). Still not sure why this doesn't "cancel out" with what I am doing with the rvecs...

Polywogon, Fri, 31 Aug 2018 12:22:05 -0500
http://answers.opencv.org/question/198487/

Rotation vector interpretation
http://answers.opencv.org/question/197981/

I use the OpenCV cv2.solvePnP() function to calculate rotation and translation vectors. Rotation is returned as rvec (a vector with 3 DOF). I would like to ask for help with interpreting the rvec.
As far as I understand rvec = the rotation vector representation:
- the rotation vector is the axis of the rotation
- the length of the rotation vector is the rotation angle θ in radians (around that axis)
Rvec returned by solvePnP:
rvec =
[[-1.5147142 ]
[ 0.11365167]
[ 0.10590861]]
Then:
angle_around_rvec = sqrt((-1.5147142)^2 + 0.11365167^2 + 0.10590861^2) [rad] = 1.52266 [rad] = 1.52266*180/π [deg] ≈ 87.24 [deg]
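The same decomposition takes only a few lines of plain Python, using the rvec values quoted above:

```python
import math

rvec = [-1.5147142, 0.11365167, 0.10590861]
theta = math.sqrt(sum(c * c for c in rvec))   # rotation angle in radians
axis = [c / theta for c in rvec]              # unit rotation axis
print(theta, math.degrees(theta))             # -> about 1.5227 rad, 87.24 deg
```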
**1. Does 3 rvec components correspond to world coordinates? Or what are these directions?**
**2. Can I interpret the vector components as separate rotation angles in radians around components directions?**
My rvec components interpretation:
angle_around_X = -1.5147142 [rad] = -1.5147*180/3.14 [deg] = -86.83 [deg]
angle_around_Y = 0.11365167 [rad] = 0.11365167*180/3.14 [deg] = 6.52 [deg]
angle_around_Z = 0.10590861 [rad] = 0.10590861*180/3.14 [deg] = 6.07 [deg]
My usecase:
I have coordinates of four image points. I know the coordinates of these points in the real world. I know the camera intrinsic matrix. I use PnP3 to get the rotation and translation vector. From the rotation matrix, I would like to find out the angles around the fixed global/world axes X, Y, Z. I am NOT interested in Euler angles. I want to find out how an object is being rotated around the fixed world coordinates (not its own coordinate system).
I would really appreciate your help. I feel lost in rotation.
Thank you in advance.

dziadyge, Thu, 23 Aug 2018 13:55:50 -0500
http://answers.opencv.org/question/197981/

Aruco marker pose tracking method using fixed camera
http://answers.opencv.org/question/193101/

Here's some context: I'm trying to track an ArUco marker on a robot using a camera placed at an unknown angle at a certain height. I have an ArUco marker placed on the floor, which I use as my world frame and to calibrate the camera. I need to get the position (x, y, z, roll, pitch, yaw) wrt the world frame. Here's what I do:
- I use estimatePoseSingleMarkers() to track the world marker and I store the rvecs/tvecs somewhere
- I run my main code which uses estimatePoseSingleMarkers() to detect the robot's rvecs/tvecs
- I transform the world rvecs/tvecs to the camera frame and then compose that with the robot's rotation and translation vectors. I have tried and tested both composeRT() and Rodrigues()
I am definitely sure I have the aruco detection part all correct. I am able to detect each Aruco marker and able to draw their axes, so there's no problem there. The detection is very consistent. I have also gone through this 'http://answers.opencv.org/question/122301/aruco-marker-tracking-with-fixed-camera/' and my code is very much in line with the answer there. However, I have some questions:
**1. Is storing the rvecs/tvecs of the world marker the right way to go? How accurate are the stored values? Should I rather use a Charuco board to perform this calibration?**
**2. When I run the code to compute the pose (x, y, z, roll, pitch, yaw), I get highly inaccurate results. The pose values are either constant or rapidly switching to a ridiculous value like 2.3345e-245 or something. I see the values change when I move my robot, but the numbers don't really make sense.**
**I have used the final composed rotation matrix (like this: http://planning.cs.uiuc.edu/node103.html) and the translation vector directly. I have triple checked my code but I cannot find anything wrong. Is it my strategy? Or do you think I have maybe computed the rvecs, tvecs or the pose incorrectly?**
Any help will be wonderful!
P.S. I am not sure if I can share my code here (its part of a larger project), but if you think you can help, I can definitely share some snippets over email.
kmath, Tue, 05 Jun 2018 15:26:02 -0500
http://answers.opencv.org/question/193101/

composeRT input/output question
http://answers.opencv.org/question/186217/

I have two frame transformations, one from frame 0 to frame 1 and one from frame 1 to frame 2, and I would like to concatenate these into the transformation from frame 0 to frame 2. I'm computing the pose of a moving camera: the first transformation (0 to 1) represents the previous pose of the camera (the transformation from its initial pose), and the 1-to-2 transformation is the newest change of pose, represented by the tvec and rvec obtained from solvePnPRansac.
However, I cannot just try out different inputs in my code and see if the output seems correct, since my system currently contains a lot of noise, so I would like to have the math check out before I implement it in my application. But when I try to use the formulas given in the documentation with different rvecs and tvecs, I can't get the outputs (rvec3/tvec3) I want. These are the formulas:
![image description](/upfiles/15204294873689042.png)
I've tried with the following rvecs/tvecs:
- rvec1: Rotation *from* frame 1 *to* frame 0
- tvec1: vector *from* the origin of frame 0 *to* the origin of frame 1, given in *frame 0 coordinates*
- rvec2: Rotation *from* frame 2 *to* frame 1
- tvec2: vector *from* the origin of frame 1 *to* the origin of frame 2, given in *frame 1 coordinates*
I want to end up with:
- rvec3: Rotation from frame 0 to frame 2 (or inversed, doesn't matter)
- tvec3: vector *from* frame 0 *to* frame 2 given in *frame 0 coordinates* (or negated)
However, with these vectors, I can't get the formula in the documentation to make sense. The rvec3-formula makes sense, and with rvec1/rvec2 I get rvec3=(rvec2*rvec1)⁻¹ to equal the rotation from frame 2 to frame 0. However, the computation of tvec3 doesn't add up:
The formula says tvec3 = rvec2 * tvec1 + tvec2, but rvec2 * tvec1 doesn't make sense with my vectors. I mean, it says to rotate a vector given in frame 0 coordinates from frame 2 to frame 1. A vector given in frame 0 coordinates needs to be multiplied with a rotation matrix representing the rotation *from* frame 0 to some other frame, but that is not the case here. And it hasn't made sense with any other vectors I've tried, for that matter.
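The documented formulas drop straight out of matrix algebra once both transforms use the same convention: if each (R, t) maps points from the earlier frame into the later one, x1 = R1 x0 + t1 and x2 = R2 x1 + t2 give x2 = (R2 R1) x0 + (R2 t1 + t2), i.e. R3 = R2 R1 and t3 = R2 t1 + t2. If tvec1 is instead a frame origin expressed in frame-0 coordinates, as in the bullet list above, it has to be converted to this convention first. A numeric check with made-up transforms:

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply_rt(R, t, x):
    """x' = R @ x + t"""
    return [sum(R[i][j] * x[j] for j in range(3)) + t[i] for i in range(3)]

# two made-up transforms, both in the "x_next = R x_prev + t" convention
R1, t1 = rot_z(math.radians(30)), [1.0, 0.0, 0.0]
R2, t2 = rot_z(math.radians(40)), [0.0, 2.0, 0.0]

x0 = [0.5, -0.3, 1.0]
two_steps = apply_rt(R2, t2, apply_rt(R1, t1, x0))

R3 = matmul(R2, R1)            # composed rotation
t3 = apply_rt(R2, t2, t1)      # t3 = R2 @ t1 + t2
one_step = apply_rt(R3, t3, x0)  # matches two_steps
```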
Could someone help me with these calculations? Thanks!
bendikiv, Wed, 07 Mar 2018 07:38:10 -0600
http://answers.opencv.org/question/186217/

projectPoints tvec and rvec
http://answers.opencv.org/question/178930/

I have a problem understanding the two parameters of this function. I thought they were the translation and rotation of the camera in the global coordinate system, and that object points are also given in this global coordinate system. But a simple test proves me wrong:
import cv2
import numpy as np
# principal point (60, 60)
cammat = np.array([[30, 0, 60], [0, 30, 60], [0, 0, 1]], dtype=np.float32)
distortion = np.array([0, 0, 0, 0, 0], dtype=np.float32)
tvec = np.array([0, 0, 0], dtype=np.float32)
rvec = np.array([0, 0, 0], dtype=np.float32)
point_3d = np.array([0.1, 0.1, 1], dtype=np.float32)
# [63.00, 63.00]
p1 = cv2.projectPoints(np.array([[point_3d]]), rvec, tvec, cammat, distortion)
# move camera a bit closer to the 3d point, expecting the point on the 2d plane
# to be a bit further away from the principal point
tvec = np.array([0, 0, 0.1], dtype=np.float32)
# [62.72, 62.72]
p2 = cv2.projectPoints(np.array([[point_3d]]), rvec, tvec, cammat, distortion)
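Both results are reproducible with a bare pinhole model in plain Python once tvec is read as part of the world-to-camera transform (x_cam = R x + t) rather than as the camera's position in the world:

```python
def project(K, t, X):
    """Pinhole projection with identity rotation: x_cam = X + t, then apply K."""
    xc = [X[i] + t[i] for i in range(3)]
    return (K[0][0] * xc[0] / xc[2] + K[0][2],
            K[1][1] * xc[1] / xc[2] + K[1][2])

K = [[30.0, 0.0, 60.0], [0.0, 30.0, 60.0], [0.0, 0.0, 1.0]]
X = [0.1, 0.1, 1.0]
print(project(K, [0.0, 0.0, 0.0], X))   # -> (63.0, 63.0)
print(project(K, [0.0, 0.0, 0.1], X))   # -> about (62.727, 62.727)
# tvec = (0, 0, 0.1) moves the POINT to z = 1.1 in camera coordinates,
# i.e. farther from the camera, so the projection moves toward the
# principal point; moving the camera closer would need tvec = (0, 0, -0.1).
```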
Where am I wrong?

gareins, Wed, 22 Nov 2017 06:48:31 -0600
http://answers.opencv.org/question/178930/

SolvePnP rvec suddenly begins to give opposite/unexpected values, is this observed before?
http://answers.opencv.org/question/134413/

I want to smooth the motion of a head by taking a moving average of the last 3 rvec and tvec vectors. But suddenly unexpected rvec values appear; all values become negative:
3.15671 0.096776 0.32789
3.18738 0.0878376 0.336211
3.15005 0.0896022 0.337018
-3.06086 -0.0907339 -0.339971
-3.07529 -0.0846121 -0.343904
-3.08211 -0.0861078 -0.349439
.....
-3.03756 0.257243 -0.510188
-3.00875 0.241385 -0.505155
-3.0265 0.255717 -0.504729
3.13553 -0.306111 0.507516
3.12246 -0.339343 0.507442
3.11797 -0.409513 0.505041
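For what it's worth, this pattern is consistent with the axis-angle ambiguity rather than corruption: rvec and -rvec * (2*pi - theta)/theta encode the same rotation, and near theta ≈ pi (as in the values above) a solver can flip between the two representations from frame to frame. A check with a plain-Python Rodrigues implementation and a made-up rvec:

```python
import math

def rvec_to_matrix(r):
    """Rodrigues formula: rotation vector -> 3x3 rotation matrix."""
    th = math.sqrt(sum(c * c for c in r))
    if th < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / th for c in r)
    c, s = math.cos(th), math.sin(th)
    v = 1.0 - c
    return [[c + kx*kx*v,    kx*ky*v - kz*s, kx*kz*v + ky*s],
            [ky*kx*v + kz*s, c + ky*ky*v,    ky*kz*v - kx*s],
            [kz*kx*v - ky*s, kz*ky*v + kx*s, c + kz*kz*v]]

r = [3.0, 0.1, 0.3]                                  # made-up rvec, theta near pi
th = math.sqrt(sum(c * c for c in r))
r_flipped = [-c * (2.0 * math.pi - th) / th for c in r]
# rvec_to_matrix(r) and rvec_to_matrix(r_flipped) are the same matrix,
# so averaging raw rvec components across such a flip produces garbage.
```

A common workaround is to convert to rotation matrices or quaternions before smoothing, or to normalize each rvec to a canonical half-space first.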
Oguz, Thu, 16 Mar 2017 18:08:15 -0500
http://answers.opencv.org/question/134413/

Need explanation about rvecs returned from SolvePnP
http://answers.opencv.org/question/134017/

I am using ArUco for pose estimation and I want to get the global world coordinates of my camera using, say, a single detected ArUco marker. For this, I need to know the rotation of the camera wrt the marker along the y axis (the upward/downward axis). The output of ArUco/solvePnP gives me rvecs, which contain the rotation vector. Now, I really don't understand how this rotation vector represents the angle of rotation. I can convert it to a rotation matrix using Rodrigues, but I still don't get the actual roll, yaw and pitch angles (the rotations along the x, y and z axes) which I really need.
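A sketch of that conversion in plain Python: rotation vector to matrix (Rodrigues), then matrix to per-axis angles. The extraction below assumes the common R = Rz(yaw) Ry(pitch) Rx(roll) order; other orders give different numbers for the same matrix, so the convention has to match your application:

```python
import math

def rvec_to_matrix(r):
    """Rodrigues: rotation vector -> 3x3 rotation matrix."""
    th = math.sqrt(sum(c * c for c in r))
    if th < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / th for c in r)
    c, s = math.cos(th), math.sin(th)
    v = 1.0 - c
    return [[c + kx*kx*v,    kx*ky*v - kz*s, kx*kz*v + ky*s],
            [ky*kx*v + kz*s, c + ky*ky*v,    ky*kz*v - kx*s],
            [kz*kx*v - ky*s, kz*ky*v + kx*s, c + kz*kz*v]]

def matrix_to_euler_zyx(R):
    """Angles for R = Rz(yaw) Ry(pitch) Rx(roll); gimbal lock at pitch = +-90 deg not handled."""
    pitch = -math.asin(max(-1.0, min(1.0, R[2][0])))
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    return roll, pitch, yaw

print(matrix_to_euler_zyx(rvec_to_matrix([0.0, 0.0, math.pi / 2])))
# -> roll 0, pitch 0, yaw pi/2
```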
So, can anyone please explain how to manipulate rotation using the rotation vector in rvecs, and also how to get simple rotation angles along the three axes from them? Thanks

b2meer, Tue, 14 Mar 2017 08:48:04 -0500
http://answers.opencv.org/question/134017/

"_rvec" and "_tvec" for solvePnPRansac() are wrong.
http://answers.opencv.org/question/93865/

Environment:
OpenCV 3.1.0
OS X 10.11.4
I am using "solvePnPRansac()" to get "_rvec" and "_tvec".
However, I think the values are wrong, because when I tried the same steps with OpenCV 2.4.12 there was a difference between the two versions.
The code calls "solvePnP" with the argument "rvec", but then assigns "_local_model.col(0)" to "_rvec" ("tvec" likewise).
Code:
opencv-3.1.0/modules/calib3d/src/solvepnp.cpp
(https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/solvepnp.cpp)
result = solvePnP(opoints_inliers, ipoints_inliers, cameraMatrix, distCoeffs, rvec, tvec, false, flags == SOLVEPNP_P3P ? SOLVEPNP_EPNP : flags) ? 1 : -1;
_rvec.assign(_local_model.col(0)); // output rotation vector
_tvec.assign(_local_model.col(1)); // output translation vector
I think this is right.
Code:
result = solvePnP(opoints_inliers, ipoints_inliers, cameraMatrix, distCoeffs, _local_model.col(0), _local_model.col(1), false, flags == SOLVEPNP_P3P ? SOLVEPNP_EPNP : flags) ? 1 : -1;
What do you think?
minoken, Wed, 27 Apr 2016 06:53:24 -0500
http://answers.opencv.org/question/93865/

solvePnP camera coordinates
http://answers.opencv.org/question/64410/

I am running solvePnPRansac using arbitrary points, found with the FAST detector and triangulated to 3D. I get my rvec and tvec Mats back, which are in object-space coordinates. When I print them, though, they all stay at around zero, no matter how I move the camera.
I flip the matrix to get the camera-centric coordinates, and see a similar thing. The camera pose hovers between -0.01 and 0.1, no matter the motion.
Is this possibly because the coordinates are from a random one of the sampled points, and it changes?
How could I get the actual camera world coordinates updating as I move it?
Thanks!

stillNovice, Thu, 18 Jun 2015 08:34:41 -0500
http://answers.opencv.org/question/64410/

solvePnP object-to-camera pose
http://answers.opencv.org/question/64315/

I have read enough about this to know that it is fairly straightforward, but I can't find an example of how to actually do it.
I use solvePnP and get the rvec and tvec matrices back: the rotation and translation, respectively, as a 3x3 matrix and a 3x1 vector.
As I understand it, this is the OBJECT transformation matrix, using the camera sensor as the zero point and giving the coordinates of one of the points.
My question is, how exactly do I get the camera pose from this information? I believe I need to invert the matrix?
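The inversion being asked about can be sketched directly: with solvePnP's convention x_cam = R x_world + t, the camera center in world coordinates is C = -R^T t, and the camera's orientation in the world is R^T. The rotation and translation below are made up for illustration:

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def cam_position(R, t):
    """Camera center in world coordinates: C = -R^T @ t."""
    return [-sum(R[j][i] * t[j] for j in range(3)) for i in range(3)]

R = rot_z(math.radians(90))   # made-up solvePnP rotation (world -> camera)
t = [1.0, 2.0, 5.0]           # made-up solvePnP translation
print(cam_position(R, t))     # -> about [-2, 1, -5]
```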
stillNovice, Wed, 17 Jun 2015 02:57:53 -0500
http://answers.opencv.org/question/64315/