OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers. Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Tue, 12 Jun 2018 22:22:36 -0500

**Combining pose estimates from multiple sources**
http://answers.opencv.org/question/193677/

Hi,
I am receiving 6 DOF pose estimates of an object from 3 cameras. I then choose a **frame common to all cameras**, i.e., a world frame, and transform the pose estimate from each camera to obtain the pose estimate of the object in the world frame.
The pose estimates of the object in the world frame are of the form `(x, y, z, roll, pitch, yaw)`.
I have 3 measurements like this.
**What is the best way to combine them?** I must emphasise that ***I am not adding the Euler angles***, as I'm aware they are not vectors. My purpose is to obtain a more robust reading, since I have multiple cameras, by taking the average of each of roll, pitch and yaw.
Can I simply take the average of the 3 values?
Can I average the roll, pitch and yaw values from each pose estimate ?
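Averaging each Euler angle component can misbehave near the ±180° wrap-around; a common alternative is to average the rotations as unit quaternions (the eigenvector-of-the-accumulated-outer-products method) and the translations component-wise. A hedged NumPy sketch, assuming the angles are in radians and the convention is R = Rz(yaw) * Ry(pitch) * Rx(roll); the pose values are made-up placeholders:

```python
import numpy as np

def euler_zyx_to_quat(roll, pitch, yaw):
    """Quaternion (w, x, y, z) for R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    return np.array([
        cy * cp * cr + sy * sp * sr,
        cy * cp * sr - sy * sp * cr,
        cy * sp * cr + sy * cp * sr,
        sy * cp * cr - cy * sp * sr,
    ])

def average_quaternions(quats):
    """Average rotation: principal eigenvector of the sum of q q^T."""
    A = sum(np.outer(q, q) for q in quats)
    w, v = np.linalg.eigh(A)
    return v[:, -1]  # eigenvector for the largest eigenvalue (sign is arbitrary)

poses = [  # hypothetical (x, y, z, roll, pitch, yaw) from the 3 cameras, radians
    (0.10, 0.20, 1.00, 0.01, 0.02, 0.50),
    (0.11, 0.19, 1.02, 0.02, 0.01, 0.52),
    (0.09, 0.21, 0.98, 0.00, 0.03, 0.48),
]
t_avg = np.mean([p[:3] for p in poses], axis=0)  # translations: plain mean is fine
q_avg = average_quaternions([euler_zyx_to_quat(*p[3:]) for p in poses])
```

For nearly identical readings like these, a plain component-wise average of the Euler angles gives almost the same answer; the quaternion route only really matters near singularities and the ±180° wrap.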
Thanks!

malharjajoo, Tue, 12 Jun 2018 22:22:36 -0500
http://answers.opencv.org/question/193677/

**Obtaining Euler angles from Rodrigues Rotation Matrix**
http://answers.opencv.org/question/189455/

Hi,
I wish to extract Euler angles from the **rvec** output parameter of [cv::solvePnp](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool%20solvePnP(InputArray%20objectPoints,%20InputArray%20imagePoints,%20InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20OutputArray%20rvec,%20OutputArray%20tvec,%20bool%20useExtrinsicGuess,%20int%20flags)).
I understand that 3x1 rvec needs to be passed on to the [Rodrigues](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)) function to obtain the 3x3 rotation matrix.
But to obtain Euler angles, I need to use a fixed convention (see [this](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix), for example Z-Y-X, etc.), which requires the rotation matrix to be expressed as a **product of elementary rotation matrices in a fixed order**,
**eg: for Z-Y-X convention,
R_resultant = Rz * Ry * Rx**.
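For what it's worth, rvec itself is an axis-angle vector with no Euler convention of its own; a convention only enters when you decompose the resulting matrix. A hedged NumPy sketch that applies a Z-Y-X decomposition to the Rodrigues output (a pure-NumPy stand-in for cv2.Rodrigues is used so the snippet is self-contained; the rvec value is made up):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> rotation matrix (what cv2.Rodrigues computes)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def euler_zyx_from_matrix(R):
    """(roll, pitch, yaw) such that R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    Assumes pitch is away from the +/-90 degree singularity."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

rvec = np.array([0.1, -0.2, 0.3])  # hypothetical solvePnP output
R = rodrigues(rvec)
roll, pitch, yaw = euler_zyx_from_matrix(R)
```

Rebuilding Rz(yaw) @ Ry(pitch) @ Rx(roll) from the extracted angles reproduces R, which is how you can verify whichever convention you pick.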
I have looked into the source code [here](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L251) for the Rodrigues function, but I don't quite understand how the matrix is formed. Hence:
**My Question**: What is the **convention** (Z-Y-X, X-Y-Z, etc.) of the formation of the 3x3 rotation matrix obtained from Rodrigues? Without this information I cannot obtain Euler angles. I have seen [this tutorial](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) for real time pose estimation, but unfortunately I don't understand the assumption for the convention (and hence I had made another question for that - [here](http://answers.opencv.org/question/189414/issue-in-opencv-sample-tutorial-for-real-time-pose-estimation/?comment=189453#post-id-189453)).

malharjajoo, Mon, 16 Apr 2018 10:54:16 -0500
http://answers.opencv.org/question/189455/

**Issue in OpenCV sample tutorial for real time pose estimation**
http://answers.opencv.org/question/189414/

Hi,
I was trying out the real time pose estimation tutorial from OpenCV ([this tutorial](https://docs.opencv.org/3.3.0/dc/d2c/tutorial_real_time_pose.html)), and while looking into the source code, I may have found an inconsistency in
the functions for converting between a rotation matrix and Euler angles (please see [euler2rot](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L229) and [rot2euler](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) in the C++ source code for the tutorial).
I am aware that there are **six types** of [Tait-Bryan](https://en.wikipedia.org/wiki/Euler_angles) angles (they are colloquially referred to as **Euler angles**).
**My issue:**
- The above source code functions **do not** adhere to any of the six types. I checked a few sources to verify this: the [section on the Wikipedia page](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix), and a simple Matlab script using symbolic variables (please see below, at the end of the question).
- **The source code in the functions seems to correspond to the Y-Z-X Tait-Bryan angles, but with the pitch and yaw interchanged.** Why is this the case? (Does this have something to do with the fact that the camera's coordinate axes have z facing forward, y facing downward and x facing right?)
- And finally, I think Z-Y-X Tait-Bryan angles are the industry standard in robotics, so is there any particular reason for using Y-Z-X (with the pitch and yaw interchanged, as noticed above)?
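The second bullet can be checked numerically: compose R = Ry(pitch) * Rz(yaw) * Rx(roll) and apply the tutorial's rot2euler formulas; x recovers roll, but the y slot recovers the Z angle and the z slot the Y angle, i.e. Y-Z-X with pitch and yaw swapped. A small NumPy check (a sketch independent of the OpenCV code; the angle values are arbitrary):

```python
import numpy as np

roll, pitch, yaw = 0.1, 0.3, 0.2  # arbitrary test angles, radians

Rx = np.array([[1, 0, 0],
               [0, np.cos(roll), -np.sin(roll)],
               [0, np.sin(roll), np.cos(roll)]])
Ry = np.array([[np.cos(pitch), 0, np.sin(pitch)],
               [0, 1, 0],
               [-np.sin(pitch), 0, np.cos(pitch)]])
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
               [np.sin(yaw), np.cos(yaw), 0],
               [0, 0, 1]])
R = Ry @ Rz @ Rx  # the Y-Z-X composition from the Matlab script below

# The tutorial's rot2euler formulas (non-singular branch):
x = np.arctan2(-R[1, 2], R[1, 1])
y = np.arcsin(R[1, 0])
z = np.arctan2(-R[2, 0], R[0, 0])
# x comes back as roll, but y holds the Z angle and z holds the Y angle
```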
**================= Matlab script below for reference ============**
syms roll
syms pitch
syms yaw
Rx = [1 0 0; 0 cos(roll) -sin(roll); 0 sin(roll) cos(roll)];
Ry = [cos(pitch) 0 sin(pitch); 0 1 0; -sin(pitch) 0 cos(pitch)];
Rz = [cos(yaw) -sin(yaw) 0; sin(yaw) cos(yaw) 0; 0 0 1];
R_Y_Z_X = Ry * Rz * Rx
**Output**:
R_Y_Z_X =
[ cos(pitch)*cos(yaw), sin(pitch)*sin(roll) - cos(pitch)*cos(roll)*sin(yaw), cos(roll)*sin(pitch) + cos(pitch)*sin(roll)*sin(yaw)]
[ sin(yaw), cos(roll)*cos(yaw), -cos(yaw)*sin(roll)]
[ -cos(yaw)*sin(pitch), cos(pitch)*sin(roll) + cos(roll)*sin(pitch)*sin(yaw), cos(pitch)*cos(roll) - sin(pitch)*sin(roll)*sin(yaw)]

malharjajoo, Sun, 15 Apr 2018 19:31:39 -0500
http://answers.opencv.org/question/189414/

**How can I get the accuracy between two angles (euler or other)?**
http://answers.opencv.org/question/185672/

Hi Together,
I have a board of markers and detect their angles with respect to the camera. I know that one marker should be rotated relative to the other by 5° (or another already known angle) about the z-axis. Due to the camera-marker "relationship" there is always a "flip offset" of 180°-X (X appears because I did not capture the pictures perpendicular to the board). Now I get, for instance, the angles (Euler ZYX):
A -178.155774553622°; -1.81510372911041°; 5.46620496345042° (rotated marker 5°)
B 175.347721071838°; -1.19249002241927°; -0.334586900200200° (reference "zero" marker 0°)
C -6.49650437453965 °; -0.622613706691140°; 5.80079186365062° (the difference between marker 5° and marker 0°)
D 0°; 0°; 5° (the difference it should be)
My problem is that depending on the convention (XYZ/ZYX/ZXZ and so on) there are always different angles. I know it should be like that, but I don't know how to calculate the difference in a proper way, so that I can compare what the real difference over each axis is.
Is there any way to compare the angles in a better way, maybe not in Euler angles but in a way to say "1° offset"?
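One convention-free comparison is the relative rotation R_rel = R_a * R_b^T: its rotation angle, arccos((trace(R_rel) - 1) / 2), is a single "degrees of offset" number, and decomposing R_rel itself (rather than subtracting Euler angles) gives the per-axis difference. A NumPy sketch, assuming the three numbers above are yaw; pitch; roll of a ZYX composition in degrees (that ordering is my assumption, not stated in the question):

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in degrees."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    return Rz @ Ry @ Rx

def angular_offset_deg(Ra, Rb):
    """Geodesic angle between two orientations, in degrees."""
    R_rel = Ra @ Rb.T
    c = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Marker A (rotated ~5 deg) vs. marker B (reference), values from the question
Ra = rot_zyx(-178.155774553622, -1.81510372911041, 5.46620496345042)
Rb = rot_zyx(175.347721071838, -1.19249002241927, -0.334586900200200)
offset = angular_offset_deg(Ra, Rb)  # one number: total angular difference
```

The single offset number is independent of which Euler convention you later use to display R_rel, which is exactly what makes it suitable for accuracy comparisons.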
Thank you very much
Sarah

sarah1802, Wed, 28 Feb 2018 04:57:04 -0600
http://answers.opencv.org/question/185672/

**aruco giving strange rotations**
http://answers.opencv.org/question/143072/

Hi, I am tracking ArUco markers in an OpenCV application.
I am using the left frame of a calibrated stereo camera, so the image is undistorted and rectified. As such, I pass zeros for the distortion coefficients.
I have the camera pointing down at the floor, and measure the angle with an IMU. The result is 37 degrees in pitch.
I have a marker set up, on the floor, as straight as I can get it in front of the camera.
When I print the rvecs value returned from the marker, I would expect to see something like:
Yaw 0, Roll 0, Pitch -37
but what I actually see (after converting with * 180 / M_PI) is:
[131.865, -0.295742, 1.04376]
The rotation is only one axis, which looks right, but the value is a long way off from what it should be. Am I missing something? Or is this as accurate as Aruco gets with a single marker?
(the axis displayed in frame looks correct, but the value returned does not)
As a bonus question: is it possible to return quaternions from ArUco tracking?
Thanks!
EDIT:
I now have:

    cv::Mat expected;
    cv::Rodrigues(rvecs[i], expected);
    Eigen::Quaterniond mOrientationEigen = toQuaternionOg(expected);

    Eigen::Quaterniond toQuaternionOg(const cv::Mat &M)
    {
        Eigen::Matrix<double, 3, 3> eigMat = toMatrix3d(M); // toMatrix3d: my cv::Mat -> Eigen conversion helper
        Eigen::Quaterniond q(eigMat);
        return q;
    }
But the returned quaternions are garbage:
`3.89581e-18, 1.20234e+17, 3.03257e-18, 6.61337e+09`
Where am I going wrong here?
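For the bonus question: ArUco itself only returns rvec/tvec, but an rvec converts to a quaternion in a few lines, since it is an axis-angle vector. A hedged NumPy sketch (done directly from axis-angle, without going through the rotation matrix; the sample rvec is made up):

```python
import numpy as np

def rvec_to_quaternion(rvec):
    """Axis-angle rotation vector (as returned by ArUco) -> quaternion (w, x, y, z)."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)      # rotation angle in radians
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
    axis = rvec / theta
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

# hypothetical marker rvec, radians (mostly a rotation about the x axis)
q = rvec_to_quaternion([131.865 * np.pi / 180, -0.00516, 0.01822])
```

If the quaternion comes out as garbage through an Eigen conversion, a common cause is a type mismatch (e.g. reading a CV_64F matrix as float), so checking the intermediate 3x3 matrix values is a good first step.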
antithing, Sat, 22 Apr 2017 06:38:04 -0500
http://answers.opencv.org/question/143072/

**Rotation matrix to euler angles with opencv c++ correct value wrong place**
http://answers.opencv.org/question/139472/

I am working on a project which involves ArUco markers and OpenCV. I am quite far into the project. I can read the rotation vectors and convert them to a rotation matrix using Rodrigues() from OpenCV.
This is an example of a rotation matrix I get:
[0,1,0;
1,0,0;
0,0,-1]
I use the following code.
    Mat m33(3, 3, CV_64F);
    Mat measured_eulers(3, 1, CV_64F);
    Rodrigues(rotationVectors, m33);
    measured_eulers = rot2euler(m33);
    Degree_euler = measured_eulers * 180 / CV_PI;
I use the predefined rot2euler to convert from the rotation matrix to Euler angles, and I convert the resulting radians to degrees.
rot2euler looks like the following.
    Mat rot2euler(const Mat & rotationMatrix)
    {
        Mat euler(3, 1, CV_64F);

        double m00 = rotationMatrix.at<double>(0, 0);
        double m02 = rotationMatrix.at<double>(0, 2);
        double m10 = rotationMatrix.at<double>(1, 0);
        double m11 = rotationMatrix.at<double>(1, 1);
        double m12 = rotationMatrix.at<double>(1, 2);
        double m20 = rotationMatrix.at<double>(2, 0);
        double m22 = rotationMatrix.at<double>(2, 2);

        double x, y, z;

        // Assuming the angles are in radians.
        if (m10 > 0.998) { // singularity at north pole
            x = 0;
            y = CV_PI / 2;
            z = atan2(m02, m22);
        }
        else if (m10 < -0.998) { // singularity at south pole
            x = 0;
            y = -CV_PI / 2;
            z = atan2(m02, m22);
        }
        else
        {
            x = atan2(-m12, m11);
            y = asin(m10);
            z = atan2(-m20, m00);
        }

        euler.at<double>(0) = x;
        euler.at<double>(1) = y;
        euler.at<double>(2) = z;

        return euler;
    }
If I use the rotation matrix I gave as an example, I get the following Euler angles:
[0; 90; -180]
But I am supposed to get the following:
[-180; 0; 90]
When I use this tool:
[danceswithcode.net/engineeringnotes/rotations_in_3d/demo3D/rotations_in_3d_tool.html]
you can see that [0; 90; -180] doesn't match the rotation matrix, but [-180; 0; 90] does. (I am aware of the fact that the tool works with ZYX coordinates.)
So the problem is I get the correct values but in a wrong order.
Another problem is that this isn't always the case. For example, the rotation matrix:
[1,0,0;
0,-1,0;
0,0,-1]
Provides me the correct euler angles.
If someone knows a solution to the problem, or can provide an explanation of how the rot2euler function works exactly, it will be highly appreciated.
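As a sanity check of what rot2euler does with the sample matrix: m10 = 1 exceeds the 0.998 threshold, so the "north pole" singularity branch runs, which forces x to 0 and puts the remaining rotation into z. A Python re-run of the same formulas (note +180° and -180° describe the same angle here):

```python
import numpy as np

def rot2euler(R):
    """Python port of the question's rot2euler; returns (x, y, z) in radians."""
    m00, m02 = R[0, 0], R[0, 2]
    m10, m11, m12 = R[1, 0], R[1, 1], R[1, 2]
    m20, m22 = R[2, 0], R[2, 2]
    if m10 > 0.998:        # singularity at north pole
        return 0.0, np.pi / 2, np.arctan2(m02, m22)
    if m10 < -0.998:       # singularity at south pole
        return 0.0, -np.pi / 2, np.arctan2(m02, m22)
    return (np.arctan2(-m12, m11), np.arcsin(m10), np.arctan2(-m20, m00))

R = np.array([[0, 1, 0], [1, 0, 0], [0, 0, -1]], dtype=float)
angles_deg = np.degrees(rot2euler(R))
```

The second example matrix has m10 = 0, so it takes the non-singular branch, which is why it decomposes "correctly" while the first one does not; the singular branches simply cannot preserve the same slot ordering.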
Kind Regards
Brent Convens
Brent Convens, Wed, 12 Apr 2017 05:03:32 -0500
http://answers.opencv.org/question/139472/

**How can I get rotation-vector from Euler-angles?**
http://answers.opencv.org/question/88531/

If I have 3 Euler angles, then how can I get a rotation vector to use for camera rotation in [OpenCV-viz3d](http://docs.opencv.org/2.4/modules/viz/doc/viz3d.html#)?
I.e., how can I get `Vec3d( X, Y, Z)` from the [Euler angles](https://en.wikipedia.org/wiki/Euler_angles) (alpha, beta, gamma) to use in this code?
    viz::Viz3d viz_3d_window("Viz window");
    cv::Affine3d pose = viz_3d_window.getViewerPose();
    viz_3d_window.setViewerPose( pose.rotate( Vec3d( X, Y, Z) ) );
cv::Affine3d::rotate(); https://github.com/Itseez/opencv/blob/f9e2b036a502309c0da5abc5e711d4cd64eb8a38/modules/core/include/opencv2/core/affine.hpp#L114
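The inverse path does exist: build the rotation matrix from the Euler angles (in whatever convention they were defined) and feed it to Rodrigues, which also works matrix-to-vector. A NumPy sketch assuming Z-Y-X order, with a closed-form matrix-to-rvec step standing in for `cv::Rodrigues(R, rvec)`:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def matrix_to_rvec(R):
    """Rotation matrix -> axis-angle vector (what cv::Rodrigues(R, rvec) returns).
    Assumes the angle is away from 0 and pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * v

rvec = matrix_to_rvec(euler_zyx_to_matrix(0.3, -0.2, 0.1))  # usable as Vec3d(X, Y, Z)
```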
I can find Euler angles from a rotation matrix by using [RQDecomp3x3()](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=euler#rqdecomp3x3) or as shown in [that question](http://answers.opencv.org/question/8842/rotation-matrix-to-euler-angle/), but is there an inverse function in OpenCV to find the rotation matrix from Euler angles, or better, to find the rotation vector (`Vec3d( X, Y, Z)`)?

AlexB, Thu, 25 Feb 2016 03:05:56 -0600
http://answers.opencv.org/question/88531/

**Online estimation of euler angels (camera pose) using images from camera and opencv library**
http://answers.opencv.org/question/57752/

I'm working on an Android application and I need to estimate the camera rotation in 3D online, using images from the camera and the OpenCV library. I would like to calculate Euler angles.
I have read [this](http://stackoverflow.com/questions/11642988/determine-camera-rotation-between-two-360x180-equirectangular-panoramic-images) and [this page](http://stackoverflow.com/questions/17027277/android-opencv-homography-to-camera-pose-considering-camera-intrinsics-and-ba?rq=1), and I can estimate the homography matrix as shown [here](http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv).
My first question is: do I really need the camera intrinsic matrix from camera calibration, or is the homography matrix (camera extrinsics) enough to estimate the Euler angles (pitch, roll, yaw)?
If the homography matrix is enough, how exactly can I do it?
Sorry, I am really a beginner with OpenCV and cannot decompose the homography "Mat" into a rotation matrix and a translation matrix as described [here](http://nghiaho.com/?page_id=846). How can I implement Euler angles on Android?
Below is my code using solvePnPRansac() and decomposeProjectionMatrix() to calculate Euler angles.
But it returns just a null vector, double[] eulerArray = {0,0,0}! Can somebody help me? What is wrong there? Thank you very much for any response!
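A likely culprit in the code below: solvePnPRansac returns a 3x1 rotation *vector*, yet its three values are written into the 3x3 block of projMatrix as if they were a matrix; the vector must first go through Rodrigues. A hedged NumPy sketch of the intended step (the rvec/tvec values are made-up placeholders, and a minimal Rodrigues stand-in keeps it self-contained):

```python
import numpy as np

def rodrigues_vec_to_mat(rvec):
    """Minimal Rodrigues: 3-vector -> 3x3 matrix (Calib3d.Rodrigues equivalent)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, float) / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# rvec as solvePnPRansac returns it: 3 numbers, NOT a 3x3 matrix
rvec = np.array([0.1, 0.2, 0.3])       # hypothetical output
tvec = np.array([0.0, 0.0, 1.0])

R = rodrigues_vec_to_mat(rvec)          # the missing conversion step
P = np.hstack([R, tvec.reshape(3, 1)])  # 3x4 projection matrix, now valid input
                                        # for decomposeProjectionMatrix
```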
    public double[] findEulerAngles(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches){
        KeyPoint[] k1 = keypoints1.toArray();
        KeyPoint[] k2 = keypoints2.toArray();
        List<DMatch> matchesList = matches.toList();
        List<KeyPoint> referenceKeypointsList = keypoints2.toList();
        List<KeyPoint> sceneKeypointsList = keypoints1.toList();

        // Calculate the max and min distances between keypoints.
        double maxDist = 0.0;
        double minDist = Double.MAX_VALUE;
        for (DMatch match : matchesList) {
            double dist = match.distance;
            if (dist < minDist) {
                minDist = dist;
            }
            if (dist > maxDist) {
                maxDist = dist;
            }
        }

        // Identify "good" keypoints based on match distance.
        List<Point3> goodReferencePointsList = new ArrayList<Point3>();
        ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
        double maxGoodMatchDist = 1.75 * minDist;
        for (DMatch match : matchesList) {
            if (match.distance < maxGoodMatchDist) {
                Point kk2 = k2[match.queryIdx].pt;
                Point kk1 = k1[match.trainIdx].pt;
                Point3 point3 = new Point3(kk1.x, kk1.y, 0.0);
                goodReferencePointsList.add(point3);
                goodScenePointsList.add(kk2);
            }
        }

        if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
            // There are too few good points to find the pose.
            return null;
        }

        MatOfPoint3f goodReferencePoints = new MatOfPoint3f();
        goodReferencePoints.fromList(goodReferencePointsList);
        MatOfPoint2f goodScenePoints = new MatOfPoint2f();
        goodScenePoints.fromList(goodScenePointsList);

        MatOfDouble mRMat = new MatOfDouble(3, 3, CvType.CV_32F);
        MatOfDouble mTVec = new MatOfDouble(3, 1, CvType.CV_32F);

        //TODO: solve camera intrinsic matrix
        Mat intrinsics = Mat.eye(3, 3, CvType.CV_32F); // dummy camera matrix
        intrinsics.put(0, 0, 400);
        intrinsics.put(1, 1, 400);
        intrinsics.put(0, 2, 640 / 2);
        intrinsics.put(1, 2, 480 / 2);

        Calib3d.solvePnPRansac(goodReferencePoints, goodScenePoints, intrinsics, new MatOfDouble(), mRMat, mTVec);

        // NOTE: solvePnPRansac appears to write a 3x1 rotation VECTOR into mRMat,
        // so rVecArray below holds only 3 elements; the commented-out Rodrigues
        // call is likely needed to obtain a 3x3 matrix before filling projMatrix.
        MatOfDouble rotCameraMatrix1 = new MatOfDouble(3, 1, CvType.CV_32F);
        double[] rVecArray = mRMat.toArray();
        // Calib3d.Rodrigues(mRMat, rotCameraMatrix1);
        double[] tVecArray = mTVec.toArray();

        MatOfDouble projMatrix = new MatOfDouble(3, 4, CvType.CV_32F); // 3x4 input projection matrix P
        projMatrix.put(0, 0, rVecArray[0]);
        projMatrix.put(0, 1, rVecArray[1]);
        projMatrix.put(0, 2, rVecArray[2]);
        projMatrix.put(0, 3, 0);
        projMatrix.put(1, 0, rVecArray[3]);
        projMatrix.put(1, 1, rVecArray[4]);
        projMatrix.put(1, 2, rVecArray[5]);
        projMatrix.put(1, 3, 0);
        projMatrix.put(2, 0, rVecArray[6]);
        projMatrix.put(2, 1, rVecArray[7]);
        projMatrix.put(2, 2, rVecArray[8]);
        projMatrix.put(2, 3, 0);

        MatOfDouble cameraMatrix = new MatOfDouble(3, 3, CvType.CV_32F); // output 3x3 camera matrix K
        MatOfDouble rotMatrix = new MatOfDouble(3, 3, CvType.CV_32F);    // output 3x3 external rotation matrix R
        MatOfDouble transVect = new MatOfDouble(4, 1, CvType.CV_32F);    // output 4x1 translation vector T
        MatOfDouble rotMatrixX = new MatOfDouble(3, 3, CvType.CV_32F);
        MatOfDouble rotMatrixY = new MatOfDouble(3, 3, CvType.CV_32F);
        MatOfDouble rotMatrixZ = new MatOfDouble(3, 3, CvType.CV_32F);
        MatOfDouble eulerAngles = new MatOfDouble(3, 1, CvType.CV_32F);  // three Euler angles of rotation, in degrees

        Calib3d.decomposeProjectionMatrix(projMatrix,
                cameraMatrix,
                rotMatrix,
                transVect,
                rotMatrixX,
                rotMatrixY,
                rotMatrixZ,
                eulerAngles);

        double[] eulerArray = eulerAngles.toArray();
        return eulerArray;
    }

mox, Wed, 18 Mar 2015 08:30:00 -0500
http://answers.opencv.org/question/57752/

**RQ_decomposition is there a problem with Y 90 /270 degrees?**
http://answers.opencv.org/question/52615/

Hi
I was wondering if anyone has come across a problem with the Euler angles returned by RQDecomp3x3 when rotating by 90 or 270 degrees about the Y axis. I have tried decomposing a rotation matrix composed of three rotations in the order Z*Y*X (starting with X, then Y, then Z), with any arbitrary combination of angles for X and Z where Y = 90, and I always get the Euler angle vector (90, 90, 90), or (90, -90, 90) where Y = 270. So regardless of my input angles for X and Z, the result is always the same. Has anyone come across this, or do you know how to explain why this is occurring?
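This is gimbal lock: at Y = 90° the X and Z rotations act about the same effective axis, so only their difference survives in the matrix, and no decomposition can recover X and Z individually. A NumPy illustration, independent of OpenCV:

```python
import numpy as np

def rot_zyx(z, y, x):
    """R = Rz(z) @ Ry(y) @ Rx(x), angles in degrees."""
    z, y, x = np.radians([z, y, x])
    Rz = np.array([[np.cos(z), -np.sin(z), 0], [np.sin(z), np.cos(z), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(x), -np.sin(x)], [0, np.sin(x), np.cos(x)]])
    return Rz @ Ry @ Rx

# With Y = 90 deg, any (Z, X) pair with the same Z - X yields the SAME matrix,
# so RQDecomp3x3 has to pick one fixed representative such as (90, 90, 90).
A = rot_zyx(30, 90, 10)   # Z - X = 20
B = rot_zyx(50, 90, 30)   # Z - X = 20, different inputs, identical matrix
```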
Any help with this will be greatly appreciated,
Suzana

Suzana, Thu, 08 Jan 2015 14:41:35 -0600
http://answers.opencv.org/question/52615/

**Kalman filtration for rotation**
http://answers.opencv.org/question/18186/

I want to use a Kalman filter for rotation. I have a rotation matrix, but I can't filter it directly, because all its elements are interdependent. That's why I am trying to convert the matrix to Euler angles and to filter each of the components.
My questions:
- Is this filtering of Euler angles correct? (Are the components x, y, z really independent? I think not.) If yes, how do I avoid gimbal lock during filtration?
- Is there a better way to filter the rotation?

tenta4, Mon, 05 Aug 2013 02:18:54 -0500
http://answers.opencv.org/question/18186/
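One common alternative is to convert each rotation to a quaternion, filter in quaternion space, and renormalize: quaternions have no gimbal lock and blend smoothly for small increments. A minimal sketch of just the state-blending step (not a full Kalman filter; the matrix-to-quaternion conversion is a separate step, and the sample values are made up):

```python
import numpy as np

def smooth_quaternion(q_prev, q_meas, alpha=0.2):
    """One step of exponential smoothing on unit quaternions (w, x, y, z).

    alpha is the measurement weight. The linear blend is a good approximation
    of slerp when consecutive orientations are close."""
    q_prev = np.asarray(q_prev, float)
    q_meas = np.asarray(q_meas, float)
    if np.dot(q_prev, q_meas) < 0:   # q and -q encode the same rotation:
        q_meas = -q_meas             # move both onto the same hemisphere first
    q = (1 - alpha) * q_prev + alpha * q_meas
    return q / np.linalg.norm(q)     # renormalize back onto the unit sphere

q = np.array([1.0, 0.0, 0.0, 0.0])                 # identity orientation
measurement = np.array([0.995, 0.0998, 0.0, 0.0])  # small rotation about x
q = smooth_quaternion(q, measurement)
```

A full quaternion Kalman filter additionally keeps the error state as a small 3-vector (axis-angle), which sidesteps the unit-norm constraint; the hemisphere check and renormalization above are still the key ingredients.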