OpenCV Q&A Forum - RSS feedhttp://answers.opencv.org/questions/OpenCV answersenCopyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018.Fri, 11 Oct 2019 13:37:27 -0500Transform camera positon from one ArUco marker to anotherhttp://answers.opencv.org/question/219606/transform-camera-positon-from-one-aruco-marker-to-another/I'm creating a university project with OpenCV Python and ArUco markers, where I would like to get a (relatively) robust pose estimation for the movement of the camera. I plan on using this for indoor drone flight graphing. For this, I have to transform the camera pose to world coordinates defined by the first seen marker.
I know there must be a transformation matrix between the markers, but I can't seem to figure out, what it is. I am trying with the difference of respective rvecs.
The code for the function in Python:
def TransformBetweenMarkers(tvec_m, tvec_n, rvec_m, rvec_n):
tvec_m = np.transpose(tvec_m) # tvec of 'm' marker
tvec_n = np.transpose(tvec_n) # tvec of 'n' marker
# vector from 'm' to 'n' marker in the camera's coordinate system
dtvec = tvec_m - tvec_n
# get the markers' rotation matrices respectively
R_m = cv2.Rodrigues(rvec_m)[0]
R_n = cv2.Rodrigues(rvec_n)[0]
# camera pose in 'm' marker's coordinate system
tvec_mm = np.matmul(-R_m.T, tvec_m)
# camera pose in 'n' marker's coordinate system
tvec_nn = np.matmul(-R_n.T, tvec_n)
# translational difference between markers in 'm' marker's system,
# basically the origin of 'n'
dtvec_m = np.matmul(-R_m.T, dtvec)
# this gets me the same as tvec_mm,
# but this only works, if 'm' marker is seen
# tvec_nm = dtvec_m + np.matmul(-R_m.T, tvec_n)
# something with the rvec difference must give the transformation(???)
drvec = rvec_m-rvec_n
# transformed to 'm' marker
drvec_m = np.transpose(np.matmul(R_m.T, np.transpose(drvec)))
dR_m = cv2.Rodrigues(drvec_m)[0]
# I want to transform tvec_nn with a single matrix,
# so it would be interpreted in 'm' marker's system
tvec_nm = dtvec_m + np.matmul(dR_m.T, tvec_nn)
# objective: tvec_mm == tvec_nm
This is the best I could get, but there is still an error value of +-0.03 meters between the `tvec_mm` and `tvec_nm` translation values.
Is it possible to get any better with this? Is this even a legit transformation or just a huge coincidence, that it gives approximately the same values? Any ideas?
Thank you!SzepyFri, 11 Oct 2019 13:37:27 -0500http://answers.opencv.org/question/219606/template matching invariant to rotations and noisehttp://answers.opencv.org/question/219149/template-matching-invariant-to-rotations-and-noise/ IS any way to enhance the maximum value of similarity in case the template on image is rotated with different angle than the saved template
The problem here is in case there is stamp on image rotated with small angle the template matching returns maximum matching value with small value so the program cannot now if this value is due to rotation or noise or due to the stamp is not found in the image MostafaMohsen17Wed, 02 Oct 2019 09:29:04 -0500http://answers.opencv.org/question/219149/ArUco orientation using the function aruco.estimatePoseSingleMarkers()http://answers.opencv.org/question/215377/aruco-orientation-using-the-function-arucoestimateposesinglemarkers/Hi everyone!
I'm trying to program a python app that determine the position and orientation of an aruco marker. I calibrated the camera and everything and I used *aruco.estimatePoseSingleMarkers* that returns the translation and rotation vectors.
The translation vector works fine but I don't understand how the rotation vector works. I took some picture to illustrate my problem with the "roll rotation":
Here the rotation vector is approximately [in degree]: [180 0 0]
![image description](/upfiles/15626850347225475.png)
Here the rotation vector is approximately [in degree]: [123 -126 0]
![image description](/upfiles/15626851829885092.png)
And here the rotation vector is approximately [in degree]: [0 -180 0]
![image description](/upfiles/15626853815019584.png)
And I don't see the logic in these angles. I've tried the other two rotations (pitch and yaw) and there appear also "random". So if you have an explication I would be very happy :) lamaaTue, 09 Jul 2019 10:28:00 -0500http://answers.opencv.org/question/215377/Cannot get correct translation and rotation matrix in opencv pythonhttp://answers.opencv.org/question/215136/cannot-get-correct-translation-and-rotation-matrix-in-opencv-python/I have been trying this for two days and for some reason I cannot get it to work. I have two cameras with different intrinsic camera matrices, they are setup with global coordinates in blender
Camera Left: -5, -5, 0 with 45 degrees rotation about Z-axis
Camera Right: 5, -5, 0 with -45 degrees rotation about Z-axis
I simulated points in blender and the positions on the cameras should be exact. I hard coded these into the code, but I am getting these results
**Angles**
Out[22]: (173.62487179673582, 165.61076366618005, 155.76859475230103)
Out[21]: (179.7648211135763, 168.02313442078392, -22.82952854817841)
**Translation**
Out[24]: array([ 0.04009013, 0.03941624, -0.99841832])
Out[23]: array([-0.04009013, -0.03941624, 0.99841832])
I should be getting, exactly:
**Angles** [0, -90, 0]
**Translation** [.707, 0, .707]
Scene setup for reference:
![image description](/upfiles/15621750508841992.png)
**Here is my code**
import cv2
import numpy as np
K_l = np.array([[1800.0, 0.0, 960.0], [0.0, 1800.0, 540.0], [0.0, 0.0, 1.0]])
K_r = np.array([[2100.0, 0.0, 960.0], [0.0, 2100.0, 540.0], [0.0, 0.0, 1.0]])
pts_l = np.array([[ 1041 , 540 ],
[ 925 , 465 ],
[ 786 , 458 ],
[ 1060 , 469 ],
[ 756 , 732 ],
[ 325 , 503 ],
[ 886 , 958 ],
[ 960 , 180 ],
[ 796 , 424 ],
[ 945 , 219 ],
[ 651 , 386 ],
[ 1731 , 676 ],
[ 572 , 590 ]])
pts_r = np.array([[ 1203 , 540 ],
[ 1001 , 453 ],
[ 825 , 458 ],
[ 1139 , 445 ],
[ 1072 , 752 ],
[ 418 , 516 ],
[ 410 , 886 ],
[ 1086 , 95 ],
[ 1151 , 405 ],
[ 1355 , 99 ],
[ 942 , 388 ],
[ 1445 , 883 ],
[ 994 , 589 ]])
F, mask = cv2.findFundamentalMat(pts_l.astype(float),pts_r.astype(float),cv2.FM_LMEDS)
E = np.dot(np.dot(np.transpose(K_r),F),K_l)
U, S, Vt = np.linalg.svd(E)
W = np.array([0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]).reshape(3, 3)
R_1 = U.dot(W).dot(Vt)
angles1, _, _, _, _, _ = cv2.RQDecomp3x3(R_1)
R_2 = U.dot(W.T).dot(Vt)
angles2, _, _, _, _, _ = cv2.RQDecomp3x3(R_2)
T1 = U[:, 2]
T2 = -U[:, 2]gyronikelegWed, 03 Jul 2019 12:31:07 -0500http://answers.opencv.org/question/215136/Calculate Euler angles after SolvePnphttp://answers.opencv.org/question/211087/calculate-euler-angles-after-solvepnp/I'm writing an iOS app that detect facial points using ML Kit, and then uses SolvePnp to calculate the pitch. I implemented the solution give here to solve pnp:
https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
That seems to work well, as the projected nose line drawn looks good.
Next, I try to convert the rotation vector to euler angles. I implemented this solution:
http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
This part is where it seems to fall apart. The calculated yaw/pitch/roll are clearly wrong for my reference frame. Perhaps there an issue of converting between coordinate systems?
Here is my code:
```
+(NSArray*) estimatePose:(FIRVisionFace *)face imgSize:(CGSize)imgSize {
// Contour legend: https://firebase.google.com/docs/ml-kit/images/examples/face_contours.svg
FIRVisionFaceContour* faceOval = [face contourOfType:FIRFaceContourTypeFace];
FIRVisionFaceContour* leftEyeContour = [face contourOfType:FIRFaceContourTypeLeftEye];
FIRVisionFaceContour* rightEyeContour = [face contourOfType:FIRFaceContourTypeRightEye];
FIRVisionFaceContour* noseBridge = [face contourOfType:FIRFaceContourTypeNoseBridge];
FIRVisionFaceContour* upperLipTop = [face contourOfType:FIRFaceContourTypeUpperLipTop];
FIRVisionPoint* chin = faceOval.points[18];
FIRVisionPoint* leftEyeLeftCorner = leftEyeContour.points[0];
FIRVisionPoint* rightEyeRightCorner = rightEyeContour.points[8];
FIRVisionPoint* noseTip = noseBridge.points[1];
FIRVisionPoint* leftMouthCorner = upperLipTop.points[0];
FIRVisionPoint* rightMouthCorner = upperLipTop.points[10];
std::vector<cv::Point2d> image_points;
std::vector<cv::Point3d> model_points;
// 2D/3D model points using https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
image_points.push_back( cv::Point2d(noseTip.x.doubleValue, noseTip.y.doubleValue) ); // Nose tip
image_points.push_back( cv::Point2d(chin.x.doubleValue, chin.y.doubleValue) ); // Chin
image_points.push_back( cv::Point2d(leftEyeLeftCorner.x.doubleValue, leftEyeLeftCorner.y.doubleValue) ); // Left eye left corner
image_points.push_back( cv::Point2d(rightEyeRightCorner.x.doubleValue, rightEyeRightCorner.y.doubleValue) ); // Right eye right corner
image_points.push_back( cv::Point2d(leftMouthCorner.x.doubleValue, leftMouthCorner.y.doubleValue) ); // Left Mouth corner
image_points.push_back( cv::Point2d(rightMouthCorner.x.doubleValue, rightMouthCorner.y.doubleValue) ); // Right mouth corner
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f)); // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f)); // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f)); // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f)); // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f)); // Right mouth corner
double focal_length = imgSize.width; // Approximate focal length.
cv::Point2d center = cv::Point2d(imgSize.width / 2, imgSize.height / 2);
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0 , focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4,1,cv::DataType<double>::type); // Assuming no lens distortion
// Output rotation and translation
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
// Calculate a point to draw line from nose tip.
std::vector<cv::Point3d> nose_end_point3D;
std::vector<cv::Point2d> nose_end_point2D;
nose_end_point3D.push_back(cv::Point3d(0,0,1000.0));
cv::projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
NSArray *noseLine = [NSArray arrayWithObjects:
[NSValue valueWithCGPoint:CGPointMake(noseTip.x.doubleValue, noseTip.y.doubleValue)],
[NSValue valueWithCGPoint:CGPointMake(nose_end_point2D[0].x, nose_end_point2D[0].y)],
nil];
// Convert rotation vector to yaw/pitch/roll:
// http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
cv::Mat rodrigues_rotation_vector;
cv::Rodrigues(rotation_vector, rodrigues_rotation_vector);
cv::Vec3d euler_angles;
getEulerAngles(rodrigues_rotation_vector, euler_angles);
NSLog(@"mlkit yaw = %f, roll = %f", face.headEulerAngleY, face.headEulerAngleZ);
NSLog(@"opencv yaw = %f, pitch = %f, roll = %f", euler_angles[1], euler_angles[0], euler_angles[2]);
return noseLine;
}
void getEulerAngles(cv::Mat &rotCamerMatrix,cv::Vec3d &euler_angles) {
cv::Mat cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ;
double* _r = rotCamerMatrix.ptr<double>();
double projMatrix[12] = {
_r[0], _r[1], _r[2], 0,
_r[3], _r[4], _r[5], 0,
_r[6], _r[7], _r[8], 0
};
decomposeProjectionMatrix( cv::Mat(3, 4, CV_64FC1, projMatrix),
cameraMatrix,
rotMatrix,
transVect,
rotMatrixX,
rotMatrixY,
rotMatrixZ,
euler_angles);
}
```
For example, when I face straight to the camera, I get the following:
```
mlkit yaw = 3.786244, roll = 3.352636
opencv yaw = -1.416621, pitch = -179.549207, roll = -5.026994
```
And when I face left (pitch close to flat), I get the following:
```
mlkit yaw = -19.004604, roll = 4.542935
opencv yaw = -65.307372, pitch = -6.605039, roll = -57.922035
```
What am I doing wrong?
jacobTue, 02 Apr 2019 10:14:55 -0500http://answers.opencv.org/question/211087/projectPoints functionality questionhttp://answers.opencv.org/question/96474/projectpoints-functionality-question/ I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0 regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using ProjectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes the object in the image plane.
I was wondering if it is possible to eliminate the projection, and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then use projectPoints to project them onto the image plane (passing identity matrix and zero matrix for rotation, translation respectively), but the results seem way off. How can I eliminate the projection step to just get the world coordinates of the axes, once they've been rotation and translated? Thanks! bfc_opencvTue, 14 Jun 2016 21:19:07 -0500http://answers.opencv.org/question/96474/Roll, Pitch, Yaw ROS right hand notation from Aruco marker rvechttp://answers.opencv.org/question/208481/roll-pitch-yaw-ros-right-hand-notation-from-aruco-marker-rvec/I'm trying to get the RPY of an Aruco marker from the camera view using the ROS notation. ROS axis notations are right hand, where positive x points north, y west and z upwards.
I'm following this post http://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/ but I can't get it to work properly for ROS notation. This is my implementation:
def rpy_decomposition(self, rvec):
R, _ = cv2.Rodrigues(rvec)
sin_x = math.sqrt(R[2, 0] * R[2, 0] + R[2, 1] * R[2, 1])
singular = sin_x < 1e-6
if not singular:
z1 = math.atan2(R[2, 0], R[2, 1]) # around z1-axis
x = math.atan2(sin_x, R[2, 2]) # around x-axis
z2 = math.atan2(R[0, 2], -R[1, 2]) # around z2-axis
else: # gimbal lock
z1 = 0 # around z1-axis
x = math.atan2(sin_x, R[2, 2]) # around x-axis
z2 = 0 # around z2-axis
z2 = -(2*math.pi -z2)%(2*math.pi)
return z1, x, z2
I can't really find a working code in Python or C++. Thanks
veilkrandWed, 06 Feb 2019 08:40:35 -0600http://answers.opencv.org/question/208481/How to determine the angle of rotation?http://answers.opencv.org/question/205685/how-to-determine-the-angle-of-rotation/ There is a square in an image with equal sides (that is inside another square).
![image description](/upfiles/15453320057265702.jpg)
Does OpenCV have functions which can help to efficiently calculate the angle?
ya_ocv_userThu, 20 Dec 2018 12:55:19 -0600http://answers.opencv.org/question/205685/Dear, I have rvec(rotation vector) and tvec(translation vector)..http://answers.opencv.org/question/204063/dear-i-have-rvecrotation-vector-and-tvectranslation-vector/How can I find the camera pose (EYE vector)? I would like to continue to find the Reflectance. Thank you in advancezar zarMon, 26 Nov 2018 04:47:56 -0600http://answers.opencv.org/question/204063/Rotation matrix to rotation vector (Rodrigues function)http://answers.opencv.org/question/85360/rotation-matrix-to-rotation-vector-rodrigues-function/Hello,
I have a 3x3 rotation matrix that I obtained from stereoCalibrate (using the ros stereo calibration node). I need to obtain a rotation vector (1x3), therefore I used the rodrigues formula. When I went to check the result that I got with this in matlab using the [Pietro Perona - California Institute of Technology](http://www.mathworks.com/matlabcentral/fileexchange/41511-deprecated-light-field-toolbox-v0-2-v0-3-now-available/content/LFToolbox0.2/SupportFunctions/CameraCal/rodrigues.m) I get two different results:
This is the code in cpp:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <tf/transform_broadcaster.h>
#include <ros/param.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <cv_bridge/cv_bridge.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
std::vector<double> T, R;
double cols, rows, R_cols, R_rows;
int i, j;
cv::Mat rot_vec = Mat::zeros(1,3,CV_64F), rot_mat = Mat::zeros(3,3,CV_64F);
ros::init(argc, argv, "get_extrinsic");
ros::NodeHandle node;
if(!node.getParam("translation_vector/cols", cols))
{
ROS_ERROR_STREAM("Translation vector (cols) could not be read.");
return 0;
}
if(!node.getParam("translation_vector/rows", rows))
{
ROS_ERROR_STREAM("Translation vector (rows) could not be read.");
return 0;
}
T.reserve(cols*rows);
if(!node.getParam("rotation_matrix/cols", cols))
{
ROS_ERROR_STREAM("Rotation matrix (cols) could not be read.");
return 0;
}
if(!node.getParam("rotation_matrix/rows", rows))
{
ROS_ERROR_STREAM("Rotation matrix (rows) could not be read.");
return 0;
}
R.reserve(cols*rows);
if(!node.getParam("translation_vector/data", T))
{
ROS_ERROR_STREAM("Translation vector could not be read.");
return 0;
}
if(!node.getParam("rotation_matrix/data", R))
{
ROS_ERROR_STREAM("Rotation matrix could not be read.");
return 0;
}
for(i=0; i<3; i++)
{
for(j=0; j<3; j++)
rot_mat.at<double>(i,j) = R[i*3+j];
}
std::cout << "Rotation Matrix:"<<endl;
for(i=0; i<3; i++)
{
for(j=0; j<3; j++)
std::cout<< rot_mat.at<double>(i,j) << " ";
std::cout << endl;
}
std::cout << endl;
std::cout << "Rodrigues: "<<endl;
Rodrigues(rot_mat, rot_vec);
for(i=0; i<3; i++)
std::cout << rot_vec.at<double>(1,i) << " ";
std::cout << endl;
ros::spin();
return 0;
};
And its output is:
Rotation Matrix:
-0.999998 -0.00188887 -0.000125644
0.0018868 -0.999888 0.014822
-0.000153626 0.0148217 0.99989
Rodrigues:
0.0232688 3.13962 4.94066e-324
But when I load the same rotation matrix in matlab and use the rodrigues function I get the following:
R =
-1.0000 -0.0019 -0.0001
0.0019 -0.9999 0.0148
-0.0002 0.0148 0.9999
>> rodrigues(R)
ans =
-0.0002
0.0233
3.1396
I can see that the numbers match, but they are in different positions and there seems to be an issue also with the signs.....Which formula should I trust?aripodMon, 25 Jan 2016 07:54:16 -0600http://answers.opencv.org/question/85360/How to find rotation angle from homography matrix?http://answers.opencv.org/question/203890/how-to-find-rotation-angle-from-homography-matrix/I have 2 images and i am finding simliar key points by SURF.
I want to find rotation angle between the two images from homograpohy matrix.
Can someone please tell me how to find rotation angle between two images from homography matrix.
if len(good)>MIN_MATCH_COUNT:
src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
Thank you.ronak.dedhiaThu, 22 Nov 2018 23:30:21 -0600http://answers.opencv.org/question/203890/Reverse camera angle w/ aruco trackinghttp://answers.opencv.org/question/203907/reverse-camera-angle-w-aruco-tracking/I have the Aruco tracking working and from cobbling together stuff from various code samples, ended up with the code below, where `final` is the view matrix passed to the camera. The problem is that the rotation of the camera isn't exactly what I need... not sure exactly which axis is wrong, but you can see in the following video that I want the base of the model to be sitting on the marker- but instead it's not oriented quite right. Any tips to get it right would be great! I'm open to re-orienting it in blender too if that's the right solution. Just not sure exactly _how_ it's wrong right now.
Video example:
https://youtu.be/-7WDxa-e2Oo
Code:
const inverse = cv.matFromArray(4,4, cv.CV_64F, [
1.0, 1.0, 1.0, 1.0,
-1.0,-1.0,-1.0,-1.0,
-1.0,-1.0,-1.0,-1.0,
1.0, 1.0, 1.0, 1.0
]);
cv.estimatePoseSingleMarkers(markerCorners, 0.1, cameraMatrix, distCoeffs, rvecs, tvecs);
cv.Rodrigues(rvecs, rout);
const tmat = tvecs.data64F;
const rmat = rout.data64F;
const viewMatrix = cv.matFromArray(4,4,cv.CV_64F, [
rmat[0],rmat[1],rmat[2],tmat[0],
rmat[3],rmat[4],rmat[5],tmat[1],
rmat[6],rmat[7],rmat[8],tmat[2],
0.0,0.0,0.0,1.0
]);
const output = cv.Mat.zeros(4,4, cv.CV_64F);
cv.multiply(inverse, viewMatrix, output);
cv.transpose(output, output);
const final = output.data64F; dakomFri, 23 Nov 2018 02:24:57 -0600http://answers.opencv.org/question/203907/Identify object on a conveyor belthttp://answers.opencv.org/question/201824/identify-object-on-a-conveyor-belt/Hello! I'm thinking of trying out openCV for my robot.
I want the program to be able to identify the metal parts on a conveyor belt that are single ones, and not the ones lying in clusters.
I will buy a raspberry pie with the raspberry pie camera module(is this a good idea for this project?).
I want the program to return the X-Y coordinate(or position of the pixel on the image) on a specific place on the metal part(so that the robot can lift it where it is supposed to be lifted). I would also want the program to have a adjustable degree of freedom of the orientation(rotation) of the single metal part to be localized.
**Where do I even start?**
A simple drawing of the robot
![image description](https://i.imgur.com/YE3LKpV.png)
An image of what the images could look like the program will process(have not bought the final camera yet and lighting).
![image description](https://i.imgur.com/OMXMq5M.jpg)
Here is the metal part I want to pick up from the conveyor belt.
![image description](https://i.imgur.com/uA0buvC.jpg)HatmpatnFri, 26 Oct 2018 01:02:17 -0500http://answers.opencv.org/question/201824/solvePnP with a priori known pitch and rollhttp://answers.opencv.org/question/199943/solvepnp-with-a-priori-known-pitch-and-roll/How to correctly call solvePnP (for estimate the pose of a large ArUco board), if the board orientation (pitch and roll, not yaw) is known (from an IMU)?okalachevSat, 22 Sep 2018 13:59:48 -0500http://answers.opencv.org/question/199943/Triangulation gives weird results for rotationhttp://answers.opencv.org/question/199673/triangulation-gives-weird-results-for-rotation/OpenCV version 3.4.2
I am taking a stereo pair and using recoverPose to get the [R|t] pose of the camera, If I start at the origin and use triangulatePoints the result looks somewhat like expected although I would have expected the z points to be positive;
These are the poses of the cameras [R|t]
p0: [1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 1, 0]
P1: [0.9999726146107655, -0.0007533190856300971, -0.007362237354563941, 0.9999683127209806;
0.0007569149205790131, 0.9999995956157767, 0.0004856419317479311, -0.001340876868928852;
0.007361868534054914, -0.0004912012195572309, 0.9999727804360723, 0.007847012372698725]
I get these results where the red dot and the yellow line indicates the camera pose (x positive is right, y positive is down):
![image description](/upfiles/1537317206819271.png)
When I rotate the first camera by 58.31 degrees and then use recoverPose to get the relative pose of the second camera the results are wrong.
Pose matrices where P0 is rotated by 58.31 degrees around the y axis before calling my code below.
P0: [0.5253219888177297, 0, 0.8509035245341184, 0;
0, 1, 0, 0;
-0.8509035245341184, 0, 0.5253219888177297, 0]
P1: [0.5315721563840478, -0.0007533190856300971, 0.8470126770406503, 0.5319823932782873;
-1.561037994149129e-05, 0.9999995956157767, 0.0008991799591322519, -0.001340876868928852;
-0.8470130118915117, -0.0004912012195572309, 0.5315719296650566, -0.8467543535708145]
(x positive is right, y positive is down)
![image description](/upfiles/15373172174565108.png)
The pose of the second frame is calculated as follows:
new_frame->E = cv::findEssentialMat(last_frame->points, new_frame->points, K, cv::RANSAC, 0.999, 1.0, new_frame->mask);
int res = recoverPose(new_frame->E, last_frame->points, new_frame->points, K, new_frame->local_R, new_frame->local_t, new_frame->mask);
// https://stackoverflow.com/questions/37810218/is-the-recoverpose-function-in-opencv-is-left-handed
// Convert so transformation is P0 -> P1
new_frame->local_t = -new_frame->local_t;
new_frame->local_R = new_frame->local_R.t();
new_frame->pose_t = last_frame->pose_t + (last_frame->pose_R * new_frame->local_t);
new_frame->pose_R = new_frame->local_R * last_frame->pose_R;
hconcat(new_frame->pose_R, new_frame->pose_t, new_frame->pose);
I then call triangulatePoints using the K * P0 and K * P1 on the corresponding points.
I feel like this is some kind of coordinate system issue as the points I would expect to have positive z values have a -z value in the plots and so the rotation is behaving strangely. I haven't been able to figure out what I need to do to fix it.
EDIT: Here is a gif of what's going on as I rotate through 360 degrees around Y. The cameras are still parallel. What am I missing, shouldn't the shape of the point cloud remain the same if both camera poses remain in relative positions even thought they have been rotated around the origin? Why are the points squashed into the X axis?
![image description](/upfiles/15373205818094867.gif)maym86Tue, 18 Sep 2018 14:55:35 -0500http://answers.opencv.org/question/199673/Rotation vector interpretationhttp://answers.opencv.org/question/197981/rotation-vector-interpretation/I use opencv cv2.solvePnP() function to calculate rotation and translation vectors. Rotation is returned as rvec [vector with 3DOF]. I would like to ask for help with interpreting the rvec.
As far as I understand rvec = the rotation vector representation:
- the rotation vector is the axis of the rotation
- the length of rotation vector is the rotation angle θ in radians [around axis, so rotation vector]
Rvec returned by solvePnP:
rvec =
[[-1.5147142 ]
[ 0.11365167]
[ 0.10590861]]
Then:
angle_around_rvec = sqrt(-1.5147142^2 + 0.11365167^2 + 0.10590861^2) [rad] = 1.52266 [rad] = 1.52266*180/3.14 [deg] = 87.286 [deg]
**1. Does 3 rvec components correspond to world coordinates? Or what are these directions?**
**2. Can I interpret the vector components as separate rotation angles in radians around components directions?**
My rvec components interpretation:
angle_around_X = -1.5147142 [rad] = -1.5147*180/3.14 [deg] = -86.83 [deg]
angle_around_Y = 0.11365167 [rad] = 0.11365167*180/3.14 [deg] = 6.52 [deg]
angle_around_Z = 0.10590861 [rad] = 0.10590861*180/3.14 [deg] = 6.07 [deg]
My usecase:
I have coordinates of four image points. I know the coordinates of these points in the real world. I know camera intrinsic matrix. I use PnP3 to get rotation and translation vector. From rotation matrix, I would like to find out what are the angles around fixed global/world axes: X, Y, Z. I am NOT interested in Euler angles. I want to find out how an object is being rotated around the fixed world coordinates (not it's own coordinate system).
I would really appreciate your help. I feel lost in rotation.
Thank you in advance.dziadygeThu, 23 Aug 2018 13:55:50 -0500http://answers.opencv.org/question/197981/Coordinate system used in the surface_matching modulehttp://answers.opencv.org/question/196387/coordinate-system-used-in-the-surface_matching-module/Hello,
I'm using the surface matching module within a project in Unity through a C++ DLL.
I'm attempting to match two of the same models in different poses, using their vertices as point clouds. So far I have the translation working correctly, but I'm having some trouble interpreting the rotations. I've tried guessing at the coordinate system by applying several combinations of rotations, axis swapping and inverting to the resulting quaternion but I've been unable to reach a complete solution.
I'm uncertain whether the coordinate system used in surface_matching is left or right handed and even if the quaternion in the Pose3D structure is represented as [x,y,z,w] or [w,x,y,z]. Could anyone offer some advice?
Thanks in advance.MrCharlesWed, 25 Jul 2018 11:29:51 -0500http://answers.opencv.org/question/196387/Method for finding Orientation Error using Axis-Anglehttp://answers.opencv.org/question/193675/method-for-finding-orientation-error-using-axis-angle/Hi,
I have a reference value for Roll, pitch and yaw (Euler Angles)
and my estimate for the same. I want to find the error between the two.
If I convert each of the RPY values to a Rotation matrix, I see some possible ways (see below) of finding
the orientation error.
I recently came across this openCV function in the calib3d module: [get_rotation_error](https://github.com/opencv/opencv/pull/11506) that uses Rodrigues/Axis-Angle (I think they mean the same) for finding the error between 2 rotation matrices.
**I have 2 questions** -
1) In the method given in [get_rotation_error](https://github.com/opencv/opencv/pull/11506), it seems to "subtract" the two rotation matrices by transposing one (not sure what the negative sign is about)
error_mat = R_ref * R.transpose() * -1
error = cv::norm( cv::Rodrigues ( error_mat ))
**How are we supposed to interpret the output** ( I believe the output of the cv::norm( rodrigues_vector) is the angle of the rodrigues vector according to openCV convention. Does this mean I simply need to convert it to degrees to find the angle error (between reference and my estimates) in degrees ?
I would also like to mention that **this method keeps returning 3.14159** even for wildly different values of the reference and my estimates. Is there something that I''m missing ?
======
2) I thought of another method, slightly different from the above , what if I do the following -
my_angle = cv::norm (cv::Rodrigues ( R ))
reference_angle = cv::norm (cv::Rodrigues ( R_ref ))
error = reference_angle - my_angle
**Is there something wrong** with method 2) ? I have tried it and it gives a different output compared to method 1).
I would be very grateful if someone can answer the above queries or even point me in the right direction.
Thanks!malharjajooTue, 12 Jun 2018 21:05:54 -0500http://answers.opencv.org/question/193675/Strange ArUco behavior / OpenCV SolvePnphttp://answers.opencv.org/question/190853/strange-aruco-behavior-opencv-solvepnp/Hi There,
I tested the accuracy of ArUco markers in severeal distances and used for that a board with a not rotated marker, several rotated marker on their z-Axis (known rotation) and several more not rotated marker. All markers are measured in their rotation in relation to the first not rotated marker.
Now I calculate the transformation (with quaternions) from the marker of interest in the reference marker. The output for my accuracy is the angle theta from the axis angles. The strange thing is, that the rotation error of the rotated (5°, 20°, 30°, 45°, 90° 180°) markers is small (max. 1°) and the error of the not roted marker (0°) is large (>2°).
I do the subpixel corner refinement and the detected markers looks fine for me. I can exclude the error of the camera, because changing positions (near to the center of the image or not) does not change the accuracy. As well I switched the IDs.
How can it be, that rotated marker are more accurate then non roated marker. Could it be a singularity on the detection or numeric errors?
Thank you for your help!
Sarahsarah1802Fri, 04 May 2018 02:31:17 -0500http://answers.opencv.org/question/190853/How can I get the accuracy between two angles (euler or other)?http://answers.opencv.org/question/185672/how-can-i-get-the-accuracy-between-two-angles-euler-or-other/ Hi Together,
I have a board of marker and detect their angles in respect to the camera. I know that one marker to the other marker should have been rotated with 5° or an other already known angle about the z-Axis. Due to the camera - marker "relationship" is there always a "flip offset" of 180°-X (X is there because I captured the pictures not perpendicular). Now I get for instance the angle (euler ZYX):
A -178.155774553622°; -1.81510372911041°; 5.46620496345042° (rotated marker 5°)
B 175.347721071838°; -1.19249002241927°; -0.334586900200200° (reference "zero" marker 0°)
C -6.49650437453965 °; -0.622613706691140°; 5.80079186365062° (the difference between marker 5° and marker 0°)
D 0°; 0°; 5° (the difference it should be)
My problem is, that depending on the convention (XYZ/ZYX/ZXZ and so on...) there are alway different angles. I know that should be like that. But I don't know how to calculate the difference in a prober way, so that I can compare what the real difference over each axis is.
Is there any way to compare the angles in a better way, maybe not in euler angles but in a way to say "1° decree offset"?
Thank you very much
Sarahsarah1802Wed, 28 Feb 2018 04:57:04 -0600http://answers.opencv.org/question/185672/Centering opencv rotationhttp://answers.opencv.org/question/182793/centering-opencv-rotation/I'm having difficulties getting opencv rotations to center.
The rotation must retain all data so no clipping is allowed.
My first test case is using 90 and -90 degrees to simplify the transformation matrix (see https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html)
I also thought the best way to observe rotations is to use a simple case where the border pixel values are set to observe how the box rotates.
The python code I tried came from Flemin's Blog on rotation (http://john.freml.in/opencv-rotation)
Below is a picture of the original non-rotated image in python. Use the grey point as (4,9) for reference.
![image description](/upfiles/1516341559715749.png)
Then after running the python script (script below), I get a rotation where it is shifted to the right one column. Note the reference point is at (1,4) when it should be at (0,4)
![image description](/upfiles/15163417082184282.png)
Below is the Python script. I added width and height offsets to the function to allow me to experiment with offsets to the tx and ty rotation parameters. I found that setting the width offset to 1 made the 90 degree rotation case match Matlab, but it didn't help -90.
UPDATE 1/19 9AM: I tried setting offset = -0.5 in the function rotate_about_center() below and the 90 and -90 degree rotations center as expected. For a 10x10 image, the reasoning why this may work is that the center point defined by (cols/2, rows/2) is not (5,5), but rather (4.5, 4.5). The same logic applied to a 11x11 image: the center is not (5.5,5.5) but rather (5,5). Rotations at 45 and -45 don't center - meaning they visually don't look centered in the box computed of size nw x nh. So I think I understand why a "center" equal to (cols/2 - 0.5, rows/2 - 05) works but a center of (cols/2, rows/2) does not, however, most examples I've found do not subtract the 0.5.
import cv2
from matplotlib import pyplot as plt
import functools
import math
bwimshow = functools.partial(plt.imshow, vmin=0, vmax=255,
cmap=plt.get_cmap('gray'))
def rotate_about_center(src, angle, widthOffset=0., heightOffset=0, scale=1.):
w = src.shape[1]
h = src.shape[0]
# Add offset to correct for center of images.
wOffset = -0.5
hOffset = -0.5
rangle = np.deg2rad(angle) # angle in radians
# now calculate new image width and height
nw = (abs(np.sin(rangle)*h) + abs(np.cos(rangle)*w))*scale
nh = (abs(np.cos(rangle)*h) + abs(np.sin(rangle)*w))*scale
print("nw = ", nw, "nh = ", nh)
# ask OpenCV for the rotation matrix
rot_mat = cv2.getRotationMatrix2D((nw*0.5 + wOffset, nh*0.5 + hOffset), angle, scale)
# calculate the move from the old center to the new center combined
# with the rotation
rot_move = np.dot(rot_mat, np.array([(nw-w)*0.5 + widthOffset, (nh-h)*0.5 + heightOffset,0]))
# the move only affects the translation, so update the translation
# part of the transform
rot_mat[0,2] += rot_move[0]
rot_mat[1,2] += rot_move[1]
return cv2.warpAffine(src, rot_mat, (int(math.ceil(nw)), int(math.ceil(nh))), flags=cv2.INTER_LANCZOS4)
def main():
# create image
rows = 10
cols = 10
angle = -90
widthOffset = 0 # need 1 to match 90 degrees and ? for -90 degrees.
heightOffset = 0
img = np.zeros((rows,cols), np.float32)
img[:, 0] = 255
img[:, cols-1] = 255
img[0, :] = 200
img[rows-1, :] = 200
# mark some pixels for reference points.
img[0, int(cols/2 - 1)] = 0
img[rows-1, int(cols/2) - 1] = 100
bwimshow(img)
plt.show()
img = rotate_about_center(img, angle, widthOffset, heightOffset)
print("img shape = ", img.shape)
print('Data type', img.dtype)
bwimshow(img)
plt.show()
cv2.waitKey(0)
cv2.destroyAllWindows()
if __name__ == '__main__':
main()enter code here
I apologize for the reams and reams of code, but hopefully it makes it easier for someone to replicate the problem.epattonFri, 19 Jan 2018 00:27:55 -0600http://answers.opencv.org/question/182793/OpenCV + OpenGL: proper camera pose using solvePnPhttp://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/I've got problem with obtaining proper camera pose from iPad camera using OpenCV.
I'm using custom made 2D marker (based on [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26) ) - I want to render 3D cube over that marker using OpenGL.
In order to recieve camera pose I'm using solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
<!-- language: c++ -->
cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);
tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted
cv::Mat R;
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t(); // rotation of inverse
tvec = -R * tvec; // translation of inverse
cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1; // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
double *p = T.ptr<double>(3);
p[0] = p[1] = p[2] = 0;
p[3] = 1;
camera matrix & dist coefficients are coming from *findChessboardCorners* function, *imagePoints* are manually detected corners of marker (you can see them as green square in the video posted below), and *markerObjectPoints* are manually hardcoded points that represents marker corners:
<!-- language: c++ -->
markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, 6, 0));
markerObjectPoints.push_back(cv::Point3d(-6, 6, 0));
Because marker is 12 cm long in real world, I've chosed the same size in the for easier debugging.
As a result I'm recieving 4x4 matrix T, that I'll use as ModelView matrix in OpenCV.
Using GLKit drawing function looks more or less like this:
<!-- language: c++ -->
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
// preparations
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);
// set modelViewMatrix
float mat[16] = generateOpenGLMatFromFromOpenCVMat(T);
currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
effect.transform.modelviewMatrix = currentModelMatrix;
[effect prepareToDraw];
glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
}
I'm not rotating everything for 180 degrees around X axis (as it was mentioned in previously linked article), because I doesn't look as necessary.
The problem is that it doesn't work! Translation vector looks OK, but X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axises one by one), but nothing actually works.
What should I do? How should I properly display that 3D cube? Translation / rotation vectors that come from solvePnP are looking reasonable, so I guess that I can't correctly map these vectors to OpenGL matrices.axadiwSat, 26 Oct 2013 17:49:13 -0500http://answers.opencv.org/question/23089/wrong rotation matrix when using recoverpose between two very similar imageshttp://answers.opencv.org/question/180264/wrong-rotation-matrix-when-using-recoverpose-between-two-very-similar-images/ I'm trying to perform visual odometry with a camera on top of a car. Basically I use Fast or GoodFeaturesToTrack ( I don't know yet which one is more convenient) and then I follow those points with calcOpticalFlowPyrLK. Once I have both previuos and actual points I call findEssentialMat and then recoverPose to obtain rotation and translation matrix.
My program works quite well. It has some errors when there are images with sun/shadow in the sides but the huge problem is WHEN THE CAR STOPS. When the car stops or his speed is quite low the frames looks very similar (or nearly the same) and the rotation matrix gets crazy (I guess the essential matrix too).
Does anyone knows if it is a common error? Any ideas on how to fix it?
I don't know what information you need to answer it but it seems that It is a concept mistake that I have. I have achieved an acurracy of 1º and 10 metres after a 3km ride but anytime I stop.....goodbye!
Thank you so much in advanceMarquesVanVicoTue, 12 Dec 2017 05:29:00 -0600http://answers.opencv.org/question/180264/get rotation from fundamental matrixhttp://answers.opencv.org/question/176270/get-rotation-from-fundamental-matrix/ I wonder if it is possible to get relative rotation between two uncalibrated cameras, based on an image pair that has feature points to be matched between the two cameras?
I read some articles and it sounds to me that it is possible to get the relative rotation between the two cams from the fundamental matrix. but after i searched around I only find solutions using essential mat which needs the camera to be calibrated...
shelpermiscFri, 13 Oct 2017 08:54:09 -0500http://answers.opencv.org/question/176270/Find ROI on an image from given referencehttp://answers.opencv.org/question/174520/find-roi-on-an-image-from-given-reference/Hello. I have the following problem to solve. Suppose I have a reference image consisting of some geometric objects and numbers on a homogeneous background. They are sufficiently distinct from the background - all these objects are more close to white gray-color, whereas background is all close to black.
I have a reference image and sample images, which can have different scale, some angle of rotation with respect to the reference and also some horizontal or vertical shift. What I need to do is to find the whole ROI, i.e. the region which is clearly distinguished from the background. Moreover, I need to identify regions corresponding to particular geometric objects (e.g. triangles) and regions that contain only numbers.
What method is better to apply here? I guess about SIFT implementation, since it is invariant under affine transformations. But my question is more about technique: how to implement this? I know that SIFT transform in OpenCV gives you coordinates of keypoints. and computes descriptors.
The reference image looks like this:
![image description](/upfiles/15056688881601366.jpg)newtSun, 17 Sep 2017 10:11:52 -0500http://answers.opencv.org/question/174520/How to rotate a camera to point to an object on the screenhttp://answers.opencv.org/question/172516/how-to-rotate-a-camera-to-point-to-an-object-on-the-screen/ I have a camera which points in a direct, I have a unit vector `C` which describes the orientation of the camera in world coordinates.
There is a point of interest in the image taken by the camera. Given the field of view of the camera and image size, I can compute two vectors in pixel space:
`A`, the principal point (center point of the image), and
`B` the point of interest in pixel space.
I want to rotate the camera `C` (in world coordinates) such that it now points at the object represented on screen by `B`
It's unclear to me how to transition between the on-screen pixel-space orientation of vectors `A` and `B` and the world space vector `C`.davidparks21Sun, 20 Aug 2017 16:00:03 -0500http://answers.opencv.org/question/172516/Proper way of rotating 3D points around axishttp://answers.opencv.org/question/169888/proper-way-of-rotating-3d-points-around-axis/Hello!
I have a problem with apply rotation to a set of 3D points. I use depth map, which store Z coordinates of points, also I use reverse of camera intrinsic matrix to obtain X and Y coords of point. I need to rotate those 3D points aorund Y axis and compute depth map after rotation. The code I use is here:
for (int a = 0; a < depthValues.rows; ++a)
{
for (int b = 0; b < depthValues.cols; ++b)
{
float oldDepth = depthValues.at<cv::Vec3f>(a, b)[0];
if (oldDepth > EPSILON)
{
cv::Mat pointInWorldSpace = cameraMatrix.inv() * cv::Mat(cv::Vec3f(a, b , 1), false);
pointInWorldSpace *= oldDepth;
cv::Mat rotatedPointInWorldSpace = rotation * pointInWorldSpace;
float newDepth = rotatedPointInWorldSpace.at<cv::Vec3f>(0, 0)[2];
cv::Mat rotatedPointInImageSpace = cameraMatrix * rotatedPointInWorldSpace;
int x = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[0] / newDepth;
int y = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[1] / newDepth;
x = x < 0 ? 0 : x;
y = y < 0 ? 0 : y;
x = x > depthValues.rows - 1 ? depthValues.rows - 1 : x;
y = y > depthValues.cols - 1 ? depthValues.cols - 1 : y;
depthValuesAfterConversion.at < cv::Vec3f >(x, y) = cv::Vec3f(newDepth, newDepth, newDepth);
}
}
}
Here's how I compute rotation matrix:
float angle = (15.0 * 3.14159265f) / 180.0f;
float rotateYaxis[3][3] =
{
{ cos(angle), 0, -sin(angle) },
{ 0, 1, 0 },
{ sin(angle), 0, cos(angle) }
};
cv::Mat rotation(3, 3, CV_32FC1, rotateYaxis);
Unfortunately, after applying this rotation to my depth map it looks like it's rotated around X axis. I discovered that when I compute rotation matrix as it was rotation around X axis - my code works lke expected.
My question is: could you point me out where I made mistake to my code? Using matrix I've described I expected my depth map to be rotated around Y axis, not X.
Thank you for your help!
seaxgastFri, 28 Jul 2017 15:36:05 -0500http://answers.opencv.org/question/169888/Retrieve yaw, pitch, roll from rvechttp://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/ I need to retrieve the attitude angles of a camera (using `cv2` on Python).
- Yaw being the general orientation of the camera when on an horizontal plane: toward north=0, toward east = 90°, south=180°, west=270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horitzontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being if the camera is tilted left or right when in your hands: +45° = tilted 45° in a clockwise rotation when you grab the camera, thus +90° (and -90°) would be the angle needed for a portrait picture for example, etc.
<br>
I have yet `rvec` and `tvec` from a `solvepnp()`.
Then I have computed:
`rmat = cv2.Rodrigues(rvec)[0]`
If I'm right, camera position in the world coordinates system is given by:
`position_camera = -np.matrix(rmat).T * np.matrix(tvec)`
But how to retrieve corresponding attitude angles (yaw, pitch and roll as describe above) from the point of view of the observer (thus the camera)?
I have tried implementing this : http://planning.cs.uiuc.edu/node102.html#eqn:yprmat in a function :
def rotation_matrix_to_attitude_angles(R) :
import math
import numpy as np
cos_beta = math.sqrt(R[2,1] * R[2,1] + R[2,2] * R[2,2])
validity = cos_beta < 1e-6
if not validity:
alpha = math.atan2(R[1,0], R[0,0]) # yaw [z]
beta = math.atan2(-R[2,0], cos_beta) # pitch [y]
gamma = math.atan2(R[2,1], R[2,2]) # roll [x]
else:
alpha = math.atan2(R[1,0], R[0,0]) # yaw [z]
beta = math.atan2(-R[2,0], cos_beta) # pitch [y]
gamma = 0 # roll [x]
return np.array([alpha, beta, gamma])
but it gives me some results which are far away from reality on a true dataset (even when applying it to the inverse rotation matrix: `rmat.T`).
Am I doing something wrong?
And if yes, what?
All informations I've found are incomplete (never saying in which reference frame we are or whatever in a rigorous way).
Thanks.
**Update:**
Rotation order seems to be of greatest importance.
So; do you know to which of these matrix does the `cv2.Rodrigues(rvec)` result correspond?:
![rotation matrices](/upfiles/14987816662030655.png)
From: https://en.wikipedia.org/wiki/Euler_angles
<h3>Update:</h3>
I'm finally done. Here's the solution:
def yawpitchrolldecomposition(R):
import math
import numpy as np
sin_x = math.sqrt(R[2,0] * R[2,0] + R[2,1] * R[2,1])
validity = sin_x < 1e-6
if not singular:
z1 = math.atan2(R[2,0], R[2,1]) # around z1-axis
x = math.atan2(sin_x, R[2,2]) # around x-axis
z2 = math.atan2(R[0,2], -R[1,2]) # around z2-axis
else: # gimbal lock
z1 = 0 # around z1-axis
x = math.atan2(sin_x, R[2,2]) # around x-axis
z2 = 0 # around z2-axis
return np.array([[z1], [x], [z2]])
yawpitchroll_angles = -180*yawpitchrolldecomposition(rmat)/math.pi
yawpitchroll_angles[0,0] = (360-yawpitchroll_angles[0,0])%360 # change rotation sense if needed, comment this line otherwise
yawpitchroll_angles[1,0] = yawpitchroll_angles[1,0]+90
That's all folks!
swiss_knightTue, 20 Jun 2017 08:49:20 -0500http://answers.opencv.org/question/161369/Rodrigues rotationhttp://answers.opencv.org/question/163351/rodrigues-rotation/I do not understand the difference between these two equations:
<br>
1. from wikipedia:
![wiki Rodrigues formula](https://wikimedia.org/api/rest_v1/media/math/render/svg/14de5f7bfa4af6a7867008d8fd790d14e3a54530)
https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
<br>
2. from open CV doc:
![cv2 Rodrigues formal](http://docs.opencv.org/2.4/_images/math/8bffbe8d9297cebc136dc8ead9a40cad3940a640.png)
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)
<br>
Where is the **cos(θ)** gone on the wiki page in the formula 1. ?
Shouln't it be: v_{rot} = cos(θ)v + sin.... ?
Then on the wiki page, there is no more cos(θ) in the definition of R...
<br>
Or did I miss something?
swiss_knightSun, 02 Jul 2017 16:47:32 -0500http://answers.opencv.org/question/163351/Comparing Two Contours: Rotation invariant?http://answers.opencv.org/question/157572/comparing-two-contours-rotation-invariant/ I found one approach for estimate the orientation of two contours [here](http://answers.opencv.org/question/113492/orientation-of-two-contours/) , which rotates one contour and checks the distance to the original.
I changed the headers to
#include <opencv2/core.hpp>
#include <opencv2/shape.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv_modules.hpp>
#include <iostream>
#include <fstream>
#include <string.h>
and the main to:
int main(int argc, char* argv[])
It may be kind of a stupid question, but first of all i don't know, why the transformation of the contours should improve the result of computeDistance. Is the <cv::shapecontextdistanceextractor> not invariant to rotation and translation, because it does an internal fit?
If this would be the case, my results would be coherent, because I always get 0 as distance (but unfortunately no image as well). Also the result from an other program, where i match rotated contours with cv::shapecontextdistanceextractor> as well as the hausdorff metric seems not to be wrong (small distances, but no exact 0). JoeBroeselWed, 07 Jun 2017 13:45:36 -0500http://answers.opencv.org/question/157572/