OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Convert yaw, pitch and roll values to rVec for projectPoints
http://answers.opencv.org/question/214216/convert-yaw-pitch-and-roll-values-to-rvec-for-projectpoints/
I'm trying to take a set of images and use projectPoints to project a real-world lat/lng/alt onto each image, drawing a marker whenever the projected x,y falls within the image bounds. I have the lat, lng, and alt of the image, along with the yaw, pitch, and roll values from the camera that took it. My YPR values are in the following format:
- Yaw being the general orientation of the camera when on a horizontal plane: toward north=0, toward east = 90°, south=180°, west=270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being if the camera is tilted left or right when in your hands: +45° = tilted 45° in a clockwise rotation when you grab the camera, thus +90° (and -90°) would be the angle needed for a portrait picture for example, etc.
I have been stuck for a few days on how to convert these yaw, pitch, and roll values into a valid rVec for the projectPoints function.
Thanks for any help.
CVPilot, Wed, 12 Jun 2019

Calculate Euler angles after SolvePnp
http://answers.opencv.org/question/211087/calculate-euler-angles-after-solvepnp/
I'm writing an iOS app that detects facial points using ML Kit, and then uses solvePnP to calculate the pitch. I implemented the solution given here:
https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
That seems to work well, as the projected nose line drawn looks good.
Next, I try to convert the rotation vector to euler angles. I implemented this solution:
http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
This part is where it seems to fall apart. The calculated yaw/pitch/roll values are clearly wrong for my reference frame. Perhaps there is an issue converting between coordinate systems?
Here is my code:
```
static void getEulerAngles(cv::Mat &rotationMatrix, cv::Vec3d &euler_angles);

+ (NSArray*) estimatePose:(FIRVisionFace *)face imgSize:(CGSize)imgSize {
    // Contour legend: https://firebase.google.com/docs/ml-kit/images/examples/face_contours.svg
    FIRVisionFaceContour* faceOval = [face contourOfType:FIRFaceContourTypeFace];
    FIRVisionFaceContour* leftEyeContour = [face contourOfType:FIRFaceContourTypeLeftEye];
    FIRVisionFaceContour* rightEyeContour = [face contourOfType:FIRFaceContourTypeRightEye];
    FIRVisionFaceContour* noseBridge = [face contourOfType:FIRFaceContourTypeNoseBridge];
    FIRVisionFaceContour* upperLipTop = [face contourOfType:FIRFaceContourTypeUpperLipTop];
    FIRVisionPoint* chin = faceOval.points[18];
    FIRVisionPoint* leftEyeLeftCorner = leftEyeContour.points[0];
    FIRVisionPoint* rightEyeRightCorner = rightEyeContour.points[8];
    FIRVisionPoint* noseTip = noseBridge.points[1];
    FIRVisionPoint* leftMouthCorner = upperLipTop.points[0];
    FIRVisionPoint* rightMouthCorner = upperLipTop.points[10];

    // 2D/3D model points using https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
    std::vector<cv::Point2d> image_points;
    std::vector<cv::Point3d> model_points;
    image_points.push_back(cv::Point2d(noseTip.x.doubleValue, noseTip.y.doubleValue));                         // Nose tip
    image_points.push_back(cv::Point2d(chin.x.doubleValue, chin.y.doubleValue));                               // Chin
    image_points.push_back(cv::Point2d(leftEyeLeftCorner.x.doubleValue, leftEyeLeftCorner.y.doubleValue));     // Left eye left corner
    image_points.push_back(cv::Point2d(rightEyeRightCorner.x.doubleValue, rightEyeRightCorner.y.doubleValue)); // Right eye right corner
    image_points.push_back(cv::Point2d(leftMouthCorner.x.doubleValue, leftMouthCorner.y.doubleValue));         // Left mouth corner
    image_points.push_back(cv::Point2d(rightMouthCorner.x.doubleValue, rightMouthCorner.y.doubleValue));       // Right mouth corner
    model_points.push_back(cv::Point3d(0.0, 0.0, 0.0));          // Nose tip
    model_points.push_back(cv::Point3d(0.0, -330.0, -65.0));     // Chin
    model_points.push_back(cv::Point3d(-225.0, 170.0, -135.0));  // Left eye left corner
    model_points.push_back(cv::Point3d(225.0, 170.0, -135.0));   // Right eye right corner
    model_points.push_back(cv::Point3d(-150.0, -150.0, -125.0)); // Left mouth corner
    model_points.push_back(cv::Point3d(150.0, -150.0, -125.0));  // Right mouth corner

    double focal_length = imgSize.width; // Approximate focal length.
    cv::Point2d center = cv::Point2d(imgSize.width / 2, imgSize.height / 2);
    cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x,
                                                      0, focal_length, center.y,
                                                      0, 0, 1);
    cv::Mat dist_coeffs = cv::Mat::zeros(4, 1, CV_64FC1); // Assuming no lens distortion

    // Output rotation and translation
    cv::Mat rotation_vector; // Rotation in axis-angle form
    cv::Mat translation_vector;

    // Solve for pose
    cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);

    // Calculate a point to draw the line from the nose tip.
    std::vector<cv::Point3d> nose_end_point3D;
    std::vector<cv::Point2d> nose_end_point2D;
    nose_end_point3D.push_back(cv::Point3d(0, 0, 1000.0));
    cv::projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
    NSArray *noseLine = [NSArray arrayWithObjects:
                         [NSValue valueWithCGPoint:CGPointMake(noseTip.x.doubleValue, noseTip.y.doubleValue)],
                         [NSValue valueWithCGPoint:CGPointMake(nose_end_point2D[0].x, nose_end_point2D[0].y)],
                         nil];

    // Convert rotation vector to yaw/pitch/roll:
    // http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
    cv::Mat rotation_matrix; // note: cv::Rodrigues outputs a 3x3 rotation matrix here
    cv::Rodrigues(rotation_vector, rotation_matrix);
    cv::Vec3d euler_angles;
    getEulerAngles(rotation_matrix, euler_angles);
    NSLog(@"mlkit yaw = %f, roll = %f", face.headEulerAngleY, face.headEulerAngleZ);
    NSLog(@"opencv yaw = %f, pitch = %f, roll = %f", euler_angles[1], euler_angles[0], euler_angles[2]);
    return noseLine;
}

static void getEulerAngles(cv::Mat &rotationMatrix, cv::Vec3d &euler_angles) {
    cv::Mat cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ;
    double* _r = rotationMatrix.ptr<double>();
    double projMatrix[12] = {
        _r[0], _r[1], _r[2], 0,
        _r[3], _r[4], _r[5], 0,
        _r[6], _r[7], _r[8], 0
    };
    cv::decomposeProjectionMatrix(cv::Mat(3, 4, CV_64FC1, projMatrix),
                                  cameraMatrix,
                                  rotMatrix,
                                  transVect,
                                  rotMatrixX,
                                  rotMatrixY,
                                  rotMatrixZ,
                                  euler_angles);
}
```
For example, when I face straight to the camera, I get the following:
```
mlkit yaw = 3.786244, roll = 3.352636
opencv yaw = -1.416621, pitch = -179.549207, roll = -5.026994
```
And when I face left (pitch close to flat), I get the following:
```
mlkit yaw = -19.004604, roll = 4.542935
opencv yaw = -65.307372, pitch = -6.605039, roll = -57.922035
```
What am I doing wrong?
jacob, Tue, 02 Apr 2019

Roll, Pitch, Yaw ROS right hand notation from Aruco marker rvec
http://answers.opencv.org/question/208481/roll-pitch-yaw-ros-right-hand-notation-from-aruco-marker-rvec/
I'm trying to get the RPY of an ArUco marker from the camera view using the ROS notation. ROS axes are right-handed, with positive x pointing north, y west, and z upwards.
I'm following this post http://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/ but I can't get it to work properly for ROS notation. This is my implementation:
```
def rpy_decomposition(self, rvec):
    R, _ = cv2.Rodrigues(rvec)
    sin_x = math.sqrt(R[2, 0] * R[2, 0] + R[2, 1] * R[2, 1])
    singular = sin_x < 1e-6
    if not singular:
        z1 = math.atan2(R[2, 0], R[2, 1])   # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = math.atan2(R[0, 2], -R[1, 2])  # around z2-axis
    else:  # gimbal lock
        z1 = 0                              # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = 0                              # around z2-axis
    z2 = -(2 * math.pi - z2) % (2 * math.pi)
    return z1, x, z2
```
I can't really find working code for this in Python or C++. Thanks
veilkrand, Wed, 06 Feb 2019

Orientation (yaw) estimation of AR drone using OpenCV
http://answers.opencv.org/question/189354/orientation-yaw-estimation-of-ar-drone-using-opencv/
Hi,
I am working on a project that uses stereo cameras to obtain the 3D coordinates of a drone. I am currently trying to obtain an **orientation (yaw only)** estimate of the drone using a pair of externally (manually) calibrated stereo cameras.
**My Question:**
Is there any way to do this in OpenCV? I have seen [this tutorial](https://docs.opencv.org/3.3.0/dc/d2c/tutorial_real_time_pose.html) but it does not work for non-planar 3D objects.
**Note**:
Kindly note that while some existing posts (like [this](https://stackoverflow.com/questions/42586536/position-and-orientation-estimation-by-stereo-images)) exist, *they are mostly about triangulation* for the 3D coordinates, while my question here is about **finding the orientation (yaw only)** using external stereo cameras.
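Not a complete answer, but if two distinct points on the drone can be triangulated (say a front and a rear marker) with the calibrated stereo pair, yaw alone falls out of the horizontal direction between them. A minimal sketch, assuming a world frame with z up and yaw measured from the +x axis (both hypothetical choices):

```python
import math
import numpy as np

def yaw_from_markers(p_front, p_rear):
    """Yaw (radians) of the drone's front-rear axis, projected onto the
    horizontal plane. Assumes z-up world frame, yaw measured from +x."""
    d = np.asarray(p_front, dtype=float) - np.asarray(p_rear, dtype=float)
    return math.atan2(d[1], d[0])
```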
Thanks!
malharjajoo, Fri, 13 Apr 2018

Retrieve yaw, pitch, roll from rvec
http://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/
I need to retrieve the attitude angles of a camera (using `cv2` in Python).
- Yaw being the general orientation of the camera when on a horizontal plane: toward north=0, toward east = 90°, south=180°, west=270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being if the camera is tilted left or right when in your hands: +45° = tilted 45° in a clockwise rotation when you grab the camera, thus +90° (and -90°) would be the angle needed for a portrait picture for example, etc.
I already have `rvec` and `tvec` from `solvePnP()`.
Then I have computed:
`rmat = cv2.Rodrigues(rvec)[0]`
If I'm right, the camera position in the world coordinate system is given by:
`position_camera = -np.matrix(rmat).T * np.matrix(tvec)`
But how do I retrieve the corresponding attitude angles (yaw, pitch, and roll as described above) from the point of view of the observer (i.e. the camera)?
I have tried implementing this: http://planning.cs.uiuc.edu/node102.html#eqn:yprmat in a function:
```
import math
import numpy as np

def rotation_matrix_to_attitude_angles(R):
    cos_beta = math.sqrt(R[2, 1] * R[2, 1] + R[2, 2] * R[2, 2])
    singular = cos_beta < 1e-6
    if not singular:
        alpha = math.atan2(R[1, 0], R[0, 0])   # yaw   [z]
        beta = math.atan2(-R[2, 0], cos_beta)  # pitch [y]
        gamma = math.atan2(R[2, 1], R[2, 2])   # roll  [x]
    else:
        alpha = math.atan2(R[1, 0], R[0, 0])   # yaw   [z]
        beta = math.atan2(-R[2, 0], cos_beta)  # pitch [y]
        gamma = 0                              # roll  [x]
    return np.array([alpha, beta, gamma])
```
but it gives me results that are far from reality on a real dataset (even when applying it to the inverse rotation matrix, `rmat.T`).
Am I doing something wrong? If so, what?
All the information I've found is incomplete (it never rigorously states which reference frame is being used).
Thanks.
**Update:**
Rotation order seems to be of greatest importance.
So, do you know which of these matrices the `cv2.Rodrigues(rvec)` result corresponds to?
![rotation matrices](/upfiles/14987816662030655.png)
From: https://en.wikipedia.org/wiki/Euler_angles
**Update:**
I'm finally done. Here's the solution:
```
import math
import numpy as np

def yawpitchrolldecomposition(R):
    sin_x = math.sqrt(R[2, 0] * R[2, 0] + R[2, 1] * R[2, 1])
    singular = sin_x < 1e-6
    if not singular:
        z1 = math.atan2(R[2, 0], R[2, 1])   # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = math.atan2(R[0, 2], -R[1, 2])  # around z2-axis
    else:  # gimbal lock
        z1 = 0                              # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = 0                              # around z2-axis
    return np.array([[z1], [x], [z2]])

yawpitchroll_angles = -180 * yawpitchrolldecomposition(rmat) / math.pi
yawpitchroll_angles[0, 0] = (360 - yawpitchroll_angles[0, 0]) % 360  # change rotation sense if needed, comment this line otherwise
yawpitchroll_angles[1, 0] = yawpitchroll_angles[1, 0] + 90
```
That's all folks!
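For anyone checking conventions: the decomposition above is a Z-X-Z (proper Euler) factorization, so composing R = Rz(α)·Rx(β)·Rz(γ) and decomposing should return z2 ≈ α, x ≈ β, z1 ≈ γ for β in (0, π). A quick round-trip sketch of that claim:

```python
import math
import numpy as np

def zxz_decompose(R):
    """Same math as yawpitchrolldecomposition above, in radians."""
    sin_x = math.sqrt(R[2, 0] ** 2 + R[2, 1] ** 2)
    if sin_x < 1e-6:  # gimbal lock
        return 0.0, math.atan2(sin_x, R[2, 2]), 0.0
    z1 = math.atan2(R[2, 0], R[2, 1])
    x = math.atan2(sin_x, R[2, 2])
    z2 = math.atan2(R[0, 2], -R[1, 2])
    return z1, x, z2

def Rz(t):
    return np.array([[math.cos(t), -math.sin(t), 0.0],
                     [math.sin(t), math.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def Rx(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, math.cos(t), -math.sin(t)],
                     [0.0, math.sin(t), math.cos(t)]])

# Round trip: compose with known angles, then decompose.
alpha, beta, gamma = 0.3, 0.5, 0.7
z1, x, z2 = zxz_decompose(Rz(alpha) @ Rx(beta) @ Rz(gamma))
# z1 recovers gamma, x recovers beta, z2 recovers alpha
```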
swiss_knight, Tue, 20 Jun 2017

viola and jones yaw, pitch roll values
http://answers.opencv.org/question/42459/viola-and-jones-yaw-pitch-roll-values/
I was looking for some figures on how much yaw, pitch, and roll variation Viola-Jones detectors can handle.
I believe I came across a paper on this some time ago, but I can't remember which one.
I suppose that these values will depend on the type of classifier (lbpcascade_frontalface, haarcascade_frontalface_alt, haarcascade_frontalface_alt2,...).
So any figures anyone can provide would be fantastic, including figures for the eye, mouth, and nose classifiers.
Thanks
albertofernandez, Sat, 20 Sep 2014