OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Fri, 04 Dec 2020 10:49:29 -0600

Triangulation with Ground Plane as Origin
http://answers.opencv.org/question/238744/triangulation-with-ground-plane-as-origin/

Hello, I am working on a project where I have two calibrated cameras (c1, c2) mounted to the ceiling of my lab, and I want to triangulate points on objects that I place in the capture volume. I want my final output 3D points to be relative to a world origin placed on the ground plane (the floor of the lab). I have some questions about my process and about multiplying the necessary transformations. Here is what I have done so far...
To start, I captured an image, with c1, of the ChArUco board on the ground that will act as the origin of my "world". I detect corners (cv::aruco::detectMarkers / cv::aruco::interpolateCornersCharuco) in the image taken by c1 and obtain the transformation (with cv::projectPoints) from 3D world coordinates to 3D camera coordinates.
![transform of board coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
I followed the same process of detecting corners on the ChArUco board with c2 (board in same position) and obtained the transformation that takes a point relative to the board origin to the camera origin...
![transform of board coords to camera 2 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc2%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q1. With the two transformations, and my calibrated intrinsic parameters, should I be able to pass these to cv::triangulatePoints to obtain 3D points that are relative to the ChArUco board origin?**
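For what it's worth, the pipeline in Q1 can be sanity-checked end to end in numpy with made-up intrinsics and poses (the small DLT below solves the same linear system as cv::triangulatePoints; every number here is a synthetic stand-in, not the lab's calibration):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Synthetic stand-ins for the calibrated setup (not the real lab values).
K1 = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
K2 = np.array([[820.0, 0, 320], [0, 820, 240], [0, 0, 1]])

# ^c1 M_board and ^c2 M_board as 3x4 [R|t] blocks (cameras above the board).
M1 = np.hstack([rot_x(np.pi), [[0.0], [0.0], [3.0]]])
M2 = np.hstack([rot_x(np.pi + 0.3), [[0.5], [0.0], [3.2]]])

# Projection matrices: pixel = K [R|t] X_board, so triangulated points
# come out directly in board (world) coordinates.
P1, P2 = K1 @ M1, K2 @ M2

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate_dlt(P1, P2, u1, u2):
    # Linear DLT triangulation (the system cv::triangulatePoints also solves).
    A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

X_board = np.array([0.1, -0.2, 0.05])   # a point near the board origin
X_rec = triangulate_dlt(P1, P2, project(P1, X_board), project(P2, X_board))
print(X_rec)   # recovers X_board up to numerical noise
```

Because the projection matrices are built from the board-relative extrinsics, the triangulated points come out in board coordinates with no extra step.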
Next, I was curious: if I use cv::stereoCalibrate with my camera pair to obtain the transformation from camera-2-relative points to camera-1-relative points, could I combine this with the transform between camera-1-relative points and board-relative points to get a transform between camera-2-relative points and board-relative points?
After running cv::stereoCalibrate I obtain (where c1 is the origin camera that c2 transforms to)...
![transform of camera 2 coords to camera 1 coords](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_1%7D%20%5C%5C%20%7BY_c_1%7D%20%5C%5C%20%7BZ_c_1%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%5E%7Bc1%7DM_%7Bc2%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bc2%7D%7D%20%5C%5C%20%7BY_%7Bc2%7D%7D%20%5C%5C%20%7BZ_%7Bc2%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
**Q2. Should I be able to combine transforms in the following manner to get a transform that is the same as (or very close to) my transform from board points to camera 2 points?**
![combined transforms](https://latex.codecogs.com/gif.latex?%5Cbegin%7Bpmatrix%7D%7BX_c_2%7D%20%5C%5C%20%7BY_c_2%7D%20%5C%5C%20%7BZ_c_2%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D%20%3D%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%5Cbegin%7Bpmatrix%7D%7BX_%7Bboard%7D%7D%20%5C%5C%20%7BY_%7Bboard%7D%7D%20%5C%5C%20%7BZ_%7Bboard%7D%7D%20%5C%5C%201%20%5Cend%7Bpmatrix%7D)
![combined transforms approximation](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%5Capprox%20%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D)
**I tried to do this and noticed that the transform obtained by detecting the ChArUco board corners is significantly different from the one obtained by combining the transformations. Should this work as I stated, or have I misunderstood something and done the math incorrectly? Here is the output I get for the two methods (translation units are meters)...**
Output from projectPoints
![](https://latex.codecogs.com/gif.latex?%5E%7Bc2%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9844968%20%26%20-0.14832049%20%26%200.09363274%20%26%20-0.7521725%5C%5C%200.01426749%20%26%20-0.46433134%20%26%20-0.88554664%20%26%201.10571043%20%5C%5C%200.17482132%20%26%200.87315373%20%26%20-0.45501656%20%26%203.89971067%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Output from combined transforms (projectPoints w/ c1 and board, and stereoCalibrate w/ c1 and c2)
![](https://latex.codecogs.com/gif.latex?%28%5E%7Bc1%7DM_%7Bc2%7D%29%5E%7B-1%7D%20%5Ccdot%20%5E%7Bc1%7DM_%7Bboard%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%200.9621638%20%26%20-0.00173254%20%26%200.01675597%20%26%20-1.03920386%5C%5C%20-0.00161398%20%26%20-0.51909025%20%26%20-0.06325754%20%26%200.02077932%20%5C%5C%20-0.01954778%20%26%20-0.07318432%20%26%20-0.49902605%20%26%201.0988982%20%5C%5C%200%20%26%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D)
Looking at the transform obtained from projectPoints the translation makes sense as in the physical setup the ChArUco board is about 4m away from the camera. This makes me think the combined transform doesn't really make sense...
edit/update: Adding raw data from projectPoints and stereoCalibrate:
Sorry for the delay. Going through my code, I see that I actually use estimatePoseCharucoBoard to get my transformation matrix from board coordinates to camera coordinates; sorry about that! Here are the matrices that I obtained:
**Note: Any time that a calibration board object is needed the board dimensions given are in meters. So scaling should remain the same between matrices.**
board to camera 190 from estimatePoseCharucoBoard -->
c1^M_board
[[ 0.99662517 0.05033606 -0.06484257 -0.88300593]
[-0.02915834 -0.52132771 -0.85285826 0.82721859]
[-0.07673376 0.85187071 -0.5181006 4.03620873]
[ 0. 0. 0. 1. ]]
board to camera 229 from estimatePoseCharucoBoard -->
c2^M_board
[[ 0.9844968 -0.14832049 0.09363274 -0.7521725 ]
[ 0.01426749 -0.46433134 -0.88554664 1.10571043]
[ 0.17482132 0.87315373 -0.45501656 3.89971067]
[ 0. 0. 0. 1. ]]
camera 229 to camera 190 from stereoCalibrate -->
c1^M_c2
[[ 0.96542194 0.05535236 0.2547481 -1.20694685]
[-0.03441951 0.99570816 -0.08591013 0.03888629]
[-0.25841009 0.07417122 0.96318371 0.04002158]
[ 0. 0. 0. 1. ]]
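Recomputing the Q2 composition directly from the three matrices above (a check added in editing, using only the posted numbers):

```python
import numpy as np

c1_M_board = np.array([[0.99662517, 0.05033606, -0.06484257, -0.88300593],
                       [-0.02915834, -0.52132771, -0.85285826, 0.82721859],
                       [-0.07673376, 0.85187071, -0.5181006, 4.03620873],
                       [0.0, 0.0, 0.0, 1.0]])
c2_M_board = np.array([[0.9844968, -0.14832049, 0.09363274, -0.7521725],
                       [0.01426749, -0.46433134, -0.88554664, 1.10571043],
                       [0.17482132, 0.87315373, -0.45501656, 3.89971067],
                       [0.0, 0.0, 0.0, 1.0]])
c1_M_c2 = np.array([[0.96542194, 0.05535236, 0.2547481, -1.20694685],
                    [-0.03441951, 0.99570816, -0.08591013, 0.03888629],
                    [-0.25841009, 0.07417122, 0.96318371, 0.04002158],
                    [0.0, 0.0, 0.0, 1.0]])

# Predicted ^c2 M_board = (^c1 M_c2)^-1 . ^c1 M_board
pred = np.linalg.inv(c1_M_c2) @ c1_M_board
t_err = np.linalg.norm(pred[:3, 3] - c2_M_board[:3, 3])
print(t_err)   # ~0.04 m: the raw matrices are in fact mutually consistent
```

With these raw values the composed translation lands within roughly 4 cm of the directly measured one, and the composed rotation block stays orthonormal, which suggests the oddly scaled "combined transforms" matrix quoted earlier came from the combining code rather than from the calibration data itself.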
Here is a code snippet showing how I obtain the transformation matrix to ground:
// detect markers
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
// attempt to re-find more markers based on the already detected ones
aruco::refineDetectedMarkers(image, charucoboard, corners, ids, rejected,
                             noArray(), noArray(), 10.f, 3.f, true, noArray(), detectorParams);
if(ids.size() < 1) {
    cerr << "No marker IDs found" << endl;
}
// interpolate ChArUco corners
Mat currentCharucoCorners, currentCharucoIds;
aruco::interpolateCornersCharuco(corners, ids, image, charucoboard,
                                 currentCharucoCorners, currentCharucoIds);
if(currentCharucoCorners.rows < 6) {
    cerr << "Not enough corners for calibration" << endl;
}
cout << "Corners Found: " << currentCharucoCorners.rows << endl;
cout << "Total Object Points: " << objPoints.size() << endl;
aruco::estimatePoseCharucoBoard(currentCharucoCorners, currentCharucoIds, charucoboard,
                                intrinsics.cameraMatrix, intrinsics.distCoeffs,
                                rvec, tvec, false);
Rodrigues(rvec, R);
cout << "Rotation Matrix: " << R << endl;
cout << "Translation Vector: " << tvec << endl;
P = RTtoP(R, tvec);
cout << "Projection Matrix: " << P << endl;

ConnorM, Fri, 04 Dec 2020 10:49:29 -0600
http://answers.opencv.org/question/238744/

camera pose rotation to local rotation for an object
http://answers.opencv.org/question/234652/camera-pose-rotation-to-local-rotation-for-an-object/

Suppose I have the pose of an object as a 4x4 matrix (containing translation and rotation), but it's expressed with respect to my camera frame.
Suppose I also have the intrinsic calibration of the camera that was used to find the pose of the object.
Now I want to get the object's rotation in its local coordinate system, not in the camera-frame coordinate system.
What is the math to perform this transformation?
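One way to read this (a sketch with a made-up pose, not from the thread): the intrinsics never enter; they only map camera coordinates to pixels. The 4x4 pose already relates the object's local frame to the camera frame, so its rotation block, or that of its inverse, answers the question, depending on which direction you want:

```python
import numpy as np

# Hypothetical 4x4 pose of the object in the camera frame: X_cam = T @ X_obj.
a = np.deg2rad(30.0)
R = np.array([[np.cos(a), -np.sin(a), 0],
              [np.sin(a),  np.cos(a), 0],
              [0, 0, 1]])
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = [0.1, -0.2, 2.0]

# The intrinsics play no role here: they only map camera coords to pixels.
# The rotation block of T^-1 expresses the camera axes in the object's
# local frame; a rigid transform inverts in closed form.
T_inv = np.eye(4)
T_inv[:3, :3] = R.T                 # inverse of a rotation is its transpose
T_inv[:3, 3] = -R.T @ T[:3, 3]
print(T_inv[:3, :3])
```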
I have a feeling this is pretty easy to do, but I just can't get my head around it. Is it something like the inverse of the camera intrinsics multiplied by the 4x4 pose?

JT3D, Wed, 02 Sep 2020 05:05:44 -0500
http://answers.opencv.org/question/234652/

Find image rotation angle using DFT
http://answers.opencv.org/question/231320/find-image-rotation-angle-using-dft/

Hi, I was using this tutorial:
https://docs.opencv.org/3.4/d8/d01/tutorial_discrete_fourier_transform.html
My aim is to use the dft result to rotate my original images so they are better aligned horizontally.
While getting the DFT was a piece of cake, I'm stuck on the last part: "From this we may calculate the offset and perform an image rotation to correct eventual miss alignments."
Can someone point me to some articles describing how to calculate the offset?
Preferably in python as the rest of my code is also in python.
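The core idea can be sketched in pure numpy with synthetic stripes (the frequency and angle below are chosen so the spectral peak lands on integer bins; real images need windowing and peak interpolation on top of this):

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N]
theta = np.arctan2(3.0, 4.0)                     # ~36.87 degrees
fx, fy = 20 * np.cos(theta), 20 * np.sin(theta)  # = 16, 12: integer bins
img = np.sin(2 * np.pi * (fx * x + fy * y) / N)  # stripes at angle theta

# The magnitude spectrum rotates with the image, so the dominant
# orientation can be read off the strongest off-center peak.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
cy, cx = N // 2, N // 2
spec[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0         # suppress DC neighbourhood

ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
angle = np.degrees(np.arctan2(ky - cy, kx - cx)) % 180.0
print(angle)   # ~36.87: the spectral peak rotates with the stripes
```

The modulo-180 step handles the conjugate-symmetric twin peak; the recovered angle is only defined up to a half-turn, which is enough for horizontal alignment.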
kayman, Wed, 17 Jun 2020 07:21:40 -0500
http://answers.opencv.org/question/231320/

[SOLVED] Trouble converting Euler angles to rotation matrix for cal
http://answers.opencv.org/question/231022/solvedtrouble-converting-euler-angles-to-rotation-matrix-for-cal/

I have a matched pair of stereo cameras that come with calibration data, including the rotation and translation between the left and right cameras. The left-to-right rotation is given as a set of 3 Euler angles, and I'm told the rotation is passive in the order x->y->z. The coordinate system they use has the z axis shooting out away from you, the y axis headed down to your feet, and the x axis shooting off from your right shoulder. I'm trying to convert these three angles into a matrix that I can use in stereoRectify. If I don't set the rotation matrix everything seems to work okay, but when I try to create my own rotation matrix the image is barely recognizable.
I tried the following:
// next we need to compose the 3 rotations into their matrix forms
cv::Mat_<double> R_x(3,3);
R_x << 1,           0,            0,
       0, cos(E_x[0]), -sin(E_x[0]),
       0, sin(E_x[0]),  cos(E_x[0]);
cv::Mat_<double> R_y(3,3);
R_y <<  cos(E_y[0]), 0, sin(E_y[0]),
                  0, 1,           0,
       -sin(E_y[0]), 0, cos(E_y[0]);
cv::Mat_<double> R_z(3,3);
R_z << cos(E_z[0]), -sin(E_z[0]), 0,
       sin(E_z[0]),  cos(E_z[0]), 0,
                 0,            0, 1;
R = R_x * R_y * R_z;
And I also tried re-ordering the last multiplication step, which gives me different rotations but nothing that looks correct. How should I be doing this?
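A sketch of why the order, and the active/passive reading, both matter (made-up angles; the vendor's exact convention has to come from their documentation):

```python
import numpy as np

def R_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, ay, az = 0.1, 0.2, 0.3   # example angles in radians

# "Passive x->y->z" is ambiguous: extrinsic x->y->z composes as Rz @ Ry @ Rx,
# intrinsic x->y->z as Rx @ Ry @ Rz, and a passive convention may further
# require transposing (or negating the angles of) the whole product.
R_extrinsic = R_z(az) @ R_y(ay) @ R_x(ax)
R_intrinsic = R_x(ax) @ R_y(ay) @ R_z(az)
print(np.allclose(R_extrinsic, R_intrinsic))   # False: the order matters
```

With only four plausible candidates (two orders, each optionally transposed), brute-forcing which one makes stereoRectify's output look sane is usually the fastest fix.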
Thank you

eric_engineer, Mon, 08 Jun 2020 10:06:06 -0500
http://answers.opencv.org/question/231022/

What is the purpose of cos_t /= hist_width in OpenCV SIFT?
http://answers.opencv.org/question/229732/what-is-the-purpose-of-cos_thist_width-in-opencv-sift/

Hello fellows,
I am digging into the source code of SIFT in OpenCV, and I encountered a few lines of code that I can't justify. The calcSIFTDescriptor function takes the input "ori" and computes its cosine and sine, as in the 2 lines shown below:
> float cos_t = cosf(ori*(float)(CV_PI/180));
> float sin_t = sinf(ori*(float)(CV_PI/180));
A few lines later, cos_t and sin_t are divided by "hist_width" before being used anywhere, as indicated below:
> cos_t /= hist_width;
> sin_t /= hist_width;
Then the modified cos_t and sin_t are used for the rotation:
> float c_rot = j * cos_t - i * sin_t;
> float r_rot = j * sin_t + i * cos_t;
I find it hard to understand why the source code updates cos_t and sin_t by dividing them by hist_width. Any help would be appreciated!
Cheers!
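A possible reading (added in editing): the division folds a units conversion into the rotation, so c_rot and r_rot come out in histogram-cell units rather than pixels, ready for the trilinear binning that follows. The equivalence, sketched:

```python
import numpy as np

ori = 30.0                      # keypoint orientation, degrees
hist_width = 6.0                # pixels per descriptor histogram cell

cos_t = np.cos(np.radians(ori))
sin_t = np.sin(np.radians(ori))

i, j = 5.0, -3.0                # pixel offsets from the keypoint centre

# Two equivalent views of the same computation:
# (1) rotate in pixels, then convert pixels -> histogram cells
r_rot_px = j * sin_t + i * cos_t
c_rot_px = j * cos_t - i * sin_t
r_bins_1, c_bins_1 = r_rot_px / hist_width, c_rot_px / hist_width

# (2) what the quoted source does: fold the 1/hist_width scale into
# cos_t/sin_t so rotation and unit conversion are a single multiply
cos_t2, sin_t2 = cos_t / hist_width, sin_t / hist_width
r_bins_2 = j * sin_t2 + i * cos_t2
c_bins_2 = j * cos_t2 - i * sin_t2

print(np.isclose(r_bins_1, r_bins_2) and np.isclose(c_bins_1, c_bins_2))
```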
Hughes

Hugheschen, Thu, 30 Apr 2020 15:06:47 -0500
http://answers.opencv.org/question/229732/

How to detect angle of rotation of my target object in the image
http://answers.opencv.org/question/228975/how-to-detect-angle-of-rotation-of-my-target-object-in-the-image/

I would like to detect the angle of rotation of my box.
I have thought about building an image classification model with labels (0 degrees, 45 degrees, 90 degrees, 135 degrees). However, I suspect that would overcomplicate the problem, and classification would be hard since the features are almost the same.
Here are the sample images:
1. The first photo is a reference, rotated 0 degrees.
[C:\fakepath\IMG_8310.jpg](/upfiles/1586789656839307.jpg)
2. The second photo is a reference, rotated 45 degrees.
3. The third photo is a reference, rotated 90 degrees.
[C:\fakepath\IMG_1981.jpg](/upfiles/15867897114439075.jpg)
4. The fourth photo is a reference, rotated 135 degrees.
[C:\fakepath\IMG_0771.jpg](/upfiles/15867897507186083.jpg)
Any method is appreciated! Thank you for the help!
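If the box has a distinct long axis, a classifier may be unnecessary: segment the box and read the principal-axis angle from image moments (this is essentially what cv2.minAreaRect and cv2.fitEllipse estimate). A numpy sketch with a synthetic mask standing in for a real segmentation:

```python
import numpy as np

# Synthetic stand-in for a segmented box mask: points of a 4x1 rectangle
# rotated by a known angle (the posted images would first need a mask,
# e.g. from thresholding plus contour detection).
true_deg = 45.0
a = np.deg2rad(true_deg)
u = np.linspace(-2, 2, 81)
v = np.linspace(-0.5, 0.5, 21)
U, V = np.meshgrid(u, v)
pts = np.stack([U.ravel(), V.ravel()], axis=1) @ np.array(
    [[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

# Orientation of the principal axis from second central moments.
c = pts - pts.mean(axis=0)
mu20, mu02 = np.mean(c[:, 0] ** 2), np.mean(c[:, 1] ** 2)
mu11 = np.mean(c[:, 0] * c[:, 1])
angle = np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02)) % 180.0
print(angle)   # ~45 for this synthetic box
```

Note the estimate is only defined modulo 180 degrees, which still separates the 0/45/90/135 classes.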
JackyKoo, Mon, 13 Apr 2020 09:56:28 -0500
http://answers.opencv.org/question/228975/

Why does OpenCV use the Rodrigues rotation vector instead of Cayley's formula?
http://answers.opencv.org/question/222656/why-opencv-uses-rodrigues-rotation-vector-instead-of-cayleys-formula/

I was a fan of the Rodrigues rotation formula [1] until I learned about Cayley's formula for orthonormal matrices [2]. If I understand correctly, Rodrigues is not necessarily meant to produce a 3x3 rotation matrix but to rotate a vector with a computationally light formula. So maybe OpenCV uses this formula internally to do all transformations, skipping the calculation of the rotation matrix?
But when it comes to producing a 3x3 rotation matrix it seems, at a glance, that Cayley's formula is much simpler.
So why not use Cayley instead of Rodrigues?
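One plausible part of the answer, sketched below with both formulas implemented by hand: the Cayley map cannot represent rotations by 180 degrees, since (I + R) becomes singular there, whereas the Rodrigues vector covers them:

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def rodrigues(w):
    # Axis-angle to rotation matrix (the same mapping cv2.Rodrigues uses).
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(np.asarray(w) / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def cayley(w):
    # Cayley map: skew-symmetric parameters to a rotation matrix.
    A = skew(w)
    return np.linalg.inv(np.eye(3) - A) @ (np.eye(3) + A)

# Both produce valid rotations for generic inputs...
R = cayley(np.array([0.1, 0.2, 0.3]))
ok = np.allclose(R @ R.T, np.eye(3))

# ...but Cayley cannot reach 180-degree rotations: for a half-turn about z,
# (I + R) is singular, so no finite Cayley parameter maps to it.
R_pi = rodrigues(np.array([0.0, 0.0, np.pi]))
det_ipr = np.linalg.det(np.eye(3) + R_pi)
print(ok, det_ipr)   # True, ~0
```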
[1] https://docs.opencv.org/ref/master/d9/d0c/group__calib3d.html#ga61585db663d9da06b68e70cfbf6a1eac
[2] Murray, R. M.; Li, Z. & Sastry, S. S. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994, pp. 29, 214-222.

JamesBaxter, Mon, 02 Dec 2019 13:54:19 -0600
http://answers.opencv.org/question/222656/

Template matching rotation: C++ to Python
http://answers.opencv.org/question/221865/template-matching-rotation-c-to-python/

Hello. Please help translate from C++ to Python. There is a great example of template matching with rotation, but it is in C++.
https://answers.opencv.org/question/179797/template-matching-find-rotation-of-object-on-scene/

Crea, Sun, 17 Nov 2019 07:48:13 -0600
http://answers.opencv.org/question/221865/

Transform camera position from one ArUco marker to another
http://answers.opencv.org/question/219606/transform-camera-positon-from-one-aruco-marker-to-another/

I'm creating a university project with OpenCV Python and ArUco markers, where I would like to get a (relatively) robust pose estimate for the movement of the camera. I plan on using this for graphing indoor drone flight. For this, I have to transform the camera pose to world coordinates defined by the first seen marker.
I know there must be a transformation matrix between the markers, but I can't seem to figure out, what it is. I am trying with the difference of respective rvecs.
The code for the function in Python:
def TransformBetweenMarkers(tvec_m, tvec_n, rvec_m, rvec_n):
    tvec_m = np.transpose(tvec_m)  # tvec of 'm' marker
    tvec_n = np.transpose(tvec_n)  # tvec of 'n' marker
    # vector from 'm' to 'n' marker in the camera's coordinate system
    dtvec = tvec_m - tvec_n
    # get the markers' rotation matrices respectively
    R_m = cv2.Rodrigues(rvec_m)[0]
    R_n = cv2.Rodrigues(rvec_n)[0]
    # camera pose in 'm' marker's coordinate system
    tvec_mm = np.matmul(-R_m.T, tvec_m)
    # camera pose in 'n' marker's coordinate system
    tvec_nn = np.matmul(-R_n.T, tvec_n)
    # translational difference between markers in 'm' marker's system,
    # basically the origin of 'n'
    dtvec_m = np.matmul(-R_m.T, dtvec)
    # this gets me the same as tvec_mm,
    # but this only works if 'm' marker is seen
    # tvec_nm = dtvec_m + np.matmul(-R_m.T, tvec_n)
    # something with the rvec difference must give the transformation(???)
    drvec = rvec_m - rvec_n
    # transformed to 'm' marker
    drvec_m = np.transpose(np.matmul(R_m.T, np.transpose(drvec)))
    dR_m = cv2.Rodrigues(drvec_m)[0]
    # I want to transform tvec_nn with a single matrix,
    # so it would be interpreted in 'm' marker's system
    tvec_nm = dtvec_m + np.matmul(dR_m.T, tvec_nn)
    # objective: tvec_mm == tvec_nm
This is the best I could get, but there is still an error value of +-0.03 meters between the `tvec_mm` and `tvec_nm` translation values.
Is it possible to do any better than this? Is this even a legitimate transformation, or just a huge coincidence that it gives approximately the same values? Any ideas?
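Subtracting rvecs is not a valid composition; rotations compose multiplicatively. A sketch of the same bookkeeping with 4x4 homogeneous matrices (synthetic rvec/tvec values; the rodrigues helper reimplements the cv2.Rodrigues mapping so the snippet runs stand-alone):

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def rodrigues(rvec):
    # Same axis-angle mapping as cv2.Rodrigues for a 3-vector input.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(np.asarray(rvec) / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def pose_to_T(rvec, tvec):
    # 4x4 marker->camera transform from an estimatePoseSingleMarkers pair.
    T = np.eye(4)
    T[:3, :3] = rodrigues(rvec)
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

# Synthetic rvec/tvec pairs standing in for two detected markers.
T_cam_m = pose_to_T([0.1, -0.2, 0.05], [0.3, 0.1, 1.5])
T_cam_n = pose_to_T([-0.3, 0.1, 0.2], [-0.2, 0.0, 1.8])

# Fixed marker-to-marker transform: compose homogeneous matrices instead
# of subtracting rotation vectors.
T_m_n = np.linalg.inv(T_cam_m) @ T_cam_n

# Camera pose expressed in 'm' two ways: directly, and via marker 'n'.
cam_in_m_direct = np.linalg.inv(T_cam_m)
cam_in_m_via_n = T_m_n @ np.linalg.inv(T_cam_n)
print(np.allclose(cam_in_m_direct, cam_in_m_via_n))   # True
```

Once T_m_n has been recorded while both markers are visible, seeing only marker 'n' is enough to recover the camera pose in 'm' coordinates.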
Thank you!

Szepy, Fri, 11 Oct 2019 13:37:27 -0500
http://answers.opencv.org/question/219606/

Template matching invariant to rotations and noise
http://answers.opencv.org/question/219149/template-matching-invariant-to-rotations-and-noise/

Is there any way to enhance the maximum similarity value when the template in the image is rotated at a different angle than the saved template?
The problem is that when a stamp in the image is rotated by a small angle, template matching returns a small maximum matching value, so the program cannot know whether this small value is due to rotation or noise, or because the stamp is not in the image at all.

MostafaMohsen17, Wed, 02 Oct 2019 09:29:04 -0500
http://answers.opencv.org/question/219149/

ArUco orientation using the function aruco.estimatePoseSingleMarkers()
http://answers.opencv.org/question/215377/aruco-orientation-using-the-function-arucoestimateposesinglemarkers/

Hi everyone!
I'm trying to write a Python app that determines the position and orientation of an ArUco marker. I calibrated the camera, and I used *aruco.estimatePoseSingleMarkers*, which returns the translation and rotation vectors.
The translation vector works fine, but I don't understand how the rotation vector works. I took some pictures to illustrate my problem with the "roll rotation":
Here the rotation vector is approximately [in degree]: [180 0 0]
![image description](/upfiles/15626850347225475.png)
Here the rotation vector is approximately [in degree]: [123 -126 0]
![image description](/upfiles/15626851829885092.png)
And here the rotation vector is approximately [in degree]: [0 -180 0]
![image description](/upfiles/15626853815019584.png)
And I don't see the logic in these angles. I've tried the other two rotations (pitch and yaw) and they also appear "random". So if you have an explanation I would be very happy :)

lamaa, Tue, 09 Jul 2019 10:28:00 -0500
http://answers.opencv.org/question/215377/

Cannot get correct translation and rotation matrix in OpenCV Python
http://answers.opencv.org/question/215136/cannot-get-correct-translation-and-rotation-matrix-in-opencv-python/

I have been trying this for two days and for some reason I cannot get it to work. I have two cameras with different intrinsic camera matrices, set up with global coordinates in Blender:
Camera Left: -5, -5, 0 with 45 degrees rotation about Z-axis
Camera Right: 5, -5, 0 with -45 degrees rotation about Z-axis
I simulated points in Blender, so the positions on the cameras should be exact. I hard-coded these into the code, but I am getting these results:
**Angles**
Out[22]: (173.62487179673582, 165.61076366618005, 155.76859475230103)
Out[21]: (179.7648211135763, 168.02313442078392, -22.82952854817841)
**Translation**
Out[24]: array([ 0.04009013, 0.03941624, -0.99841832])
Out[23]: array([-0.04009013, -0.03941624, 0.99841832])
I should be getting, exactly:
**Angles** [0, -90, 0]
**Translation** [.707, 0, .707]
Scene setup for reference:
![image description](/upfiles/15621750508841992.png)
**Here is my code**
import cv2
import numpy as np
K_l = np.array([[1800.0, 0.0, 960.0], [0.0, 1800.0, 540.0], [0.0, 0.0, 1.0]])
K_r = np.array([[2100.0, 0.0, 960.0], [0.0, 2100.0, 540.0], [0.0, 0.0, 1.0]])
pts_l = np.array([[ 1041 , 540 ],
[ 925 , 465 ],
[ 786 , 458 ],
[ 1060 , 469 ],
[ 756 , 732 ],
[ 325 , 503 ],
[ 886 , 958 ],
[ 960 , 180 ],
[ 796 , 424 ],
[ 945 , 219 ],
[ 651 , 386 ],
[ 1731 , 676 ],
[ 572 , 590 ]])
pts_r = np.array([[ 1203 , 540 ],
[ 1001 , 453 ],
[ 825 , 458 ],
[ 1139 , 445 ],
[ 1072 , 752 ],
[ 418 , 516 ],
[ 410 , 886 ],
[ 1086 , 95 ],
[ 1151 , 405 ],
[ 1355 , 99 ],
[ 942 , 388 ],
[ 1445 , 883 ],
[ 994 , 589 ]])
F, mask = cv2.findFundamentalMat(pts_l.astype(float),pts_r.astype(float),cv2.FM_LMEDS)
E = np.dot(np.dot(np.transpose(K_r),F),K_l)
U, S, Vt = np.linalg.svd(E)
W = np.array([0.0, -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]).reshape(3, 3)
R_1 = U.dot(W).dot(Vt)
angles1, _, _, _, _, _ = cv2.RQDecomp3x3(R_1)
R_2 = U.dot(W.T).dot(Vt)
angles2, _, _, _, _, _ = cv2.RQDecomp3x3(R_2)
T1 = U[:, 2]
T2 = -U[:, 2]

gyronikeleg, Wed, 03 Jul 2019 12:31:07 -0500
http://answers.opencv.org/question/215136/

Calculate Euler angles after solvePnP
http://answers.opencv.org/question/211087/calculate-euler-angles-after-solvepnp/

I'm writing an iOS app that detects facial points using ML Kit and then uses solvePnP to calculate the pitch. I implemented the solution given here:
https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
That seems to work well, as the projected nose line drawn looks good.
Next, I try to convert the rotation vector to euler angles. I implemented this solution:
http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
This part is where it seems to fall apart. The calculated yaw/pitch/roll are clearly wrong for my reference frame. Perhaps there is an issue converting between coordinate systems?
Here is my code:
```
+(NSArray*) estimatePose:(FIRVisionFace *)face imgSize:(CGSize)imgSize {
// Contour legend: https://firebase.google.com/docs/ml-kit/images/examples/face_contours.svg
FIRVisionFaceContour* faceOval = [face contourOfType:FIRFaceContourTypeFace];
FIRVisionFaceContour* leftEyeContour = [face contourOfType:FIRFaceContourTypeLeftEye];
FIRVisionFaceContour* rightEyeContour = [face contourOfType:FIRFaceContourTypeRightEye];
FIRVisionFaceContour* noseBridge = [face contourOfType:FIRFaceContourTypeNoseBridge];
FIRVisionFaceContour* upperLipTop = [face contourOfType:FIRFaceContourTypeUpperLipTop];
FIRVisionPoint* chin = faceOval.points[18];
FIRVisionPoint* leftEyeLeftCorner = leftEyeContour.points[0];
FIRVisionPoint* rightEyeRightCorner = rightEyeContour.points[8];
FIRVisionPoint* noseTip = noseBridge.points[1];
FIRVisionPoint* leftMouthCorner = upperLipTop.points[0];
FIRVisionPoint* rightMouthCorner = upperLipTop.points[10];
std::vector<cv::Point2d> image_points;
std::vector<cv::Point3d> model_points;
// 2D/3D model points using https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/#code
image_points.push_back( cv::Point2d(noseTip.x.doubleValue, noseTip.y.doubleValue) ); // Nose tip
image_points.push_back( cv::Point2d(chin.x.doubleValue, chin.y.doubleValue) ); // Chin
image_points.push_back( cv::Point2d(leftEyeLeftCorner.x.doubleValue, leftEyeLeftCorner.y.doubleValue) ); // Left eye left corner
image_points.push_back( cv::Point2d(rightEyeRightCorner.x.doubleValue, rightEyeRightCorner.y.doubleValue) ); // Right eye right corner
image_points.push_back( cv::Point2d(leftMouthCorner.x.doubleValue, leftMouthCorner.y.doubleValue) ); // Left Mouth corner
image_points.push_back( cv::Point2d(rightMouthCorner.x.doubleValue, rightMouthCorner.y.doubleValue) ); // Right mouth corner
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f)); // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f)); // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f)); // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f)); // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f)); // Right mouth corner
double focal_length = imgSize.width; // Approximate focal length.
cv::Point2d center = cv::Point2d(imgSize.width / 2, imgSize.height / 2);
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0 , focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4,1,cv::DataType<double>::type); // Assuming no lens distortion
// Output rotation and translation
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
// Calculate a point to draw line from nose tip.
std::vector<cv::Point3d> nose_end_point3D;
std::vector<cv::Point2d> nose_end_point2D;
nose_end_point3D.push_back(cv::Point3d(0,0,1000.0));
cv::projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
NSArray *noseLine = [NSArray arrayWithObjects:
[NSValue valueWithCGPoint:CGPointMake(noseTip.x.doubleValue, noseTip.y.doubleValue)],
[NSValue valueWithCGPoint:CGPointMake(nose_end_point2D[0].x, nose_end_point2D[0].y)],
nil];
// Convert rotation vector to yaw/pitch/roll:
// http://answers.opencv.org/question/16796/computing-attituderoll-pitch-yaw-from-solvepnp/?answer=52913#post-id-52913
cv::Mat rodrigues_rotation_vector;
cv::Rodrigues(rotation_vector, rodrigues_rotation_vector);
cv::Vec3d euler_angles;
getEulerAngles(rodrigues_rotation_vector, euler_angles);
NSLog(@"mlkit yaw = %f, roll = %f", face.headEulerAngleY, face.headEulerAngleZ);
NSLog(@"opencv yaw = %f, pitch = %f, roll = %f", euler_angles[1], euler_angles[0], euler_angles[2]);
return noseLine;
}
void getEulerAngles(cv::Mat &rotCamerMatrix,cv::Vec3d &euler_angles) {
cv::Mat cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ;
double* _r = rotCamerMatrix.ptr<double>();
double projMatrix[12] = {
_r[0], _r[1], _r[2], 0,
_r[3], _r[4], _r[5], 0,
_r[6], _r[7], _r[8], 0
};
decomposeProjectionMatrix( cv::Mat(3, 4, CV_64FC1, projMatrix),
cameraMatrix,
rotMatrix,
transVect,
rotMatrixX,
rotMatrixY,
rotMatrixZ,
euler_angles);
}
```
For example, when I face straight to the camera, I get the following:
```
mlkit yaw = 3.786244, roll = 3.352636
opencv yaw = -1.416621, pitch = -179.549207, roll = -5.026994
```
And when I face left (pitch close to flat), I get the following:
```
mlkit yaw = -19.004604, roll = 4.542935
opencv yaw = -65.307372, pitch = -6.605039, roll = -57.922035
```
What am I doing wrong?
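One likely culprit (an observation added in editing): for a y-up face model seen head-on, the OpenCV camera frame (y down, z forward) itself contributes a roughly 180-degree rotation about x, which matches the "pitch = -179.5" in the first log; on top of that, decomposeProjectionMatrix's Euler convention need not match ML Kit's. Extracting the angles with one explicit convention makes this visible:

```python
import numpy as np

def euler_xyz_from_R(R):
    # Angles in degrees such that R = Rz(z) @ Ry(y) @ Rx(x): one explicit
    # convention, instead of relying on decomposeProjectionMatrix's choice.
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        x = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
        y = np.degrees(np.arctan2(-R[2, 0], sy))
        z = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    else:  # gimbal lock
        x = np.degrees(np.arctan2(-R[1, 2], R[1, 1]))
        y = np.degrees(np.arctan2(-R[2, 0], sy))
        z = 0.0
    return x, y, z

# A face looking straight at the camera: the model's +y is up while the
# OpenCV camera frame has +y down, so the pose includes a half-turn about
# x. That matches the "pitch = -179.5" log line and is not a bug by itself.
R_facing = np.diag([1.0, -1.0, -1.0])
pitch, yaw, roll = euler_xyz_from_R(R_facing)
print(pitch, yaw, roll)      # ~180, 0, 0

# Removing that fixed axis flip first gives head-relative angles near 0.
R_flip = np.diag([1.0, -1.0, -1.0])
pitch2, yaw2, roll2 = euler_xyz_from_R(R_flip @ R_facing)
print(pitch2, yaw2, roll2)   # ~0, 0, 0
```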
jacob, Tue, 02 Apr 2019 10:14:55 -0500
http://answers.opencv.org/question/211087/

projectPoints functionality question
http://answers.opencv.org/question/96474/projectpoints-functionality-question/

I'm doing something similar to the pose-estimation tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes on the object in the image plane.
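The manual transform route should reproduce projectPoints exactly; if it doesn't, the extrinsics are probably being applied twice or in the inverse direction. A sketch of the two equivalent paths with made-up numbers (rodrigues reimplemented so the snippet runs without OpenCV):

```python
import numpy as np

def rodrigues(rvec):
    # Axis-angle to rotation matrix, as cv2.Rodrigues does for a 3-vector.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec) / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(K_mat, X_cam):
    x = K_mat @ X_cam
    return x[:2] / x[2]

K_mat = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
rvec = np.array([0.2, -0.1, 0.3])
tvec = np.array([0.1, 0.0, 2.0])
X = np.array([0.3, -0.2, 0.4])   # axis endpoint in object coordinates

# Route 1: let the extrinsics do the transform (what projectPoints does
# internally when given rvec/tvec).
R = rodrigues(rvec)
uv1 = project(K_mat, R @ X + tvec)

# Route 2: transform manually first, then "project" with identity/zero
# extrinsics. The transformed point IS the camera-frame 3D point sought.
X_cam = R @ X + tvec
uv2 = project(K_mat, np.eye(3) @ X_cam + np.zeros(3))

print(np.allclose(uv1, uv2))   # True
```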
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I applied the rotation and translation to the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero vector for the rotation and translation respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!

bfc_opencv, Tue, 14 Jun 2016 21:19:07 -0500
http://answers.opencv.org/question/96474/

Roll, Pitch, Yaw ROS right-hand notation from ArUco marker rvec
http://answers.opencv.org/question/208481/roll-pitch-yaw-ros-right-hand-notation-from-aruco-marker-rvec/

I'm trying to get the RPY of an ArUco marker from the camera view using the ROS notation. ROS axis notation is right-handed, where positive x points north, y west, and z upwards.
I'm following this post, http://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/, but I can't get it to work properly for the ROS notation. This is my implementation:
def rpy_decomposition(self, rvec):
    R, _ = cv2.Rodrigues(rvec)
    sin_x = math.sqrt(R[2, 0] * R[2, 0] + R[2, 1] * R[2, 1])
    singular = sin_x < 1e-6
    if not singular:
        z1 = math.atan2(R[2, 0], R[2, 1])   # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = math.atan2(R[0, 2], -R[1, 2])  # around z2-axis
    else:  # gimbal lock
        z1 = 0                              # around z1-axis
        x = math.atan2(sin_x, R[2, 2])      # around x-axis
        z2 = 0                              # around z2-axis
    z2 = -(2 * math.pi - z2) % (2 * math.pi)
    return z1, x, z2
I can't really find working code for this in Python or C++. Thanks
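A sketch of one way to bridge the conventions (the change-of-basis matrix below assumes the OpenCV optical frame maps to the ROS body frame as x_ros = z_cv, y_ros = -x_cv, z_ros = -y_cv; verify this against how your frames are actually defined):

```python
import numpy as np

# Hypothetical fixed change of basis from the OpenCV optical frame
# (x right, y down, z forward) to the ROS body convention
# (x forward, y left, z up).
P = np.array([[0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0]])

def rpy_from_R(R):
    # ZYX (yaw-pitch-roll) extraction, the convention tf/ROS uses.
    roll = np.arctan2(R[2, 1], R[2, 2])
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# A marker whose axes coincide with the camera axes, in OpenCV coordinates:
R_cv = np.eye(3)

# Re-express the rotation in ROS axes before extracting the angles.
R_ros = P @ R_cv @ P.T
roll, pitch, yaw = rpy_from_R(R_ros)
print(roll, pitch, yaw)   # all ~0 once the axes agree
```

The key point is that the conjugation P R P.T must happen before any Euler extraction; extracting angles from the OpenCV-frame matrix and then relabeling them does not work.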
veilkrand, Wed, 06 Feb 2019 08:40:35 -0600
http://answers.opencv.org/question/208481/

How to determine the angle of rotation?
http://answers.opencv.org/question/205685/how-to-determine-the-angle-of-rotation/

There is a square with equal sides in an image (inside another square).
![image description](/upfiles/15453320057265702.jpg)
Does OpenCV have functions which can help to efficiently calculate the angle?
ya_ocv_user, Thu, 20 Dec 2018 12:55:19 -0600
http://answers.opencv.org/question/205685/

Dear, I have rvec (rotation vector) and tvec (translation vector)..
http://answers.opencv.org/question/204063/dear-i-have-rvecrotation-vector-and-tvectranslation-vector/

How can I find the camera pose (eye vector)? I would like to go on to find the reflectance. Thank you in advance.

zar zar, Mon, 26 Nov 2018 04:47:56 -0600
http://answers.opencv.org/question/204063/

Rotation matrix to rotation vector (Rodrigues function)
http://answers.opencv.org/question/85360/rotation-matrix-to-rotation-vector-rodrigues-function/

Hello,
I have a 3x3 rotation matrix that I obtained from stereoCalibrate (using the ROS stereo calibration node). I need to obtain a rotation vector (1x3), so I used the Rodrigues formula. When I checked the result against the MATLAB implementation by [Pietro Perona - California Institute of Technology](http://www.mathworks.com/matlabcentral/fileexchange/41511-deprecated-light-field-toolbox-v0-2-v0-3-now-available/content/LFToolbox0.2/SupportFunctions/CameraCal/rodrigues.m), I got two different results.
This is the C++ code:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <tf/transform_broadcaster.h>
#include <ros/param.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <cv_bridge/cv_bridge.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    std::vector<double> T, R;
    double cols, rows, R_cols, R_rows;
    int i, j;
    cv::Mat rot_vec = Mat::zeros(1,3,CV_64F), rot_mat = Mat::zeros(3,3,CV_64F);

    ros::init(argc, argv, "get_extrinsic");
    ros::NodeHandle node;

    if(!node.getParam("translation_vector/cols", cols))
    {
        ROS_ERROR_STREAM("Translation vector (cols) could not be read.");
        return 0;
    }
    if(!node.getParam("translation_vector/rows", rows))
    {
        ROS_ERROR_STREAM("Translation vector (rows) could not be read.");
        return 0;
    }
    T.reserve(cols*rows);
    if(!node.getParam("rotation_matrix/cols", cols))
    {
        ROS_ERROR_STREAM("Rotation matrix (cols) could not be read.");
        return 0;
    }
    if(!node.getParam("rotation_matrix/rows", rows))
    {
        ROS_ERROR_STREAM("Rotation matrix (rows) could not be read.");
        return 0;
    }
    R.reserve(cols*rows);
    if(!node.getParam("translation_vector/data", T))
    {
        ROS_ERROR_STREAM("Translation vector could not be read.");
        return 0;
    }
    if(!node.getParam("rotation_matrix/data", R))
    {
        ROS_ERROR_STREAM("Rotation matrix could not be read.");
        return 0;
    }

    for(i=0; i<3; i++)
    {
        for(j=0; j<3; j++)
            rot_mat.at<double>(i,j) = R[i*3+j];
    }

    std::cout << "Rotation Matrix:" << endl;
    for(i=0; i<3; i++)
    {
        for(j=0; j<3; j++)
            std::cout << rot_mat.at<double>(i,j) << " ";
        std::cout << endl;
    }
    std::cout << endl;

    std::cout << "Rodrigues: " << endl;
    Rodrigues(rot_mat, rot_vec);
    for(i=0; i<3; i++)
        std::cout << rot_vec.at<double>(1,i) << " ";
    std::cout << endl;

    ros::spin();
    return 0;
}
And its output is:
Rotation Matrix:
-0.999998 -0.00188887 -0.000125644
0.0018868 -0.999888 0.014822
-0.000153626 0.0148217 0.99989
Rodrigues:
0.0232688 3.13962 4.94066e-324
But when I load the same rotation matrix in matlab and use the rodrigues function I get the following:
R =
-1.0000 -0.0019 -0.0001
0.0019 -0.9999 0.0148
-0.0002 0.0148 0.9999
>> rodrigues(R)
ans =
-0.0002
0.0233
3.1396
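As a sanity check independent of both implementations, MATLAB's answer can be fed back through the textbook Rodrigues formula in plain Python; if it reproduces the rotation matrix above, the MATLAB ordering is at least self-consistent. This is a sketch of the standard formula, not a call into either library:

```python
import math

def rodrigues_vec_to_mat(rvec):
    """Axis-angle (Rodrigues) vector -> 3x3 rotation matrix.

    R = I + sin(theta)*K + (1 - cos(theta))*K^2, with K = skew(axis).
    """
    theta = math.sqrt(sum(v * v for v in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (v / theta for v in rvec)
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    return [
        [c + kx * kx * C,      kx * ky * C - kz * s, kx * kz * C + ky * s],
        [ky * kx * C + kz * s, c + ky * ky * C,      ky * kz * C - kx * s],
        [kz * kx * C - ky * s, kz * ky * C + kx * s, c + kz * kz * C],
    ]

# MATLAB's rounded answer should reproduce the rotation matrix to ~3 decimals.
R = rodrigues_vec_to_mat([-0.0002, 0.0233, 3.1396])
```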
I can see that the numbers match, but they are in different positions, and there also seems to be an issue with the signs. Which formula should I trust?
asked by aripod, Mon, 25 Jan 2016 07:54:16 -0600 (http://answers.opencv.org/question/85360/)

How to find rotation angle from homography matrix?
http://answers.opencv.org/question/203890/how-to-find-rotation-angle-from-homography-matrix/
I have 2 images and I am finding similar key points with SURF.
I want to find the rotation angle between the two images from the homography matrix.
Can someone please tell me how to do this?
if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
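If the homography is close to a similarity transform (rotation plus uniform scale plus translation), the in-plane rotation can be read from the top-left 2x2 block of M with atan2. A minimal pure-Python sketch, using a hypothetical homography built from a known 30-degree rotation:

```python
import math

def rotation_from_homography(M):
    """In-plane rotation angle in degrees, assuming M is close to a
    similarity transform; the uniform scale cancels inside atan2."""
    return math.degrees(math.atan2(M[1][0], M[0][0]))

# Hypothetical M: 30-degree rotation, scale 1.5, translation (10, 20).
t, s = math.radians(30.0), 1.5
M = [[s * math.cos(t), -s * math.sin(t), 10.0],
     [s * math.sin(t),  s * math.cos(t), 20.0],
     [0.0,              0.0,              1.0]]
angle = rotation_from_homography(M)  # -> 30.0 (up to floating point)
```

For a general perspective homography (with out-of-plane rotation) a single angle is not well defined; decomposing with cv2.decomposeHomographyMat would be the more general route.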
Thank you.
asked by ronak.dedhia, Thu, 22 Nov 2018 23:30:21 -0600 (http://answers.opencv.org/question/203890/)

Reverse camera angle w/ aruco tracking
http://answers.opencv.org/question/203907/reverse-camera-angle-w-aruco-tracking/
I have ArUco tracking working and, from cobbling together stuff from various code samples, ended up with the code below, where `final` is the view matrix passed to the camera. The problem is that the rotation of the camera isn't exactly what I need. I'm not sure which axis is wrong, but you can see in the following video that I want the base of the model to be sitting on the marker, yet it's not oriented quite right. Any tips to get it right would be great! I'm open to re-orienting it in Blender too if that's the right solution; I'm just not sure exactly _how_ it's wrong right now.
Video example:
https://youtu.be/-7WDxa-e2Oo
Code:
const inverse = cv.matFromArray(4,4, cv.CV_64F, [
1.0, 1.0, 1.0, 1.0,
-1.0,-1.0,-1.0,-1.0,
-1.0,-1.0,-1.0,-1.0,
1.0, 1.0, 1.0, 1.0
]);
cv.estimatePoseSingleMarkers(markerCorners, 0.1, cameraMatrix, distCoeffs, rvecs, tvecs);
cv.Rodrigues(rvecs, rout);
const tmat = tvecs.data64F;
const rmat = rout.data64F;
const viewMatrix = cv.matFromArray(4,4,cv.CV_64F, [
rmat[0],rmat[1],rmat[2],tmat[0],
rmat[3],rmat[4],rmat[5],tmat[1],
rmat[6],rmat[7],rmat[8],tmat[2],
0.0,0.0,0.0,1.0
]);
const output = cv.Mat.zeros(4,4, cv.CV_64F);
cv.multiply(inverse, viewMatrix, output);
cv.transpose(output, output);
const final = output.data64F;
asked by dakom, Fri, 23 Nov 2018 02:24:57 -0600 (http://answers.opencv.org/question/203907/)

Identify object on a conveyor belt
http://answers.opencv.org/question/201824/identify-object-on-a-conveyor-belt/
Hello! I'm thinking of trying out OpenCV for my robot.
I want the program to be able to identify the metal parts on a conveyor belt that are single ones, and not the ones lying in clusters.
I will buy a Raspberry Pi with the Raspberry Pi camera module (is this a good idea for this project?).
I want the program to return the X-Y coordinate (or the position of the pixel in the image) of a specific place on the metal part (so that the robot can lift it where it is supposed to be lifted). I would also want the program to allow an adjustable degree of freedom in the orientation (rotation) of the single metal part to be localized.
**Where do I even start?**
A simple drawing of the robot
![image description](https://i.imgur.com/YE3LKpV.png)
An example of what the images the program will process could look like (I have not bought the final camera and lighting yet).
![image description](https://i.imgur.com/OMXMq5M.jpg)
Here is the metal part I want to pick up from the conveyor belt.
![image description](https://i.imgur.com/uA0buvC.jpg)
asked by Hatmpatn, Fri, 26 Oct 2018 01:02:17 -0500 (http://answers.opencv.org/question/201824/)

solvePnP with a priori known pitch and roll
http://answers.opencv.org/question/199943/solvepnp-with-a-priori-known-pitch-and-roll/
How do I correctly call solvePnP (to estimate the pose of a large ArUco board) if the board orientation (pitch and roll, not yaw) is known from an IMU?
asked by okalachev, Sat, 22 Sep 2018 13:59:48 -0500 (http://answers.opencv.org/question/199943/)

Triangulation gives weird results for rotation
http://answers.opencv.org/question/199673/triangulation-gives-weird-results-for-rotation/
OpenCV version 3.4.2
I am taking a stereo pair and using recoverPose to get the [R|t] pose of the camera. If I start at the origin and use triangulatePoints, the result looks roughly as expected, although I would have expected the z values of the points to be positive.
These are the poses of the cameras [R|t]
P0: [1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 1, 0]
P1: [0.9999726146107655, -0.0007533190856300971, -0.007362237354563941, 0.9999683127209806;
0.0007569149205790131, 0.9999995956157767, 0.0004856419317479311, -0.001340876868928852;
0.007361868534054914, -0.0004912012195572309, 0.9999727804360723, 0.007847012372698725]
I get these results, where the red dot and the yellow line indicate the camera pose (x positive is right, y positive is down):
![image description](/upfiles/1537317206819271.png)
When I rotate the first camera by 58.31 degrees and then use recoverPose to get the relative pose of the second camera the results are wrong.
Pose matrices where P0 is rotated by 58.31 degrees around the y axis before calling my code below.
P0: [0.5253219888177297, 0, 0.8509035245341184, 0;
0, 1, 0, 0;
-0.8509035245341184, 0, 0.5253219888177297, 0]
P1: [0.5315721563840478, -0.0007533190856300971, 0.8470126770406503, 0.5319823932782873;
-1.561037994149129e-05, 0.9999995956157767, 0.0008991799591322519, -0.001340876868928852;
-0.8470130118915117, -0.0004912012195572309, 0.5315719296650566, -0.8467543535708145]
(x positive is right, y positive is down)
![image description](/upfiles/15373172174565108.png)
The pose of the second frame is calculated as follows:
new_frame->E = cv::findEssentialMat(last_frame->points, new_frame->points, K, cv::RANSAC, 0.999, 1.0, new_frame->mask);
int res = recoverPose(new_frame->E, last_frame->points, new_frame->points, K, new_frame->local_R, new_frame->local_t, new_frame->mask);
// https://stackoverflow.com/questions/37810218/is-the-recoverpose-function-in-opencv-is-left-handed
// Convert so transformation is P0 -> P1
new_frame->local_t = -new_frame->local_t;
new_frame->local_R = new_frame->local_R.t();
new_frame->pose_t = last_frame->pose_t + (last_frame->pose_R * new_frame->local_t);
new_frame->pose_R = new_frame->local_R * last_frame->pose_R;
hconcat(new_frame->pose_R, new_frame->pose_t, new_frame->pose);
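For what it's worth, the multiplication order in the chaining above can be probed in plain Python. This is a hedged sketch with hypothetical helpers (not the code in question): products of rotations about the same axis commute and simply add angles, which is why a mistake in the order of pose_R * local_R versus local_R * pose_R can stay hidden until rotations about different axes are involved:

```python
import math

def matmul3(A, B):
    """3x3 matrix product (hypothetical helper for illustration)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_y(deg):
    """Rotation about the Y axis, same convention as the P0 matrix above."""
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_x(deg):
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

# Same-axis rotations commute: 30 + 28.31 degrees reproduces the 58.31-degree
# P0 entry 0.52532... quoted above, in either order.
P = matmul3(rot_y(30.0), rot_y(28.31))

# Mixed-axis rotations do not commute, so the side on which the local
# increment is applied changes the accumulated pose.
AB = matmul3(rot_y(30.0), rot_x(20.0))
BA = matmul3(rot_x(20.0), rot_y(30.0))
```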
I then call triangulatePoints using the K * P0 and K * P1 on the corresponding points.
I feel like this is some kind of coordinate system issue as the points I would expect to have positive z values have a -z value in the plots and so the rotation is behaving strangely. I haven't been able to figure out what I need to do to fix it.
EDIT: Here is a gif of what's going on as I rotate through 360 degrees around Y. The cameras are still parallel. What am I missing? Shouldn't the shape of the point cloud remain the same if both camera poses keep the same relative positions, even though they have been rotated around the origin? Why are the points squashed into the X axis?
![image description](/upfiles/15373205818094867.gif)
asked by maym86, Tue, 18 Sep 2018 14:55:35 -0500 (http://answers.opencv.org/question/199673/)

Rotation vector interpretation
http://answers.opencv.org/question/197981/rotation-vector-interpretation/
I use OpenCV's cv2.solvePnP() function to calculate rotation and translation vectors. Rotation is returned as rvec (a vector with 3 DOF). I would like to ask for help with interpreting the rvec.
As far as I understand, rvec is the rotation vector representation:
- the rotation vector is the axis of the rotation
- the length of the rotation vector is the rotation angle θ in radians (around that axis)
Rvec returned by solvePnP:
rvec =
[[-1.5147142 ]
[ 0.11365167]
[ 0.10590861]]
Then:
angle_around_rvec = sqrt((-1.5147142)^2 + 0.11365167^2 + 0.10590861^2) [rad] = 1.52266 [rad] = 1.52266*180/π [deg] ≈ 87.24 [deg]
**1. Does 3 rvec components correspond to world coordinates? Or what are these directions?**
**2. Can I interpret the vector components as separate rotation angles in radians around components directions?**
My rvec components interpretation:
angle_around_X = -1.5147142 [rad] = -1.5147142*180/π [deg] ≈ -86.79 [deg]
angle_around_Y = 0.11365167 [rad] = 0.11365167*180/π [deg] ≈ 6.51 [deg]
angle_around_Z = 0.10590861 [rad] = 0.10590861*180/π [deg] ≈ 6.07 [deg]
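The first reading (a single angle about a single axis) can be checked numerically in plain Python; this is a sketch of the axis-angle arithmetic, not an OpenCV call:

```python
import math

rvec = [-1.5147142, 0.11365167, 0.10590861]
theta = math.sqrt(sum(v * v for v in rvec))   # rotation angle in radians
axis = [v / theta for v in rvec]              # unit vector: the rotation axis
# theta is about 1.52266 rad, i.e. about 87.24 degrees; the components of
# rvec are theta times the unit axis, NOT three separate per-axis angles.
angle_deg = math.degrees(theta)
```

Interpreting each component as an independent rotation about X, Y, Z (the second reading) only approximates reality for small angles; here theta is about 1.5 rad, so the per-component numbers should not be read that way.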
My usecase:
I have the image coordinates of four points, and I know the coordinates of these points in the real world. I know the camera intrinsic matrix. I use PnP3 to get the rotation and translation vectors. From the rotation, I would like to find out what the angles around the fixed global/world axes X, Y, Z are. I am NOT interested in Euler angles. I want to find out how an object is rotated around the fixed world coordinates (not its own coordinate system).
I would really appreciate your help. I feel lost in rotation.
Thank you in advance.
asked by dziadyge, Thu, 23 Aug 2018 13:55:50 -0500 (http://answers.opencv.org/question/197981/)

Coordinate system used in the surface_matching module
http://answers.opencv.org/question/196387/coordinate-system-used-in-the-surface_matching-module/
Hello,
I'm using the surface matching module within a project in Unity through a C++ DLL.
I'm attempting to match two of the same models in different poses, using their vertices as point clouds. So far I have the translation working correctly, but I'm having some trouble interpreting the rotations. I've tried guessing at the coordinate system by applying several combinations of rotations, axis swapping and inverting to the resulting quaternion but I've been unable to reach a complete solution.
I'm uncertain whether the coordinate system used in surface_matching is left or right handed and even if the quaternion in the Pose3D structure is represented as [x,y,z,w] or [w,x,y,z]. Could anyone offer some advice?
Thanks in advance.
asked by MrCharles, Wed, 25 Jul 2018 11:29:51 -0500 (http://answers.opencv.org/question/196387/)

Method for finding Orientation Error using Axis-Angle
http://answers.opencv.org/question/193675/method-for-finding-orientation-error-using-axis-angle/
Hi,
I have reference values for roll, pitch and yaw (Euler angles) and my estimates of the same. I want to find the error between the two.
If I convert each set of RPY values to a rotation matrix, I see some possible ways (see below) of finding the orientation error.
I recently came across this openCV function in the calib3d module: [get_rotation_error](https://github.com/opencv/opencv/pull/11506) that uses Rodrigues/Axis-Angle (I think they mean the same) for finding the error between 2 rotation matrices.
**I have 2 questions** -
1) The method given in [get_rotation_error](https://github.com/opencv/opencv/pull/11506) seems to "subtract" the two rotation matrices by transposing one (I am not sure what the negative sign is about):

    error_mat = R_ref * R.transpose() * -1
    error = cv::norm( cv::Rodrigues ( error_mat ))
**How are we supposed to interpret the output?** (I believe the output of cv::norm(rodrigues_vector) is the angle of the Rodrigues vector according to the OpenCV convention.) Does this mean I simply need to convert it to degrees to find the angle error (between the reference and my estimates) in degrees?
I would also like to mention that **this method keeps returning 3.14159** even for wildly different values of the reference and my estimates. Is there something that I'm missing?
======
2) I thought of another method, slightly different from the above. What if I do the following:

    my_angle = cv::norm (cv::Rodrigues ( R ))
    reference_angle = cv::norm (cv::Rodrigues ( R_ref ))
    error = reference_angle - my_angle
**Is there something wrong** with method 2)? I have tried it, and it gives a different output compared to method 1).
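To make the difference between the two methods concrete: method 2 compares only magnitudes, so two rotations of the same angle about different axes give zero error, while method 1's relative-rotation angle (the geodesic distance) does not. A pure-Python sketch with hypothetical helpers (not OpenCV code):

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def geodesic_angle(Ra, Rb):
    """Angle of the relative rotation Ra * Rb^T (method 1, modulo sign
    conventions).  trace(Ra * Rb^T) equals the sum of elementwise products."""
    tr = sum(Ra[i][j] * Rb[i][j] for i in range(3) for j in range(3))
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

R1 = rot_x(math.radians(30.0))   # 30 degrees about X
R2 = rot_z(math.radians(30.0))   # 30 degrees about Z
# Method 2: both rvec magnitudes are 30 degrees, so its error is 0.
# Method 1: the relative rotation is about 42 degrees, not 0.
err_deg = math.degrees(geodesic_angle(R1, R2))
```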
I would be very grateful if someone can answer the above queries or even point me in the right direction.
Thanks!
asked by malharjajoo, Tue, 12 Jun 2018 21:05:54 -0500 (http://answers.opencv.org/question/193675/)

Strange ArUco behavior / OpenCV SolvePnp
http://answers.opencv.org/question/190853/strange-aruco-behavior-opencv-solvepnp/
Hi there,
I tested the accuracy of ArUco markers at several distances, using a board with one non-rotated marker, several markers rotated about their z-axis (by known angles), and several more non-rotated markers. All markers' rotations are measured relative to the first non-rotated marker.
Now I calculate the transformation (with quaternions) from the marker of interest to the reference marker. The output for my accuracy measure is the angle theta from the axis-angle representation. The strange thing is that the rotation error of the rotated (5°, 20°, 30°, 45°, 90°, 180°) markers is small (max. 1°), while the error of the non-rotated marker (0°) is large (>2°).
I do the subpixel corner refinement, and the detected markers look fine to me. I can exclude camera error, because changing positions (near the center of the image or not) does not change the accuracy. I also switched the IDs.
How can it be that rotated markers are more accurate than non-rotated markers? Could it be a singularity in the detection, or numeric errors?
Thank you for your help!
Sarah
asked by sarah1802, Fri, 04 May 2018 02:31:17 -0500 (http://answers.opencv.org/question/190853/)

How can I get the accuracy between two angles (euler or other)?
http://answers.opencv.org/question/185672/how-can-i-get-the-accuracy-between-two-angles-euler-or-other/
Hi together,
I have a board of markers and detect their angles with respect to the camera. I know that one marker should be rotated relative to the other by 5°, or another already-known angle, about the z-axis. Due to the camera-marker "relationship" there is always a "flip offset" of 180°-X (the X is there because I did not capture the pictures perfectly perpendicular). Now I get, for instance, these angles (Euler ZYX):
A -178.155774553622°; -1.81510372911041°; 5.46620496345042° (rotated marker 5°)
B 175.347721071838°; -1.19249002241927°; -0.334586900200200° (reference "zero" marker 0°)
C -6.49650437453965 °; -0.622613706691140°; 5.80079186365062° (the difference between marker 5° and marker 0°)
D 0°; 0°; 5° (the difference it should be)
My problem is that, depending on the convention (XYZ/ZYX/ZXZ and so on), there are always different angles. I know it should be like that, but I don't know how to calculate the difference in a proper way, so that I can compare what the real difference over each axis is.
Is there any way to compare the angles in a better way, maybe not in Euler angles but in a way that lets me say "1° offset"?
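One convention-independent way to get a single "offset in degrees" is to convert both orientations to rotation matrices and take the angle of the relative rotation; that angle does not depend on which Euler convention produced the matrices. A pure-Python sketch (assuming ZYX means the product Rz*Ry*Rx; the helper names are hypothetical):

```python
import math

def euler_zyx_deg(z, y, x):
    """Rotation matrix from ZYX Euler angles in degrees (Rz * Ry * Rx)."""
    cz, sz = math.cos(math.radians(z)), math.sin(math.radians(z))
    cy, sy = math.cos(math.radians(y)), math.sin(math.radians(y))
    cx, sx = math.cos(math.radians(x)), math.sin(math.radians(x))
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Rz, mul(Ry, Rx))

def relative_angle_deg(Ra, Rb):
    """Angle of the relative rotation Ra * Rb^T, as a single number."""
    tr = sum(Ra[i][j] * Rb[i][j] for i in range(3) for j in range(3))
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

# A marker flipped 180 degrees and rotated 5 degrees about X, compared with
# the bare 180-degree flip: the offset comes out as 5 degrees.
offset = relative_angle_deg(euler_zyx_deg(180, 0, 5), euler_zyx_deg(180, 0, 0))
```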
Thank you very much
Sarah
asked by sarah1802, Wed, 28 Feb 2018 04:57:04 -0600 (http://answers.opencv.org/question/185672/)

Centering opencv rotation
http://answers.opencv.org/question/182793/centering-opencv-rotation/
I'm having difficulties getting OpenCV rotations to center.
The rotation must retain all data so no clipping is allowed.
My first test case is using 90 and -90 degrees to simplify the transformation matrix (see https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html)
I also thought the best way to observe rotations is to use a simple case where the border pixel values are set to observe how the box rotates.
The Python code I tried came from John Fremlin's blog post on rotation (http://john.freml.in/opencv-rotation).
Below is a picture of the original non-rotated image in python. Use the grey point as (4,9) for reference.
![image description](/upfiles/1516341559715749.png)
Then after running the python script (script below), I get a rotation where it is shifted to the right one column. Note the reference point is at (1,4) when it should be at (0,4)
![image description](/upfiles/15163417082184282.png)
Below is the Python script. I added width and height offsets to the function to allow me to experiment with offsets to the tx and ty rotation parameters. I found that setting the width offset to 1 made the 90 degree rotation case match Matlab, but it didn't help -90.
UPDATE 1/19 9AM: I tried setting offset = -0.5 in the function rotate_about_center() below, and the 90 and -90 degree rotations center as expected. For a 10x10 image, the reasoning why this may work is that the center point defined by (cols/2, rows/2) is not (5,5), but rather (4.5, 4.5). The same logic applies to an 11x11 image: the center is not (5.5,5.5) but rather (5,5). Rotations at 45 and -45 don't center, meaning they visually don't look centered in the computed box of size nw x nh. So I think I understand why a "center" equal to (cols/2 - 0.5, rows/2 - 0.5) works but a center of (cols/2, rows/2) does not; however, most examples I've found do not subtract the 0.5.
import cv2
import numpy as np
from matplotlib import pyplot as plt
import functools
import math

bwimshow = functools.partial(plt.imshow, vmin=0, vmax=255,
                             cmap=plt.get_cmap('gray'))

def rotate_about_center(src, angle, widthOffset=0., heightOffset=0., scale=1.):
    w = src.shape[1]
    h = src.shape[0]
    # Add offset to correct for center of images.
    wOffset = -0.5
    hOffset = -0.5
    rangle = np.deg2rad(angle)  # angle in radians
    # now calculate new image width and height
    nw = (abs(np.sin(rangle)*h) + abs(np.cos(rangle)*w))*scale
    nh = (abs(np.cos(rangle)*h) + abs(np.sin(rangle)*w))*scale
    print("nw = ", nw, "nh = ", nh)
    # ask OpenCV for the rotation matrix
    rot_mat = cv2.getRotationMatrix2D((nw*0.5 + wOffset, nh*0.5 + hOffset), angle, scale)
    # calculate the move from the old center to the new center combined
    # with the rotation
    rot_move = np.dot(rot_mat, np.array([(nw-w)*0.5 + widthOffset, (nh-h)*0.5 + heightOffset, 0]))
    # the move only affects the translation, so update the translation
    # part of the transform
    rot_mat[0, 2] += rot_move[0]
    rot_mat[1, 2] += rot_move[1]
    return cv2.warpAffine(src, rot_mat, (int(math.ceil(nw)), int(math.ceil(nh))),
                          flags=cv2.INTER_LANCZOS4)

def main():
    # create image
    rows = 10
    cols = 10
    angle = -90
    widthOffset = 0   # need 1 to match 90 degrees and ? for -90 degrees.
    heightOffset = 0
    img = np.zeros((rows, cols), np.float32)
    img[:, 0] = 255
    img[:, cols-1] = 255
    img[0, :] = 200
    img[rows-1, :] = 200
    # mark some pixels for reference points.
    img[0, int(cols/2 - 1)] = 0
    img[rows-1, int(cols/2) - 1] = 100
    bwimshow(img)
    plt.show()
    img = rotate_about_center(img, angle, widthOffset, heightOffset)
    print("img shape = ", img.shape)
    print('Data type', img.dtype)
    bwimshow(img)
    plt.show()
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
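The half-pixel reasoning in the update can be checked on a single corner point without any image at all; a pure-Python sketch (hypothetical helper, no OpenCV involved):

```python
import math

def rotate_point(pt, center, deg):
    """Rotate a pixel coordinate about a center point."""
    t = math.radians(deg)
    dx, dy = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + dx * math.cos(t) - dy * math.sin(t),
            center[1] + dx * math.sin(t) + dy * math.cos(t))

# In a 10x10 image the pixel centers span 0..9, so the geometric center is
# (4.5, 4.5).  Rotating corner (0, 0) by 90 degrees about it lands exactly on
# another corner of the 0..9 grid:
good = rotate_point((0, 0), (4.5, 4.5), 90)   # approximately (9.0, 0.0)
# Rotating about (5, 5) = (cols/2, rows/2) instead pushes the corner to
# x = 10, off the 0..9 grid: the one-column shift observed above.
bad = rotate_point((0, 0), (5.0, 5.0), 90)    # approximately (10.0, 0.0)
```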
I apologize for the reams and reams of code, but hopefully it makes it easier for someone to replicate the problem.
asked by epatton, Fri, 19 Jan 2018 00:27:55 -0600 (http://answers.opencv.org/question/182793/)

OpenCV + OpenGL: proper camera pose using solvePnP
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/
I've got a problem with obtaining a proper camera pose from an iPad camera using OpenCV.
I'm using custom made 2D marker (based on [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26) ) - I want to render 3D cube over that marker using OpenGL.
In order to recieve camera pose I'm using solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
<!-- language: c++ -->
cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);
tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted
cv::Mat R;
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t(); // rotation of inverse
tvec = -R * tvec; // translation of inverse
cv::Mat T(4, 4, R.type()); // T is 4x4
T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1; // copies R into T
T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
double *p = T.ptr<double>(3);
p[0] = p[1] = p[2] = 0;
p[3] = 1;
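The inversion in the middle of that snippet (R = R.t(); tvec = -R * tvec) can be sanity-checked on its own in plain Python; a sketch with a hypothetical pose, not the app's actual data:

```python
import math

def invert_pose(R, t):
    """Invert a rigid transform: [R | t] becomes [R^T | -R^T t]."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]            # transpose
    t_inv = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return Rt, t_inv

# Hypothetical pose: 40 degrees about Z plus a translation.
a = math.radians(40.0)
R = [[math.cos(a), -math.sin(a), 0.0],
     [math.sin(a),  math.cos(a), 0.0],
     [0.0,          0.0,         1.0]]
t = [1.0, 2.0, 3.0]
Ri, ti = invert_pose(R, t)

# Applying [R | t] and then the inverse returns the original point.
p = [0.5, -0.25, 2.0]
q = [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]
back = [sum(Ri[i][k] * q[k] for k in range(3)) + ti[i] for i in range(3)]
```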
camera matrix & dist coefficients are coming from *findChessboardCorners* function, *imagePoints* are manually detected corners of marker (you can see them as green square in the video posted below), and *markerObjectPoints* are manually hardcoded points that represents marker corners:
<!-- language: c++ -->
markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, -6, 0));
markerObjectPoints.push_back(cv::Point3d(6, 6, 0));
markerObjectPoints.push_back(cv::Point3d(-6, 6, 0));
Because the marker is 12 cm long in the real world, I chose the same size in code for easier debugging.
As a result I'm receiving a 4x4 matrix T that I'll use as the ModelView matrix in OpenGL.
Using GLKit drawing function looks more or less like this:
<!-- language: c++ -->
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // preparations
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
    effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);
    // set modelViewMatrix
    float mat[16] = generateOpenGLMatFromFromOpenCVMat(T);
    currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
    effect.transform.modelviewMatrix = currentModelMatrix;
    [effect prepareToDraw];
    glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
}
I'm not rotating everything by 180 degrees around the X axis (as mentioned in the previously linked article), because it doesn't look necessary.
The problem is that it doesn't work! Translation vector looks OK, but X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axes one by one), but nothing actually works.
What should I do? How should I properly display that 3D cube? The translation/rotation vectors that come from solvePnP look reasonable, so I guess I can't correctly map these vectors to OpenGL matrices.
asked by axadiw, Sat, 26 Oct 2013 17:49:13 -0500 (http://answers.opencv.org/question/23089/)