OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/). Copyright OpenCV foundation, 2012-2018.

**Inverse Matrix from a decomposeHomography**
http://answers.opencv.org/question/228224/inverse-matrix-from-a-decomposehomography/

I need to obtain the inverse of a rotation matrix I get from decomposeHomographyMat(), but I'm having some trouble: the type of the matrix that function returns does not seem to work with .inv(). Here's an example, where Prev_rot_matrix is another matrix:
int solutions = decomposeHomographyMat(H, cameraMatrix, Rs_decomp, ts_decomp, normals_decomp);
for (int i = 0; i < solutions; i++)
{
if(normals_decomp[i].at<double>(2)>0)
{
aux=FrameVar(Rs_decomp[i], Prev_rot_matrix); /*Prev_rot_matrix has the same structure as Rs_decomp[i]*/
if(aux<Var)
{
Var=aux;
SOL=i;
}
}
}
double FrameVar(Mat Rot_Curr, Mat Rot_Prev)
{
// Previous rotation: Rot_Prev; current rotation: Rot_Curr
double NewAngle, OldAngle, aux;
Mat Rot_Curr_vect, Rot_Prev_vect;
Mat VarAngle(cv::Size(1,1), CV_64FC1);
Rodrigues(Rot_Curr, Rot_Curr_vect);
Rot_Prev_vect=Rot_Prev.inv()*Rodrigues(Rot_Prev, Rot_Prev_vect);
...
So when I try to compile it I get:
> ‘cv::MatExpr’ is not derived from
> ‘const cv::Affine3<T>’
> Rot_Prev_vect=Rot_Prev.inv()*Rodrigues(Rot_Prev,
> Rot_Prev_vect);
and a bunch of other errors.
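A note on the error above: `cv::Rodrigues` returns `void` (it writes its result into the output argument), so `Rot_Prev.inv() * Rodrigues(Rot_Prev, Rot_Prev_vect)` multiplies a `MatExpr` by nothing, which is likely why the compiler falls back to an unrelated `Affine3` overload and produces that confusing message. Calling `Rodrigues` on its own line first and then multiplying plain `Mat`s should compile. Also, since a rotation matrix is orthonormal, its inverse is simply its transpose (`.t()` in OpenCV, cheaper and numerically safer than `.inv()`). A self-contained sketch of that identity (plain C++, no OpenCV):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Rotation about the z-axis by angle theta (radians).
Mat3 rotZ(double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

Mat3 transpose(const Mat3& m) {
    Mat3 t{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) t[i][j] = m[j][i];
    return t;
}

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// For a rotation matrix R, R * R^T is the identity: the transpose IS the inverse.
bool isIdentity(const Mat3& m, double eps = 1e-12) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(m[i][j] - (i == j ? 1.0 : 0.0)) > eps) return false;
    return true;
}
```

In OpenCV terms the fix would be two statements, e.g. `Rodrigues(Rot_Prev, Rot_Prev_vect);` followed by a separate multiplication using `Rot_Prev.t()`.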
In case that means the Rot_Prev class does not have .inv(), how can I obtain its inverse matrix? I want to get the previous rotation vector in the new frame coordinates. (asked by Aquas, Sun, 29 Mar 2020)

**Why does OpenCV use a Rodrigues rotation vector instead of Cayley's formula?**
http://answers.opencv.org/question/222656/why-opencv-uses-rodrigues-rotation-vector-instead-of-cayleys-formula/

I was a fan of the Rodrigues rotation formula [1] until I learned about Cayley's formula for orthonormal matrices [2]. If I understand correctly, Rodrigues is not necessarily meant to produce a 3x3 rotation matrix but to rotate a vector with a computationally light formula. So maybe OpenCV uses this formula internally to do all transformations, skipping the calculation of the rotation matrix?
But when it comes to producing a 3x3 rotation matrix it seems, at a glance, that Cayley's formula is much simpler.
So why not use Cayley instead of Rodrigues?
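For reference, the two parameterizations side by side (standard results, not specific to OpenCV's implementation). One practical difference: Cayley's rational map cannot represent rotations by exactly pi, since the tangent half-angle blows up there, whereas the Rodrigues vector covers all of SO(3); that is plausibly part of why calibration code standardizes on Rodrigues.

```latex
% Rodrigues: R from a unit axis k and angle theta, with K = [k]_x (skew matrix)
R = I + \sin\theta\, K + (1-\cos\theta)\, K^2

% Cayley: R from any skew-symmetric A; for the same rotation, A = \tan(\theta/2)\, K
R = (I - A)^{-1} (I + A)
```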
[1] https://docs.opencv.org/ref/master/d9/d0c/group__calib3d.html#ga61585db663d9da06b68e70cfbf6a1eac
[2] Murray, R. M.; Li, Z. & Sastry, S. S., "A Mathematical Introduction to Robotic Manipulation", CRC Press, 1994, pp. 29, 214-222. (asked by JamesBaxter, Mon, 02 Dec 2019)

**Rotation matrix to rotation vector (Rodrigues function)**
http://answers.opencv.org/question/85360/rotation-matrix-to-rotation-vector-rodrigues-function/

Hello,
I have a 3x3 rotation matrix that I obtained from stereoCalibrate (using the ROS stereo calibration node). I need to obtain a rotation vector (1x3), so I used the Rodrigues formula. When I check the result against the [Pietro Perona - California Institute of Technology](http://www.mathworks.com/matlabcentral/fileexchange/41511-deprecated-light-field-toolbox-v0-2-v0-3-now-available/content/LFToolbox0.2/SupportFunctions/CameraCal/rodrigues.m) rodrigues function in MATLAB, I get two different results:
This is the code in cpp:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <tf/transform_broadcaster.h>
#include <ros/param.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <cv_bridge/cv_bridge.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
std::vector<double> T, R;
double cols, rows, R_cols, R_rows;
int i, j;
cv::Mat rot_vec = Mat::zeros(1,3,CV_64F), rot_mat = Mat::zeros(3,3,CV_64F);
ros::init(argc, argv, "get_extrinsic");
ros::NodeHandle node;
if(!node.getParam("translation_vector/cols", cols))
{
ROS_ERROR_STREAM("Translation vector (cols) could not be read.");
return 0;
}
if(!node.getParam("translation_vector/rows", rows))
{
ROS_ERROR_STREAM("Translation vector (rows) could not be read.");
return 0;
}
T.reserve(cols*rows);
if(!node.getParam("rotation_matrix/cols", cols))
{
ROS_ERROR_STREAM("Rotation matrix (cols) could not be read.");
return 0;
}
if(!node.getParam("rotation_matrix/rows", rows))
{
ROS_ERROR_STREAM("Rotation matrix (rows) could not be read.");
return 0;
}
R.reserve(cols*rows);
if(!node.getParam("translation_vector/data", T))
{
ROS_ERROR_STREAM("Translation vector could not be read.");
return 0;
}
if(!node.getParam("rotation_matrix/data", R))
{
ROS_ERROR_STREAM("Rotation matrix could not be read.");
return 0;
}
for(i=0; i<3; i++)
{
for(j=0; j<3; j++)
rot_mat.at<double>(i,j) = R[i*3+j];
}
std::cout << "Rotation Matrix:"<<endl;
for(i=0; i<3; i++)
{
for(j=0; j<3; j++)
std::cout<< rot_mat.at<double>(i,j) << " ";
std::cout << endl;
}
std::cout << endl;
std::cout << "Rodrigues: "<<endl;
Rodrigues(rot_mat, rot_vec);
for(i=0; i<3; i++)
std::cout << rot_vec.at<double>(1,i) << " ";
std::cout << endl;
ros::spin();
return 0;
};
And its output is:
Rotation Matrix:
-0.999998 -0.00188887 -0.000125644
0.0018868 -0.999888 0.014822
-0.000153626 0.0148217 0.99989
Rodrigues:
0.0232688 3.13962 4.94066e-324
But when I load the same rotation matrix in matlab and use the rodrigues function I get the following:
R =
-1.0000 -0.0019 -0.0001
0.0019 -0.9999 0.0148
-0.0002 0.0148 0.9999
>> rodrigues(R)
ans =
-0.0002
0.0233
3.1396
I can see that the numbers match, but they are in different positions, and there also seems to be an issue with the signs. Which formula should I trust? (asked by aripod, Mon, 25 Jan 2016)

**Marker to gl-matrix via OpenCV.js**
http://answers.opencv.org/question/203256/marker-to-gl-matrix-via-opencvjs/

Let's assume we get `rvecs` and `tvecs` via `cv.estimatePoseSingleMarkers()`.
Then going through each marker, we can get the 1x3 vectors like this (found in online samples):
const rotation = [rvecs.doublePtr(0, i)[0], rvecs.doublePtr(0, i)[1], rvecs.doublePtr(0, i)[2]];
const translation = [tvecs.doublePtr(0, i)[0], tvecs.doublePtr(0, i)[1], tvecs.doublePtr(0, i)[2]];
How can this be converted so it's compatible with gl-matrix's [fromRotationTranslation](http://glmatrix.net/docs/module-mat4.html#.fromRotationTranslation)?
Specifically - it seems gl-matrix expects the rotation in a quaternion... I see some sample C++ code to do that conversion [here](https://gist.github.com/shubh-agrawal/76754b9bfb0f4143819dbd146d15d4c8), but it assumes a Rodrigues function in OpenCV to get from the 1x3 vector to a 3x3 rotation matrix - I think?
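Each `rvec` is an axis-angle (Rodrigues) vector: its direction is the rotation axis and its length is the angle in radians. That means it can be converted straight to a quaternion without ever building the 3x3 matrix. A sketch of the conversion (written in C++ here; the arithmetic ports line-for-line to JS):

```cpp
#include <array>
#include <cmath>

// Axis-angle (Rodrigues) vector -> quaternion (x, y, z, w).
// The vector's direction is the rotation axis; its length is the angle in radians.
std::array<double, 4> rvecToQuat(double rx, double ry, double rz) {
    double theta = std::sqrt(rx * rx + ry * ry + rz * rz);
    if (theta < 1e-12) return {0.0, 0.0, 0.0, 1.0};  // identity rotation
    double s = std::sin(theta / 2.0) / theta;        // scales rvec to the axis part
    return {rx * s, ry * s, rz * s, std::cos(theta / 2.0)};
}
```

gl-matrix stores quaternions as [x, y, z, w], so the resulting array can be passed to `mat4.fromRotationTranslation` together with the translation.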
The Rodrigues function seems to be missing in OpenCV.js, and it's not clear to me what exactly `cv.estimatePoseSingleMarkers()` is returning in each `rvec` (is it Euler radians? something else?). (asked by dakom, Wed, 14 Nov 2018)

**Method for finding orientation error using axis-angle**
http://answers.opencv.org/question/193675/method-for-finding-orientation-error-using-axis-angle/

Hi,
I have a reference value for Roll, pitch and yaw (Euler Angles)
and my estimate for the same. I want to find the error between the two.
If I convert each of the RPY values to a Rotation matrix, I see some possible ways (see below) of finding
the orientation error.
I recently came across this openCV function in the calib3d module: [get_rotation_error](https://github.com/opencv/opencv/pull/11506) that uses Rodrigues/Axis-Angle (I think they mean the same) for finding the error between 2 rotation matrices.
**I have 2 questions** -
1) In the method given in [get_rotation_error](https://github.com/opencv/opencv/pull/11506), it seems to "subtract" the two rotation matrices by transposing one (not sure what the negative sign is about)
error_mat = R_ref * R.transpose() * -1
error = cv::norm( cv::Rodrigues ( error_mat ))
**How are we supposed to interpret the output?** (I believe the output of cv::norm(rodrigues_vector) is the angle of the Rodrigues vector according to the OpenCV convention.) Does this mean I simply need to convert it to degrees to find the angle error (between the reference and my estimates) in degrees?
I would also like to mention that **this method keeps returning 3.14159** even for wildly different values of the reference and my estimates. Is there something that I'm missing?
======
2) I thought of another method, slightly different from the above. What if I do the following:
my_angle = cv::norm (cv::Rodrigues ( R ))
reference_angle = cv::norm (cv::Rodrigues ( R_ref ))
error = reference_angle - my_angle
**Is there something wrong** with method 2) ? I have tried it and it gives a different output compared to method 1).
I would be very grateful if someone can answer the above queries or even point me in the right direction.
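A couple of hedged observations on the two methods. In method 1, negating a rotation matrix (`* -1`) produces a matrix with determinant -1, which is not a rotation at all; feeding that to `cv::Rodrigues` is outside its contract and would plausibly explain the constant 3.14159. Without the negation, `norm(Rodrigues(R_ref * R.t()))` is the geodesic angle between the two rotations, in radians (multiply by 180/pi for degrees). Method 2 is not equivalent: it subtracts the two rotations' angle magnitudes and ignores their axes, so two 90-degree rotations about different axes would report zero error. A self-contained sketch of the angle-between-rotations computation, using the trace identity theta = arccos((trace(R1 * R2^T) - 1) / 2):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 rotX(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}}; }
Mat3 rotZ(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}}; }

// Geodesic angle (radians) between two rotations:
// theta = arccos((trace(R1 * R2^T) - 1) / 2)
double rotationError(const Mat3& r1, const Mat3& r2) {
    double tr = 0.0;  // trace(R1 * R2^T) = sum of elementwise products
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) tr += r1[i][k] * r2[i][k];
    double c = (tr - 1.0) / 2.0;
    if (c > 1.0) c = 1.0;   // guard against rounding just outside [-1, 1]
    if (c < -1.0) c = -1.0;
    return std::acos(c);
}
```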
Thanks! (asked by malharjajoo, Tue, 12 Jun 2018)

**Obtaining Euler angles from Rodrigues rotation matrix**
http://answers.opencv.org/question/189455/obtaining-euler-angles-from-rodrigues-rotation-matrix/

Hi,
I wish to extract Euler angles from the **rvec** output parameter of [cv::solvePnp](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool%20solvePnP(InputArray%20objectPoints,%20InputArray%20imagePoints,%20InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20OutputArray%20rvec,%20OutputArray%20tvec,%20bool%20useExtrinsicGuess,%20int%20flags)).
I understand that 3x1 rvec needs to be passed on to the [Rodrigues](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)) function to obtain the 3x3 rotation matrix.
But to obtain Euler Angles, I need to use a fixed convention ( see [this](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix) , for example Z-Y-X,etc ) which requires the rotation matrix to be obtained from a **permutation (fixed order) of multiplication of Matrices** ,
**eg: for Z-Y-X convention,
R_resultant = Rz * Ry * Rx**.
I have looked into the source code [here](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L251) for the Rodrigues function but don't quite understand how the matrix is formed. Hence:
**My question**: What is the **convention** (Z-Y-X, X-Y-Z, etc.) of the 3x3 rotation matrix obtained from Rodrigues? Without this information I cannot obtain Euler angles. I have seen [this tutorial](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) for real-time pose estimation, but unfortunately I don't understand the assumption for the convention (and hence I had asked another question about that [here](http://answers.opencv.org/question/189414/issue-in-opencv-sample-tutorial-for-real-time-pose-estimation/?comment=189453#post-id-189453)). (asked by malharjajoo, Mon, 16 Apr 2018)

**Aruco detection - OpenCV Error: Incorrect size of input array**
http://answers.opencv.org/question/183056/aruco-detection-opencv-error-incorrect-size-of-input-array/

Hi there,
I am trying to detect my ArUco markers with the code posted below (only part of it) and, on the last line, I get: OpenCV Error: Incorrect size of input array ... in cvRodrigues2, file C:\...
It worked when I only had one marker in the picture, so I think multiple markers are the problem. But how do I solve it? As a beginner I am not sure what to look for; I have searched the whole day but did not find anything.
while (true)
{
if (playVideo == true)
{
if (!vid.read(frame))
break;
aruco::detectMarkers(frame, markerDictionary, markerCorners, markerIds);
aruco::estimatePoseSingleMarkers(markerCorners, arucoSquareDimension, cameraMatrix, distortionCoefficients, rotationVectors, translationVectors);
for (int i = 0; i < markerIds.size(); i++)
{
aruco::drawDetectedMarkers(frame, markerCorners, markerIds); // Draws the outline and the ID onto the markers
aruco::drawAxis(frame, cameraMatrix, distortionCoefficients, rotationVectors[i], translationVectors[i], arucoSquareDimension*0.5f); // Draws the coordinate axes; the number sets the axis length in metres
// Print the marker ID to the console
std::cout << "The marker ID is:";
for (std::vector<int>::iterator it = markerIds.begin(); it != markerIds.end(); ++it)
std::cout << ' ' << *it;
std::cout << '\n';
// Print the rotation to the console, converted to degrees: (*180)/(3.14159265359)
std::cout << "The marker rotation is:";
for (std::vector<Vec3d>::iterator it = rotationVectors.begin(); it != rotationVectors.end(); ++it)
std::cout << ' ' << (*it * 180) / (3.14159265359) << " deg";
std::cout << '\n';
// Rotation matrix
Mat R;
cv::Rodrigues(rotationVectors, R);
Thank you!
Sarah (asked by sarah1802, Tue, 23 Jan 2018)

**Marker pose calculation with solvePnP**
http://answers.opencv.org/question/178158/marker-pose-calculation-with-solvepnp/

Hi,
I'm trying to get my rectangular marker pose via the `solvePnP()` function. It just needs to detect the marker pose; it doesn't need to place an object on top of it. OpenCV gives the following information about this function:
bool cv::solvePnP (
InputArray objectPoints,
InputArray imagePoints,
InputArray cameraMatrix,
InputArray distCoeffs,
OutputArray rvec,
OutputArray tvec,
bool useExtrinsicGuess = false,
int flags = SOLVEPNP_ITERATIVE)
I believe that the `imagePoints` array holds the corners of my rectangular marker. And the `cameraMatrix` and `distCoeffs` have something to do with the camera parameters? I have already calibrated my camera with the OpenCV sample program, so I have a parameter XML file; can I do something with this? The `rvec` and `tvec` are of course the rotation and translation vectors, and via `Rodrigues()` I can get the pose of the marker. But for the `objectPoints` array, I have no idea what I am supposed to do with it.
Can someone help me with this?
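For the `objectPoints` part: they are the 3D coordinates of the marker's corners expressed in the marker's own coordinate frame (for a flat marker, points in its Z = 0 plane), listed in the same order as the detected `imagePoints`. The calibration XML file supplies `cameraMatrix` and `distCoeffs` (loadable with `cv::FileStorage`). A sketch of one plausible corner layout for a square marker of side `s` centred on its origin; the exact layout and ordering here are assumptions and must match your corner detector's output order:

```cpp
#include <array>
#include <cmath>

struct Point3 { double x, y, z; };

// Corner coordinates of a square marker of side length s (in your chosen
// unit, e.g. metres), lying in its own Z = 0 plane, centred on the origin.
// Order here: top-left, top-right, bottom-right, bottom-left (an assumption;
// it must match the order of the detected image corners).
std::array<Point3, 4> markerObjectPoints(double s) {
    double h = s / 2.0;
    return {{{-h,  h, 0.0},
             { h,  h, 0.0},
             { h, -h, 0.0},
             {-h, -h, 0.0}}};
}
```

The `tvec` returned by solvePnP then comes out in the same unit as `s`.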
Thanks! (asked by MaxWell, Sun, 12 Nov 2017)

**OpenCV solvePnP strange values**
http://answers.opencv.org/question/176922/opencv-solvepnp-strange-values/

Hello,
I am experimenting on a project.
I use **SolvePnP** to find rotation vector on an object.
Since the values are hard to understand, I used 3D software to define specific values that I am trying to find with OpenCV.
I've got a plane in the center of my scene. I apply rotations on X, Y or Z.
In the example below, the rotations are defined as:
**x=30°
y=0°
z=30°**
I've got good values for focalLength, fov, etc.
![image description](/upfiles/1508940197829013.jpg)
As you can see, the **cv2.projectPoints** works perfectly on my image.
When I call **SolvePnP**, the **rvecs returns strange values**.
For rotation X, I've got 28.939°.
For rotation Y, I've got 7.916°.
For rotation Z, I've got 29.02031°.
So when I try to map a plane with WebGL, I get the result in the image below (red plane):
![image description](/upfiles/15089407414127149.jpg)
**So here is my question: why doesn't solvePnP return x:30°, y:0° and z:30°? It seems very strange, no?**
Do I have to use **Rodrigues** somewhere? If yes, how ?
Is there a lack of precision somewhere?
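The likely explanation: the `rvec` returned by `solvePnP` is an axis-angle (Rodrigues) vector, a unit rotation axis scaled by the rotation angle, not a triple of per-axis Euler angles, so its components only coincide with per-axis degrees for single-axis rotations. To compare with the 3D package's 30/0/30, convert `rvec` to a matrix with `cv::Rodrigues`, then decompose that matrix using the same Euler convention the package uses. A self-contained sketch showing that composing 30 degrees about X with 30 degrees about Z (an assumed order) gives an axis-angle vector of roughly (29.3, 7.9, 29.3) degrees, consistent with the values reported above, and not (30, 0, 30):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

Mat3 rotX(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}}; }
Mat3 rotZ(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}}; }

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Rotation matrix -> axis-angle vector (angle * unit axis).
// Valid away from the theta = 0 and theta = pi singularities.
Vec3 toRvec(const Mat3& r) {
    double theta = std::acos(((r[0][0] + r[1][1] + r[2][2]) - 1.0) / 2.0);
    double d = 2.0 * std::sin(theta);
    return { theta * (r[2][1] - r[1][2]) / d,
             theta * (r[0][2] - r[2][0]) / d,
             theta * (r[1][0] - r[0][1]) / d };
}
```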
Thanks
Loïc
(asked by kopacabana73, Wed, 25 Oct 2017)

**OpenCV to OpenGL or WebGL**
http://answers.opencv.org/question/176796/opencv-to-opengl-or-webgl/

Hello,
Sorry for my awful english !
I use OpenCV to detect image in image. Everything works perfectly.
![image description](/upfiles/15087636962778099.jpg)
Now I want to use the values returned by OpenCV's solvePnP and Rodrigues in a WebGL project. I have read many articles, books and posts, but it's not working and I don't understand why.
I create a matrix 'projectionMatrix' with this function :
function openCVCameraMatrixToProjectionMatrix(fx, fy, cx, cy, zfar, znear, width, height){
var m = [
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
];
m[0][0] = 2.0 * fx / width;
m[0][1] = 0.0;
m[0][2] = 0.0;
m[0][3] = 0.0;
m[1][0] = 0.0;
m[1][1] = -2.0 * fy / height;
m[1][2] = 0.0;
m[1][3] = 0.0;
m[2][0] = 1.0 - 2.0 * cx / width;
m[2][1] = 2.0 * cy / height - 1.0;
m[2][2] = (zfar + znear) / (znear - zfar);
m[2][3] = -1.0;
m[3][0] = 0.0;
m[3][1] = 0.0;
m[3][2] = 2.0 * zfar * znear / (znear - zfar);
m[3][3] = 0.0;
return m;
}
After that, I create a **cameraMatrix** with the values returned by **solvePnP** (rVec and tVec):
var cameraMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
cameraMatrix = m4.xRotate(cameraMatrix, rVecs[0]);
cameraMatrix = m4.yRotate(cameraMatrix, rVecs[1]);
cameraMatrix = m4.zRotate(cameraMatrix, rVecs[2]);
cameraMatrix = m4.translate(cameraMatrix, tVecs[0], tVecs[1], tVecs[2]);
I'm not sure the matrix calculation order is OK, and I'm sure I made errors in this part (but not only in this part)!
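The likely core issue in the snippet above: `rVecs` is an axis-angle vector, so applying its three components as successive X, Y and Z rotations (`m4.xRotate` / `m4.yRotate` / `m4.zRotate`) builds a different rotation. The 3x3 matrix should instead come from Rodrigues' formula, R = I + sin(theta) K + (1 - cos(theta)) K^2, with K the skew matrix of the unit axis, and then go into the upper-left of the 4x4. A sketch in C++ (the arithmetic ports directly to a JS helper):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Rodrigues' formula: axis-angle vector (rx, ry, rz) -> rotation matrix.
// R = I + sin(theta) * K + (1 - cos(theta)) * K^2, K = skew(unit axis).
Mat3 rvecToMat(double rx, double ry, double rz) {
    double theta = std::sqrt(rx * rx + ry * ry + rz * rz);
    Mat3 r = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};
    if (theta < 1e-12) return r;  // identity for a zero rotation
    double kx = rx / theta, ky = ry / theta, kz = rz / theta;
    Mat3 K = {{{0, -kz, ky}, {kz, 0, -kx}, {-ky, kx, 0}}};
    double s = std::sin(theta), c1 = 1.0 - std::cos(theta);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double k2 = 0.0;  // (K * K)[i][j]
            for (int k = 0; k < 3; ++k) k2 += K[i][k] * K[k][j];
            r[i][j] += s * K[i][j] + c1 * k2;
        }
    return r;
}
```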
----------
After that, I invert the camera matrix into a new variable:
var viewMatrix = m4.inverse(cameraMatrix);
I compute a **viewProjectionMatrix**
var viewProjectionMatrix = m4.multiply(projectionMatrix, viewMatrix);
----------
After all these operations, I use **uniformMatrix4fv** with a plane (-0.5, -0.5, 1.0, 1.0):
gl.uniformMatrix4fv(matrixLocation, false, modelMatrix);
----------
I also try to use Rodrigues values directly on **modelMatrix** but that didn't work...
----------
None of these attempts work; it never matches.
**Can someone help me? Please**
I post on stack (code, explanations, etc) : https://stackoverflow.com/questions/46833757/opencv-to-webgl
Loïc
(asked by kopacabana73, Mon, 23 Oct 2017)

**Rodrigues rotation**
http://answers.opencv.org/question/163351/rodrigues-rotation/

I do not understand the difference between these two equations:
1. from wikipedia:
![wiki Rodrigues formula](https://wikimedia.org/api/rest_v1/media/math/render/svg/14de5f7bfa4af6a7867008d8fd790d14e3a54530)
https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
2. from open CV doc:
![cv2 Rodrigues formal](http://docs.opencv.org/2.4/_images/math/8bffbe8d9297cebc136dc8ead9a40cad3940a640.png)
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)
Where has the **cos(θ)** gone on the wiki page in formula 1?
Shouldn't it be: v_rot = cos(θ)v + sin...?
Then on the wiki page, there is no cos(θ) in the definition of R either...
Or did I miss something?
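The cos(θ) has not disappeared; the two forms are algebraically identical, just grouped differently. Substituting the vector triple-product identity k x (k x v) = k(k . v) - v into the Wikipedia form recovers the familiar one:

```latex
\mathbf{v}_{\mathrm{rot}}
  = \mathbf{v} + \sin\theta\,(\mathbf{k}\times\mathbf{v})
    + (1-\cos\theta)\,\mathbf{k}\times(\mathbf{k}\times\mathbf{v})
  = \cos\theta\,\mathbf{v} + \sin\theta\,(\mathbf{k}\times\mathbf{v})
    + (1-\cos\theta)\,(\mathbf{k}\cdot\mathbf{v})\,\mathbf{k}

% Matrix forms, with K = [\mathbf{k}]_\times and K^2 = \mathbf{k}\mathbf{k}^T - I:
R = I + \sin\theta\,K + (1-\cos\theta)\,K^2
  = \cos\theta\,I + (1-\cos\theta)\,\mathbf{k}\mathbf{k}^T + \sin\theta\,K
```

The second matrix form is the one written in the OpenCV documentation.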
(asked by swiss_knight, Sun, 02 Jul 2017)

**Rotation matrix to Euler angles with OpenCV C++: correct values, wrong places**
http://answers.opencv.org/question/139472/rotation-matrix-to-euler-angles-with-opencv-c-correct-value-wrong-place/

I am working on a project which involves ArUco markers and OpenCV. I am quite far into the project. I can read the rotation vectors and convert them to a Rodrigues matrix using Rodrigues() from OpenCV.
This is an example of a Rodrigues (rotation) matrix I get:
[0,1,0;
1,0,0;
0,0,-1]
I use the following code.
Mat m33(3, 3, CV_64F);
Mat measured_eulers(3, 1, CV_64F);
Rodrigues(rotationVectors, m33);
measured_eulers = rot2euler(m33);
Degree_euler = measured_eulers * 180 / CV_PI;
I use the predefined rot2euler to convert from rodrigues matrix to euler angles. And I convert the received radians to degrees.
rot2euler looks like the following.
Mat rot2euler(const Mat & rotationMatrix)
{
Mat euler(3, 1, CV_64F);
double m00 = rotationMatrix.at<double>(0, 0);
double m02 = rotationMatrix.at<double>(0, 2);
double m10 = rotationMatrix.at<double>(1, 0);
double m11 = rotationMatrix.at<double>(1, 1);
double m12 = rotationMatrix.at<double>(1, 2);
double m20 = rotationMatrix.at<double>(2, 0);
double m22 = rotationMatrix.at<double>(2, 2);
double x, y, z;
// Assuming the angles are in radians.
if (m10 > 0.998) { // singularity at north pole
x = 0;
y = CV_PI / 2;
z = atan2(m02, m22);
}
else if (m10 < -0.998) { // singularity at south pole
x = 0;
y = -CV_PI / 2;
z = atan2(m02, m22);
}
else
{
x = atan2(-m12, m11);
y = asin(m10);
z = atan2(-m20, m00);
}
euler.at<double>(0) = x;
euler.at<double>(1) = y;
euler.at<double>(2) = z;
return euler;
}
If I use the rodrigues matrix I give as an example I get the following euler angles.
[0; 90; -180]
But I am supposed to get the following:
[-180; 0; 90]
When I use this tool:
[danceswithcode.net/engineeringnotes/rotations_in_3d/demo3D/rotations_in_3d_tool.html]
You can see that [0; 90; -180] doesn't match the rodrigues matrix but [-180; 0; 90] does. (I am aware of the fact that the tool works with ZYX coordinates)
So the problem is I get the correct values but in a wrong order.
Another problem is that this isn't always the case. For example rodrigues matrix:
[1,0,0;
0,-1,0;
0,0,-1]
Provides me the correct euler angles.
If someone knows a solution to the problem, or can provide an explanation of how the rot2euler function works exactly, it would be highly appreciated.
Kind Regards
Brent Convens
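A hedged explanation: both triples appear to describe the same rotation under different axis orders. The tutorial's `rot2euler` reads `asin(m10)`, which matches the euclideanspace.com convention R = Ry(z) * Rz(y) * Rx(x) (its "heading/attitude/bank" angles), while the linked tool assumes Z-Y-X, R = Rz * Ry * Rx; hence correct values in the "wrong" places. Note also that m10 = 1 in the example matrix triggers the gimbal-lock branch, where the Euler decomposition is inherently non-unique. A self-contained check that [0, 90, -180] under the first convention and [-180, 0, 90] under Z-Y-X reconstruct the same matrix:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 rotX(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}}; }
Mat3 rotY(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{c, 0, s}, {0, 1, 0}, {-s, 0, c}}}; }
Mat3 rotZ(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}}; }

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

bool nearlyEqual(const Mat3& a, const Mat3& b, double eps = 1e-9) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(a[i][j] - b[i][j]) > eps) return false;
    return true;
}
```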
(asked by Brent Convens, Wed, 12 Apr 2017)

**Am I doing something wrong? Rodrigues rvec1 to R, R to rvec2, rvec1 != rvec2**
http://answers.opencv.org/question/133497/am-i-doing-something-wrong-rodrigues-rvec1-to-r-r-to-rvec2-rvec1-rvec2/

cout << rotation_vector << endl;
Mat rmat;
cv::Rodrigues(rotation_vector, rmat);
cv::Rodrigues(rmat, rotation_vector);
cout << rotation_vector << endl;
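A hedged note on why the two printed vectors can differ: the axis-angle representation is only unique for angles in [0, pi], since (axis, theta) and (-axis, 2*pi - theta) encode the same rotation, and the matrix-to-vector direction of `Rodrigues` returns the canonical form. So an input `rvec` whose length exceeds pi should come back as a different but equivalent vector; for angles already within [0, pi] the round trip should match up to floating-point noise. A sketch of the canonicalization the round trip effectively applies:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

const double kPi = std::acos(-1.0);

// Map an axis-angle vector to the canonical equivalent with angle in [0, pi].
// (axis, theta) and (-axis, 2*pi - theta) represent the same rotation.
Vec3 canonicalRvec(const Vec3& r) {
    double theta = std::sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
    if (theta < 1e-12) return {0.0, 0.0, 0.0};
    double t = std::fmod(theta, 2.0 * kPi);           // reduce angle to [0, 2*pi)
    double scale = t / theta;
    if (t > kPi) scale = -(2.0 * kPi - t) / theta;    // flip axis, use 2*pi - t
    return {r[0] * scale, r[1] * scale, r[2] * scale};
}
```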
Thanks a lot. (asked by Oguz, Fri, 10 Mar 2017)

**Rotation and translation question**
http://answers.opencv.org/question/95782/rotation-and-translation-question/

I've used OpenCV's calibrateCamera function to give me the rvecs and tvecs, to try to get the transformation from my camera (world coordinates (0,0,0)) to a chessboard. I'm using it to find the corners of the chessboard with respect to the camera.
I'm confused as to how to apply the rvec and tvec to get the transformation. Do I just do (0,0,0)*rvec + tvec? If that is the case, how do I invert the transformation? I'll need that for something else as well.
After doing research online I realized I may have to use the Rodrigues() function to get the rotation matrix from rvec, augment that with the tvec, and then add a row of (0,0,0,1) to get a 4x4 transformation matrix. That way I would be able to take the inverse. However, if that is the case, how do I multiply that by (0,0,0)? Am I just supposed to do (0,0,0,1)*(rotm|tvec, 0,0,0,1)?
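For reference (assuming the standard OpenCV extrinsics convention): with R = Rodrigues(rvec), the pose maps chessboard (world) coordinates into camera coordinates as p_cam = R * p_world + t, so the board origin simply lands at t rather than at (0,0,0)*rvec + tvec. The 4x4 route works, but the inverse also has a closed form, p_world = R^T * (p_cam - t), so no general matrix inversion is needed. A self-contained sketch:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Forward pose: p_cam = R * p_world + t
Vec3 transformPoint(const Mat3& R, const Vec3& t, const Vec3& p) {
    Vec3 out = t;
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) out[i] += R[i][k] * p[k];
    return out;
}

// Inverse pose: p_world = R^T * (p_cam - t); no 4x4 inversion required.
Vec3 inverseTransformPoint(const Mat3& R, const Vec3& t, const Vec3& p) {
    Vec3 d = {p[0] - t[0], p[1] - t[1], p[2] - t[2]};
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) out[i] += R[k][i] * d[k];  // R^T
    return out;
}
```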
I'm not sure which of the two methods to use; I'm doubtful about how to proceed with either of them. (asked by sourmanb, Tue, 07 Jun 2016)

**Rodrigues giving an empty matrix**
http://answers.opencv.org/question/88581/rodrigues-giving-empty-matrix/

Hi,
I'm implementing marker-based tracking on Android and I'm stuck at the rotation matrix.
Up until solvePnP everything works fine and I get results (rvec, tvec) that seem legit.
But after passing the rvec to Rodrigues to get a rotation matrix, the result is a 3x3 matrix filled only with zeroes.
I really don't get this, because there are definitely values in rvec, but Rodrigues just doesn't use them?
I hope somebody knows whats wrong.
Here's the code snippet:
Mat rvec = new Mat(3,1,CvType.CV_64F);
Mat tvec = new Mat(3,1,CvType.CV_64F);
MatOfPoint3f markerPointsMat = new MatOfPoint3f(new Point3(0,0,0),new Point3(1,0,0),new Point3(1,1,0),new Point3(0,1,0));
MatOfPoint2f pointsMat = new MatOfPoint2f(points.get(0),points.get(1),points.get(2),points.get(3));
//cam matrix, by hand
Mat cam=new Mat(3,3,CvType.CV_64F);
cam.zeros(cam.size(),CvType.CV_64F);
cam.put(0, 0, 1400.533595140377 );
cam.put(0, 2, 175.5 );
cam.put(1, 1, 1400.533595140377 );
cam.put(1, 2, 143.5);
cam.put(2,2,1.0);
double[] a = {0.1155606860709641,-0.3046292381782507,0,0,0};
MatOfDouble dist = new MatOfDouble(a);
Calib3d.solvePnP(markerPointsMat, pointsMat, cam, dist, rvec, tvec);
//rot vector to matrix
Mat rmat = Mat.zeros(3,3,CvType.CV_64F);
Rodrigues(rvec, rmat);

(asked by eqnx, Thu, 25 Feb 2016)

**OpenCV - camera position relative to ArUco markers**
http://answers.opencv.org/question/69316/opencv-camera-position-relative-to-aruco-markers/

I've been playing around with ArUco markers for a while and I want to start using them for a purpose. I created a 2x2 board of markers, calibrated my camera, and printed out the Rvec & Tvec (rotation and translation vectors) of the detected board:
> detectedBoard =
> TheBoardDetector.getDetectedBoard();
> rvec = detectedBoard.Rvec; tvec =
> detectedBoard.Tvec;
After that I used a code snippet that I found online for getting the real position of the camera relative to the markers:
cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec,cv::Mat Tvec) {
cv::Mat m33(3,3,CV_32FC1);
cv::Rodrigues(Rvec, m33) ;
cv::Mat m44=cv::Mat::eye(4,4,CV_32FC1);
for (int i=0;i<3;i++)
for (int j=0;j<3;j++)
m44.at<float>(i,j)=m33.at<float>(i,j);
//now, add translation information
for (int i=0;i<3;i++)
m44.at<float>(i,3)=Tvec.at<float>(0,i);
//invert the matrix
m44.inv();
return cv::Point3f( m44.at<float>(0,0),m44.at<float>(0,1),m44.at<float>(0,2));
}
I tested that code but I got random results. The results changed when I moved my camera but I don't know the unit of measurement. My questions:
1- Is that function correct? I don't know anything about solvepnp nor Rodrigues functions and I tried reading about them but didn't understand a thing.
2- Can I get (in meters) the coordinates (x, y, z) of the camera relative to the markers?
3- How?
I know I might be a bit lazy for not testing those functions, but I guess this question will be a piece of cake for someone experienced in computer vision and OpenCV.
Thank you!
Cheers.
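Two hedged observations about the snippet: `m44.inv()` returns the inverse rather than inverting in place, so its result is discarded, and the returned `Point3f` reads the first *row* of the matrix instead of the translation column (elements (0,3), (1,3), (2,3) of the inverted matrix). The camera position in board coordinates also has a direct closed form, C = -R^T * t, and the unit is whatever unit the marker size was specified in during calibration (metres in, metres out). A sketch:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Camera position in board/world coordinates: C = -R^T * t,
// where (R, t) map board coordinates into camera coordinates.
Vec3 cameraPosition(const Mat3& R, const Vec3& t) {
    Vec3 c{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) c[i] -= R[k][i] * t[k];  // -R^T * t
    return c;
}
```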
(asked by NicolasJamal, Mon, 24 Aug 2015)

**cv::Rodrigues @ OpenCV 3**
http://answers.opencv.org/question/50614/cvrodrigues-opencv3/

Hi,
the example below (with any rotation matrix) works fine in 2.4.8 but throws in 3.0 at cv::Rodrigues.
What am I missing?
cv::Mat mat = cv::Mat::eye(cv::Size(3,3), CV_32F);
mat.at<float>(1,1)*=-1;
mat.at<float>(2,2)*=-1;
cv::Mat rvec = cv::Mat(1,3,CV_32F);
cv::Rodrigues(mat, rvec);
Using Windows 8.1, downloaded binaries, Visual Studio 2012, linked dynamic libs. (I'm not getting any exception text since it's thrown in a DLL?!)
Thanks for any help. (asked by svidoso, Tue, 11 Nov 2014)

**Strange stereoRectify error with rotation matrix**
http://answers.opencv.org/question/3441/strange-stereorectify-error-with-rotation-matrix/

Hi,
I am doing stereo rectification using calibration data for a stereo pair in standard form (intrinsics as 3x3 Mat, distortions as 5x1 Mat, Rotation as 3x3 Mat and Translation as 3x1 Mat). But I keep getting the following exception
OpenCV Error: Formats of input arguments do not match (All the matrices must have
the same data type) in cvRodrigues2,
file /build/buildd/opencv-2.3.1/modules/calib3d/src/calibration.cpp, line 507
terminate called after throwing an instance of 'cv::Exception' what():
/build/buildd/opencv-2.3.1/modules/calib3d/src/calibration.cpp:507: error: (-205)
All the matrices must have the same data type in function cvRodrigues2
Here is the calibration data [http://pastebin.com/uGLzjYqx](http://pastebin.com/uGLzjYqx)
I call `stereoRectify` with the following:
cv::stereoRectify(ints_left_, ints_right_, dtcs_left_, dtcs_right_,
cv::Size(640, 480), rotation_, translation_,
homography_left_, homography_right_,
rppm_left_, rppm_right_, qmat_);
Thanks
(asked by makokal, Wed, 24 Oct 2012)

**cv::Rodrigues problem**
http://answers.opencv.org/question/9266/cvrodrigues-problem/

I have two rotation matrices:
![R1 and R2](https://d37wxxhohlp07s.cloudfront.net/s3_images/871822/img.png)
R1 and R2 correspond to the Euler angles
(90, 0, 0) and (0, 0, 90)
so, as a result, I expect to get a rotation like
(90, 0, 90)
But cv::Rodrigues returns these angles for R3: (69, 69, 69).
What am I doing wrong? (asked by tenta4, Thu, 14 Mar 2013)
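(69, 69, 69) is very likely correct: `cv::Rodrigues` returns an axis-angle vector (angle times unit axis), and Euler angles of composed rotations do not add. Composing 90 degrees about X with 90 degrees about Z (in that order; the opposite order gives the same magnitudes with a sign flip on the middle component) yields a single 120-degree rotation about the axis (1,1,1)/sqrt(3), so the Rodrigues vector is 120 * (1,1,1)/sqrt(3), i.e. about 69.3 degrees per component. A self-contained check:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

Mat3 rotX(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}}; }
Mat3 rotZ(double t) { double c = std::cos(t), s = std::sin(t);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}}; }

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Rotation matrix -> Rodrigues vector (angle * unit axis),
// valid away from the theta = 0 and theta = pi singularities.
Vec3 toRvec(const Mat3& r) {
    double theta = std::acos(((r[0][0] + r[1][1] + r[2][2]) - 1.0) / 2.0);
    double d = 2.0 * std::sin(theta);
    return { theta * (r[2][1] - r[1][2]) / d,
             theta * (r[0][2] - r[2][0]) / d,
             theta * (r[1][0] - r[0][1]) / d };
}
```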