OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/) - OpenCV answers - Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Wed, 06 Feb 2019 08:40:35 -0600

projectPoints functionality question
http://answers.opencv.org/question/96474/projectpoints-functionality-question/

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/tutorial_py_pose.html#gsc.tab=0 regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints, along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.
In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes on the object in the image plane.
I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I've done the rotation and translation on the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero matrix for rotation and translation, respectively), but the results seem way off. How can I eliminate the projection step to just get the world coordinates of the axes once they've been rotated and translated? Thanks!

bfc_opencv - Tue, 14 Jun 2016 21:19:07 -0500 - http://answers.opencv.org/question/96474/

Roll, Pitch, Yaw ROS right hand notation from Aruco marker rvec
http://answers.opencv.org/question/208481/roll-pitch-yaw-ros-right-hand-notation-from-aruco-marker-rvec/

I'm trying to get the RPY of an Aruco marker from the camera view using the ROS notation. ROS axis notations are right-handed, where positive x points north, y west, and z upwards.
I'm following this post http://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/ but I can't get it to work properly for ROS notation. This is my implementation:
    def rpy_decomposition(self, rvec):
        R, _ = cv2.Rodrigues(rvec)
        sin_x = math.sqrt(R[2, 0] * R[2, 0] + R[2, 1] * R[2, 1])
        singular = sin_x < 1e-6
        if not singular:
            z1 = math.atan2(R[2, 0], R[2, 1])   # around z1-axis
            x = math.atan2(sin_x, R[2, 2])      # around x-axis
            z2 = math.atan2(R[0, 2], -R[1, 2])  # around z2-axis
        else:  # gimbal lock
            z1 = 0                              # around z1-axis
            x = math.atan2(sin_x, R[2, 2])      # around x-axis
            z2 = 0                              # around z2-axis
        z2 = -(2 * math.pi - z2) % (2 * math.pi)
        return z1, x, z2
I can't really find working code for this in Python or C++. Thanks
veilkrand - Wed, 06 Feb 2019 08:40:35 -0600 - http://answers.opencv.org/question/208481/

How to determine the angle of rotation?
http://answers.opencv.org/question/205685/how-to-determine-the-angle-of-rotation/

There is a square in an image with equal sides (inside another square).
![image description](/upfiles/15453320057265702.jpg)
Does OpenCV have functions which can help to efficiently calculate the angle?
ya_ocv_user - Thu, 20 Dec 2018 12:55:19 -0600 - http://answers.opencv.org/question/205685/

Dear, I have rvec (rotation vector) and tvec (translation vector)...
http://answers.opencv.org/question/204063/dear-i-have-rvecrotation-vector-and-tvectranslation-vector/

How can I find the camera pose (EYE vector)? I would like to continue to find the Reflectance. Thank you in advance.

zar zar - Mon, 26 Nov 2018 04:47:56 -0600 - http://answers.opencv.org/question/204063/

Rotation matrix to rotation vector (Rodrigues function)
http://answers.opencv.org/question/85360/rotation-matrix-to-rotation-vector-rodrigues-function/

Hello,
I have a 3x3 rotation matrix that I obtained from stereoCalibrate (using the ROS stereo calibration node). I need to obtain a rotation vector (1x3), therefore I used the Rodrigues formula. When I checked the result against the Matlab rodrigues function from the [Pietro Perona - California Institute of Technology](http://www.mathworks.com/matlabcentral/fileexchange/41511-deprecated-light-field-toolbox-v0-2-v0-3-now-available/content/LFToolbox0.2/SupportFunctions/CameraCal/rodrigues.m) toolbox, I got two different results:
This is the code in cpp:
    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <tf/transform_broadcaster.h>
    #include <ros/param.h>
    #include <iostream>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <cv_bridge/cv_bridge.h>

    using namespace std;
    using namespace cv;

    int main(int argc, char** argv)
    {
        std::vector<double> T, R;
        double cols, rows;
        int i, j;
        cv::Mat rot_vec = Mat::zeros(1, 3, CV_64F), rot_mat = Mat::zeros(3, 3, CV_64F);

        ros::init(argc, argv, "get_extrinsic");
        ros::NodeHandle node;

        if (!node.getParam("translation_vector/cols", cols))
        {
            ROS_ERROR_STREAM("Translation vector (cols) could not be read.");
            return 0;
        }
        if (!node.getParam("translation_vector/rows", rows))
        {
            ROS_ERROR_STREAM("Translation vector (rows) could not be read.");
            return 0;
        }
        T.reserve(cols * rows);

        if (!node.getParam("rotation_matrix/cols", cols))
        {
            ROS_ERROR_STREAM("Rotation matrix (cols) could not be read.");
            return 0;
        }
        if (!node.getParam("rotation_matrix/rows", rows))
        {
            ROS_ERROR_STREAM("Rotation matrix (rows) could not be read.");
            return 0;
        }
        R.reserve(cols * rows);

        if (!node.getParam("translation_vector/data", T))
        {
            ROS_ERROR_STREAM("Translation vector could not be read.");
            return 0;
        }
        if (!node.getParam("rotation_matrix/data", R))
        {
            ROS_ERROR_STREAM("Rotation matrix could not be read.");
            return 0;
        }

        for (i = 0; i < 3; i++)
        {
            for (j = 0; j < 3; j++)
                rot_mat.at<double>(i, j) = R[i * 3 + j];
        }

        std::cout << "Rotation Matrix:" << endl;
        for (i = 0; i < 3; i++)
        {
            for (j = 0; j < 3; j++)
                std::cout << rot_mat.at<double>(i, j) << " ";
            std::cout << endl;
        }
        std::cout << endl;

        std::cout << "Rodrigues: " << endl;
        Rodrigues(rot_mat, rot_vec);
        for (i = 0; i < 3; i++)
            std::cout << rot_vec.at<double>(0, i) << " "; // index fixed: rot_vec is 1x3, so (1, i) read out of bounds (hence the garbage in the output below)
        std::cout << endl;

        ros::spin();
        return 0;
    }
And its output is:
    Rotation Matrix:
    -0.999998 -0.00188887 -0.000125644
    0.0018868 -0.999888 0.014822
    -0.000153626 0.0148217 0.99989

    Rodrigues:
    0.0232688 3.13962 4.94066e-324
But when I load the same rotation matrix in Matlab and use the rodrigues function, I get the following:
    R =
    -1.0000 -0.0019 -0.0001
     0.0019 -0.9999  0.0148
    -0.0002  0.0148  0.9999

    >> rodrigues(R)
    ans =
    -0.0002
     0.0233
     3.1396
I can see that the numbers match, but they are in different positions, and there also seems to be an issue with the signs. Which formula should I trust?

aripod - Mon, 25 Jan 2016 07:54:16 -0600 - http://answers.opencv.org/question/85360/

How to find rotation angle from homography matrix?
http://answers.opencv.org/question/203890/how-to-find-rotation-angle-from-homography-matrix/

I have 2 images and I am finding similar key points with SURF.
I want to find the rotation angle between the two images from the homography matrix. Can someone please tell me how to do that?
    if len(good) > MIN_MATCH_COUNT:
        src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
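If the homography is close to a rotation plus uniform scale (i.e. the two images differ mainly by an in-plane rotation), the angle can be read off the top-left 2×2 block of `M`. A sketch with a synthetic matrix rather than real matched points:

```python
import numpy as np

# Synthetic homography: rotation by 30 degrees plus a translation.
# For a similarity-like M, the 2x2 block is s*[[cos t, -sin t], [sin t, cos t]].
t = np.deg2rad(30)
M = np.array([[np.cos(t), -np.sin(t), 10.0],
              [np.sin(t),  np.cos(t),  5.0],
              [0.0,        0.0,        1.0]])

angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(angle)  # ~30 for this synthetic M
```

For a general homography (perspective change, not just in-plane rotation), a single rotation angle is not well defined; decomposing with `cv2.decomposeHomographyMat` and the camera intrinsics is the more principled route.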
Thank you.

ronak.dedhia - Thu, 22 Nov 2018 23:30:21 -0600 - http://answers.opencv.org/question/203890/

Reverse camera angle w/ aruco tracking
http://answers.opencv.org/question/203907/reverse-camera-angle-w-aruco-tracking/

I have the Aruco tracking working and, from cobbling together stuff from various code samples, ended up with the code below, where `final` is the view matrix passed to the camera. The problem is that the rotation of the camera isn't exactly what I need... not sure exactly which axis is wrong, but you can see in the following video that I want the base of the model to be sitting on the marker - but instead it's not oriented quite right. Any tips to get it right would be great! I'm open to re-orienting it in Blender too if that's the right solution. Just not sure exactly _how_ it's wrong right now.
Video example:
https://youtu.be/-7WDxa-e2Oo
Code:
    const inverse = cv.matFromArray(4, 4, cv.CV_64F, [
         1.0,  1.0,  1.0,  1.0,
        -1.0, -1.0, -1.0, -1.0,
        -1.0, -1.0, -1.0, -1.0,
         1.0,  1.0,  1.0,  1.0
    ]);

    cv.estimatePoseSingleMarkers(markerCorners, 0.1, cameraMatrix, distCoeffs, rvecs, tvecs);
    cv.Rodrigues(rvecs, rout);

    const tmat = tvecs.data64F;
    const rmat = rout.data64F;
    const viewMatrix = cv.matFromArray(4, 4, cv.CV_64F, [
        rmat[0], rmat[1], rmat[2], tmat[0],
        rmat[3], rmat[4], rmat[5], tmat[1],
        rmat[6], rmat[7], rmat[8], tmat[2],
        0.0, 0.0, 0.0, 1.0
    ]);

    const output = cv.Mat.zeros(4, 4, cv.CV_64F);
    cv.multiply(inverse, viewMatrix, output); // element-wise sign flip, not a matrix product
    cv.transpose(output, output);
    const final = output.data64F;

dakom - Fri, 23 Nov 2018 02:24:57 -0600 - http://answers.opencv.org/question/203907/

Identify object on a conveyor belt
http://answers.opencv.org/question/201824/identify-object-on-a-conveyor-belt/

Hello! I'm thinking of trying out OpenCV for my robot.
I want the program to identify the metal parts on the conveyor belt that are lying singly, not the ones lying in clusters.
I will buy a Raspberry Pi with the Raspberry Pi camera module (is this a good idea for this project?).
I want the program to return the X-Y coordinate (the pixel position in the image) of a specific place on the metal part (so that the robot can lift it where it is supposed to be lifted). I would also want the program to determine the orientation (rotation) of the single metal part, with an adjustable degree of freedom.
**Where do I even start?**
A simple drawing of the robot
![image description](https://i.imgur.com/YE3LKpV.png)
An example of what the images the program will process could look like (I have not bought the final camera and lighting yet).
![image description](https://i.imgur.com/OMXMq5M.jpg)
Here is the metal part I want to pick up from the conveyor belt.
![image description](https://i.imgur.com/uA0buvC.jpg)

Hatmpatn - Fri, 26 Oct 2018 01:02:17 -0500 - http://answers.opencv.org/question/201824/

solvePnP with a priori known pitch and roll
http://answers.opencv.org/question/199943/solvepnp-with-a-priori-known-pitch-and-roll/

How do I correctly call solvePnP (to estimate the pose of a large ArUco board) if the board orientation (pitch and roll, not yaw) is known from an IMU?

okalachev - Sat, 22 Sep 2018 13:59:48 -0500 - http://answers.opencv.org/question/199943/

Triangulation gives weird results for rotation
http://answers.opencv.org/question/199673/triangulation-gives-weird-results-for-rotation/

OpenCV version 3.4.2
I am taking a stereo pair and using recoverPose to get the [R|t] pose of the camera. If I start at the origin and use triangulatePoints, the result looks somewhat as expected, although I would have expected the z points to be positive:
These are the poses of the cameras [R|t]
    P0: [1, 0, 0, 0;
         0, 1, 0, 0;
         0, 0, 1, 0]

    P1: [0.9999726146107655, -0.0007533190856300971, -0.007362237354563941, 0.9999683127209806;
         0.0007569149205790131, 0.9999995956157767, 0.0004856419317479311, -0.001340876868928852;
         0.007361868534054914, -0.0004912012195572309, 0.9999727804360723, 0.007847012372698725]
I get these results, where the red dot and the yellow line indicate the camera pose (x positive is right, y positive is down):
![image description](/upfiles/1537317206819271.png)
When I rotate the first camera by 58.31 degrees and then use recoverPose to get the relative pose of the second camera the results are wrong.
Pose matrices where P0 is rotated by 58.31 degrees around the y axis before calling my code below.
    P0: [0.5253219888177297, 0, 0.8509035245341184, 0;
         0, 1, 0, 0;
         -0.8509035245341184, 0, 0.5253219888177297, 0]

    P1: [0.5315721563840478, -0.0007533190856300971, 0.8470126770406503, 0.5319823932782873;
         -1.561037994149129e-05, 0.9999995956157767, 0.0008991799591322519, -0.001340876868928852;
         -0.8470130118915117, -0.0004912012195572309, 0.5315719296650566, -0.8467543535708145]
(x positive is right, y positive is down)
![image description](/upfiles/15373172174565108.png)
The pose of the second frame is calculated as follows:
    new_frame->E = cv::findEssentialMat(last_frame->points, new_frame->points, K, cv::RANSAC, 0.999, 1.0, new_frame->mask);
    int res = recoverPose(new_frame->E, last_frame->points, new_frame->points, K, new_frame->local_R, new_frame->local_t, new_frame->mask);

    // https://stackoverflow.com/questions/37810218/is-the-recoverpose-function-in-opencv-is-left-handed
    // Convert so transformation is P0 -> P1
    new_frame->local_t = -new_frame->local_t;
    new_frame->local_R = new_frame->local_R.t();

    new_frame->pose_t = last_frame->pose_t + (last_frame->pose_R * new_frame->local_t);
    new_frame->pose_R = new_frame->local_R * last_frame->pose_R;
    hconcat(new_frame->pose_R, new_frame->pose_t, new_frame->pose);
I then call triangulatePoints using the K * P0 and K * P1 on the corresponding points.
I feel like this is some kind of coordinate system issue as the points I would expect to have positive z values have a -z value in the plots and so the rotation is behaving strangely. I haven't been able to figure out what I need to do to fix it.
EDIT: Here is a gif of what's going on as I rotate through 360 degrees around Y. The cameras are still parallel. What am I missing? Shouldn't the shape of the point cloud remain the same if both camera poses stay in the same relative positions, even though they have been rotated around the origin? Why are the points squashed into the X axis?
![image description](/upfiles/15373205818094867.gif)

maym86 - Tue, 18 Sep 2018 14:55:35 -0500 - http://answers.opencv.org/question/199673/

Rotation vector interpretation
http://answers.opencv.org/question/197981/rotation-vector-interpretation/

I use the OpenCV cv2.solvePnP() function to calculate rotation and translation vectors. Rotation is returned as rvec (a vector with 3 DOF). I would like to ask for help with interpreting the rvec.
As far as I understand rvec = the rotation vector representation:
- the rotation vector is the axis of the rotation
- the length of the rotation vector is the rotation angle θ in radians, around that axis
Rvec returned by solvePnP:
    rvec =
    [[-1.5147142 ]
     [ 0.11365167]
     [ 0.10590861]]
Then:
    angle_around_rvec = sqrt((-1.5147142)^2 + 0.11365167^2 + 0.10590861^2) = 1.52266 [rad] = 1.52266*180/3.14 [deg] = 87.286 [deg]
**1. Does 3 rvec components correspond to world coordinates? Or what are these directions?**
**2. Can I interpret the vector components as separate rotation angles in radians around components directions?**
My rvec components interpretation:

    angle_around_X = -1.5147142 [rad] = -1.5147142*180/3.14 [deg] = -86.83 [deg]
    angle_around_Y = 0.11365167 [rad] = 0.11365167*180/3.14 [deg] = 6.52 [deg]
    angle_around_Z = 0.10590861 [rad] = 0.10590861*180/3.14 [deg] = 6.07 [deg]
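For comparison, the axis-angle reading of the same rvec as a quick numpy sketch - the three components are angle × unit-axis, not three independent per-axis angles:

```python
import numpy as np

rvec = np.array([-1.5147142, 0.11365167, 0.10590861])

angle = np.linalg.norm(rvec)  # rotation angle in radians (~1.5227)
axis = rvec / angle           # unit rotation axis, in the frame rvec is expressed in
print(np.degrees(angle))      # ~87.2 degrees about that single axis
```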
My usecase:
I have coordinates of four image points. I know the coordinates of these points in the real world, and I know the camera intrinsic matrix. I use solvePnP to get the rotation and translation vector. From the rotation matrix, I would like to find out the angles around the fixed global/world axes: X, Y, Z. I am NOT interested in Euler angles. I want to find out how an object is rotated around the fixed world coordinates (not its own coordinate system).
I would really appreciate your help. I feel lost in rotation.
Thank you in advance.

dziadyge - Thu, 23 Aug 2018 13:55:50 -0500 - http://answers.opencv.org/question/197981/

Coordinate system used in the surface_matching module
http://answers.opencv.org/question/196387/coordinate-system-used-in-the-surface_matching-module/

Hello,
I'm using the surface matching module within a project in Unity through a C++ DLL.
I'm attempting to match two of the same models in different poses, using their vertices as point clouds. So far I have the translation working correctly, but I'm having some trouble interpreting the rotations. I've tried guessing at the coordinate system by applying several combinations of rotations, axis swapping and inverting to the resulting quaternion but I've been unable to reach a complete solution.
I'm uncertain whether the coordinate system used in surface_matching is left- or right-handed, and even whether the quaternion in the Pose3D structure is represented as [x,y,z,w] or [w,x,y,z]. Could anyone offer some advice?
Thanks in advance.

MrCharles - Wed, 25 Jul 2018 11:29:51 -0500 - http://answers.opencv.org/question/196387/

Method for finding Orientation Error using Axis-Angle
http://answers.opencv.org/question/193675/method-for-finding-orientation-error-using-axis-angle/

Hi,
I have reference values for roll, pitch and yaw (Euler angles) and my estimates of the same. I want to find the error between the two. If I convert each of the RPY values to a rotation matrix, I see some possible ways (see below) of finding the orientation error.
I recently came across this openCV function in the calib3d module: [get_rotation_error](https://github.com/opencv/opencv/pull/11506) that uses Rodrigues/Axis-Angle (I think they mean the same) for finding the error between 2 rotation matrices.
**I have 2 questions** -
1) In the method given in [get_rotation_error](https://github.com/opencv/opencv/pull/11506), it seems to "subtract" the two rotation matrices by transposing one (not sure what the negative sign is about)
    error_mat = R_ref * R.transpose() * -1
    error = cv::norm( cv::Rodrigues ( error_mat ))
**How are we supposed to interpret the output?** (I believe the output of cv::norm(rodrigues_vector) is the angle of the Rodrigues vector, according to the OpenCV convention.) Does this mean I simply need to convert it to degrees to find the angle error (between the reference and my estimate) in degrees?
I would also like to mention that **this method keeps returning 3.14159** even for wildly different values of the reference and my estimates. Is there something that I'm missing?
======
2) I thought of another method, slightly different from the above: what if I do the following -
    my_angle = cv::norm (cv::Rodrigues ( R ))
    reference_angle = cv::norm (cv::Rodrigues ( R_ref ))
    error = reference_angle - my_angle
**Is there something wrong** with method 2)? I have tried it, and it gives a different output compared to method 1).
I would be very grateful if someone can answer the above queries or even point me in the right direction.
Thanks!malharjajooTue, 12 Jun 2018 21:05:54 -0500http://answers.opencv.org/question/193675/Strange ArUco behavior / OpenCV SolvePnphttp://answers.opencv.org/question/190853/strange-aruco-behavior-opencv-solvepnp/Hi There,
I tested the accuracy of ArUco markers at several distances, using a board with one non-rotated marker, several markers rotated about their z-axis (known rotation), and several more non-rotated markers. All markers' rotations are measured relative to the first non-rotated marker.
Now I calculate the transformation (with quaternions) from the marker of interest into the reference marker. The output for my accuracy is the angle theta from the axis-angle representation. The strange thing is that the rotation error of the rotated (5°, 20°, 30°, 45°, 90°, 180°) markers is small (max. 1°), while the error of the non-rotated marker (0°) is large (>2°).
I do the subpixel corner refinement, and the detected markers look fine to me. I can exclude an error of the camera, because changing positions (near the center of the image or not) does not change the accuracy. I also switched the IDs.
How can it be that rotated markers are more accurate than non-rotated markers? Could it be a singularity in the detection, or numeric errors?
Thank you for your help!
Sarah

sarah1802 - Fri, 04 May 2018 02:31:17 -0500 - http://answers.opencv.org/question/190853/

How can I get the accuracy between two angles (euler or other)?
http://answers.opencv.org/question/185672/how-can-i-get-the-accuracy-between-two-angles-euler-or-other/

Hi Together,
I have a board of markers and detect their angles with respect to the camera. I know that one marker should be rotated 5° (or another already-known angle) about the z-axis relative to the other marker. Due to the camera-marker "relationship" there is always a "flip offset" of 180°-X (X is there because I captured the pictures not perpendicularly). Now I get, for instance, the angles (Euler ZYX):
    A: -178.155774553622°; -1.81510372911041°; 5.46620496345042°   (rotated marker, 5°)
    B: 175.347721071838°; -1.19249002241927°; -0.334586900200200°  (reference "zero" marker, 0°)
    C: -6.49650437453965°; -0.622613706691140°; 5.80079186365062°  (the difference between the 5° marker and the 0° marker)
    D: 0°; 0°; 5°                                                  (the difference it should be)
My problem is that, depending on the convention (XYZ/ZYX/ZXZ and so on...), there are always different angles. I know it should be like that, but I don't know how to calculate the difference in a proper way, so that I can compare what the real difference over each axis is.
Is there any way to compare the angles in a better way - maybe not as Euler angles, but in a way that says "1° offset"?
Thank you very much
Sarah

sarah1802 - Wed, 28 Feb 2018 04:57:04 -0600 - http://answers.opencv.org/question/185672/

Centering opencv rotation
http://answers.opencv.org/question/182793/centering-opencv-rotation/

I'm having difficulties getting OpenCV rotations to center.
The rotation must retain all data so no clipping is allowed.
My first test case is using 90 and -90 degrees to simplify the transformation matrix (see https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html)
I also thought the best way to observe rotations is to use a simple case where the border pixel values are set to observe how the box rotates.
The Python code I tried came from John Fremlin's blog post on rotation (http://john.freml.in/opencv-rotation).
Below is a picture of the original non-rotated image in python. Use the grey point as (4,9) for reference.
![image description](/upfiles/1516341559715749.png)
Then after running the python script (script below), I get a rotation where it is shifted to the right one column. Note the reference point is at (1,4) when it should be at (0,4)
![image description](/upfiles/15163417082184282.png)
Below is the Python script. I added width and height offsets to the function to allow me to experiment with offsets to the tx and ty rotation parameters. I found that setting the width offset to 1 made the 90 degree rotation case match Matlab, but it didn't help -90.
UPDATE 1/19 9AM: I tried setting offset = -0.5 in the function rotate_about_center() below, and the 90 and -90 degree rotations center as expected. For a 10x10 image, the reasoning why this may work is that the center point defined by (cols/2, rows/2) is not (5,5), but rather (4.5, 4.5). The same logic applies to an 11x11 image: the center is not (5.5,5.5) but rather (5,5). Rotations at 45 and -45 don't center - meaning they visually don't look centered in the box computed of size nw x nh. So I think I understand why a "center" equal to (cols/2 - 0.5, rows/2 - 0.5) works but a center of (cols/2, rows/2) does not; however, most examples I've found do not subtract the 0.5.
    import cv2
    import numpy as np
    from matplotlib import pyplot as plt
    import functools
    import math

    bwimshow = functools.partial(plt.imshow, vmin=0, vmax=255,
                                 cmap=plt.get_cmap('gray'))

    def rotate_about_center(src, angle, widthOffset=0., heightOffset=0., scale=1.):
        w = src.shape[1]
        h = src.shape[0]
        # Add offset to correct for center of images.
        wOffset = -0.5
        hOffset = -0.5
        rangle = np.deg2rad(angle)  # angle in radians
        # now calculate new image width and height
        nw = (abs(np.sin(rangle)*h) + abs(np.cos(rangle)*w))*scale
        nh = (abs(np.cos(rangle)*h) + abs(np.sin(rangle)*w))*scale
        print("nw = ", nw, "nh = ", nh)
        # ask OpenCV for the rotation matrix
        rot_mat = cv2.getRotationMatrix2D((nw*0.5 + wOffset, nh*0.5 + hOffset), angle, scale)
        # calculate the move from the old center to the new center combined
        # with the rotation
        rot_move = np.dot(rot_mat, np.array([(nw-w)*0.5 + widthOffset, (nh-h)*0.5 + heightOffset, 0]))
        # the move only affects the translation, so update the translation
        # part of the transform
        rot_mat[0, 2] += rot_move[0]
        rot_mat[1, 2] += rot_move[1]
        return cv2.warpAffine(src, rot_mat, (int(math.ceil(nw)), int(math.ceil(nh))), flags=cv2.INTER_LANCZOS4)

    def main():
        # create image
        rows = 10
        cols = 10
        angle = -90
        widthOffset = 0  # need 1 to match 90 degrees and ? for -90 degrees.
        heightOffset = 0
        img = np.zeros((rows, cols), np.float32)
        img[:, 0] = 255
        img[:, cols-1] = 255
        img[0, :] = 200
        img[rows-1, :] = 200
        # mark some pixels for reference points.
        img[0, int(cols/2 - 1)] = 0
        img[rows-1, int(cols/2) - 1] = 100
        bwimshow(img)
        plt.show()
        img = rotate_about_center(img, angle, widthOffset, heightOffset)
        print("img shape = ", img.shape)
        print('Data type', img.dtype)
        bwimshow(img)
        plt.show()
        cv2.waitKey(0)
        cv2.destroyAllWindows()

    if __name__ == '__main__':
        main()
I apologize for the reams and reams of code, but hopefully it makes it easier for someone to replicate the problem.

epatton - Fri, 19 Jan 2018 00:27:55 -0600 - http://answers.opencv.org/question/182793/

OpenCV + OpenGL: proper camera pose using solvePnP
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/

I've got a problem with obtaining a proper camera pose from an iPad camera using OpenCV.
I'm using a custom-made 2D marker (based on the [AruCo library](http://www.uco.es/investiga/grupos/ava/node/26)) - I want to render a 3D cube over that marker using OpenGL.
In order to receive the camera pose, I'm using the solvePnP function from OpenCV.
According to [THIS LINK](http://stackoverflow.com/questions/18637494/camera-position-in-world-coordinate-from-cvsolvepnp) I'm doing it like this:
    cv::solvePnP(markerObjectPoints, imagePoints, [self currentCameraMatrix], _userDefaultsManager.distCoeffs, rvec, tvec);
    tvec.at<double>(0, 0) *= -1; // I don't know why I have to do it, but translation in X axis is inverted

    cv::Mat R;
    cv::Rodrigues(rvec, R); // R is 3x3
    R = R.t();              // rotation of inverse
    tvec = -R * tvec;       // translation of inverse

    cv::Mat T(4, 4, R.type());                      // T is 4x4
    T(cv::Range(0, 3), cv::Range(0, 3)) = R * 1;    // copies R into T
    T(cv::Range(0, 3), cv::Range(3, 4)) = tvec * 1; // copies tvec into T
    double *p = T.ptr<double>(3);
    p[0] = p[1] = p[2] = 0;
    p[3] = 1;
camera matrix & dist coefficients are coming from *findChessboardCorners* function, *imagePoints* are manually detected corners of marker (you can see them as green square in the video posted below), and *markerObjectPoints* are manually hardcoded points that represents marker corners:
    markerObjectPoints.push_back(cv::Point3d(-6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d(6, -6, 0));
    markerObjectPoints.push_back(cv::Point3d(6, 6, 0));
    markerObjectPoints.push_back(cv::Point3d(-6, 6, 0));
Because the marker is 12 cm long in the real world, I chose the same size here for easier debugging.
As a result I'm receiving a 4x4 matrix T, which I'll use as the ModelView matrix in OpenGL.
Using GLKit, the drawing function looks more or less like this:
    - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
        // preparations
        glClearColor(0.0, 0.0, 0.0, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
        effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(39), aspect, 0.1f, 1000.0f);

        // set modelViewMatrix
        float mat[16] = generateOpenGLMatFromFromOpenCVMat(T);
        currentModelMatrix = GLKMatrix4MakeWithArrayAndTranspose(mat);
        effect.transform.modelviewMatrix = currentModelMatrix;

        [effect prepareToDraw];
        glDrawArrays(GL_TRIANGLES, 0, 36); // draw previously prepared cube
    }
I'm not rotating everything by 180 degrees around the X axis (as was mentioned in the previously linked article), because it doesn't look necessary.
The problem is that it doesn't work! Translation vector looks OK, but X and Y rotations are messed up :(
I've recorded a video presenting that issue:
[http://www.youtube.com/watch?v=EMNBT5H7-os](http://www.youtube.com/watch?v=EMNBT5H7-os)
I've tried almost everything (including inverting all axes one by one), but nothing actually works.
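For reference, the commonly suggested OpenCV-to-OpenGL conversion (equivalent to the 180° rotation about X mentioned above) negates the Y and Z rows of the 4×4 pose, since OpenCV's camera looks down +Z with +Y down while OpenGL looks down -Z with +Y up. A hedged numpy sketch with a placeholder matrix:

```python
import numpy as np

# Placeholder 4x4 [R|t; 0 1] as produced by a solvePnP pipeline like the one above
T_cv = np.eye(4)

# Negate the Y and Z rows before handing the matrix to OpenGL
cv_to_gl = np.diag([1.0, -1.0, -1.0, 1.0])
T_gl = cv_to_gl @ T_cv
```

Remember that OpenGL additionally expects column-major storage, which is why the transpose in `GLKMatrix4MakeWithArrayAndTranspose` matters.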
What should I do? How should I properly display that 3D cube? The translation/rotation vectors that come from solvePnP look reasonable, so I guess that I just can't correctly map these vectors to OpenGL matrices.

axadiw - Sat, 26 Oct 2013 17:49:13 -0500 - http://answers.opencv.org/question/23089/

wrong rotation matrix when using recoverpose between two very similar images
http://answers.opencv.org/question/180264/wrong-rotation-matrix-when-using-recoverpose-between-two-very-similar-images/

I'm trying to perform visual odometry with a camera on top of a car. Basically I use FAST or goodFeaturesToTrack (I don't know yet which one is more convenient) and then I follow those points with calcOpticalFlowPyrLK. Once I have both previous and current points, I call findEssentialMat and then recoverPose to obtain the rotation and translation matrices.
My program works quite well. It has some errors when there are images with sun/shadow at the sides, but the huge problem is WHEN THE CAR STOPS. When the car stops or its speed is quite low, the frames look very similar (or nearly the same) and the rotation matrix goes crazy (I guess the essential matrix does too).
Does anyone know if this is a common error? Any ideas on how to fix it?
I don't know what information you need to answer this, but it seems to be a conceptual mistake on my part. I have achieved an accuracy of 1° and 10 metres after a 3 km ride, but any time I stop... goodbye!
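A common mitigation (a hedged sketch, not a complete fix): detect the near-zero-baseline case from the magnitude of the tracked flow and skip the essential-matrix/pose update for those frames, since findEssentialMat is ill-conditioned when the camera barely moves:

```python
import numpy as np

def is_stationary(prev_pts, curr_pts, min_median_flow=1.0):
    # Median displacement of tracked points, in pixels; if almost nothing
    # moved, the epipolar geometry is degenerate and the pose update is noise
    flow = np.linalg.norm(curr_pts - prev_pts, axis=1)
    return bool(np.median(flow) < min_median_flow)

prev_pts = np.random.rand(50, 2) * 100
print(is_stationary(prev_pts, prev_pts + 0.1))  # True: skip the update
```

The threshold depends on image resolution and frame rate; keyframing (only estimating pose against the last frame that moved far enough) is the same idea taken further.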
Thank you so much in advance.

MarquesVanVico - Tue, 12 Dec 2017 05:29:00 -0600 - http://answers.opencv.org/question/180264/

get rotation from fundamental matrix
http://answers.opencv.org/question/176270/get-rotation-from-fundamental-matrix/

I wonder if it is possible to get the relative rotation between two uncalibrated cameras, based on an image pair that has feature points to be matched between the two cameras?
I read some articles, and it sounds to me like it is possible to get the relative rotation between the two cams from the fundamental matrix, but after I searched around I only found solutions using the essential matrix, which needs the cameras to be calibrated...
shelpermisc - Fri, 13 Oct 2017 08:54:09 -0500 - http://answers.opencv.org/question/176270/

Find ROI on an image from given reference
http://answers.opencv.org/question/174520/find-roi-on-an-image-from-given-reference/

Hello. I have the following problem to solve. Suppose I have a reference image consisting of some geometric objects and numbers on a homogeneous background. They are sufficiently distinct from the background - all these objects are close to white-gray in color, whereas the background is all close to black.
I have a reference image and sample images, which can have a different scale, some angle of rotation with respect to the reference, and also some horizontal or vertical shift. What I need to do is find the whole ROI, i.e. the region which is clearly distinguished from the background. Moreover, I need to identify regions corresponding to particular geometric objects (e.g. triangles) and regions that contain only numbers.
What method is best to apply here? I am thinking about a SIFT implementation, since it is invariant under affine transformations. But my question is more about technique: how do I implement this? I know that the SIFT transform in OpenCV gives you the coordinates of keypoints and computes descriptors.
The reference image looks like this:
![image description](/upfiles/15056688881601366.jpg)

newt - Sun, 17 Sep 2017 10:11:52 -0500 - http://answers.opencv.org/question/174520/

How to rotate a camera to point to an object on the screen
http://answers.opencv.org/question/172516/how-to-rotate-a-camera-to-point-to-an-object-on-the-screen/

I have a camera pointing in some direction, and a unit vector `C` which describes the orientation of the camera in world coordinates.
There is a point of interest in the image taken by the camera. Given the field of view of the camera and image size, I can compute two vectors in pixel space:
`A`, the principal point (center point of the image), and
`B`, the point of interest in pixel space.
I want to rotate the camera `C` (in world coordinates) such that it now points at the object represented on screen by `B`.
It's unclear to me how to transition between the on-screen pixel-space orientation of vectors `A` and `B` and the world space vector `C`.davidparks21Sun, 20 Aug 2017 16:00:03 -0500http://answers.opencv.org/question/172516/Proper way of rotating 3D points around axishttp://answers.opencv.org/question/169888/proper-way-of-rotating-3d-points-around-axis/Hello!
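One way to make that transition: back-project both pixels through the inverse camera matrix to get viewing rays in camera coordinates, then build the rotation taking the ray through `A` onto the ray through `B` (axis from the cross product, angle from the dot product, assembled with Rodrigues' formula). A sketch with assumed intrinsics:

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
A = np.array([320.0, 240.0])  # principal point in pixels
B = np.array([400.0, 300.0])  # point of interest in pixels (must differ from A)

def ray(K, p):
    # unit viewing ray through pixel p, in camera coordinates
    v = np.linalg.inv(K) @ np.array([p[0], p[1], 1.0])
    return v / np.linalg.norm(v)

a, b = ray(K, A), ray(K, B)
axis = np.cross(a, b)
s = np.linalg.norm(axis)   # sin(theta)
c = np.dot(a, b)           # cos(theta)
axis = axis / s
Kx = np.array([[0, -axis[2], axis[1]],
               [axis[2], 0, -axis[0]],
               [-axis[1], axis[0], 0]])
# Rodrigues' formula; applying R to the current optical axis points it at B
R = np.eye(3) + s * Kx + (1 - c) * (Kx @ Kx)
```

The same rotation, expressed in world coordinates via the camera's orientation, is what would be applied to `C`.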
I have a problem applying a rotation to a set of 3D points. I use a depth map, which stores the Z coordinates of the points, and the inverse of the camera intrinsic matrix to obtain the X and Y coordinates of each point. I need to rotate those 3D points around the Y axis and compute the depth map after the rotation. The code I use is here:
for (int a = 0; a < depthValues.rows; ++a)
{
    for (int b = 0; b < depthValues.cols; ++b)
    {
        float oldDepth = depthValues.at<cv::Vec3f>(a, b)[0];
        if (oldDepth > EPSILON)
        {
            cv::Mat pointInWorldSpace = cameraMatrix.inv() * cv::Mat(cv::Vec3f(a, b, 1), false);
            pointInWorldSpace *= oldDepth;
            cv::Mat rotatedPointInWorldSpace = rotation * pointInWorldSpace;
            float newDepth = rotatedPointInWorldSpace.at<cv::Vec3f>(0, 0)[2];
            cv::Mat rotatedPointInImageSpace = cameraMatrix * rotatedPointInWorldSpace;
            int x = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[0] / newDepth;
            int y = rotatedPointInImageSpace.at<cv::Vec3f>(0, 0)[1] / newDepth;
            x = x < 0 ? 0 : x;
            y = y < 0 ? 0 : y;
            x = x > depthValues.rows - 1 ? depthValues.rows - 1 : x;
            y = y > depthValues.cols - 1 ? depthValues.cols - 1 : y;
            depthValuesAfterConversion.at<cv::Vec3f>(x, y) = cv::Vec3f(newDepth, newDepth, newDepth);
        }
    }
}
Here's how I compute rotation matrix:
float angle = (15.0 * 3.14159265f) / 180.0f;
float rotateYaxis[3][3] =
{
    { cos(angle), 0, -sin(angle) },
    { 0, 1, 0 },
    { sin(angle), 0, cos(angle) }
};
cv::Mat rotation(3, 3, CV_32FC1, rotateYaxis);
Unfortunately, after applying this rotation to my depth map, it looks as if it had been rotated around the X axis. I discovered that when I compute the rotation matrix as if it were a rotation around the X axis, my code works as expected.
My question is: could you point out where I made a mistake in my code? Using the matrix I've described, I expected my depth map to be rotated around the Y axis, not the X axis.
Thank you for your help!
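One thing worth checking in code like the above: `at(a, b)` indexes (row, column), i.e. (y, x), while the pinhole back-projection expects a pixel vector (x, y, 1), i.e. (b, a, 1). Swapping them exchanges the camera X and Y coordinates, which would make an intended Y-axis rotation behave like an X-axis one. A small numpy illustration of the swap (intrinsics are made up; the exact mirror symmetry below holds because fx = fy and cx = cy here):

```python
import numpy as np

K = np.array([[500.0, 0, 64], [0, 500.0, 64], [0, 0, 1]])  # assumed intrinsics
row, col, depth = 20, 100, 2.0                             # one depth-map sample

# correct back-projection uses the pixel vector (x, y, 1) = (col, row, 1)
p_ok = depth * (np.linalg.inv(K) @ np.array([col, row, 1.0]))
# the version in the code above effectively uses (row, col, 1) instead
p_swap = depth * (np.linalg.inv(K) @ np.array([row, col, 1.0]))
# the swap exchanges the camera X and Y coordinates, so a rotation
# about Y then acts on what should have been the X axis
```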
seaxgastFri, 28 Jul 2017 15:36:05 -0500http://answers.opencv.org/question/169888/Retrieve yaw, pitch, roll from rvechttp://answers.opencv.org/question/161369/retrieve-yaw-pitch-roll-from-rvec/ I need to retrieve the attitude angles of a camera (using `cv2` in Python).
- Yaw being the general orientation of the camera on a horizontal plane: toward north = 0°, toward east = 90°, south = 180°, west = 270°, etc.
- Pitch being the "nose" orientation of the camera: 0° = horizontal, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
- Roll being whether the camera is tilted left or right when in your hands: +45° = tilted 45° clockwise when you hold the camera, so +90° (or -90°) would be the angle needed for a portrait picture, for example, etc.
<br>
I already have `rvec` and `tvec` from `solvePnP()`.
Then I have computed:
`rmat = cv2.Rodrigues(rvec)[0]`
If I'm right, the camera position in the world coordinate system is given by:
`position_camera = -np.matrix(rmat).T * np.matrix(tvec)`
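That formula is consistent with the pinhole convention X_cam = R·X_world + t: the camera centre C is the world point mapped to the camera-frame origin, so R·C + t = 0 and hence C = −Rᵀ·t. A quick numpy sanity check with an arbitrary example pose:

```python
import numpy as np

th = np.deg2rad(25)  # arbitrary example pose
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0, 0, 1]])
t = np.array([0.5, -1.0, 2.0])

C = -R.T @ t              # claimed camera centre in world coordinates
residual = R @ C + t      # mapping C into the camera frame must give the origin
```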
But how do I retrieve the corresponding attitude angles (yaw, pitch, and roll as described above) from the point of view of the observer (i.e. the camera)?
I have tried implementing this: http://planning.cs.uiuc.edu/node102.html#eqn:yprmat in a function:
def rotation_matrix_to_attitude_angles(R):
    import math
    import numpy as np
    cos_beta = math.sqrt(R[2,1] * R[2,1] + R[2,2] * R[2,2])
    validity = cos_beta < 1e-6
    if not validity:
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = math.atan2(R[2,1], R[2,2])    # roll  [x]
    else:
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = 0                             # roll  [x]
    return np.array([alpha, beta, gamma])
but it gives me results which are far from reality on a real dataset (even when applying it to the inverse rotation matrix, `rmat.T`).
Am I doing something wrong?
And if yes, what?
All the information I've found is incomplete (it never rigorously states which reference frame is being used, or anything like that).
Thanks.
**Update:**
Rotation order seems to be of greatest importance.
So, do you know to which of these matrices the `cv2.Rodrigues(rvec)` result corresponds?
![rotation matrices](/upfiles/14987816662030655.png)
From: https://en.wikipedia.org/wiki/Euler_angles
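Note that `cv2.Rodrigues` does not correspond to any of those Euler-angle orderings: it converts an axis-angle vector straight into a matrix via R = I + sin(θ)·[k]ₓ + (1−cos(θ))·[k]ₓ², with θ = ‖rvec‖ and k = rvec/θ. A pure-numpy replica of that conversion, which can then be compared against any convention from the table:

```python
import numpy as np

def rodrigues(rvec):
    # axis-angle 3-vector -> rotation matrix, as cv2.Rodrigues computes it
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    Kx = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

# e.g. a rotation of 90 degrees about the z axis
R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
```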
<h3>Update:</h3>
I'm finally done. Here's the solution:
def yawpitchrolldecomposition(R):
    import math
    import numpy as np
    sin_x = math.sqrt(R[2,0] * R[2,0] + R[2,1] * R[2,1])
    singular = sin_x < 1e-6
    if not singular:
        z1 = math.atan2(R[2,0], R[2,1])  # around z1-axis
        x  = math.atan2(sin_x, R[2,2])   # around x-axis
        z2 = math.atan2(R[0,2], -R[1,2]) # around z2-axis
    else:  # gimbal lock
        z1 = 0                           # around z1-axis
        x  = math.atan2(sin_x, R[2,2])   # around x-axis
        z2 = 0                           # around z2-axis
    return np.array([[z1], [x], [z2]])
yawpitchroll_angles = -180*yawpitchrolldecomposition(rmat)/math.pi
yawpitchroll_angles[0,0] = (360-yawpitchroll_angles[0,0])%360 # change rotation sense if needed, comment this line otherwise
yawpitchroll_angles[1,0] = yawpitchroll_angles[1,0]+90
That's all folks!
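The index pattern in this decomposition matches a Z-X-Z factorisation R = Rz(z2)·Rx(x)·Rz(z1), so one way to check it is a numpy round trip: build R from known angles, decompose with the same formulas, and recompose (the function names below are mine):

```python
import numpy as np

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def zxz_angles(R):
    # same index pattern as the decomposition above
    sin_x = np.hypot(R[2, 0], R[2, 1])
    if sin_x < 1e-6:  # gimbal lock
        return 0.0, np.arctan2(sin_x, R[2, 2]), 0.0
    z1 = np.arctan2(R[2, 0], R[2, 1])
    x = np.arctan2(sin_x, R[2, 2])
    z2 = np.arctan2(R[0, 2], -R[1, 2])
    return z1, x, z2

R = Rz(0.4) @ Rx(0.9) @ Rz(1.3)      # arbitrary test rotation
z1, x, z2 = zxz_angles(R)
R_back = Rz(z2) @ Rx(x) @ Rz(z1)     # recomposes to the original matrix
```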
swiss_knightTue, 20 Jun 2017 08:49:20 -0500http://answers.opencv.org/question/161369/Rodrigues rotationhttp://answers.opencv.org/question/163351/rodrigues-rotation/I do not understand the difference between these two equations:
<br>
1. from wikipedia:
![wiki Rodrigues formula](https://wikimedia.org/api/rest_v1/media/math/render/svg/14de5f7bfa4af6a7867008d8fd790d14e3a54530)
https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
<br>
2. from open CV doc:
![cv2 Rodrigues formal](http://docs.opencv.org/2.4/_images/math/8bffbe8d9297cebc136dc8ead9a40cad3940a640.png)
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)
<br>
Where has the **cos(θ)** on v gone on the wiki page in formula 1?
Shouldn't it be: v_rot = cos(θ)·v + sin... ?
Then on the wiki page, there is also no cos(θ) term in the definition of R...
<br>
Or did I miss something?
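Assuming formula 1 is the usual triple-cross form of Rodrigues' formula, the two versions are algebraically the same; the cos(θ) is hidden in the vector triple product identity k×(k×v) = k(k·v) − v:

```latex
\begin{aligned}
v_{\mathrm{rot}} &= v + \sin\theta\,(k\times v) + (1-\cos\theta)\,k\times(k\times v) \\
  &= v + \sin\theta\,(k\times v) + (1-\cos\theta)\,\bigl(k(k\cdot v) - v\bigr) \\
  &= \cos\theta\,v + \sin\theta\,(k\times v) + (1-\cos\theta)\,k(k\cdot v).
\end{aligned}
```

The last line, read as a matrix acting on v, is exactly the OpenCV expression R = cos(θ)·I + (1−cos(θ))·rrᵀ + sin(θ)·[r]ₓ.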
swiss_knightSun, 02 Jul 2017 16:47:32 -0500http://answers.opencv.org/question/163351/Comparing Two Contours: Rotation invariant?http://answers.opencv.org/question/157572/comparing-two-contours-rotation-invariant/ I found one approach for estimating the orientation of two contours [here](http://answers.opencv.org/question/113492/orientation-of-two-contours/), which rotates one contour and checks the distance to the original.
I changed the headers to
#include <opencv2/core.hpp>
#include <opencv2/shape.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv_modules.hpp>
#include <iostream>
#include <fstream>
#include <string.h>
and the main to:
int main(int argc, char* argv[])
It may be kind of a stupid question, but first of all I don't know why the transformation of the contours should improve the result of computeDistance. Isn't `cv::ShapeContextDistanceExtractor` invariant to rotation and translation, since it does an internal fit?
If that were the case, my results would be coherent, because I always get 0 as the distance (but unfortunately no image either). Also, the results from another program, where I match rotated contours with `cv::ShapeContextDistanceExtractor` as well as with the Hausdorff metric, do not seem to be wrong (small distances, but never exactly 0). JoeBroeselWed, 07 Jun 2017 13:45:36 -0500http://answers.opencv.org/question/157572/how to calculate the inliers points from my rotation and translation matrix?http://answers.opencv.org/question/138651/how-to-calculate-the-inliers-points-from-my-rotation-and-translation-matrix/ How can I calculate the inlier points from my rotation and translation matrices?
if I have the points lists
    std::vector<Point3d> opoints;
    std::vector<Point2d> ipoints;
and I have the rotation and translation matrices. How can I calculate the inlier points?
I know that cv::solvePnPRansac will calculate the inliers, rotation, and translation from the two point lists, but I need to calculate the inliers from my own rotation and translation.
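In principle, yes: you can reproduce the inlier test that solvePnPRansac applies internally — project the object points with the known rotation and translation, and call a point an inlier when its reprojection error is below a threshold (8 px is solvePnPRansac's documented default). A numpy sketch with a plain pinhole model and no distortion; the helper name and all values here are illustrative:

```python
import numpy as np

def inlier_mask(opoints, ipoints, R, t, K, thresh=8.0):
    # project object points with x = K (R X + t), then de-homogenise
    cam = (R @ opoints.T).T + t
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - ipoints, axis=1)  # reprojection error, pixels
    return err < thresh

# tiny self-consistent example
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
opoints = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
cam = (R @ opoints.T).T + t
ipoints = (K @ cam.T).T
ipoints = ipoints[:, :2] / ipoints[:, 2:3]
ipoints[0] += 50.0                 # corrupt one observation -> outlier
mask = inlier_mask(opoints, ipoints, R, t, K)
```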
Thanks for your supportMohammed OmarFri, 07 Apr 2017 16:35:39 -0500http://answers.opencv.org/question/138651/Rotation Matrix from recoverPose is not symmetrichttp://answers.opencv.org/question/134522/rotation-matrix-from-recoverpose-is-not-symmetric/Hello,
I am using the recoverPose() function in OpenCV, but I don't get a symmetric rotation matrix.
But shouldn't it return a symmetric rotation matrix?
My results look like:
    R = [ 0.998585723955729,   0.02348487299776981,  0.04769709270061936;
         -0.02232705043463718, 0.9994464428542043,  -0.02466395517687959;
         -0.04824991948907295, 0.02356413814160357,  0.9985572976364158 ]
    t = [ -0.9982022017535427;
           0.005659929033547541;
           0.05966849769949602 ]
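A side note on terminology: a rotation matrix is generally not symmetric; the property it must satisfy is orthogonality, RᵀR = I with det(R) = +1. The matrix above does satisfy that, as a quick numpy check shows:

```python
import numpy as np

R = np.array([[ 0.998585723955729,    0.02348487299776981,  0.04769709270061936],
              [-0.02232705043463718,  0.9994464428542043,  -0.02466395517687959],
              [-0.04824991948907295,  0.02356413814160357,  0.9985572976364158]])

orthogonality_error = np.abs(R.T @ R - np.eye(3)).max()  # ~0 for a valid rotation
determinant = np.linalg.det(R)                           # +1 for a valid rotation
asymmetry = np.abs(R - R.T).max()                        # clearly non-zero: R is not symmetric
```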
Both of my pictures are:
![image description](/upfiles/14897600475475862.jpg)
![image description](/upfiles/14897600525790462.jpg)mirnyyFri, 17 Mar 2017 09:09:55 -0500http://answers.opencv.org/question/134522/Controlling focal length and rotation in bundle adjuster modulehttp://answers.opencv.org/question/127731/controlling-focal-length-and-rotation-in-bundle-adjuster-module/ Hello,
Recently I have been working with various OpenCV modules, mostly the stitching and features2d modules, to make stitching reliable for the app I am working on. After a lot of study and effort I am able to detect and match keypoints as required, but for now I am stuck at controlling focal length and rotation. Right now the stitching results are very accurate in some cases, but in most cases the result is very bad.
After checking the code of various modules (camera estimator, adjuster, warper, etc.), I think the problem is within the BundleAdjusterRay module. I know there are other adjuster modules too, and I have tried them all, but from what I can see BundleAdjusterRay is the best workaround: it does its job and gives the right results when the images are taken with a perfect angle and rotation between each of them. But since the images are supposed to be taken by hand with a phone camera, I believe there will always be minor rotation or angle errors, which I want to control and balance by tweaking and/or customizing the OpenCV module code.
Where I am stuck right now is that I cannot figure out how to control rotation and focal length within the BundleAdjusterRay class, which is built on its implementations of the calcError and calcJacobian methods and on the CvLevMarq Levenberg-Marquardt solver. I know they are required, but I want to make them balanced and controllable based on the known rotation and angle parameters of my images, so that the adjuster will just not give me very bad results.
PS: My app involves stitching a few images, say 10 to 20; the regions and the order of the images are static, and the rotation between two images is also known up to some level of accuracy. Images are taken with an iPhone/iPad camera.
hsquaretechnologyMon, 13 Feb 2017 07:34:27 -0600http://answers.opencv.org/question/127731/How to extract Angle , Scale , transition and shear for rotated and scaled objecthttp://answers.opencv.org/question/115198/how-to-extract-angle-scale-transition-and-shear-for-rotated-and-scaled-object/**Problem description**
I have a rotated and scaled scene and need to correct the scale and rotation, then find the rectangle of a known object in the final corrected image.
*Input*
<pre>
-Image scene from camera or scanner
-Normalized (normal scale and 0° rotation) template image for the known object
</pre>
*Required output*
<pre>
1-correct the scale and the rotation of the input scene
2-find the rectangle
</pre>
The following figure explains the input and the steps to find the output:
![image description](/upfiles/14805950097423279.png)
I'm using the following sample, [Features2D + Homography to find a known object](http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html), to find the rotated and scaled object.
I used the following code to do the processing:
//read the input images
Mat img_object = imread( strObjectFile, CV_LOAD_IMAGE_GRAYSCALE );
Mat img_scene = imread( strSceneFile, CV_LOAD_IMAGE_GRAYSCALE );
if( img_scene.empty() || img_object.empty() )
{
    return ERROR_READ_FILE;
}
//Step 1: Find the object in the scene and find the H matrix
//-- 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_object, keypoints_object );
detector.detect( img_scene, keypoints_scene );
//-- 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_object, keypoints_object, descriptors_object );
extractor.compute( img_scene, keypoints_scene, descriptors_scene );
//-- 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist )
        min_dist = dist;
    if( dist > max_dist )
        max_dist = dist;
}
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
    if( matches[i].distance < 3*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}
Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
             good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
             vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//Draw matched points
imwrite( "c:\\temp\\Matched_Pints.png", img_matches );
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint( 0, 0 );
obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows );
obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H );
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0 ), scene_corners[1] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0 ), scene_corners[2] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0 ), scene_corners[3] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0 ), scene_corners[0] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
//-- Show detected matches
//imshow( "Good Matches & Object detection", img_matches );
imwrite( "c:\\temp\\Object_detection_result.png", img_matches );
//Step 2: correct the scene scale and rotation, and locate the object in the recovered scene
Mat img_Recovered;
//1-calculate the new image size for the recovered image
//  (I haven't found the correct way to size it yet; for now take the diagonal and ignore the scale)
int idiag = sqrt( double(img_scene.cols * img_scene.cols + img_scene.rows * img_scene.rows) );
Size ImgSize = Size( idiag, idiag ); //initial image size
//2-find the warped (recovered, corrected) scene
warpPerspective( img_scene, img_Recovered, H, ImgSize, CV_INTER_LANCZOS4 + WARP_INVERSE_MAP );
//3-find the logo (object, model, ...) in the recovered scene
std::vector<Point2f> Recovered_corners(4);
perspectiveTransform( scene_corners, Recovered_corners, H.inv() );
imwrite( "c:\\temp\\Object_detection_Recoverd.png", img_Recovered );
It works fine to detect the object; the object is saved without scale, without rotation, and without shear.
Now I want to get the original scene image back after denoising it, so I need to calculate the following:
<pre>
-Rotation angle
-Scale
-Shear angle
-Translation
</pre>
So my question is: how do I use the homography matrix from the sample to get the mentioned values?
I had tried to get the recovered (denoised) image by using warpPerspective with WARP_INVERSE_MAP, but the image is not converted correctly. Here is the object image I used:
![image description](/upfiles/14804375986478971.png)
and here is the scene image I used:
![image description](/upfiles/14804376603722508.jpg)
Then, after calculating the homography matrix H as described in the sample,
I used the following code:
Mat img_Recovered;
warpPerspective(img_scene,img_Recovered,H,img_scene.size(),CV_INTER_LANCZOS4+WARP_INVERSE_MAP);
I got the following image:
![image description](/upfiles/14804379443071238.png)
As you can see, the recovered image is not returned correctly; I noticed that the recovered image is drawn from the starting point of the object.
There are a lot of questions here:
<pre>
1-how to calculate the correct image size of the recovered image
2-how to get the recovered image correctly
3-how to get the object rectangle in the recovered image
4-how to know the rotation angle and scale
</pre>
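On question 4 (and the shear): if the last row of H is close to [0, 0, 1], the upper-left 2×2 block can be decomposed as A = R(θ)·[[sx, sh], [0, sy]], giving θ = atan2(c, a), sx = √(a² + c²), shear = (a·b + c·d)/sx, sy = det(A)/sx, with the translation in the last column. A numpy sketch on a synthetic H (the values are illustrative, not from the images above):

```python
import numpy as np

def decompose_affine(H):
    # assumes the perspective row of H is ~ [0, 0, 1]
    # and A = H[:2, :2] = R(theta) @ [[sx, sh], [0, sy]]
    H = H / H[2, 2]
    a, b = H[0, 0], H[0, 1]
    c, d = H[1, 0], H[1, 1]
    sx = np.hypot(a, c)
    theta = np.degrees(np.arctan2(c, a))  # rotation angle
    shear = (a * b + c * d) / sx          # shear term sh
    sy = (a * d - b * c) / sx             # second scale, from det(A)
    tx, ty = H[0, 2], H[1, 2]
    return theta, sx, sy, shear, tx, ty

# build a test H: rotate 30 deg, scale (2, 3), shear 0.5, translate (10, 20)
th = np.deg2rad(30)
Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
S = np.array([[2.0, 0.5], [0.0, 3.0]])
H = np.eye(3)
H[:2, :2] = Rm @ S
H[:2, 2] = [10, 20]
theta, sx, sy, shear, tx, ty = decompose_affine(H)
```

If the perspective terms of H are significant, this affine decomposition is only an approximation.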
Thanks for your help.essamzakyMon, 28 Nov 2016 11:47:11 -0600http://answers.opencv.org/question/115198/Matrix rotation error and how pull out a single elementhttp://answers.opencv.org/question/111785/matrix-rotation-error-and-how-pull-out-a-single-element/ Hi,
I have a problem with matrix rotation. I am using the aruco module to get the marker position. Then I use the function Rodrigues() to get a rotation matrix. I would like to pull single elements out of the matrix, which I need to calculate the orientation of the marker. But I keep getting an error.
The code follows:
cv::Mat rvecs, tvecs;
// detect markers and estimate pose
aruco::detectMarkers(image, dictionary, corners, ids, detectorParams, rejected);
if (estimatePose && ids.size() > 0)
    aruco::estimatePoseSingleMarkers(corners, markerLength, camMatrix, distCoeffs, rvecs, tvecs);
double currentTime = ((double)getTickCount() - tick) / getTickFrequency();
Mat R = Mat::zeros(3, 3, CV_64F);
Rodrigues(rvecs, R);
Here is the formula:
Mat Thz = (atan2 (R (3), R (0))) * (180 / M_PI)
But the problem is with the variables.
When I use `R.at<double>(1, 1)`, a single matrix element is displayed. But when I write `Mat thz = atan2(R.at<double>(1, 1), R.at<double>(1, 2));`,
it displays the error `C2440: 'initializing': cannot convert from 'double' to 'cv::Mat'`.
How do I handle the matrix R so that I can use its elements with functions like atan2?DrN22Thu, 10 Nov 2016 04:37:04 -0600http://answers.opencv.org/question/111785/