OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Mon, 26 Nov 2018 09:38:03 -0600

creating a rotation matrix in OpenCV
http://answers.opencv.org/question/204092/creating-a-rotation-matrix-in-opencv/

I am trying to recreate a rotation matrix in OpenCV from its direction vector along the z-axis, but have not been having any success with it. I am also not sure if this can be done in a unique fashion, but I was hoping to get some help from the forum.
So, initially, I create the rotation matrix that I want to recover. I do this as follows:
import cv2
import numpy as np

def generate_fake_opencv_calibrations(num_calibs=10):
    calibs = list()
    cm = np.asarray([[300.0, 0.0, 400.0], [0.0, 300.0, 700.0], [0.0, 0.0, 1.0]])
    for i in range(num_calibs):
        image_c = np.random.rand(6, 2) * 100.0
        world_c = np.random.rand(6, 3)
        _, r, t = cv2.solvePnP(world_c, image_c, cm, None, flags=cv2.SOLVEPNP_ITERATIVE)
        # Convert the rotation vector to a rotation matrix
        r, _ = cv2.Rodrigues(r)
        calibs.append(r)
    return calibs
So, the rotation matrix has the property that its rows and columns are orthonormal with respect to each other. What I was hoping to do is recreate this rotation matrix when I only have the direction vector for its z-axis.
So, taking one of the returned matrices c, I have:
z = c[:, 2]
Now I wrote a function to create the other two axes as:
def create_orthonormal_basis(v):
    v = v / np.linalg.norm(v)
    if v[0] > 0.9:
        b1 = np.asarray([0.0, 1.0, 0.0])
    else:
        b1 = np.asarray([1.0, 0.0, 0.0])
    b1 -= v * np.dot(b1, v)
    b1 *= np.reciprocal(np.linalg.norm(b1))
    b2 = np.cross(v, b1)
    return b1, b2, v
I can then create the matrix as:
x, y, z = create_orthonormal_basis(z)
mat = np.asarray([[x[0], y[0], z[0]],
                  [x[1], y[1], z[1]],
                  [x[2], y[2], z[2]]])
I was expecting this matrix to map a given point to approximately the same location; however, this was not the case. For a random case, I am getting the following:
For the input matrix, given by:
[[-0.5917787  -0.69902414  0.40145141]
 [ 0.76717701 -0.64127655  0.01427625]
 [ 0.24746193  0.31643267  0.91576905]]
The output is:
[[ 0.91588032  0.          0.40145141]
 [-0.00625761  0.99987851  0.01427625]
 [-0.40140263 -0.01558747  0.91576905]]
I take a random input like:
[0.33385406 0.91243684 0.33755828]
and map it using the original and reconstructed matrices and the output is quite different:
Original: [-0.69986985 -0.32418012 0.68046643]
Reconstructed: [0.44128362 0.91505592 0.16089295]
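A minimal numpy sketch of why the two matrices disagree: the z column only fixes two of the three rotational degrees of freedom, so any extra rotation about z gives a different, equally valid orthonormal matrix with the same third column. (`basis_from_z` below re-implements the `create_orthonormal_basis` logic above; the `abs()` in the seed test is a small assumption.)

```python
import numpy as np

def basis_from_z(z):
    # Gram-Schmidt reconstruction of a basis from z, returned as a matrix
    # with columns (x, y, z), mirroring the question's approach.
    z = z / np.linalg.norm(z)
    seed = np.array([0.0, 1.0, 0.0]) if abs(z[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    x = seed - z * np.dot(seed, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

R1 = basis_from_z(np.array([0.4, 0.01, 0.91]))
# Compose an arbitrary rotation about z: the result is a DIFFERENT
# orthonormal matrix with the SAME third column, so z alone cannot
# pin down the original rotation.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
R2 = R1 @ Rz
```

So the reconstruction leaves one degree of freedom (the roll about z) undetermined, which is why the reconstructed matrix maps points differently from the original.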
pamparana | Mon, 26 Nov 2018 09:38:03 -0600

Triangulation gives weird results for rotation
http://answers.opencv.org/question/199673/triangulation-gives-weird-results-for-rotation/

OpenCV version 3.4.2
I am taking a stereo pair and using recoverPose to get the [R|t] pose of the camera. If I start at the origin and use triangulatePoints, the result looks somewhat as expected, although I would have expected the z values to be positive.
These are the poses of the cameras [R|t]
P0: [1, 0, 0, 0;
0, 1, 0, 0;
0, 0, 1, 0]
P1: [0.9999726146107655, -0.0007533190856300971, -0.007362237354563941, 0.9999683127209806;
0.0007569149205790131, 0.9999995956157767, 0.0004856419317479311, -0.001340876868928852;
0.007361868534054914, -0.0004912012195572309, 0.9999727804360723, 0.007847012372698725]
I get these results where the red dot and the yellow line indicates the camera pose (x positive is right, y positive is down):
![image description](/upfiles/1537317206819271.png)
When I rotate the first camera by 58.31 degrees and then use recoverPose to get the relative pose of the second camera the results are wrong.
Pose matrices where P0 is rotated by 58.31 degrees around the y axis before calling my code below.
P0: [0.5253219888177297, 0, 0.8509035245341184, 0;
0, 1, 0, 0;
-0.8509035245341184, 0, 0.5253219888177297, 0]
P1: [0.5315721563840478, -0.0007533190856300971, 0.8470126770406503, 0.5319823932782873;
-1.561037994149129e-05, 0.9999995956157767, 0.0008991799591322519, -0.001340876868928852;
-0.8470130118915117, -0.0004912012195572309, 0.5315719296650566, -0.8467543535708145]
(x positive is right, y positive is down)
![image description](/upfiles/15373172174565108.png)
The pose of the second frame is calculated as follows:
new_frame->E = cv::findEssentialMat(last_frame->points, new_frame->points, K, cv::RANSAC, 0.999, 1.0, new_frame->mask);
int res = recoverPose(new_frame->E, last_frame->points, new_frame->points, K, new_frame->local_R, new_frame->local_t, new_frame->mask);
// https://stackoverflow.com/questions/37810218/is-the-recoverpose-function-in-opencv-is-left-handed
// Convert so transformation is P0 -> P1
new_frame->local_t = -new_frame->local_t;
new_frame->local_R = new_frame->local_R.t();
new_frame->pose_t = last_frame->pose_t + (last_frame->pose_R * new_frame->local_t);
new_frame->pose_R = new_frame->local_R * last_frame->pose_R;
hconcat(new_frame->pose_R, new_frame->pose_t, new_frame->pose);
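For reference, the pose chaining in the last few lines can be transcribed into a small numpy sketch (as written above; whether the rotation update should instead be `last_R * local_R` is exactly the kind of convention issue being debugged):

```python
import numpy as np

def chain_pose(last_R, last_t, local_R, local_t):
    # Transcription of the C++ chaining above: accumulate translation in the
    # previous frame's orientation, then update the rotation.
    pose_t = last_t + last_R @ local_t
    pose_R = local_R @ last_R
    return pose_R, pose_t

# Sanity check: from an identity start pose, a pure-translation step
# must leave the rotation unchanged.
R1, t1 = chain_pose(np.eye(3), np.zeros(3), np.eye(3), np.array([0.0, 0.0, 1.0]))
```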
I then call triangulatePoints with K * P0 and K * P1 on the corresponding points.
I feel like this is some kind of coordinate system issue, as the points I would expect to have positive z values have negative z in the plots, and so the rotation is behaving strangely. I haven't been able to figure out what I need to do to fix it.
EDIT: Here is a gif of what's going on as I rotate through 360 degrees around Y. The cameras are still parallel. What am I missing? Shouldn't the shape of the point cloud remain the same if both camera poses keep their relative positions, even though they have been rotated around the origin? Why are the points squashed into the X axis?
![image description](/upfiles/15373205818094867.gif)

maym86 | Tue, 18 Sep 2018 14:55:35 -0500

Relative rotation between Aruco markers
http://answers.opencv.org/question/198487/relative-rotation-between-aruco-markers/

I want to find the relative rotation angles between two Aruco markers, using python and cv2. I'm referring to my markers as the "test" marker and the "reference" marker.
I have successfully retrieved the pose of the markers using cv2.aruco.estimatePoseSingleMarkers. This gives me a "test_rvec" for the test marker and a "ref_rvec" for the reference marker.
As I understand it, rvec (same format as used by cv2.solvePnP, which I believe aruco uses under the covers) is the rotation of the marker relative to the camera. So, to get the rotation of the test marker relative to the reference marker, I do:
R_ref_to_cam = cv2.Rodrigues(ref_rvec)[0] #reference to camera
R_test_to_cam = cv2.Rodrigues(test_rvec)[0] #test to camera
R_cam_to_ref = np.transpose(R_ref_to_cam) #inverse of reference to camera
R_test_to_ref = np.matmul(R_test_to_cam,R_cam_to_ref) #test to reference
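A synthetic sanity check of this composition, using a minimal pure-numpy stand-in for cv2.Rodrigues (an assumption so the snippet runs without OpenCV): build a known relative rotation, then verify which multiplication order recovers it under the column-vector convention.

```python
import numpy as np

def rodrigues(rvec):
    # Minimal axis-angle -> matrix conversion, standing in for cv2.Rodrigues.
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Ground truth: the test marker is the reference marker rotated 90 degrees
# about its z axis, seen by a camera with some arbitrary orientation.
R_ref_to_cam = rodrigues([0.1, 0.2, 0.3])
R_rel_true = rodrigues([0.0, 0.0, np.pi / 2])
R_test_to_cam = R_ref_to_cam @ R_rel_true
R_cam_to_ref = R_ref_to_cam.T
# Under the column-vector convention, cam->ref must be applied AFTER
# test->cam, i.e. multiplied on the left; the opposite order gives a
# different matrix in general.
R_recovered = R_cam_to_ref @ R_test_to_cam
```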
Then I use cv2.decomposeProjectionMatrix to compute the euler angles of the resulting matrix (R_test_to_ref).
In testing, with both markers flat on my desk and with the same orientation, with the camera pointed straight down, I get X=0, Y=0, Z=0 as expected, since the relative orientation between the markers is zero.
However, if I rotate one marker 90 degrees in the "z" direction (still keeping it flat on my desk), I get X=30, Y=30, Z=90. I would expect to see two of the axes report as 90 degrees and the third (rotational) axis report 0 degrees. What am I doing wrong?
EDIT: it seems that the 30 degrees I was seeing was the angle of the camera relative to my desk (at least it was non-zero). Still not sure why this doesn't "cancel out" with what I am doing with the rvecs...

Polywogon | Fri, 31 Aug 2018 12:22:05 -0500

Adjust image rotation to camera rotation
http://answers.opencv.org/question/197058/adjust-image-rotation-to-camera-rotation/

Hi,
I have written a program which blends a space shuttle icon into an image which is projected. Additionally, my program uses feature matching, homography and perspective transform so that the icon can be moved as the camera moves. In other words, moving the camera over the projected image moves the icon around.
I am done with everything described above. One thing I still want to accomplish is to rotate the icon image with the camera. That is, do not always draw the icon as it is, but rotate it left and right as the camera rotates.
Here is my code which blends the icon into the image:
# blend the shuttleIcon into the image
processedImage = cv2.cvtColor(processedImage, cv2.COLOR_BGR2BGRA)
w, h, c = shuttleIcon.shape
for i in range(0, w):
    for j in range(0, h):
        if shuttleIcon[i, j][3] != 0:
            processedImage[trackedCenterPoint[1] + i, trackedCenterPoint[0] + j] = shuttleIcon[i, j]
processedImage = cv2.cvtColor(processedImage, cv2.COLOR_BGRA2BGR)
How can I incorporate the camera rotation into this? Any thoughts?
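One way to sketch the missing piece, assuming the in-plane rotation angle has already been extracted (e.g. from the homography): rotate the icon about its centre before running the blending loop above. `rotate_icon_nn` is a hypothetical dependency-free stand-in; in practice `cv2.getRotationMatrix2D` plus `cv2.warpAffine` would do this in two calls.

```python
import numpy as np

def rotate_icon_nn(icon, angle_rad):
    # Nearest-neighbour rotation of a small RGBA icon about its centre.
    h, w = icon.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    out = np.zeros_like(icon)
    for i in range(h):
        for j in range(w):
            # inverse-map each destination pixel back into the source
            y = c * (i - cy) + s * (j - cx) + cy
            x = -s * (i - cy) + c * (j - cx) + cx
            yi, xi = int(round(y)), int(round(x))
            if 0 <= yi < h and 0 <= xi < w:
                out[i, j] = icon[yi, xi]
    return out

# A 5x5 RGBA icon with one opaque marker pixel on the right edge.
icon = np.zeros((5, 5, 4), dtype=np.uint8)
icon[2, 4] = 255
rotated = rotate_icon_nn(icon, np.pi / 2)  # rotate by 90 degrees
```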
NOTE: I do not have the calibration matrix.

Raki | Tue, 07 Aug 2018 21:14:28 -0500

Aruco marker pose tracking method using fixed camera
http://answers.opencv.org/question/193101/aruco-marker-pose-tracking-method-using-fixed-camera/

Here's some context: I'm trying to track an aruco marker on a robot using a camera placed at an unknown angle at a certain height. I have an aruco marker placed on the floor, which I use as my world frame and to calibrate the cameras. I need to get the pose (x, y, z, roll, pitch, yaw) wrt the world frame. Here's what I do:
- I use estimatePoseSingleMarkers() to track the world marker and I store the rvecs/tvecs somewhere
- I run my main code which uses estimatePoseSingleMarkers() to detect the robot's rvecs/tvecs
- I transform the world rvecs/tvecs to the camera frame and then compose that with the robot's rotation and translation vectors. I have tried and tested both composeRT() and Rodrigues()
I am definitely sure I have the aruco detection part all correct. I am able to detect each Aruco marker and able to draw their axes, so there's no problem there. The detection is very consistent. I have also gone through this 'http://answers.opencv.org/question/122301/aruco-marker-tracking-with-fixed-camera/' and my code is very much in line with the answer there. However, I have some questions:
**1. Is storing the rvecs/tvecs of the world marker the right way to go? How accurate are the stored values? Should I rather use a Charuco board to perform this calibration?**
**2. When I run the code to compute the pose (x, y, z, roll, pitch, yaw), I get highly inaccurate results. The pose values are either constant or rapidly switch to a ridiculous value like 2.3345e-245 or something. I see the values change when I move my robot, but the numbers don't really make sense.**
**I have used the final composed rotation matrix (like this: http://planning.cs.uiuc.edu/node103.html) and the translation vector directly. I have triple checked my code but I cannot find anything wrong. Is it my strategy? Or do you think I have maybe computed the rvecs, tvecs or the pose incorrectly?**
Any help will be wonderful!
P.S. I am not sure if I can share my code here (it's part of a larger project), but if you think you can help, I can definitely share some snippets over email.
kmath | Tue, 05 Jun 2018 15:26:02 -0500

Obtaining Euler angles from Rodrigues Rotation Matrix
http://answers.opencv.org/question/189455/obtaining-euler-angles-from-rodrigues-rotation-matrix/

Hi,
I wish to extract Euler angles from the **rvec** output parameter of [cv::solvePnp](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool%20solvePnP(InputArray%20objectPoints,%20InputArray%20imagePoints,%20InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20OutputArray%20rvec,%20OutputArray%20tvec,%20bool%20useExtrinsicGuess,%20int%20flags)).
I understand that 3x1 rvec needs to be passed on to the [Rodrigues](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void%20Rodrigues(InputArray%20src,%20OutputArray%20dst,%20OutputArray%20jacobian)) function to obtain the 3x3 rotation matrix.
But to obtain Euler angles, I need to use a fixed convention (see [this](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix), for example Z-Y-X, etc.), which requires the rotation matrix to be obtained from a **fixed order of multiplication of matrices**,
**eg: for Z-Y-X convention,
R_resultant = Rz * Ry * Rx**.
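For concreteness, a small numpy sketch of this Z-Y-X convention and its inverse mapping (a common textbook decomposition, not necessarily the one Rodrigues implies):

```python
import numpy as np

def euler_zyx_to_matrix(roll, pitch, yaw):
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll): the Z-Y-X order named above.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    # Inverse mapping, valid away from the pitch = +/-90 deg singularity.
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

R = euler_zyx_to_matrix(0.1, -0.4, 1.2)
```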
I have looked into the source code [here](https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L251) for the Rodrigues function but don't quite understand how the matrix is formed. Hence -
**My Question**: What is the **convention** (Z-Y-X, X-Y-Z, etc.) of the 3x3 rotation matrix obtained from Rodrigues? Without this information I cannot obtain Euler angles. I have seen [this tutorial](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) for real time pose estimation, but unfortunately I don't understand the assumption for the convention (and hence I had asked another question about that - [here](http://answers.opencv.org/question/189414/issue-in-opencv-sample-tutorial-for-real-time-pose-estimation/?comment=189453#post-id-189453)).

malharjajoo | Mon, 16 Apr 2018 10:54:16 -0500

Issue in OpenCV sample tutorial for real time pose estimation
http://answers.opencv.org/question/189414/issue-in-opencv-sample-tutorial-for-real-time-pose-estimation/

Hi,
I was trying out the real time pose estimation tutorial from OpenCV ( [this tutorial](https://docs.opencv.org/3.3.0/dc/d2c/tutorial_real_time_pose.html) ), and while looking into the source code, I may have found a (possible) inconsistency -
The functions for converting between Rotation Matrix and Euler Angles ( please see [euler2rot](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L229) and [rot2euler](https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/src/Utils.cpp#L189) from the C++ source code for the tutorial).
I am aware that there are **6 types** of [Tait-Bryan](https://en.wikipedia.org/wiki/Euler_angles) angles (colloquially referred to as **Euler angles**).
**My issue:**
- The above source code functions **do not** adhere to any of the 6 types. I checked a few sources to verify this -
I checked the [section on the wikipedia page](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix) and even wrote a simple Matlab script using symbolic variables (please see below, at the end of the question).
- **The source code in the functions seems to correspond to the Y-Z-X Tait-Bryan angles, but with the pitch and yaw interchanged.** Why is this the case? (Does this have something to do with the fact that the camera's coordinate axes have z facing forward, y facing downward and x facing right?)
- And finally, I think Z-Y-X Tait-Bryan angles are the industry standard in robotics, so is there any particular reason for using Y-Z-X (with the pitch and yaw interchanged, as noticed above)?
**================= Matlab script below for reference ============**
syms roll
syms pitch
syms yaw
Rx = [1 0 0; 0 cos(roll) -sin(roll); 0 sin(roll) cos(roll)];
Ry = [cos(pitch) 0 sin(pitch); 0 1 0; -sin(pitch) 0 cos(pitch)];
Rz = [cos(yaw) -sin(yaw) 0; sin(yaw) cos(yaw) 0; 0 0 1];
R_Y_Z_X = Ry * Rz * Rx
**Output**:
R_Y_Z_X =
[ cos(pitch)*cos(yaw), sin(pitch)*sin(roll) - cos(pitch)*cos(roll)*sin(yaw), cos(roll)*sin(pitch) + cos(pitch)*sin(roll)*sin(yaw)]
[ sin(yaw), cos(roll)*cos(yaw), -cos(yaw)*sin(roll)]
[ -cos(yaw)*sin(pitch), cos(pitch)*sin(roll) + cos(roll)*sin(pitch)*sin(yaw), cos(pitch)*cos(roll) - sin(pitch)*sin(roll)*sin(yaw)]

malharjajoo | Sun, 15 Apr 2018 19:31:39 -0500

Row and Col problem when Mat represents 3d point cloud
http://answers.opencv.org/question/189357/row-and-col-problem-when-mat-represents-3d-point-cloud/

A Mat representing a 3D point cloud in OpenCV is N x 3: N is the number of points in the cloud and 3 holds the x, y, z coordinates respectively. For example, when I use the method loadPLYSimple to load data from a PLY file I get an N x 3 Mat, and when I use the FLANN KDTREE I need to pass an N x 3 Mat as the parameter. But the problem arises when I try to perform a transformation on the data, such as a rotation. If we have a 3 x 3 rotation Mat R and the points Mat pc of size N x 3, the common way is just R * pc. However, pc is N x 3, so we need to do some extra transpose work. I'm not familiar with OpenCV; I just want to know if there is a better way than doing the transpose work each time, or maybe there is something hidden behind this that I do not understand? Thanks.

Tab | Fri, 13 Apr 2018 16:28:56 -0500

wrong rotation matrix when using recoverpose between two very similar images
http://answers.opencv.org/question/180264/wrong-rotation-matrix-when-using-recoverpose-between-two-very-similar-images/

I'm trying to perform visual odometry with a camera on top of a car. Basically I use FAST or goodFeaturesToTrack (I don't know yet which one is more convenient) and then I follow those points with calcOpticalFlowPyrLK. Once I have both previous and current points I call findEssentialMat and then recoverPose to obtain the rotation and translation matrices.
My program works quite well. It has some errors on images with sun/shadow at the sides, but the huge problem is WHEN THE CAR STOPS. When the car stops or its speed is quite low, the frames look very similar (or nearly the same) and the rotation matrix goes crazy (I guess the essential matrix does too).
Does anyone know if it is a common error? Any ideas on how to fix it?
I don't know what information you need to answer this, but it seems to be a conceptual mistake on my part. I have achieved an accuracy of 1° and 10 metres after a 3 km ride, but any time I stop... goodbye!
Thank you so much in advance.

MarquesVanVico | Tue, 12 Dec 2017 05:29:00 -0600

how to find the rotated angle of aruco marker
http://answers.opencv.org/question/177976/how-to-find-the-rotated-angle-of-aruco-marker/

I am using aruco markers to get the location of a robot. After getting the pose from estimatePoseSingleMarkers I obtain the rvecs and tvecs for a given marker. From this, how can I obtain the rotated angle of the marker about each axis?
I used the code below to detect and draw the aruco markers along with their axes.
while(true)
{
    vector<vector<Point2f>> corners; // all the marker corners
    vector<int> ids;
    cap >> frame;
    cvtColor(frame, gray, CV_BGR2GRAY);
    aruco::detectMarkers(gray, dictionary, corners, ids);
    aruco::drawDetectedMarkers(frame, corners, ids);
    aruco::estimatePoseSingleMarkers(corners, arucoMarkerLength, cameraMatrix, distanceCoefficients, rvecs, tvecs);
    for(int i = 0; i < ids.size(); i++)
    {
        aruco::drawAxis(frame, cameraMatrix, distanceCoefficients, rvecs[i], tvecs[i], 0.1f);
    }
    imshow("Markers", frame);
    int key = waitKey(10);
    if((char)key == 'q')
        break;
}
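For context, a short numpy sketch of what rvec encodes: its norm is the total rotation angle and its direction the rotation axis; Rodrigues' formula expands it into the 3x3 matrix, which can then be decomposed into per-axis Euler angles in whatever convention the application needs. (The rvec value here is made up for illustration.)

```python
import numpy as np

rvec = np.array([0.0, 0.0, np.pi / 4])  # made-up rvec, for illustration only
theta = np.linalg.norm(rvec)            # total rotation angle in radians
axis = rvec / theta                     # unit rotation axis
# Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2, where K is the
# skew-symmetric cross-product matrix of the axis.
K = np.array([[0.0, -axis[2], axis[1]],
              [axis[2], 0.0, -axis[0]],
              [-axis[1], axis[0], 0.0]])
R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```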
yomal | Thu, 09 Nov 2017 02:45:07 -0600

OpenCV SolvePnP strange values
http://answers.opencv.org/question/176922/opencv-solvepnp-strange-values/

Hello,
I am experimenting on a project.
I use **SolvePnP** to find rotation vector on an object.
Since the values are hard to understand, I used 3D software to define specific values that I am trying to find with OpenCV.
I've got a plane in the center of my scene. I apply rotations on X, Y or Z.
In the example below, the rotations are defined as:
**x=30°
y=0°
z=30°**
I've got good values for focalLength, fov, etc.
![image description](/upfiles/1508940197829013.jpg)
As you can see, the **cv2.projectPoints** works perfectly on my image.
When I call **SolvePnP**, the **rvecs returns strange values**.
For rotation X, I've got 28.939°
For rotation Y, I've got 7.916°
For rotation Z, I've got 29.02031°
So when I try to map a plane with WebGL, I get the result in the image below (red plane):
![image description](/upfiles/15089407414127149.jpg)
**So here is my question: why doesn't SolvePnP return x:30°, y:0° and z:30°? It's very strange, no?**
Do I have to use **Rodrigues** somewhere? If yes, how ?
Is there a lack of precision somewhere?
Thanks
Loïc
kopacabana73 | Wed, 25 Oct 2017 09:18:24 -0500

calculate matrix rotation translation with two cameras
http://answers.opencv.org/question/166053/calculate-matrix-rotation-translation-with-two-cameras/

Hello to all. I would like your help. I have two cams and I would like to calculate the matrices H and R.
I have calculated the intrinsic parameters of both cams.
I do not know how to continue. Can I use openPNP, or what?

GiGa.91 | Fri, 14 Jul 2017 08:07:11 -0500

Calculate corners locations from known dimensions and 1st corner's location
http://answers.opencv.org/question/161633/calculate-corners-locations-from-known-dimensions-and-1st-corners-location/

Hello,
I have a board with known dimensions (50 cm along the x-axis and 30 cm along the y-axis), with an Aruco marker taped to its top left corner. I can detect the marker and get its location in the image through `aruco::detectMarkers`, its location in world coordinates through `triangulatePoints` and its orientation through `estimatePoseSingleMarker` perfectly.
So I have the location of the 1st corner and I want to calculate the locations of the other 3 corners of the board w.r.t my world origin. The problem is that the board can be in any orientation, so I cannot simply add 50 cm to the x of the 1st corner and 30 cm to the y. The known dimensions are w.r.t my world coordinate system.
My question is: how can I scale these dimensions depending on the board's orientation and then calculate the 2D/3D locations of the other corners?
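One possible sketch, assuming the board lies in the marker's x-y plane and the marker frame is aligned with the board edges: rotate the board-frame corner offsets by the marker's rotation matrix (e.g. `cv2.Rodrigues(rvec)[0]`) and add them to the known corner. `board_corners` is a hypothetical helper name.

```python
import numpy as np

def board_corners(corner0, R, width=0.50, height=0.30):
    # corner0: world position of the marked (top-left) corner.
    # R: 3x3 board-to-world rotation. The board-frame offsets of the
    # remaining corners are rotated into the world frame and added to corner0.
    offsets = np.array([[width, 0.0, 0.0],      # top-right
                        [width, height, 0.0],   # bottom-right
                        [0.0, height, 0.0]])    # bottom-left
    return corner0 + offsets @ R.T

# With an identity rotation the corners reduce to axis-aligned offsets.
corners = board_corners(np.array([1.0, 2.0, 0.0]), np.eye(3))
```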
Ibra | Wed, 21 Jun 2017 12:19:43 -0500

SolvePnp determines well the translation but not the rotation
http://answers.opencv.org/question/161150/solvepnp-determines-well-the-translation-but-not-the-rotation/

I am trying to extract the pose of a camera knowing five point correspondences between the image and the world.
I know the intrinsics and the distortion coefficients and in order to determine the camera pose I am using the solvePnP method.
My problem is that the translation vector is right but the rotation does not seem right. I inverted the joint matrix that contains tvec and rvec.
If you suspect what could be happening and you need more details please ask. Thank you in advance.
My code:
vector<Point2f> contoursCenter; // image points
RNG rng(12345);
for (size_t i = 0; i < contours.size(); i++)
{
    Moments m = moments(contours[i], false);
    Point2f center = Point2f(m.m10 / m.m00, m.m01 / m.m00);
    contoursCenter.push_back(center);
    Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    drawContours(drawing, contours, i, color, 2, 8);
    circle(drawing, center, 4, color, -1, 8, 0);
}
vector<Point2f> origCenters;
// convert back from bird's eye view
perspectiveTransform(contoursCenter, origCenters, homography.inv());
for (auto center : origCenters)
{
    Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    circle(image, center, 4, color, -1, 8, 0);
}
namedWindow("drawing", WINDOW_NORMAL);
resizeWindow("drawing", 1366, 768);
imshow("drawing", image);
waitKey();
//solvePnP
vector<Point3f> worldCoordinates;
// camera perspective
worldCoordinates.push_back(Point3f(0.995, -0.984, 0.005 + 0.945)); //left down circle
worldCoordinates.push_back(Point3f(-0.954, -0.984, 0.005 + 0.945)); //left upper circle
worldCoordinates.push_back(Point3f(0, 0, 1.4)); //ball
worldCoordinates.push_back(Point3f(-0.954, 0.987, 0.005 + 0.945)); // right upper circle
worldCoordinates.push_back(Point3f(0.995, 0.987, 0.005 + 0.945)); // right down circle
cv::Mat intrinsic(cameraInfo_.intrinsicMatrix());
cv::Mat distortion(cameraInfo_.distortionCoeffs());
float x = 7;
float y = 0;
float z = 4.5;
tf::Vector3 pos(x, y, z);
double roll = -TFSIMD_HALF_PI - 0.39;
double pitch = 0;
double yaw = TFSIMD_HALF_PI;
tf::Quaternion quat;
quat.setRPY(roll, pitch, yaw);
tf::Transform transform;
transform.setOrigin(pos);
transform.setRotation(quat);
tf::Transform invTrans = transform.inverse();
tf::Matrix3x3 rotMat(invTrans.getRotation());
rotMat.getRPY(roll, pitch, yaw);
x = invTrans.getOrigin().x();
y = invTrans.getOrigin().y();
z = invTrans.getOrigin().z();
cv::Mat rvec = getRotationVectorFromTaitBryanAngles(roll, pitch, yaw);
cv::Mat tvec = (cv::Mat_<float>(3, 1) << x, y, z);
bool useInitialGuess = false;
solvePnP(worldCoordinates, origCenters, intrinsic, distortion, rvec, tvec, useInitialGuess, CV_EPNP);
//L2 Error using projectPoints
double errPnP = calculateError(rvec, tvec, intrinsic, distortion, origCenters, worldCoordinates);
std::cout << "PnP mean error = " << errPnP << std::endl; //error=1.2
//tf::Transform transform;
transform.setOrigin(tf::Vector3(tvec.at<double>(0), tvec.at<double>(1), tvec.at<double>(2)));
tf::Quaternion q;
//get roll,pitch and yaw from rotation vector
correctRVec(rvec, roll, pitch, yaw);
q.setRPY(roll, pitch, yaw);
//q.setRPY(rvec.at<double>(0), rvec.at<double>(1), rvec.at<double>(2));
transform.setRotation(q);
//change back to camera transformation
tf::Transform invTransform = transform.inverse();
double xInv = invTransform.getOrigin().x();
double yInv = invTransform.getOrigin().y();
double zInv = invTransform.getOrigin().z();
tf::Matrix3x3 invRotMat(invTransform.getRotation());
double rollInv, pitchInv, yawInv;
invRotMat.getRPY(rollInv, pitchInv, yawInv);
std::cout << "trans = [" << xInv << " " << yInv << " " << zInv << "]" << std::endl;
std::cout << "rot = [" << rollInv << " " << pitchInv << " " << yawInv << "]" << std::endl;
std::cout << "rot (deg) [" << rollInv * 180. / M_PI << " " << pitchInv * 180. / M_PI << " " << yawInv * 180. / M_PI
<< "]" << std::endl;
The world frame has an x axis that points forward, a y axis that points to the left and a z axis that points up.
In this frame the real pose of the camera is 7 meters in x and 4.5 in z, with a pitch of 0.39 radians and a yaw of PI radians.
My results are the following:
trans = [7.0834 -0.0451012 4.65986]
rot = [-1.97556 -0.00664226 1.56302]
rot (deg) [-113.191 -0.380573 89.5545]
agbj | Mon, 19 Jun 2017 11:08:57 -0500

Simplified pose estimation on collinear points, like solvePNP
http://answers.opencv.org/question/128137/simplified-pose-estimation-on-collinear-points-like-solvepnp/

Let's say I have a pole on which I can identify points in the image, like a vertical post painted in contrasting colors with a known pattern, and I know the distances between those points in world coordinates. But since it is a pole, all the points lie on a single line. I tried to use solvePnP but it seems to be not very happy with collinear points. Are there any other methods in OpenCV similar to solvePnP but for collinear points? Since this is a pole, I can live without the rotation around the pole if it simplifies the problem. Thank you.

Mikha | Wed, 15 Feb 2017 11:33:20 -0600

Mock Camera Intrinsics
http://answers.opencv.org/question/98301/mock-camera-intrinsics/

Hello,
I am following an amazing blog post to create a perspective transform for an arbitrary image:
https://jepsonsblog.blogspot.co.nz/2012/11/rotation-in-3d-using-opencvs.html
This works well for square images, but has a problem with the aspect ratio when it comes to rectangular images.
I suspect the problem lies in the camera intrinsics matrix, which uses the field of view:
cv::Mat A2 = (cv::Mat_<double>(3, 4) <<
    f, 0, w / 2, 0,
    0, f, h / 2, 0,
    0, 0, 1, 0);
There are comments on the blog suggesting values of f around 200-250 when using this method.
In the documentation for OpenCV, it is a little more precise stating that it is in fact fx and fy. Focal length can be found if the field of view is known, and vice versa.
> fx, fy are the focal lengths expressed
> in pixel units.
What is the solution here?
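One plausible fix, assuming the stretching comes from choosing f without reference to the image size: derive fx from the horizontal field of view and the image width, and set fy = fx for square pixels. A sketch (`intrinsics_from_fov` is a hypothetical helper; the relation fx = (w/2)/tan(hfov/2) is the standard pinhole one):

```python
import numpy as np

def intrinsics_from_fov(w, h, hfov_deg):
    # fx from the horizontal field of view; fy = fx assumes square pixels,
    # which keeps rectangular images from being stretched, instead of
    # picking an arbitrary f such as 200-250.
    fx = (w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    return np.array([[fx, 0.0, w / 2.0],
                     [0.0, fx, h / 2.0],
                     [0.0, 0.0, 1.0]])

K = intrinsics_from_fov(640, 360, 60.0)
```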
Example image of the problem: look at how it is oddly stretched in the x axis, which becomes more severe as the rotation increases. It works fine for square images.
![image description](/upfiles/14684127248575433.png)
MRDaniel | Wed, 13 Jul 2016 07:31:53 -0500

Rotation and Translation question
http://answers.opencv.org/question/95782/rotation-and-translation-question/

I've used OpenCV's calibrateCamera function to give me the rvecs and tvecs, to try to get the transformation from my camera at world coordinates (0,0,0) to a chessboard. I'm using it to find the corners of the chessboard with respect to the camera.
I'm confused as to how to apply the rvec and tvec to get the transformation. Do I just do (0,0,0)*rvec + tvec? If so, how do I invert the transformation? I'll need the inverse for something else as well.
After doing research online I realized I may have to use the Rodrigues() function to get the rotation matrix from rvec, augment that with the tvec, and then append a row of (0,0,0,1) to get a 4x4 transformation matrix. This way I would be able to get the inverse. However, if that is the case, how do I multiply it by (0,0,0)? Am I just supposed to do (0,0,0,1)*(rotm|tvec, 0,0,0,1)?
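A numpy sketch of the second method, under the usual column-vector convention (so the product is T times the homogeneous point as a column, rather than a row vector on the left): build the 4x4 from R and tvec, and use the closed-form rigid-transform inverse.

```python
import numpy as np

def to_homogeneous(R, t):
    # Stack [R | t] and append the row (0, 0, 0, 1) to get a 4x4 transform.
    T = np.eye(4)
    T[:3, :3] = R            # R = cv2.Rodrigues(rvec)[0]
    T[:3, 3] = np.ravel(t)   # tvec
    return T

def invert_rigid(T):
    # Closed-form inverse of a rigid transform: [R | t]^-1 = [R^T | -R^T t].
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Applying T to the homogeneous origin (0, 0, 0, 1) just returns tvec,
# which is why "(0,0,0)*rvec + tvec" style formulas reduce to tvec.
T = to_homogeneous(np.eye(3), np.array([1.0, 2.0, 3.0]))
origin_mapped = T @ np.array([0.0, 0.0, 0.0, 1.0])
```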
I'm not sure which of the two methods to use, and I'm doubtful about how to proceed with either of them.

sourmanb | Tue, 07 Jun 2016 01:19:52 -0500

How can I get rotation-vector from Euler-angles?
http://answers.opencv.org/question/88531/how-can-i-get-rotation-vector-from-euler-angles/

If I have 3 Euler angles, then how can I get the rotation vector to use for camera rotation in [OpenCV-viz3d](http://docs.opencv.org/2.4/modules/viz/doc/viz3d.html#)?
I.e. how can I get `Vec3d( X, Y, Z)` from [Euler angles](https://en.wikipedia.org/wiki/Euler_angles) (alpha, beta, gamma) to use in this code?
viz::Viz3d viz_3d_window("Viz window");
cv::Affine3d pose = viz_3d_window.getViewerPose();
viz_3d_window.setViewerPose( pose.rotate( Vec3d( X, Y, Z) ) );
cv::Affine3d::rotate(); https://github.com/Itseez/opencv/blob/f9e2b036a502309c0da5abc5e711d4cd64eb8a38/modules/core/include/opencv2/core/affine.hpp#L114
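A pure-numpy sketch of one possible answer, under an assumed Z-Y-X composition order (viz3d does not fix the convention, so this order is a choice): compose the matrix from the three angles, then invert Rodrigues' formula to get the rotation vector. Valid away from the 180-degree singularity.

```python
import numpy as np

def euler_to_rvec(alpha, beta, gamma):
    # Assumed composition R = Rz(gamma) @ Ry(beta) @ Rx(alpha). The second
    # half inverts Rodrigues' formula (valid away from theta = pi).
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis /= 2.0 * np.sin(theta)
    return axis * theta  # this is the Vec3d(X, Y, Z) rotation vector

rvec = euler_to_rvec(0.0, 0.0, np.pi / 3)  # a pure z rotation
```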
I can find Euler angles from a rotation matrix by using [RQDecomp3x3()](http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=euler#rqdecomp3x3) or as shown in [that question](http://answers.opencv.org/question/8842/rotation-matrix-to-euler-angle/), but is there an inverse function in OpenCV to find the rotation matrix from Euler angles, or better, the rotation vector (`Vec3d( X, Y, Z)`)?

AlexB | Thu, 25 Feb 2016 03:05:56 -0600

Rotation Matrix
http://answers.opencv.org/question/82229/rotation-matrix/

I use the 'calibrateCamera' OpenCV function for calibrating a camera, which gives intrinsic and extrinsic parameters. In the extrinsic parameters it provides a 3x1 rotation and a 3x1 translation vector for each pattern.
So my question is how to convert this 3x1 rotation vector into the R matrix, which is
R11 R12 R13
R21 R22 R23
R31 R32 R33
or how can I get the roll, yaw and pitch angles for the camera after getting the 3x1 vector.
I also saw the OpenCV function 'Rodrigues', which converts the 3x1 vector into a 3x3 matrix, but that 3x3 matrix looked symmetric to me, which is not an R matrix.
Please help.

Winter | Tue, 05 Jan 2016 02:29:40 -0600

3D rotation matrix between 2 axis
http://answers.opencv.org/question/66463/3d-rotation-matrix-between-2-axis/

I have 2 known 3D points which are the origins of 2 axis plots in space, and I need to compute the 3D rotation matrix between them. I didn't really get the difference between Euler angles and the other types of angles. Any help please?
**EDITED**:
![image description](/upfiles/14370470003199331.png)
I have Oc1 and Oc2, known points in space, and I know that using R1 & T1 I can get to Oc1 and using R2 & T2 I can get to Oc2, but I need to compute the 3D rotation matrix between Oc1 and Oc2. Is there any OpenCV method that computes such a rotation?
**EDITED**
Here my code for a sample to test `c1Mc2 = (oMc1)^-1 oMc2`:
    vector<Point3f> listOfPointsOnTable;
    cout << "******** DATA *******" << endl;
    listOfPointsOnTable.push_back(Point3f(0,0,0));
    listOfPointsOnTable.push_back(Point3f(100,0,0));
    listOfPointsOnTable.push_back(Point3f(100,100,0));
    listOfPointsOnTable.push_back(Point3f(0,100,0));
    cout << endl << "Scene points :" << endl;
    for (size_t i = 0; i < listOfPointsOnTable.size(); i++)
    {
        cout << listOfPointsOnTable[i] << endl;
    }
    // Define the optical center of each camera
    Point3f centreOfC1 = Point3f(23,0,50);
    Point3f centreOfC2 = Point3f(0,42,20);
    cout << endl << "Center Of C1: " << centreOfC1 << " , Center of C2 : " << centreOfC2 << endl;
    // Define the translation and rotation between main axis and the camera 1 axis
    Mat translationOfC1 = (Mat_<double>(3, 1) << (0-centreOfC1.x), (0-centreOfC1.y), (0-centreOfC1.z));
    float rotxC1 = 0, rotyC1 = 0, rotzC1 = -45;
    int focaleC1 = 2;
    Mat rotationOfC1 = rotation3D(rotxC1, rotyC1, rotzC1);
    cout << endl << "Translation from default axis to C1: " << translationOfC1 << endl;
    cout << "Rotation from default axis to C1: " << rotationOfC1 << endl;
    Mat transformationToC1 = buildTransformationMatrix(rotationOfC1, translationOfC1);
    cout << "Transformation from default axis to C1: " << transformationToC1 << endl << endl;
    // Define the translation and rotation between main axis and the camera 2 axis
    Mat translationOfC2 = (Mat_<double>(3, 1) << (0-centreOfC2.x), (0-centreOfC2.y), (0-centreOfC2.z));
    float rotxC2 = 0, rotyC2 = 0, rotzC2 = -90;
    int focaleC2 = 2;
    Mat rotationOfC2 = rotation3D(rotxC2, rotyC2, rotzC2);
    cout << endl << "Translation from default axis to C2: " << translationOfC2 << endl;
    cout << "Rotation from default axis to C2: " << rotationOfC2 << endl;
    Mat transformationToC2 = buildTransformationMatrix(rotationOfC2, translationOfC2);
    cout << "Transformation from default axis to C2: " << transformationToC2 << endl << endl;
    Mat centreOfC2InMat = (Mat_<double>(3, 1) << centreOfC2.x, centreOfC2.y, centreOfC2.z);
    Mat centreOfC2InCamera1 = rotationOfC1 * centreOfC2InMat + translationOfC1;
    Mat translationBetweenC1AndC2 = -centreOfC2InCamera1;
    cout << endl << "****Translation from C2 to C1" << endl;
    cout << translationBetweenC1AndC2 << endl;
    Mat centreOfC1InMat = (Mat_<double>(3, 1) << centreOfC1.x, centreOfC1.y, centreOfC1.z);
    Mat centreOfC1InCamera2 = rotationOfC2 * centreOfC1InMat + translationOfC2;
    Mat translationBetweenC2AndC1 = -centreOfC1InCamera2;
    cout << "****Translation from C1 to C2" << endl;
    cout << translationBetweenC2AndC1 << endl;
    cout << "Tran1-1 * Trans2 = " << transformationToC1.inv() * transformationToC2 << endl;
    cout << "Tran2-1 * Trans1 = " << transformationToC2.inv() * transformationToC1 << endl;

    // Angles in degrees; double parameters, since int would silently truncate
    Mat rotation3D(double alpha, double beta, double gamma)
    {
        // Rotation matrices around the X, Y, and Z axis
        double alphaInRadian = alpha * M_PI / 180.0;
        double betaInRadian = beta * M_PI / 180.0;
        double gammaInRadian = gamma * M_PI / 180.0;
        Mat RX = (Mat_<double>(3, 3) <<
            1, 0, 0,
            0, cos(alphaInRadian), sin(alphaInRadian),
            0, -sin(alphaInRadian), cos(alphaInRadian));
        Mat RY = (Mat_<double>(3, 3) <<
            cos(betaInRadian), 0, sin(betaInRadian),
            0, 1, 0,
            -sin(betaInRadian), 0, cos(betaInRadian));
        Mat RZ = (Mat_<double>(3, 3) <<
            cos(gammaInRadian), sin(gammaInRadian), 0,
            -sin(gammaInRadian), cos(gammaInRadian), 0,
            0, 0, 1);
        // Composed rotation matrix (note: each factor is the transpose of the
        // usual right-handed rotation matrix, i.e. a rotation by -angle)
        Mat R = RX * RY * RZ;
        return R;
    }

    Mat buildTransformationMatrix(Mat rotation, Mat translation)
    {
        Mat transformation = (Mat_<double>(4, 4) <<
            rotation.at<double>(0,0), rotation.at<double>(0,1), rotation.at<double>(0,2), translation.at<double>(0,0),
            rotation.at<double>(1,0), rotation.at<double>(1,1), rotation.at<double>(1,2), translation.at<double>(1,0),
            rotation.at<double>(2,0), rotation.at<double>(2,1), rotation.at<double>(2,2), translation.at<double>(2,0),
            0, 0, 0, 1);
        return transformation;
    }
here is the output:
//Origin of 3 axis
O(0,0,0), OC1 (23, 0, 50), OC2 (0, 42, 20)
Translation from default axis to OC1: [-23;
0;
-50]
Rotation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0;
0.7071067690849304, 0.7071067690849304, 0;
0, 0, 1]
Trans1 = Transformation from default axis to OC1: [0.7071067690849304, -0.7071067690849304, 0, -23;
0.7071067690849304, 0.7071067690849304, 0, 0;
0, 0, 1, -50;
0, 0, 0, 1]
Translation from default axis to OC2: [0;
-42;
-20]
Rotation from default axis to OC2: [-4.371138828673793e-08, -1, 0;
1, -4.371138828673793e-08, 0;
0, 0, 1]
Trans2 = Transformation from default axis to OC2: [-4.371138828673793e-08, -1, 0, 0;
1, -4.371138828673793e-08, 0, -42;
0, 0, 1, -20;
0, 0, 0, 1]
(Trans1)-1 * (Trans2) = [0.7071067623795453, -0.7071068241967844, 0, -13.43502907247513;
0.7071068241967844, 0.7071067623795453, 0, -45.96194156373071;
0, 0, 1, 30;
0, 0, 0, 1]
(Trans2)-1 * (Trans1) = [0.7071067381763105, 0.7071067999935476, 0, 42.00000100536185;
-0.7071067999935475, 0.7071067381763104, -0, 22.99999816412165;
0, 0, 1, -30;
0, 0, 0, 1]
//Calculation of translation between OC1 and OC2:
****Translation from C2 to C1
[52.69848430156708;
-29.69848430156708;
30]
****Translation from C1 to C2
[1.005361930594972e-06;
19;
-30]
As you can see above, the 4th column of `(Trans1)-1 * (Trans2)` is equal neither to the translation from C2->C1 nor to the translation from C1->C2,
which makes me think that `c1Mc2 = (oMc1)^-1 oMc2` does not give what I want.
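A hedged observation: the composition rule `c1Mc2 = (oMc1)^-1 oMc2` does hold, but only if every 4x4 matrix follows one consistent convention. The snippet above rotates points (R·x) yet builds the translation as -C instead of -R·C, so its matrices represent x → R·x - C rather than x_cam = R·(x_world - C), and the 4th column is then not comparable with the hand-computed centre translations. A minimal numpy check under a consistent convention (`rot_z` and `world_to_cam` are illustrative helpers):

```python
import numpy as np

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def world_to_cam(R, C):
    """4x4 matrix for x_cam = R (x_world - C); note t = -R @ C, not -C."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ C
    return T

C1, C2 = np.array([23.0, 0.0, 50.0]), np.array([0.0, 42.0, 20.0])
c1Mo = world_to_cam(rot_z(-45), C1)  # world -> camera 1
c2Mo = world_to_cam(rot_z(-90), C2)  # world -> camera 2

# Relative transform, camera-2 coords -> camera-1 coords.
# Since c1Mo = (oMc1)^-1, this equals (oMc1)^-1 @ oMc2.
c1Mc2 = c1Mo @ np.linalg.inv(c2Mo)

# Consistency check: camera 2's own origin, pushed through c1Mc2,
# must coincide with C2 mapped from the world directly into camera 1.
p = c1Mc2 @ np.array([0.0, 0.0, 0.0, 1.0])
q = c1Mo @ np.append(C2, 1.0)
```

Under this convention the 4th column of the relative transform really is the position of camera 2's centre as seen from camera 1.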
maystroh10Thu, 16 Jul 2015 05:18:00 -0500http://answers.opencv.org/question/66463/How to adjust rotation matrix by a known anglehttp://answers.opencv.org/question/56448/how-to-adjust-rotation-matrix-by-a-known-angle/Hi,
I have calibrated a camera, and the pitch angle considered was 0. I have the rotation matrix & translation vector from solvePnP and Rodrigues. Now, if I tilt the camera, the pitch angle changes. How can I adjust the 3x3 rotation matrix (or the rotation vector) to take the new pitch angle into account? The pitch angle is expressed in degrees or radians.
Basically, I build an IPM image using the projectPoints function, which takes the previously obtained matrices: the camera matrix and the rotation and translation vectors from the calibration. I simply want to modify the rotation matrix to take into account the new pitch angle of the camera (a mobile phone). I have rotated the points 'manually' before applying projectPoints, but the computation takes too much time.
Any ideas?
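One common approach (a hedged sketch; the sign and which axis counts as "pitch" depend on your coordinate conventions) is to pre-multiply the existing 3x3 matrix by a rotation about the camera's x-axis by the known extra pitch, then hand the adjusted matrix (or its Rodrigues vector) back to projectPoints:

```python
import numpy as np

def pitch_matrix(pitch_deg):
    """Rotation about the camera x-axis by the extra pitch angle (degrees)."""
    a = np.radians(pitch_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def adjust_rotation(R, pitch_deg):
    # Pre-multiplying applies the extra tilt in the camera frame,
    # i.e. after the original world-to-camera rotation.
    return pitch_matrix(pitch_deg) @ R

R = np.eye(3)                      # stand-in for the solvePnP rotation matrix
R_tilted = adjust_rotation(R, 10)  # camera tilted 10 degrees further
```

This avoids rotating the 3D points one by one: the extra rotation is folded into the extrinsics once, and projectPoints does the rest.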
Thanks!razvanSat, 28 Feb 2015 12:17:48 -0600http://answers.opencv.org/question/56448/How to pan and tilt a still image?http://answers.opencv.org/question/44835/how-to-pan-and-tilt-a-still-image/Hello guys,
How can I know the pan and tilt angles necessary to center a PTZ camera on a specific pixel in the image?
For example, consider that the center of the image provided by the PTZ camera has the coordinates (0,0). My question is how to determine the pan and tilt angles by which I have to rotate the camera to center it on the pixel (30,40) (it is an example).
I am almost sure that the mapping formula will require the horizontal angle of view and the image resolution, in my case 58.9° and 1080x1920
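Under a simple pinhole model that guess is right: the horizontal field of view gives the focal length in pixels, and the angles are then arctangents of the pixel offsets (a hedged sketch with a hypothetical helper, assuming square pixels and no lens distortion):

```python
import math

def pan_tilt_for_pixel(dx, dy, width, hfov_deg):
    """Pan/tilt angles (degrees) to centre the camera on a pixel that is
    (dx, dy) away from the image centre, for an undistorted pinhole camera."""
    f = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)  # focal length in pixels
    pan = math.degrees(math.atan2(dx, f))
    tilt = math.degrees(math.atan2(dy, f))
    return pan, tilt

# Example with the numbers from the question (1920 assumed to be the horizontal size):
pan, tilt = pan_tilt_for_pixel(30, 40, 1920, 58.9)  # roughly 1.0 and 1.3 degrees
```

If the resolution is really portrait (1080 wide), the focal length and hence the angles change accordingly.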
Thank you for your helpJCNevesSun, 19 Oct 2014 12:42:50 -0500http://answers.opencv.org/question/44835/Apply rotation matrix to imagehttp://answers.opencv.org/question/37599/apply-rotation-matrix-to-image/
Hi, <br>
I am working with the OpenCV API for Java. <br>
I have a pair of images and want to apply the rotation matrix which I get from
stereoCalibrate()
and I want to use it. I have tried on both images, right and left, with
warpPerspective()
But it doesn't work. I get a black image.
With < warpAffine() > I don't know how to use the rotation matrix, because R is 3x3 but the input for
warpAffine must be 2x3
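A hedged note: for a *pure rotation* you can still use warpPerspective by first converting R into a pixel-space homography H = K·R·K⁻¹ using the camera intrinsic matrix K; passing R itself maps rays, not pixels, which typically yields the black image. A numpy sketch (K below is an assumed example intrinsic matrix, and the translation between stereo views is ignored, so this is only an approximation for distant scenes):

```python
import numpy as np

def rotation_homography(K, R):
    """Homography that warps pixels as if the camera were rotated by R
    (exact only for a pure rotation; translation would need per-pixel depth)."""
    return K @ R @ np.linalg.inv(K)

# Assumed example intrinsics (fx, fy, cx, cy):
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

H = rotation_homography(K, np.eye(3))  # identity rotation -> identity warp

# A rotation about the optical axis leaves the principal point fixed:
a = np.radians(5)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
pp = rotation_homography(K, Rz) @ np.array([320.0, 240.0, 1.0])
```

The resulting 3x3 H is what warpPerspective expects in place of the plain R.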
Does anybody have an idea?rafaocSat, 19 Jul 2014 19:50:50 -0500http://answers.opencv.org/question/37599/Rotation Vectors and Translation Vectorshttp://answers.opencv.org/question/14256/rotation-vectors-and-translation-vectors/Hi, I am using the cvCalibrateCamera2(....) function in OpenCV. One of the outputs I get is rotation_vectors, which is an Nx3 matrix. I have seen the documentation, and it says to look at the cvRodrigues2() function for further details. I have understood that cvRodrigues2() converts a 1x3 rotation vector to a 3x3 rotation matrix. My question is: which of the N 1x3 rotation vectors should be input to cvRodrigues2() for calculating the rotation matrix? sachin_rtThu, 30 May 2013 00:03:41 -0500http://answers.opencv.org/question/14256/calibration transformation matriceshttp://answers.opencv.org/question/15882/calibration-transformation-matrices/Hi all,
I am using the OpenCV calibration code from this tutorial:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html#cameracalibrationopencv
Here is the arguments list for the calibration function:
***runCalibration(s,imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs, totalAvgErr);***
rvecs and tvecs are essentially the rotational and translational components of the homogeneous transformation matrix.
My question is the following:
does the transformation matrix( using rvecs, tvecs) take us from:
**calibration grid coordinate system** to the **camera coordinate system**
or
from **camera coordinate system** to the **calibration grid coordinate system**.
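For reference, OpenCV uses the first direction: each (rvec, tvec) pair maps a point expressed in the calibration-grid coordinate system into the camera coordinate system, x_cam = R·x_grid + tvec (with R from Rodrigues(rvec)). A tiny numeric sketch with made-up values:

```python
import numpy as np

# Example extrinsics (made-up values): a 90-degree rotation about z
# and a grid origin 5 units in front of the camera.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 5.0])

grid_origin = np.zeros(3)
cam_point = R @ grid_origin + t  # the grid origin expressed in camera coordinates
```

So tvec is the position of the grid origin in camera coordinates; to go the other way, invert the transform: x_grid = R.T @ (x_cam - t).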
Thanks,
Paul
Paul DoliotisFri, 28 Jun 2013 10:02:55 -0500http://answers.opencv.org/question/15882/From Fundamental Matrix To Rectified Imageshttp://answers.opencv.org/question/27155/from-fundamental-matrix-to-rectified-images/I have stereo photos coming from the same camera and I am trying to use them for 3D reconstruction.
To do that, I extract SURF features and calculate the Fundamental matrix. Then I get the Essential matrix and, from there, the Rotation matrix and Translation vector. Finally, I use them to obtain rectified images.
The problem is that it works only with some specific parameters.
If I set *minHessian* to *430*, I get pretty nice rectified images. But any other value gives me just a black image or some obviously wrong images.
In all cases, the fundamental matrix seems to be fine (I draw epipolar lines on both the left and right images). However, I cannot say the same about the Essential matrix, Rotation matrix and Translation vector, even though I tried all 4 possible combinations of *R* and *T*.
Here is my code. Any help or suggestion would be appreciated. Thanks!
<pre><code>
Mat img_1 = imread( "images/imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "images/imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 430;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );
//-- Show detected matches
namedWindow( "Matches", CV_WINDOW_NORMAL );
imshow("Matches", img_matches );
waitKey(0);
//-- Step 4: calculate Fundamental Matrix
vector<Point2f>imgpts1,imgpts2;
for( unsigned int i = 0; i < matches.size(); i++ )
{
    // queryIdx is the "left" image
    imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
    // trainIdx is the "right" image
    imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
Mat F = findFundamentalMat (imgpts1, imgpts2, FM_RANSAC, 0.1, 0.99);
//-- Step 5: calculate Essential Matrix
double data[] = {1189.46, 0.0, 805.49,
                 0.0, 1191.78, 597.44,
                 0.0, 0.0, 1.0}; // Camera Matrix
Mat K(3, 3, CV_64F, data);
Mat_<double> E = K.t() * F * K;
//-- Step 6: calculate Rotation Matrix and Translation Vector
Matx34d P;
//decompose E
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
Mat_<double> R = svd_u * Mat(W) * svd_vt; //
Mat_<double> T = svd_u.col(2); //u3
if (!CheckCoherentRotation(R)) {
    std::cout << "resulting rotation is not coherent\n";
    return 0;
}
//-- Step 7: Reprojection Matrix and rectification data
Mat R1, R2, P1_, P2_, Q;
Rect validRoi[2];
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
stereoRectify(K, D, K, D, img_1.size(), R, T, R1, R2, P1_, P2_, Q, CV_CALIB_ZERO_DISPARITY, 1, img_1.size(), &validRoi[0], &validRoi[1] );
</code></pre>gozariFri, 24 Jan 2014 08:48:12 -0600http://answers.opencv.org/question/27155/How can I find rotation angles (pitch, yaw, roll) from perspective transofmation coefficients?http://answers.opencv.org/question/22100/how-can-i-find-rotation-angles-pitch-yaw-roll-from-perspective-transofmation-coefficients/I have two 2d quads (each represented using 4 xy pairs), one of them is a perspective transformation of the other. How can I use these quads to deduce the rotations (pitch, yaw, roll) that caused the perspective distortion?
Notice that I used cvGetPerspectiveTransform(), which returns the perspective transformation coefficients in the form of a 3x3 matrix. I am able to use these coefficients to map a point from one space to another; however, it is the rotation angles that I'm concerned about knowing.
Any ideas?
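If the two quads are related by a pure camera rotation and the intrinsic matrix K is known, the 3x3 perspective matrix factors as H = K·R·K⁻¹, so R = K⁻¹·H·K and the angles can be read from R (a hedged sketch; K below is an assumed example, and a transform that also involves translation will not factor this cleanly):

```python
import numpy as np

def angles_from_homography(H, K):
    """Recover R = K^-1 H K and extract roll/pitch/yaw for R = Rz @ Ry @ Rx."""
    R = np.linalg.inv(K) @ H @ K
    R = R / np.cbrt(np.linalg.det(R))  # remove the arbitrary homography scale
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

# Round trip with an assumed K and a known 10-degree yaw:
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
a = np.radians(10)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
H = K @ Rz @ np.linalg.inv(K)
roll, pitch, yaw = angles_from_homography(H, K)
```

In practice solvePnP on the four quad corners is the more robust route, since it handles translation as well.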
Thanks, Hasan.has981Tue, 08 Oct 2013 05:32:31 -0500http://answers.opencv.org/question/22100/Kalman filtration for rotationhttp://answers.opencv.org/question/18186/kalman-filtration-for-rotation/I want to use a Kalman filter for rotation. I have a rotation matrix, but I can't filter it directly, because all its elements are interdependent. That's why I try to convert the matrix to Euler angles and filter each of the components.
My questions:
- will this filtering of Euler angles be correct? (are the x, y, z components independent? I think not) If yes, how do I avoid gimbal lock during filtering?
- is there a better way to filter the rotation?tenta4Mon, 05 Aug 2013 02:18:54 -0500http://answers.opencv.org/question/18186/Rotation Matrix calculation using cvRodrigues , Calculating real world coordinates from pixel world coordinates .http://answers.opencv.org/question/14969/rotation-matrix-calculation-using-cvrodrigues-calculating-real-world-coordinates-from-pixel-world-coordinates/Hi, I want to get the real-world (X,Y,Z) coordinates of an object from a live capture from a PTZ camera.
I have found the intrinsic parameters using chessboard calibration with 15 chessboard images, and I have also found the extrinsic parameters. I know the Z coordinate cannot be found using a single camera.
To do this I need to find the rotation and translation matrices. I have a doubt: the cvRodrigues() function will convert a rotation vector to a 3x3 rotation matrix, but here I have 15 rotation vectors. Which one should I use for finding the rotation matrix?
Also, I want to know: if I happen to pan or tilt my camera from the original calibration position, will I have to recalculate my rotation matrix and translation matrix, or can I use the old ones? sachin_rtTue, 11 Jun 2013 00:20:46 -0500http://answers.opencv.org/question/14969/