OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation (http://www.opencv.org), 2012-2018.

Get Rotation and Translation Matrix
http://answers.opencv.org/question/217954/get-rotation-and-translation-matrix/
I'm programming an Asus Xtion depth camera which, instead of an RGB image, gives me depth information.
I already have the Camera and Distortion Matrices of the depth camera, but now, I want to calibrate the vision system, by getting the rotation and translation matrices.
I already have the 3D local coordinates of the points from the camera's perspective, but now I need to convert them to world/global coordinates. Since this camera only provides depth information, I was thinking: is it possible to calibrate this vision system by specifying where the ground plane is? How should I proceed to make the blue plane the ground plane of my vision system?
![image description](/upfiles/15676982772248991.jpg)
(Note: in addition to the ground plane, there is also an object on the plane.)
I already tried using solvePnP to get the rotation and translation matrices, but with no luck. Thanks in advance.
Asked by dbots94, Thu, 05 Sep 2019 10:47:25 -0500
http://answers.opencv.org/question/217954/

How to transform 2D image coordinates to 3D world coordinates with Z = 0?
http://answers.opencv.org/question/150451/how-to-tranform-2d-image-coordinates-to-3d-world-coordinated-with-z-0/
Hi everyone,
I am currently working on a project that involves vehicle detection and tracking, and estimating and optimizing a cuboid around the vehicle. For that I take the center of the detected vehicle, find the 3D world coordinate of that point, estimate the world coordinates of the cuboid's edges, and then project them back onto the image to display them.
So, I am new to computer vision and OpenCV, but to my knowledge I just need four points on the image whose world coordinates are known, and I can use solvePnP in OpenCV to get the rotation and translation vectors (I already have the camera matrix and distortion coefficients). The steps I follow are:
1. Use Rodrigues to convert the rotation vector into a rotation matrix, and concatenate it with the translation vector to get the extrinsic matrix.
2. Multiply the camera matrix by the extrinsic matrix to get the projection matrix.
3. Since my Z coordinate is zero, drop the third column of the projection matrix, which gives the homography mapping 3D world points (with Z = 0) to 2D image points.
4. Invert the homography to get the mapping from 2D image points back to 3D world points.
5. Multiply an image point [x, y, 1]^t by the inverse homography to get [wX, wY, w]^t, then divide the whole vector by the scalar w to get [X, Y, 1], which gives the X and Y values of the world coordinates.
My code is like this:
    // Declarations assumed by the snippet; cameraMatrix and distCoeffs
    // come from a prior calibration.
    vector<Point2d> image_points;
    vector<Point3d> world_points;
    Mat rotationVector, translationVector, rotationMatrix;
    Mat extrinsicMatrix, projectionMatrix;
    Mat homographyMatrix, inverseHomographyMatrix;

    // Image coordinates of the four known world points (hard-coded).
    image_points.push_back(Point2d(275, 204));
    image_points.push_back(Point2d(331, 204));
    image_points.push_back(Point2d(331, 308));
    image_points.push_back(Point2d(275, 308));
    cout << "Image Points: " << image_points << endl << endl;

    // World coordinates, Z = 0, with the first point taken as the origin.
    world_points.push_back(Point3d(0.0, 0.0, 0.0));
    world_points.push_back(Point3d(1.775, 0.0, 0.0));
    world_points.push_back(Point3d(1.775, 4.620, 0.0));
    world_points.push_back(Point3d(0.0, 4.620, 0.0));
    cout << "World Points: " << world_points << endl << endl;

    solvePnP(world_points, image_points, cameraMatrix, distCoeffs, rotationVector, translationVector);
    cout << "Rotation Vector: " << endl << rotationVector << endl << endl;
    cout << "Translation Vector: " << endl << translationVector << endl << endl;

    Rodrigues(rotationVector, rotationMatrix);
    cout << "Rotation Matrix: " << endl << rotationMatrix << endl << endl;

    hconcat(rotationMatrix, translationVector, extrinsicMatrix);
    cout << "Extrinsic Matrix: " << endl << extrinsicMatrix << endl << endl;

    projectionMatrix = cameraMatrix * extrinsicMatrix;
    cout << "Projection Matrix: " << endl << projectionMatrix << endl << endl;

    // Drop the third column of P (Z = 0) to form the homography.
    double p11 = projectionMatrix.at<double>(0, 0),
           p12 = projectionMatrix.at<double>(0, 1),
           p14 = projectionMatrix.at<double>(0, 3),
           p21 = projectionMatrix.at<double>(1, 0),
           p22 = projectionMatrix.at<double>(1, 1),
           p24 = projectionMatrix.at<double>(1, 3),
           p31 = projectionMatrix.at<double>(2, 0),
           p32 = projectionMatrix.at<double>(2, 1),
           p34 = projectionMatrix.at<double>(2, 3);

    homographyMatrix = (Mat_<double>(3, 3) << p11, p12, p14, p21, p22, p24, p31, p32, p34);
    cout << "Homography Matrix: " << endl << homographyMatrix << endl << endl;

    inverseHomographyMatrix = homographyMatrix.inv();
    cout << "Inverse Homography Matrix: " << endl << inverseHomographyMatrix << endl << endl;

    // Back-project the first image point.
    Mat point2D = (Mat_<double>(3, 1) << image_points[0].x, image_points[0].y, 1);
    cout << "First Image Point: " << point2D << endl << endl;

    Mat point3Dw = inverseHomographyMatrix * point2D;
    cout << "Point 3D-W : " << point3Dw << endl << endl;

    double w = point3Dw.at<double>(2, 0);
    cout << "W: " << w << endl << endl;

    Mat matPoint3D;
    divide(w, point3Dw, matPoint3D);
    cout << "Point 3D: " << matPoint3D << endl << endl;
I have got the image coordinates of the four known world points and hard-coded them for simplification. The vector image_points contains the image coordinates of the four points, and the vector world_points contains their world coordinates. I am treating the first world point as the origin (0, 0, 0) of the world axes and using known distances to calculate the coordinates of the other three points. Now, after calculating the inverse homography matrix, I multiplied it with [image_points[0].x, image_points[0].y, 1]^t, which corresponds to the world coordinate (0, 0, 0). Then I divide the result by the third component w to get [X, Y, 1]. But after printing out the values of X and Y, it turns out they are not 0, 0 respectively. What am I doing wrong?
The result showing is
[21.0400429;
135.683;
1]
My camera matrix is
[ 5.1700368817095330e+02, 0., 320.;
  0., 5.1700368817095330e+02, 212.;
  0., 0., 1. ]
Distortion Coefficients matrix is
[ 1.1286636797980941e-01, -1.4877900799224317e+00, 0., 0.,
2.3005718967610673e+00 ]
Asked by IndySupertramp, Sat, 20 May 2017 23:03:08 -0500
http://answers.opencv.org/question/150451/

python2 - Single camera odometry
http://answers.opencv.org/question/55604/python2-single-camera-odometry/
If you look at the python2 lk_homography sample, you see that OpenCV can quite easily track the 2D perspective shift from one image to the next. If you open that sample and move your camera around, OpenCV can map exactly where the original image is in relation to the new one.
My question is really about whether it's possible to take this 2D perspective transform and turn it into a 3D one - e.g. if the points spread out in 2D, the scene is obviously coming closer to the camera in 3D, and if the points skew to the left or right, that's a 3D rotation.
Is there a function to do this? Something like cv2.getAffineTransform, but in a third dimension.
The end goal here is to use a camera to estimate the change in position of a robot.
Asked by fridgecow, Wed, 18 Feb 2015 12:16:48 -0600
http://answers.opencv.org/question/55604/

Camera with auto-focus and 3D reconstruction
http://answers.opencv.org/question/7278/camera-with-auto-focus-and-3d-reconstruction/
Hi,
I'm using a very simple webcam. During chessboard calibration I get a very **different intrinsic matrix** every time (especially the focal lengths); is that because the camera has auto-focus? If I take pictures of multiple chessboard positions, the undistorted image afterwards is **more distorted than the original** - how can that be? Is it possible that the auto-focus is somehow disturbing the calculation of the distortion parameters? When I want to calculate a projection matrix I need a **fixed focal length**, don't I?
But I don't understand how such a camera can have auto-focus when you need to screw the lens to make the picture sharp. I thought auto-focus moved a lens to focus?
And the second question: if I want to make a laser scanner, I need to somehow calculate the homography to the laser plane, right? So presumably I can find the laser line directly on the chessboard during calibration. But do I need to measure the distance to the chessboard, or can I somehow calculate the distance from the chessboard itself? **Do I need chessboard 3D coordinates to calculate the extrinsic matrix**?
Thanks for your time
Regards
Martin
Asked by Martin, Tue, 12 Feb 2013 02:02:17 -0600
http://answers.opencv.org/question/7278/