projectPoints functionality question

I'm doing something similar to the tutorial here: http://docs.opencv.org/3.1.0/d7/d53/t... regarding pose estimation. Essentially, I'm creating an axis in the model coordinate system and using projectPoints(), along with my rvecs, tvecs, and cameraMatrix, to project the axis onto the image plane.

In my case, I'm working in the world coordinate space, and I have an rvec and tvec telling me the pose of an object. I'm creating an axis using world coordinate points (which assumes the object wasn't rotated or translated at all), and then using projectPoints() to draw the axes of the object in the image plane.

I was wondering if it is possible to eliminate the projection and get the world coordinates of those axes once they've been rotated and translated. To test, I applied the rotation and translation to the axis points manually, and then used projectPoints to project them onto the image plane (passing an identity matrix and a zero matrix for the rotation and translation, respectively), but the results seem way off. How can I eliminate the projection step and just get the world coordinates of the axes once they've been rotated and translated? Thanks!



I am not sure I understand you correctly when you said:

(which assumes the object wasn't rotated or translated at all)

to eliminate the projection, and get the world coordinates of those axes once they've been rotated and translated

Anyway, even if it does not answer your question, I will try to add some useful information.

What you have is the camera pose, that is, the transformation that maps a coordinate in the world frame to the corresponding coordinate in the camera frame:

    X_c = R * X_w + t

The perspective projection then maps a 3D coordinate (x, y, z) in the camera frame to the image plane according to the pinhole camera model:

    u = fx * x / z + cx
    v = fy * y / z + cy

The full operation, when you want to draw for example the world frame origin in the image plane, is then:

    s * (u, v, 1)^T = K * [R | t] * (X_w, Y_w, Z_w, 1)^T

Now, if you know the geometric transformation between two frames w1 and w2:

    X_w1 = R_w1_w2 * X_w2 + t_w1_w2

then, to draw a point expressed in frame w2 onto the image plane, you first have to compute its coordinate in the camera frame:

    X_c = R_c_w1 * (R_w1_w2 * X_w2 + t_w1_w2) + t_c_w1

This figure should illustrate the situation:

I hope that what I have written is mostly correct.


So if I understand you correctly, I just need to rotate and translate the axis using rvecs and tvecs, and that should give me what I want.

The problem is, I use solvePnP to get the rotation and translation for a face, and I draw an axis on the face using projectPoints to render it on the image plane. I want to find the world coordinates of the axis after it's rotated. What I did was compute the rotation matrix using Rodrigues and evaluate rotation_matrix * axis_point + tvec. I then used projectPoints to project that result onto the image plane (passing the identity matrix and a zero matrix for the rotation and translation, respectively). But the result was different from what I got by just using projectPoints normally. Why is that?

( 2016-06-15 17:31:01 -0500 )

It should be the same result:

// Assumes these headers are in scope:
//   #include <opencv2/opencv.hpp>  (cv::Mat, cv::Rodrigues, cv::projectPoints)
//   #include <cmath>               (M_PI)
//   #include <iostream>
//   #include <vector>

// Example intrinsic matrix.
cv::Mat K = (cv::Mat_<double>(3,3) <<
    700,   0, 320,
      0, 700, 240,
      0,   0,   1);

// Rotation of 24 degrees about a (roughly unit) axis, as a Rodrigues vector.
double theta = 24.0 * M_PI / 180.0;
cv::Mat rvec = (cv::Mat_<double>(3,1) << 0.4, 0.2, 0.8944) * theta;
cv::Mat R;
cv::Rodrigues(rvec, R);

// Translation.
cv::Mat tvec = (cv::Mat_<double>(3,1) << 0.5, 0.38, 1.4);

// The same object point, once as a matrix, once as a Point3f.
cv::Mat mat_point_x = (cv::Mat_<double>(3,1) << 1, 0, 0);

std::vector<cv::Point3f> object_points;
cv::Point3f object_point_x(1, 0, 0);
object_points.push_back(object_point_x);
std::vector<cv::Point2f> image_points;
( 2016-06-16 04:32:26 -0500 )

Code:

// Project directly with rvec / tvec.
cv::projectPoints(object_points, rvec, tvec, K, cv::noArray(), image_points);
std::cout << "image_point_x=" << image_points.front() << std::endl;

// Transform the point manually into the camera frame...
cv::Mat cam_image_point_x = R * mat_point_x + tvec;
cv::Point3f cam_image_point_x2(
    cam_image_point_x.at<double>(0),
    cam_image_point_x.at<double>(1),
    cam_image_point_x.at<double>(2));
object_points.clear();
object_points.push_back(cam_image_point_x2);

// ...and project with identity rotation and zero translation:
// both printed points should be identical.
image_points.clear();
cv::projectPoints(object_points, cv::Mat::eye(3, 3, CV_64F),
                  cv::Mat::zeros(3, 1, CV_64F), K, cv::noArray(), image_points);
std::cout << "image_point_x_2=" << image_points.front() << std::endl;

( 2016-06-16 04:34:45 -0500 )

Got it, thanks!

( 2016-06-16 21:47:40 -0500 )

You can use cv::aruco::drawAxis, which draws the axes for the given rvec and tvec. Remember to add #include "opencv2/aruco.hpp".


There is now drawFrameAxes() in the calib3d module (OpenCV >= 4.0.1 or OpenCV >= 3.4.5), so it is no longer necessary to build the contrib modules just for this.