Ask Your Question

Why is projecting a 3D point to 2D not within the image frame?

asked 2017-01-02 11:46:42 -0600

ale_Xompi

I am trying to generate key-points in a set of views given a number of 3D points, the extrinsic parameters of each view, and the intrinsic parameters of the camera. However, I noticed that the projected points do not lie within the image frame (e.g. 640 x 480). This is the data I am using:

  • Camera pose (1st view) as a 6D vector (orientation + position in world coordinate frame): [-90 90 0 | 80 45 45];
  • 3D point: [50 50 40]
  • Image: 640x480 pxs
  • Focal length: 30 mm
  • Sensor size: 22x16 mm (Sw x Sh)

Thus, the camera matrix (intrinsic parameters) becomes: [fIw/Sw 0 Iw/2; 0 fIh/Sh Ih/2; 0 0 1];

When applying the formula for the pinhole camera model:

m = K * R * [I|t] * M

where R and t are the rotation matrix and the translation vector, respectively (derived from the camera pose), and M is the 3D point in homogeneous coordinates, I cannot obtain a point within the frame size (i.e. 640 x 480). Please note that m is already divided by its 3rd component to obtain a 2D point.

Do you have any idea why the projection does not work? When visualizing the point and the camera, I can see that the point is in front of the camera in the world coordinate frame.

Also, can you confirm that an identity rotation matrix corresponds to the camera looking upwards?

I also tried the function cv::projectPoints() to verify, but it returns an error, probably because the depth of the point is not positive in the camera coordinate system.


1 answer


answered 2017-01-02 14:08:55 -0600

Tetragramm

I suspect you are constructing your rvec and tvec wrong. Remember that R|t transforms a point from world coordinates (X, Y, Z) to camera coordinates.

Try a couple of simple test cases to make sure you understand it correctly. That is: start with no rotation and no translation, and make sure the point projects to where you expect. Then add translation to your tvec and verify the relationship between tvec and which points project to the center of the image. Then add rotation and make sure you understand that as well.

Here's the first test case to get started:

    Mat rvec, tvec, cmat;
    rvec.create(1, 3, CV_32F);
    rvec.at<float>(0) = 0;
    rvec.at<float>(1) = 0;
    rvec.at<float>(2) = 0;

    tvec.create(3, 1, CV_32F);
    tvec.at<float>(0) = 0;
    tvec.at<float>(1) = 0;
    tvec.at<float>(2) = 0;

    cmat.create(3, 3, CV_32F);
    setIdentity(cmat);
    cmat.at<float>(0, 0) = 30.0*(640.0 / 22.0);
    cmat.at<float>(1, 1) = 30.0*(480.0 / 16.0);
    cmat.at<float>(0, 2) = 640 / 2;
    cmat.at<float>(1, 2) = 480 / 2;

    vector<Point3f> world;
    world.push_back(Point3f(0, 0, 40));
    vector<Point2f> image;

    projectPoints(world, rvec, tvec, cmat, noArray(), image);
    std::cout << image[0] << "\n";
