Using findHomography to create 'fake' intrinsics?

asked Apr 4 '0 by antithing, updated Apr 4 '0

I have two images from very different cameras.

cv::Mat CameraA is from a 16mm lens, at a resolution of 1920 x 1080.
cv::Mat CameraB is from a 7mm lens, at a resolution of 1280 x 720.

I have calibrated these, and used a chessboard with solvePnP to get poses.

Now, I want to pass these into a bundle adjustment function, using cvsba.

BUT, the function expects the same intrinsics for both cameras.

If I were to use findHomography to get an H matrix from the two sets of 2D points, and then warp CameraA's image to match CameraB, could I then use the intrinsics from CameraB for CameraA?

If not, is there a way to somehow 'normalise' intrinsics in this way?

Thank you.


Comments

I would try two things:

  • change the bundle adjustment function to take the two camera matrices; look at the equations and understand what is done under the hood
  • compute the image coordinates in normalized camera coordinates (x, y, z=1) and pass identity camera matrices
Eduardo (Apr 4 '0)

Hi, thank you for getting back to me. Can you please explain the second option for me? How do I compute the image coordinates in normalized camera coordinates? Is an identity intrinsic matrix all zeros, with 1 for fx, fy, cx, cy? Thanks!

antithing (Apr 4 '0)

Is this the correct approach?

// test: normalise points
    std::vector<cv::Point2f> outputUndistortedPoints;
    cv::undistortPoints(pnts2DS, outputUndistortedPoints, K, D);

    // set identity K (fx = fy = 1, cx = cy = 0)
    cv::Mat identity = cv::Mat::eye(3, 3, CV_32F);
antithing (Apr 5 '0)

I think so.

To be sure, just project the undistorted point with the real camera intrinsics and you should get more or less your 2D image coordinates.

Eduardo (Apr 6 '0)