# Using findHomography to create 'fake' intrinsics?

I have two images from very different cameras.

cv::Mat CameraA is from a 16mm lens, at a resolution of 1920×1080.
cv::Mat CameraB is from a 7mm lens, at a resolution of 1280×720.


I have calibrated these, and used a chessboard with solvePnP to get poses.

Now, I want to pass these into a bundle adjustment function using cvsba.

BUT, the function expects the same intrinsics for both cameras.

If I were to use findHomography to get an H matrix from the two sets of 2D points, and then scale CameraA to match CameraB, would I then be able to use the intrinsics from CameraB for CameraA?

If not, is there a way to somehow 'normalise' intrinsics in this way?

Thank you.


I would try two things:

• change the bundle adjustment function to take the two camera matrices; look at the equations and understand what is done under the hood
• compute the image coordinates in normalized camera coordinates (x,y,z=1) and pass identity camera matrices
( 2020-04-04 09:24:29 -0500 )

Hi, thank you for getting back to me. Can you please explain the second option for me? How do I compute the image coordinates in normalized camera coordinates? Is an identity intrinsic matrix all zeros, with 1 for fx and fy (and cx = cy = 0)? Thanks!

( 2020-04-04 10:26:50 -0500 )

Is this the correct approach?

```cpp
// test: normalise the points. With no R/P arguments, undistortPoints
// returns coordinates in the normalized camera frame (x, y, z = 1).
std::vector<cv::Point2f> outputUndistortedPoints;
cv::undistortPoints(pnts2DS, outputUndistortedPoints, K, D);

// set identity K
cv::Mat identity = cv::Mat::eye(3, 3, CV_32F);
```

( 2020-04-05 11:46:38 -0500 )

I think so.

To be sure, just project the undistorted points with the real camera intrinsics; you should recover, up to small numerical error, your original 2D image coordinates.

( 2020-04-06 08:01:54 -0500 )