PnP pose estimation using undistorted images?

asked 2020-03-31 16:25:07 -0600 by noshky

Hi, I am currently getting into camera calibration and undistortion, and eventually I'm going to attempt pose estimation to find the position of the camera in the world. There is a long way ahead of me, as I have to learn all about object coordinates, world coordinates and everything in between.

However, I have a fairly simple question, which I couldn't find clues for in all my readings. I got my camera matrix and distortion coefficients from calibration, and now I'm ready to shoot them into solvePnP() to find out where I am.

In the image I got from the camera, I have already located the 2D coordinates corresponding to world coordinates which are known to me. BUT should I have done that on the raw camera image? Or should I undistort the image first?

That is basically my question. As I can't really follow the maths of what is going on behind the scenes, I am hoping to get some answers here. That'd be awesome! Thanks a lot.


Comments

If you are using 2D image coordinates extracted from the raw image (so with some distortion), pass the camera intrinsics and distortion coefficients.

If you are undistorting the image and then extracting the 2D image coordinates, pass the camera intrinsics and set the distortion coefficients to zero (or pass cv::noArray()).

Eduardo ( 2020-04-01 03:01:38 -0600 )

Thanks @Eduardo, that makes a lot of sense!

noshky ( 2020-04-05 10:06:17 -0600 )
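
To make Eduardo's two options concrete, here is a minimal C++ sketch using synthetic data. The intrinsics, distortion values, 3D points and ground-truth pose are invented for illustration, and cv::projectPoints stands in for a real feature detector; both solvePnP() calls should recover the same pose.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main() {
    // Example intrinsics and distortion coefficients (made up for illustration).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat dist = (cv::Mat_<double>(5, 1) << -0.2, 0.05, 0.001, 0.001, 0.0);

    // Known world points and a ground-truth pose, used to simulate detections.
    std::vector<cv::Point3f> objPts = {
        {0, 0, 0}, {0.1f, 0, 0}, {0, 0.1f, 0},
        {0.1f, 0.1f, 0}, {0.05f, 0.05f, 0.02f}, {0.02f, 0.08f, 0.05f}};
    cv::Mat rvecTrue = (cv::Mat_<double>(3, 1) << 0.1, -0.2, 0.05);
    cv::Mat tvecTrue = (cv::Mat_<double>(3, 1) << 0.02, -0.01, 0.5);

    // "Detections" on the raw image: the projection includes lens distortion.
    std::vector<cv::Point2f> rawPts;
    cv::projectPoints(objPts, rvecTrue, tvecTrue, K, dist, rawPts);

    // Option 1: 2D points come from the raw image,
    // so pass the distortion coefficients to solvePnP.
    cv::Mat rvec1, tvec1;
    cv::solvePnP(objPts, rawPts, K, dist, rvec1, tvec1);

    // Option 2: undistort the points first (P = K keeps them in pixel
    // coordinates), then tell solvePnP there is no distortion left.
    std::vector<cv::Point2f> undistPts;
    cv::undistortPoints(rawPts, undistPts, K, dist, cv::noArray(), K);
    cv::Mat rvec2, tvec2;
    cv::solvePnP(objPts, undistPts, K, cv::noArray(), rvec2, tvec2);

    // Both should match (rvecTrue, tvecTrue) up to numerical noise.
    std::cout << "tvec1: " << tvec1.t() << "\ntvec2: " << tvec2.t() << std::endl;
    return 0;
}
```

One caveat: the second option only works if the undistorted points are expressed with the same camera matrix you pass to solvePnP(). If you undistorted the image using a new camera matrix from cv::getOptimalNewCameraMatrix(), pass that matrix instead of the original intrinsics.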