PnP pose estimation using undistorted images?

Hi, I am currently getting into camera calibration and undistortion, and eventually I'm going to attempt pose estimation to find the position of the camera in the world. There is a long way ahead of me, as I have to learn all about object coordinates, world coordinates, and everything in between.

However, I have a fairly simple question, which I couldn't find an answer to in all my reading. I have my camera matrix and distortion coefficients from calibration, and now I'm ready to feed them into solvePnP() to find out where I am.

In the image I got from the camera, I have already detected the 2D image coordinates corresponding to world coordinates that are known to me. BUT should I have done that on the raw camera image, or should I have undistorted the image first?
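To make the question concrete, here is a minimal sketch of the two alternatives I mean. The point values, camera matrix, and distortion coefficients are made-up placeholders; in practice they would come from my point detector and from calibrateCamera():

```python
import numpy as np
import cv2

# Known 3D world coordinates of the reference points (placeholder values).
object_points = np.array([[0, 0, 0],
                          [1, 0, 0],
                          [1, 1, 0],
                          [0, 1, 0]], dtype=np.float32)

# 2D pixel coordinates detected in the RAW (distorted) image (placeholders).
image_points = np.array([[320, 240],
                         [420, 242],
                         [418, 340],
                         [318, 338]], dtype=np.float32)

# In practice these come from cv2.calibrateCamera(); placeholders here.
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0,   0,   1]], dtype=np.float32)
dist_coeffs = np.array([0.1, -0.05, 0, 0, 0], dtype=np.float32)

# Option A: use points detected in the raw image and pass the distortion
# coefficients, letting solvePnP account for the distortion itself.
ok, rvec_a, tvec_a = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)

# Option B: undistort the detected points first, then pass zero distortion.
# With P=camera_matrix, undistortPoints returns pixel coordinates again.
undistorted = cv2.undistortPoints(image_points.reshape(-1, 1, 2),
                                  camera_matrix, dist_coeffs,
                                  P=camera_matrix)
ok, rvec_b, tvec_b = cv2.solvePnP(object_points, undistorted,
                                  camera_matrix, np.zeros(5))

# If I understand correctly, both options should give (nearly) the same pose.
print(rvec_a.ravel(), tvec_a.ravel())
print(rvec_b.ravel(), tvec_b.ravel())
```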

That is basically my question. Since I can't really follow the maths of what is going on behind the scenes, I am hoping to get some answers here. That'd be awesome! Thanks a lot.