Pass distorted or undistorted points to (E)PnP?

asked 2016-03-08 14:47:46 -0600 by akog

I have one RGB-D camera and one RGB camera, and I want to estimate the camera pose.

Working on the two RGB images, I'm using ORB for feature detection and descriptor extraction and FLANN for feature matching. Then, having the 3D-2D correspondences, I use EPnP for camera pose estimation.
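For reference, this is roughly my matching step (a minimal sketch, assuming the OpenCV 3.x Python bindings; the file names and the ORB/FLANN parameters are just placeholders):

    import cv2

    # Two frames from the two cameras (placeholder file names)
    img1 = cv2.imread("frame_rgbd.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_rgb.png", cv2.IMREAD_GRAYSCALE)

    # ORB keypoints and binary descriptors
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # FLANN with an LSH index (the FLANN index type suited to binary descriptors like ORB)
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)  # 6 = FLANN_INDEX_LSH
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test; LSH can return fewer than 2 neighbours, so guard the unpacking
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])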

My question concerns the undistortion of the images. Given that the EPnP implementation in OpenCV undistorts the 2D points internally, should I instead pass zero distortion coefficients to EPnP and undistort both images before detecting and matching the features (i.e. use the undistorted points as input to EPnP)? Or, if I do pass the distortion coefficients to EPnP, should the input 2D image points (and the 3D points) be distorted or undistorted?
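To make the two options I have in mind concrete, here is a minimal self-contained sketch (again assuming the OpenCV 3.x Python bindings; the intrinsics, distortion coefficients and points are made-up placeholders, in my real code they come from calibration and from the matches above):

    import numpy as np
    import cv2

    # Placeholder intrinsics and distortion coefficients of the RGB camera
    K = np.array([[525.0, 0.0, 319.5],
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.1, -0.2, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

    # Placeholder 3D points (from the RGB-D frame) and their distorted pixel
    # coordinates in the RGB frame, generated here just for the sketch
    object_points = np.random.rand(10, 3) * 2.0 + np.array([0.0, 0.0, 3.0])
    raw_image_points, _ = cv2.projectPoints(object_points, np.zeros(3), np.zeros(3), K, dist)
    raw_image_points = raw_image_points.reshape(-1, 2)

    # Option A: undistort the 2D points first (or detect features on images
    # undistorted with cv2.undistort), then pass None/zeros as distCoeffs
    undist = cv2.undistortPoints(raw_image_points.reshape(-1, 1, 2), K, dist, P=K).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(object_points, undist, K, None, flags=cv2.SOLVEPNP_EPNP)

    # Option B: pass the raw (distorted) 2D points together with the distortion
    # coefficients and let solvePnP undistort them internally; the 3D object
    # points are not touched by the distortion model
    ok, rvec, tvec = cv2.solvePnP(object_points, raw_image_points, K, dist, flags=cv2.SOLVEPNP_EPNP)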
