From my understanding, undistortPoints() takes a set of points on a distorted image, and calculates where their coordinates would be on an undistorted version of the same image. projectPoints() maps a set of object coordinates to their corresponding image coordinates.
However, I am unsure whether projectPoints() maps the object coordinates to image points on the distorted image (i.e. the original image) or on an undistorted image (one in which straight lines appear straight).
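To make the question concrete, here is a minimal sketch (Python/cv2) of how I understand the two calls. The intrinsics, distortion coefficients, points, and identity pose are made-up placeholders, not values from a real calibration:

```python
import numpy as np
import cv2

# Made-up pinhole intrinsics and distortion coefficients (k1, k2, p1, p2, k3)
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

# undistortPoints: pixel coordinates measured on the original (distorted) image,
# shape (N, 1, 2); passing P=camera_matrix asks for pixel coordinates back
distorted_pts = np.array([[[100.0, 150.0]],
                          [[400.0, 300.0]]])
undistorted_pts = cv2.undistortPoints(distorted_pts, camera_matrix, dist_coeffs,
                                      P=camera_matrix)

# projectPoints: 3D object points plus a pose (here an identity rotation and
# zero translation, so the object frame coincides with the camera frame)
object_pts = np.array([[0.0, 0.0, 1.0],
                       [0.1, 0.0, 1.0]])
rvec = np.zeros(3)
tvec = np.zeros(3)
image_pts, _ = cv2.projectPoints(object_pts, rvec, tvec, camera_matrix, dist_coeffs)

print(undistorted_pts.reshape(-1, 2))
print(image_pts.reshape(-1, 2))  # are these on the distorted or the undistorted image?
```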
Furthermore, the OpenCV documentation for undistortPoints() states that 'the function performs a reverse transformation to projectPoints()'. Could you please explain in what sense one is the reverse of the other?
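To test my reading of that sentence, I tried the round trip below (same made-up intrinsics as above): undistort to normalized coordinates, treat those as points on the Z = 1 plane, and project them back with an identity pose. I am not sure this is the interpretation the documentation intends:

```python
import numpy as np
import cv2

camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.2, 0.05, 0.0, 0.0, 0.0])

distorted_pts = np.array([[[100.0, 150.0]],
                          [[400.0, 300.0]]])

# Without P=, undistortPoints returns normalized coordinates rather than pixels
normalized = cv2.undistortPoints(distorted_pts, camera_matrix, dist_coeffs)

# Treat the normalized points as 3D points on the Z = 1 plane and project them
# back with an identity pose and the same distortion coefficients
pts_3d = cv2.convertPointsToHomogeneous(normalized).reshape(-1, 3)
reprojected, _ = cv2.projectPoints(pts_3d, np.zeros(3), np.zeros(3),
                                   camera_matrix, dist_coeffs)

print(distorted_pts.reshape(-1, 2))
print(reprojected.reshape(-1, 2))  # should these match if one reverses the other?
```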