I think you have things backwards. The "old model", as you called it, better known as the radial distortion model, is, as the name implies, a nonlinear distortion applied to the undistorted normalized coordinates after perspective division (but before conversion to pixel coordinates), according to their distance from the center of distortion (the principal point). The input is the undistorted coordinates before projection, and the output is the warped coordinates that the camera lens will produce. This means, for example, that if you know an object's 3D coordinates in the reference frame of the camera, the radial distortion model tells you where the object's points will end up on the image plane for the given lens, and how much they will deviate from the ideal pinhole projection.
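To make the forward direction concrete, here is a minimal sketch of the radial model in Python (the coefficient values and the camera matrix entries fx, fy, cx, cy are made up for illustration; cv2.projectPoints performs this same mapping once the camera is calibrated):

```python
def distort_normalized(x, y, k1, k2, k3):
    """Forward radial model: map undistorted normalized coordinates
    (x, y) = (X/Z, Y/Z) to the distorted coordinates the lens produces."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# A 3D point in the camera's reference frame
X, Y, Z = 0.2, 0.1, 1.0
x, y = X / Z, Y / Z                                  # perspective division
xd, yd = distort_normalized(x, y, k1=-0.25, k2=0.05, k3=0.0)

# Convert to pixel coordinates with the camera matrix (illustrative values)
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
u, v = fx * xd + cx, fy * yd + cy
```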
It is worth noting that the inverse formulation, the mapping from a pair of distorted image coordinates produced by the specific camera lens back to the corresponding undistorted scene coordinates, does not have a closed-form solution for the radial model and has to be solved iteratively using nonlinear optimization techniques.
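As a sketch of how that inversion is typically done, a simple fixed-point iteration works well for moderate distortion (this mirrors the spirit of what cv2.undistortPoints does internally; the helper below is self-contained and the coefficients are again illustrative):

```python
def undistort_normalized(xd, yd, k1, k2, k3, iters=10):
    """Invert the radial model by fixed-point iteration:
    find (x, y) such that scaling (x, y) by the radial factor gives (xd, yd)."""
    x, y = xd, yd                        # initial guess: the distorted point itself
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        x, y = xd / factor, yd / factor  # undo the forward scaling at the current estimate
    return x, y
```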
The second model is based on the rational model by Claus et al.
Note that the model they propose is actually an undistortion model: a mapping between distorted image points and their corresponding undistorted scene points that, unlike the radial distortion model, has a closed-form solution. The OpenCV implementation, however, differs significantly from the method proposed in the paper and is not an undistortion model at all, but simply an extension of the radial distortion model with extra terms k_4 to k_6 in the denominator, the idea being that lenses with more extreme distortion (wide-angle, fisheye) can be modeled better with the additional terms.
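In code, OpenCV's variant amounts to nothing more than an extra polynomial in the denominator of the same forward mapping (sketch below, with illustrative coefficients and tangential terms omitted); to actually estimate k_4 to k_6 during calibration you pass the CALIB_RATIONAL_MODEL flag to cv2.calibrateCamera, otherwise they are left at zero:

```python
def distort_rational(x, y, k1, k2, k3, k4, k5, k6):
    """OpenCV-style rational model: radial distortion with k4..k6
    added in the denominator."""
    r2 = x * x + y * y
    num = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    den = 1 + k4 * r2 + k5 * r2**2 + k6 * r2**3
    factor = num / den
    return x * factor, y * factor
```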
For a more recent discussion of these models see: