# projectpoints distortion

Hi everybody, I am opening a new question to add images and to pull together several related questions.

To recap, I'm trying to project 3D points onto an undistorted image, specifically images from the KITTI raw dataset. For those who don't know it, each sequence of the dataset comes with a calibration file containing both intrinsics and extrinsics; the dataset contains both camera images and lidar 3D points, and a tutorial for projecting the 3D points is provided, but everything there uses the undistorted/rectified images that the dataset also ships. However, since the intrinsic/extrinsic matrices are provided as well, in general I should be able to project the 3D points directly onto the raw image (raw meaning unprocessed in this case) without using their rectification or undistorting the image myself.

So let's start with their example: a projection of the points directly onto the rectified image (note that I say rectified rather than undistorted because the dataset has four cameras, so the provided images are jointly rectified, not merely undistorted).

As you can see, this image is essentially undistorted and the lidar points are correctly aligned (projected using their demo toolkit, in MATLAB).

Next, using the info provided:

• S_xx: 1x2 size of image xx before rectification
• K_xx: 3x3 calibration matrix of camera xx before rectification
• D_xx: 1x5 distortion vector of camera xx before rectification
• R_xx: 3x3 rotation matrix of camera xx (extrinsic)
• T_xx: 3x1 translation vector of camera xx (extrinsic)

together with the information about where the lidar sensor is, I can easily prepare an example to

• rotate the points into the camera coordinate frame (this is the same as giving projectPoints the transformation via rvec/tvec, which I use as identities since the first camera is the origin of all the coordinate systems, but this is a detail)
• project the points using projectPoints
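The two steps above can be sketched in plain NumPy; this reimplements the same pinhole-plus-distortion model that cv2.projectPoints applies (the K matrix, R/T and test points below are illustrative values, not taken from a real calibration file):

```python
import numpy as np

def project(points_velo, R, T, K, D):
    # Step 1: rotate/translate the points into the camera coordinate frame.
    pts_cam = points_velo @ R.T + T
    # Step 2: normalize, apply the radial/tangential distortion model,
    # then map through the intrinsics -- the same math projectPoints does.
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    k1, k2, p1, p2, k3 = D
    r2 = x*x + y*y
    cdist = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xd = x*cdist + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y*cdist + p1*(r2 + 2*y*y) + 2*p2*x*y
    u = K[0, 0]*xd + K[0, 2]
    v = K[1, 1]*yd + K[1, 2]
    return np.stack([u, v], axis=1)

# Identity extrinsics: camera 0 is the origin of all coordinate frames.
R = np.eye(3)
T = np.zeros(3)
K = np.array([[984.0, 0.0, 690.0],
              [0.0, 980.0, 233.0],
              [0.0, 0.0, 1.0]])
D = np.zeros(5)
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.5, 20.0]])
print(project(pts, R, T, K, D))
```

With D set to zeros this reduces to a plain pinhole projection, which is exactly the comparison made further down.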

OK, let's try to do this, using

    D_00 = np.array([-3.745594e-01, 2.049385e-01, 1.110145e-03, 1.379375e-03, -7.084798e-02], dtype=np.float64)


I hope you can see the image! There are points above the actual upper limit of the lidar points, as well as some "noise" between the other points (to be fair, in the first image I didn't plot all the points, but you should still see a different "noise pattern" in this second image).

At first I thought it was due to a stupid error in my code, in the parsing or whatever... but then I tried to do the same on the undistorted image, setting the distortion coefficients to zero, and... magic:

    D_00_zeros = np.array([0.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)


Could this still be my fault in parsing the data? Hmm... so, I decided to create a "virtual plane" and move it through the ...


## Answer

OK, let's answer this question. I've been digging into the OpenCV code, compiling everything necessary in debug mode and doing a step-by-step debugging session of the same implementation I previously had in Python, now in C++.

The code that implements the projectPoints function is in calibration.cpp, which in turn calls the cvProjectPoints2Internal routine. The projection itself is done around line 774, where a long `for` loop starts; this is the part where the K4 value (the 5th element of the distortion vector) plays the key role:

    r2 = x*x + y*y;
    r4 = r2*r2;
    r6 = r4*r2;
    a1 = 2*x*y;
    a2 = r2 + 2*x*x;
    a3 = r2 + 2*y*y;
    cdist = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;   // <<<< ***HERE***
    icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
    xd0 = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2 + k[9]*r4;
    yd0 = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2 + k[11]*r4;
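The folding effect is easy to reproduce numerically. With the radial coefficients from the D_00 above, the distorted radius r*cdist is not monotonic in r: beyond some normalized radius, increasing r makes the projected point move back *toward* the image center, which is exactly where the stray points in the question come from. A small NumPy sketch:

```python
import numpy as np

# Radial coefficients k1, k2, k3 from the D_00 quoted in the question.
k1, k2, k3 = -3.745594e-01, 2.049385e-01, -7.084798e-02

# Distorted radius rd = r * cdist over a range of normalized radii r.
r = np.linspace(0.0, 2.0, 2001)
r2 = r * r
rd = r * (1 + k1*r2 + k2*r2**2 + k3*r2**3)

# The mapping stops being injective where d(rd)/dr first turns negative:
# rays beyond that radius fold back into the image.
drd = np.diff(rd)
first_fold = r[1:][drd < 0][0] if np.any(drd < 0) else None
print("folds back beyond r =", first_fold)
```

For these coefficients the derivative turns negative at a normalized radius somewhere between 1 and 1.5, i.e. well outside the actual field of view, so only far-off lidar points are affected.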


My conclusion is that this is not a real bug; it is simply the way the routine works. But there is still the issue of these "returning" points, or whatever name we want to give them. The fact is that the points I showed in the question are nothing less than these ones, which I've created using a much higher value for K4 (the 5th element of the distortion vector):

The issue can then be rephrased as: "can we remove the points that lie beyond some kind of, let's say, distortion limit?"

I don't really know how to do that simply; maybe the Jacobian evaluated at every point could be used, or maybe not, but it would certainly be an interesting extension of the current behavior of this function.

At the moment I don't have time to investigate an elegant solution, so my way of solving this was simply to compare the position of the distortion-free point (computed with all K's set to zero) and check whether it falls inside a specific region. I know this is not a real solution, just a workaround for my specific problem with my cameras, but it works. Here is my C++ code with this temporary workaround; feel free to improve it :-D
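The same workaround can be sketched in Python (the function name, K values and image size below are illustrative): compute the distortion-free projection for each point, and keep only the points that already land inside the image without distortion, since the folded-back points only reach the image *because* of the distortion terms.

```python
import numpy as np

def valid_mask(pts_cam, K, width, height):
    # Distortion-free projection: all distortion K's set to zero,
    # so this is a plain pinhole projection.
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    u = K[0, 0]*x + K[0, 2]
    v = K[1, 1]*y + K[1, 2]
    # Keep only points whose undistorted projection falls inside the image.
    return (u >= 0) & (u < width) & (v >= 0) & (v < height)

K = np.array([[984.0, 0.0, 690.0],
              [0.0, 980.0, 233.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0],    # near the optical axis -> kept
                [50.0, 0.0, 10.0]])  # far outside the FOV -> rejected
print(valid_mask(pts, K, 1242, 375))
```

The mask can then be used to drop the offending points before (or after) calling projectPoints with the full distortion vector.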

Also, the fact that points are projected even when they are behind the camera is something to consider for future versions of OpenCV, as is the definition of a positive/negative K1 value; I think the issue here lies in the direction we are thinking in (3D to 2D or 2D to 3D...)
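The behind-the-camera case is easy to guard against yourself in the meantime: points with Z <= 0 in the camera frame have no meaningful projection, so a simple mask before calling projectPoints avoids them (a minimal sketch with made-up points):

```python
import numpy as np

# Points expressed in the camera coordinate frame (Z is the optical axis).
pts_cam = np.array([[1.0, 0.5, 10.0],
                    [1.0, 0.5, -10.0]])   # second point is behind the camera

# Keep only points in front of the camera before projecting.
in_front = pts_cam[:, 2] > 0
pts_keep = pts_cam[in_front]
print(len(pts_keep))
```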

To conclude, here is my workaround "in action" :-)

With the "problem" https://github.com/trigal/trigal.gith...

Without the "problem" https://github.com/trigal/trigal.gith...



This is a long post. I have not read it in detail.

At least to track the issue, I would suggest you open an issue here and name it a feature request, or a possible bug, with reproducible data and code.

( 2020-07-06 15:36:46 -0500 )
