using k1, k2, k3 distortion values in blender

asked 2020-05-18 19:23:11 -0500

cegaton

updated 2020-05-19 10:55:50 -0500

I'm trying to use Python and OpenCV to automate camera calibration using a checkerboard pattern. My goal is to use the resulting calibration values (camera matrix and distortion coefficients) to undistort the lens when doing motion tracking in Blender.

The Python code for OpenCV is quite easy to set up, and it recovers the camera intrinsics from a series of photographs of a chessboard chart taken from different angles.

The resulting information comes as:

a camera matrix:

[fx , 0, Cx ]

[0 , fy, Cy ]

[0 , 0 , 1 ]

and the distortion coefficients k1, k2, p1, p2, k3.

So far, so good and painless.

The problem comes when I try to use those numbers in Blender to undistort images in the movie clip editor. The camera matrix is fine once the lens size (in pixels) has been converted to mm. The image center (Cx, Cy) seems correct as well.

But the distortion values don't work correctly: k1, k2, and k3 values I get from camera calibration in OpenCV overcompensate the distortion by a large margin. The values seem to be scaled somehow and I'm having a hard time figuring out how to convert them to a useful number. I presume a different algorithm is at play to calculate radial distortion. How can I know how to convert those values to be used in blender?

Here's an example: In opencv I get the following information:

[[3.94523169e+03 0.00000000e+00 2.57186877e+03]
 [0.00000000e+00 3.95830239e+03 1.75686206e+03]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

distortion coefficients k1, k2, p1, p2, k3:

[[-3.54617853e-01 1.84303172e-01 3.61469052e-04 -2.85919992e-04 -6.27916142e-02]]
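For reference, OpenCV's documented Brown–Conrady forward model can be written out directly and evaluated with the coefficients above. Here x and y are normalized image coordinates, i.e. x = (u - Cx)/fx, y = (v - Cy)/fy; my understanding is that Blender's polynomial model evaluates its radial term in a different normalization, which would make identical k values describe different curves, but I can only verify the OpenCV side here:

```python
# OpenCV's Brown-Conrady distortion model (the one calibrateCamera fits),
# using the k1, k2, p1, p2, k3 values printed above.
k1, k2, p1, p2, k3 = (-3.54617853e-01, 1.84303172e-01,
                      3.61469052e-04, -2.85919992e-04, -6.27916142e-02)

def distort(x, y):
    """Map an undistorted normalized point (x, y) to its distorted position."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

print(distort(0.5, 0.0))  # negative k1 (barrel) pulls the point inward
```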

The lens focal length is converted to mm by multiplying fx * (sensor width in mm / image width in pixels).
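As a sanity check on that conversion, here it is with the fx from the matrix above and placeholder sensor/image dimensions (36 mm sensor width and a 5144 px wide image, guessed as roughly 2 * Cx — substitute the real camera's numbers):

```python
# Focal length in mm from the pixel-unit fx, per the formula above.
fx_px = 3945.23169          # from the OpenCV camera matrix above
sensor_width_mm = 36.0      # hypothetical full-frame sensor width
image_width_px = 5144       # hypothetical, roughly 2 * Cx

focal_mm = fx_px * sensor_width_mm / image_width_px
print(round(focal_mm, 2))   # ~27.61 mm with these placeholder numbers
```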
If I do a manual calibration in Blender (using the very unreliable grease pencil method, as described here:)

I get the following values for k1, k2, and k3:

k1: -0.213 k2: 0.043 k3: -0.073

I'm sure these values carry a large margin of error, since they were calibrated by eye, but they are nowhere near the values I get from OpenCV. I just don't understand how to scale the results, or how to convert them into the values Blender is using. Can anyone explain what I'm doing wrong? Thanks in advance.

PS. I've read the related posts on this already, but there is no mention of the distortion coefficients.
