

using k1, k2, k3 distortion values in blender

I'm trying to use Python and OpenCV to automate camera calibration using a checkerboard pattern. My goal is to use the resulting calibration values (camera matrix and distortion coefficients) to undistort the lens when doing motion tracking in Blender 3D.

The Python code for OpenCV is quite easy to set up, and yields the camera intrinsics from a series of photographs of a chessboard chart taken from different angles.

The resulting information comes as:

a camera matrix:

[fx,  0, Cx ]

[ 0, fy, Cy ]

[ 0,  0,  1 ]

and a distortion coefficient vector (k1, k2, p1, p2, k3).

So far, so good and painless.

The problem comes when I try to use those numbers in Blender to undistort images in the Movie Clip Editor. The camera matrix is fine once the focal length (in pixels) has been converted to mm. The image center (Cx, Cy) seems correct as well.

But the distortion values don't work correctly: the k1, k2, and k3 values I get from camera calibration in OpenCV overcompensate the distortion by a large margin. The values seem to be scaled somehow, and I'm having a hard time figuring out how to convert them to useful numbers. I presume a different algorithm is at play to calculate radial distortion. How can I convert those values to be used in Blender?
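For reference, OpenCV's coefficients apply to *normalized* camera coordinates (x = (u - Cx)/fx, y = (v - Cy)/fy), not to pixels. Below is a minimal sketch of OpenCV's radial-plus-tangential model using the coefficients from this question; Blender's polynomial model uses its own normalization and convention, which is a likely source of the mismatch.

```python
# OpenCV's distortion model, applied to a point in normalized coordinates.
# Coefficients taken from the calibration result quoted in the question.
k1, k2, p1, p2, k3 = (-3.54617853e-01, 1.84303172e-01,
                      3.61469052e-04, -2.85919992e-04, -6.27916142e-02)

def distort(x, y):
    """Map an undistorted normalized point (x, y) to its distorted position."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With k1 < 0 (barrel correction), off-center points move toward the center.
print(distort(0.5, 0.3))
```

Since the model works on coordinates already divided by fx and fy, the same numeric coefficients mean something different in any tool that normalizes differently.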

Here's an example. In OpenCV I get the following information:

[[3.94523169e+03 0.00000000e+00 2.57186877e+03]
 [0.00000000e+00 3.95830239e+03 1.75686206e+03]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00]]

distortion coefficients (k1, k2, p1, p2, k3):

[[-3.54617853e-01 1.84303172e-01 3.61469052e-04 -2.85919992e-04 -6.27916142e-02]]

The focal length converted to mm (by multiplying fx * (sensor size in mm / image width in pixels)):

16.971195251299598mm
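That conversion can be checked in a few lines. fx comes from the camera matrix above; the sensor width and image width below are placeholder assumptions, not values from the question — substitute your camera's real numbers.

```python
# Converting OpenCV's pixel focal length to a lens size in mm.
fx = 3.94523169e3          # pixels, from the OpenCV camera matrix above
sensor_width_mm = 23.5     # assumption (a typical APS-C sensor width)
image_width_px = 5472      # assumption (photo width in pixels)

focal_mm = fx * (sensor_width_mm / image_width_px)
print(focal_mm)            # lands near the ~17 mm value quoted above
```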

If I do a manual calibration in Blender (using the very unreliable Grease Pencil method described here: https://blender.stackexchange.com/a/15622/96803),

I get the following values for k1, k2, and k3:

k1: -0.213 k2: 0.043 k3: -0.073

I'm sure these new values carry a large margin of error, as they were calculated visually, but they are nowhere near the values I get from OpenCV. I just don't understand how to scale the results, or how to convert them to the values that Blender is using. Can anyone explain what I'm doing wrong? Thanks in advance.

PS. I've already read the following posts:

https://www.rojtberg.net/1601/from-blender-to-opencv-camera-and-back/ and https://answers.opencv.org/question/6331/calibrating-blender-camera-as-test/

but neither mentions the distortion coefficients.