
Understanding the camera matrix

asked 2016-03-09 18:45:05 -0600

solarflare

updated 2016-04-29 11:53:56 -0600

Hello all,

I used a chessboard calibration procedure to obtain a camera matrix using OpenCV and Python, based on this tutorial: http://opencv-python-tutroals.readthe...

I ran through the sample code on that page and was able to reproduce their results with the chessboard pictures in the OpenCV folder to get a camera matrix.

I then tried the same procedure with my own checkerboard grid and camera, and I obtained the following matrix:

mtx = [1535    0  638
          0 1536  204
          0    0    1]

I am trying to better understand these results, based on the camera sensor and lens I am using.

Based on:

Fx = fx * W/w

where:

Fx = focal length in mm
W = sensor width in mm
w = image width in pixels
fx = focal length in pixels

The size of my images is 1264 x 512 (width x height). I am using the following lens:

This has a focal length of 8 mm.

I am using an FL3-U3-13Y3 camera from Point Grey, which has an image width of 12 mm according to its datasheet.

From the camera matrix, fx is the element in the first row, first column. So above, fx = 1535. In short:

fx = 1535 pixels (from the camera matrix I obtained)
w = 1264 pixels (image size I set)
W = 12 mm (from the datasheet)
Fx = 8 mm (from the datasheet)

Using: Fx = fx * W/w, we would expect Fx = 1535 * 12 / 1264 = 14.57 mm
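The arithmetic above can be reproduced directly (all values as quoted in the question):

```python
# Quick check of the focal-length formula Fx = fx * W / w.
fx_px = 1535     # fx from the camera matrix (element [0][0]), in pixels
w_px = 1264      # image width in pixels
W_mm = 12.0      # sensor width in mm (from the datasheet)

F_mm = fx_px * W_mm / w_px
print(f"F = {F_mm:.2f} mm")  # ~14.57 mm, versus the 8 mm lens
```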

But the actual lens is 8 mm. Why the discrepancy?

I would think that the actual size of a chessboard square would have to be known, but I did not see any mention of setting that in the tutorial I linked. I basically had to scale down the chessboard grid so that it would work with my camera setup.

I would appreciate any help or insight on this.

Thanks in advance

EDIT: Actually, to be more specific, the lens has a maximum camera sensor format of 1/3", while the camera's sensor format is 1/2". I found an article on this:

Focal length multiplier = (1/2) / (1/3) = 1.5
Focal length of lens as listed on datasheet = 8 mm
Equivalent focal length of lens = 1.5 * 8 mm = 12 mm
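The crop-factor arithmetic from the edit, written out with the lens and sensor formats as stated above:

```python
# Focal-length multiplier ("crop factor") between the lens's rated
# 1/3" format and the camera's 1/2" sensor format.
lens_format = 1.0 / 3.0     # inches (lens's maximum sensor format)
sensor_format = 1.0 / 2.0   # inches (camera's sensor format)

multiplier = sensor_format / lens_format   # 1.5
equivalent_focal_mm = multiplier * 8.0     # 12.0 mm for the 8 mm lens
print(multiplier, equivalent_focal_mm)
```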

Still, 12 mm is off from 14.57 mm. Am I not factoring something else into my calculation? Could this be happening because of bad images in which the chessboard corners are still found?

Below is an example image:



You have to supply the size of a corner in the chessboard pattern when you construct the object points, see the tutorial in C++ and also the C++ sample.

Eduardo ( 2016-03-10 03:46:29 -0600 )

Hi Eduardo,

Thanks for your reply. From my reading, I see that it does ask for the size of the pattern: "Let there be this input chessboard pattern which has a size of 9 X 6."

However, as I understand it, this is referring to the number of grid squares, not the size of an individual square (in either mm or inches). Interestingly enough, to me it appears that their grid example is really 10x7.

In your C++ example link: static void calcChessboardCorners(Size boardSize, float squareSize, vector<Point3f>& corners, Pattern patternType = CHESSBOARD)

I believe you are referring to the input "squareSize", which by default is 1. Is this the value that should be changed, and the input could be in any desired units? I don't believe anything else has to be changed except grid array size?


solarflare ( 2016-04-26 08:46:57 -0600 )

This chessboard pattern is a 9x6, look at the image in the tutorial with the drawing.

The squareSize must be set to the real size (if you print it, for example, on A4 or A3 paper), in whatever unit you want.

Eduardo ( 2016-04-26 09:04:20 -0600 )

Got it. The grid array refers to the number of inner corners, NOT the number of grids in a row by the number of grids in a column.

Another point I wanted to note is that I am dealing with a micro video lens, and therefore with smaller fields of view / regions of interest. I don't believe this should play a role, but the size of a single square is about 1.4 mm x 1.4 mm. So if I wanted units of mm, then squareSize = 1.4, correct?
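For reference, this is roughly how the scaled 3D object points are built in the OpenCV Python calibration sample (the 9x6 pattern size is the tutorial's; the 1.4 mm square size is the one measured above):

```python
import numpy as np

pattern_size = (9, 6)   # inner corners per row and column
square_size = 1.4       # mm, measured size of one square

# Planar 3D model of the chessboard at z = 0, scaled to real-world units.
pattern_points = np.zeros((np.prod(pattern_size), 3), np.float32)
pattern_points[:, :2] = np.indices(pattern_size).T.reshape(-1, 2)
pattern_points *= square_size

print(pattern_points[:3])  # (0,0,0), (1.4,0,0), (2.8,0,0)
```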

solarflare ( 2016-04-26 10:01:40 -0600 )

Yes, that is correct. Every function that uses the camera intrinsic matrix (for example the solvePnP() function) will then express its results in mm.

Eduardo ( 2016-04-26 11:52:39 -0600 )

So unless I am mistaken, I do not see a similar "squareSize" variable in the Python version:

I can see that the routine is finding the corners of the chessboard in the images I have captured. Do I manually need to scale down these values?

I printed the original chessboard, and each square is about 21 mm. Measuring my own chessboard more closely, each square is about 1.03 mm. However, by simply scaling, I do not see how the math could work out.

solarflare ( 2016-04-27 15:59:59 -0600 )

You have to manually build the chessboard 3D model, look here.

Eduardo ( 2016-04-28 03:58:41 -0600 )

So that links back to the original C++ sample. Do you mean I have to redefine the Python version of the "calcChessboardCorners" function?

Actually, looking at this example more closely, it looks like no adjustment is made based on "squareSize" when "CHESSBOARD" is used as the "patternType"?

solarflare ( 2016-04-28 09:59:24 -0600 )

Python code:

Eduardo ( 2016-04-28 10:07:11 -0600 )

So the key line is: "pattern_points *= square_size"

I added that in and started playing with the parameters. I set "square_size" to a value of 1 and then 2. In either case, the camera matrix is the same, and the rvecs are also the same. The tvecs, however, are twice as large when "square_size" is 2. Still, I don't see why the calculation for focal length doesn't work out using the data I get from the camera matrix. Is there something I am not taking into account?
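A sketch of why the camera matrix cannot depend on square_size: the pinhole projection only sees the ratio of scene coordinates to depth, so scaling all object points (and hence all tvecs) by the same factor leaves every pixel coordinate, and therefore the recovered intrinsics, unchanged. Using the matrix values from the question:

```python
import numpy as np

fx, fy, cx, cy = 1535.0, 1536.0, 638.0, 204.0
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

def project(p):
    # Pinhole projection: homogeneous image point, divided by depth.
    uvw = K @ p
    return uvw[:2] / uvw[2]

point = np.array([10.0, 5.0, 100.0])  # an arbitrary 3D point, camera frame

for s in (1.0, 2.0):                  # s plays the role of square_size
    print(s, project(point * s))      # identical pixel coordinates for both s
```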

NOTE: I made an edit to the original post regarding focal length.

solarflare ( 2016-04-28 12:00:46 -0600 )

2 answers


answered 2020-02-19 09:02:32 -0600

syvlvester

updated 2020-02-20 00:12:25 -0600

It is probably too late for you, but I was able to handle this problem by calculating the sensor size in mm from the pixel size and the true resolution given in the datasheet. I did the calculation for your camera at the same time.

Your pixel size is 4.8 μm, and you can multiply it by the default camera resolution of 1280 x 1024 to get the sensor size in μm. Convert that to mm, then apply the formula Fx = fx * W/w with this computed sensor width in place of your 12 mm.
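Following that method with the stated figures (the 4.8 μm pixel pitch and the 1280 x 1024 default resolution are this answer's datasheet readings, not values confirmed in the question):

```python
# Derive the sensor width from pixel pitch x resolution, then redo the
# focal-length check. Assumes the 1264-pixel-wide capture spans
# (approximately) the full 1280-pixel-wide sensor.
pixel_size_um = 4.8     # pixel pitch from the datasheet
sensor_width_px = 1280  # full sensor resolution (width)

sensor_width_mm = sensor_width_px * pixel_size_um / 1000.0  # 6.144 mm

fx_px = 1535            # from the calibrated camera matrix
image_width_px = 1264   # capture width used during calibration

F_mm = fx_px * sensor_width_mm / image_width_px
print(f"W = {sensor_width_mm:.3f} mm, F = {F_mm:.2f} mm")
# ~7.46 mm, much closer to the 8 mm lens than the 14.57 mm from W = 12 mm
```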


answered 2017-11-28 05:47:05 -0600

Hi, solarflare. Since the real value and the calculated value are of the same order of magnitude, I suspect the problem is that "w" and "W" don't match each other. I'm not sure about that, but from your description I believe you did nothing wrong in the calculation itself:

"w = 1264 pixels (image size I set), W = 12 mm (from datasheet)"

As you said, W = 12 mm comes from the datasheet, but at your selected image size, or in video-capture mode, perhaps not the whole CCD/CMOS sensor array is active. For a webcam, for example my Logitech C270, the optical (true) resolution of 1280 x 960 and the video-capture resolution of 800 x 600 are simply different. If I used 800 instead of 1280 to calculate w/W, the result would be wrong. To avoid problems like this, I suggest that if you can find the mm/pixel pitch of your camera's sensor, you use it to try the calculation again.

edit flag offensive delete link more
