How is the principal point defined?

asked 2019-01-30 01:53:14 -0600 by DanielPi

Hello.

Something I've thought about for a while: how is the principal point in the calibration matrix defined (in the general case, and in OpenCV's calibration algorithm)? Depending on how you define the principal point, the corners of pixel (0, 0) end up in either

P_corners = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]

or in

P_corners = [(0, 0), (1, 0), (0, 1), (1, 1)]

This is crucial when calculating the vector/ray that goes from the camera's optical center through a pixel's center or corner. If you're wrong about how the principal point is defined, your ray will be off by half a pixel (which corresponds to some angle depending on your focal length, field of view, etc.).

Here's an image to further explain my question:

[image attachment: pixel_coordinates.png — diagram of the two pixel-coordinate conventions, Alt. 1 and Alt. 2]

Is it Alt. 1 or Alt. 2? Also, is there any way to deduce this from the intrinsic parameters?

Regards, Daniel


Comments

Since the calibration matrix or camera matrix can be defined using floating point coordinates I assume the principal point can lie anywhere on a pixel (not only in the middle or the corners). In real world, if your lens has a focal length of 35mm usually when you calibrate the camera you receive a value like 35.391751mm. In Computer Vision you translate the mm into pixel using e.g. the known pixel size afaik.

Grillteller ( 2019-01-30 02:30:43 -0600 )

doc is here

LBerger ( 2019-01-30 02:39:08 -0600 )

@Grillteller: the fact that the calibration matrix is defined using floating-point values, and that it can be expressed in mm, doesn't give any information from which the answer to this question could be deduced. You still have the same discrepancy if you're working in mm. @LBerger: the documentation does not explicitly mention this and does not answer the question.

DanielPi ( 2019-01-30 03:30:16 -0600 )

@DanielPi the documentation does not cover signal-processing basics. When you sample a signal there is an integration over the spatial domain (pixel size) and over time (exposure time), so there is always an uncertainty about position in both time and space. Usually the center is used, because the uncertainty is +/- half a pixel.

LBerger ( 2019-01-30 04:49:27 -0600 )

I have the same question.

Looking at the definition of cvGetOptimalNewCameraMatrix (at the time of writing this is around here: https://github.com/opencv/opencv/blob...)

It seems to me that the principal point is calculated as:

double cx = (newImgSize.width)*0.5;
double cy = (newImgSize.height)*0.5;

So it looks like an image with 2x2 pixels has its center at (1, 1)... In other words, it looks like Alt. 2.

But how to test it? Is there a way to project an image to 3D via this camera and have a look at the output image?

Interested in further ideas...

Michael

Michael Möllney ( 2019-05-22 06:32:06 -0600 )