Understanding cv2.calibrateCamera in Python

asked 2019-05-29 13:09:21 -0500

Hi All

I'm fairly new to OpenCV and decided to use it in Python to keep things as easy as possible. My overall objective is to use a 3D camera to triangulate the 3D position of an object. The camera I bought is two cameras mounted together, so I know their orientations are parallel and that they are 60 mm apart.

I'm trying to follow https://answers.opencv.org/question/1...

So...

First, I'm using https://opencv-python-tutroals.readth... to get

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objectPointsArray, imgPointsArray, gray.shape[::-1], None, None)

using a chessboard

Now:

  1. I'm confused about how this calibration works and what's returned. Should I be moving the chessboard in all dimensions while the camera stays still? Or just in the x and y axes while z remains constant? Or should I keep both the camera and the board still?

  2. Of ret, mtx, dist, rvecs, tvecs, which are returned:

Is mtx == the camera matrix, i.e. the focal lengths and optical centres? How could it know these if it didn't know the z values of the chessboard?

Is dist == the distortion coefficients, i.e. a matrix describing the fish-eye effect of the lens?

Is rvecs == the rotation of the camera? Should this be zero if the camera is still and mounted parallel to the ground?

Is tvecs == the location of the camera? I don't understand how it could know this from looking at a chessboard, which could be anywhere.

Any help would be greatly appreciated.
