# Getting projection matrix

First of all, I want to apologize for my bad English.

I am really new to OpenCV and virtual reality. I tried to learn the theory of image processing, but some points are still unclear to me. I learned that the projection matrix transforms a 3D point into a 2D point. Am I right? The essential matrix gives me information about the rotation and translation between two cameras, and the fundamental matrix gives information about the relationship between a pixel in one image and a pixel in the other image. The homography matrix relates the coordinates of pixels in two images (is that correct?). What is the difference between the fundamental and homography matrices?

Do I need all of these matrices to get the projection matrix? I am new to this, so please explain it simply if you can. Thanks for your help.

UPDATE: I found out that I can somehow get the projection matrix from the fundamental matrix and from the homography matrix, but I don't really understand the algorithm. I read that with the stereoCalibrate() method I can get the intrinsic and extrinsic parameters. Are these parameters enough to get the projection matrix? What is the formula for the projection matrix? If the first camera is at the origin [0,0,0] of the coordinate system, is it still necessary to compute the projection matrix of the first camera?



In short: a homography relates two views of a set of coplanar points. The essential (E) and fundamental (F) matrices relate two views of a set of points that need not be coplanar. The difference between E and F is that E only works on pairs of points whose image coordinates have been normalized beforehand - raw pixel coordinates transformed by the inverse of the calibration matrices you get when you perform camera calibration - while F works directly on pixel coordinates.
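To make this concrete, here is a small NumPy sketch with made-up intrinsics and a made-up relative pose (all values are illustrative, not from any real calibration). It builds E from a known rotation R and translation t, turns it into F with the inverse calibration matrices, projects a 3D point into both views, and checks that the resulting pixels satisfy the epipolar constraint:

```python
import numpy as np

# Hypothetical intrinsics (illustrative values, not real calibration output).
K1 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
K2 = K1.copy()

# Relative pose of camera 2: a small rotation about y and a sideways translation.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0., np.sin(theta)],
              [0., 1., 0.],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([0.5, 0.0, 0.0])

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

E = skew(t) @ R                                   # essential matrix (normalized coords)
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)   # fundamental matrix (pixel coords)

# Project a 3D point (in camera-1 coordinates) into both images.
X = np.array([0.3, -0.2, 4.0])
p1 = K1 @ X;           p1 /= p1[2]    # homogeneous pixel in image 1
p2 = K2 @ (R @ X + t); p2 /= p2[2]    # homogeneous pixel in image 2

print(abs(p2 @ F @ p1))  # ~0: the pixel pair satisfies the epipolar constraint
```

The same point pair, converted to normalized coordinates with K⁻¹, satisfies the constraint through E instead of F - that is exactly the relationship between the two matrices.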

For a stereo pair, the intrinsic parameters are the ones related to the cameras themselves, that is, their respective distortion coefficients and calibration matrices. The extrinsic parameters are the relative rotation and translation between the two cameras.
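To answer the formula question from the update: with the first camera at the world origin, the projection matrices are commonly built as P1 = K1·[I|0] and P2 = K2·[R|T], where K1 and K2 are the calibration matrices and R, T are the extrinsics you get from stereoCalibrate(). A NumPy sketch with made-up values (everything below is illustrative, not real calibration data):

```python
import numpy as np

# Illustrative stand-ins for calibration output (not real data):
K1 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # camera 1 intrinsics
K2 = np.array([[810., 0., 315.], [0., 810., 245.], [0., 0., 1.]])  # camera 2 intrinsics
R = np.eye(3)                          # rotation of camera 2 relative to camera 1
T = np.array([[0.06], [0.0], [0.0]])   # translation (e.g. a 6 cm baseline), 3x1

# Projection matrices, taking camera 1 as the world origin:
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])  # P1 = K1 [I | 0]
P2 = K2 @ np.hstack([R, T])                         # P2 = K2 [R | T]

# Project a homogeneous 3D point (in camera-1 coordinates) into both images:
X = np.array([0.0, 0.0, 2.0, 1.0])  # a point on camera 1's optical axis, 2 m away
p1 = P1 @ X; p1 /= p1[2]
p2 = P2 @ X; p2 /= p2[2]
print(p1[:2])  # (320, 240): the principal point of camera 1, as expected
```

So yes, the intrinsic and extrinsic parameters are enough - and the first camera's projection matrix is still needed (it is just the simple K1·[I|0] form because that camera sits at the origin).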

If you could tell me a bit more about what you'd like to do - what your input data is and which projection matrix you'd like to compute - I might be able to help a little more.


Thank you for your response. I would like to transform a 3D point from the real world into a 2D image point using the projection matrix. But first I wanted to get some knowledge about computer vision and epipolar geometry. As I said, I used the OpenCV methods stereoCalibrate, stereoRectify and findChessboardCorners, and got the projection matrices of the cameras. Now I am stuck, because after rectifying I get two images that are zoomed in several times. I really don't know what that means, or whether my calibration was correct or not.

( 2013-05-21 09:57:28 -0500 )


## Stats

Seen: 1,678 times

Last updated: May 20 '13