# Camera projection matrix from fundamental

I'm pretty new to OpenCV and trying to put together a monocular AR application using **structure from motion**. I've got a tracker up and running which tracks points well; the optical flow looks good. It needs to work with uncalibrated cameras.

From the point correspondences I get the fundamental matrix from findFundamentalMat, but I'm lost as to how to get the camera projection matrix from it. Matrix math is not my strong suit, and for all my Google-fu, all I can find are examples using pre-calibrated cameras.

- Find fundamental matrix using findFundamentalMat (check!)
- Find epilines with computeCorrespondEpilines (check!)
- Extract the projection matrices **P and P1** (????)

P is the canonical matrix [I | 0] in the uncalibrated case, but **how do I get P1**?

You should start by looking at the corresponding topics, the fundamental matrix and the essential matrix, for example:

You just have to grasp the idea behind the theory and the underlying geometry.

In my opinion, getting the full projection matrix (the intrinsic + extrinsic parameters) is not possible.

Why? Because intuitively, given two views from uncalibrated cameras at unknown positions, you can estimate the camera motion that transforms one set of points into the other, but you cannot recover the 3D information, as the pinhole camera model "suppresses" it: depth is lost in the projection.

Even with calibrated cameras, you can only estimate the rotation and translation (up to a scale) from one camera to the other; if the initial pose of the first camera is unknown, it seems to me that you cannot get the full projection matrix.
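That "up to a scale" point shows up directly in the standard SVD-based decomposition of an essential matrix. A minimal sketch with a synthetic E (all values made up):

```python
import numpy as np

def skew(v):
    """3x3 cross-product (skew-symmetric) matrix of v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def decompose_essential(E):
    """Return the four (R, t) candidates hidden in an essential matrix.

    t comes back as a direction only -- its scale is lost, which is the
    "up to a scale" caveat above. In practice one candidate is selected
    by checking that triangulated points land in front of both cameras.
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

One of the four candidates reconstructs the original E (up to sign); OpenCV's decomposeEssentialMat does essentially the same thing.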

But don't take my word for it, that's why I posted the different links.

Hi, thanks for the links, I'll look into them. I'm not worried about getting the scale right, as this is for entertainment, not science. The question of how to get P1 is probably trivial, as most texts just say "from the fundamental matrix you can extract P and P1" and then continue as if P and P1 were now available... I'm fine with only knowing P1's translation and rotation relative to P (where I assume it will be at some arbitrarily chosen scale?).

I think that what you want is the canonical decomposition of the fundamental matrix into a pair of projection matrices:

To compute R and t, you can look at the link "Determining R and t from E", or go directly to the book "Multiple View Geometry in Computer Vision" by Hartley and Zisserman.
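For the uncalibrated case the book gives a closed form: with e' the epipole in the second image (the left null vector of F), you can choose P = [I | 0] and P1 = [[e']x F | e']. A minimal numpy sketch (the rank-2 test matrix below is made up):

```python
import numpy as np

def skew(v):
    """3x3 cross-product (skew-symmetric) matrix of v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def projections_from_fundamental(F):
    """Hartley & Zisserman's canonical pair P = [I | 0], P1 = [[e']x F | e']."""
    # e' is the epipole in the second image: F^T e' = 0, i.e. the left
    # null vector of F (column of U for the zero singular value).
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P, P1
```

Note that this pair is only determined up to a projective ambiguity; without calibration you get a projective reconstruction, not a metric one.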

My bad, I thought that what you wanted was the extrinsic matrix or the camera pose, but what you actually want is the camera projection matrix that maps a 3D point in the camera frame to a 2D point in the image frame.
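A tiny sketch of that mapping, with made-up intrinsics K:

```python
import numpy as np

# Made-up intrinsics for a 640x480 image: focal length 800 px,
# principal point at the image centre.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Camera at the origin of its own frame: P = K [I | 0].
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# A homogeneous 3D point 2 m in front of the camera...
X = np.array([0.1, -0.2, 2.0, 1.0])
x = P @ X
u, v = x[:2] / x[2]  # ...dividing by the third coordinate gives pixels
```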