# Relation between fundamental matrices, projection matrices & reprojections from multiple views

I have a (most probably) very basic question in relation to Fundamental Matrices, Projection matrices & reprojections.

I'm trying to determine the 3D coordinates of some points from a series of images (more precisely, from 2D coordinates already identified in consecutive images). When I run the same algorithm on consecutive image pairs tracking the same points, I get very different 3D coordinates. Most probably some very basic step is missing from my approach; I'd appreciate any pointers to highlight my mistake :)

I have the following:

K - the intrinsic matrix of the camera. All images were taken with the same camera.

N images of the same points, taken from slightly different positions/orientations (a sequence of images made by moving the camera around the points).

ca. 40 points are being tracked and are identified across all images (though not every image sees every point). Thus for each image I have a set of (i, xi, yi) triplets, where i is the identifier of a point (0..39) and xi, yi are the 2D coordinates of the point in that particular image.
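For concreteness, the per-image measurements above can be held in a small per-image map; the container layout and names here are just one illustrative choice, not the poster's actual code:

```python
# Hypothetical container: for each image index, a dict mapping
# point id i -> (xi, yi) pixel coordinates in that image.
tracks = {
    0: {0: (512.3, 410.8), 1: (133.0, 95.2)},   # image 0 sees points 0 and 1
    1: {0: (509.1, 412.4), 2: (640.5, 300.7)},  # image 1 sees points 0 and 2
}

def common_points(tracks, n, m):
    """Return the point ids visible in both image n and image m
    (used below to check for at least 8 correspondences)."""
    return sorted(tracks[n].keys() & tracks[m].keys())
```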

Starting with the above, for each image n = 0..N-1, I do the following:

1 - take images img(n) and img(n+1). Initially each image has the projection matrix P(n) = [I|0]

2 - check if they have at least 8 common points

3 - calculate the fundamental & projection matrices for img(n+1)

3.1 calculate the fundamental matrix F using OpenCV's findFundamentalMat() function

3.2 calculate the essential matrix as E = K.t() * F * K

3.3 decompose the essential matrix using SVD into R and t

3.4 build the projection matrix P(n+1) = [R|t] for img(n+1)
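Steps 3.2-3.4 can be sketched in plain numpy as below (OpenCV's `recoverPose` does the equivalent, including picking the right candidate; the function names here are illustrative). Note that the SVD yields four (R, t) candidates, and t comes out with unit norm, i.e. only up to scale:

```python
import numpy as np

def essential_from_fundamental(F, K):
    # Step 3.2: E = K^T F K
    return K.T @ F @ K

def decompose_essential(E):
    """Step 3.3: SVD-based decomposition of E into the four (R, t) candidates.
    The correct candidate is the one for which a triangulated point has
    positive depth in both cameras (the cheirality check)."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # unit-norm translation: direction only, scale is lost
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```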

4 - calculate the 3D coordinates

4.1 triangulate all the common points using linear LS triangulation based on P(n) and P(n+1)
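A minimal numpy sketch of the linear LS (DLT) triangulation in step 4.1, assuming the projection matrices passed in already include K (i.e. P = K[R|t]), since the 2D inputs are pixel coordinates; OpenCV's `triangulatePoints` performs the same computation:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear LS (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices with K multiplied in, i.e. K[R|t].
    x1, x2: (x, y) pixel coordinates of the point in each image."""
    # each view contributes two rows: x * P[2] - P[0] and y * P[2] - P[1]
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the homogeneous solution is the right singular vector of the
    # smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```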

4.2 reproject the points for P(n+1) through K * P(n+1) * X(i) (where X(i) is the triangulated 3D point for point i)

4.3 check the reprojection error
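Steps 4.2-4.3 amount to projecting each triangulated X(i) back into the image and measuring the pixel distance to the tracked point. A small sketch (P here is the 3x4 [R|t] matrix, with K applied separately as in the post):

```python
import numpy as np

def reprojection_error(K, P, X, x_obs):
    """Project 3D point X through x = K * P * X (step 4.2) and return
    the pixel distance to the observed point x_obs (step 4.3)."""
    Xh = np.append(X, 1.0)   # homogeneous 3D point
    xh = K @ P @ Xh          # P is [R|t], 3x4
    x_proj = xh[:2] / xh[2]  # perspective division
    return np.linalg.norm(x_proj - x_obs)
```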

For each image pair with at least 8 corresponding points, I'm getting fairly good results in terms of low reprojection error. But the 3D points calculated for each pair are wildly different; for example, these are some of the triangulated 3D results for the same tracked point across various image pairs:

```
[-535.266, 251.398, -1142.35]
[0.862544, -0.39743, 1.84496]
[5.55258, -2.59372, 12.7258]
[20.9094, -7.89917, 56.7389]
[-0.242497, 0.113039, -0.515921]
[18.0375, -8.38645, 38.6765]
```

My expectation was that they would be close to each other, with only measurement noise and some algorithmic inaccuracy introducing small errors, especially as the reprojection errors are quite low (values like 0.03, 0.5 or 3).

My guess is that the subsequent projection matrices are not 'aligned' in some way, or that their scale, etc. is ...
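One way to see the scale part of this suspicion: the t recovered from the essential matrix has arbitrary (here unit) norm, and since every pair restarts from P(n) = [I|0], each pair lives in its own frame and scale. The following numpy sketch (with an inlined DLT triangulation and illustrative camera values) shows that keeping the same image measurements but scaling t by s rescales the whole reconstruction by s, which is consistent with low reprojection error yet wildly different 3D coordinates:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # linear LS (DLT) triangulation of one point from two views
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# illustrative camera: intrinsics K, identity rotation, baseline along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
X_true = np.array([0.5, -0.2, 4.0])

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), t.reshape(3, 1)])

# fixed image measurements of X_true in both views
h1 = P1 @ np.append(X_true, 1.0); x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); x2 = h2[:2] / h2[2]

# same measurements, but t scaled by 10: the reconstruction scales by 10 too
P2_scaled = K @ np.hstack([np.eye(3), (10.0 * t).reshape(3, 1)])
X_a = triangulate(P1, P2, x1, x2)          # ~ X_true
X_b = triangulate(P1, P2_scaled, x1, x2)   # ~ 10 * X_true
```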

I am having a similar issue, but with different code. My approach is similar:

From what I understand, R should be very close to the real one, and t is good only up to scale. So at this point an optimization algorithm like bundle adjustment has to be used to minimize the reprojection error: sum( d(x, PX)^2 + d(x', P'X)^2 ). This is where I am stuck: I do not understand how to apply this efficiently, since I do not see any improvement. Are there any examples of how to do this?
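The cost above can be sketched in numpy as follows (the container layout and names are illustrative, not taken from either poster's code). Bundle adjustment minimizes this quantity jointly over all camera poses and 3D points; in practice the per-observation residuals are fed to a nonlinear least-squares solver such as `scipy.optimize.least_squares`, or a dedicated library like Ceres or g2o, rather than hand-rolling the optimization:

```python
import numpy as np

def reproj_cost(K, poses, points3d, observations):
    """Total squared reprojection error sum_i d(x_i, K P X_i)^2.
    poses: list of 3x4 [R|t] matrices, one per camera.
    points3d: dict mapping point id -> 3-vector.
    observations: list of (cam_idx, point_id, (x, y)) pixel measurements."""
    cost = 0.0
    for cam, pid, (x, y) in observations:
        Xh = np.append(points3d[pid], 1.0)
        ph = K @ poses[cam] @ Xh
        u, v = ph[:2] / ph[2]
        cost += (u - x) ** 2 + (v - y) ** 2
    return cost
```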