Triangulation - Combining Data from Multiple Cameras

I have a setup with 8 synchronized cameras and I am trying to perform 3D reconstruction of keypoints on a person's body. I am wondering if there is a way for me to improve my triangulation results from OpenCV by performing some sensor fusion or another technique.

Currently I just pair the cameras into stereo pairs, compute the 3D points with triangulatePoints (using the intrinsics/extrinsics from chessboard calibration), and am then left with multiple estimates of the same keypoints (one per pair) that I can, for example, average.
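For reference, here is a minimal sketch of what I am doing now (assuming the Python bindings; `proj_mats` and `pts2d` are placeholders for my calibration data and detected 2D keypoints):

```python
import itertools
import cv2
import numpy as np

def triangulate_all_pairs(proj_mats, pts2d):
    """proj_mats[i]: 3x4 projection matrix K_i @ [R_i | t_i] of camera i.
    pts2d[i]: 2xN array of the same N keypoints detected in camera i."""
    estimates = []
    for i, j in itertools.combinations(range(len(proj_mats)), 2):
        # triangulatePoints returns 4xN homogeneous coordinates
        X_h = cv2.triangulatePoints(proj_mats[i], proj_mats[j], pts2d[i], pts2d[j])
        X = (X_h[:3] / X_h[3]).T  # N x 3 Euclidean points from this stereo pair
        estimates.append(X)
    # naive fusion: average the per-pair estimates of each keypoint
    return np.mean(np.stack(estimates), axis=0)
```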

If anyone has any ideas or knows of any papers that may help with this, it would be really appreciated!