Triangulation - Combining Data from Multiple Cameras

asked 2020-09-11 15:06:37 -0600

ConnorM

I have a setup with 8 synchronized cameras and I am trying to perform 3D reconstruction of keypoints on a person's body. I am wondering if there is a way for me to improve my triangulation results from OpenCV by performing some sensor fusion or another technique.

Currently I am just grouping the cameras into stereo pairs, calculating the 3D points with triangulatePoints (using the intrinsics/extrinsics from chessboard calibration), and then I'm left with multiple estimates of each keypoint (one per pair) that I can, for example, average.
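In case it clarifies the setup, this is roughly what I'm doing (a sketch only; the helper name triangulate_pairwise and the data layout are just for illustration, and I'm assuming undistorted points and projection matrices built as K [R|t] from the calibration):

    import itertools

    import cv2
    import numpy as np

    def triangulate_pairwise(proj_mats, points_2d):
        # proj_mats: list of 3x4 projection matrices P = K @ [R | t]
        #            built from the chessboard calibration, one per camera.
        # points_2d: undistorted pixel coordinates of the same keypoint,
        #            one (u, v) per camera.
        estimates = []
        for i, j in itertools.combinations(range(len(proj_mats)), 2):
            # triangulatePoints expects 2xN arrays of image points
            pt_i = np.asarray(points_2d[i], dtype=np.float64).reshape(2, 1)
            pt_j = np.asarray(points_2d[j], dtype=np.float64).reshape(2, 1)
            X = cv2.triangulatePoints(proj_mats[i], proj_mats[j], pt_i, pt_j)
            estimates.append((X[:3] / X[3]).ravel())  # homogeneous -> Euclidean
        return np.mean(estimates, axis=0)  # naive fusion: average over all pairs

One weakness I can already see is that the plain average weights every pair equally, even though pairs with a wider baseline or a less oblique view of the point should be more reliable.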

If anyone has any ideas or knows of any papers that may help with this, it would be really appreciated!


Comments


From what I've seen of other SfM work, people generally use feature detectors/matchers like ORB/SIFT etc. to get the scene points, and then use those points to perform bundle adjustment. Are there any examples of people using chessboard calibration for the intrinsic/extrinsic parameters and then running SfM with the already obtained camera matrices, distortion coefficients, and other parameters?
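To make it concrete, what I have in mind is something like the sketch below, where the calibrated parameters are held fixed and only the 3D points are refined by minimizing reprojection error. The function name reprojection_residuals and the obs format are mine, and I'm assuming undistorted observations; a full bundle adjustment would also stack the extrinsics into the parameter vector and exploit the sparsity of the Jacobian.

    import cv2
    import numpy as np
    from scipy.optimize import least_squares

    def reprojection_residuals(x, K_list, rvecs, tvecs, obs):
        # Point-only bundle adjustment: the 3D points are the unknowns,
        # while the chessboard intrinsics/extrinsics stay fixed.
        # obs is a list of (camera_index, point_index, observed_uv) tuples.
        pts = x.reshape(-1, 3)
        residuals = []
        for cam, pid, uv in obs:
            proj, _ = cv2.projectPoints(pts[pid].reshape(1, 1, 3),
                                        rvecs[cam], tvecs[cam],
                                        K_list[cam], None)  # None: obs already undistorted
            residuals.append(proj.ravel() - np.asarray(uv, dtype=np.float64))
        return np.concatenate(residuals)

    # pts0: Nx3 initial guesses from the pairwise triangulation
    # fit = least_squares(reprojection_residuals, pts0.ravel(),
    #                     args=(K_list, rvecs, tvecs, obs))
    # refined = fit.x.reshape(-1, 3)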

ConnorM ( 2020-09-13 15:50:59 -0600 )