Structure from motion and bundle adjustment

asked 2018-05-25 07:43:55 -0600

Czak

I'm creating my own SfM pipeline using a carefully calibrated single camera whose intrinsics I know accurately. After calibration, I triangulate a room full of ArUco markers and get a preliminary triangulation of all marker points across multiple frames (marker points are detected in undistorted images, so the effect of lens distortion is already compensated).
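To make the setup concrete, here is roughly what my detect-then-triangulate step looks like. This is a simplified sketch, assuming the OpenCV >= 4.7 ArUco API; the K, dist and dictionary values below are placeholders, not my real calibration:

```python
import cv2
import numpy as np

# Placeholder calibration results (replace with your own intrinsics).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def detect_undistorted(img):
    """Undistort first, then detect, so the corner coordinates live in an
    ideal pinhole image and lens distortion is already compensated."""
    und = cv2.undistort(img, K, dist)
    gray = cv2.cvtColor(und, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    return corners, ids

def triangulate_pair(pts1, pts2, R, t):
    """Triangulate matched corners (Nx2 float arrays) from two frames.
    R, t: relative pose of the second camera w.r.t. the first, taken from
    the chained pose estimates. Since the images are undistorted, the
    projection matrices use K only, with no distortion term."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                          # Nx3 Euclidean
```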

Finally I have the data to perform bundle adjustment, but it seems to require not only the camera intrinsics but also estimates of the k1 and k2 distortion coefficients and the camera positions w.r.t. the global frame. As I understand the principles, neither the intrinsics nor the distortion should be involved in this process, because they are already compensated for, and the measurement images are captured at lower quality than the calibration images (no control over lighting, motion in frames, etc.). For this task the camera positions found by chaining perspective transformations between frames can also be omitted, because estimating such an element yields a much bigger XYZ error in the global frame than the XYZ of a single marker in a frame. The problem then reduces to motion synchronisation in SE(3), which can be solved much more efficiently than full BA. A sketch of what I mean follows below.
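In other words, what I have in mind is a structure-only refinement where K, the distortion and the chained poses all stay fixed and only the 3D marker points are adjusted. A rough sketch with scipy.optimize.least_squares (the data layout here is hypothetical, just to show which parameters are held constant):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(x, K, poses, observations, n_points):
    """Structure-only residuals: K, distortion and the camera poses are held
    fixed; only the 3D marker points are refined.
    poses: list of (rvec, tvec) per frame, from the chained pose estimates.
    observations: list of (frame_idx, point_idx, uv) with uv the measured
    2D corner in the undistorted image."""
    pts = x.reshape(n_points, 3)
    res = []
    for frame_idx, point_idx, uv in observations:
        rvec, tvec = poses[frame_idx]
        # Zero distortion, because the measurements come from undistorted images.
        proj, _ = cv2.projectPoints(pts[point_idx].reshape(1, 3),
                                    rvec, tvec, K, np.zeros(5))
        res.append(proj.ravel() - uv)
    return np.concatenate(res)

def refine_points(points_3d, K, poses, observations):
    """Refine the preliminary triangulations only; intrinsics, distortion
    and poses are treated as known constants."""
    x0 = points_3d.ravel()
    sol = least_squares(reprojection_residuals, x0, method="trf",
                        args=(K, poses, observations, len(points_3d)))
    return sol.x.reshape(-1, 3)
```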

Is there any reason to involve the intrinsics, distortion, and camera pose in this problem when we have a calibrated camera?
