
Utilise known extrinsic parameters when stitching panoramas

asked 2015-01-08 05:46:54 -0500 by emiswelt

Dear OpenCV Community,

I am currently designing a mobile 360° panorama stitching app using OpenCV.

Since a 360° panorama needs a lot of source images (I use 62 at the moment), the adjustment (especially finding the extrinsic parameters) of the images is quite slow. Luckily, I can utilize orientation data derived from the smartphone's sensors to calculate the extrinsic camera parameters for each image. This way, I do not need to detect and match features at all.
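For illustration, computing an extrinsic rotation from the phone's orientation sensors could look like the minimal numpy sketch below. The Z-Y-X (yaw-pitch-roll) composition is an assumption; the actual sensor-to-camera axis convention depends on the platform and device and must be checked against its documentation.

```python
import numpy as np

def rotation_from_orientation(yaw, pitch, roll):
    """Compose a camera rotation matrix from device orientation angles
    (radians), using an assumed Z-Y-X (yaw-pitch-roll) convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# One rotation per captured image; a valid result is orthonormal
# with determinant +1.
R = rotation_from_orientation(np.radians(30), np.radians(5), 0.0)
```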

However, those parameters are subject to slight drift. This means that a few images are slightly displaced in the result:

Finished panorama

Is it possible to optimize those parameters I already know, by, for example, matching image features? I'm thinking here about only matching adjacent images to gain some performance, but I have no idea how this fits into the stitching pipeline.
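For illustration, restricting matching to neighbours could be expressed as a pairs mask like the numpy sketch below (OpenCV's pairwise feature matchers accept a mask of image pairs to match, so a matrix of this shape can be passed in; the helper name and the window parameter here are made up for the example).

```python
import numpy as np

def adjacent_pair_mask(n_images, window=1):
    """Boolean matrix where mask[i, j] is True if images i and j should
    be matched. Restricting matching to neighbours (including the
    wrap-around pair that closes a 360° panorama) avoids the O(n^2)
    all-pairs matching."""
    mask = np.zeros((n_images, n_images), dtype=bool)
    for i in range(n_images):
        for d in range(1, window + 1):
            j = (i + d) % n_images  # wrap around to close the loop
            mask[i, j] = mask[j, i] = True
    return mask

mask = adjacent_pair_mask(62)  # 62 source images, neighbours only
```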

TL;DR: I already know the extrinsic camera parameters, but I would like to optimize them based on image features for a better result.


1 answer


answered 2015-12-02 09:16:26 -0500 by emiswelt

With regard to the OpenCV stitching pipeline, it's fairly easy:

Deactivate the (homography-based) rotation estimator, which would otherwise overwrite the existing extrinsic parameters.

Then, use bundle adjustment (ray-based or reprojection-based) to refine the extrinsic and intrinsic parameters.
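To make the second step concrete: a ray-based adjuster minimizes, over all feature matches, the disagreement between the back-projected rays of matched points. The numpy sketch below illustrates that residual under a pure-rotation model with shared intrinsics; it is only an illustration of the idea, not the exact cost function implemented by cv::detail::BundleAdjusterRay, and the synthetic cameras and angles are made up for the example.

```python
import numpy as np

def rot_y(a):
    """Rotation about the y axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def ray_residual(R_i, R_j, x_i, x_j, K):
    """Angle between the back-projected rays of a matched feature pair
    under a pure-rotation camera model with shared intrinsics K. A
    ray-based bundle adjuster minimizes residuals of this kind while
    refining the rotations (and intrinsics)."""
    Kinv = np.linalg.inv(K)
    r_i = R_i @ (Kinv @ np.array([x_i[0], x_i[1], 1.0]))
    r_j = R_j @ (Kinv @ np.array([x_j[0], x_j[1], 1.0]))
    r_i /= np.linalg.norm(r_i)
    r_j /= np.linalg.norm(r_j)
    return np.arccos(np.clip(r_i @ r_j, -1.0, 1.0))

# Synthetic example: one scene ray seen by two cameras 10 degrees apart.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
d = np.array([0.1, 0.05, 1.0])
d /= np.linalg.norm(d)
R_i, R_j = np.eye(3), rot_y(np.radians(10))
x_i = (K @ (R_i.T @ d))[:2] / (K @ (R_i.T @ d))[2]
x_j = (K @ (R_j.T @ d))[:2] / (K @ (R_j.T @ d))[2]

# With the correct rotations the rays agree; with a drifted rotation
# (as from the phone sensors) the residual is nonzero, which is what
# the adjuster corrects.
res_true = ray_residual(R_i, R_j, x_i, x_j, K)
res_drift = ray_residual(R_i, rot_y(np.radians(10.5)), x_i, x_j, K)
```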

The documentation is somewhat difficult to understand without background knowledge. I can suggest "Computer Vision: Algorithms and Applications" by Richard Szeliski as background reading here.



Hi emiswelt, did you succeed? I am trying to do a similar thing; the difference is that I only need to cover 360º horizontally. But if I use the ray bundle adjuster, the rotation for each camera is greatly increased, and if the total reaches 360º, the end result has overlapping areas where it should not. I am wondering if this is because of the translation I introduce while capturing, and what I can do to solve it. My use case also uses a smartphone and involves a user rotating the camera, but the camera translates a bit ofc, since the user is rotating with the camera in front of his face :P

skm ( 2016-10-11 05:24:02 -0500 )
