Stitching images at different time intervals - How to stabilize subsequent frames?
Hi there,
I'm still learning about image stitching, and have been using the OpenCV library and its sample code to stitch together several images.
We have a camera that captures images at 9 different positions, and we stitch these into a single image forming an approximate 3x3 grid.
Every few hours, these images are captured using the same camera at 9 different camera presets (pan/tilt/zoom settings) and stitched together.
The code I am using is the stitching_detailed.cpp sample with no modifications: https://github.com/Itseez/opencv/blob...
I am using the ORB feature finder.
We were hoping to play the stitched outputs in a slideshow fashion. The issue we are encountering is that the stitched images are "unstable": the dimensions of the stitched output differ between capture times, so when we play a slideshow of these outputs, the stitched images shift around in different directions.
The issue is similar to this one: http://stackoverflow.com/questions/16...
Do you have any recommendations on how to make the stitched outputs more stable (features placed in the same position, pixel-wise), in terms of the stitching_detailed.cpp code above? Could we reuse a calculation from one of the stitching pipeline steps, such as the homography matrices, when stitching subsequent frames (pretending to know what I'm talking about ;) )?
Is there some way to take advantage of the fact that the subsequent frames (images captured at different time intervals) are taken by the same camera at the same camera presets?
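For example, this is roughly what I had in mind using the high-level cv::Stitcher class instead of stitching_detailed.cpp (an untested sketch; the file names and the loadCapture helper are placeholders I made up): run estimateTransform once on the first set of 9 images, then call composePanorama with each later set so the saved transforms are reused and the output geometry stays fixed from frame to frame.

```cpp
// Untested sketch. Header names are for OpenCV 3.x; on 2.4 use
// <opencv2/stitching/stitcher.hpp> instead of <opencv2/stitching.hpp>.
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <string>
#include <vector>

// Hypothetical helper: load the 9 preset images of one capture pass,
// assuming filenames like "capture0_img0.jpg" .. "capture0_img8.jpg".
static std::vector<cv::Mat> loadCapture(int captureIdx)
{
    std::vector<cv::Mat> imgs;
    for (int i = 0; i < 9; ++i)
    {
        std::string name = "capture" + std::to_string(captureIdx) +
                           "_img" + std::to_string(i) + ".jpg";
        cv::Mat img = cv::imread(name);
        if (!img.empty())
            imgs.push_back(img);
    }
    return imgs;
}

int main()
{
    cv::Stitcher stitcher = cv::Stitcher::createDefault(/*try_use_gpu=*/false);
    stitcher.setFeaturesFinder(cv::makePtr<cv::detail::OrbFeaturesFinder>());

    // Registration stage (feature finding, matching, camera estimation):
    // run it ONCE, on the first capture pass only.
    std::vector<cv::Mat> first = loadCapture(0);
    if (stitcher.estimateTransform(first) != cv::Stitcher::OK)
        return -1;

    cv::Mat pano;
    stitcher.composePanorama(pano);   // first stitched output
    cv::imwrite("pano0.jpg", pano);

    // Later capture passes: same camera, same presets, so reuse the
    // transforms estimated above and run only the compositing stage.
    std::vector<cv::Mat> later = loadCapture(1);
    if (stitcher.composePanorama(later, pano) == cv::Stitcher::OK)
        cv::imwrite("pano1.jpg", pano);

    return 0;
}
```

If that split is valid, I imagine the equivalent in stitching_detailed.cpp would be saving the estimated cv::detail::CameraParams from the first run and skipping the registration steps on later runs. Would that work?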
Please let me know if this needs further clarification and thanks for your help.
Best Regards