How to use the OpenCV Stitcher with rough pre-alignment data?
Hi there.
I am working on an image stitching project. Since I know where each image was taken, a rough pre-alignment is possible. However, I cannot see how to use this alignment data to speed up the stitching process: the Stitcher.stitch() function only accepts region-of-interest data.
https://docs.opencv.org/trunk/d2/d8d/...
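For reference, this is roughly all I can do with the high-level interface at the moment (a minimal C++ sketch, assuming OpenCV 4.x; SCANS mode because the inputs are flat scans):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main()
{
    std::vector<cv::Mat> tiles;   // scanned tiles, loaded elsewhere
    // ... load tiles ...

    // SCANS mode uses an affine model, which should suit flat scanned tiles.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(tiles, pano);
    if (status != cv::Stitcher::OK)
        return -1;                // e.g. ERR_NEED_MORE_IMGS

    cv::imwrite("pano.png", pano);
    return 0;
}
```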
Besides, I think it would help if I could get the alignment data generated by the stitching pipeline, rather than just feeding in images in an arbitrary order and getting a stitched image as output, with almost no control over its quality.
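For example, it would already help if I could read back what the registration stage estimated. Something along these lines looks possible via estimateTransform() and cameras(), but I am not sure it exposes everything the pipeline computed (again just a sketch, same assumptions as above):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<cv::Mat> tiles;   // scanned tiles, loaded elsewhere
    // ... load tiles ...

    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    // Registration only: features, matching, estimation, bundle adjustment.
    if (stitcher->estimateTransform(tiles) != cv::Stitcher::OK)
        return -1;

    // Per-image parameters estimated during registration (rotation R,
    // translation t, focal, principal point), valid at workScale().
    for (const cv::detail::CameraParams &cam : stitcher->cameras())
        std::cout << "R:\n" << cam.R << "\nfocal: " << cam.focal << "\n";
    std::cout << "work scale: " << stitcher->workScale() << std::endl;

    // Compositing can still be run afterwards with the same object.
    cv::Mat pano;
    if (stitcher->composePanorama(pano) != cv::Stitcher::OK)
        return -1;
    cv::imwrite("pano.png", pano);
    return 0;
}
```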
Does anyone have suggestions on these?
so, what additional information DO you have, and want to exploit? (it's a bit unclear in the Q.)
The information I intend to use is the position of each image. I am stitching scanned images, and since I know the scanning order, I have a 2D grid of tile images. The positions are not 100% accurate, but they give a rough pre-alignment.
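Concretely, my idea is to turn that grid into a match mask, so the pairwise matcher only compares tiles that are grid neighbours instead of all N×N pairs. A rough sketch with the cv::detail pipeline (assuming OpenCV 4.x; the grid layout and ORB settings are just placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>
#include <vector>

// Sketch: exploit the known scan order (a rows x cols grid of tiles) by
// matching only tiles that are horizontal/vertical neighbours in the grid.
std::vector<cv::detail::MatchesInfo>
matchGridNeighbours(const std::vector<cv::Mat> &tiles, int rows, int cols)
{
    const int n = static_cast<int>(tiles.size());   // expected: rows * cols

    // 1) Features on every tile (ORB here; any cv::Feature2D should do).
    std::vector<cv::detail::ImageFeatures> features(n);
    cv::detail::computeImageFeatures(cv::ORB::create(1000), tiles, features);

    // 2) Build an n x n mask: non-zero at (i, j) means "try to match this pair".
    cv::Mat mask = cv::Mat::zeros(n, n, CV_8U);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
        {
            int i = r * cols + c;                               // index in scan order
            if (c + 1 < cols) mask.at<uchar>(i, i + 1) = 1;     // right neighbour
            if (r + 1 < rows) mask.at<uchar>(i, i + cols) = 1;  // bottom neighbour
        }

    // 3) Pairwise matching restricted by the mask (affine matcher suits scans).
    std::vector<cv::detail::MatchesInfo> pairwise_matches;
    cv::detail::AffineBestOf2NearestMatcher matcher(false /*full_affine*/);
    matcher(features, pairwise_matches, mask.getUMat(cv::ACCESS_READ));
    matcher.collectGarbage();

    // pairwise_matches could then go through the usual estimator / bundle
    // adjuster steps, as in the stitching_detailed.cpp sample.
    return pairwise_matches;
}
```

That should cut the matching work from O(N²) pairs to roughly 2N pairs, which seems to be where most of the time goes for large grids, but I don't know how to feed the result back into the Stitcher class itself.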