Correlation of high resolution images
Hello,
I'm currently working on a project where I have to correlate several similar images taken with a DSLR camera. Those images may vary in factors such as focal distance, orientation, exposure level and noise.
My approach consists of using SIFT or SURF for keypoint detection, followed by the FLANN library for keypoint matching. However, the images may have resolutions as high as 18 megapixels, which makes keypoint detection too slow. Both SIFT and SURF take more than 4 seconds to detect keypoints at the higher resolutions. I've tried to relax the parameters of both algorithms, but the processing time is still too high, which may ruin the user experience.
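For reference, this is roughly what I'm doing now (a minimal sketch using OpenCV's Python bindings; the file names, FLANN parameters and ratio threshold are just placeholders, and depending on the OpenCV version SIFT may live under cv2.xfeatures2d instead):

```python
import cv2

# Load the two images in grayscale (paths are placeholders)
img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT detector; in some builds this is cv2.xfeatures2d.SIFT_create()
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN matcher with a KD-tree index (the usual choice for float descriptors)
index_params = dict(algorithm=1, trees=5)   # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test to keep only distinctive matches
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)
print(len(good), "good matches")
```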
I've read somewhere that a possible approach would be to divide each image into a certain number of sub-images and then try to correlate the respective sub-regions between different images. However, as the focal distance and orientation may vary, I find this approach ineffective in the context of my project.
Does anyone have a suggestion for making the correlation of high-resolution images more efficient?
In the stitching example, the image size is reduced and descriptors are detected on the low-resolution image. Maybe you can also use the metadata in the image to help your algorithm. About phase correlation, you can have a look at http://docs.opencv.org/modules/imgpro...
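Something like this rough sketch, for example (the scale factor, detector choice and file names are only illustrative):

```python
import cv2
import numpy as np

img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect on a downscaled copy, then map keypoint coordinates back
scale = 0.25  # illustrative factor
small = cv2.resize(img1, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
sift = cv2.SIFT_create()
kp, des = sift.detectAndCompute(small, None)
pts_full_res = [(k.pt[0] / scale, k.pt[1] / scale) for k in kp]

# Phase correlation estimates a global translation between two
# single-channel float images of the same size
a = np.float32(img1)
b = np.float32(img2)
(shift_x, shift_y), response = cv2.phaseCorrelate(a, b)
print("estimated shift:", shift_x, shift_y)
```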
Thanks for the answer @LBerger. I'm trying to avoid downsizing the images, as the information lost in the process may be needed in the following phases of the project for the sake of precision. I'll take a look at phase correlation to see if it is useful. Thank you!
You might want to use binary descriptors such as ORB or BRISK http://docs.opencv.org/modules/featur... They are considerably faster than SURF or SIFT, and there are no patents on them! Usually the detection quality is similar or pretty close to SURF, and they are optimized to run fast on modern processors.
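A minimal sketch with ORB (the nfeatures value and file names are only examples; in older OpenCV versions the constructor is cv2.ORB() rather than cv2.ORB_create()):

```python
import cv2

img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

# ORB: binary descriptors, patent-free and fast
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are matched with Hamming distance, not L2
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches")
```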