# Determining Scale of query image against larger target image

I am trying to match and align a query image to a larger image. The query image can be a subset of the larger image (basically a region of interest) and might be at a smaller scale. My goal is to determine the scale and alignment needed to match the smaller image to the larger one. Is there a way to do this in OpenCV? I was looking at homography and the stitching algorithms, but ultimately I want to determine how much I would need to scale and translate my query image to match the parent image. It doesn't need to be pixel perfect, but I would like to get within 1-3% of my target image.

I was looking at some Matlab code that demonstrates how to determine the scale and rotation of a copy of an image; see http://www.mathworks.com/help/images/...

Again, is it possible to compute such a geometric transform in OpenCV?



From the Matlab tutorial, you should be able to reproduce the different steps in OpenCV: detect keypoints and compute descriptors in both images (SIFT, SURF, ORB, ...), match the descriptors, and robustly estimate the geometric transform from the matched points (e.g. cv::estimateRigidTransform for a similarity/affine model, or cv::findHomography).

Also see a recap of the different Geometric Transformations.
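As a concrete illustration of the transform-estimation step, here is a minimal pure-Python sketch (not OpenCV API; `affine_from_3_points` and `det3` are made-up helper names) that solves the 6 affine parameters exactly from 3 point correspondences via Cramer's rule:

```python
# Sketch: a 2-D affine transform maps [x, y] to [a*x + b*y + tx, c*x + d*y + ty].
# Three non-collinear correspondences give 6 equations for the 6 unknowns,
# split into two independent 3x3 linear systems solved by Cramer's rule.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def affine_from_3_points(src, dst):
    """src, dst: lists of three (x, y) pairs. Returns (a, b, tx, c, d, ty)."""
    A = [[x, y, 1.0] for (x, y) in src]
    D = det3(A)
    if abs(D) < 1e-12:
        raise ValueError("source points are collinear")

    def solve(rhs):
        # Cramer's rule: replace column j of A with the right-hand side.
        out = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            out.append(det3(M) / D)
        return out

    a, b, tx = solve([p[0] for p in dst])
    c, d, ty = solve([p[1] for p in dst])
    return a, b, tx, c, d, ty

# A pure scale-by-2 plus shift-(5, -1) example:
print(affine_from_3_points([(0, 0), (1, 0), (0, 1)],
                           [(5, -1), (7, -1), (5, 1)]))
# → (2.0, 0.0, 5.0, 0.0, 2.0, -1.0)
```

In practice you would let OpenCV do this fit over many matches at once, but the minimal 3-point solve is exactly what RANSAC computes on each iteration.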


So let me summarize: you read the two images into cv::Mats and generate keypoints and descriptors (SIFT, SURF) for both. But once I have my keypoints, how would I compare them? (I also found this link, which might address the matching: http://stackoverflow.com/questions/13...)

(2016-03-11 11:50:50 -0500)

Look at this tutorial for keypoint matching: AKAZE local features matching.

Estimating the affine transformation with a RANSAC method can also discard outliers (false matches). Basically, RANSAC is:

• randomly pick the minimal number of point matches needed to compute the transform (3 points for an affine transformation)
• check whether a majority of the matches agree with (fit) the model just estimated
• if not, pick another minimal set of points
• if so, the inliers are the matches whose model error is below a threshold; refine the model using only the inliers

It only works if a majority of the matches are inliers.
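The loop above can be sketched in plain Python for the simplest model the question actually needs, a uniform scale plus translation. All names here (`fit_scale_translation`, `ransac_scale_translation`) are hypothetical helpers, not OpenCV API:

```python
# Sketch: RANSAC over point matches for a uniform scale + translation model
# (x', y') = (s*x + tx, s*y + ty). Minimal sample size is 2 matches:
# the distance ratio gives s, then one point gives (tx, ty).
import math
import random

def fit_scale_translation(p_pair, q_pair):
    (p1, p2), (q1, q2) = p_pair, q_pair
    d_p = math.dist(p1, p2)
    if d_p == 0:
        return None                      # degenerate sample
    s = math.dist(q1, q2) / d_p
    return s, q1[0] - s * p1[0], q1[1] - s * p1[1]

def ransac_scale_translation(src, dst, thresh=3.0, iters=200, seed=0):
    """src[i] is matched to dst[i]; returns (model, inlier indices)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)
        model = fit_scale_translation((src[i], src[j]), (dst[i], dst[j]))
        if model is None:
            continue
        s, tx, ty = model
        inliers = [k for k, (p, q) in enumerate(zip(src, dst))
                   if math.dist((s * p[0] + tx, s * p[1] + ty), q) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# 25 synthetic matches at scale 2, shift (10, 4), with one gross outlier:
src = [(x, y) for x in range(5) for y in range(5)]
dst = [(2 * x + 10, 2 * y + 4) for x, y in src]
dst[0] = (999.0, 999.0)
model, inliers = ransac_scale_translation(src, dst)
# should recover s = 2, t = (10, 4), with 24 of the 25 matches as inliers
```

A real implementation would also refit the model on all inliers at the end; OpenCV's robust estimators do both steps for you.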

PS: for SIFT or SURF descriptors, the appropriate distance is the L2 distance.

(2016-03-11 12:25:09 -0500)
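The L2 nearest-neighbour matching mentioned above, combined with Lowe's ratio test to reject ambiguous matches, can be sketched in pure Python (`match_descriptors` is a hypothetical helper; real code would use cv::BFMatcher with knnMatch):

```python
# Sketch: brute-force descriptor matching with the L2 distance plus a ratio
# test. Descriptors here are plain lists of floats (SIFT-style vectors).
import math

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(query, train, ratio=0.75):
    """For each query descriptor, keep the nearest train descriptor only if
    it is clearly better than the second nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((l2(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query = [[0.0, 1.0], [5.0, 5.0]]
train = [[0.0, 1.1], [4.0, 0.0], [9.0, 9.0]]
print(match_descriptors(query, train))  # → [(0, 0)]
```

The second query descriptor is dropped because its two nearest candidates are almost equally far away, which is exactly the kind of false match the ratio test filters out before RANSAC.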

Also, a 2-D affine transformation has 6 degrees of freedom: translation (2) + rotation + scale + aspect ratio + shear.

(2016-03-11 12:31:15 -0500)
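For the similarity subset of that (scale + rotation + translation, no aspect-ratio or shear change), the numbers the question asks for can be read straight off the estimated 2x3 matrix. A small stdlib-only sketch (`decompose_similarity` is a hypothetical helper):

```python
# Sketch: a 2-D similarity transform has the 2x3 matrix
#   [[ a, -b, tx],     with a = s*cos(theta), b = s*sin(theta),
#    [ b,  a, ty]]     so s = hypot(a, b) and theta = atan2(b, a).
import math

def decompose_similarity(m):
    """m: 2x3 similarity matrix as nested lists. Returns (scale, angle_deg, t)."""
    a, _, tx = m[0]
    b, _, ty = m[1]
    scale = math.hypot(a, b)
    angle_deg = math.degrees(math.atan2(b, a))
    return scale, angle_deg, (tx, ty)

# 30-degree rotation at scale 2, shifted by (7, -3):
s, th = 2.0, math.radians(30)
m = [[s * math.cos(th), -s * math.sin(th), 7.0],
     [s * math.sin(th),  s * math.cos(th), -3.0]]
print(decompose_similarity(m))  # ≈ (2.0, 30.0, (7.0, -3.0))
```

For a full affine matrix (with shear and aspect ratio) the decomposition is not unique; restricting the estimation to a similarity model, as the original question implies, avoids that ambiguity.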
