
Determining Scale of query image against larger target image

asked 2016-03-11 00:36:29 -0600 by JasonHonduras, updated 2016-03-11 00:51:04 -0600

I am trying to match and align a query image to a larger image. The query image can be a subset of the larger image, basically a region of interest, and might be at a smaller scale. My goal is to determine the scale and alignment of the smaller image required to match the larger image. Is there a way to do this in OpenCV? I was looking at homography and the stitching algorithms, but I ultimately want to determine how much I would need to scale and translate my query image to match the parent image. It doesn't need to be pixel perfect, but I would like to get within 1-3% of my target image.

I was looking at some Matlab code that demonstrates how to determine scale and rotation of a copy of an image, see http://www.mathworks.com/help/images/...

Again, is it possible to compute such a geometric transform in OpenCV?


1 answer


answered 2016-03-11 07:59:32 -0600 by Eduardo

From the Matlab tutorial, you should be able to reproduce the different steps with the equivalent OpenCV functions: feature detection and description, descriptor matching, and transform estimation.

See also the recap on the different Geometric Transformations in the OpenCV documentation.


Comments

So let me summarize: you read your two images into two cv::Mats and generate keypoints and descriptors (SIFT, SURF) for each. But once I have my keypoints, how would I compare them? (I also found this link, which might address the matching: http://stackoverflow.com/questions/13...)

JasonHonduras (2016-03-11 11:50:50 -0600)

Look at this tutorial for keypoint matching: AKAZE local features matching.

Estimating the affine transformation matrix with a RANSAC scheme also discards outliers (false matches). Basically, RANSAC works like this:

  • randomly pick the minimal number of points needed to compute the transformation (3 point correspondences for an affine transformation)
  • check whether the majority of the matches agree with (fit) the estimated model
  • if not, pick another minimal set of points
  • if so, the inliers are the matches whose model error is below a threshold; refine the model using only the inliers

It works only if there is a majority of inliers.
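The steps above can be sketched in plain NumPy (a toy illustration, not OpenCV's implementation; the iteration count and `thresh` are assumed values you would tune):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine (2x3) mapping src -> dst, both (N, 2) arrays."""
    A = np.hstack([src, np.ones((len(src), 1))])      # homogeneous coords (N, 3)
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M ~= dst
    return M.T                                        # (2, 3)

def ransac_affine(src, dst, n_iters=500, thresh=3.0, rng=None):
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iters):
        # minimal sample: 3 correspondences determine an affine transform
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        # inliers are matches whose reprojection error is below the threshold
        pred = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the model using only the inliers of the best minimal sample
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```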

PS: for SIFT or SURF descriptors, the appropriate distance is the L2 (Euclidean) distance.

Eduardo (2016-03-11 12:25:09 -0600)

Also, an affine transformation has 6 degrees of freedom: translation (2) + rotation (1) + scale (1) + aspect ratio (1) + shear (1).

Eduardo (2016-03-11 12:31:15 -0600)
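As a sketch, assuming the common decomposition M[:, :2] = R(theta) @ [[sx, sh], [0, sy]] (one of several valid orderings), those six parameters can be read back from a 2x3 affine matrix:

```python
import numpy as np

def decompose_affine(M):
    """Split a 2x3 affine matrix into translation, rotation (degrees),
    scales and shear, assuming M[:, :2] = R(theta) @ [[sx, sh], [0, sy]]."""
    a, b, tx = M[0]
    c, d, ty = M[1]
    sx = np.hypot(a, c)                       # scale along x
    theta = np.degrees(np.arctan2(c, a))      # rotation angle
    sh = (a * b + c * d) / sx                 # shear term
    sy = (a * d - b * c) / sx                 # scale along y (det / sx)
    return (tx, ty), theta, (sx, sy), sh
```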


1 follower

Stats

Asked: 2016-03-11 00:36:29 -0600

Seen: 363 times

Last updated: Mar 11 '16