I am trying to match and align a query image to a larger image. The query image can be a subset of the larger image (basically a region of interest) and might be at a smaller scale. My goal is to determine the scale and alignment needed to map the smaller image onto the larger one. Is there a way to do this in OpenCV? I was looking at homography and the stitching algorithms, but ultimately I want to know how much I would need to scale and translate my query image to match the parent image. It doesn't need to be pixel perfect, but I would like to get within 1-3% of my target image. A rough sketch of what I have in mind is below.
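This is only a minimal sketch of the approach I'm imagining, not something I've verified; the file names and parameter values are placeholders. The idea is to match features between the two images and then estimate a similarity transform (scale + rotation + translation) with RANSAC:

```python
# Sketch: match a query image against a larger parent image with OpenCV.
# "query.png" / "parent.png" and the parameter values are placeholders.
import cv2
import numpy as np

query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
parent = cv2.imread("parent.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints (ORB is patent-free; SIFT also works if available).
orb = cv2.ORB_create(nfeatures=2000)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_p, des_p = orb.detectAndCompute(parent, None)

# Match descriptors and keep the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_p), key=lambda m: m.distance)
good = matches[:200]

src = np.float32([kp_q[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_p[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Estimate a similarity transform (scale + rotation + translation) with RANSAC.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                         ransacReprojThreshold=3.0)
print(M)  # 2x3 matrix mapping query coordinates into parent coordinates
```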
I was also looking at some MATLAB code that demonstrates how to determine the scale and rotation of a copy of an image using automated feature matching; see http://www.mathworks.com/help/images/examples/find-image-rotation-and-scale-using-automated-feature-matching.html. I assume the equivalent decomposition would look something like the snippet below.
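Again just a guess on my part: continuing from the sketch above, the 2x3 matrix `M` returned by `estimateAffinePartial2D` should decompose into the scale, rotation, and translation values I'm after, similar to what the MATLAB example recovers from its geometric transform:

```python
# Recover scale, rotation, and translation from the 2x3 similarity matrix M
# estimated in the sketch above: M = [[s*cos(t), -s*sin(t), tx],
#                                     [s*sin(t),  s*cos(t), ty]]
import math

scale = math.hypot(M[0, 0], M[1, 0])                  # how much to scale the query image
angle = math.degrees(math.atan2(M[1, 0], M[0, 0]))    # rotation in degrees
tx, ty = M[0, 2], M[1, 2]                             # translation in parent-image pixels

print(f"scale={scale:.3f}, rotation={angle:.2f} deg, "
      f"translation=({tx:.1f}, {ty:.1f})")
```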
Again, is something like this possible in OpenCV?