In Python, I am trying to exploit the affine invariance of matchShapes() (as opposed to matchTemplate()) to match my template to distorted candidate images.
The template and candidate images are quite complicated, but the documentation states that grayscale images can be passed to matchShapes():
http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#double%20matchShapes%28InputArray%20contour1,%20InputArray%20contour2,%20int%20method,%20double%20parameter%29
I would much prefer this to breaking the images out into a large ensemble of contours.
As far as I can tell, matchShapes cannot in fact handle two grayscale images. What could I be doing wrong?
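For reference, this is my understanding of what matchShapes() computes internally with the I1 method: Hu's seven moment invariants of each input, compared as a sum of differences of reciprocal signed log-magnitudes. The sketch below is pure NumPy (my own function names, not an OpenCV API), which is how I was hoping to fall back to computing the distance directly from two grayscale images if matchShapes() really cannot take them:

```python
import numpy as np

def hu_moments(img):
    """Hu's seven moment invariants of a 2-D grayscale array."""
    y, x = np.indices(img.shape)
    m00 = img.sum()
    cx = (img * x).sum() / m00          # centroid
    cy = (img * y).sum() / m00
    xc, yc = x - cx, y - cy

    def eta(p, q):
        # normalized central moment: mu_pq / m00^(1 + (p+q)/2)
        return (img * xc ** p * yc ** q).sum() / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def match_shapes_i1(img_a, img_b, eps=1e-12):
    """I1 metric as I understand it: sum |1/m_i^A - 1/m_i^B|,
    where m_i = sign(h_i) * log10|h_i|; near-zero invariants are skipped."""
    d = 0.0
    for ha, hb in zip(hu_moments(img_a), hu_moments(img_b)):
        if abs(ha) > eps and abs(hb) > eps:
            ma = np.sign(ha) * np.log10(abs(ha))
            mb = np.sign(hb) * np.log10(abs(hb))
            d += abs(1.0 / ma - 1.0 / mb)
    return d
```

With this, a shape compared against a translated copy of itself scores essentially 0, while a rectangle of different aspect ratio scores noticeably higher, which is the behavior I expected to get from matchShapes() on images directly.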
That then leaves the contours route. But how does one aggregate information from all possible contour pairs in order to assess the confidence of a match?
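For concreteness, the kind of aggregation I have been considering looks like the sketch below (my own names, nothing built into OpenCV): build the full matrix of pairwise distances (in practice each entry would come from cv2.matchShapes on a template/candidate contour pair), then greedily assign each template contour to its closest unused candidate contour and average the matched distances as a score, lower meaning better:

```python
import numpy as np

def pairwise_distances(tmpl_contours, cand_contours, dist_fn):
    """dist_fn(t, c) would wrap cv2.matchShapes(t, c, method, 0) in practice."""
    return np.array([[dist_fn(t, c) for c in cand_contours]
                     for t in tmpl_contours])

def match_confidence(dist):
    """Greedy one-to-one assignment over a (templates x candidates)
    distance matrix: repeatedly pick the globally closest remaining pair,
    then return the mean matched distance (lower = better match)."""
    dist = dist.astype(float).copy()
    n_pairs = min(dist.shape)
    total = 0.0
    for _ in range(n_pairs):
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        total += dist[i, j]
        dist[i, :] = np.inf   # each template contour used at most once
        dist[:, j] = np.inf   # each candidate contour used at most once
    return total / n_pairs
```

Greedy assignment is of course suboptimal; an optimal one-to-one matching could be obtained from the same matrix with scipy.optimize.linear_sum_assignment. It is also unclear to me how best to penalize leftover unmatched contours, which is part of what I am asking.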