
cvMatchTemplate algorithm and speed

This is a continuation of an older post: http://answers.opencv.org/question/83870/why-is-opencvs-template-matching-method-tm_sqdiff-so-fast/

Basically, what I want to know is:

1) Why does the official documentation ( http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html ) describe the method as a pixel-by-pixel raster scan through the image, storing a score at each position? Implemented literally on a decent-sized image, a 100x100 scan region alone means 10,000 placements of the template, each followed by a full comparison to produce the score. When I've tried this approach myself, match times exceed 5 minutes per image.
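For reference, the brute-force procedure the tutorial describes can be sketched in a few lines. This is not OpenCV's actual implementation, just a minimal numpy version of the literal sliding-window TM_SQDIFF, to make the cost concrete (one full template comparison per output pixel):

```python
import numpy as np

def match_template_sqdiff(image, templ):
    """Brute-force TM_SQDIFF: slide the template over every valid
    position and record the sum of squared differences at each one
    (lower score = better match)."""
    ih, iw = image.shape
    th, tw = templ.shape
    result = np.empty((ih - th + 1, iw - tw + 1), dtype=np.float64)
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            window = image[y:y + th, x:x + tw]
            result[y, x] = np.sum((window - templ) ** 2)
    return result

# Tiny demo: the template is cut straight from the image, so the
# score map's minimum (a perfect match, score 0) sits at that patch.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
tpl = img[3:6, 2:5].copy()
scores = match_template_sqdiff(img, tpl)
y, x = np.unravel_index(np.argmin(scores), scores.shape)
print(y, x)  # best match at (3, 2)
```

The double loop is exactly why this is slow: the work is O(image pixels × template pixels), which is what makes a naive implementation take minutes on real images.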

2) The official documentation does not mention the DFT/FFT at all, but Tetragramm (on this site) provided a helpful response about it. I'd like more clarity, though: what is the actual algorithm used? The .cpp source files are dense reading. As I understand it, the source and template images are both Fourier transformed, and then the spectra are somehow combined to locate the best match position?
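As a rough illustration of the FFT route (a numpy sketch of the standard convolution-theorem trick, not a transcription of OpenCV's source): the squared-difference score expands as sum(I²) − 2·sum(I·T) + sum(T²). The cross term sum(I·T) for every shift is a cross-correlation, which can be computed for all shifts at once by multiplying one spectrum by the conjugate of the other and inverse-transforming; the per-window sum(I²) comes cheaply from an integral image:

```python
import numpy as np

def match_sqdiff_fft(image, templ):
    """TM_SQDIFF via the expansion
        sum((I - T)^2) = sum(I^2) - 2*sum(I*T) + sum(T^2),
    computing the cross term for every shift with FFTs instead of an
    explicit sliding window (O(N log N) rather than O(N * M))."""
    ih, iw = image.shape
    th, tw = templ.shape
    fh, fw = ih + th - 1, iw + tw - 1  # full correlation size

    # Cross-correlation for all shifts: pointwise product in the
    # frequency domain, with the template's spectrum conjugated.
    F_img = np.fft.rfft2(image, s=(fh, fw))
    F_tpl = np.fft.rfft2(templ, s=(fh, fw))
    ccorr = np.fft.irfft2(F_img * np.conj(F_tpl), s=(fh, fw))
    ccorr = ccorr[:ih - th + 1, :iw - tw + 1]  # keep valid shifts only

    # Per-window sum of squared image pixels, via an integral image.
    s = np.cumsum(np.cumsum(image ** 2, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))
    win_sq = s[th:, tw:] - s[:-th, tw:] - s[th:, :-tw] + s[:-th, :-tw]

    return win_sq - 2.0 * ccorr + np.sum(templ ** 2)

# Demo: template cut straight from the image; the minimum of the
# score map lands at the patch's top-left corner.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
tpl = img[10:18, 5:13].copy()
scores = match_sqdiff_fft(img, tpl)
y, x = np.unravel_index(np.argmin(scores), scores.shape)
print(y, x)  # minimum (perfect match) at (10, 5)
```

So nothing is "realigned" in the frequency domain: the FFT just evaluates the correlation term of the score at every shift simultaneously, and the best match is read off from the resulting score map in the spatial domain.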

Thanks for the help - I'm just trying to understand how this all works.
