Speed of filter2d vs. matchTemplate

For educational purposes I'm trying to understand the following relation:

I'm applying a 32x32 blur kernel to a 500x667 grayscale image (8 bit, single channel), which takes approx. 107 ms using cv::filter2D. When, however, a 32x32 template is matched against the same image, the call to matchTemplate (CV_TM_SQDIFF) takes just 14 ms.
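To make the comparison concrete, here is a small NumPy sketch (illustrative only, not OpenCV's actual implementation) showing that the SQDIFF score decomposes into a sliding sum of squares plus a cross-correlation term, so matchTemplate can be driven by the same correlation machinery that filter2D uses:

```python
import numpy as np

def correlate_valid(img, tmpl):
    """Naive 'valid' cross-correlation: sum(I * T) at each offset."""
    th, tw = tmpl.shape
    oh, ow = img.shape[0] - th + 1, img.shape[1] - tw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+th, j:j+tw] * tmpl)
    return out

def sqdiff_naive(img, tmpl):
    """Direct CV_TM_SQDIFF-style score: sum((I - T)^2) at each offset."""
    th, tw = tmpl.shape
    oh, ow = img.shape[0] - th + 1, img.shape[1] - tw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            d = img[i:i+th, j:j+tw] - tmpl
            out[i, j] = np.sum(d * d)
    return out

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = rng.random((5, 5))

# sum((I-T)^2) = sum(I^2) - 2*sum(I*T) + sum(T^2):
# the only offset-dependent heavy term is the cross-correlation sum(I*T),
# so a fast (e.g. FFT-based) correlation accelerates SQDIFF matching too.
sliding_sq = correlate_valid(img * img, np.ones_like(tmpl))  # sliding sum of I^2
expanded = sliding_sq - 2 * correlate_valid(img, tmpl) + np.sum(tmpl * tmpl)

print(np.allclose(expanded, sqdiff_naive(img, tmpl)))  # True
```

(Whether the 14 ms vs. 107 ms gap comes from this decomposition or from a different internal code path is exactly what I'm asking about.)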

Why is there such a huge difference in processing time? The documentation states that, starting with a kernel size of about 11x11, filter2D applies the kernel in the frequency domain, which should speed things up. But the documentation also states that filter2D computes the correlation instead of the convolution. So aren't both methods computing similar things?
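For what it's worth, the correlation-vs-convolution distinction is only a 180° rotation of the kernel, so for a symmetric blur kernel the two operations give identical results. A minimal NumPy sketch of this (illustrative only, not OpenCV's code):

```python
import numpy as np

def correlate_valid(img, k):
    """'Valid' correlation, as filter2D computes it: no kernel flip."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

def convolve_valid(img, k):
    """'Valid' convolution: correlation with the kernel rotated 180 degrees."""
    return correlate_valid(img, k[::-1, ::-1])

rng = np.random.default_rng(1)
img = rng.random((12, 12))
k = rng.random((4, 4))

# correlation(I, k) == convolution(I, flip(k)) for any kernel
print(np.allclose(correlate_valid(img, k), convolve_valid(img, k[::-1, ::-1])))  # True

# For a symmetric kernel such as a box blur, flip(k) == k, so the two coincide:
box = np.ones((4, 4)) / 16.0
print(np.allclose(correlate_valid(img, box), convolve_valid(img, box)))  # True
```

So numerically the per-pixel work looks the same either way, which is why the timing gap surprises me.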
