Speed of filter2D vs. matchTemplate [closed]
For educational purposes I'm trying to understand the following relation:
I'm applying a 32x32 blur kernel to a 500x667 grayscale image (8 bit, single channel), which takes approximately 107 ms using cv::filter2D. However, when a 32x32 template is matched against the same image, the matchTemplate call (CV_TM_SQDIFF) takes just 14 ms.
Why is there such a huge difference in processing time? The documentation states that, starting with a kernel size of roughly 11x11, filter2D applies the kernel in the frequency domain, which should speed things up. But the documentation also states that filter2D computes correlation rather than convolution. So aren't both methods computing similar things?
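For reference, a minimal benchmark along the lines described above might look like the sketch below. The image file name, the use of a normalized box kernel, and the choice of an image patch as the template are assumptions, not details from the original timing setup:

```cpp
#include <opencv2/core/utility.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    // 8-bit single-channel input, as in the question (file name assumed).
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // 32x32 normalized box (blur) kernel for filter2D.
    cv::Mat kernel = cv::Mat::ones(32, 32, CV_32F) / float(32 * 32);
    // 32x32 patch of the image used as the template for matchTemplate.
    cv::Mat templ = img(cv::Rect(0, 0, 32, 32)).clone();

    cv::Mat blurred, result;
    cv::TickMeter tm;

    tm.start();
    cv::filter2D(img, blurred, -1, kernel);   // correlation with the kernel
    tm.stop();
    std::cout << "filter2D:      " << tm.getTimeMilli() << " ms\n";

    tm.reset();
    tm.start();
    cv::matchTemplate(img, templ, result, cv::TM_SQDIFF);  // sum of squared differences
    tm.stop();
    std::cout << "matchTemplate: " << tm.getTimeMilli() << " ms\n";
    return 0;
}
```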
apples vs. pears
@berak Could you be more specific? How does the actual computation differ between the two methods? What is the difference in computational complexity?
I question your benchmarks.
I'm getting filter2D: 14 ms, SQDIFF: 11 ms, SQDIFF_NORMED: 16 ms for a 32x32 kernel on a 1024x768 single-channel 32F image.
For a single-channel 8U image, I get filter2D: 8 ms, SQDIFF: 11 ms, SQDIFF_NORMED: 16 ms.
It could also depend on OpenCL and the order of the tests in the source code.
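One way to rule out those two factors is to force the plain CPU code path and run each function once untimed before measuring. This is only a sketch of that idea, not part of the original comment; the helper name is made up:

```cpp
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: disable OpenCL dispatch and warm up both functions
// so lazy initialization does not penalize whichever call runs first.
void prepareFairTiming(const cv::Mat& img, const cv::Mat& kernel,
                       const cv::Mat& templ)
{
    cv::ocl::setUseOpenCL(false);  // compare plain CPU implementations

    cv::Mat dst, result;
    cv::filter2D(img, dst, -1, kernel);                    // untimed warm-up run
    cv::matchTemplate(img, templ, result, cv::TM_SQDIFF);  // untimed warm-up run
}
```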