Speed of filter2d vs. matchTemplate [closed]

asked 2016-09-19 10:47:55 -0600

t4l0s

updated 2016-09-19 12:14:50 -0600

For educational purposes I'm trying to understand the following relation:

I'm applying a 32x32 blur kernel to a 500x667 grayscale image (8 bit, single channel), which takes approx. 107 ms using cv::filter2D. However, matching a 32x32 template against the same image with matchTemplate (CV_TM_SQDIFF) takes just 14 ms.

Why is there such a huge difference in processing time? The documentation states that, starting with a kernel size of about 11x11, filter2D applies the kernel in the frequency domain, which should speed things up. But the documentation also states that filter2D computes correlation rather than convolution. So aren't both methods computing similar things?
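They are indeed related: expanding the squared difference shows that SQDIFF matching decomposes into a cross-correlation term (the quantity filter2D computes per pixel) plus two window-sum terms. Below is a minimal NumPy sketch of that identity, using brute-force loops for clarity; it is illustrative only and not how OpenCV actually implements either function.

```python
import numpy as np

def sqdiff_direct(img, tmpl):
    """Brute-force CV_TM_SQDIFF: sum of squared differences per window."""
    H, W = img.shape
    h, w = tmpl.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = img[y:y + h, x:x + w]
            out[y, x] = np.sum((win - tmpl) ** 2)
    return out

def sqdiff_via_correlation(img, tmpl):
    """Same result via sum((I-T)^2) = sum(I^2) - 2*corr(I,T) + sum(T^2)."""
    H, W = img.shape
    h, w = tmpl.shape
    out = np.empty((H - h + 1, W - w + 1))
    t_sq = np.sum(tmpl ** 2)  # constant over all windows
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = img[y:y + h, x:x + w]
            corr = np.sum(win * tmpl)  # the correlation term filter2D computes
            out[y, x] = np.sum(win ** 2) - 2.0 * corr + t_sq
    return out

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = rng.random((5, 5))
assert np.allclose(sqdiff_direct(img, tmpl), sqdiff_via_correlation(img, tmpl))
```

Because sum(T^2) is constant and the per-window sum(I^2) can be obtained cheaply from an integral image, SQDIFF matching only really needs one correlation, which OpenCV can evaluate with a DFT-based path. Differences in which path each function picks (and in per-call overhead) can therefore produce very different timings even though the underlying arithmetic is similar.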


Closed for the following reason: the question is answered, right answer was accepted by sturkmen
close date 2020-12-04 11:33:22.739345

Comments

apples vs. pears

berak ( 2016-09-19 11:24:18 -0600 )

@berak Could you be more specific? How does the actual computation differ between the two methods? What is the difference in computational complexity?

t4l0s ( 2016-09-19 12:02:46 -0600 )

I question your benchmarks.

I'm getting filter2D 14ms, SQDIFF 11ms, SQDIFF_NORMED 16ms for a 32x32 kernel on a 1024x768 single-channel 32F image.

For a single channel 8U image, I get filter2D: 8ms, SQDIFF: 11ms, SQDIFF_NORMED: 16ms.

Tetragramm ( 2016-09-19 17:51:47 -0600 )

It could also depend on OpenCL and the order of the tests in the source code.

LBerger ( 2016-09-20 02:23:39 -0600 )
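Test order matters because the first call to an operation often pays one-time costs (buffer allocation, OpenCL kernel compilation, cache warm-up) that later calls don't. A generic timing-harness sketch in Python that discards warm-up runs (illustrative only; the commented-out OpenCV calls show intended usage, not a benchmark from this thread):

```python
import time

def bench(fn, warmup=3, runs=10):
    """Time fn(), discarding warm-up runs that absorb one-time setup costs."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)  # best-of-N is more robust against scheduler noise

# Intended usage (requires OpenCV; img, kernel, tmpl assumed preloaded):
# t_filter = bench(lambda: cv2.filter2D(img, -1, kernel))
# t_match  = bench(lambda: cv2.matchTemplate(img, tmpl, cv2.TM_SQDIFF))
```

With a harness like this, swapping the order of the two measurements should no longer change the results, which helps separate genuine algorithmic differences from setup overhead.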