
matchTemplate() with a mask

asked 2013-01-01 15:50:49 -0500

Adi

Is there a way to use matchTemplate() with a mask?
If not, is there some other way to achieve this effect? This means that some areas of the template can be excluded from the score calculation.


4 answers


answered 2013-01-04 13:16:20 -0500

updated 2013-01-04 13:17:27 -0500

For template matching in OpenCV, it's better to first get the Canny edges from the image, then smooth that edge image; then, in the source image, you can fill your mask region with zeros.



Interesting approach. I'm using pixel values as I want the image texture preserved. I may try what you suggest to see how it affects the matches.

Adi ( 2013-01-07 12:50:29 -0500 )

In template matching, if you reduce the feature size used in the correlation process, the response will be more distinct than when matching against the full pattern. So you can find all the objects by applying a specific threshold to the result.

Mostafa Sataki ( 2013-01-07 22:55:31 -0500 )

Hi, I agree with Mostafa Sataki's suggestion; here is an example:

wuling ( 2014-04-09 09:16:43 -0500 )

answered 2013-01-07 09:08:51 -0500

matt.hammer

If you're using simple cross correlation, you could use a mask to set those excluded areas to 0, which would prevent them from contributing to the cross correlation sum. This tactic isn't going to work with any sort of normalization, and probably not with coefficient correlation, as they use a mean of all template pixels - which is going to be affected by "0" value pixels.
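The zeroing idea above can be sketched in a few lines. This is a minimal, illustrative NumPy version (not OpenCV's matchTemplate, and the function name is mine): masked-out template pixels become 0, so they add nothing to the unnormalized cross-correlation sum.

```python
import numpy as np

def masked_cross_correlation(image, template, mask):
    """Plain (unnormalized) cross-correlation where masked-out
    template pixels are zeroed, so they contribute nothing to the sum."""
    t = template * mask          # excluded template pixels become 0
    th, tw = t.shape
    ih, iw = image.shape
    result = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            # sum over the window; zeroed template pixels drop out
            result[y, x] = np.sum(image[y:y + th, x:x + tw] * t)
    return result
```

The best match is the argmax of the result. As the answer notes, this trick breaks down for the normalized and coefficient methods, because their means and norms would still "see" the zeroed pixels.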

I'm working on a warpPerspective -> matchTemplate setup, and was about ready to try rewriting matchTemplate to handle the quadrilaterals I was getting from warpPerspective (inscribed in rectangles, padded with 0's), until I figured out I could just use the inverse transform matrix to warpPerspective the larger image rather than the template. (I'm more interested in the center of the image, so I can assume the match will be somewhere in the interior of the image.)



I'm using SSD (CV_TM_SQDIFF), so this won't work, since with squared differences there is no "neutral" pixel value that drops out of the sum. But maybe cross-correlation will give sufficiently similar results. I'll think about it.

Adi ( 2013-01-07 12:48:45 -0500 )

answered 2013-03-25 03:10:31 -0500

MattiasN

Have you done any more with this? I'm using the normalized cross correlation and would like to mask away parts of my template. I'm thinking about rewriting matchTemplate to handle masks, but I'm not sure that I have the time and knowledge needed...



Well, my approach for warpPerspective & matchTemplate failed (warping the target instead of the template uses a TON of memory - much more than I have available on Android phones), so I am probably going to deep dive into a new/modified version of matchTemplate in April. That is, unless someone else beats me to it (and saves me a bunch of work).

matt.hammer ( 2013-03-25 09:00:35 -0500 )

Any update on your work? I would be interested in using it.

LadyZayin ( 2013-07-26 16:01:41 -0500 )

Actually I ended up using a workaround. Starting from a warped image quadrilateral inscribed in a rectangle, I pick a smaller rectangle inscribed (and thus containing only "active pixels") in the quadrilateral. I discard everything else and use this rectangle with matchTemplate - thus no "empty" zero-value pixels to influence averages and weights in the matching algorithms. As of yet, results are inconclusive - I think it might work because the warps I am dealing with are "gentle" - maybe 5, 10 degree Euler angle rotations.

matt.hammer ( 2013-07-31 16:00:59 -0500 )

answered 2014-04-09 08:55:03 -0500

Hi guys, I ended up writing my own function from the matchTemplate() source code, using the CV_SQDIFF_NORMED method. It takes a transparent PNG and builds a vector with the coordinates of every point in the mask (every point that has an alpha > 0), then uses those points to access the image at each offset, instead of the integral-rectangle trick used in the original function. It's surely slower, but it seems to work.
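The approach described above can be sketched as follows. This is an illustrative NumPy version of a masked normalized squared difference (the function name and the exact normalization are my assumptions, modeled on OpenCV's TM_SQDIFF_NORMED formula; it is not the poster's actual VS2010 code):

```python
import numpy as np

def masked_sqdiff_normed(image, template, alpha):
    """Normalized squared difference computed only over template
    pixels whose alpha is > 0, skipping the excluded region entirely."""
    pts = np.argwhere(alpha > 0)        # the "vector of mask points"
    th, tw = template.shape
    ih, iw = image.shape
    result = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            diff2 = norm_i = norm_t = 0.0
            for (ty, tx) in pts:        # visit only masked-in pixels
                iv = image[y + ty, x + tx]
                tv = template[ty, tx]
                diff2 += (iv - tv) ** 2
                norm_i += iv * iv
                norm_t += tv * tv
            # TM_SQDIFF_NORMED-style denominator, restricted to mask points
            result[y, x] = diff2 / (np.sqrt(norm_i * norm_t) + 1e-12)
    return result
```

The best match is the argmin of the result. Looping over explicit mask points instead of using integral images is exactly why this is slower than the built-in function, as the answer says.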

VS2010 demo code here

Hope it helps




Hello, it appears the link to your demo code is broken; would you mind updating it? I'm curious to see how this was done.

camptonc ( 2018-07-13 17:56:02 -0500 )


Seen: 13,368 times

Last updated: Apr 09 '14