I'm going to write quite a bit, so it won't fit in a comment; this is directed at Matt's answer, but it also still serves as my answer to the original question.
Firstly, for CCORR: normal pictures have no negative values, so I think your explanation of CCORR is incomplete, at least. The best way I can explain it from the math is that, just as 6·8 is smaller than 7·7 (i.e. the product x(a−x) is maximized at x = a/2), for a fixed sum the product is highest when the two numbers are close. The product will also be higher simply because the values themselves are higher, but that is taken care of by the normalization in CCORR_NORMED. You can see from the examples at https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html that plain CCORR is indeed not very useful: it returns bright for bright areas and dark for dark areas, as expected.
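To see that behaviour concretely, here is a minimal sketch (the image, template, and values are all made up for illustration): a gradient template, an exact copy of it pasted into a dim image, and a uniformly bright block that looks nothing like the template. Plain TM_CCORR peaks on the bright block, while TM_CCORR_NORMED peaks on the real copy.

```python
import numpy as np
import cv2

# Made-up data: horizontal-gradient template (values 0..95)
tmpl = np.tile(np.arange(20, dtype=np.uint8) * 5, (20, 1))

img = np.full((100, 100), 10, dtype=np.uint8)  # dim background
img[10:30, 10:30] = tmpl                       # exact copy of the template
img[60:80, 60:80] = 255                        # bright block, no resemblance to the template

ccorr = cv2.matchTemplate(img, tmpl, cv2.TM_CCORR)
ccorr_normed = cv2.matchTemplate(img, tmpl, cv2.TM_CCORR_NORMED)

# minMaxLoc returns (minVal, maxVal, minLoc, maxLoc); we want the max location
print(cv2.minMaxLoc(ccorr)[3])         # (60, 60): plain CCORR is fooled by brightness
print(cv2.minMaxLoc(ccorr_normed)[3])  # (10, 10): the normalized version finds the copy
```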
Even after normalization, though, this wouldn't seem as effective as running cross-correlation (CCORR) on data that also has negative values, since, for example, the mismatched 6·8 is only 1 smaller than the matched 7·7, so a mismatch barely costs anything.

So, for CCOEFF: I believe the term in OpenCV's documentation is indeed a division, but it's supposed to be read as (1/(w·h))·sum(T(x'', y'')) and not 1/(w·h·sum(T(x'', y''))), i.e. it is the average (mean) of the template (and, in the other term, of the template-sized portion of the image). By subtracting it from every pixel you make the darker pixels negative and the lighter ones positive, which is exactly how we wanted CCORR to behave, and then those values go through exactly the same procedure CCORR uses. This already gives a better answer, and it can still be normalized: both CCOEFF and CCOEFF_NORMED work.
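As a quick check on that reading of the formula, here is a minimal sketch (random made-up data, arbitrary location): CCOEFF at a given position is just CCORR computed after subtracting the template mean from the template and the window mean from the image window.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (60, 60)).astype(np.float32)
tmpl = rng.integers(0, 256, (16, 16)).astype(np.float32)

res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF)

# Pick one arbitrary location in the result map and redo it by hand
y, x = 7, 5
window = img[y:y + 16, x:x + 16]

# "CCORR on mean-subtracted values": T' = T - mean(T), I' = I - mean(I)
manual = np.sum((tmpl - tmpl.mean()) * (window - window.mean()))

print(res[y, x], manual)  # the two agree (up to floating-point error)
```

If the 1/(w·h) really divided the whole sum instead of being the mean that gets subtracted, the hand-computed value above wouldn't match what matchTemplate returns.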
I know this is incredibly late, but if anyone stumbles here like I did, I hope this helps :P