MatchTemplate on a single color [closed]

asked 2013-07-25 13:50:24 -0600

LadyZayin

updated 2013-07-25 14:57:21 -0600

I am attempting to compare the quality of a black-and-white map built by a robot running SLAM against a ground-truth map. I decided to try matchTemplate for that purpose. When I call the function on the maps, the result is far from accurate: the matched region is way off.

Please look at the attached image, a hand-drawn example of what happens: Maps. On the left is the ground truth and on the right is the SLAM map of a single room (say I stopped my SLAM algorithm at this point). The gray rectangles represent the boundaries of each image. I would expect matchTemplate to locate the room at the bottom-left corner of the ground truth (where it should be), but it doesn't. In fact, the algorithm matches it wherever a lot of white can be found (such as the region enclosed by the green rectangle). The white regions of my SLAM map therefore skew the result.

I thought of two solutions, but I don't know how to apply them. First, is there a way to make matchTemplate take only black into account and ignore white completely? Second, is it possible to enclose my SLAM map in a non-rectangular mask (the rooms are not always rectangular)? If not, is there another algorithm that would better fit my purpose?
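For reference, the "ignore white" idea amounts to template matching with a per-pixel mask: only pixels that are black (occupied) in the SLAM map contribute to the score. Newer OpenCV versions (3.0+) expose this directly, since matchTemplate accepts an optional mask argument for the TM_SQDIFF and TM_CCORR_NORMED methods. The idea itself can be sketched in plain C++ on toy grayscale arrays (all names here are illustrative, not from any library):

```cpp
#include <cstdint>
#include <climits>
#include <utility>
#include <vector>

// A tiny grayscale image: 0 = black (occupied), 255 = white (free/unknown).
struct Gray {
    int rows, cols;
    std::vector<uint8_t> px;
    uint8_t at(int r, int c) const { return px[r * cols + c]; }
};

// Masked sum-of-squared-differences matching: white template pixels are
// treated as "don't care" and contribute nothing to the score.
// Returns the (row, col) offset with the lowest masked SSD.
std::pair<int, int> bestMatch(const Gray& img, const Gray& templ) {
    long long best = LLONG_MAX;
    std::pair<int, int> bestPos{0, 0};
    for (int r = 0; r + templ.rows <= img.rows; ++r) {
        for (int c = 0; c + templ.cols <= img.cols; ++c) {
            long long ssd = 0;
            for (int tr = 0; tr < templ.rows; ++tr)
                for (int tc = 0; tc < templ.cols; ++tc) {
                    if (templ.at(tr, tc) == 255) continue; // mask out white
                    long long d = (long long)img.at(r + tr, c + tc)
                                - (long long)templ.at(tr, tc);
                    ssd += d * d;
                }
            if (ssd < best) { best = ssd; bestPos = {r, c}; }
        }
    }
    return bestPos;
}
```

With a mask like this, large white areas in the search image can no longer attract the template, which is exactly the failure mode described above.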

I found several topics on using matchTemplate with masks or transparency, but the solutions there don't seem to apply to my case. For instance, I tried running edge detection before matchTemplate, but it doesn't help, since my original map is already approximately equivalent to an image to which edge detection has been applied (obviously!).

I hope I made myself clear!


Closed for the following reason: the question is answered, right answer was accepted by Mathieu Barnachon
close date 2013-07-29 16:55:49.464374

2 answers


answered 2013-07-26 02:45:41 -0600

Siegfried

Hi, I think you will have no luck using matchTemplate to compare your SLAM map with the ground-truth map. One reason is that matchTemplate isn't rotation invariant. In SLAM, the robot learns its unknown environment while exploring it and builds the map in its own reference frame. Typically the origin of that frame is wherever the robot starts SLAM, so the frame can have a different orientation than the ground-truth frame. The difference can be caused by the robot's initial displacement, loop closing in the SLAM algorithm, laser-scan matching errors, or map optimizations. So it's not possible to guarantee that the maps have the same orientation.

Determining the accuracy of a generated SLAM map is therefore a difficult task. But since SLAM has a large research community, there are papers about measuring the accuracy of a map.

You should read this paper: Rainer Kümmerle, Bastian Steder, Christian Dornhege, Michael Ruhnke, Giorgio Grisetti, Cyrill Stachniss, and Alexander Kleiner, "On Measuring the Accuracy of SLAM Algorithms", Autonomous Robots, 27(4):387-407, 2009.

Take a look at the related-work section, where several approaches for comparing SLAM maps are discussed. For example, one of them uses a translation- and rotation-invariant Hausdorff metric. I think that could be a better way to compare the maps than matchTemplate.
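The basic (non-invariant) Hausdorff distance is easy to sketch: the directed distance from set A to set B is the largest nearest-neighbor distance from a point of A into B, and the symmetric metric takes the max of both directions. A minimal brute-force sketch over the occupied cells of two maps (the invariant variant from the paper would additionally minimize this over rotations/translations):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// An occupied cell (or any 2-D point) of a map.
struct Pt { double x, y; };

// Directed Hausdorff distance: the worst nearest-neighbor distance
// from a point of A into B. O(|A|*|B|) brute force.
double directedHausdorff(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    double worst = 0.0;
    for (const Pt& a : A) {
        double nearest = std::numeric_limits<double>::infinity();
        for (const Pt& b : B)
            nearest = std::min(nearest, std::hypot(a.x - b.x, a.y - b.y));
        worst = std::max(worst, nearest);
    }
    return worst;
}

// Symmetric Hausdorff distance between two occupied-cell point sets.
double hausdorff(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    return std::max(directedHausdorff(A, B), directedHausdorff(B, A));
}
```

A small score means every occupied cell of one map has a nearby occupied cell in the other, which is a reasonable notion of map similarity; large white areas simply don't appear in the point sets.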



Thanks for your reply. I will definitely read this paper. However, I should have mentioned that I managed to align both maps (rotation + translation) using the GIMP image registration plugin. It works well enough in most cases.

Now that might seem weird since matchTemplate aligns images as well, but what I need in the end is the similarity score provided by minMaxLoc.
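To be concrete, that score is just the peak of the result map (maxVal from minMaxLoc for the correlation-based methods). For TM_CCOEFF_NORMED it is essentially the Pearson correlation between the mean-subtracted template and the image window; a hand-rolled sketch of the per-offset value (formula as given in the OpenCV docs, names illustrative):

```cpp
#include <cmath>
#include <vector>

// TM_CCOEFF_NORMED at one offset: Pearson correlation between the template
// and the image window at (offR, offC), both mean-subtracted.
// Result lies in [-1, 1]; undefined if either patch is constant.
double ccoeffNormed(const std::vector<std::vector<double>>& img,
                    const std::vector<std::vector<double>>& templ,
                    int offR, int offC) {
    int rows = templ.size(), cols = templ[0].size();
    double n = rows * cols, meanT = 0, meanI = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            meanT += templ[r][c];
            meanI += img[offR + r][offC + c];
        }
    meanT /= n; meanI /= n;
    double num = 0, dT = 0, dI = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            double t = templ[r][c] - meanT;
            double i = img[offR + r][offC + c] - meanI;
            num += t * i; dT += t * t; dI += i * i;
        }
    return num / std::sqrt(dT * dI);
}
```

The maximum of this value over all offsets is exactly what minMaxLoc reports as maxVal on the matchTemplate result: 1.0 for a perfect match, values near 0 for unrelated content.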

LadyZayin ( 2013-07-26 10:05:24 -0600 )

answered 2013-07-29 14:11:38 -0600

LadyZayin

updated 2013-07-29 14:21:06 -0600

I think I found a solution to my problem based on this code. Here is the code that I use:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <cstdlib> //atoi

using namespace std;
using namespace cv;

int main (int argc, char** argv){

    String slamMapPath, realMapPath;
    int method, resultColumns, resultRows;
    double minVal, maxVal;
    Point minLoc, maxLoc;
    Mat result;

    String comparisonMethods[] = {"CV_TM_SQDIFF", "CV_TM_SQDIFF_NORMED", "CV_TM_CCORR",
        "CV_TM_CCORR_NORMED", "CV_TM_CCOEFF", "CV_TM_CCOEFF_NORMED"}; //List of comparison methods.
    method = CV_TM_CCOEFF_NORMED; //"Cross coefficient normed" by default.

    //Bad parameter handling.
    if(argc < 3){
        cout << "Error: missing arguments.\n";
        return 1;
    }

    realMapPath = argv[1];
    slamMapPath = argv[2];
    Mat realMap = imread(realMapPath, -1); //Get the real map image. 0 is grayscale, -1 keeps the original channels.
    Mat slamMap = imread(slamMapPath, -1); //Get the SLAM map image.

    //Bad parameter handling.
    if(realMap.data == NULL && slamMap.data == NULL){
        cout << "Error: neither image can be read.\n";
        return 1;
    }
    else if(realMap.data == NULL){
        cout << "Error: first image cannot be read.\n";
        return 1;
    }
    else if(slamMap.data == NULL){
        cout << "Error: second image cannot be read.\n";
        return 1;
    }

    //Case with method parameter present.
    if(argc > 3){
        //More bad parameter handling.
        if(atoi(argv[3]) < 0 || atoi(argv[3]) > 5){
            cout << "Error: wrong value for comparison method.\n";
            return 1;
        }
        method = atoi(argv[3]);
    }

    //Create the result image.
    resultColumns = realMap.cols - slamMap.cols + 1; //# columns of result.
    resultRows = realMap.rows - slamMap.rows + 1; //# rows of result.
    result.create(resultRows, resultColumns, CV_32FC1); //Allocate space for the result (rows first, then columns).

    ///This piece of code is based on
    Mat templ = slamMap, img = realMap; //The SLAM map is the template, the real map the search image.
    const double UCHARMAX = 255;
    const double UCHARMAXINV = 1./UCHARMAX;
    vector<Mat> layers;

    //RGB+alpha layer containers.
    Mat templRed(templ.size(), CV_8UC1);
    Mat templGreen(templ.size(), CV_8UC1);
    Mat templBlue(templ.size(), CV_8UC1);
    Mat templAlpha(templ.size(), CV_8UC1);

    Mat imgRed(img.size(), CV_8UC1);
    Mat imgGreen(img.size(), CV_8UC1);
    Mat imgBlue(img.size(), CV_8UC1);
    Mat imgAlpha(img.size(), CV_8UC1);

    //Check if one of the images has an alpha channel.
    if(templ.depth() == CV_8U && img.depth() == CV_8U &&
      (img.type() == CV_8UC3 || img.type() == CV_8UC4) &&
      (templ.type() == CV_8UC3 || templ.type() == CV_8UC4)){

      //Divide image and template into RGB+alpha layers.
      if(templ.type() == CV_8UC3){ //Template doesn't have alpha.
        templAlpha = Scalar(UCHARMAX);
        split(templ, layers);
      }
      else if(templ.type() == CV_8UC4){ //Template has alpha.
        split(templ, layers);
      }
      if(img.type() == CV_8UC3){ //Image doesn't have alpha.
        imgAlpha = Scalar(UCHARMAX);
        split(img, layers);
      }
      else if(img.type() == CV_8UC4){ //Image has alpha.
        split(img, layers);
      }
      Size resultSize(img.cols - templ.cols + 1, img.rows - templ.rows + 1);
      result.create(resultSize ...


Consider this closed. I just can't approve my answer 'cause I lack the karma.

LadyZayin ( 2013-07-29 15:54:48 -0600 )


Seen: 2,680 times

Last updated: Jul 29 '13