
LadyZayin's profile - activity

2020-07-16 22:55:44 -0600 received badge  Popular Question (source)
2016-01-20 13:52:29 -0600 received badge  Notable Question (source)
2015-01-01 13:16:35 -0600 received badge  Popular Question (source)
2013-11-18 22:53:59 -0600 commented question Alpha-Dependent Template Matching

No problem, thanks anyway! :)

2013-11-18 22:51:35 -0600 commented question RandomForest->predict always results 0

Hi, I'm siso's teammate. I just wanted to add that we are trying to use CvRTrees with feature vectors of variable length, and we believe that the problem might originate from this. For instance, a feature vector could be [1,2,3,1,2] or [1,1,2,3,3,1,2] and represent the same output. I suppose we could pad the vectors with a certain value (such as 0), but how can we ensure that this value will be ignored by the classifier? We have tried using Mat masks to solve the issue, but it doesn't seem to help. Either there is a bug in our implementation, or CvRTrees cannot be used with variable-length inputs.
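For illustration, here is a minimal sketch of the padding plus missing-data-mask approach we have been trying, assuming the OpenCV 2.4 ml API; the helper name, the padding value and the fixed sample width are placeholders, and varType is left at its default:

#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

//Trains a random forest on samples padded to a fixed width, flagging the
//padded cells through CvRTrees' missing-data mask (hypothetical helper).
void trainPadded(const cv::Mat& samples,   //CV_32F, one padded sample per row.
                 const cv::Mat& responses, //One response per row.
                 const cv::Mat& missing,   //CV_8U, nonzero where a cell is padding.
                 CvRTrees& forest)
{
    CvRTParams params; //Default random forest parameters.
    forest.train(samples, CV_ROW_SAMPLE, responses,
                 cv::Mat(), cv::Mat(), cv::Mat(), missing, params);
    //Prediction accepts a per-sample missing mask too:
    //forest.predict(sample, sampleMissing);
}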

2013-07-29 15:54:48 -0600 commented answer MatchTemplate on a single color

Consider this closed. I just can't accept my own answer because I lack the karma.

2013-07-29 15:48:41 -0600 received badge  Scholar (source)
2013-07-29 15:40:11 -0600 commented answer Feature matching tutorial compilation problem

Thanks, I'll have a look at it. Sorry for my double posting (now fixed); there seem to be server problems.

[Edit] I just solved my problem in an unexpected way. I changed my build command to "g++ -Wall -o "feature" "feature.cpp" -I /usr/include/ $(pkg-config --cflags --libs opencv)" and it worked. I'm surprised because I had tried it before and it had failed. I guess reinstalling might have fixed the issue.

Thanks a lot for your help! :)

2013-07-29 14:11:38 -0600 answered a question MatchTemplate on a single color

I think I found a solution to my problem, based on the code from the Alpha-Dependent Template Matching question linked in the comments below. Here is the code I use:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
#include <iostream>
#include <stdio.h> 
#include <string.h>                           

using namespace std;                  
using namespace cv; 

int main (int argc, char** argv){   

    String slamMapPath, realMapPath;    
    int method, resultColumns, resultRows;
    double maxVal; //Filled by minMaxLoc on the result further down.
    Point minLoc, maxLoc;
    Mat result;

    String comparisonMethods[] = {"CV_TM_SQDIFF", "CV_TM_SQDIFF_NORMED", "CV_TM_CCORR",
        "CV_TM_CCORR_NORMED", "CV_TM_CCOEFF", "CV_TM_CCOEFF_NORMED"}; //List of comparison methods.
    method = CV_TM_CCOEFF_NORMED; //"Cross coefficient normed" by default.  

    //Bad parameters handling.
    if(argc < 3){
        cout << "Error: missing arguments.";
        return 1;
    }

    realMapPath = argv[1]; 
    slamMapPath = argv[2]; 
    Mat realMap = imread(realMapPath, -1); //Get the real map image. 0 is grayscale. -1 is original image.
    Mat slamMap = imread(slamMapPath, -1); //Get the slam map image. 0 is grayscale. -1 is original image.

    //Bad parameters handling.
    if(slamMap.data == NULL && realMap.data == NULL){       
        cout << "Error: neither images can be read.\n";     
        return 1;
    }
    else if(realMap.data == NULL){
        cout << "Error: first image cannot be read.\n";     
        return 1;
    }
    else if(slamMap.data == NULL){
        cout << "Error: second image cannot be read.\n";        
        return 1;
    }

    //Case with method parameter present.
    if(argc > 3){
        //More bad parameter handling.
        if(atoi(argv[3]) < 0 || atoi(argv[3]) > 5){
            cout << "Error: wrong value for comparison method.\n";
            return 1;
        }
        else{   
            method = atoi(argv[3]);
        }
    }   

    //Create the result image.  
    resultColumns =  realMap.cols - slamMap.cols + 1; //# columns of result.
    resultRows = realMap.rows - slamMap.rows + 1; //# rows of result.
    result.create(resultRows, resultColumns, CV_32FC1); //Allocate space for the result (Mat::create takes rows, then columns).

    ///This piece of code is based on
    ///http://answers.opencv.org/question/16535/alpha-dependent-template-matching/  
    Mat templ, img;
    slamMap.copyTo(templ);
    realMap.copyTo(img);
    const double UCHARMAX = 255;
    const double UCHARMAXINV = 1./UCHARMAX;
    vector<Mat> layers;

    //RGB+Alpha layer containers.
    Mat templRed(templ.size(), CV_8UC1);
    Mat templGreen(templ.size(), CV_8UC1);
    Mat templBlue(templ.size(), CV_8UC1);
    Mat templAlpha(templ.size(), CV_8UC1);

    Mat imgRed(img.size(), CV_8UC1);
    Mat imgGreen(img.size(), CV_8UC1);
    Mat imgBlue(img.size(), CV_8UC1);
    Mat imgAlpha(img.size(), CV_8UC1);

    //Check if one of the images has an alpha channel.
    if(templ.depth() == CV_8U && img.depth() == CV_8U && 
      (img.type() == CV_8UC3 || img.type() == CV_8UC4) &&
      (templ.type() == CV_8UC3 || templ.type() == CV_8UC4)){

      //Divide image and template into RGB+alpha layers.
      if(templ.type() == CV_8UC3){ //Template doesn't have alpha.
        templAlpha = Scalar(UCHARMAX);
        split(templ, layers);
        layers[0].copyTo(templBlue);
        layers[1].copyTo(templGreen);
        layers[2].copyTo(templRed);
      }
      else if(templ.type() == CV_8UC4){ //Template has alpha.
        split(templ, layers);
        layers[0].copyTo(templBlue);
        layers[1].copyTo(templGreen);
        layers[2].copyTo(templRed);
        layers[3].copyTo(templAlpha);
      }
      if(img.type() == CV_8UC3){ //Image doesn't have alpha.
        imgAlpha = Scalar(UCHARMAX);
        split(img, layers);     
        layers[0].copyTo(imgBlue);
        layers[1].copyTo(imgGreen);
        layers[2].copyTo(imgRed);
      }
      else if(img.type() == CV_8UC4){ //Image has alpha.
        split(img, layers);
        layers[0].copyTo(imgBlue);
        layers[1].copyTo(imgGreen);
        layers[2].copyTo(imgRed);
        layers[3].copyTo(imgAlpha);
      }
      Size resultSize(img.cols - templ.cols + 1, img.rows - templ.rows + 1);
      result.create(resultSize ...
(more)
2013-07-29 12:25:10 -0600 commented question Alpha-Dependent Template Matching

Never mind, I got it to work with MatchTemplate. I was experiencing problems because the images I am matching are black and white, and I wanted to mask the white regions. As a result, the alpha channel would generate a completely black image. I just had to invert my images before applying the mask.
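In case it helps someone else, the inversion step is just a bitwise_not before building the mask; a minimal sketch, assuming 8-bit single-channel maps where walls are black (0) and free space is white (255), with a hypothetical helper name:

#include <opencv2/core/core.hpp>

//Builds a mask/alpha image from a black-and-white map by inverting it, so the
//black walls become nonzero (kept) and the white free space becomes zero (ignored).
cv::Mat maskFromMap(const cv::Mat& map8u)
{
    cv::Mat inverted;
    cv::bitwise_not(map8u, inverted);
    return inverted;
}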

2013-07-29 10:39:29 -0600 commented answer Feature matching tutorial compilation problem

Yes, I have. I'm reinstalling OpenCV now and making sure nonfree is installed too. Hopefully that'll fix the issue. [Edit] It's still not working. Here is my current build command:

g++ -Wall -o "feature" "feature.cpp" -I /usr/include/opencv2/ -lopencv_features2d -lopencv_highgui -lopencv_imgproc -lopencv_nonfree -lopencv_flann -lopencv_core

And this is what I get:

/usr/bin/ld: warning: libopencv_core.so.2.4, needed by /usr/local/lib/libopencv_nonfree.so, may conflict with libopencv_core.so.2.3

/usr/bin/ld: /tmp/ccGzJyU9.o: undefined reference to symbol 'cv::Algorithm::~Algorithm()'

/usr/bin/ld: note: 'cv::Algorithm::~Algorithm()' is defined in DSO /usr/local/lib/libopencv_core.so.2.4 so try adding it to the linker command line

Does the order of linked libraries matter?

2013-07-29 10:04:44 -0600 commented answer Feature matching tutorial compilation problem

feature.cpp:(.text._ZN2cv4SURFD1Ev[cv::SURF::~SURF()]+0x1e): undefined reference to `vtable for cv::SURF'

feature.cpp:(.text._ZN2cv4SURFD1Ev[cv::SURF::~SURF()]+0x6b): undefined reference to `cv::Algorithm::~Algorithm()'

Actually, I just realized that there is no nonfree folder in /usr/include/opencv2. I tried to find a way of installing it individually, but couldn't. Hmm.

2013-07-29 09:57:56 -0600 commented answer Feature matching tutorial compilation problem

Thanks for your reply. I did search the forum, but the questions I found were slightly different... Anyway, I have included all of the following:

"<opencv2/core/core.hpp> <opencv2/nonfree/nonfree.hpp> <opencv2/features2d/features2d.hpp> <opencv2/highgui/highgui.hpp> <stdio.h> <iostream>"

but I still get:

/usr/bin/ld: /tmp/ccPbfO0y.o: undefined reference to symbol 'cv::flann::SearchParams::SearchParams(int, float, bool)'

/usr/bin/ld: note: 'cv::flann::SearchParams::SearchParams(int, float, bool)' is defined in DSO /usr/lib/libopencv_flann.so.2.3 so try adding it to the linker command line

/usr/lib/libopencv_flann.so.2.3: could not read symbols: Invalid operation

Then I included flann.hpp as well and added -lopencv_flann to the build command, and I get new errors, such as

2013-07-26 16:01:41 -0600 commented answer matchTemplate() with a mask

Any update on your work? I would be interested in using it.

2013-07-26 15:36:08 -0600 asked a question Feature matching tutorial compilation problem

I am trying to compile the code in this tutorial, but I get the following error messages:

feature.cpp: In function ‘int main(int, char**)’:
feature.cpp:26:3: error: ‘SurfFeatureDetector’ was not declared in this scope
feature.cpp:26:23: error: expected ‘;’ before ‘detector’
feature.cpp:30:3: error: ‘detector’ was not declared in this scope
feature.cpp:34:3: error: ‘SurfDescriptorExtractor’ was not declared in this scope
feature.cpp:34:27: error: expected ‘;’ before ‘extractor’
feature.cpp:38:3: error: ‘extractor’ was not declared in this scope
feature.cpp:24:7: warning: unused variable ‘minHessian’ [-Wunused-variable]

However, I am pretty sure I linked the right libraries. Here is my build command (from Geany):

g++ -Wall -o "%e" "%f" -I /usr/include/opencv2/ -lopencv_features2d -lopencv_core -lopencv_highgui -lopencv_imgproc

I don't understand why it doesn't work because the core and highgui libraries are not causing any issue. I checked my /usr/include/opencv2/features2d folder and I did find features2d.hpp. However, if I replace

#include "opencv2/features2d/features2d.hpp"

in the program by

#include "opencv2/nonfree/features2d.hpp"

I get the following compilation error:

/usr/bin/ld: /tmp/ccVyvyXB.o: undefined reference to symbol 'cv::flann::SearchParams::SearchParams(int, float, bool)'
/usr/bin/ld: note: 'cv::flann::SearchParams::SearchParams(int, float, bool)' is defined in DSO /usr/lib/libopencv_flann.so.2.3 so try adding it to the linker command line
/usr/lib/libopencv_flann.so.2.3: could not read symbols: Invalid operation
collect2: ld returned 1 exit status

What am I doing wrong? Thanks!
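For reference, this is the shape of the minimal program I am trying to compile, with the includes as I currently understand them for OpenCV 2.4's nonfree module (the image path and the minHessian value are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> //SurfFeatureDetector/SurfDescriptorExtractor in 2.4.
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    if(argc < 2){
        std::cout << "Usage: feature <image>\n";
        return 1;
    }
    cv::Mat img = cv::imread(argv[1], 0); //Grayscale.
    if(img.empty()){
        std::cout << "Error: image cannot be read.\n";
        return 1;
    }

    int minHessian = 400;
    cv::SurfFeatureDetector detector(minHessian);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(img, keypoints);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors;
    extractor.compute(img, keypoints, descriptors);

    std::cout << "Found " << keypoints.size() << " keypoints.\n";
    return 0;
}

My understanding is that this also needs -lopencv_nonfree and -lopencv_flann at link time (or pkg-config --cflags --libs opencv), on top of the libraries listed in the build command above.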

2013-07-26 13:55:04 -0600 commented answer MatchTemplate on a single color
2013-07-26 13:34:44 -0600 commented question Alpha-Dependent Template Matching

Would you mind also posting the crossCorr function that you used? I attempted to replace it and the for loops with MatchTemplate, but the result is inaccurate. Thanks!
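For context, the stand-in I tried for crossCorr was simply matchTemplate with the CV_TM_CCORR method, which computes the sliding sum of products of the template over the image; a minimal sketch, assuming single-channel inputs (whether this matches your crossCorr exactly is likely where my inaccuracy comes from):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

//Rough stand-in for the answer's crossCorr: CV_TM_CCORR computes the sliding
//cross-correlation (sum of products) of the template over the image.
void crossCorr(const cv::Mat& img, const cv::Mat& templ, cv::Mat& result)
{
    cv::matchTemplate(img, templ, result, CV_TM_CCORR);
}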

2013-07-26 10:05:24 -0600 commented answer MatchTemplate on a single color

Thanks for your reply. I will definitely read this paper. However, I should have mentioned that I managed to align both maps (rotation + translation) using the GIMP image registration plugin. It works well enough in most cases.

Now that might seem weird since I'm using matchTemplate, which aligns images as well, but what I need in the end is the similarity score provided by minMaxLoc.
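Concretely, the score I keep is the peak of the normalized matchTemplate response; a minimal sketch, assuming single-channel maps and a hypothetical helper name:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

//Returns the peak of the normalized cross-coefficient response, in [-1, 1];
//maxLoc gives the position of the best match.
double matchScore(const cv::Mat& groundTruth, const cv::Mat& slamMap)
{
    cv::Mat result;
    cv::matchTemplate(groundTruth, slamMap, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    return maxVal;
}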

2013-07-26 10:02:38 -0600 received badge  Supporter (source)
2013-07-25 13:51:34 -0600 received badge  Editor (source)
2013-07-25 13:50:24 -0600 asked a question MatchTemplate on a single color

I am attempting to compare the quality of a black-and-white map built by a robot running SLAM with a ground truth map. I decided to try MatchTemplate for that purpose. When I call the function on the maps, the result is far from accurate: the matching region is way off.

Please look at the image attached, which is a hand-drawn example of what happens: Maps. On the left is the ground truth and on the right is the SLAM map of a single room (say that I stopped my SLAM algorithm at this point). The gray rectangles represent the boundaries of each image. I would expect MatchTemplate to locate the room at the bottom left corner of the ground truth (where it should be), but it doesn't. In fact, the algorithm would match it where a lot of white can be found (such as the region enclosed by the green rectangle). Therefore, the white regions of my SLAM map affect the result of the algorithm.

I thought of two solutions, but I don't know how to apply them. First, is there a way of setting MatchTemplate to only take black into account and ignore white completely? Second, is it possible to enclose my SLAM map with a non-rectangular mask (the rooms are not always rectangular)? If not, is there another algorithm that would better fit my purpose?

I found several topics on using MatchTemplate with masks or transparency, but the solutions to these questions don't seem to apply to my case. For instance, I tried using edge detection prior to using MatchTemplate, but it doesn't work since my original map is approximately equivalent to an image on which edge detection was already applied (obviously!).

I hope I made myself clear!