OpenCV: vector<DMatch> matches, x,y coordinates

asked 2014-03-19 04:21:20 -0500 by smallbbb

updated 2014-03-19 07:58:14 -0500 by berak

Dear all,

I am using SIFT to recognize an image by comparing it against web-cam video.

    colorImg.setFromPixels(vidGrabber.getPixels(), 320, 240);

    grayImage = colorImg;

    // take the abs value of the difference between background and incoming and then threshold:
    //grayDiff.absDiff(grayBg, grayImage);
    //grayDiff.threshold(threshold);

    // find contours which are between the size of 20 pixels and 1/3 the w*h pixels.
    // also, find holes is set to true so we will get interior contours as well....
    //contourFinder.findContours(grayDiff, 20, (320*240)/3, 10, true);  // find holes
}

queryImg = cv::imread("..\\Images\\1.bmp", CV_LOAD_IMAGE_GRAYSCALE);

// trainImg = cv::imread("..\\Images\\2.bmp", CV_LOAD_IMAGE_GRAYSCALE);
trainImg = grayImage.getCvImage();

if(queryImg.empty() || trainImg.empty())
{
    printf("Can't read one of the images\n");
    return; // nothing to match against
}

// Detect keypoints in both images.
SurfFeatureDetector detector(800);
detector.detect(queryImg, queryKeypoints);
detector.detect(trainImg, trainKeypoints);

// Print how many keypoints were found in each image.
printf("Found %d and %d keypoints.\n", (int)queryKeypoints.size(), (int)trainKeypoints.size());

// Compute the SIFT feature descriptors for the keypoints.
// Multiple features can be extracted from a single keypoint, so the result is a
// matrix where row 'i' is the list of features for keypoint 'i'.
SiftDescriptorExtractor extractor;
Mat queryDescriptors, trainDescriptors;
extractor.compute(queryImg, queryKeypoints, queryDescriptors);
extractor.compute(trainImg, trainKeypoints, trainDescriptors);

// Print some statistics on the matrices returned.

cv::Size size = queryDescriptors.size();
printf("Query descriptors height: %d, width: %d, area: %d, non-zero: %d\n", 
       size.height, size.width, size.area(), countNonZero(queryDescriptors));

size = trainDescriptors.size();
printf("Train descriptors height: %d, width: %d, area: %d, non-zero: %d\n", 
       size.height, size.width, size.area(), countNonZero(trainDescriptors));
// For each descriptor in 'queryDescriptors', find the closest matching
// descriptor in 'trainDescriptors' (an exhaustive search). match() returns
// one DMatch per query descriptor: for each keypoint in 'query', it reports
// the train descriptor that most closely matches it.

BruteForceMatcher< L2<float> > matcher;
// (an alternative is cv::BFMatcher(cv::NORM_L2, true) to enable cross-checking)

matcher.match(queryDescriptors, trainDescriptors, matches);

printf("Found %d matches.\n", (int)matches.size());

How can I get the x,y coordinates of the matched points? Thanks!


Comments


Where is "matches" defined?

GilLevi ( 2014-03-19 07:22:38 -0500 )

Here is the .h:

class testApp : public ofBaseApp{

public:
    void setup();
    void update();
    void draw();

    void keyPressed(int key);
    void keyReleased(int key);
    void mouseMoved(int x, int y );
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased(int x, int y, int button);
    void windowResized(int w, int h);
    void dragEvent(ofDragInfo dragInfo);
    void gotMessage(ofMessage msg);

smallbbb ( 2014-03-23 17:48:29 -0500 )

#ifdef _USE_LIVE_VIDEO
    ofVideoGrabber      vidGrabber;
#else
    ofVideoPlayer       vidPlayer;
#endif

    ofxCvColorImage         colorImg;
    ofxCvGrayscaleImage     grayImage;


    int                 threshold;
    bool                bLearnBakground;
    Mat                 queryImg;
    Mat                 trainImg;
    vector<KeyPoint>    queryKeypoints, trainKeypoints;
    vector<DMatch>      matches;

    ofxCvGrayscaleImage     showImg;
    IplImage *      showimage;

    ofxCvGrayscaleImage compareImg;
    IplImage *      compareimage;

};

smallbbb ( 2014-03-23 17:48:51 -0500 )
GilLevi ( 2014-03-24 08:20:07 -0500 )

I can't find the coordinates of the matched points.

smallbbb ( 2014-03-24 22:00:52 -0500 )

Yes, it refers to that DMatch.

smallbbb ( 2014-03-25 22:52:59 -0500 )

I think the x and y coordinates of match i are:

queryKeypoints[matches[i].queryIdx].pt.x
queryKeypoints[matches[i].queryIdx].pt.y
trainKeypoints[matches[i].trainIdx].pt.x
trainKeypoints[matches[i].trainIdx].pt.y

Check it to make sure.

GilLevi ( 2014-03-27 05:45:59 -0500 )