Partial Matches using FLANN Based Matcher. How to increase leniency? c++

asked 2017-10-04 06:06:13 -0500

LeTo

updated 2017-10-05 04:14:02 -0500


I am currently trying to use OpenCV's FLANN-based matcher with SURF features to match a template in a scene. However, the template shows the object at a different rotation, scale and perspective than the scene. So far the code has found some matches, but not enough for a bounding box to be drawn around the template in the scene (using findHomography() and perspectiveTransform()).

The objects of interest are birds in flight and as such are ever-changing. What I want to achieve is to have a bounding box drawn around the object of interest in the scene when as few as 2 "good" matches are found.

Any and all help would be greatly appreciated!

Thanks, Levi.

The code for this matching function is included below:

Mat Homography(Mat tmpl, Mat scene)
{
    Mat img_matches;

    //Detect the keypoints and extract descriptors (feature vectors) using SURF
    int minHessian = 400;

    Ptr<SURF> detector = SURF::create( minHessian );

    vector<KeyPoint> keypoints_tmpl, keypoints_scene;
    Mat descriptors_tmpl, descriptors_scene;

    detector->detectAndCompute(tmpl, Mat(), keypoints_tmpl, descriptors_tmpl );
    detector->detectAndCompute(scene, Mat(), keypoints_scene, descriptors_scene );

    //Matching descriptor vectors using FLANN matcher
    if ( descriptors_tmpl.empty() )
        return scene;   // no template descriptors found; nothing to match
    if ( descriptors_scene.empty() )
        return scene;   // no scene descriptors found; nothing to match

    FlannBasedMatcher matcher;
    vector< DMatch > matches;
    matcher.match( descriptors_tmpl, descriptors_scene, matches );

    double max_dist = 0; double min_dist = 100;

    //max and min distances between keypoints
    for( int i = 0; i < descriptors_tmpl.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    //Finds "good" matches (distance less than 3*min_dist )
    vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_tmpl.rows; i++ )
    {
        if( matches[i].distance < 3*min_dist )
            good_matches.push_back( matches[i] );
    }

    drawMatches( tmpl, keypoints_tmpl, scene, keypoints_scene,
                   good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                   vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //Localize the tmpl
    vector<Point2f> templ;
    vector<Point2f> scn;

    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        //keypoints from the good matches
        templ.push_back( keypoints_tmpl[ good_matches[i].queryIdx ].pt );
        scn.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
    }

    Mat H = findHomography( templ, scn, RANSAC );

    //corners from the template ( the object to be "detected" )
    vector<Point2f> tmpl_corners(4);
    tmpl_corners[0] = Point2f( 0, 0 );
    tmpl_corners[1] = Point2f( tmpl.cols, 0 );
    tmpl_corners[2] = Point2f( tmpl.cols, tmpl.rows );
    tmpl_corners[3] = Point2f( 0, tmpl.rows );
    vector<Point2f> scene_corners(4);

    perspectiveTransform( tmpl_corners, scene_corners, H );

    //-- Draw lines between the corners (bounding box)
    line( img_matches, scene_corners[0] + Point2f( tmpl.cols, 0), scene_corners[1] + Point2f( tmpl.cols, 0), Scalar(0, 255, 0), 4 );
    line( img_matches, scene_corners[1] + Point2f( tmpl.cols, 0), scene_corners[2] + Point2f( tmpl.cols, 0), Scalar( 0, 255, 0), 4 );
    line( img_matches, scene_corners[2] + Point2f( tmpl.cols, 0), scene_corners[3] + Point2f( tmpl.cols, 0), Scalar( 0, 255, 0), 4 );
    line( img_matches, scene_corners[3] + Point2f( tmpl.cols, 0), scene_corners[0] + Point2f( tmpl.cols, 0), Scalar( 0, 255, 0), 4 );

    return img_matches;
}


@levi, yes, we need to see your code, but please as a TEXT version !

(i removed the silly shots)

berak ( 2017-10-04 06:23:28 -0500 )

@berak When I add the code as 'preformatted text' it seems to recognize only some of it as code.

LeTo ( 2017-10-05 03:54:55 -0500 )

don't worry, we'll help with the formatting. just add the code

(also: format as code, 4th button from the left)

berak ( 2017-10-05 03:58:10 -0500 )

@berak Thanks, I have remedied this.

LeTo ( 2017-10-05 04:08:09 -0500 )

yea, thanks a lot !

(now folks here can actually try with your code)

berak ( 2017-10-05 04:20:35 -0500 )

you need at least 4 points for a homography.

(also, flying birds will vary so much in appearance, that it's quite unlikely, you get enough good features)

berak ( 2017-10-06 10:19:50 -0500 )

Yes, that sounds about right, thanks. I might have to steer away from the birds. Is it possible to get features from more than one template for a homography?

LeTo ( 2017-10-07 04:34:56 -0500 )

can you explain :

from more than one template ?

berak ( 2017-10-07 04:41:36 -0500 )

So, to get features from, say, two pictures of a bird in different positions, and use the features from both pictures to try to make a match.

LeTo ( 2017-10-07 04:44:52 -0500 )

that's a cute idea, i'm just worried that different features (keypoints) from different images also relate to different coordinate systems. if you "mix" things, it's doubtful that you get a proper homography.

but i'm just guessing.

also : poses. tip of a wing will be at the back for a sitting bird, and perpendicular while in flight.

a homography will probably only work for "rigid" things.

berak ( 2017-10-07 04:57:06 -0500 )