Calculated Points go out of Screen (OpticalFlowPyrLK) [closed]

asked 2016-12-01 02:50:41 -0600 by Vintez

updated 2016-12-01 04:29:56 -0600

I use calcOpticalFlowPyrLK to track the points I matched during a recognition step in an augmented reality app for Android.

In a recent question I asked for alternative methods or changes that would improve the algorithm's performance. It now performs as I intended, but the points it tracks are completely wrong. Here is a small example:

After recognition, I get "good" points for the corners of the object, e.g.:

21.999258, 360.0007
25.975346, 356.97644
23.104279, 359.1565
23.653475, 358.75168

After the first iteration I already get strange results (note that I did not move the camera):

-147.47705, 316.19443
-147.8505, 297.06882
-147.57626, 311.12216
-147.62378, 308.65738

Here is my code:

calcOpticalFlow:

int BruteForceMatcher::trackWithOpticalFlow(std::vector<cv::Mat> prevPyr, std::vector<cv::Mat> nextPyr, std::vector<cv::Point2f> &srcPoints, std::vector<cv::Point2f> &srcCorners){

    std::vector<cv::Point2f> estPoints;
    std::vector<cv::Point2f> estCorners;
    std::vector<cv::Point2f> goodPoints;
    std::vector<cv::Point2f> leftsrc;
    std::vector<uchar> status;
    std::vector<float> error;

    //double opticalFlowT = cv::getTickCount(), homoT, perspT, tf = cv::getTickFrequency();

    if(!srcPoints.empty()) {

        cv::calcOpticalFlowPyrLK(prevPyr, nextPyr, srcPoints, estPoints, status, error, cv::Size(7,7));

        for (size_t i = 0; i < estPoints.size(); i++) {
            // status[i] == 1 means the flow for this point was found;
            // keep only found points with a small tracking error
            if (status[i] && error[i] < 20.f) {
                //LOGW("ERROR : %f\n", error[i]);
                // scale back up to full resolution (the frame was downscaled by 4)
                goodPoints.push_back(estPoints[i] * 4);
                leftsrc.push_back(srcPoints[i] * 4);
            }
        }

        //opticalFlowT = cv::getTickCount() - opticalFlowT;
        //LOGD("Time opticalFlow and Outlier removal: %f\n", opticalFlowT*1000./tf);
        //LOGD("Left Points (est/src): %i, %i", goodPoints.size(), leftsrc.size());

        if(goodPoints.empty()){
            //LOGD("No good Points calculated");
            return 0;
        }
        //homoT = cv::getTickCount();

        cv::Mat f = cv::findHomography(leftsrc, goodPoints);

        //homoT = cv::getTickCount() - homoT;
        //LOGD("Homography: %f\n", homoT*1000./tf);

        // findHomography returns an empty Mat on failure; check that
        // before countNonZero, which would throw on an empty matrix
        if(f.empty() || cv::countNonZero(f) < 1){
            //LOGD("Homography Matrix is empty!");
            return 0;
        }

        //perspT = cv::getTickCount();
        cv::perspectiveTransform(srcCorners, estCorners, f);

        //perspT = cv::getTickCount() - perspT;
        //LOGD("Perspective Transform: %f\n", perspT*1000./tf);

        srcCorners.swap(estCorners);
        srcPoints.swap(goodPoints);

        return (int) srcPoints.size();
    }

    return 0;
}

And the method that calls the optical flow tracking and also builds the pyramids:

std::vector<cv::Point2f> findBruteForceMatches(cv::Mat img){

    int matches = 0;
    BruteForceMatcher *bruteForceMatcher = new BruteForceMatcher();

    if(trackKLT){
        LOGD("TRACK WITH KLT");

        std::vector<cv::Mat> currPyr;
        cv::resize(img, img, cv::Size(img.cols/4, img.rows/4));
        cv::buildOpticalFlowPyramid(img, currPyr, cv::Size(9,9), 3);

        // bring the previously matched points into the downscaled coordinate system
        for(size_t i = 0; i < srcPoints.size(); i++){
            srcPoints[i] *= 0.25;
        }

        double kltTime = (double) cv::getTickCount();

        matches = bruteForceMatcher->trackWithOpticalFlow(prevPyr, currPyr, srcPoints, scene_corners);

        kltTime = (double) cv::getTickCount() - kltTime;
        LOGD("KLT Track Time: %f\n", kltTime*1000./tf);
        //returningtime = cv::getTickCount();

        if(matches > 10){
            trackKLT = true;
            prevPyr.swap(currPyr);
            currPyr.clear();
            delete bruteForceMatcher;
            return scene_corners;
        }else{
            trackKLT = false;
            prevPyr.clear();
            srcPoints.clear();
            scene_corners.clear();
            delete bruteForceMatcher;
            return scene_corners;
        }
    } else{
        LOGD("RECOGNIZE OBJECT");

        std::vector<cv::Point2f> ransacs;
        ransacs.reserve(100);

        double bfMatchTime = (double) cv::getTickCount();

        matches = bruteForceMatcher->findMatchesBF(img, features2d, descriptors ...

Closed as not relevant or outdated by Vintez on 2016-12-01 08:25:46.

Comments

I found one bug in my application: in the recognition step I scale the points down with ransacs[i] *= 0.25, but I do nothing similar in the KLT step, where I always scale the points up by four. With that fixed, the points no longer grow on every frame, but the tracked points are still far away from the recognized area. I have edited the question accordingly.

Vintez (2016-12-01 04:25:18 -0600)

I'm not quite sure why, but I can no longer reproduce the error. After testing some inputs, I now build the pyramid with cv::Size(9,9) and also search/track with a cv::Size of 9, and I don't get the error anymore.

Vintez (2016-12-01 08:25:25 -0600)