Fast feature detection

Hi,

I have a GStreamer pipeline and output the frames to my application via an appsink.

    // Wrap the mapped GStreamer buffer in a Mat header without copying the data.
    cv::Mat frame(cv::Size(720, 405), CV_8UC3, (char*)map.data, cv::Mat::AUTO_STEP);
    cv::Mat frame_gray;
    if (frame.channels() > 1) {
        cv::cvtColor(frame, frame_gray, cv::COLOR_BGR2GRAY);
    } else {
        frame_gray = frame;
    }
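
For reference, this is roughly how I sanity-check the wrapped buffer right after the conversion (just a sketch inside my appsink callback; I assume map is the GstMapInfo, so map.size gives the mapped length):

    // Quick check that the wrapped Mat matches what the appsink delivers.
    std::cout << "size:  " << frame.cols << "x" << frame.rows << std::endl;
    std::cout << "type:  " << frame.type() << " (CV_8UC3 = " << CV_8UC3 << ")" << std::endl;
    std::cout << "bytes: " << frame.total() * frame.elemSize()
              << " expected vs. " << map.size << " mapped" << std::endl;

    // frame does not own map.data, so clone if the buffer may be unmapped later.
    cv::Mat owned_gray = frame_gray.clone();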

The map.data buffer is 874,800 bytes (720 x 405 x 3), so the format seems to be right. I pass frame_gray to my algorithm:

    cv::Rect rect(250, 250, pipeCtrl->displayData->getSelectionRectWidth(),
                            pipeCtrl->displayData->getSelectionRectHeight());
    pipeCtrl->algo.consensus.estimate_scale = false;
    pipeCtrl->algo.consensus.estimate_rotation = true;
    pipeCtrl->algo.initialize(frame_gray, rect);

There I try to use the FAST feature detector:

    void algo::initialize(const cv::Mat &im_gray, const cv::Rect &rect) {

        cv::imshow("Display window", im_gray);

        detector = cv::FastFeatureDetector::create();
        descriptor = cv::BRISK::create();
        std::vector<cv::KeyPoint> keypoints;
        detector->detect(im_gray, keypoints);
        // [whatever...]
    }

The imshow works fine: I can see the scaled (720x405 px) grayscale image. But the FAST feature detector finds ~4.2 billion features in the image, at least that is what keypoints.size() returns after the detector runs.

The algorithm works fine if I use VideoCapture and a webcam as input with the same grayscale conversion, so I thought it might be a problem with the data I get from the GStreamer appsink. But that data seems to be OK (imshow works on the non-grayscale image too).

Am I missing something? Is there a way to debug the input to the FAST feature detector without messing around with the OpenCV source code?
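
The best I have come up with so far is dumping the exact Mat that goes into detect() right before the call, roughly like this (just a sketch; fast_input.png is only a scratch file name):

    // Inspect the exact input the detector sees.
    std::cout << "im_gray: " << im_gray.cols << "x" << im_gray.rows
              << " type=" << im_gray.type() << " (CV_8UC1 = " << CV_8UC1 << ")"
              << " continuous=" << im_gray.isContinuous() << std::endl;
    cv::imwrite("fast_input.png", im_gray);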

Have a nice day!

[edit]

It seems to be a general problem: if I load a picture from the hard disk and try to use it with the FAST detector, I get the same ~4 billion features.

I am using OpenCV inside a Qt 5.5 application and I think that is the problem, but I don't know what exactly goes wrong :/
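
To narrow that down, I want to try the same thing in a minimal standalone program outside of Qt, roughly like this (sketch; test.png is just any image on disk):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // Load any image as grayscale and run FAST on it, nothing else.
        cv::Mat gray = cv::imread("test.png", cv::IMREAD_GRAYSCALE);
        if (gray.empty()) {
            std::cerr << "could not read test.png" << std::endl;
            return 1;
        }
        cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
        std::vector<cv::KeyPoint> keypoints;
        fast->detect(gray, keypoints);
        std::cout << "keypoints: " << keypoints.size() << std::endl;
        return 0;
    }

If even that returns a garbage keypoint count, the problem is somewhere in my OpenCV build or linking rather than in the GStreamer data.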