
uavcamera's profile - activity

2015-12-22 03:53:34 -0600 commented question Fast feature detect

Problem solved: I used the non-debug libraries in a project compiled with debug flags. That's why some of the algorithms fail.
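For reference, a minimal CMake-style sketch of how to keep the runtimes consistent; the target and library names (`myapp`, `opencv_world300`) are illustrative, not taken from the question:

```cmake
# Link the matching runtime: debug-built OpenCV libraries for Debug builds,
# release-built ones otherwise. Mixing the two corrupts STL containers
# (such as the keypoint vector) across the library boundary.
target_link_libraries(myapp
    debug     opencv_world300d   # hypothetical debug library name
    optimized opencv_world300)   # hypothetical release library name
```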

2015-12-10 21:04:53 -0600 commented question Fast feature detect

4 billion is what vector.size() returns after running the FAST detector; more precisely, that's the output. But the application seems to crash in another thread and then displays garbage in the debugging output, so I am not sure about anything right now :/

Something goes terribly wrong inside the FAST detector algorithm for some reason unknown to me. I am compiling without any IPP/SSE/TBB features right now, to hopefully narrow it down to a single point of failure.

[edit]

It's not a problem specific to the FAST detector; AGAST fails too :/

[edit2] have a look at this:

http://postimg.org/image/7nzkh1tj5/

That's totally weird. These values are not right, not at all.

2015-12-09 21:28:03 -0600 received badge  Editor (source)
2015-12-09 18:03:54 -0600 commented question Fast feature detect

291,600 px -> 720x405

Even if there is a lot of noise, how can it find 4 billion features?

2015-12-09 04:49:02 -0600 asked a question Fast feature detect

Hi,

I have a GStreamer pipeline and output the frames to my application via an appsink.

cv::Mat frame = cv::Mat(cv::Size(720, 405), CV_8UC3, (char*)map.data, cv::Mat::AUTO_STEP);
cv::Mat frame_gray;
if (frame.channels() > 1) {
    cv::cvtColor(frame, frame_gray, CV_BGR2GRAY);
} else {
    frame_gray = frame;
}

map.data is 874,800 bytes (720 × 405 × 3), so the format seems to be right. I pass frame_gray to my algorithm:

cv::Rect rect = Rect(250, 250, pipeCtrl->displayData->getSelectionRectWidth(), pipeCtrl->displayData->getSelectionRectHeight());
pipeCtrl->algo.consensus.estimate_scale = false;
pipeCtrl->algo.consensus.estimate_rotation = true;
pipeCtrl->algo.initialize(frame_gray, rect);

There I try to use the FAST feature detector.

void algo::initialize(const Mat im_gray, const Rect rect) {

    imshow("Display window", im_gray);

    detector = cv::FastFeatureDetector::create();
    descriptor = cv::BRISK::create();
    vector<KeyPoint> keypoints;
    detector->detect(im_gray, keypoints);
    // [whatever...]
}

The imshow works fine; I can see the scaled (720x405 px) grayscale image. But the FAST feature detector finds ~4.2 billion features in the image, at least that is what keypoints.size() returns after the FAST algorithm runs.

The algorithm works fine if I use VideoCapture with a webcam as input, using the same grayscale conversion. So I think it might be a problem with the data I get from the GStreamer appsink, but that data seems to be OK (imshow works on the non-grayscale image too).

Am I missing something? Is there a way to debug the input to the FAST feature detector without messing around with the source code?

Have a nice day!

[edit]

It seems to be a general problem: if I load a picture from the hard disk and try to use it with the FAST detector, I get the same 4 billion features.

I am using OpenCV inside a Qt 5.5 application and I think that is the problem, but I don't know what exactly goes wrong :/

2015-10-16 19:07:37 -0600 commented question Documentation - KeyPoint class

@LorenaGdL thanks for that http://docs.opencv.org/master/d4/db1/... !

Is there a similar tutorial for writing the code? Some things are not clear at first glance for newbies (things like CV_OUT).

2015-10-16 05:35:03 -0600 commented question Documentation - KeyPoint class

I expected two functions or a switch parameter: convertTo and convertFrom, or something like convert(..., bool fromTo). Two functions would make more sense in my eyes, because most of the parameters in the overloaded function have no use in the Point -> KeyPoint case.

I don't know why it is solved like this, but I think it has historical/compatibility reasons.

Moving/reorganizing the comment would clarify the documentation, yes; that is the easy way without causing compatibility issues.

2015-10-16 04:42:34 -0600 asked a question Documentation - KeyPoint class

Hi,

I have a question about the documentation of the KeyPoint class.

see files:

  • core/include/opencv2/core/types.hpp
  • core/src/opencv2/core/types.cpp

Background:

I was looking for a way to convert vector< KeyPoint > to vector< Point2f >.

There is the convert(const std::vector< KeyPoint > &keypoints, std::vector< Point2f > &points2f, const std::vector< int > &keypointIndexes=std::vector< int >()) function. The documentation in the header file says:

/**
This method converts vector of keypoints to vector of points or the reverse, where each keypoint is
assigned the same size and the same orientation.

@param keypoints Keypoints obtained from any feature detection algorithm like SIFT/SURF/ORB
@param points2f Array of (x,y) coordinates of each keypoint
@param keypointIndexes Array of indexes of keypoints to be converted to points. (Acts like a mask to
convert only specified keypoints)
*/

and there is the convert(const std::vector< Point2f > &points2f, std::vector< KeyPoint > &keypoints, float size=1, float response=1, int octave=0, int class_id=-1) function, for which the documentation says:

/** @overload
@param points2f Array of (x,y) coordinates of each keypoint
@param keypoints Keypoints obtained from any feature detection algorithm like SIFT/SURF/ORB
@param size keypoint diameter
@param response keypoint detector response on the keypoint (that is, strength of the keypoint)
@param octave pyramid octave in which the keypoint has been detected
@param class_id object id
*/

which expands to

"This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts."

in the documentation at http://docs.opencv.org/master/d2/d29/...

But if you look at the implementation, it is clear that the actual functionality of the two convert() functions does not fit their documentation. The first one converts from vector< KeyPoint > to vector< Point2f > and the overloaded one from vector< Point2f > to vector< KeyPoint >.

So now I'm not sure whether

  1. everything is OK and well documented, and I, as a newbie, just cannot read the documentation properly
  2. the documentation in core/include/opencv2/core/types.hpp is wrong
  3. the implementation in core/src/opencv2/core/types.cpp is wrong

It would be nice if you guys could help me out on this. Is this an issue that needs to be fixed? Or is it normal to document it this way?