difference between detect()/compute() and detectAndCompute() [closed]
Hi everyone,
For the last couple of months I have been experimenting with feature extraction algorithms. I noticed that there are two ways of computing features, and I was wondering what the difference is between:
std::vector<cv::KeyPoint> keypoint;
cv::Mat descriptors;
detector_->detect(frame_, keypoint);
descriptor_->compute(frame_, keypoint, descriptors);
and:
std::vector<cv::KeyPoint> keypoint;
cv::Mat descriptors;
detector_->detectAndCompute(features_frame_, cv::noArray(), keypoint, descriptors);
when using the Feature2D class (http://docs.opencv.org/3.1.0/d0/d13/c...). Is there a reason to use the second call instead of the first?
With kind regards
Vasilis
The reason is fairly simple: the separate calls let you combine any keypoint detector with any descriptor extractor you wish.
That opens up a ton of extra possibilities.
One more question then: if I use the separate interface (detect() / compute()) with an algorithm that provides both a keypoint detector and a descriptor extractor (detectAndCompute()), will there be any overhead?
Probably, but I am not sure whether the impact will be large!
Thanks a lot for the info! Could you post your first comment as an answer so I can accept it as the correct one?