MSER slower in 3.0.0 than 2.4.9
I have a project that I originally developed with OpenCV 2.4.9, which used MSER like so...
cv::MserFeatureDetector mser(delta, mnArea, mxArea, maxVariation, minDiversity);
std::vector< std::vector< cv::Point > > ptblobs;
mser(img, ptblobs);
It works just as I expect. I switched to OpenCV 3.0.0, so I had to change the above to this...
cv::Ptr<cv::MSER> mser = cv::MSER::create(delta, mnArea, mxArea, maxVariation, minDiversity);
std::vector< std::vector< cv::Point > > ptblobs;
std::vector<cv::Rect> bboxes;
mser->detectRegions(img, ptblobs, bboxes);
It is MUCH slower in version 3.0.0 than in 2.4.9. If I comment out the detectRegions() line, everything else runs at the same speed as before, so I know it's the detectRegions() call and not something else. What am I doing wrong?
Thanks in advance for your help
EDIT/ADDITION: I've come up with a very simple project that isolates and exhibits the issue. With 2.4.9 I get 1.5 s; with 3.0.0 it's 4.4 s. I'm posting the code here in case it's useful...
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <chrono>
#include <ctime>

#define USE249

using namespace std;

int main(int argc, char** argv)
{
    std::cout << "OpenCV version: "
              << CV_MAJOR_VERSION << "."
              << CV_MINOR_VERSION << "."
              << CV_SUBMINOR_VERSION
              << std::endl;

    cv::Mat im = cv::imread("C:/example.jpg", 1);
    if (im.empty())
    {
        cout << "Cannot open image!" << endl;
        return -1;
    }

    cv::Mat gray;
    cv::cvtColor(im, gray, cv::COLOR_BGR2GRAY);

    int mnArea = 40 * 40;
    int mxArea = im.rows * im.cols * 0.4;

    std::vector< std::vector< cv::Point > > ptblobs;
    std::vector<cv::Rect> bboxes;

    std::chrono::time_point<std::chrono::system_clock> start, end;
    start = std::chrono::system_clock::now();
#ifndef USE249
    cv::Ptr<cv::MSER> mser = cv::MSER::create(1, mnArea, mxArea, 0.25, 0.2);
    mser->detectRegions(gray, ptblobs, bboxes);
#else
    cv::MserFeatureDetector mser(1, mnArea, mxArea, 0.25, 0.2);
    mser(gray, ptblobs);
#endif
    end = std::chrono::system_clock::now();

    std::chrono::duration<double> elapsed_seconds = end - start;
    std::time_t end_time = std::chrono::system_clock::to_time_t(end);
    std::cout << "finished computation at " << std::ctime(&end_time)
              << "elapsed time: " << elapsed_seconds.count() << "s\n";

    cv::namedWindow("image", cv::WINDOW_NORMAL);
    cv::imshow("image", im);
    cv::waitKey(0);
    return 0;
}
Are you sure that both versions are built with the same CMake flags? It could be due to a missing optimization.
That's a good question. I used the defaults in CMake (the GUI version) for both builds, and as near as I can tell they are essentially the same. I did notice that in 3.0.0 "WITH_IPP" is enabled by default, as I hear there is a free subset of IPP; in my 2.4.9 build it was disabled completely. Just for giggles I rebuilt 3.0.0 with IPP disabled, but it did not change the performance at all. I also tried using a UMat instead of a Mat, but that too had no effect.
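One way to compare the two builds directly is to print cv::getBuildInformation() from a binary linked against each version and diff the output; it reports the compiler, optimization flags, and third-party libraries (IPP, TBB, OpenCL, etc.) the library was actually built with. A minimal sketch:

```cpp
#include <opencv2/core/core.hpp>  // cv::getBuildInformation (2.4 and 3.0)
#include <iostream>

int main()
{
    // Dumps the CMake configuration the installed OpenCV was built with.
    // Build and run this against both 2.4.9 and 3.0.0, then diff the output.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
```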
Do I have to do anything to specify that I want the grayscale version of MSER rather than the color version? The image I run detectRegions() on is already grayscale.
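For what it's worth, my reading of the 3.0 implementation is that the choice between the grayscale algorithm and the color (MSCR) variant is driven by the number of channels of the input rather than by a separate flag, so passing a single-channel Mat should already select the grayscale path. A hedged, self-contained sketch using a synthetic image (the channel-based dispatch is an assumption from the source, not documented API):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int main()
{
    // Synthetic single-channel test image: a bright blob on a dark field.
    cv::Mat gray = cv::Mat::zeros(200, 200, CV_8UC1);
    cv::rectangle(gray, cv::Rect(60, 60, 80, 80), cv::Scalar(255), cv::FILLED);

    // Single-channel input => grayscale MSER; a 3-channel input would
    // (on my reading of the 3.0 source) trigger the slower color MSCR path.
    CV_Assert(gray.channels() == 1);

    cv::Ptr<cv::MSER> mser = cv::MSER::create(1, 40 * 40, 16000, 0.25, 0.2);
    std::vector< std::vector<cv::Point> > regions;
    std::vector<cv::Rect> bboxes;
    mser->detectRegions(gray, regions, bboxes);
    return 0;
}
```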
No experience with MSER, so I'm not really sure there...
Maybe this one will help you: http://answers.opencv.org/question/67...