Mac/Windows differences using ORB feature detector.
Per the bug reporting instructions I'm asking about this first before submitting a bug report. I'm new to ORB so I wanted to make sure I'm not doing anything incorrect first.
The following code produces different results when run on Mac and on Windows.
Windows 10 Pro 64-bit, OpenCV 3.0.0 built with MS Visual Studio 2013 (x64)
Mac OS X 10.11.1, OpenCV 3.0.0 built with clang 7.0 (x64)
cv::Mat a, b; // 8 bit grayscale images loaded elsewhere. CV_8UC1
cv::Ptr<cv::ORB> detector = cv::ORB::create(900, 1.2, 8, 15);
std::vector<cv::KeyPoint> keypoints_object, keypoints_scene;
detector->detect(a, keypoints_object);
detector->detect(b, keypoints_scene);
//#ifdef DEBUG
printf("!!!!!!!!!!!%d\n", keypoints_object.size());
printf("!!!!!!!!!!!%d\n", keypoints_scene.size());
//#endif
//-- Step 2: Calculate descriptors (feature vectors)
cv::Ptr<cv::BRISK> extractor = cv::BRISK::create(30, 3, 1.0f);
cv::Mat descriptors_object, descriptors_scene;
extractor->compute(a, keypoints_object, descriptors_object);
extractor->compute(b, keypoints_scene, descriptors_scene);
descriptors_object.convertTo(descriptors_object, CV_32F);
descriptors_scene.convertTo(descriptors_scene, CV_32F);
//-- Step 3: Matching descriptor vectors using FLANN matcher
cv::FlannBasedMatcher matcher;
std::vector<cv::DMatch> matches;
matcher.match(descriptors_object, descriptors_scene, matches);
double max_dist = 0;
double min_dist = 300;
//-- Quick calculation of max and min distances between keypoints
for (int i = 0; i < descriptors_object.rows; i++) {
double dist = matches[i].distance;
if (dist < min_dist)
min_dist = dist;
if (dist > max_dist)
max_dist = dist;
}
//#ifdef DEBUG
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
On Windows my output is:
-- Max dist : 876.513550
-- Min dist : 21.587032
On Mac my output is:
-- Max dist : 906.911426
-- Min dist : 21.587032
Same inputs.
Thanks for any insight.
In theory it shouldn't differ, but it is possible that the two operating systems use different precision in their floating-point calculations, which could lead to this difference.
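To illustrate the point: even a tiny difference in a computed keypoint response (for example from fused multiply-add or a different math-library code path) can flip a threshold decision, and from there the keypoint count, descriptors and match distances all change. A toy sketch with made-up numbers, not OpenCV internals:
#include <cstdio>
int main() {
    // Hypothetical response values for the same keypoint, computed with
    // slightly different rounding on two platforms (made-up numbers).
    float response_win = 0.0500001f;
    float response_mac = 0.0499999f;
    float threshold    = 0.05f;   // retention threshold
    // The keypoint survives on one platform and is culled on the other,
    // which changes the keypoint set and everything computed from it.
    std::printf("Windows keeps keypoint: %s\n", response_win >= threshold ? "yes" : "no");
    std::printf("Mac keeps keypoint:     %s\n", response_mac >= threshold ? "yes" : "no");
    return 0;
}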
Maybe, but in all my experience developing cross-platform software I've never encountered precision issues like that, and I would be very surprised. Besides, it would be quite worrisome for a library like OpenCV to have such a problem. No, the results should be identical.
I agree that it should be identical. What you should do is step through the OpenCV source code and find where one of those functions takes a different execution path on Windows than on Linux, then check whether those separate parts produce the same result.
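For example, one way to narrow it down is to dump the intermediate results on each platform to a text file and diff the two files; the first stage where they diverge is where to dig into the source. A rough sketch (the helper name dumpKeypoints is mine, not an OpenCV function):
#include <opencv2/core.hpp>
#include <cstdio>
#include <vector>
// Write keypoints in a stable text format so the Mac and Windows runs
// can be compared with a plain diff tool.
static void dumpKeypoints(const char* path, const std::vector<cv::KeyPoint>& kps) {
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    for (const cv::KeyPoint& kp : kps)
        std::fprintf(f, "%.6f %.6f %.6f %.6f %d\n",
                     kp.pt.x, kp.pt.y, kp.response, kp.angle, kp.octave);
    std::fclose(f);
}
// Usage after detection:
// dumpKeypoints("keypoints_object.txt", keypoints_object);
// dumpKeypoints("keypoints_scene.txt", keypoints_scene);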
Will do, thanks.
I see the same phenomenon between Linux and Windows, and I don't know how to deal with it.
As previously suggested, you need to debug both stacks and produce detailed debug output at each stage so we can help you out.
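A minimal sketch of the kind of debug output that helps, assuming the descriptors were converted to CV_32F as in the code above (the helper name dumpMatches is mine, not an OpenCV function): dump the match indices and distances from each platform and compare them stage by stage against the keypoint dumps.
#include <opencv2/core.hpp>
#include <cstdio>
#include <vector>
// Write each match's indices and distance so the Windows and Mac runs
// can be compared line by line with a diff tool.
static void dumpMatches(const char* path, const std::vector<cv::DMatch>& matches) {
    FILE* f = std::fopen(path, "w");
    if (!f) return;
    for (const cv::DMatch& m : matches)
        std::fprintf(f, "%d %d %.6f\n", m.queryIdx, m.trainIdx, m.distance);
    std::fclose(f);
}
// Usage after matching:
// dumpMatches("matches.txt", matches);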