Mac/Windows differences using ORB feature detector.

asked 2016-01-20 15:17:09 -0600

Per the bug-reporting instructions, I'm asking here before submitting a bug report. I'm new to ORB, so I want to make sure I'm not doing anything incorrect.

The following code produces different results when run on Mac and Windows.

Windows 10 Pro 64-bit, built with MS Visual Studio 2013 (x64)

Mac OS X 10.11.1, built with clang 7.0 (x64)

OpenCV 3.0.0 on both platforms.

    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <cstdio>
    #include <vector>

    cv::Mat a, b; // 8-bit grayscale images loaded elsewhere (CV_8UC1)

    //-- Step 1: Detect keypoints with ORB
    cv::Ptr<cv::ORB> detector = cv::ORB::create(900, 1.2f, 8, 15);

    std::vector<cv::KeyPoint> keypoints_object, keypoints_scene;

    detector->detect(a, keypoints_object);
    detector->detect(b, keypoints_scene);
    //#ifdef DEBUG
    // size() returns size_t; cast so the format specifier is portable
    printf("!!!!!!!!!!!%d\n", (int)keypoints_object.size());
    printf("!!!!!!!!!!!%d\n", (int)keypoints_scene.size());
    //#endif

    //-- Step 2: Calculate descriptors (feature vectors) with BRISK
    cv::Ptr<cv::BRISK> extractor = cv::BRISK::create(30, 3, 1.0f);

    cv::Mat descriptors_object, descriptors_scene;

    extractor->compute(a, keypoints_object, descriptors_object);
    extractor->compute(b, keypoints_scene, descriptors_scene);

    // FlannBasedMatcher's KD-tree index expects float descriptors
    descriptors_object.convertTo(descriptors_object, CV_32F);
    descriptors_scene.convertTo(descriptors_scene, CV_32F);

    //-- Step 3: Match descriptor vectors using the FLANN matcher
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    //-- Quick calculation of max and min distances between matches
    double max_dist = 0;
    double min_dist = 300;
    for (int i = 0; i < descriptors_object.rows; i++) {
        double dist = matches[i].distance;
        if (dist < min_dist)
            min_dist = dist;
        if (dist > max_dist)
            max_dist = dist;
    }
    //#ifdef DEBUG
    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);
    //#endif

On Windows my output is:

    -- Max dist : 876.513550
    -- Min dist : 21.587032

On Mac my output is:

    -- Max dist : 906.911426
    -- Min dist : 21.587032

Same inputs.
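If it helps narrow things down, a coarse per-stage checksum could be captured on both machines to show where the pipeline first diverges. A sketch (dumpStage is a hypothetical helper, not part of my program):

    #include <opencv2/core.hpp>
    #include <cstdio>
    #include <vector>

    // Hypothetical debugging helper: print the keypoint count, coordinate sums
    // and a descriptor checksum for one pipeline stage, so the Windows and Mac
    // runs can be diffed after detect(), after compute(), and after match().
    static void dumpStage(const char* name,
                          const std::vector<cv::KeyPoint>& kps,
                          const cv::Mat& desc)
    {
        double xsum = 0, ysum = 0;
        for (size_t i = 0; i < kps.size(); i++) {
            xsum += kps[i].pt.x;
            ysum += kps[i].pt.y;
        }
        // cv::sum() collapses the descriptor matrix to a single scalar
        cv::Scalar dsum = desc.empty() ? cv::Scalar() : cv::sum(desc);
        printf("%s: %d keypoints, x-sum %.3f, y-sum %.3f, desc-sum %.0f\n",
               name, (int)kps.size(), xsum, ysum, dsum[0]);
    }

    // e.g. dumpStage("object", keypoints_object, descriptors_object);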

Thanks for any insight.


Comments

In theory it shouldn't differ, but it is possible that the two operating systems use different precision in their floating-point calculations, which could lead to this difference.

StevenPuttemans (2016-01-21 04:31:16 -0600)
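One way to test that hypothesis is to take floating point out of the matching stage entirely: BRISK descriptors are binary (CV_8U), so they can be matched exactly with integer Hamming distance instead of being converted to CV_32F for FLANN, whose default KD-tree index is approximate and randomized. A minimal sketch, reusing the descriptor variables from the question:

    #include <opencv2/features2d.hpp>

    // Brute-force matching on the raw CV_8U BRISK descriptors (i.e. before the
    // convertTo(CV_32F) calls); Hamming distance is pure integer arithmetic.
    // If the min/max distances still differ across platforms, the divergence
    // is upstream in the detector or extractor, not in FLANN.
    cv::BFMatcher bf(cv::NORM_HAMMING);
    std::vector<cv::DMatch> bf_matches;
    bf.match(descriptors_object, descriptors_scene, bf_matches);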

Maybe, but in all my experience developing cross-platform code I've never encountered precision issues like that; I would be very surprised. Besides, that would be quite worrisome for a library like OpenCV. No, the results should be identical.

dibbitson (2016-01-21 10:15:25 -0600)

I agree that it should be identical. What you should do is run through the OpenCV source code and see where one of those functions takes a different execution path on Windows than on Mac, then check whether those separate parts produce the same result.

StevenPuttemans (2016-01-21 11:51:34 -0600)
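Before diving into the source, it may also be worth comparing the two builds' configurations and ruling out SIMD-dispatched code paths, since those are the usual place where execution differs per platform. A sketch using the standard core calls:

    #include <opencv2/core.hpp>
    #include <iostream>

    // Print each installation's compile-time configuration; different SSE/AVX
    // or IPP options mean different low-level code paths.
    std::cout << cv::getBuildInformation() << std::endl;

    // Disable the optimized (SIMD/IPP) branches before running the pipeline.
    // If the Mac and Windows outputs then agree, the difference lives in a
    // vectorized code path rather than in the algorithm itself.
    cv::setUseOptimized(false);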

Will do, thanks.

dibbitson (2016-01-21 15:41:17 -0600)

I see the same phenomenon between Linux and Windows, and I do not know how to deal with it.

tao li (2019-05-21 07:41:08 -0600)

As previously suggested, you need to debug both stacks and come up with detailed debug output so that we can help you out.

StevenPuttemans (2019-06-26 04:37:20 -0600)
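For anyone hitting the same thing: the detailed debug output can be as simple as writing one line per match on each platform and diffing the two files; the first differing line localizes the problem. A sketch (the output filename is made up):

    #include <fstream>

    // One line per match; diff the file produced on Windows against the one
    // produced on Mac (or Linux) to find the first point of divergence.
    std::ofstream out("matches_dump.csv"); // hypothetical output filename
    for (size_t i = 0; i < matches.size(); i++)
        out << i << "," << matches[i].queryIdx << ","
            << matches[i].trainIdx << "," << matches[i].distance << "\n";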