
Error on cv::cuda::DescriptorMatcher::knnMatch

asked 2015-04-26 14:48:16 -0500

krips89

While trying to use the knnMatch function of cv::cuda::DescriptorMatcher, following the documentation provided here, I am getting the following error:

OpenCV Error: The function/feature is not implemented (getGpuMat is available only for cuda::GpuMat and cuda::HostMem) in getGpuMat, file /home/sarkar/opencv/opencv/modules/core/src/matrix.cpp, line 1419
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/sarkar/opencv/opencv/modules/core/src/matrix.cpp:1419: error: (-213) getGpuMat is available only for cuda::GpuMat and cuda::HostMem in function getGpuMat

Any idea what it means? I am using very simple code like the following:

matcher_gpu_->knnMatch(descriptors_frame, descriptors_model, matches, 2);

where descriptors_frame and descriptors_model are cv::Mat, and matches is a std::vector<std::vector<cv::DMatch>>.


2 answers


answered 2015-06-08 17:24:05 -0500

Eduardo

updated 2015-06-08 17:30:51 -0500

I hope the original poster has already solved his problem, but the error above means that you have to supply the descriptors as cv::cuda::GpuMat and not as cv::Mat, since you are using the GPU matcher class.
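
In other words, the descriptors have to be uploaded to the device first. A minimal sketch of the fix for the original snippet (assuming matcher_gpu_ is a cv::cuda::DescriptorMatcher and the cv::Mat descriptors come from a CPU extractor) could look like this:

    //Upload the CPU descriptors to the GPU before matching
    cv::cuda::GpuMat descriptors_frame_gpu(descriptors_frame);
    cv::cuda::GpuMat descriptors_model_gpu(descriptors_model);

    std::vector<std::vector<cv::DMatch> > matches;
    matcher_gpu_->knnMatch(descriptors_frame_gpu, descriptors_model_gpu, matches, 2);

The GpuMat constructor taking a cv::Mat performs the host-to-device copy; you can also call upload() explicitly.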

Nevertheless, I post here two example codes (for OpenCV 3.0) that perform ORB detection/extraction and descriptor matching using the CUDA module, which I hope could be helpful to someone else:

  • example_with_full_gpu(): detects ORB keypoints, computes ORB descriptors and performs the knn-matching using only calls to CUDA functions
  • example_with_gpu_matching(): only the matching uses the GPU, to demonstrate that it is possible to use any of the features available in features2d.hpp or xfeatures2d.hpp and still match on the GPU

    #include <iostream>
    #include <opencv2/opencv.hpp>
    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/cudafeatures2d.hpp>

    void example_with_full_gpu(const cv::Mat &img1, const cv::Mat &img2) {
        //Upload from host memory to gpu device memory
        cv::cuda::GpuMat img1_gpu(img1), img2_gpu(img2);
        cv::cuda::GpuMat img1_gray_gpu, img2_gray_gpu;

        //Convert BGR to grayscale as the gpu detectAndCompute only allows grayscale GpuMat
        cv::cuda::cvtColor(img1_gpu, img1_gray_gpu, cv::COLOR_BGR2GRAY);
        cv::cuda::cvtColor(img2_gpu, img2_gray_gpu, cv::COLOR_BGR2GRAY);

        //Create a GPU ORB feature object
        //blurForDescriptor=true seems to give better results
        //http://answers.opencv.org/question/10835/orb_gpu-not-as-good-as-orbcpu/
        cv::Ptr<cv::cuda::ORB> orb = cv::cuda::ORB::create(500, 1.2f, 8, 31, 0, 2, 0, 31, 20, true);

        cv::cuda::GpuMat keypoints1_gpu, descriptors1_gpu;
        //Detect ORB keypoints and extract descriptors on the train image (box.png)
        orb->detectAndComputeAsync(img1_gray_gpu, cv::cuda::GpuMat(), keypoints1_gpu, descriptors1_gpu);
        std::vector<cv::KeyPoint> keypoints1;
        //Convert from the CUDA object to std::vector<cv::KeyPoint>
        orb->convert(keypoints1_gpu, keypoints1);
        std::cout << "keypoints1=" << keypoints1.size() << " ; descriptors1_gpu=" << descriptors1_gpu.rows
            << "x" << descriptors1_gpu.cols << std::endl;

        std::vector<cv::KeyPoint> keypoints2;
        cv::cuda::GpuMat descriptors2_gpu;
        //Detect ORB keypoints and extract descriptors on the query image (box_in_scene.png)
        //The conversion from internal data to std::vector<cv::KeyPoint> is done implicitly in detectAndCompute()
        orb->detectAndCompute(img2_gray_gpu, cv::cuda::GpuMat(), keypoints2, descriptors2_gpu);
        std::cout << "keypoints2=" << keypoints2.size() << " ; descriptors2_gpu=" << descriptors2_gpu.rows
            << "x" << descriptors2_gpu.cols << std::endl;

        //Create a GPU brute-force matcher with Hamming distance as we use a binary descriptor (ORB)
        cv::Ptr<cv::cuda::DescriptorMatcher> matcher = cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_HAMMING);

        std::vector<std::vector<cv::DMatch> > knn_matches;
        //Match each query descriptor to the two closest train descriptors
        matcher->knnMatch(descriptors2_gpu, descriptors1_gpu, knn_matches, 2);
        std::cout << "knn_matches=" << knn_matches.size() << std::endl;

        std::vector<cv::DMatch> matches;
        //Filter the matches using the ratio test
        for(std::vector<std::vector<cv::DMatch> >::const_iterator it = knn_matches.begin(); it != knn_matches.end(); ++it) {
            if(it->size() > 1 && (*it)[0].distance/(*it)[1].distance < 0.8) {
                matches.push_back((*it)[0]);
            }
        }

        cv::Mat imgRes;
        //Display and save the image with the matches
        cv::drawMatches(img2, keypoints2, img1, keypoints1, matches, imgRes);
        cv::imshow("imgRes", imgRes);
        cv::imwrite("GPU_ORB-matching.png", imgRes);

        cv::waitKey(0);
    }

    void example_with_gpu_matching(const cv::Mat &img1, const cv::Mat &img2) {
        //Create a CPU ORB feature object
        cv::Ptr<cv::Feature2D> orb = cv::ORB::create(500, 1.2f, 8, 31, 0, 2, 0 ...
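
The rest of example_with_gpu_matching is cut off above. As a rough sketch of the same idea (detect and describe on the CPU, match on the GPU), written here as a separate function with hypothetical names and assuming grayscale input images and OpenCV 3.0, it could go along these lines:

    #include <iostream>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/cudafeatures2d.hpp>

    //Rough sketch: detect/describe on the CPU, match on the GPU
    void cpu_detect_gpu_match(const cv::Mat &img1_gray, const cv::Mat &img2_gray) {
        //Any features2d/xfeatures2d extractor works here; ORB is used as an example
        cv::Ptr<cv::Feature2D> orb = cv::ORB::create(500);
        std::vector<cv::KeyPoint> keypoints1, keypoints2;
        cv::Mat descriptors1, descriptors2;
        orb->detectAndCompute(img1_gray, cv::noArray(), keypoints1, descriptors1);
        orb->detectAndCompute(img2_gray, cv::noArray(), keypoints2, descriptors2);

        //Upload only the descriptors and do the brute-force knn-matching on the GPU
        cv::cuda::GpuMat descriptors1_gpu(descriptors1), descriptors2_gpu(descriptors2);
        cv::Ptr<cv::cuda::DescriptorMatcher> matcher =
            cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_HAMMING);

        std::vector<std::vector<cv::DMatch> > knn_matches;
        matcher->knnMatch(descriptors2_gpu, descriptors1_gpu, knn_matches, 2);

        //Keep only the matches that pass the ratio test
        std::vector<cv::DMatch> matches;
        for(size_t i = 0; i < knn_matches.size(); i++) {
            if(knn_matches[i].size() > 1 && knn_matches[i][0].distance / knn_matches[i][1].distance < 0.8) {
                matches.push_back(knn_matches[i][0]);
            }
        }
        std::cout << "good matches=" << matches.size() << std::endl;
    }

The images can be loaded with cv::imread(path, cv::IMREAD_GRAYSCALE) and the result visualised with cv::drawMatches() as in the first example.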

answered 2015-06-08 04:17:13 -0500

abhiguru

updated 2015-06-08 04:32:33 -0500

The matches argument for knnMatch needs to be

 std::vector< std::vector< DMatch > > & matches,

and NOT

std::vector< DMatch > &
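
For example (with the descriptors already uploaded as GpuMat; the variable names here are just placeholders):

    std::vector<std::vector<cv::DMatch> > knn_matches; //one inner vector of up to k matches per query descriptor
    matcher->knnMatch(query_descriptors_gpu, train_descriptors_gpu, knn_matches, 2);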
