
Orb feature detection problem

asked 2018-02-19 07:21:54 -0600 by Houthius

updated 2018-02-20 03:25:00 -0600

Hello,

I'm trying to use OpenCV's ORB feature detector on an object. I want to get as many points as possible on the object, even with a textured background. So I created an ORB detector:

Ptr<FeatureDetector> orb = ORB::create(300, 2.0f, 3, 31, 0, 2, 0, 31, 15);
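For reference, here is the same call with ORB::create's parameters labelled (same values as above):

Ptr<FeatureDetector> orb = ORB::create(
    300,                // nfeatures: keep at most the 300 best keypoints
    2.0f,               // scaleFactor: pyramid decimation ratio
    3,                  // nlevels: number of pyramid levels
    31,                 // edgeThreshold: border size where no features are detected
    0,                  // firstLevel
    2,                  // WTA_K: number of points producing each descriptor element
    ORB::HARRIS_SCORE,  // scoreType (0)
    31,                 // patchSize used by the oriented BRIEF descriptor
    15);                // fastThreshold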

Then I compared the detect function with and without a mask. In the PNGs below, I applied a color filter and compared detection on the full image with detection using a mask based on that color. The first image is the original, the second shows the result of detection on the whole image, the third is the mask I want to apply, and the last is the result of detection with the mask applied (using detect(image, keypoints, mask)).
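Roughly, the comparison looks like this (a sketch; the file name and HSV thresholds are placeholders for the actual color filter):

Mat img = imread("object_on_chessboard.png");   // hypothetical file name
Mat hsv, mask;
cvtColor(img, hsv, COLOR_BGR2HSV);
inRange(hsv, Scalar(15, 0, 0), Scalar(35, 255, 255), mask);  // placeholder color range

vector<KeyPoint> kpsFull, kpsMasked;
orb->detect(img, kpsFull);           // detection on the whole image
orb->detect(img, kpsMasked, mask);   // detection restricted to the mask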

What I expected was that by applying the mask I would get more features on the object and fewer on the chessboard, but I got the opposite. I circled in red some points that were found when detecting on the whole image but not when using the mask. Can someone tell me what I am missing? By the way, both detections give the same number of keypoints.

(images: original; detection on the whole image; the color mask; detection with the mask applied)


Comments

I cannot reproduce your problem. Can you post the original image?

LBerger ( 2018-02-19 14:32:57 -0600 )

I added the original on top

Houthius ( 2018-02-20 03:25:14 -0600 )

" I would get more features on the object" -- that's probably a misassumption. why would that happen ?

berak ( 2018-02-20 04:23:07 -0600 )

Because I thought the points it detects without the mask would still be detected when using it. Say that without the mask I get 100 keypoints on the object and 200 on the chessboard; I thought that with the mask I would get the same 100 plus a few extra points, since I gave it less area to search.

Houthius ( 2018-02-20 04:42:12 -0600 )

No, not so. If it did not find any keypoints on your plain, yellow surface before, it won't find any there now.

The mask just restricts the search area, that's it.

berak ( 2018-02-20 04:47:02 -0600 )
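In other words, detecting with a mask should, in principle, give roughly the keypoints of the full-image run that fall inside the mask. A sketch of that check, reusing kpsFull, kpsMasked and mask from the sketch in the question (note that ORB also ranks and keeps only the best nfeatures, so the two sets are not guaranteed to be identical):

// Keep only the full-image keypoints that lie inside the mask.
vector<KeyPoint> insideMask;
for (const KeyPoint& kp : kpsFull)
    if (mask.at<uchar>(cvRound(kp.pt.y), cvRound(kp.pt.x)) > 0)
        insideMask.push_back(kp);
// insideMask should be similar to kpsMasked, up to the best-N ranking
// and pyramid/border effects.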

Yes, but my problem is that without the mask there were more points on the object (I circled some of them in red), so I should at least get those same points, no?

Houthius ( 2018-02-20 04:50:53 -0600 )

Are those JPG images?

berak ( 2018-02-20 04:52:55 -0600 )

Yes, they are. Why?

Houthius ( 2018-02-20 04:55:50 -0600 )

Could it be that with multi-scale pyramidal detection you lose some features when the masked region is very small? ORB detects corner-like features; maybe at a very high pyramid level, masking the image changes the appearance of the previously detected features?

It also looks like you changed scaleFactor. You could try the default scaleFactor=1.2f to see if it changes anything.

Eduardo ( 2018-02-20 05:09:47 -0600 )
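For instance (a sketch), a detector with the default pyramid settings would be:

Ptr<ORB> orbDefault = ORB::create(300);  // defaults: scaleFactor=1.2f, nlevels=8, fastThreshold=20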

I have the same results with scaleFactor=1.2f

Houthius ( 2018-02-20 08:05:44 -0600 )

1 answer


answered 2018-02-20 08:57:55 -0600

LBerger

I cannot reproduce this with the program below: all keypoints found in the mask region of the full image can also be found when detecting with the mask.

The keypoint colors are chosen randomly; maybe the keypoints that seem to be missing are simply drawn in a color close to the background?

When I ask for N keypoints (in create), I get N keypoints (when it is possible, of course).

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("c:/temp/1519118663237344.jpg", IMREAD_UNCHANGED);
    Mat imgHSV;
    cvtColor(img, imgHSV, COLOR_BGR2HSV);

    // Two detectors with the parameters from the question (300 vs 301 features,
    // so the two runs are easy to tell apart in the output).
    Ptr<ORB> f1 = ORB::create(300, 2.0f, 3, 31, 0, 2, 0, 31, 15);
    Ptr<ORB> f2 = ORB::create(301, 2.0f, 3, 31, 0, 2, 0, 31, 15);
    vector<KeyPoint> key1, key2;
    Mat desc1, desc2;

    // Build the mask from an HSV color filter, then fill the last external contour.
    Mat mask1;
    blur(imgHSV, imgHSV, Size(7, 7));
    inRange(imgHSV, Scalar(15, 0, 0), Scalar(35, 255, 255), mask1);
    vector<vector<Point>> ctr;
    vector<Vec4i> h;
    findContours(mask1, ctr, h, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    Mat mask = Mat::zeros(mask1.size(), CV_8UC1);
    drawContours(mask, ctr, (int)ctr.size() - 1, Scalar(255), -1);
    imshow("mask", mask);

    // Detect once on the whole image, once restricted to the mask.
    f1->detectAndCompute(img, noArray(), key1, desc1);
    f2->detectAndCompute(img, mask, key2, desc2);
    cout << "Image 1 keypoints : " << key1.size() << "\n";
    cout << "Image 2 keypoints : " << key2.size() << "\n";

    // Cross-checked brute-force matching on the binary descriptors.
    BFMatcher bf(NORM_HAMMING, true);
    vector<DMatch> dm, dme;
    bf.match(desc1, desc2, dm);

    Mat dst = img.clone();
    drawKeypoints(img, key1, dst, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    drawKeypoints(img, key2, dst, Scalar(255, 0, 0), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    // drawMatches recreates dst: both images side by side with all keypoints
    // in random colors; dme is left empty, so no match lines are drawn.
    drawMatches(img, key1, img, key2, dme, dst);
    imshow("Key 1 2", dst);

    waitKey(0);
    return 0;
}

The result is:

Image 1 keypoints : 300
Image 2 keypoints : 301

and the image is:

(result image: the two sets of keypoints drawn side by side)
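To verify this numerically rather than visually, a possible check (a sketch, assuming the key1, key2 and mask variables from the program above) is to look, for every full-image keypoint inside the mask, for a masked-run keypoint at nearly the same position:

// Count full-image keypoints inside the mask that have no nearby
// counterpart among the keypoints detected with the mask.
int missing = 0;
for (const KeyPoint& k1 : key1) {
    if (mask.at<uchar>(cvRound(k1.pt.y), cvRound(k1.pt.x)) == 0)
        continue;                          // outside the mask: not expected in key2
    bool found = false;
    for (const KeyPoint& k2 : key2) {
        float dx = k1.pt.x - k2.pt.x, dy = k1.pt.y - k2.pt.y;
        if (dx * dx + dy * dy < 1.0f) { found = true; break; }
    }
    if (!found) ++missing;
}
cout << "Keypoints inside the mask with no counterpart in the masked run: " << missing << "\n";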

