
Improving ORB/ORB accuracy on mobile with OpenCV

asked 2014-05-20 04:39:24 -0600 by bertus, updated 2014-05-20 04:40:11 -0600

We are doing a computer vision study project, trying to recognize paintings on walls.

After some research we decided to go for the ORB/ORB detector/descriptor because of its speed and good accuracy.

For the matching we are using a BruteForceMatcher.

We have searched and read a lot about this topic, but there are still some questions we just can't find answers to.

We are using a library of descriptor Mats computed from images of at most 320 px width or height. The scene image is also around 320 px.

The images are read in as color because that gave a practical improvement in accuracy.

The examples on the web all convert the images to grayscale first; why is this? And is it better to do so in every situation?

Is ORB/ORB really the fastest algorithm for the scenario we want to use it in? GFTT/FREAK also gives very good results, but it is way too slow.

In real-time tests we are getting an accuracy of about 50%, which is way too low. What should we study to improve the accuracy?

For example, the matching is done like this right now:

get matches -> keep valid matches -> if matches > X -> return title

We are thinking about trying it another way:

get matches -> keep valid matches -> calculate likelihood percentage of image -> return image with highest percentage if percentage > X

Does this make sense?

Sometimes the matcher returns the full set of matches as valid: if there are 500 matches, all 500 come back as valid, even on a black camera picture. We think there is something wrong with the camera, maybe it is already in use. Or could the matcher distance threshold be set too low?

Reducing the number of features seems to speed the whole thing up roughly linearly: 2x fewer features == 2x faster.

Does reducing the number of features automatically reduce the accuracy?

We are looking for tips to improve detection accuracy; that is the top priority, but maybe somebody has some other tips too, because this project is going to fail if the accuracy does not increase :(


Comments

I think there is a big contradiction in your approach: brute-force matching and fast processing is a no-go. You need to eliminate false feature matches by using techniques like RANSAC!

StevenPuttemans ( 2014-05-20 04:48:42 -0600 )

Dear Bertus, any update on how to increase the accuracy of the ORB detector/descriptor? I am currently working on a similar project with ORB, and your post depicts my current situation. The results are only about 60% accurate for me, and detection fails in most cases. Any tips from your side on how to increase the accuracy of ORB?

WhoAmI ( 2016-05-31 07:45:42 -0600 )

1 answer


answered 2018-06-18 04:49:41 -0600 by hayley

All images are converted to grayscale first because the ORB detector and descriptor work on grayscale intensity: features are found and described by comparing each pixel with its neighbouring pixels.

Reducing the number of features will speed up detection, but it will not necessarily improve the accuracy.

You get detections on a black camera frame because the sensitivity of your detector is too high. Increase the detector's threshold so that low-amplitude noise no longer produces keypoints.



Stats

Asked: 2014-05-20 04:39:24 -0600

Seen: 1,942 times

Last updated: Jun 18 '18