bertus's profile - activity

2014-05-28 05:20:26 -0600 asked a question Why convert to greyscale?

We are doing an OpenCV study right now.

We are trying to match art paintings on mobile.

Now I'm looking for arguments for why or why not to convert the images to greyscale first.

I think recognition could be improved by keeping the color, but since it's three channels instead of one, it slows the process down.

Everywhere on the internet I see examples that present converting the images to greyscale as "the right way to do it", but nobody seems to argue why.

So that's why I'm asking here:

Why should one convert color images to greyscale? What are the pros and cons?
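
For reference, the conversion we're debating is a one-liner (the file name is a placeholder, and the header paths assume a modern OpenCV; 2.4-era builds include opencv2/imgproc/imgproc.hpp and opencv2/highgui/highgui.hpp instead):

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    int main() {
        // Load in color (BGR), then convert to a single-channel greyscale image.
        cv::Mat color = cv::imread("painting.jpg", cv::IMREAD_COLOR);
        cv::Mat grey;
        cv::cvtColor(color, grey, cv::COLOR_BGR2GRAY);
        // grey has 1 channel instead of 3, so any per-pixel work that
        // follows only touches a third of the data.
        return 0;
    }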

2014-05-20 04:40:11 -0600 received badge  Editor (source)
2014-05-20 04:39:24 -0600 asked a question Improving ORB/ORB accuracy on mobile with OpenCV

So we are doing this computer vision study project trying to recognize paintings on the walls.

After some research we decided to go for the ORB/ORB extractor/descriptor because of its speed and good accuracy.

For the matching we are using a BruteForceMatcher.
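
Roughly, the pipeline looks like this minimal sketch (file names are placeholders; cv::ORB::create is the OpenCV 3+ spelling, 2.4-era code constructs cv::ORB directly):

    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <vector>

    int main() {
        // Placeholder file names: one camera frame, one library painting.
        cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
        cv::Mat painting = cv::imread("painting.png", cv::IMREAD_GRAYSCALE);

        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);  // cap on feature count
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(scene, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(painting, cv::noArray(), kp2, desc2);

        // ORB descriptors are binary, so Hamming distance is the right norm;
        // crossCheck=true keeps only mutually-best matches.
        cv::BFMatcher matcher(cv::NORM_HAMMING, true);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);
        return 0;
    }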

We have searched and read a lot about this topic, but there are still some questions we just can't find the answers to.

We are using a library of descriptor Mats computed from images of max 320px width or height. The scene image is also around 320px.

The image is read in as color because of the practical improvement in accuracy.

On the web the examples are all converted to gray. Why is this, and is it better to do so in every situation?

Is the ORB/ORB combination really the fastest algorithm for the scenario we want to use it for? GFTT/FREAK also gives very good results, but it's way too slow.

In real-time tests we are getting an accuracy of about 50%, which is way too low. What should we study to improve the accuracy?

For example, the matching is done like this right now:

get matches -> keep valid matches -> if matches > X -> return title

We are thinking about trying it another way:

get matches -> keep valid matches -> calculate likelihood percentage of image -> return image with highest percentage if percentage > X

Does this make sense?
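
A minimal sketch of that scoring idea (the LibraryEntry struct, the distance threshold of 50, and the 0.25 minimum likelihood are all made-up assumptions to illustrate the shape of it, not tuned values):

    #include <opencv2/features2d.hpp>
    #include <string>
    #include <vector>

    struct LibraryEntry {
        std::string title;
        cv::Mat descriptors;  // precomputed ORB descriptors for one painting
    };

    // Score one painting: the fraction of scene descriptors that find a
    // "good" match (Hamming distance under maxDist) in that painting.
    double score(const cv::Mat& sceneDesc, const cv::Mat& paintingDesc,
                 cv::BFMatcher& matcher, float maxDist = 50.0f) {
        std::vector<cv::DMatch> matches;
        matcher.match(sceneDesc, paintingDesc, matches);
        int good = 0;
        for (const cv::DMatch& m : matches)
            if (m.distance < maxDist) ++good;
        return sceneDesc.rows > 0 ? double(good) / sceneDesc.rows : 0.0;
    }

    // Return the title of the best-scoring painting, or "" when nothing
    // clears the minimum likelihood.
    std::string bestMatch(const cv::Mat& sceneDesc,
                          const std::vector<LibraryEntry>& library,
                          double minLikelihood = 0.25) {
        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::string best;
        double bestScore = minLikelihood;
        for (const LibraryEntry& e : library) {
            double s = score(sceneDesc, e.descriptors, matcher);
            if (s > bestScore) { bestScore = s; best = e.title; }
        }
        return best;
    }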

Sometimes the matcher returns the full set of matches as valid matches: if there are 500 matches, it returns all 500 as valid, even on a black camera picture. We think there is something wrong with the camera, maybe it's in use or something. Or could it be that our matcher distance threshold is wrong?
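
One thing we read that may explain this: BFMatcher::match always returns the single nearest neighbour for every query descriptor, however bad its distance, so 500 descriptors always produce 500 "matches", black frame or not. A common filter seems to be Lowe's ratio test on the two nearest neighbours; a minimal sketch (the 0.75 ratio is a conventional starting value, not something we have tuned):

    #include <opencv2/features2d.hpp>
    #include <vector>

    // Lowe's ratio test: keep a match only if its best neighbour is
    // clearly better than its second-best.
    std::vector<cv::DMatch> ratioFilter(const cv::Mat& queryDesc,
                                        const cv::Mat& trainDesc,
                                        float ratio = 0.75f) {
        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(queryDesc, trainDesc, knn, 2);  // 2 nearest neighbours

        std::vector<cv::DMatch> good;
        for (const auto& pair : knn)
            if (pair.size() == 2 && pair[0].distance < ratio * pair[1].distance)
                good.push_back(pair[0]);
        return good;  // on a black frame this should come back (near) empty
    }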

Reducing the number of features seems to speed the whole thing up proportionally: 2x fewer features == 2x faster.

Does reducing the number of features necessarily reduce the accuracy?

We are looking for tips to improve detection accuracy; that's the top priority. But maybe somebody has some other tips too, because this project is going to fail if the accuracy doesn't increase :(

2014-05-12 06:11:47 -0600 asked a question iPhone 4(S) vs iPad2 computer vision performance problems

We have this project using OpenCV, and at first we developed for the iPad2.

Everything ran smoothly, and a computer vision object recognition iteration took a little under 1 second.

So far so good. Now we are testing the app on the iPhone, on both the 4 and the 4S. Of course we did our research, and the results we found stated that the iPhone 4S was almost as fast as the iPad2.

The results on the iPhone 4 are terrible: one iteration takes 15 seconds. On the iPhone 4S, one iteration takes 8 seconds.

So with our algorithms:

iPhone4 is 15x slower than an iPad2
iPhone4S is 7-8x slower than an iPad2

Does anybody know if this is true? Is there something the iPhone does differently from the iPad2? Isn't the processor of the same type?

Anybody who can point us in the right direction?
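
For anyone reproducing the numbers, a minimal timing sketch around one iteration (runIteration is a hypothetical stand-in for the real detect/describe/match pipeline):

    #include <opencv2/core.hpp>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical stand-in for the real detect + describe + match pipeline.
    void runIteration() { /* ... */ }

    int main() {
        // getTickCount/getTickFrequency give portable wall-clock timing,
        // so the same measurement runs on both the iPad and iPhone builds.
        std::int64_t start = cv::getTickCount();
        runIteration();
        double seconds = (cv::getTickCount() - start) / cv::getTickFrequency();
        std::printf("one iteration: %.3f s\n", seconds);
        return 0;
    }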