detectMultiScale on GPU: problem with the minNeighbors parameter

asked 2016-09-07 10:37:12 -0600

LucaVeggian

Hi, for my thesis I'm trying to make a program that can detect an object and assign a score to each detection using the GPU. The score represents the quality of the detection.

Using the CPU version of detectMultiScale I created a metric to assign a score: I go through the list of Rects returned by the function, look for rectangles that are similar in area and position, and then group them into a single rectangle that contains the detected object. This only works if I set the parameter minNeighbors = 0, because then the CPU detectMultiScale does not group the rectangles itself and I can apply my metric. The problem is that when I try to do the same thing with the GPU version of detectMultiScale, setting minNeighbors = 0 makes the classifier detect nothing at all, while with minNeighbors = 1 or greater it detects the same objects as the CPU version. My supervisor requires the use of the GPU, and I don't understand why the two versions of the classifier behave differently. Please, can anyone help me resolve this problem?
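
For reference, here is a minimal sketch of what I mean on the CPU side (the file names and parameter values are just placeholders, and groupRectangles is used here only as a stand-in for my actual grouping metric):

```cpp
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::CascadeClassifier cascade;
    if (!cascade.load("cascade.xml"))        // placeholder model file
        return 1;

    cv::Mat img = cv::imread("frame.png");   // placeholder input image
    if (img.empty())
        return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // minNeighbors = 0: the CPU detector returns every raw candidate window
    // without grouping them, so similar rectangles can be counted afterwards.
    std::vector<cv::Rect> rects;
    cascade.detectMultiScale(gray, rects, 1.1, 0, 0, cv::Size(30, 30));

    // groupRectangles merges similar rectangles in place and writes into
    // 'weights' how many raw candidates were merged into each surviving
    // rectangle (clusters with only one candidate are dropped here, as
    // minNeighbors = 1 would do).
    std::vector<int> weights;
    cv::groupRectangles(rects, weights, 1, 0.2);

    for (size_t i = 0; i < rects.size(); ++i)
        std::printf("detection %d: score = %d\n", (int)i, weights[i]);

    return 0;
}
```

With this, weights[i] is the number of raw candidates merged into detection i, which is the kind of count I use as the score.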

I uploaded two images that show what I mean: grouped.png and ungrouped.png. ungrouped.png is the output of the CPU classifier, and grouped.png is the result of my metric, where 17 is the detection score.

PS: sorry for my English :-)


Comments

so i can apply my metric. --> Is there any reason why you need your own metric instead of the built-in ones? That being said, many of us are aware that the CUDA implementation of the cascade classifiers is horribly broken. This is exactly why I tell my students to use the multithreaded CPU version with TBB. In our case that works faster than the GPU because of the GPU-CPU data-copying bottleneck.
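
For example, a minimal sketch of that setup (assuming OpenCV was built with the usual WITH_TBB=ON CMake flag; the file names are placeholders):

```cpp
// Assumed build configuration: cmake -D WITH_TBB=ON -D BUILD_TBB=ON ..
#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    // With a TBB-enabled build, detectMultiScale is parallelised internally;
    // the worker count can be tuned to the hardware (e.g. the four Cortex-A15
    // cores of a Jetson TK1).
    cv::setNumThreads(4);

    cv::CascadeClassifier cascade;
    if (!cascade.load("cascade.xml"))                    // placeholder model
        return 1;

    cv::Mat gray = cv::imread("frame.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty())
        return 1;

    std::vector<cv::Rect> found;
    cascade.detectMultiScale(gray, found, 1.1, 2, 0, cv::Size(30, 30));
    return 0;
}
```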

StevenPuttemans (2016-09-08 06:06:44 -0600)

The reason is that my thesis consists of two parts. The first part is the object detector that detects targets and assigns a score to each of them. The second part is an implementation of an orienteering algorithm that uses those scores to build a strategy for visiting the targets under time constraints while maximizing the total score. An important thing I did not mention is that I use OpenCV 2.4.12 optimized by Nvidia on a Jetson TK1 board.

LucaVeggian (2016-09-08 06:38:03 -0600)

Please, can you explain a way to give a score to a detection using the CPU version of detectMultiScale? I'm in a panic because I'm not an expert in computer vision; I'm an embedded systems student, and if you could teach me a little bit about this problem I would appreciate it! Thanks!

LucaVeggian (2016-09-12 05:49:08 -0600)

There is an overloaded function that can give you the number of neighbours per detection. It is exactly what you need!

StevenPuttemans (2016-09-12 05:59:11 -0600)

Do you mean groupRectangles(vector<Rect>& rectList, vector<int>& weights, int groupThreshold, double eps=0.2)? Do you mean that the weights parameter holds the summed weight of the grouped rectangles?

LucaVeggian (2016-09-12 06:04:56 -0600)

No, there is an overloaded version of detectMultiScale, which can be found here. Take a look at that one!
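
For example, a minimal sketch assuming the OpenCV 3.x API, where the overload takes a numDetections output (on 2.4 the groupRectangles route with a weights output, as sketched above, is the closest equivalent):

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::CascadeClassifier cascade("cascade.xml");                 // placeholder model
    cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // placeholder image
    if (cascade.empty() || gray.empty())
        return 1;

    std::vector<cv::Rect> objects;
    std::vector<int> numDetections;  // neighbours merged into each final box

    // This overload returns, for every grouped detection, how many raw
    // candidate windows supported it -- usable directly as a quality score.
    cascade.detectMultiScale(gray, objects, numDetections, 1.1, 1);

    for (size_t i = 0; i < objects.size(); ++i)
        std::printf("detection %d: neighbours = %d\n", (int)i, numDetections[i]);
    return 0;
}
```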

StevenPuttemans (2016-09-12 06:20:17 -0600)

Thanks!!!

LucaVeggian (2016-09-12 07:10:58 -0600)