# Why does detectMultiScale() give so many detections outside the object of interest?

I ran opencv_traincascade on my PC for a whole day to train a detector for 2€ coins, using more than 6000 positive images similar to the following:

Now I have just tried to run a simple OpenCV program to see the results and to check the file cascade.xml. The final result is very disappointing:

There are many points on the coin, but there are also many other points on the background. Could it be a problem with the positive images I used for training? Or am I calling detectMultiScale() with the wrong parameters?

Here's my code:

    #include "opencv2/opencv.hpp"
    using namespace cv;

    int main(int, char**) {
        // Load the cascade produced by opencv_traincascade
        CascadeClassifier euro2_cascade;
        if (!euro2_cascade.load("cascade.xml"))
            return -1;

        // Load the test image (path is an example)
        Mat src = imread("test.jpg");
        if (src.empty())
            return -1;

        Mat src_gray;
        cvtColor(src, src_gray, CV_BGR2GRAY);
        equalizeHist(src_gray, src_gray);

        std::vector<cv::Rect> money;
        euro2_cascade.detectMultiScale(src_gray, money, 1.1, 0, 0, cv::Size(10, 10), cv::Size(2000, 2000));

        for (size_t i = 0; i < money.size(); i++) {
            cv::Point center(money[i].x + money[i].width*0.5, money[i].y + money[i].height*0.5);
            ellipse(src, center, cv::Size(money[i].width*0.5, money[i].height*0.5), 0, 0, 360, Scalar(255, 0, 255), 4, 8, 0);
        }

        imwrite("result.jpg", src);
        return 0;
    }


I have also tried reducing the number of neighbours, but the effect is the same, just with many fewer points... Could the problem be those 4 background corners around the coin in the positive images? I generated the PNG images with GIMP from a video showing the coin, so I don't know why opencv_createsamples puts those 4 corners there.

UPDATE I also tried to create an LBP cascade.xml, but this is quite strange: if I use, in the above OpenCV program, an image that was used for training, then the detection is good:

Instead, if I use another image (for example, one taken with my smartphone), nothing is detected. What does this mean? Did I make an error during training?



Maybe a stupid question: do all your positive images look like the image with the coin and the trees?

If yes: if you want to detect coins, you should use positive images containing only coins. Or you can provide the coordinates of the coins (if you have not already done that) in the positive image list.
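For reference, those coordinates are supplied through an annotation ("info") file that opencv_createsamples/opencv_traincascade understand: each line names an image, the number of objects in it, and one x y width height box per object. A made-up sketch (file name, paths, and coordinates below are illustrative, not from this thread):

```shell
# Write a hypothetical annotation file; the paths and boxes are examples only.
# Format per line: <image path> <object count> <x> <y> <width> <height> ...
cat > positives.txt <<'EOF'
img/coin_01.jpg 1 120 85 64 64
img/coin_02.jpg 2 40 60 64 64 210 130 64 64
EOF
```

The second line shows how an image containing two coins lists two boxes on the same line.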

( 2016-01-29 08:30:06 -0500 )

@Eduardo yes, all my positive images look like the one I posted in my question... there's the coin and the background. Yes, opencv_createsamples should provide the coordinates, as you said, for each image containing a coin... as explained here (http://www.memememememememe.me/traini...) and in many other tutorials...

( 2016-01-29 10:06:41 -0500 )

Can you provide the command line you used for opencv_traincascade? Also the output log of opencv_traincascade, if you have it. It could help other people to help you.

My bad, I thought all your positive images had the coin + a random background; that's why I was confused and asked the question, to be sure I understood completely.

Your positive images contain only the coins, and opencv_createsamples just combined them with random backgrounds and some image warping. Some people say this way of doing it is not optimal, but that is another matter and should not be related to your issue.
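For context, that artificial-sample step is usually a single opencv_createsamples call along the following lines (all file names and parameter values here are illustrative, not the poster's actual ones):

```shell
# Hypothetical invocation: warp one cropped coin image, paste it onto random
# backgrounds listed in bg.txt, and write ~700 samples into a .vec file.
opencv_createsamples -img coin.png -bg bg.txt -vec coins.vec \
    -num 700 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -w 80 -h 80

# The generated samples can then be inspected visually:
opencv_createsamples -vec coins.vec -w 80 -h 80
```

The -max*angle options control how strongly the coin is warped before being pasted onto each background.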

( 2016-01-29 10:30:26 -0500 )

@Eduardo all the commands I ran were the ones discussed here

http://www.memememememememe.me/traini...

This is the only "useful" tutorial I found, and I followed it strictly. Now I'm trying with LBP features, but I don't know if it will improve things. Anyway, yes, as you said, I had 100 photos showing only a 2€ coin, which were then combined with random backgrounds by running opencv_createsamples.

Let me know if there's a way I can achieve my aim... it's for my thesis.

( 2016-01-29 10:52:56 -0500 )

I tried to train a classifier to detect coins myself. The results are:

The detections are unfortunately not very stable (a coin can be detected in one frame, not in the next, then again in the frame after that, etc.).

In my opinion, you have to redo your training as it should be possible to have better results.

Unfortunately, I don't have the magical recipe to train a good classifier as I am not an expert in this field.

( 2016-02-01 04:28:37 -0500 )

What I did:

• I took around 80 pictures myself and cropped them manually to keep only the coins
• I extracted around 800 images from videos for the negative samples
• I used opencv_createsamples to warp the original images (with no background) to end up with around 700 positive samples
• I checked that the created samples were OK with opencv_createsamples -vec
• I used LBP features, as they are much faster than HAAR features, for the first tests
• I used 80x80 positive samples for the first tests, in order to keep the coin details
• the other options are the classical ones: -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5
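Putting the options above together, the training command would look roughly like this (the directory names and sample counts are illustrative, not the actual ones used):

```shell
# Hypothetical opencv_traincascade run matching the options listed above:
# LBP features, 80x80 window, 20 stages. Note that -numPos should stay below
# the total number of samples in the .vec (e.g. ~600 of 700), because each
# stage consumes a few extra positives to keep satisfying -minHitRate.
opencv_traincascade -data classifier_dir -vec coins.vec -bg bg.txt \
    -numPos 600 -numNeg 800 -numStages 20 \
    -featureType LBP -w 80 -h 80 \
    -minHitRate 0.999 -maxFalseAlarmRate 0.5
```

The -w/-h values must match the ones used when the .vec file was created.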

You have to try and test many times with different parameters to understand what happens under the hood, in my opinion.

( 2016-02-01 04:49:05 -0500 )

@Eduardo thank you very much for your comment Eduardo, I will try another training at once. Anyway, I had already tried an LBP training. Would you like to have a look at my updated question? Because if I use an image that had been used for training, the detection is quite good... this does not happen with an arbitrary image :(

( 2016-02-01 05:20:09 -0500 )

I would say that maybe you have overfitted your data? Also, the object is round, and I don't know exactly how to make the training invariant to the background.

( 2016-02-01 08:03:33 -0500 )

Making the training invariant to the background can ONLY be done by collecting real test images in the background conditions in which your classifier will have to work...

( 2016-02-04 04:01:21 -0500 )

@StevenPuttemans I don't know exactly what you mean, but it's impossible to collect real test images in the background conditions in which a classifier will have to work... people could lay the coin anywhere...

( 2016-02-04 04:05:44 -0500 )


Recently I learned the reason from the book OpenCV 3 Blueprints.

Let's look at the parameters of detectMultiScale:

    CV_WRAP void detectMultiScale( InputArray image,
                                   CV_OUT std::vector<Rect>& objects,
                                   double scaleFactor = 1.1,
                                   int minNeighbors = 3, int flags = 0,
                                   Size minSize = Size(),
                                   Size maxSize = Size() );


minNeighbors Parameter specifying how many neighbors each candidate rectangle should have to retain it.

If you set the minNeighbors parameter to zero, you will get all the candidates.

So you should change your code like this:

  euro2_cascade.detectMultiScale( src_gray, money, 1.1, 3, 0, cv::Size(10, 10),cv::Size(2000, 2000) );


Also, I think the minSize parameter value is too small.


Thank you for your answer, but as I said in my question, even if I increase the number of neighbors the effect is the same... there are just many fewer points, but their distribution is the same: very few on the coin, many more on the background...

( 2016-01-29 08:54:31 -0500 )

> I used opencv_createsamples to warp (with no background) the original images to have around 700 positive samples at the end

There is your problem: with that single step you created the most awful and bad training samples ever. Better to collect 50 natural samples than 5000 artificial ones. To know why I am stating this, read up on just about every single response I have made on this forum, or read the recently released OpenCV 3 Blueprints book.


Many tutorials say to do what Eduardo did... what would be the correct way in your opinion?

( 2016-02-02 07:16:57 -0500 )

Well, there is a reason why 50% of all the reported issues on this topic concern the problem of artificial data. It is just NOT what you will get when running the app, and thus it is stupid to try this...

( 2016-02-04 04:00:35 -0500 )
