quality of trained material

asked 2015-01-03 05:50:42 -0500


Are there any "formal methods" to evaluate the quality of a trained cascade classifier?

I used 1000 positives and 1000 negatives to train Haar and LBP classifiers, but when I run the resulting cascades on pictures, they fail to detect the pattern used during training: calling detectMultiScale returns a lot of matches, none of them close to the correct pattern.

I checked the vector file (the one passed to opencv_traincascade) with the -info option, and I can see that it contains the correct pattern.

Also, if I use one of the positive samples as input to detectMultiScale, should I expect the classifier to always find the pattern? (In my case it also failed.)
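As a quick sanity check on that point, one can run the trained cascade directly on a positive sample and test whether any returned rectangle actually lands on the known object. This is a minimal sketch, not anything from the original post: the file paths, the `truth` rectangle, and the helper names (`contains_center`, `detects_known_pattern`) are all placeholders for your own data.

```python
def contains_center(box, truth):
    # True if the centre of `truth` falls inside `box`; both are (x, y, w, h)
    cx = truth[0] + truth[2] / 2
    cy = truth[1] + truth[3] / 2
    return box[0] <= cx <= box[0] + box[2] and box[1] <= cy <= box[1] + box[3]

def detects_known_pattern(cascade_path, image_path, truth):
    """Run the trained cascade on one positive sample and report whether
    any detection covers the centre of the known object rectangle."""
    import cv2  # local import: OpenCV is only needed when actually running the check
    cascade = cv2.CascadeClassifier(cascade_path)
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hits = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
    return any(contains_center(tuple(h), truth) for h in hits)

# Example call (paths and rectangle are placeholders for your own data):
# detects_known_pattern("cascade.xml", "positive_0001.png", (40, 40, 60, 60))
```

If this check fails even on training positives, the cascade itself (and not the detection code) is the likely problem.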

I am just wondering whether there are formal methods to understand how the classifier was trained, what features it uses, etc. The obvious approach would be to increase the number of positive samples and repeat the process, but that is time-consuming, and my worry is that I am not even sure the pattern itself can be "recognized".
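On the "formal methods" question: the usual way to quantify a detector's quality is to hold out a labeled test set and measure the hit (detection) rate against false positives per image. Here is a minimal pure-Python sketch of that bookkeeping; the helper names `iou` and `evaluate`, and the box format, are my own assumptions, not anything the cascade tools provide.

```python
def iou(a, b):
    # Intersection-over-union of two (x, y, w, h) rectangles
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def evaluate(detections, ground_truth, min_overlap=0.5):
    """Per-image lists of (x, y, w, h) boxes -> hit rate and false
    positives per image, the usual cascade-quality figures."""
    tp = fp = fn = 0
    for dets, truths in zip(detections, ground_truth):
        matched = set()
        for d in dets:
            # greedily match each detection to the first unmatched truth box
            hit = next((i for i, t in enumerate(truths)
                        if i not in matched and iou(d, t) >= min_overlap), None)
            if hit is None:
                fp += 1
            else:
                tp += 1
                matched.add(hit)
        fn += len(truths) - len(matched)
    n = len(detections)
    return {"hit_rate": tp / (tp + fn) if tp + fn else 0.0,
            "fp_per_image": fp / n if n else 0.0}
```

Running this over a held-out set after each training round gives a concrete number to compare, instead of eyeballing detectMultiScale output.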

This raises another question: can we assume that any pattern can be recognized by a cascade classifier, or are there constraints we should take into consideration?

(Note that if I use the face classifiers, I am able to detect faces with the same source code, so I would say the code I use to detect the pattern is fine.)
