Possible bug in traincascade's consumption of negative samples. [closed]
CvCascadeClassifier::train() calls updateTrainingSet(), which returns the achieved leaf false alarm rate (tempLeafFARate); train() then compares this against the required leaf false alarm rate (requiredLeafFARate). updateTrainingSet() calls fillPassedSamples() twice, once for positives and once for negatives, and fillPassedSamples() returns only after it has obtained count samples that pass the existing stages of the cascade.
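To make the failure mode concrete, here is a minimal, self-contained model of the negative branch of that loop. This is my paraphrase, not the actual OpenCV code: passesCascade() stands in for predict(), and each call models pulling one more candidate window from the negative image reader, which in the real tool cycles over the background images indefinitely.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>

// Paraphrase of the negative branch of fillPassedSamples() (not the real
// OpenCV code). passesCascade() stands in for predict(); each call models
// grabbing one more candidate window from the negative image reader.
int fillPassedNegatives(int count,
                        const std::function<bool()>& passesCascade,
                        int64_t& consumed)
{
    int got = 0;
    while (got < count)
    {
        ++consumed;              // one more candidate window examined
        if (passesCascade())     // window survives every existing stage?
            ++got;               // keep it as a training negative
        // If the existing stages are perfect, passesCascade() is always
        // false, got never advances, and this loop spins forever.
    }
    return got;
}

int main()
{
    int64_t consumed = 0;
    // A 1-in-100 acceptance rate terminates quickly; replace the lambda
    // with []{ return false; } and the call never returns.
    int got = fillPassedNegatives(
        5, [] { static int i = 0; return ++i % 100 == 0; }, consumed);
    std::cout << got << " negatives kept after " << consumed << " windows\n";
}
```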
Suppose the existing stages of the cascade are perfect, i.e. they reject every negative window. Won't traincascade then either hang forever (fillPassedSamples() can never collect count passing negatives) or exhaust the supply of negative samples and fail? Shouldn't there be a check so that, even when no negative samples pass the existing stages, training can terminate successfully if the required leaf false alarm rate has already been satisfied?
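Concretely, I would expect something along these lines (purely a sketch of the idea; the guard and the function name are hypothetical, not existing or proposed OpenCV code, though requiredLeafFARate and consumed mirror the traincascade names):

```cpp
#include <cstdint>
#include <functional>

// Hypothetical variant with the missing check (my sketch, not OpenCV
// code). Once enough windows have been consumed that even one more
// acceptance could not lift the measured rate above the target, the
// required leaf false alarm rate is already satisfied and the search
// can stop instead of hanging or exhausting the reader.
int fillPassedNegativesGuarded(int count, double requiredLeafFARate,
                               const std::function<bool()>& passesCascade,
                               int64_t& consumed)
{
    int got = 0;
    while (got < count)
    {
        ++consumed;
        if (passesCascade())
        {
            ++got;
            continue;
        }
        // Optimistic bound: even if the very next window passed, the
        // acceptance ratio would be at most (got + 1) / consumed. If
        // that is still below the required rate, terminate successfully.
        if ((double)(got + 1) / (double)consumed < requiredLeafFARate)
            break;
    }
    return got;  // caller should treat got < count here as "required
                 // leaf false alarm rate achieved", not as an error
}
```

With a perfect cascade, got stays at 0 and the guard fires as soon as consumed exceeds 1 / requiredLeafFARate, so the branch can terminate with the rate satisfied instead of looping.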