I am using detectMultiScale to process two images, A and B. When I process A alone, there are no detections; however, when I process A after B, there is one detection.
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
#include <string>
using namespace cv;
using namespace std;

CascadeClassifier classifier("classifiers/haarcascade_frontalface_alt.xml");
Mat imageA, imageB;
vector<Rect> face_detectionsA_beforeB, face_detectionsA_afterB, face_detectionsB;
string fileA = "A.jpg";
string fileB = "B.jpg";
imageA = imread(fileA, IMREAD_COLOR);
imageB = imread(fileB, IMREAD_COLOR);
classifier.detectMultiScale(imageA, face_detectionsA_beforeB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 0 faces detected
classifier.detectMultiScale(imageB, face_detectionsB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 2 faces detected
classifier.detectMultiScale(imageA, face_detectionsA_afterB, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64)); // 1 face detected
It is as if the output for the current image depended on the output for the previous one, which doesn't make much sense to me. I don't have this problem (i.e., I don't get any detection when I process A after B) if I replace image B with other images, or if I load the classifier anew every time before calling detectMultiScale.
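For reference, the workaround I mentioned looks roughly like this. It is only a sketch: the helper function detectFresh is my own wrapper (not an OpenCV API), and the classifier path, parameters, and image files are the same assumptions as in the snippet above.

```cpp
#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Hypothetical helper: construct a fresh CascadeClassifier for every call,
// so no internal state can carry over between detections.
vector<Rect> detectFresh(const Mat& img) {
    CascadeClassifier c("classifiers/haarcascade_frontalface_alt.xml"); // reloaded each time
    vector<Rect> faces;
    c.detectMultiScale(img, faces, 1.2, 1, CASCADE_FIND_BIGGEST_OBJECT, Size(64, 64));
    return faces;
}
```

With this version, processing A after B gives the same (empty) result as processing A alone, at the cost of re-reading the cascade XML on every call.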
Any ideas about what could be happening?