When I trained my FisherFaceRecognizer on my PC using the GENDER-FERET database, I used dlib to align and frontalize each face, converted it to grayscale, and applied histogram equalization before adding it to the list of training data for that FisherFaceRecognizer.
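Roughly, the training side looks like the sketch below (simplified; the landmark model path, the 200x200 chip size, and the `dataset` iterable are placeholders, it assumes a dlib version that has `get_face_chip`, and in OpenCV 3.x the constructor would be `cv2.face.FisherFaceRecognizer_create()` instead):

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Placeholder path; any dlib landmark model used for alignment goes here.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def preprocess(bgr_image, size=200):
    """Align/frontalize with dlib, then grayscale + histogram equalization."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    faces = detector(rgb, 1)
    if not faces:
        return None
    shape = predictor(rgb, faces[0])
    # get_face_chip rotates/crops the face into a canonical frontal pose.
    chip = dlib.get_face_chip(rgb, shape, size=size)
    gray = cv2.cvtColor(chip, cv2.COLOR_RGB2GRAY)
    return cv2.equalizeHist(gray)

images, labels = [], []
for path, gender in dataset:  # assumed to yield (image path, 0/1 label) pairs
    face = preprocess(cv2.imread(path))
    if face is not None:
        images.append(face)
        labels.append(gender)

model = cv2.createFisherFaceRecognizer()  # 2.4.x API
model.train(images, np.array(labels))
model.save("fisher_gender.yml")  # the YAML my Android app loads
```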
When I tested it on the GENDER-FERET test set in the usual way, most of the predicted genders were correct, giving roughly 95-98% accuracy.
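The test itself is just the standard predict loop, continuing the sketch above (my actual script differs in details):

```python
correct = total = 0
for path, gender in test_set:  # same (path, label) convention as the training set
    face = preprocess(cv2.imread(path))
    if face is None:
        continue
    predicted, confidence = model.predict(face)
    correct += int(predicted == gender)
    total += 1
print("accuracy: %.1f%%" % (100.0 * correct / total))
```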
But when I load the generated YAML into my Android app's FisherFaceRecognizer and run it on a live camera preview, it all falls apart: the predictions fluctuate when I aim the camera at someone's face, especially at a face shown on a computer screen, and are sometimes outright wrong.
Now I think this is because frames from the camera preview cannot replicate the ideal test conditions that netted me the 95-98% accuracy, especially when I aim the camera at a face on a computer screen. So I'm considering background removal and illumination standardization.
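What I have in mind is roughly the following (a sketch in Python for clarity, since whether OpenCV4Android exposes the same calls is part of my question; the CLAHE parameters and ellipse axes are guesses): CLAHE for illumination standardization and an elliptical mask to blank out background pixels around the aligned face chip.

```python
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # parameters are guesses

def normalize_chip(gray_chip):
    """Illumination standardization plus crude background removal."""
    # CLAHE equalizes locally, so it is less sensitive to uneven lighting
    # than the global equalizeHist I used at training time.
    out = clahe.apply(gray_chip)
    # Zero out the corners so background pixels around the face do not
    # feed into the Fisherfaces projection.
    h, w = out.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.ellipse(mask, (w // 2, h // 2), (int(w * 0.42), int(h * 0.48)),
                0, 0, 360, 255, -1)
    return cv2.bitwise_and(out, out, mask=mask)
```

If something along these lines is the right direction, I assume it would have to be applied identically to the training images and to every preview frame, otherwise the model would see a different distribution at prediction time.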
Does OpenCV4Android 2.4.13.3 or 3.3.0 have built-in support for those, and what other steps can/should I take to preprocess the frames from a live camera preview so that I come close to that 95-98% accuracy?