# Combining FisherFaces and LBPH to improve accuracy

Hi everyone, I'm trying to implement a face recognition system for a video surveillance application. In this context the test images are low quality, the illumination changes from one image to another, and, moreover, the detected subjects are not always in the same pose.

As a first recognizer I used FisherFaces and, with 49 test images, I obtained an accuracy of 35/49, without considering the distances of the classified subjects (I just considered the labels). Trying to get better accuracy, I attempted to preprocess both the training and the test images; the preprocessing I chose is the one described in the book "Mastering OpenCV with Practical Computer Vision Projects". The steps are:

- detection of the eyes, in order to align and rotate the face;
- separate histogram equalization, to standardize the lighting in the image;
- filtering, to reduce pixel noise, since histogram equalization amplifies it;
- finally, applying an elliptical mask to the face, to remove details that are not significant for recognition.

Well, with this preprocessing I obtained worse results than before (4/49 subjects properly classified). So I thought of adding another classifier, the LBPH recognizer, to improve recognition accuracy: the two algorithms use different features and different ways of classifying a face, so using them together might increase accuracy. My question is about how to combine the two algorithms: does anyone know how to merge the two outputs to obtain better accuracy? My idea is this: if FisherFaces and LBPH give the same result (the same label), there is no problem; if they disagree, I take the vector of labels and the vector of distances from each algorithm and, for each subject, sum the corresponding distances; the label of the test image is then the one with the shortest combined distance. This is just my idea, and there may be other ways to fuse the outputs of the two algorithms; I would also have to change the code of the predict function in OpenCV's face module, since it returns a single int, not a vector of labels and distances.
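The summing rule I have in mind could be sketched like this (pure Python; `fuse_by_distance_sum` is a hypothetical helper name, and it assumes both recognizers can be made to return their full per-label distance vectors, which the stock `predict()` does not):

```python
def fuse_by_distance_sum(labels_a, dists_a, labels_b, dists_b):
    """Sum the two recognizers' distances per label and pick the label
    with the smallest combined distance (smaller = better match)."""
    # Map label -> distance for each recognizer, so the two label
    # vectors may list the subjects in different orders.
    map_a = dict(zip(labels_a, dists_a))
    map_b = dict(zip(labels_b, dists_b))
    combined = {label: map_a[label] + map_b[label] for label in map_a}
    return min(combined, key=combined.get)

# FisherFaces output: labels [1, 0, 5, 3], distances [20, 50, 100, 200]
# LBPH output:        labels [0, 1, 3, 5], distances [5, 10, 400, 600]
best = fuse_by_distance_sum([1, 0, 5, 3], [20, 50, 100, 200],
                            [0, 1, 3, 5], [5, 10, 400, 600])
print(best)  # 1  (combined distance 20 + 10 = 30 is the smallest)
```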

"my idea is to take the vector of the labels and the vector of the distances for each algorithm and for each subject sum the corresponding distances;" -- bear in mind that distances in LBPH / Fisher space have different value ranges, so you'd have to normalize both vectors before summing them up.

Thanks berak for the answer; maybe I didn't explain my idea clearly. Let [1,0,5,3] (labels) and [20,50,100,200] (distances) be the two output vectors returned by the FisherFaces method, and [0,1,3,5] (labels) and [5,10,400,600] (distances) the two returned by LBPH. My idea is to compute 20+10, the sum for label 1, then 50+5, the sum for label 0, and so on; among all these sums I take the minimum, and the corresponding label is the output. Do you think that even in this scheme I should normalize both vectors?

I can't remember the concrete numbers, but it's likely that your Fisher distance vector will look like [10,19,13,55] and the LBPH one like [200,220,90,120]; so, in order to weigh them equally, I'd normalize both to the [0..1] range before summing up.
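That normalize-then-sum step could look like the following (pure Python sketch; min-max scaling is one simple choice of normalization, and again this assumes full per-label distance vectors are available from both recognizers):

```python
def minmax_normalize(dists):
    """Scale a distance vector to the [0, 1] range (min-max normalization)."""
    lo, hi = min(dists), max(dists)
    if hi == lo:                      # degenerate case: all distances equal
        return [0.0 for _ in dists]
    return [(d - lo) / (hi - lo) for d in dists]

def fuse_normalized(labels_a, dists_a, labels_b, dists_b):
    """Normalize each recognizer's distances to [0, 1], sum them per label,
    and return the label with the smallest combined score."""
    norm_a = dict(zip(labels_a, minmax_normalize(dists_a)))
    norm_b = dict(zip(labels_b, minmax_normalize(dists_b)))
    combined = {label: norm_a[label] + norm_b[label] for label in norm_a}
    return min(combined, key=combined.get)
```

With this weighting, a recognizer whose raw distances happen to be ten times larger no longer dominates the sum.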

Actually, the weakest part of both the LBPH and the Fisher pipeline is the final nearest-neighbour classification. Replacing that with an SVM, or even an ANN, will definitely improve your results.

Yes, I've just found this idea in the paper "Combining Classifiers for Face Recognition" by Xiaoguang Lu, Yunhong Wang, and Anil K. Jain, but unfortunately I'm working in an embedded context, so I don't know whether it's feasible to implement an RBF network as the authors of that paper did.