2020-02-18 08:51:12 -0600 | received badge | ● Notable Question (source) |
2018-03-20 22:50:10 -0600 | received badge | ● Popular Question (source) |
2017-06-20 02:59:17 -0600 | received badge | ● Student (source) |
2016-03-09 17:07:00 -0600 | received badge | ● Editor (source) |
2016-03-09 15:49:25 -0600 | received badge | ● Organizer (source) |
2016-03-09 15:46:57 -0600 | asked a question | Pure virtual method called extending FaceRecognizer Hi everyone, I want to add a new predict method to the FaceRecognizer class in order to obtain all the labels and all the distances from a prediction, both with the FisherFaces algorithm and the LBPH algorithm. I use OpenCV 3.0.0, and my idea is to extend the FaceRecognizer class by creating two classes, ExtFisherFaceRecognizer and ExtLBPHFaceRecognizer, shown below (for brevity I'll show only the code regarding FisherFaces): I create the model using the FaceExt class, which calls the respective native methods: And this is the cpp file that implements the native methods; I omit the details regarding the building of the result vector on the Java side: As you can see from the code above, the prototype of the native method predictVec is the following: I modified the source of the OpenCV face module, adding this new method in such a way that it does what I want. Everything works fine until I create the models, but when I try to call a method inherited from BasicFaceRecognizer (which extends FaceRecognizer), or this new predictVec method, at the end of execution I get the error: "pure virtual method called, terminate called without an active exception". I have no ... (more) |
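In C++, "pure virtual method called" usually means a virtual function was invoked while the derived part of the object did not exist: either from a base-class constructor/destructor, or through a dangling pointer to an already-destroyed object. With JNI wrappers, one common way to hit the second case is when the Java side stores the raw address of a model that was owned only by a local smart pointer, which destroys the object when the creating function returns. This is only a hypothesis about the omitted code (another common cause is rebuilding modified face-module headers while still linking against the unmodified library binary, so vtable layouts disagree); all names below are hypothetical stand-ins, not the real OpenCV classes:

```cpp
#include <memory>
#include <string>

// Stand-in for cv::face::FaceRecognizer with the newly added virtual method.
struct FaceRecognizerLike {
    virtual ~FaceRecognizerLike() = default;
    virtual std::string predictVec() = 0;   // pure virtual, like the new method
};

// Stand-in for ExtFisherFaceRecognizer.
struct ExtFisherLike : FaceRecognizerLike {
    std::string predictVec() override { return "labels+distances"; }
};

// Safe pattern for a JNI wrapper: hand Java the address of a heap-allocated
// smart pointer, so the recognizer stays alive until Java explicitly frees it.
// Returning the raw object address from a *local* smart pointer instead would
// leave the Java side with a dangling pointer, and a later virtual call through
// it can abort with "pure virtual method called".
using Handle = std::shared_ptr<FaceRecognizerLike>;

Handle* createModel() { return new Handle(std::make_shared<ExtFisherLike>()); }
std::string callPredictVec(Handle* h) { return (*h)->predictVec(); }
void deleteModel(Handle* h) { delete h; }
```

The createModel/deleteModel pair mirrors the lifetime contract the Java side has to honor: every native call between the two goes through a handle whose pointee is guaranteed to still exist.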
2016-03-02 14:40:15 -0600 | commented question | Combining FisherFaces and LBPH to improve accuracy Yes, I just found this idea in the paper "Combining Classifiers for Face Recognition" by Xiaoguang Lu, Yunhong Wang and Anil K. Jain, but unfortunately I'm in an embedded context, so I don't know if it is possible to implement an RBF network as the authors of that paper did. |
2016-03-02 10:59:51 -0600 | commented question | Combining FisherFaces and LBPH to improve accuracy Thanks berak for the answer; maybe I wasn't clear in explaining my idea. Let [1,0,5,3] (labels) and [20,50,100,200] (distances) be the two output vectors returned by the FisherFaces method, and [0,1,3,5] (labels) and [5,10,400,600] (distances) the two output vectors returned by the LBPH method. My idea is to compute 20+10, i.e. the sum relative to label 1, then 50+5, the sum relative to label 0, and so on; of all these summed distances I take the minimum, and the corresponding label is the output. Do you think that, to do this, I should normalize both vectors? |
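Since FisherFaces and LBPH distances live on different scales, a common first step before summing is to rescale each distance vector, for example with min-max normalization to [0, 1]. A minimal sketch (the function name is my own, not an OpenCV API):

```cpp
#include <algorithm>
#include <vector>

// Rescale a distance vector to [0, 1] so that the two recognizers' scores
// become comparable before they are summed per label. A constant vector
// maps to all zeros to avoid division by zero.
std::vector<double> minMaxNormalize(const std::vector<double>& d) {
    double lo = *std::min_element(d.begin(), d.end());
    double hi = *std::max_element(d.begin(), d.end());
    std::vector<double> out;
    out.reserve(d.size());
    for (double v : d)
        out.push_back(hi > lo ? (v - lo) / (hi - lo) : 0.0);
    return out;
}
```

For the FisherFaces distances [20, 50, 100, 200] from the example this yields [0, 30/180, 80/180, 1], so both recognizers contribute on the same [0, 1] scale.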
2016-03-02 08:37:19 -0600 | asked a question | Combining FisherFaces and LBPH to improve accuracy Hi everyone, I'm trying to implement a face recognition system for a video surveillance application. In this context the test images are low quality, the illumination changes from one image to another and, moreover, the detected subjects are not always in the same pose. As a first recognizer I used FisherFaces and, with 49 test images, I obtained an accuracy of 35/49, without considering the distances of each classified subject (I just considered the labels). Trying to get better accuracy, I attempted to preprocess both the training images and the test images; the preprocessing I chose is described in the "Mastering OpenCV with Practical Computer Vision Projects" book. The steps are:
Well, with this type of preprocessing I obtained worse results than before (4/49 subjects properly classified). So I thought of using another classifier, the LBPH recognizer, to improve the recognition accuracy, since these two types of algorithms have different features and different ways of classifying a face; if one uses them together, the accuracy may increase. So my question is about ways to combine these two algorithms; does anyone know how to merge the two outputs in order to obtain better accuracy? My idea is this: if FisherFaces and LBPH give the same result (the same label), then there is no problem; otherwise, if they disagree, I take the vector of labels and the vector of distances from each algorithm and, for each subject, sum the corresponding distances; at this point the label of the test image is the one with the shortest summed distance. This is just my idea, and there may be other ways to fuse the outputs of both algorithms, also because I would have to change the code of the predict function in OpenCV's face module, since it returns an int, not a vector of ints. |
2016-03-02 08:35:38 -0600 | received badge | ● Enthusiast |
2016-02-26 12:09:22 -0600 | commented question | Ideas to distribute face detection Thanks sturkmen for your suggestion; I will try to do some experiments with this framework. I'm using Java for my project, so the ideal would be a .jar library, since I'm not an expert with JNI. |
2016-02-26 11:16:13 -0600 | asked a question | Ideas to distribute face detection Hi all, I'm trying to implement a face recognition system using embedded boards, Intel Galileo, four boards precisely. Each board receives a video in which people cross a portal, and the camera is at the top of this portal. The idea is to distribute the computational load of face recognition among all the boards so that no face is lost. At the beginning I thought that face recognition would be slower than face detection but, going on with my work, I discovered that it is the opposite. I use an LBP cascade as the face detector and the FisherFaces algorithm as the face recognizer: the first takes, on average, 1 second to detect a face within an image (after I first shrink the image to an appropriate size in order to decrease detection time), while the second, in the case of 30 subjects and 10 images per subject, takes 0.04 seconds to recognize a face. Therefore, at this point, I think that the problematic task (time-wise) is face detection rather than face recognition, and this is my question: are there methods to distribute face detection so that, through collaboration among several computation units, the task becomes easier (less time-consuming) for each board? At the moment the only idea that comes to mind is a naive one: divide a frame into parts and distribute these parts among the boards; this way the detection time is lower than before because each image is smaller, but there is no guarantee that faces are not cut at the borders. I hope I was clear in explaining my problem and that this is the right place to post this question; if not, please explain why. Thanks in advance for your time and help. |
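One standard way to avoid cutting faces at tile borders is to make adjacent tiles overlap by at least the largest face size expected: a face straddling a border then appears whole in at least one tile, at the price of detecting it twice in the overlap region, so duplicate detections must be merged afterwards. A sketch, with hypothetical names and a plain Rect standing in for cv::Rect:

```cpp
#include <algorithm>
#include <vector>

// Axis-aligned rectangle, standing in for cv::Rect.
struct Rect { int x, y, w, h; };

// Split a frame into nParts vertical strips, extending every strip except the
// last to the right by maxFaceW pixels. Any face at most maxFaceW wide then
// lies entirely inside at least one strip, so none is lost at a border; a
// face inside an overlap is detected twice and needs de-duplication.
std::vector<Rect> splitWithOverlap(int frameW, int frameH,
                                   int nParts, int maxFaceW) {
    std::vector<Rect> strips;
    int stripW = frameW / nParts;
    for (int i = 0; i < nParts; ++i) {
        int x0 = i * stripW;
        int x1 = (i == nParts - 1)
                     ? frameW
                     : std::min(frameW, (i + 1) * stripW + maxFaceW);
        strips.push_back({x0, 0, x1 - x0, frameH});
    }
    return strips;
}
```

For example, a 640x480 frame split for 4 boards with an expected maximum face width of 80 px gives strips of width 240, 240, 240 and 160; each board runs the LBP cascade only on its own strip.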
2016-01-20 04:09:50 -0600 | commented question | Time complexity face recognition algorithms Thanks for your suggestions! |
2016-01-20 03:56:06 -0600 | asked a question | Time complexity face recognition algorithms Hi all, I hope this is the right place to post this question; otherwise, if the post is inappropriate, please explain why. For my thesis I'm facing the problem of face recognition in an embedded context, in particular on an Intel Galileo board. I have found that face recognition on this board is time-consuming and that real-time face recognition is not feasible. My intention is to measure the average recognition time using the online face databases, for example the Yale database and AT&T; I have already found a paper that addresses this issue. But before doing this, I want to understand whether there are papers or books that discuss the time complexity of the algorithms implemented in OpenCV (Eigenfaces, Fisherfaces and Local Binary Pattern Histograms). I searched the web for an answer to this question but haven't found one yet. If someone knows this topic and wants to help me, I thank them in advance. |
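For the measurement itself (independently of published complexity results), a small std::chrono harness gives the average prediction time; averageMillis is a name of my own, and predictOnce would wrap whatever call is being timed, e.g. a model->predict(...) on a fixed test image:

```cpp
#include <chrono>
#include <functional>

// Average wall-clock time, in milliseconds, of `runs` calls to predictOnce.
// steady_clock is used because it is monotonic, unlike system_clock.
double averageMillis(const std::function<void()>& predictOnce, int runs) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i) predictOnce();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / runs;
}
```

Averaging over many runs on the same database smooths out scheduler noise, which matters on a board as slow as the Galileo; timing the first call separately also exposes any one-off model-loading cost.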