1 | initial version |
The documentation notes that the implementation is based on the paper:
In this paper a 1-Nearest Neighbor classifier with a Chi-Square Distance is used, and so does the implementation. You can easily implement a different classifier, for example by wrapping the LBPHFaceRecognizer to get the histograms out. If you want to implement a cv::FaceRecognizer yourself, you'll find all the implementation details in:
Do I think using an SVM is going to give you much better recognition rates? I don't think so, and none of my experiments showed significantly improved recognition rates by simply employing an SVM. Note: I did of course normalize the data for the kernels and optimized the parameters with a Grid Search, but I didn't employ something like a Chi-Square Kernel. This might be worth trying, although I personally don't think it'll yield much better recognition rates.
I once wrote a Python framework in which you can easily combine feature extraction methods with different classifiers (yeah, yeah, basically I rewrote everything scikit-learn already has):
This was a fast way for me to try out different feature extraction and classifier combinations. If I were to do it now, I would use scikit-learn for almost all of it... and that is what I recommend.
2 | No.2 Revision |
The documentation notes that the implementation is based on the paper:
In this paper a 1-Nearest Neighbor classifier with a Chi-Square Distance is used, and so does the implementation. You can easily implement a different classifier, for example by wrapping the LBPHFaceRecognizer to get the histograms out. If you want to implement it yourself and not rely on the given classes, you'll find all the implementation details in:
Do I think using an SVM is going to give you much better recognition rates? I don't think so, and none of my experiments showed significantly improved recognition rates by simply employing an SVM. Of course I did the normalization for the RBF kernel and optimized the parameters with a Grid Search, but I didn't employ something like a Chi-Square Kernel. This might be worth trying, although I personally don't think it'll yield much better recognition rates.
I once wrote a Python framework in which you can easily combine feature extraction methods with different classifiers (yeah, yeah, basically I rewrote everything scikit-learn already has). It also has a Python implementation of Local Binary Pattern Histograms:
This has always been a fast way for me to validate the performance of feature extraction and classifier combinations. If I were to do it now, I would use scikit-learn for almost all of it. And that is what I recommend.
3 | No.3 Revision |
The documentation notes that the implementation is based on the paper:
In this paper a 1-Nearest Neighbor classifier with a Chi-Square Distance is used, and so does the implementation. You can easily implement a different classifier, for example by wrapping the LBPHFaceRecognizer to get the histograms out. If you want to implement it yourself and not rely on the given classes, you'll find the implementation in:
Do I think using an SVM is going to give you much better recognition rates? I don't think so, and none of my experiments showed significantly improved recognition rates by simply employing an SVM. Of course I did the normalization for the RBF kernel and optimized the parameters with a Grid Search, but I didn't employ something like a Chi-Square Kernel.
I once wrote a Python framework in which you can easily combine feature extraction methods with different classifiers (yeah, yeah, basically I rewrote everything scikit-learn already has). It also has a Python implementation of Local Binary Pattern Histograms:
This has always been a fast way for me to validate the performance of feature extraction and classifier combinations. If I were to do it now, I would use scikit-learn for almost all of it. And that is what I recommend, simply because with Python it is much faster to prototype. Once you are confident the algorithm works as expected, I would go ahead and implement it with OpenCV.
4 | No.4 Revision |
The documentation notes that the implementation is based on the paper:
In this paper a 1-Nearest Neighbor classifier with a Chi-Square Distance is used, and so does the implementation. You can easily implement a different classifier, for example by wrapping the LBPHFaceRecognizer to get the histograms out. If you want to implement it yourself and not rely on the given classes, you'll find the implementation in:
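To make the classification step concrete: a 1-Nearest Neighbor classifier with the Chi-Square distance is only a few lines. This is a minimal sketch (the function names are mine, not OpenCV's), operating on histograms like the ones you'd get out of the LBPHFaceRecognizer:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-Square distance between two histograms (smaller = more similar)."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def predict_1nn(train_hists, train_labels, query_hist):
    """Return the label of the training histogram closest to the query."""
    dists = [chi_square_distance(h, query_hist) for h in train_hists]
    return train_labels[int(np.argmin(dists))]
```

Swapping in another distance (e.g. histogram intersection) is just a matter of replacing `chi_square_distance`.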
Do I think using an SVM is going to give you much better recognition rates? I don't think so, and none of my experiments showed significantly improved recognition rates by simply employing an SVM. Of course I have used all the kernel functions libsvm has, did the normalization and optimized the parameters with a Grid Search. I didn't employ something like a Chi-Square Kernel, which might work better for histograms.
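If you want to run that kind of experiment yourself, scikit-learn makes it short: a grid-searched RBF-kernel SVM with normalization, and a Chi-Square kernel plugged in as a precomputed Gram matrix. The data and the parameter grid below are just placeholders, not values from my experiments:

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy histogram-like data: 40 samples, 16 bins, two subjects
rng = np.random.RandomState(0)
X = rng.rand(40, 16)
y = np.repeat([0, 1], 20)

# RBF-kernel SVM: normalize, then grid-search C and gamma
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1]},
    cv=3,
)
grid.fit(X, y)

# Chi-Square kernel SVM via a precomputed Gram matrix
K = chi2_kernel(X, gamma=0.5)
chi2_svm = SVC(kernel="precomputed").fit(K, y)
```

Note that `chi2_kernel` expects non-negative features, which histograms satisfy by construction.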
I once wrote a Python framework in which you can easily combine feature extraction methods with different classifiers (yeah, yeah, basically I rewrote everything scikit-learn already has). It also has a Python implementation of Local Binary Pattern Histograms:
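For the idea behind the operator, here is a minimal, unoptimized sketch of the basic 3x3 LBP followed by a histogram. This is an illustration, not the framework's actual code:

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Basic 3x3 Local Binary Pattern codes, summarized as a normalized histogram."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # the eight neighbors, ordered clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        # set the bit if the neighbor is at least as bright as the center
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

The LBPH scheme then splits the image into a grid of cells, computes one such histogram per cell, and concatenates them into the feature vector.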
This has always been a fast way for me to validate the performance of feature extraction and classifier combinations. If I were to do it now, I would use scikit-learn for almost all of it. And that is what I recommend, simply because with Python it is much faster to prototype. Once you are confident the algorithm works as expected, I would go ahead and implement it with OpenCV.
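With scikit-learn, trying out such combinations boils down to a few pipelines. A sketch with synthetic stand-in data (the feature extractors and classifiers here are just examples, pick your own):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(60, 64)     # stand-in for flattened face features
y = np.arange(60) % 3    # stand-in subject labels, balanced over 3 classes

# each combination is a single pipeline; swapping a part is a one-line change
combos = {
    "pca+1nn": make_pipeline(PCA(n_components=10),
                             KNeighborsClassifier(n_neighbors=1)),
    "pca+svm": make_pipeline(PCA(n_components=10), SVC(kernel="rbf")),
}

# cross-validated mean accuracy per combination
results = {name: cross_val_score(model, X, y, cv=3).mean()
           for name, model in combos.items()}
print(results)
```

Once the winning combination is clear from numbers like these, porting just that one pipeline to OpenCV is a much smaller job.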