I posted this as a comment on another question, but didn't get any answers, so I'm trying again here.
I am using the eigenfaces FaceRecognizer to identify faces. I would like to be able to scale up to thousands of faces (or more). My idea is simple (see the sketch after this list):
- Create multiple models, each trained on a limited number of faces (e.g. a maximum of 100 different people per model)
- Train each model separately
- When presented with a new face, attempt identification using each model separately
- Take the one with the best match.
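
To make the idea concrete, here is a rough sketch in Python of what I have in mind. It assumes OpenCV's contrib face module (cv2.face.EigenFaceRecognizer_create in recent versions; cv2.createEigenFaceRecognizer in 2.4), grayscale images already cropped to a common size, and a made-up cap MAX_PEOPLE_PER_MODEL. It is only an illustration of the partitioning, not a finished implementation:

    import cv2
    import numpy as np

    MAX_PEOPLE_PER_MODEL = 100  # assumed cap per sub-model

    class PartitionedRecognizer:
        def __init__(self):
            # each partition keeps its own training data and its own recognizer
            self.partitions = []

        def add_person(self, images, label):
            """Add a person to the newest partition and retrain only that partition."""
            if not self.partitions or len(self.partitions[-1]["people"]) >= MAX_PEOPLE_PER_MODEL:
                self.partitions.append({"images": [], "labels": [], "people": set(),
                                        "model": cv2.face.EigenFaceRecognizer_create()})
            part = self.partitions[-1]
            part["images"].extend(images)
            part["labels"].extend([label] * len(images))
            part["people"].add(label)
            # EigenFaceRecognizer.train() replaces the model, so retrain this
            # (small) partition from its stored images; other partitions are untouched.
            part["model"].train(part["images"], np.array(part["labels"], dtype=np.int32))

        def predict(self, image):
            """Query every partition and keep the match with the smallest distance."""
            best_label, best_distance = -1, float("inf")
            for part in self.partitions:
                label, distance = part["model"].predict(image)  # lower distance = closer match
                if distance < best_distance:
                    best_label, best_distance = label, distance
            return best_label, best_distance

In a distributed setup, each partition's predict() call could run on a separate machine and only the (label, distance) pairs would need to be compared centrally.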
Has anyone taken this approach? Are there any non-obvious issues that would make it less reliable than having all the people in a single model?
The motivation for the approach is to get around two limitations of eigenfaces:
- The time needed to retrain when a new person is added; with this approach, only a single, relatively small model needs to be retrained
- The memory usage of a large model; with this approach, the models can be spread across multiple machines, etc.
Any comments would be appreciated.