FaceRecognizer - using multiple models

asked 2016-04-22 08:15:09 -0500

logidelic

updated 2016-04-22 08:15:52 -0500

I posted this as a comment on another question but didn't get any answers, so I'm trying again.

I am using the Eigenfaces FaceRecognizer to identify faces. I would like to be able to scale up to thousands of faces (or more). My idea is simple:

  • Create multiple models, each trained on a limited number of faces (e.g. a maximum of 100 different people per model)
  • Train each model separately
  • When presented with a new face, attempt identification using each model separately
  • Take the one with the best match.
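The dispatch-and-pick-best step above can be sketched in a few lines of Python. This assumes each shard follows OpenCV's recognizer convention, where `predict()` returns a `(label, confidence)` pair and *lower* confidence means a closer match; the `ShardModel` class below is just a stand-in for a trained `cv2.face.EigenFaceRecognizer` so the sketch is self-contained and runnable.

```python
# Sketch: query several independently trained recognizer shards and keep
# the best (lowest-distance) prediction. Assumes the OpenCV convention
# that predict() returns (label, confidence) with LOWER confidence = better.

def best_match(shards, face):
    """shards: objects with predict(face) -> (label, confidence).
    Returns (shard_index, label, confidence) of the closest match."""
    best = None
    for i, model in enumerate(shards):
        label, conf = model.predict(face)
        if best is None or conf < best[2]:
            best = (i, label, conf)
    return best

# Hypothetical stand-in for a trained cv2.face.EigenFaceRecognizer shard.
class ShardModel:
    def __init__(self, answers):
        self.answers = answers  # maps a query to a (label, confidence) pair
    def predict(self, face):
        return self.answers[face]

shard_a = ShardModel({"query": (12, 830.0)})
shard_b = ShardModel({"query": (57, 410.0)})  # closer match
print(best_match([shard_a, shard_b], "query"))  # -> (1, 57, 410.0)
```

One caveat worth flagging: distances from separately trained Eigenface models live in different PCA subspaces, so confidences from different shards are not strictly comparable; a per-shard threshold or some score normalization may be needed before "take the best match" is reliable.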

Has anyone taken this approach? Are there any non-obvious things to consider that would make this less reliable than having all the people in a single model?

The motivation for the approach is to get around two limitations of Eigenfaces:

  • The time needed to retrain when a new person is added; with this approach, only a single (relatively) small model needs to be retrained
  • Memory usage of a large model; with this approach, the data can be spread over multiple machines, etc.

Any comments would be appreciated.


Comments

"I would like to be able to scale up to 1000's (or more) faces." -- this should already be possible. no idea, where your machine will catch fire, but probably somewhere beyond 10 or 20k.

berak ( 2016-04-22 08:38:03 -0500 )

Well, OK, in that case I would like to scale up to 1 million faces. The point is that the model needs to be retrained every time you add another person, and that takes too long with 1 million faces. :)

logidelic ( 2016-04-22 09:24:01 -0500 )

Why Eigenfaces, then? LBPH does not try to build a "global" model; you can update it incrementally, and it makes no difference if you dispatch the data to multiple instances.

berak ( 2016-04-22 09:58:02 -0500 )
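The point about LBPH being updatable can be illustrated with a toy sketch. Conceptually, an LBPH model is just a collection of per-image histograms with labels, and prediction is a nearest-neighbor search (OpenCV uses a chi-square distance), so adding a person only appends histograms. The class below mirrors the shape of the `train`/`update`/`predict` API of `cv2.face.LBPHFaceRecognizer`, but it is a simplified illustration, not OpenCV's implementation.

```python
# Toy illustration of why LBPH updates cheaply: the "model" is just a
# list of (histogram, label) pairs, and prediction is nearest-neighbor
# search with a chi-square distance. Adding a person appends samples;
# nothing is recomputed (unlike Eigenfaces, which rebuilds a PCA basis).

def chi_square(h1, h2, eps=1e-10):
    # Chi-square distance between two histograms (lower = more similar).
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

class ToyLBPH:
    def __init__(self):
        self.samples = []  # list of (histogram, label) pairs

    def update(self, histograms, labels):
        # Incremental: just append -- cost is O(new samples),
        # not O(all samples), so adding a person is cheap.
        self.samples.extend(zip(histograms, labels))

    def predict(self, histogram):
        # Nearest neighbor over all stored samples.
        label, dist = min(
            ((lbl, chi_square(histogram, h)) for h, lbl in self.samples),
            key=lambda t: t[1],
        )
        return label, dist

model = ToyLBPH()
model.update([[4, 1, 0], [0, 2, 5]], [10, 11])  # initial training
model.update([[3, 3, 3]], [12])                 # new person: cheap append
print(model.predict([0, 2, 4]))                 # nearest to person 11
```

The flip side of this design is that prediction time grows linearly with the number of stored samples, which is exactly why sharding across machines works naturally for LBPH: each instance searches its own slice of the data.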

My 2ct: if you really have gazillions of persons, don't waste your time with OpenCV's facerec. Rather, move on to distributed TensorFlow, deep learning, or just outsource it (to folks with better hardware).

berak ( 2016-04-22 10:06:27 -0500 )

Re LBPH: my sense after playing with it a bit was that it was not as good, but maybe I'm wrong?

Re why not outsource: Because I like to do things myself. :)

Re distributed TensorFlow: any specific pointers? I know nothing about neural nets and deep learning. I guess I had better start learning deeply.

Thank you for the suggestions!

logidelic ( 2016-04-22 10:22:40 -0500 )
  • "Because I like to do things myself" -- sure, so do i. but in the end, it's unprofessional, unless you're in a toy world (and don't get paid for delivering the correct solution)

  • "that it was not as good, but maybe I'm wrong?" -- it all depends on your data, preprocessing, etc. Actually, Eigenfaces rank pretty low here.

  • You need a good testbed for this: different datasets, k-fold cross-validation, etc.

berak ( 2016-04-22 10:26:59 -0500 )