
Revision history

  • "each with 1 photo" - you're plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users, each with 1 photo (forward facing); how long will it take for the results to return?" a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)
  • "each with 1 photo" - you're plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users, each with 1 photo (forward facing); how long will it take for the results to return?" a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)
  • "each with 1 photo" - you're your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users, each with 1 photo (forward facing); how long will it take for the results to return?" a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users, each with 1 photo (forward facing); users ... how long will it take for the results to return?" a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen)eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 images
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 imagesimages (lbph is far less restricted wrt. image count))
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: time_for_getting_the_features + 100,000 x (time_for_getting_the_features + time_for_norm_or_compareHist) time_for_norm_or_compareHist (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 images (lbph is far less restricted wrt. image count))
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: time_for_getting_the_features + 100,000 x time_for_norm_or_compareHist (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 images (lbph is far less restricted wrt. image count))
  • "...would be a quad core" - unfortunately, not much of it is optimized for multicore (yea, norm(), compareHist() or such functions used are, but not the recogniti
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: time_for_getting_the_features + 100,000 x time_for_norm_or_compareHist (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 images (lbph is far less restricted wrt. image count))
  • "...would be a quad core" - unfortunately, not much of it is optimized for multicore (yea, norm(), (norm(), compareHist() or such functions used are, but not the recognitirecognition process itself.
  • "each with 1 photo" - your plan is doomed. you need like a good dozen (better two) per person.
  • "If we take a database of 100,000 users ... how long will it take for the results to return?" - a plain 1-nearest neighbour search is applied for all of opncv's algos here, so: time_for_getting_the_features + 100,000 x time_for_norm_or_compareHist (no worries, that's still fast. training on 100,000 images will set you back for years (or will just explode) when using a 'global' model (like fisher or eigen) an average pc hits the memory/cpu contraints for a pca at ~ 20k 100x100 images (lbph is far less restricted wrt. image count))
  • "...would be a quad core" - unfortunately, not much of it is optimized for multicore (norm(), compareHist() or such are, but not the recognition process itself.itself).