I'm throwing the kitchen sink at face detection: running every *face*.xml CascadeClassifier (plus the eye, mouth, and smile finders!) on every frame of a video to see which performs best. I noticed the docs mention image scaling as a large chunk of the CPU effort, which got me wondering:
- Is it really? Or does the majority of the time go into the actual detection pass at each scaled size?
- If it IS a big part of the time, can I somehow reuse the scaling step across all my various classifiers?
If there's an easy way to do it (like pre-calculating a pyramid?), great! If not, no worries; this doesn't have to be real-time.
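For reference, here's roughly the shape of what I'm running now; the cascade file names and video path are just placeholders for the full set I actually loop over:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

import java.util.LinkedHashMap;
import java.util.Map;

public class CascadeShootout {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder paths -- in practice I load every *face*.xml plus the
        // eye/mouth/smile cascades shipped with OpenCV.
        Map<String, CascadeClassifier> cascades = new LinkedHashMap<>();
        cascades.put("frontalface_default",
                new CascadeClassifier("haarcascade_frontalface_default.xml"));
        cascades.put("frontalface_alt2",
                new CascadeClassifier("haarcascade_frontalface_alt2.xml"));
        cascades.put("smile",
                new CascadeClassifier("haarcascade_smile.xml"));

        VideoCapture cap = new VideoCapture("input.mp4");
        Mat frame = new Mat();
        Mat gray = new Mat();

        while (cap.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist(gray, gray);

            // Every classifier re-does its own internal scaling of the same
            // frame -- this is the work I'm wondering about sharing.
            for (Map.Entry<String, CascadeClassifier> e : cascades.entrySet()) {
                MatOfRect hits = new MatOfRect();
                e.getValue().detectMultiScale(gray, hits);
                for (Rect r : hits.toArray()) {
                    System.out.printf("%s: %d,%d %dx%d%n",
                            e.getKey(), r.x, r.y, r.width, r.height);
                }
            }
        }
        cap.release();
    }
}
```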
Environment: Java/Kotlin, OpenCV 3.x
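And just to make the "pre-calculating a pyramid" idea concrete, this is the kind of thing I was imagining: downscale the grey frame once per level and hand the same level to every classifier, then map the hits back up. The training-window size and the minSize == maxSize trick are my own guesses; whether that actually stops detectMultiScale from rebuilding its internal pyramid is part of what I'm asking.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

import java.util.ArrayList;
import java.util.List;

public class SharedPyramidSketch {

    /** One pre-scaled copy of the frame plus the factor to map detections back. */
    static class Level {
        final Mat image;
        final double scale; // original coordinate = level coordinate / scale
        Level(Mat image, double scale) { this.image = image; this.scale = scale; }
    }

    /** Build the pyramid once per frame so every classifier can reuse it. */
    static List<Level> buildPyramid(Mat gray, double factor, int minSide) {
        List<Level> levels = new ArrayList<>();
        double scale = 1.0;
        while (gray.rows() * scale >= minSide && gray.cols() * scale >= minSide) {
            Mat resized = new Mat();
            Imgproc.resize(gray, resized, new Size(), scale, scale, Imgproc.INTER_LINEAR);
            levels.add(new Level(resized, scale));
            scale /= factor;
        }
        return levels;
    }

    /**
     * Run one cascade over the pre-built pyramid. Pinning minSize == maxSize to the
     * cascade's training window is my attempt to make each call a single-scale pass;
     * I don't know whether that assumption actually holds.
     */
    static List<Rect> detectOnPyramid(CascadeClassifier cascade, List<Level> levels,
                                      Size trainedWindow) {
        List<Rect> results = new ArrayList<>();
        for (Level level : levels) {
            MatOfRect hits = new MatOfRect();
            cascade.detectMultiScale(level.image, hits, 1.1, 3, 0,
                    trainedWindow, trainedWindow);
            // Rescale hits from pyramid-level coordinates back to the original frame.
            for (Rect r : hits.toArray()) {
                results.add(new Rect(
                        (int) (r.x / level.scale), (int) (r.y / level.scale),
                        (int) (r.width / level.scale), (int) (r.height / level.scale)));
            }
        }
        return results;
    }
}
```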