Avoid retraining a model when executing a program?

I've started using OpenCV for some image processing projects and I'm wondering if there's a way to save time when it comes to processing test images against a database of faces.

Issue: I have 10 pictures each of subjects A, B, and C in folders on the desktop, and a list maps each subject to their name. The program navigates to the first subject's folder, trains on their face and the name from the list, then moves on to the next subject, rinse and repeat until complete. Once training is done, a test image is given to the program to see who it thinks the subject is (Person A, B, or C). The test image is the only thing that changes between runs of the script.

So far it's fairly successful at predicting who each subject is, but the training time alone makes up a fair bit of the execution time.

Question: Is there a way to make it so the model doesn't have to retrain every single time? I figured this is what the cascade files (haarcascade_frontalface_default.xml, lbpcascade_frontalface.xml, etc.) are for in terms of prediction accuracy, but I haven't been able to find a clear-cut answer for a newbie like myself. Would each subject need their own .xml cascade file?

I'm fairly new to ML and image processing so even pointing me to a similar post, forum, or book would be awesome. Thanks!

Edit: I should mention that I currently use the EigenFaceRecognizer on the test images in the prediction stage after the model has been trained on the images of each of the subjects.