Improving FPS of face recognition by using threads

So with the usual face recognition demo code from face/samples (shown below), the more faces it detects, the lower the FPS gets.

I was thinking I could either start a thread right before

detectMultiScale

and close it afterwards, or start one after:

for(int i = 0; i < faces.size(); i++) {

and close it once the work in that loop is completed, so that each face gets its own separate thread. Would this be a good idea?
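Roughly what I have in mind for the per-face version, as a rough sketch only, assuming C++11 &lt;thread&gt; is available; recognize_face and process_faces are made-up names, and the actual model->predict() call from the demo is only indicated by a comment:

#include <functional>   // std::ref / std::cref
#include <mutex>
#include <thread>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

using namespace cv;
using namespace std;

// Hypothetical worker: handles one face on its own thread. It works on a
// private copy of the face ROI; only the drawing on the shared frame is
// serialized through a mutex.
void recognize_face(const Mat& gray, Rect face_rect, Mat& original, mutex& draw_mutex)
{
    Mat face = gray(face_rect).clone();   // private copy, safe off the main thread
    Mat face_resized;
    // 92x112 stands in for the training image size (im_width x im_height in the demo)
    resize(face, face_resized, Size(92, 112), 1.0, 1.0, INTER_CUBIC);

    // int prediction = model->predict(face_resized);   // recognition work goes here

    lock_guard<mutex> lock(draw_mutex);   // only one thread draws at a time
    rectangle(original, face_rect, Scalar(0, 255, 0), 1);
}

// One thread per detected face, all joined before the frame is shown.
void process_faces(const Mat& gray, const vector<Rect>& faces, Mat& original)
{
    mutex draw_mutex;
    vector<thread> workers;
    for (size_t i = 0; i < faces.size(); i++)
        workers.emplace_back(recognize_face, cref(gray), faces[i],
                             ref(original), ref(draw_mutex));
    for (auto& t : workers)
        t.join();
}

I realize detectMultiScale itself and imshow would still run on the main thread, so for the first option the detection would probably have to run asynchronously for the whole frame (e.g. with std::async) rather than being split up internally.

For reference, the relevant part of my current single-threaded loop: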

for(;;) {
    char key;
    label_unknown = 0;
    label_recognized = 0;

    cap >> frame;
    // Clone the current frame:
    Mat original = frame.clone();
    // Convert the current frame to grayscale:
    Mat gray;
    if(original.empty()) {
        // Empty frame; hopefully this prevents the cvtColor crash.
        cout << "frame type:" << original.type() << endl;
        cout << "frame empty:" << original.empty() << endl;
        cout << "frame depth:" << original.depth() << endl;
        cout << "frame channels:" << original.channels() << endl;
        break;
    }

    cvtColor(original, gray, COLOR_BGR2GRAY);
    // equalizeHist(gray, eq_image);
    // Find the faces in the frame:
    vector< Rect_<int> > faces;
    haar_cascade.detectMultiScale(gray, faces, 1.2, 4, 0 | CASCADE_SCALE_IMAGE,
                                  Size(min_face_size, min_face_size),
                                  Size(max_face_size, max_face_size));
    // At this point you have the position of the faces in
    // faces. Now we'll get the faces, make a prediction and
    // annotate it in the video. Cool or what?

    //NEW
    Mat face_resized;
    //NEW
    for(int i = 0; i < faces.size(); i++) {