Video face detection using FFmpeg and RTSP slips frames when sleeping
Hello! I have a problem. I have a stream (for now from my webcam) on which I want to recognize faces. It works only if I do not sleep in the while loop.
But if I add a delay into the capture loop and sleep for 30 ms, the next read does not give me the current frame; the stream slips behind. What is weird is that if I use /dev/video0 directly it works, but if I read the stream via FFmpeg, the slipping happens.
How can this happen?
try {
    while (running) {
        capture >> frame;
        if (frame.empty()) {
            continue;
        }
        Mat frame1 = frame.clone();
        vector<Rect> facesResult = detectAndDraw(frame1, cascade);
        facesMutex.lock();
        faces = facesResult;
        facesMutex.unlock();
        // GETTING THIS SLIPPING IF I USE SLEEPING:
        //waitKey(250);
        //std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    }
} catch (std::exception &e) {
    facesMutex.unlock();
    Napi::AsyncWorker::SetError(e.what());
}
It looks like that while the thread sleeps, several frames elapse: out of frames 1..2..3..4..5 I would expect to get the 5th, but even though I slept, the next read gives me the 2nd...
Is it because it is a real-time network stream rather than a real device, so it can only hand me the frames in order instead of just the current one?
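Would something like the sketch below be the right way to deal with it? This is only a guess based on my assumption that VideoCapture queues decoded frames for network streams. grab() and retrieve() are standard OpenCV calls, but the drain count of 5 is an arbitrary number I picked, and detectLatest/running are placeholders for my own code.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <chrono>
#include <thread>

// Sketch: after each sleep, grab() a few times to skip the frames that piled
// up in the capture buffer, then retrieve() only the newest one for detection.
void detectLatest(cv::VideoCapture &capture, std::atomic<bool> &running) {
    cv::Mat frame;
    while (running) {
        // Drop whatever is already buffered (5 is a guess, not measured).
        for (int i = 0; i < 5; ++i) {
            capture.grab();
        }
        if (!capture.retrieve(frame) || frame.empty()) {
            continue;
        }
        // ... run detectAndDraw() on `frame` here ...
        std::this_thread::sleep_for(std::chrono::milliseconds(30));
    }
}

I have also seen capture.set(cv::CAP_PROP_BUFFERSIZE, 1) mentioned for shrinking the internal buffer, but as far as I know not every backend honours that property, so I am not sure it applies to an FFmpeg/RTSP capture.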