This is not an implementation, but I would like to share some tips about drowsiness detection in drivers that may help other people.
Detecting drowsiness in drivers is not an easy task. If you want to develop a robust algorithm,
the first step is to robustly detect the face and track it over time. Different illumination conditions and
different poses should be taken into account. Using multiple cameras, the operational
range of head pose tracking can be extended, mitigating failures under large head motions.
In [1], head pose is estimated independently from each camera perspective and tracked over a wide operational range
of yaw rotation using both perspectives. To handle camera selection and hand-off, the authors had
success using the yaw angle as the hand-off cue.
See also the publication 'Continuous head movement estimator for driver
assistance: Issues, algorithms, and on-road evaluations' [2].
- [1] E. Ohn-Bar, A. Tawari, S. Martin, and M. M. Trivedi, "On surveillance for safety critical events:
In-vehicle video networks for predictive driver assistance systems."
- [2] A. Tawari, S. Martin, and M. M. Trivedi, "Continuous head movement estimator for driver
assistance: Issues, algorithms, and on-road evaluations," 2014.
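As a rough illustration of the yaw-based hand-off idea, here is a minimal sketch, not the method from [1]: the per-camera pose estimator `estimate_yaw_deg` is a hypothetical callable you would have to provide (e.g. landmark-based PnP), and picking the most frontal view is my assumption about a reasonable selection rule.

```python
def select_camera(frames, estimate_yaw_deg):
    """Pick the camera whose view is most frontal for the current frames.

    frames: dict camera_id -> image
    estimate_yaw_deg: callable(image) -> yaw angle in degrees, or None
    Returns (camera_id, yaw) for the camera with the smallest |yaw|.
    """
    best_id, best_yaw = None, None
    for cam_id, frame in frames.items():
        yaw = estimate_yaw_deg(frame)
        if yaw is None:  # head pose could not be estimated in this view
            continue
        if best_yaw is None or abs(yaw) < abs(best_yaw):
            best_id, best_yaw = cam_id, yaw
    return best_id, best_yaw
```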
Secondly, you should detect both eyes and the mouth. For this task, you need the positions of the eyes and
the mouth. Check the publication 'One millisecond face alignment with an ensemble of regression trees' [3].
There is an open implementation of this paper; you could also use flandmark (http://cmp.felk.cvut.cz/~uricamic/fla...).
- [3] V. Kazemi and J. Sullivan, "One millisecond face alignment with an ensemble
of regression trees," in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference
on, pp. 1867-1874, IEEE, 2014.
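A minimal sketch of extracting the eye and mouth positions with dlib, whose shape predictor implements the regression-tree alignment of [3]; the 68-point model file name is an assumption and the model has to be downloaded separately.

```python
import cv2
import dlib

# The 68-point model file path is an assumption; download it separately.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_and_mouth_landmarks(image_bgr):
    """Return eye and mouth landmark points for every detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for rect in detector(gray, 1):
        shape = predictor(gray, rect)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        results.append({
            "right_eye": pts[36:42],  # indices follow the 68-point convention
            "left_eye":  pts[42:48],
            "mouth":     pts[48:68],
        })
    return results
```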
Thirdly, you should analyze the eyes and mouth. You should check PERCLOS as an indicator of drowsiness.
PERCLOS is widely recognized as the most effective vision-based fatigue measure, but other characteristics
can be extracted from the eyes as well: eyelid distance, blink rate, blink speed,
gaze direction, and saccadic eye movements are commonly used cues for detecting drowsiness.
PERCLOS is defined as the percentage of time the pupils are 80% or more occluded over a
specified time interval. In other words, a cut-off value of 80% eyelid closure is used to compute the proportion of time during which the eyes remain fully or almost fully closed. For example, in [4], PERCLOS and the degree of mouth opening are extracted and an SVM classifier is used to identify drowsiness.
- [4] M. Sacco and R. A. Farrugia, "Driver fatigue monitoring system using support vector machines,"
in Communications Control and Signal Processing (ISCCSP), 2012 5th International Symposium on,
pp. 1-5, IEEE, 2012.
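A minimal sketch of the PERCLOS computation over a sliding window, assuming you already produce a per-frame eyelid-closure estimate in [0, 1] from your eye analysis; the window length and alarm threshold below are assumptions to be tuned on your own data.

```python
from collections import deque

class Perclos:
    """PERCLOS over a sliding window of frames.

    closure is a per-frame eyelid-closure estimate in [0, 1]
    (1.0 = fully closed); how you obtain it (eyelid distance,
    eye-region classifier, ...) is up to your pipeline.
    """
    def __init__(self, window_frames, closed_threshold=0.8):
        self.closed = deque(maxlen=window_frames)
        self.closed_threshold = closed_threshold  # the 80% cut-off

    def update(self, closure):
        self.closed.append(closure >= self.closed_threshold)
        return self.value()

    def value(self):
        if not self.closed:
            return 0.0
        return sum(self.closed) / len(self.closed)

# e.g. a 60-second window at 30 FPS:
# perclos = Perclos(window_frames=30 * 60)
# drowsy = perclos.update(closure) > 0.15  # alarm threshold is an assumption
```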
As drowsiness often follows fatigue, yawning detection can be an important factor
to take into account, because it is a strong signal that the driver may be affected by drowsiness within a short period of
time. The openness of the mouth can be represented by the ratio of its height to its width.
For example, in [5], two cameras are used to detect fatigue in real ...
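A minimal sketch of that height/width ratio, assuming mouth landmarks in the 68-point convention (e.g. from the landmark sketch above); the yawn threshold is a guess, not a published value.

```python
import math

def mouth_openness(mouth_pts):
    """Height/width ratio of the mouth from 68-point landmarks.

    mouth_pts: the 20 mouth points (indices 48-67). Inner-lip corners
    60/64 and mid points 62/66 are used.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    inner = mouth_pts[12:]            # points 60-67 (inner lip contour)
    width = dist(inner[0], inner[4])  # corners 60 and 64
    height = dist(inner[2], inner[6]) # mid points 62 and 66
    return height / width if width > 0 else 0.0

# yawning = mouth_openness(landmarks["mouth"]) > 0.6  # threshold is a guess
```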
There is no ready-made code for that! You will need to build, for example, cascade classifiers that detect eye and mouth regions and suit your camera setup, and then analyze the information inside them. You could also use facial landmark libraries for that purpose.
What do you mean by a drowsy eye? :)
An eye that is opening and closing ^^
Like between two consecutive blinks?
No, like when he is falling asleep and keeps his eyes closed for a longer period of time!
OK, but for that you need a camera with a high frame rate... I do not think 30 FPS is enough...
o_O I do not think so, it is not blinking! It is actually closing your eyes; at 30 FPS you already evaluate a new frame every ~0.033 seconds ...
Yes, and if you capture a blink, you'll get closed eyes for longer than normal... I have seen cases of false alarms because of that, or because of lighting, if the camera is not IR.
OpenCV has a Haar cascade to detect eyes, but it isn't very fast. My tip: search for the face with the LBP cascade first, create a sub-image from the upper half of the face and run eye detection on that part. If you don't find an eye in 5-10 images in a row, raise the alarm. At 30 FPS, 10 images would mean the eyes were closed for about 0.33 seconds.
With a good setup you should detect the face and eyes in less than 25 ms, which is still faster than the frame interval of a 30 FPS camera.
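A minimal sketch of this tip with OpenCV cascades; the cascade file paths, camera index and detection parameters are assumptions you would adapt to your setup.

```python
import cv2

# Cascade file locations are assumptions; lbpcascade_frontalface.xml ships
# with the OpenCV sources (data/lbpcascades), haarcascade_eye.xml with the
# haarcascades data directory.
face_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

cap = cv2.VideoCapture(0)   # camera index is an assumption
misses, MAX_MISSES = 0, 10  # ~0.33 s of closed eyes at 30 FPS

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    eye_found = False
    for (x, y, w, h) in faces:
        upper_half = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
        eyes = eye_cascade.detectMultiScale(upper_half, 1.1, 5)
        if len(eyes) > 0:
            eye_found = True
            break
    misses = 0 if eye_found else misses + 1
    if misses >= MAX_MISSES:
        print("ALARM: no eye detected for", misses, "frames")
```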
You have two eye detectors: one that finds square regions and one that finds rectangular regions. The square one should not find closed eyes, while the rectangular one does find them. I have tested this, but I was not very pleased with the results, so I trained an SVM classifier on open/closed eyes (the rectangular detections); that one came closer to what the application needs. But as I said, you will encounter a lot of lighting changes if you are filming the driver with a normal camera, and these will strongly influence your results.
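A minimal sketch of such an open/closed-eye SVM using scikit-learn on flattened grayscale patches; the patch size, features and linear kernel are assumptions, not the exact setup described above, and how you collect the labeled eye crops is up to you.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

PATCH_SIZE = (24, 24)  # assumed patch size

def to_feature(eye_bgr):
    """Flatten a resized, histogram-equalized grayscale eye crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, PATCH_SIZE)
    gray = cv2.equalizeHist(gray)  # crude robustness to lighting changes
    return gray.astype(np.float32).ravel() / 255.0

def train_eye_svm(open_crops, closed_crops):
    """Train a linear SVM on labeled open (1) / closed (0) eye crops."""
    X = np.array([to_feature(c) for c in open_crops + closed_crops])
    y = np.array([1] * len(open_crops) + [0] * len(closed_crops))
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf

# is_open = clf.predict([to_feature(eye_crop)])[0] == 1
```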