
eye landmark points

asked 2016-12-02 14:34:25 -0600

sarmad

updated 2016-12-09 13:38:17 -0600

Hi

I'm using the dlib facial landmark detector to detect eye blinks. How can the eye landmarks be exported to a file?

I need the eye landmarks to calculate the ratio between the height and width of the eye, and to use an SVM to classify blinks.

Update: when I try to write the landmark points to a file, different values are saved than the landmarks displayed in the terminal window. How can this be fixed?

Thanks

#include <dlib/opencv.h>
#include <opencv2/highgui/highgui.hpp>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/gui_widgets.h>
#include <fstream>   // for ofstream
#include <cmath>     // for sqrt

using namespace dlib;
using namespace std;

int main()
{
    try
    {
        cv::VideoCapture cap(0);
        if (!cap.isOpened())
        {
            cerr << "Unable to connect to camera" << endl;
            return 1;
        }

        image_window win;
        frontal_face_detector detector = get_frontal_face_detector();
        shape_predictor pose_model;
        deserialize("shape_predictor_68_face_landmarks.dat") >> pose_model;

        // Open the output file once, outside the capture loop.
        // Re-opening it on every frame truncates the file, which is why
        // the saved values did not match the ones printed in the terminal.
        ofstream outputfile("data1.csv");

        while (!win.is_closed())
        {
            cv::Mat temp;
            cap >> temp;

            cv_image<bgr_pixel> cimg(temp);

            // Detect faces
            std::vector<rectangle> faces = detector(cimg);
            // Find the pose of each face.
            std::vector<full_object_detection> shapes;

            for (unsigned long i = 0; i < faces.size(); ++i)
            {
                full_object_detection shape = pose_model(cimg, faces[i]);
                cout << "number of parts: " << shape.num_parts() << endl;

                // In the 68-point annotation scheme, parts 36-41 are the
                // right eye and parts 42-47 are the left eye.
                cout << "Eye landmark points for right eye:" << endl;
                for (unsigned long p = 36; p <= 41; ++p)
                    cout << "pixel position of part " << p << ": "
                         << shape.part(p) << endl;

                cout << endl;

                cout << "Eye landmark points for left eye:" << endl;
                for (unsigned long p = 42; p <= 47; ++p)
                    cout << "pixel position of part " << p << ": "
                         << shape.part(p) << endl;

                // Euclidean distances between the vertical eye landmarks...
                double P37_41_x = shape.part(37).x() - shape.part(41).x();
                double P37_41_y = shape.part(37).y() - shape.part(41).y();
                double p37_41_sqrt = sqrt(P37_41_x * P37_41_x + P37_41_y * P37_41_y);

                double P38_40_x = shape.part(38).x() - shape.part(40).x();
                double P38_40_y = shape.part(38).y() - shape.part(40).y();
                double p38_40_sqrt = sqrt(P38_40_x * P38_40_x + P38_40_y * P38_40_y);

                // ...and between the horizontal eye corners
                double P36_39_x = shape.part(36).x() - shape.part(39).x();
                double P36_39_y = shape.part(36).y() - shape.part(39).y();
                double p36_39_sqrt = sqrt(P36_39_x * P36_39_x + P36_39_y * P36_39_y);

                double EAR = p37_41_sqrt + p38_40_sqrt / 2 * p36_39_sqrt;

                cout << "EAR value =  " << EAR << endl;

                // Write the same value to the file that is shown in the terminal.
                outputfile << EAR << endl;

                shapes.push_back(shape);   // re-use the detection computed above
            }

            win.clear_overlay();
            win.set_image(cimg);
            win.add_overlay(render_face_detections(shapes));
        }
    }
    catch (serialization_error& e)
    {
        cout << "You need dlib's default face landmarking model file to run this example." << endl;
        cout << "You can get it from the following URL: " << endl;
        cout << "   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
        cout << endl << e.what() << endl;
    }
    catch (exception& e)
    {
        cout << e.what() << endl;
    }
}

Comments

did you actually try it?

i don't think that dlib's landmarks will deliver significant enough differences for open/closed eyes (but maybe i'm wrong here).

you'll still need the landmarks to detect the eye position, but imho you'll need some cropped, open/closed dataset of images to train on.

berak ( 2016-12-03 00:23:02 -0600 )

Hi, this paper: http://vision.fe.uni-lj.si/cvww2016/p... used facial landmarks to detect the eyes, and then the eye aspect ratio (EAR) between the height and width of the eye is computed.

sarmad ( 2016-12-03 12:38:21 -0600 )

in the end, you just need to save your EAR ratio (a single float) plus a "label" (open/closed), right ?

(i'm curious, how that'll work - training an SVM on a single feature)

berak ( 2016-12-04 03:08:38 -0600 )

It will only work if the classes are separable. However, in this case I would go for a Normal Bayes classifier or a KNN classifier, which do far better on low-dimensional data.

StevenPuttemans ( 2016-12-07 03:55:37 -0600 )

thanks @StevenPuttemans for your suggestion; does that mean I should use a Normal Bayes classifier or a KNN classifier on the computed eye aspect ratio (EAR)?

sarmad ( 2016-12-07 04:30:44 -0600 )

currently, you're saving all landmarks, but only printing out the eye-ones.

why don't you calculate your EAR right there, and save that ?

berak ( 2016-12-07 04:56:19 -0600 )

@berak Thanks for suggesting this, I will calculate the EAR equation, but does ||p2-p6|| mean Euclidean distance? Any suggestion on how it can be calculated?

sarmad ( 2016-12-08 05:08:32 -0600 )

yes, euclidean distance (L2 norm)

berak ( 2016-12-08 05:19:57 -0600 )
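Written out, that distance is straightforward to compute. A minimal sketch, using a made-up stand-in point type (dlib's own point type exposes the coordinates via `x()`/`y()` in the same way):

```cpp
#include <cmath>

// Stand-in 2-D point for illustration; with dlib you would read
// the coordinates from shape.part(i).x() and shape.part(i).y().
struct Pt { double x, y; };

// Euclidean (L2) distance, i.e. what ||p2 - p6|| denotes in the paper.
double l2_dist(Pt a, Pt b)
{
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}
```

For the eye landmarks, `a` and `b` would be a vertical pair such as parts 37 and 41.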

I have edited the question and included the EAR calculation equation; is it correct?

sarmad ( 2016-12-09 13:39:08 -0600 )

imho, you're missing braces here (section 2.1, Eq. (1) in the paper):

double EAR = (p37_41_sqrt + p38_40_sqrt) / (2 * p36_39_sqrt);

also, don't forget the other eye! ("Since eye blinking is performed by both eyes synchronously, the EAR of both eyes is averaged")

(btw, just curious, what kind of (labelled??) data do you have for this?)

berak ( 2016-12-10 02:01:03 -0600 )
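The corrected formula, computed per eye and then averaged over both eyes, can be sketched independently of dlib. This is a minimal illustration with a made-up point struct; the array order assumes one eye's six landmarks in dlib order (e.g. parts 36..41 for the right eye):

```cpp
#include <cmath>

// Minimal 2-D point used only for this sketch.
struct Pt { double x, y; };

// Euclidean distance between two landmarks.
double dist(Pt a, Pt b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// EAR = (||p2-p6|| + ||p3-p5||) / (2 * ||p1-p4||), Eq. (1) of the paper,
// with the braces placed correctly.
double eye_aspect_ratio(const Pt eye[6])
{
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4]))
         / (2.0 * dist(eye[0], eye[3]));
}

// Blinking is synchronous, so the EAR of both eyes is averaged.
double average_ear(const Pt right[6], const Pt left[6])
{
    return 0.5 * (eye_aspect_ratio(right) + eye_aspect_ratio(left));
}
```

The averaged value would then be the single feature passed to the classifier.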

1 answer


answered 2016-12-04 03:37:34 -0600

berak

updated 2016-12-04 03:38:29 -0600

you can save your train input as csv, like:

    1, 0.2324
    0, 0.1487
    ^  ^
label  EAR value

and later use loadFromCSV() to feed your SVM:

Ptr<SVM> svm = SVM::create();
Ptr<TrainData> tdata = TrainData::loadFromCSV("my.csv",0,0,1);
svm->train(tdata);
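Producing that CSV in the first place only takes an ofstream opened once, with one "label, EAR" row per sample. A minimal sketch (the file name, helper name, and sample values are made up for illustration):

```cpp
#include <fstream>
#include <string>
#include <utility>
#include <vector>

// Append one "label, EAR" row per sample, matching the layout above.
void write_samples(const std::string& path,
                   const std::vector<std::pair<int, double>>& samples)
{
    std::ofstream out(path);   // opened once, not once per frame
    for (const auto& s : samples)
        out << s.first << ", " << s.second << "\n";
}
```

E.g. `write_samples("my.csv", {{1, 0.2324}, {0, 0.1487}})` would produce exactly the two rows shown above.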

Comments

How can eye landmark points be saved into a .csv file?

I applied face_landmark_detection_ex to an image and I got these points:

Test Image

are the eye landmarks correct? how can this be used with a video file or a webcam?

sarmad ( 2016-12-05 03:43:04 -0600 )

yes, imho, the landmarks are correct. now try that with a closed eye image!

"how to save numbers into a txt file" - now, c'mon. that's basic.

berak ( 2016-12-05 03:53:04 -0600 )

I have tried it on a closed eye image and got different eye landmarks, so the eye landmark points are different for open/closed eyes.

I have tried this to save the eye landmarks: Link to code, but I'm getting errors.

sarmad ( 2016-12-05 05:01:10 -0600 )

sooo, what errors ?

berak ( 2016-12-05 05:24:05 -0600 )

unfortunately, the error does not match your code !

but i guess, you missed #include <dlib/opencv/cv_image.h>

berak ( 2016-12-06 19:08:12 -0600 )


Stats

Asked: 2016-12-02 14:34:25 -0600

Seen: 2,634 times

Last updated: Dec 09 '16