
sarmad's profile - activity

2020-02-14 04:29:39 -0600 received badge  Notable Question (source)
2020-02-13 05:02:44 -0600 received badge  Notable Question (source)
2019-04-20 20:53:22 -0600 received badge  Famous Question (source)
2019-04-09 09:22:57 -0600 received badge  Popular Question (source)
2019-03-08 07:54:38 -0600 received badge  Popular Question (source)
2019-01-21 15:17:28 -0600 edited question calculate optical flow between several consecutive frames

calculate optical flow between several consecutive frames Hi, how can I calculate optical flow between several consecutive frames?

2019-01-21 15:03:35 -0600 asked a question calculate optical flow between several consecutive frames

calculate optical flow between several consecutive frames Hi, how can I calculate optical flow between several consecutive frames?

2018-10-16 20:25:00 -0600 received badge  Notable Question (source)
2018-10-09 03:55:54 -0600 received badge  Popular Question (source)
2018-04-12 03:50:30 -0600 received badge  Popular Question (source)
2018-04-09 00:56:59 -0600 received badge  Popular Question (source)
2017-09-10 23:58:57 -0600 received badge  Notable Question (source)
2017-05-07 07:54:44 -0600 received badge  Popular Question (source)
2017-01-22 12:33:18 -0600 commented answer Plotting values from video frames

@kbarni, I have tried your suggestions, but it is the same problem I described.

2017-01-19 11:08:41 -0600 answered a question Plotting values from video frames

I have added template<typename T> void graphArray to my code below, where double data[15784]; stores the double values and 15784 is the number of frames in the video file. The code compiles without errors, but when I run it, the plot window appears, stays empty, and becomes unresponsive after a while.

 #include <dlib/opencv.h>
 #include <opencv2/highgui/highgui.hpp>
 #include <dlib/image_processing/frontal_face_detector.h>
 #include <dlib/image_processing/render_face_detections.h>
 #include <dlib/image_processing.h>
 #include <dlib/gui_widgets.h>

using namespace dlib;
using namespace std;
using namespace cv;


template<typename T> void graphArray(const char *title, T* data, int n, int height, bool cont)
{
    Mat img(height + 1, n, CV_8UC3);
    img.setTo(Scalar(255, 255, 255));

    T max = 0;
    for (int x = 0; x < n; x++)
        if (data[x] > max) max = data[x];

    if (!cont) {
        for (int x = 0; x < n; x++)
            img.at<Vec3b>((int)(height - data[x] * height / max), x) = Vec3b(255, 0, 0);
    } else {
        int si, si1, inc;
        for (int x = 0; x < n - 1; x++) {
            si = data[x] * height / max;
            si1 = data[x + 1] * height / max;
            if (si1 > si) inc = 1; else inc = -1;
            for (int v = si; v != si1 + inc; v += inc)
                img.at<Vec3b>(height - v, x) = Vec3b(255, 0, 0);
        }
    }

    namedWindow(title, WINDOW_FREERATIO);
    imshow(title, img);
}



 int main()
     {
      try
     { 
    cv::VideoCapture cap("1.avi");
    if (!cap.isOpened())
    {
        cerr << "Unable to connect to camera" << endl;
        return 1;
    }

    image_window win;

    frontal_face_detector detector = get_frontal_face_detector();
    shape_predictor pose_model;
    deserialize("shape_predictor_68_face_landmarks.dat") >> pose_model;

   double data[15784];
    while(!win.is_closed())
    {
        // Grab a frame
        cv::Mat temp;
        cap >> temp;
    if ( temp.empty())
 {
    // reach to the end of the video file
    break;
}

        cv_image<bgr_pixel> cimg(temp);

        std::vector<rectangle> faces = detector(cimg);
        std::vector<full_object_detection> shapes;

        for (unsigned long i = 0; i < faces.size(); ++i)
        {
            full_object_detection shape = pose_model(cimg, faces[i]);

            double P37_41_x = shape.part(37).x() - shape.part(41).x();
            double P37_41_y = shape.part(37).y() - shape.part(41).y();

            double output = sqrt((P37_41_x * P37_41_x) + (P37_41_y * P37_41_y));

            data[i] = output;

            graphArray<double>("Plot", data, 10, 255, true);

            shapes.push_back(pose_model(cimg, faces[i]));
            const full_object_detection& d = shapes[0];
        }

        win.clear_overlay();
        win.set_image(cimg);
        win.add_overlay(render_face_detections(shapes));

      }
 }
       catch(serialization_error& e)
      {
          cout << "You need dlib's default face landmarking model file to run this example." << endl;
          cout << "You can get it from the following URL: " << endl;
          cout << "   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
          cout << endl << e.what() << endl;
         }
          catch(exception& e)
        {
            cout << e.what() << endl;
           }
       }
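A guess at the cause, with a minimal sketch of one possible adjustment (it reuses detector, pose_model, data, cap and win from the code above and is not a verified fix): data[i] is indexed by the face index i, which stays 0 when there is a single face, so graphArray is handed 10 mostly uninitialized values, and cv::waitKey is never called, so the HighGUI plot window never gets a chance to redraw or process events.

    // Sketch only: index data[] with a frame counter instead of the face index,
    // plot only the entries filled so far, and call cv::waitKey so the plot
    // window can redraw.
    int frameCount = 0;
    while (!win.is_closed())
    {
        cv::Mat temp;
        cap >> temp;
        if (temp.empty())
            break;                      // end of the video file

        cv_image<bgr_pixel> cimg(temp);
        std::vector<rectangle> faces = detector(cimg);
        std::vector<full_object_detection> shapes;

        data[frameCount] = 0;           // default when no face is found in this frame
        for (unsigned long i = 0; i < faces.size(); ++i)
        {
            full_object_detection shape = pose_model(cimg, faces[i]);
            double dx = shape.part(37).x() - shape.part(41).x();
            double dy = shape.part(37).y() - shape.part(41).y();
            data[frameCount] = sqrt(dx * dx + dy * dy);   // one value per frame
            shapes.push_back(shape);
        }
        ++frameCount;

        if (frameCount > 1)
            graphArray<double>("Plot", data, frameCount, 255, true);
        cv::waitKey(1);                 // lets the plot window process events

        win.clear_overlay();
        win.set_image(cimg);
        win.add_overlay(render_face_detections(shapes));
    }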
2017-01-19 06:52:19 -0600 commented answer Plotting values from video frames

Thanks @pi-null-mezon,

I don't want to display the values on the video; I need to plot them in a MATLAB-style graph (image).

2017-01-19 05:09:30 -0600 asked a question Plotting values from video frames

Hi

My question is about plotting with cv::line. After doing some calculations, I get a double value for each video frame, e.g.:

0.3
0.288462
0.288462
0.289614
0.307465
0.20198
0.166522
0.16
0.24
0.249815
0.270398
0.269032

How can I plot them using cv::line, or any other method?
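Not an answer from the thread, just a minimal sketch of one way this could be done with cv::line: scale the values into a blank image and connect consecutive points (plotValues is a made-up helper name).

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>
    #include <algorithm>
    #include <vector>

    // Hypothetical helper: draw a simple line plot of `values` on a W x H canvas.
    void plotValues(const std::vector<double>& values, int W = 640, int H = 480)
    {
        if (values.size() < 2)
            return;

        cv::Mat canvas(H, W, CV_8UC3, cv::Scalar(255, 255, 255));
        double vmax = *std::max_element(values.begin(), values.end());
        if (vmax <= 0.0)
            vmax = 1.0;                                   // avoid division by zero

        for (size_t i = 1; i < values.size(); ++i)
        {
            cv::Point p0(static_cast<int>((i - 1) * (W - 1) / (values.size() - 1)),
                         H - 1 - static_cast<int>(values[i - 1] / vmax * (H - 1)));
            cv::Point p1(static_cast<int>(i * (W - 1) / (values.size() - 1)),
                         H - 1 - static_cast<int>(values[i] / vmax * (H - 1)));
            cv::line(canvas, p0, p1, cv::Scalar(255, 0, 0), 1);
        }

        cv::imshow("plot", canvas);
        cv::waitKey(0);
    }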

2017-01-08 12:49:24 -0600 commented question undefined reference to issue opencv3.2

Hi Berak.

I always compile it using:

cmake --build . or cmake --build . --config Release

It works fine with other examples. Here is the CMake file; it is the same one used for dlib, I just added the example code: CMakeLists.txt

2017-01-06 12:16:23 -0600 asked a question undefined reference to issue opencv3.2

Hi

I'm using OpenCV 3.2 on Ubuntu 16.04. These errors appear when I try to compile my code.

Here is the output of pkg-config --libs opencv:

-lopencv_shape -lopencv_stitching -lopencv_objdetect -lopencv_superres -lopencv_videostab -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_video -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_viz -lopencv_core

Update: the errors are now resolved using this CMakeLists.txt:

cmake_minimum_required(VERSION 2.8)
project( svm_test )
find_package( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( svm_test svm_test.cpp )
target_link_libraries( svm_test ${OpenCV_LIBS} )

With that CMakeLists the program compiles, but running it now shows:

Segmentation fault (core dumped)

These were the original errors:
_svm.cpp:(.text._ZN2cv3MatD2Ev[_ZN2cv3MatD5Ev]+0x1b): undefined reference to `cv::Mat::deallocate()'
_svm.cpp:(.text._ZN2cv3MatD2Ev[_ZN2cv3MatD5Ev]+0x78): undefined reference to `cv::fastFree(void*)'
CMakeFiles/blink_svm.dir/blink_svm.cpp.o: In function `cv::MatExpr::~MatExpr()':
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0x25): undefined reference to    `cv::Mat::deallocate()'
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0x8d): undefined reference to   `cv::fastFree(void*)'
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0xac): undefined reference to   `cv::Mat::deallocate()'
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0x112): undefined reference to `cv::fastFree(void*)'
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0x12e): undefined reference to `cv::Mat::deallocate()'
_svm.cpp:(.text._ZN2cv7MatExprD2Ev[_ZN2cv7MatExprD5Ev]+0x188): undefined reference to `cv::fastFree(void*)'
CMakeFiles/blink_svm.dir/blink_svm.cpp.o: In function `main':
_svm.cpp:(.text.startup+0x53f): undefined reference to `cv::ml::SVM::create()'
_svm.cpp:(.text.startup+0x57f): undefined reference to `cv::String::allocate(unsigned long)'
_svm.cpp:(.text.startup+0x5c3): undefined reference to `cv::ml::TrainData::loadFromCSV(cv::String const&,   int, int, int, cv::String const&, char, char)'
_svm.cpp:(.text.startup+0x5cd): undefined reference to `cv::String::deallocate()'
 _svm.cpp:(.text.startup+0x5d5): undefined reference to `cv::String::deallocate()'
 svm.cpp:(.text.startup+0x682): undefined reference to `cv::Mat::convertTo(cv::_OutputArray const&, int, double, double) const'
 svm.cpp:(.text.startup+0x70c): undefined reference to `cv::ml::TrainData::getTestSamples() const'
 svm.cpp:(.text.startup+0x844): undefined reference to `cv::operator==(cv::Mat const&, cv::Mat const&)'
 svm.cpp:(.text.startup+0x871): undefined reference to `cv::countNonZero(cv::_InputArray const&)'
 svm.cpp:(.text.startup+0x9cc): undefined reference to `cv::Mat::create(int, int const*, int)'
 svm.cpp:(.text.startup+0x9ea): undefined reference to `cv::Mat::operator=(cv::Scalar_<double> const&)'
 svm.cpp:(.text.startup+0xf9f): undefined reference to `cv::Formatter::get(int)'
 svm.cpp:(.text.startup+0x13ca): undefined reference to `cv::String::deallocate()'
 svm.cpp:(.text.startup+0x15c8): undefined reference to `cv::String::deallocate()'
collect2: error: ld returned 1 exit status
CMakeFiles/blink_svm.dir/build.make:106: recipe for target 'blink_svm' failed
make[2]: *** [blink_svm] Error 1
    CMakeFiles/Makefile2:215: recipe for target 'CMakeFiles/blink_svm.dir/all' failed
 make[1]: *** [CMakeFiles/blink_svm.dir/all] Error 2
2016-12-19 06:29:21 -0600 commented question eye landmark points

Here is the link to it; it is annotated data: http://www2.fiit.stuba.sk/~fogelton/a...

2016-12-15 09:30:07 -0600 commented question eye landmark points

To TEST: in a sliding-window fashion, for each frame in a video, take the surrounding 13 EAR values and ask the SVM classifier whether these values are positive or negative. If positive, it means the tested frame (at the center of the 13 frames) is a blink. In the annotated videos, the .tag and .txt files give the eye states and frame numbers (link), but I'm very confused about how to combine the extracted EAR values with the annotated data.

2016-12-15 08:42:52 -0600 commented question eye landmark points

Yes, in processing the frames of the annotated video, which has .tag (blinks) and .txt (frames) files, I got the EAR values computed for each frame of the annotated video sequence.

But now, how do I find the peak of an annotated blink? I don't know how to deal with the annotated video files. For example, blink8 is annotated by the start and end of a blink, so the peak is probably the center of this interval. E.g. a blink starts at the 38th frame and ends at the 42nd frame, so the blink peak is at the 40th frame of the sequence. After that, take the EAR values from the 34th to the 46th frame = 13 scalar numbers, and these numbers form one positive feature for training the SVM.
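Not part of the original discussion, only a hedged sketch of how such 13-value windows might be turned into training samples for OpenCV's cv::ml::SVM; earPerFrame, blinkPeaks and trainEarSvm are invented names for illustration, and treating every non-peak window as a negative is a simplification of the paper's sampling.

    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>
    #include <algorithm>
    #include <vector>

    // earPerFrame: one EAR value per frame of an annotated video.
    // blinkPeaks:  peak frame index of each annotated blink
    //              (center of the annotated start/end interval).
    cv::Ptr<cv::ml::SVM> trainEarSvm(const std::vector<float>& earPerFrame,
                                     const std::vector<int>& blinkPeaks)
    {
        const int W = 6;                                  // 6 frames on each side -> 13 values
        cv::Mat samples, labels;

        for (int center = W; center + W < static_cast<int>(earPerFrame.size()); ++center)
        {
            cv::Mat row(1, 2 * W + 1, CV_32F);
            for (int k = -W; k <= W; ++k)
                row.at<float>(0, k + W) = earPerFrame[center + k];

            bool isBlink = std::find(blinkPeaks.begin(), blinkPeaks.end(), center)
                           != blinkPeaks.end();
            samples.push_back(row);
            labels.push_back(isBlink ? 1 : -1);           // +1 = window centered on a blink peak
        }

        cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
        svm->setType(cv::ml::SVM::C_SVC);
        svm->setKernel(cv::ml::SVM::LINEAR);
        svm->train(samples, cv::ml::ROW_SAMPLE, labels);
        return svm;
    }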

2016-12-12 04:24:47 -0600 commented question eye landmark points

In the paper, it says:

"A linear SVM classifier (called EAR SVM) is trained from manually annotated sequences. Positive examples are collected as ground-truth blinks, while the negatives are those that are sampled from parts of the videos where no blink occurs."

I have video data with annotations, but I have no idea how to build an EAR classifier from it. Do you have any suggestions?

2016-12-09 13:39:08 -0600 commented question eye landmark points

I have edited the question and included the EAR calculation equation; is it correct?

2016-12-08 05:08:32 -0600 commented question eye landmark points

@berak Thanks for suggesting this. I will calculate the EAR equation, but does ||p2-p6|| mean the Euclidean distance? Any suggestion on how it can be calculated?
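For reference, ||p2-p6|| in the paper denotes the Euclidean distance between two landmark points; a small illustrative helper (not from the thread, assuming dlib's point type) might be:

    #include <cmath>
    #include <dlib/geometry.h>

    // Euclidean distance between two landmark points,
    // e.g. dist(shape.part(37), shape.part(41)).
    double dist(const dlib::point& a, const dlib::point& b)
    {
        double dx = a.x() - b.x();
        double dy = a.y() - b.y();
        return std::sqrt(dx * dx + dy * dy);
    }

    // The paper's EAR, as far as I can tell, is
    //   EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)
    // which for the right eye of the 68-point model becomes
    //   (dist(part(37), part(41)) + dist(part(38), part(40))) / (2 * dist(part(36), part(39)))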

2016-12-07 04:30:44 -0600 commented question eye landmark points

Thanks @StevenPuttemans for your suggestion. Does that mean I should use a Normal Bayes classifier or a KNN classifier on the computed eye aspect ratio (EAR) values?

2016-12-07 03:47:19 -0600 answered a question eye landmark points

This is the code I'm using to export the landmarks to a file; is there any suggestion for saving the landmarks in a different way?

These are the errors:

[ 51%] Building CXX object CMakeFiles/face_landmark_detection_ex.dir/face_landmark_detection_ex.cpp.o
/home//dlib/examples/face_landmark_detection_ex.cpp: In function ‘int main(int, char**)’:
/home/dlib/examples/face_landmark_detection_ex.cpp:64:52: error: ‘img’ was not declared in this scope
         std::vector<rectangle> dets = detector(img);
                                                ^
 make[2]: *** [CMakeFiles/face_landmark_detection_ex.dir/face_landmark_detection_ex.cpp.o] Error 1
 make[1]: *** [CMakeFiles/face_landmark_detection_ex.dir/all] Error 2
 make: *** [all] Error 2



#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/gui_widgets.h>
#include <dlib/image_io.h>
#include <iostream>
#include <dlib/opencv/cv_image.h>

using namespace dlib;
using namespace std;

int main(int argc, char** argv)
{  
try
{

    if (argc == 1)
    {
        cout << "Call this program like this:" << endl;
        cout << "./face_landmark_detection_ex shape_predictor_68_face_landmarks.dat faces/*.jpg" << endl;
        cout << "\nYou can get the shape_predictor_68_face_landmarks.dat file from:\n";
        cout << "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
        return 0;
    }

    frontal_face_detector detector = get_frontal_face_detector();

    shape_predictor sp;
    deserialize(argv[1]) >> sp;


    image_window win, win_faces;

    for (int i = 2; i < argc; ++i)
    {

        std::string filename(argv[i]);
        cout << "processing image " << filename << endl;

        cout << "processing image " << argv[i] << endl;


       cv::Mat imgMat = cv::imread(argv[i]);




        std::vector<rectangle> dets = detector(img);
        cout << "Number of faces detected: " << dets.size() << endl;


       size_t lastindex = filename.find_last_of(".");
       string basename = filename.substr(0, lastindex);

        std::vector<full_object_detection> shapes;
        for (unsigned long j = 0; j < dets.size(); ++j)
        {
            full_object_detection shape = sp(img, dets[j]);
            cout << "number of parts: "<< shape.num_parts() << endl;


            cout << "Eye Landmark points for right eye : "<< endl;
            cout << "pixel position of 36 part:  " << shape.part(36) << endl;
            cout << "pixel position of 37 part: " << shape.part(37) << endl;
            cout << "pixel position of 38 part:  " << shape.part(38) << endl;
            cout << "pixel position of 39 part: " << shape.part(39) << endl;
            cout << "pixel position of 40 part: " << shape.part(40) << endl;
            cout << "pixel position of 41 part: " << shape.part(41) << endl;

            cout << endl;

           cout << "Eye Landmark points for left eye : "<< endl;

            cout << "pixel position of 42 part:  " << shape.part(42) << endl;
            cout << "pixel position of 43 part: " << shape.part(43) << endl;
            cout << "pixel position of 44 part:  " << shape.part(44) << endl;
            cout << "pixel position of 45 part: " << shape.part(45) << endl;
            cout << "pixel position of 46 part: " << shape.part(46) << endl;
            cout << "pixel position of 47 part: " << shape.part(47) << endl;


            shapes.push_back(shape);

            std::stringstream points_filename;
            std::ofstream ofs;

          if ( j == 0 )
            {
                points_filename << basename <<  ".txt";
            }else
            {
                points_filename << basename <<  "_"  << j << ".txt";
            }

            ofs.open(points_filename.str().c_str());
            const full_object_detection& d = shapes[0];
            for (unsigned long k = 0; k < shape.num_parts(); ++k)
            {
                ofs << shape.part(k).x() << " " << shape.part(k).y() << endl;

            }
            ofs.close();



        }

        cv::imshow("image", imgMat);
        cv::waitKey(0);

    }
}
catch (exception& e)
{
    cout << "\nexception thrown!" << endl;
    cout << e.what() << endl;
}
}
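A guess at the fix (not confirmed in the thread): the image is read into cv::Mat imgMat, but an undeclared img is passed to the detector; wrapping the Mat in a dlib cv_image under that name should make the call compile, for example:

    // Read the image with OpenCV, then wrap it so the dlib detector can use it.
    cv::Mat imgMat = cv::imread(argv[i]);
    if (imgMat.empty())
    {
        cout << "could not load " << argv[i] << endl;
        continue;
    }
    dlib::cv_image<dlib::bgr_pixel> img(imgMat);          // this is the missing 'img'

    std::vector<rectangle> dets = detector(img);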
2016-12-05 05:01:10 -0600 commented answer eye landmark points

I have tried it on a closed-eye image and got different eye landmarks, so the eye landmark points differ between open and closed eyes.

I have tried this to save the eye landmarks (link to code), but I'm getting errors.

2016-12-05 03:43:04 -0600 commented answer eye landmark points

How can the eye landmark points be saved into a .csv file?

I applied face_landmark_detection_ex to an image and got these points:

Test Image

Are the eye landmarks correct? How can this be used with a video file or a webcam?

2016-12-03 12:38:21 -0600 commented question eye landmark points

Hi, this paper (http://vision.fe.uni-lj.si/cvww2016/p...) uses facial landmarks to detect the eyes, and then the eye aspect ratio (EAR) between the height and width of the eye is computed.

2016-12-02 14:34:25 -0600 asked a question eye landmark points

Hi

I'm using the dlib facial landmark detector to detect eye blinks. How can the eye landmarks be exported to a file?

I need to use the eye landmarks to calculate the ratio between the height and width of the eye, and to use an SVM to classify blinks.

Update: when I try to write the landmark points to a file, different values are saved than the landmarks displayed in the terminal window. How can I fix this?

Thanks

 #include <dlib/opencv.h>
 #include <opencv2/highgui/highgui.hpp>
 #include <dlib/image_processing/frontal_face_detector.h>
 #include <dlib/image_processing/render_face_detections.h>
 #include <dlib/image_processing.h>
 #include <dlib/gui_widgets.h>

using namespace dlib;
using namespace std;

int main()
{
try
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
    {
        cerr << "Unable to connect to camera" << endl;
        return 1;
    }

    image_window win;
    frontal_face_detector detector = get_frontal_face_detector();
    shape_predictor pose_model;
    deserialize("shape_predictor_68_face_landmarks.dat") >> pose_model;

    while(!win.is_closed())
    {
        cv::Mat temp;
        cap >> temp;

        cv_image<bgr_pixel> cimg(temp);

        // Detect faces 
        std::vector<rectangle> faces = detector(cimg);
        // Find the pose of each face.
        std::vector<full_object_detection> shapes;
           ofstream outputfile;
           outputfile.open("data1.csv");

        for (unsigned long i = 0; i < faces.size(); ++i)
      {  

               full_object_detection shape = pose_model(cimg, faces[i]);
               cout << "number of parts: "<< shape.num_parts() << endl;

        cout << "Eye Landmark points for right eye : "<< endl;
        cout << "pixel position of 36 part:  " << shape.part(36) << endl;
        cout << "pixel position of 37 part: " << shape.part(37) << endl;
        cout << "pixel position of 38 part:  " << shape.part(38) << endl;
        cout << "pixel position of 39 part: " << shape.part(39) << endl;
        cout << "pixel position of 40 part: " << shape.part(40) << endl;
        cout << "pixel position of 41 part: " << shape.part(41) << endl;

        cout << endl;

        cout << "Eye Landmark points for left eye : "<< endl;

        cout << "pixel position of 42 part:  " << shape.part(42) << endl;
        cout << "pixel position of 43 part: " << shape.part(43) << endl;
        cout << "pixel position of 44 part:  " << shape.part(44) << endl;
        cout << "pixel position of 45 part: " << shape.part(45) << endl;
        cout << "pixel position of 46 part: " << shape.part(46) << endl;
        cout << "pixel position of 47 part: " << shape.part(47) << endl;

        double P37_41_x = shape.part(37).x() - shape.part(41).x();
        double P37_41_y=  shape.part(37).y() -shape.part(41).y() ;

        double p37_41_sqrt=sqrt((P37_41_x * P37_41_x) + (P37_41_y * P37_41_y));


       double P38_40_x = shape.part(38).x() - shape.part(40).x();
       double P38_40_y = shape.part(38).y() - shape.part(40).y();

       double p38_40_sqrt=sqrt((P38_40_x * P38_40_x) + (P38_40_y * P38_40_y));



      double P36_39_x = shape.part(36).x() - shape.part(39).x();  
      double P36_39_y = shape.part(36).y() - shape.part(39).y();

      double p36_39_sqrt=sqrt((P36_39_x * P36_39_x) + (P36_39_y * P36_39_y));



     // EAR per the paper: (||p37-p41|| + ||p38-p40||) / (2 * ||p36-p39||);
     // without parentheses this evaluated as p37_41_sqrt + (p38_40_sqrt / 2) * p36_39_sqrt
     double EAR = (p37_41_sqrt + p38_40_sqrt) / (2 * p36_39_sqrt);


    cout << "EAR value =  " << EAR << endl;


  shapes.push_back(pose_model(cimg, faces[i]));


   const full_object_detection& d = shapes[0];

              }

        win.clear_overlay();
        win.set_image(cimg);
        win.add_overlay(render_face_detections(shapes));
    }
}
catch(serialization_error& e)
{
    cout << "You need dlib's default face landmarking model file to run this example." << endl;
    cout << "You can get it from the following URL: " << endl;
    cout << "   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2" << endl;
    cout << endl << e.what() << endl;
}
catch(exception& e)
{
    cout << e.what() << endl;
}
 }
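A hedged guess about the file-writing problem (not confirmed in the thread): data1.csv is opened inside the while loop, so it is re-created on every frame and nothing is ever streamed into outputfile; also, streaming a whole shape.part(k) writes dlib's "(x, y)" formatting rather than the raw numbers printed via x() and y(). A sketch of one possible arrangement:

    // Open the CSV once, before the while(!win.is_closed()) loop.
    std::ofstream outputfile("data1.csv");

    // ... then inside the per-face loop, after EAR has been computed,
    // write the raw eye-landmark coordinates followed by the EAR value:
    for (int k = 36; k <= 47; ++k)
        outputfile << shape.part(k).x() << "," << shape.part(k).y() << ",";
    outputfile << EAR << "\n";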
2016-09-30 06:41:41 -0600 commented question Running Clandmark webcam example

Why are you so angry? If you can't help, please don't answer.

2016-09-29 05:35:23 -0600 asked a question Running Clandmark webcam example

Hi

I installed clandmark on Ubuntu, and now I want to run the facial landmark detection using a webcam.

What is the procedure to run the code?

Thanks for the help.

2016-09-13 06:58:01 -0600 asked a question Loading OpenCV on android fails

Hello, I am working on an Android app using the OpenCV library.

Here is the main service code: Main_service_code.java

I tried to use static linking by including libopencv_java.so from OpenCV 2.4.10 inside this folder:

/home/user/app/_1OpenCVopticalflow/src/main/libs/armeabi-v7a/libopencv_java.so

When the app starts on my phone, this message is displayed in Android Studio:

I/MainService: OpenCV Load Failure .

This is the Android.mk file:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
#OPENCV_LIB_TYPE:=STATIC
#OPENCV_CAMERA_MODULES:=off
#OPENCV_INSTALL_MODULES:=on
include /home/developer/OpenCV-2.4.10-android-sdk/sdk/native/jni/OpenCV.mk
#include ../../sdk/native/jni/OpenCV.mk
#include ../../../OpenCV-2.4.10-android-sdk/sdk/native/jni/OpenCV.mk
#include ../../../OpenCV-android-sdk-2-4-11/sdk/native/jni/OpenCV.mk

 LOCAL_SRC_FILES  := common_settings_phone.cpp common.cpp blinkmeasure.cpp blinkmeasuref.cpp  farneback_jni.cpp farneback.cpp optflow_jni.cpp optflow.cpp templatebased.cpp templatebased_jni.cpp eyeLike/src/findEyeCenter.cpp eyeLike/src/helpers.cpp
 LOCAL_C_INCLUDES += $(LOCAL_PATH)
 LOCAL_LDLIBS     += -llog -ldl
 LOCAL_CFLAGS += -std=c++11 -DIS_PHONE
 LOCAL_MODULE     := eyemon
 include $(BUILD_SHARED_LIBRARY)
2016-09-13 06:51:25 -0600 answered a question I want to install opencv3.0.0 in ubuntu14.04
2016-09-07 09:45:10 -0600 asked a question import opencv project into android studio

Hi

I'm trying to import an opencv_android test project into Android Studio.

The OpenCV version used for this project is the opencv_android SDK 2.4.10.

What settings are required for OpenCV and Android Studio in order to run the app?