Ask Your Question

tofi's profile - activity

2016-01-20 15:27:09 -0600 commented answer accurate eye center localisation by means of gradients

Thanks very much for the help. I tried playing a video file but it was not working. What should be changed to play video files?

2016-01-20 15:18:21 -0600 commented question accurate eye center localisation by means of gradients

Is your output like this image?

2016-01-20 13:29:12 -0600 commented question accurate eye center localisation by means of gradients

Hi @sturkmen, I have changed `capture = cvCaptureFromCAM( -1 );` to `cv::VideoCapture(0);`, but these errors are still displayed using OpenCV 2.4.10:

    C:\Documents\opencv\face\main.o:main.cpp|| undefined reference to `createCornerKernels()'

    C:\Documents\opencv\face\main.o:main.cpp|| undefined reference to `findEyeCenter(cv::Mat, cv::Rect_<int>, std::string)'

2016-01-06 13:18:46 -0600 commented question apply result of morphology to source frame

@sturkmen I'm not part of the team :)

2016-01-06 13:15:44 -0600 commented question apply result of morphology to source frame

Hi @LorenaGdL, the link you posted is not my question; please don't judge me for something I haven't done.

2016-01-03 18:08:27 -0600 commented answer RGB to Lab conversion and median filtering function

@r S Nikhil,

  1. Can you make this clearer?

     ( Note: With the second-to-last argument of addWeighted(), you can also manually input an offset for each pixel's 'L' value to make your image brighter or darker as a whole. )

  2. What about cv::add()? What is the difference if I use it?

Thanks very much

2016-01-03 14:01:27 -0600 received badge  Editor (source)
2016-01-03 09:53:59 -0600 asked a question RGB to Lab conversion and median filtering function

Hi.

I'm trying to do the following tasks using the function below:

  1. convert the RGB input image to a Lab image,

  2. extract L (the luminance component) and apply a 31×31 median filter to the L image,

  3. obtain the inverted luminance image (L),

  4. add the inverted image to the original L image.

Could you please review the function? Is it OK?

And how should the addition of the inverted image and the L component be done (cv::add() or addWeighted())?

Or is there another way to do it?

    void split_lab(Mat planes)
    {
        Mat lab, dst, inv;
        cvtColor(planes, lab, CV_BGR2Lab);

        vector<Mat> splits;
        split(lab, splits);

        Mat L = splits[0];

        medianBlur(L, dst, 31);    // is that how a 31x31 median blur is obtained ?

        bitwise_not(dst, inv);     // is this the right way to obtain the inverse ?
    }
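For reference, the per-pixel arithmetic behind the two options asked about can be sketched without OpenCV. The helper names below are hypothetical; `cv::bitwise_not`, `cv::add`, and `cv::addWeighted` apply the same formulas to whole Mats, with saturation:

```cpp
#include <algorithm>
#include <cstdint>

// cv::bitwise_not on an 8-bit image computes 255 - value for each pixel.
std::uint8_t invert_u8(std::uint8_t v) { return static_cast<std::uint8_t>(255 - v); }

// cv::add(a, b, dst): dst = saturate(a + b), i.e. the sum is clamped at 255.
std::uint8_t add_sat_u8(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>(std::min(255, a + b));
}

// cv::addWeighted(a, alpha, b, beta, gamma, dst):
// dst = saturate(alpha*a + beta*b + gamma).
// The gamma offset is the per-pixel brightness term mentioned in the comments.
std::uint8_t add_weighted_u8(std::uint8_t a, double alpha,
                             std::uint8_t b, double beta, double gamma) {
    double v = alpha * a + beta * b + gamma;
    return static_cast<std::uint8_t>(std::clamp(v, 0.0, 255.0) + 0.5);
}
```

So the difference is: `cv::add` is a plain saturating sum, while `cv::addWeighted` lets you scale each input and add a constant offset, which is useful to keep the result from saturating when adding the inverted image to L.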
2016-01-03 08:29:16 -0600 commented answer unhandled memory exception problem

@sturkmen, thanks very much for the help; my best wishes to you.

2016-01-02 18:21:24 -0600 commented answer unhandled memory exception problem

Yes, this is the code.

2016-01-02 15:39:10 -0600 commented answer unhandled memory exception problem

I have tried the code on Ubuntu, using a video file and a webcam; the detected eye seems to be zoomed. Why is this happening? output image

2016-01-02 11:08:57 -0600 received badge  Enthusiast
2015-12-31 14:17:58 -0600 commented answer unhandled memory exception problem

What about my questions? Should I open another question, or can you answer here? Thanks

2015-12-31 14:15:59 -0600 received badge  Scholar (source)
2015-12-31 14:15:58 -0600 received badge  Supporter (source)
2015-12-29 16:49:05 -0600 asked a question unhandled memory exception problem

Hi,

I'm using this code for face and eye detection.

When I run it, it throws a memory exception, and this message is displayed in the cmd window:

image description

Why is this happening? Can you help me, please?

Thanks

    int main()
    {
        CascadeClassifier faceCascade;
        CascadeClassifier eyeCascade1;   // note: passed to detectBothEyes() below but never loaded here
        CascadeClassifier eyeCascade2;

        Rect faceRect;
        VideoCapture videoCapture("nn.mp4");
        if (!videoCapture.isOpened())
        {
            cout << "Could not open the video file." << endl;
            return 1;
        }

        cout << "Face Detection." << endl;
        cout << "Realtime face detection using LBP" << endl;
        cout << "Compiled with OpenCV version " << CV_VERSION << endl << endl;

        // Load the face and 1 or 2 eye detection XML classifiers.
        initDetectors(faceCascade, eyeCascade2);

        Mat thresh, gray;
        while (true)
        {
            Mat frame;
            videoCapture >> frame;
            if (frame.empty())        // end of stream: an empty frame would crash the code below
                break;

            detectLargestObject(frame, faceCascade, faceRect);
            if (faceRect.width <= 0)  // no face in this frame: frame(faceRect) would be empty
            {
                imshow("video", frame);
                if (waitKey(30) == 'c') break;
                continue;
            }
            rectangle(frame, faceRect, CV_RGB(255, 0, 0), 2, CV_AA);

            Mat faceImg = frame(faceRect);

            if (faceImg.channels() == 3) {
                cvtColor(faceImg, gray, CV_BGR2GRAY);
            }
            else if (faceImg.channels() == 4) {
                cvtColor(faceImg, gray, CV_BGRA2GRAY);
            }
            else {
                // Access the input image directly, since it is already grayscale.
                gray = faceImg;
            }

            Point leftEye, rightEye;
            Rect searchedLeftEye, searchedRightEye;

            detectBothEyes(gray, eyeCascade1, eyeCascade2, leftEye, rightEye, &searchedLeftEye, &searchedRightEye);

            if (searchedLeftEye.area() > 0)
                rectangle(faceImg, searchedLeftEye, Scalar(0, 255, 0), 2, 8, 0);

            if (searchedRightEye.area() > 0)
            {
                rectangle(faceImg, searchedRightEye, Scalar(0, 255, 0), 2, 8, 0);

                // Keep only the lower two thirds of the eye box before thresholding.
                searchedRightEye.y += searchedRightEye.height / 3;
                searchedRightEye.height -= searchedRightEye.height / 3;

                Mat eye_region = faceImg(searchedRightEye);
                cvtColor(eye_region, gray, CV_BGR2GRAY);
                threshold(gray, thresh, 60, 255, THRESH_BINARY);

                imshow("eye_video", thresh);
            }

            imshow("video", frame);
            // Press 'c' to escape
            if (waitKey(30) == 'c') break;
        }

        return 0;
    }
2015-12-06 10:03:28 -0600 commented answer calculating how many times white pixels appear form frame difference

I have used countNonZero() like this:

    delta_count = cv2.countNonZero(frame_delta)
    print delta_count

It shows different numbers, like 1151, 878, 751, 891, 868. I don't know how to classify them. Is there another way, like contours?

2015-12-06 05:40:43 -0600 asked a question calculating how many times white pixels appear form frame difference

Hi

I'm now working on these steps to detect blinks:

  1. calculate the frame difference = current frame - previous frame, using cv2.absdiff,

  2. convert the frame-difference result to a binary image (cv2.threshold) and count the white pixels (cv2.countNonZero),

  3. erode and dilate (cv2.morphologyEx with cv2.MORPH_OPEN).

With these steps, when the eyes blink there are white pixels, like the image below:

image description

When the eyes are open there are no white pixels, only black output, like the image below:

image description

Now I want to calculate the blink rate based on the white pixels that appear when the eyes blink, for example a counter that counts how many times these white pixels are detected. Thanks for the help.
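The counting step asked about above can be sketched without OpenCV: treat each frame's white-pixel count as a number and register a blink only on the rising edge, i.e. when the count crosses a threshold after having been below it, so a blink spanning several frames is counted once. The threshold value is an assumption to tune against your own videos:

```cpp
#include <cstddef>
#include <vector>

// Count blinks from a per-frame sequence of white-pixel counts
// (e.g. the values returned by countNonZero on each thresholded diff frame).
std::size_t countBlinks(const std::vector<int>& whitePixelCounts, int threshold)
{
    std::size_t blinks = 0;
    bool above = false;                  // were we above the threshold last frame?
    for (int count : whitePixelCounts) {
        bool nowAbove = (count >= threshold);
        if (nowAbove && !above)          // rising edge -> one new blink
            ++blinks;
        above = nowAbove;
    }
    return blinks;
}
```

With the numbers from the comment thread (1151, 878, 751, 891, 868) and an assumed threshold of 500, this counts a single blink, because the count never drops below the threshold between frames.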

2015-11-22 13:18:57 -0600 commented question localize the eyes using ROI

Can I modify the Rect Roi parameters (x, y, width and height) to find the eyes separately, or just the left or right eye?

2015-11-22 12:59:01 -0600 commented question localize the eyes using ROI

It was not accurate when there is head movement, and I want only the eye location, even if it is just one of them.

2015-11-22 11:19:24 -0600 asked a question localize the eyes using ROI

I have used the face cascade classifier to find the face, and then found the eye region using an ROI, without using an eye cascade, as shown in the image.

Now I want to apply an ROI that detects the eye region exactly, from the given face coordinates.

How can I get this result?

image description

This is my code for the first image :

    cvtColor(frame, frame_gray, COLOR_BGR2GRAY);
    equalizeHist(frame_gray, frame_gray);

    // Detect faces
    face_cascade.detectMultiScale(frame_gray, faces, 1.1, 4, CASCADE_SCALE_IMAGE, Size(20, 20));

    // Set Region of Interest
    for (size_t i = 0; i < faces.size(); i++) // Iterate through all current elements (detected faces)
    {
        // Display detected faces on the main window - live stream from camera.
        // Note: width goes with x and height with y (the original had them swapped).
        Point pt1(faces[i].x, faces[i].y);
        Point pt2(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
        rectangle(frame, pt1, pt2, Scalar(0, 255, 0), 2, 8, 0);

        // Set the ROI for the eyes: the second quarter of the face, top to bottom.
        Rect Roi = faces[i];
        Roi.height = Roi.height / 4;
        Roi.y = Roi.y + Roi.height;

        Mat eye_region = frame(Roi).clone();
        Mat eye_gray = frame(Roi).clone();

        imshow("video main frame", eye_region);
    }
        imshow("video main      frame", eye_region);