Save the eye image after the detection [closed]

asked 2017-01-09 05:46:45 -0600 by Tiaguituh05

updated 2017-01-09 05:48:00 -0600 by berak

Hello, so I was playing around with the face and eye detector in OpenCV:

import numpy as np
import cv2

# load the pretrained Haar cascades for face and eye detection
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('image_020_1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect faces on the grayscale image
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    # region of interest: the detected face, gray for detection, colour for drawing
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    # detect eyes inside the face region only
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex,ey,ew,eh) in eyes:
        cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)

cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

And I wanted to take it a little further, like telling whether the eye is open or closed. For that I have already trained a classifier, but I need to save the ROI (in this case each eye separately) into an image file. I tried everything but I couldn't get anywhere, does anyone know how to do it?
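A minimal sketch of that cropping/saving step, building on the loop above (cv2.imwrite does the saving; the output filenames here are just placeholders for illustration):

for (x,y,w,h) in faces:
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for i, (ex,ey,ew,eh) in enumerate(eyes):
        # crop the eye region out of the colour face ROI (NumPy slicing: rows first, then columns)
        eye_img = roi_color[ey:ey+eh, ex:ex+ew]
        # placeholder filename; use whatever naming scheme your classifier expects
        cv2.imwrite('eye_{}.png'.format(i), eye_img)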


Closed as a duplicate question by Tiaguituh05 (close date 2017-01-09 08:29:08)

Comments

"I already trained a classifier" -- just curious, what did you do there ? on what data ?

berak ( 2017-01-09 05:49:06 -0600 )

@berak Used the NVIDIA DIGITS platform to create a Caffe model using the CEW dataset (Closed Eyes in the Wild)

Tiaguituh05 ( 2017-01-09 05:51:19 -0600 )

do you need to discriminate between the left and right eye?

berak ( 2017-01-09 05:53:13 -0600 )

@berak yea I do. After a quick search here on the forum I found that someone already had this question before http://answers.opencv.org/question/10... Should have done the search before creating the post, I'm sorry :)

Tiaguituh05 ( 2017-01-09 06:00:18 -0600 )

@berak Is there any way to tune the eye Haar cascade classifier? The photos I need to work with have a low resolution, and that leads to some errors. Example: link text. That then saves me an image of that part of the lip, and I wanted to try to reduce those little errors

Tiaguituh05 ( 2017-01-09 06:12:27 -0600 )

maybe also have a look at this related question, because:

  • OpenCV's eye cascades are not really accurate (well, they might not be accurate enough to feed into a CNN)
  • they might even miss one of the eyes, so you'd have to check (maybe by the rectangle's x,y) which eye you got there (see the sketch below this comment).

again, using some landmarks lib (like dlib) might give much better results.

berak ( 2017-01-09 06:13:08 -0600 )
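For illustration, a rough sketch of that rectangle-position check (it assumes the eye rectangles come from detection inside a face ROI of width w, and that the face is roughly upright):

    for (ex,ey,ew,eh) in eyes:
        # decide which half of the face ROI the detection sits in
        if ex + ew//2 < w//2:
            side = 'left'   # left half of the image, usually the subject's right eye
        else:
            side = 'right'
        cv2.imwrite('eye_{}.png'.format(side), roi_color[ey:ey+eh, ex:ex+ew])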

but again, to kill false positives, increase the minNeighbors param in eye_cascade.detectMultiScale

(you used 5 for the face detection there, so try that for the eyes, too; there is a sketch below this comment)

berak ( 2017-01-09 06:18:19 -0600 )
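For reference, that tweak would look something like this in the script above (the parameter is called minNeighbors in the Python API; the values are only starting points to experiment with):

    eyes = eye_cascade.detectMultiScale(roi_gray, scaleFactor=1.1, minNeighbors=5)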

@berak that's exactly the problem I'm facing, the eye cascades are really not that accurate. My first try was exactly that one, the dlib landmarker, but I couldn't crop the eyes from that script

Awesome, changing minNeighbors to 2 (5 made it only detect one eye) fixed the problem, at least for that image, I will try to test it on more later. I really would like to extract the eyes from the dlib landmarker, but I think it's too much for me, I'm still a newbie at programming

Tiaguituh05 ( 2017-01-09 06:18:50 -0600 )
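For completeness, cropping the eyes from dlib's 68-point landmarks could look roughly like this (a sketch only: it assumes the shape_predictor_68_face_landmarks.dat model file and the standard 68-point layout, where indices 36-41 and 42-47 outline the two eyes):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

img = cv2.imread('image_020_1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # 68-point model: landmarks 36-41 outline one eye, 42-47 the other
    for name, idx in (('eye_right', range(36, 42)), ('eye_left', range(42, 48))):
        xs = [shape.part(i).x for i in idx]
        ys = [shape.part(i).y for i in idx]
        pad = 5  # small margin around the landmark points
        crop = img[max(0, min(ys)-pad):max(ys)+pad, max(0, min(xs)-pad):max(xs)+pad]
        cv2.imwrite('{}.png'.format(name), crop)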

I am using this code, but it returned this error: roi_color = img[y:y+h, x:x+w] TypeError: 'NoneType' object has no attribute '__getitem__'

Denis Nobre ( 2017-01-09 07:14:20 -0600 )

@Denis, please do not post a question or comment as an answer, to start with.

then, your image might not be named "img", as in the example above (that error means img is None, i.e. the image was never loaded).

berak ( 2017-01-09 07:25:05 -0600 )
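A quick way to catch that error (a small sketch): cv2.imread returns None when it cannot read the file, so check the result before slicing into it.

img = cv2.imread('image_020_1.jpg')
if img is None:
    raise IOError("could not read 'image_020_1.jpg' -- check the file name/path")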