Ask Your Question

How to use OpenCV to find if a person is wearing a hat or glasses?

asked 2015-09-21 22:46:07 -0500

thesanefreak

I would like to use Computer Vision to do the following:

A picture is taken as input from the user. The program checks if the person in the picture is wearing a hat or glasses and decides validity or invalidity, and goes forward to correct the dimensions if the picture is valid.

What would be the best/easiest way to get this done?

I've been looking at cascade classifier training, would that be appropriate?


1 answer


answered 2015-09-22 01:36:50 -0500

I suggest you read this post for an overview of the main research on eyeglasses detection, recognition and so on:

You can add to this list another recently published algorithm based on local binary patterns (LBP):

Glasses detection on real images based on robust alignment

This algorithm works as follows. First, the face is located using the Viola & Jones algorithm (any other face detection algorithm could be used at this step). Once the face is located in the image, some preprocessing is needed to deal with pose, rotation, scale and inaccuracies of the detected face: a face normalization algorithm is applied to obtain the region around the eyes. Afterwards, LBP and Robust LBP are applied to extract the feature sets. Finally, a Support Vector Machine (SVM) is used in the classification step: it classifies the extracted feature histograms computed over the normalized eye/glasses regions. The output is a two-class classification, i.e. glasses vs. no glasses.

The face normalization algorithm relies on a detector of facial landmarks. When the shape or intensity characteristics of the eyes cannot be reliably measured due to occlusion (such as wearing glasses or sunglasses), context characteristics are very useful for eye localization, because eyes usually have a stable relationship with the other facial features in terms of both appearance and structural distribution. For this reason, a detector of facial landmarks learned by a structured output SVM is applied to detect the positions of the main points of the face, and hence of the eyes. To sum up: the input is a still image containing a single face, and the output is the estimated locations of a set of facial landmarks.

The face alignment algorithm is explained here:

The face normalization algorithm is explained here:

How to know if a person is wearing a hat

You could use this same approach, but to determine whether a person is wearing a hat, use the upper part of the face instead of the region of the eyes.
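As a hedged sketch of that idea: extend the detected face rectangle upward and feed that strip to the same LBP + SVM pipeline. The 50% extension factor below is an arbitrary choice, not a value from the paper:

```python
def hat_region(face_rect, img_h, img_w, extend=0.5):
    """Rectangle above a detected face where a hat would appear, clipped to the image."""
    x, y, w, h = face_rect
    top = max(0, y - int(h * extend))      # go up by a fraction of the face height
    return (x, top, w, y - top)            # (x, y, width, height) of the hat strip
```

For example, a face at `(40, 60, 80, 100)` in a 320x240 image yields the strip `(40, 10, 80, 50)` just above the forehead.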



Last updated: Sep 22 '15