
Basically you need two steps for this:

  1. Face detection in the image
  2. Mood detection of the face

For the first item, you can simply use the Viola-Jones framework, which is available in OpenCV and can easily be adapted to your needs. More info on this link:

Once you have detected the face region, you can go in two possible directions:

  1. Match the head image against a database of images of emotions, find the closest matching element, and assign the same classification/label.
  2. Detect interesting face points, like the nose tip, mouth corners, eye locations, and closed/open lids, then determine a relation between these elements for each mood.

The first approach is possible by applying an abstract representation of the image (Eigenfaces, Fisherfaces, ...), creating a unique representation of each mood and fitting a codebook feature vector to it. Then match vectors using a distance measure to find the best match.
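The first approach can be sketched in a few lines of NumPy: build an Eigenfaces-style subspace from flattened face crops, project every training face and the query into it, and label the query by its nearest neighbour under Euclidean distance. This is a toy sketch, not a production recognizer; the synthetic 8x8 "faces" and the mood labels are placeholders for your own database.

```python
import numpy as np

def train_eigenfaces(faces, num_components=4):
    """faces: (n_samples, n_pixels) array of flattened grayscale crops."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data; rows of vt are the eigenfaces
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:num_components]

def project(face, mean, basis):
    """Codebook feature vector: coordinates of the face in eigenface space."""
    return basis @ (face - mean)

def classify_mood(face, train_faces, train_labels, mean, basis):
    """Nearest neighbour in eigenface space, Euclidean distance."""
    query = project(face, mean, basis)
    coeffs = np.array([project(f, mean, basis) for f in train_faces])
    dists = np.linalg.norm(coeffs - query, axis=1)
    return train_labels[int(np.argmin(dists))]

# Usage with synthetic 8x8 "faces": two moods with distinct intensity patterns
rng = np.random.default_rng(0)
happy = rng.normal(1.0, 0.1, (5, 64))
sad = rng.normal(-1.0, 0.1, (5, 64))
train = np.vstack([happy, sad])
labels = ["happy"] * 5 + ["sad"] * 5
mean, basis = train_eigenfaces(train)
print(classify_mood(rng.normal(1.0, 0.1, 64), train, labels, mean, basis))  # → happy
```

In practice you would replace the synthetic arrays with aligned, equally sized face crops per mood; OpenCV's `cv2.face` module also offers ready-made Eigenfaces/Fisherfaces recognizers.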

Interesting links for you

The second approach requires more reading of papers; google for face landmarks. Try using the eye and mouth models of OpenCV to detect them inside the face region, ... This will work a lot better, but will be more challenging to implement.
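To make the idea of "a relation between these elements" concrete, here is a toy sketch: once you have landmark positions (however you obtained them), you compare simple geometric ratios against per-mood thresholds. The landmark layout, the `mouth_ratio` feature, and the threshold value are all illustrative assumptions, not calibrated values.

```python
import numpy as np

def mouth_ratio(landmarks):
    """Mouth width relative to the inter-ocular distance.

    landmarks: dict of 2D points with keys 'left_eye', 'right_eye',
    'mouth_left', 'mouth_right' (hypothetical layout).
    """
    eye_dist = np.linalg.norm(np.subtract(landmarks["right_eye"],
                                          landmarks["left_eye"]))
    mouth_width = np.linalg.norm(np.subtract(landmarks["mouth_right"],
                                             landmarks["mouth_left"]))
    return mouth_width / eye_dist

def guess_mood(landmarks, smile_threshold=0.9):
    # A wide mouth relative to the eyes is a crude smile cue (threshold is a guess)
    return "smiling" if mouth_ratio(landmarks) > smile_threshold else "neutral"

# Illustrative landmark positions for one face
face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "mouth_left": (28, 80), "mouth_right": (72, 80)}
print(guess_mood(face))  # → smiling
```

Real systems use many more landmarks and learned classifiers rather than hand-picked thresholds, but the principle is the same: turn landmark geometry into features, then map features to moods.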