
Mood/Emotion Detection (Happy to Sad) using OpenCV

asked 2013-04-02 09:00:23 -0600

UserOpenCV

updated 2017-09-09 08:37:22 -0600

My project is to detect the mood of a person in an image.
I want it as a percentage from 0-100; 0 for very sad and 100 for very happy.

Can I do it using OpenCV?

Should I download a database for training purposes? If so, please help me get started.


I have come across this link in the OpenCV tutorials. The example is for gender classification, but the link mentions that the same can be done for emotion.

Can anyone provide a reference or suggestions on which database to download, how to do the cropping (only lips or ...), and how many images are needed?


2 answers


answered 2013-04-03 04:44:22 -0600

mrgloom
  1. Detect the face in the image.
  2. Make/get sad and happy face images and train an SVM for 2 classes with probability output.
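The steps above can be sketched with scikit-learn, one possible way to get an SVM with probability output. The feature vectors here are synthetic stand-ins; in practice they would come from detected, cropped face regions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical training data: 20 "sad" and 20 "happy" feature vectors
# (e.g. flattened, resized face crops in a real pipeline).
X_sad = rng.normal(0.0, 1.0, size=(20, 64))
X_happy = rng.normal(3.0, 1.0, size=(20, 64))
X = np.vstack([X_sad, X_happy])
y = np.array([0] * 20 + [1] * 20)  # 0 = sad, 1 = happy

# probability=True enables Platt-scaled class probabilities.
clf = SVC(kernel="linear", probability=True).fit(X, y)

def mood_score(feature_vector):
    """Map the SVM's happy-class probability to the asker's 0-100 scale."""
    p_happy = clf.predict_proba(feature_vector.reshape(1, -1))[0, 1]
    return 100.0 * p_happy
```

The probability output gives exactly the 0-100 "very sad to very happy" scale the question asks for, rather than a hard class label.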

answered 2013-04-03 02:49:29 -0600

Basically you want two steps for this:

  1. Face detection in the image
  2. Mood detection of the face

For the first item, you can simply use the Viola & Jones framework, which is available in OpenCV and can easily be adapted to your needs. More info at this link:

Once you have detected the face region, you can go in two possible directions:

  1. Match the face image against a database of emotion images, find the closest matching element and assign the same classification/label.
  2. Detect interesting face points, like the nose tip, mouth corners, eye locations and closed/open lids, and determine a relation between these elements for each mood.

The first approach is possible by applying an abstract representation of the image (eigenfaces, Fisherfaces, ...), creating a unique representation of each mood and fitting a codebook feature vector to it. Then match vectors using a distance measure to find the best match.
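That eigenface-plus-codebook idea can be sketched in plain NumPy (a PCA subspace with one mean vector per mood; the input format, flattened grayscale face crops, is an assumption):

```python
import numpy as np

def train_codebook(faces_by_mood, n_components=8):
    """Build an eigenface subspace from all faces plus one codebook
    vector per mood. `faces_by_mood` maps a mood label to an array of
    flattened face images (hypothetical input format)."""
    all_faces = np.vstack(list(faces_by_mood.values()))
    mean = all_faces.mean(axis=0)
    # Eigenfaces: principal components of the centered training faces.
    _, _, vt = np.linalg.svd(all_faces - mean, full_matrices=False)
    basis = vt[:n_components]
    codebook = {mood: ((faces - mean) @ basis.T).mean(axis=0)
                for mood, faces in faces_by_mood.items()}
    return mean, basis, codebook

def classify(face, mean, basis, codebook):
    """Return the mood whose codebook vector is nearest in the subspace."""
    proj = (face - mean) @ basis.T
    return min(codebook, key=lambda m: np.linalg.norm(proj - codebook[m]))
```

OpenCV's contrib `face` module offers ready-made Eigenface/Fisherface recognizers that do the same job; this sketch just makes the distance-to-codebook matching explicit.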

Interesting links for you

The second approach requires more reading of papers; google for face landmarks. Try using the eye and mouth models of OpenCV to detect them inside the face region, ... This will work a lot better, but will be more challenging to implement.



I followed this link using the GENKI DB for training, and the accuracy is 60% on the DB with 200 training images and the rest as test images. When we tried testing it using a webcam, the results are not good at all. Can you comment ...

UserOpenCV ( 2013-04-10 02:13:40 -0600 )

Basically, if you want good classification you need to train with a data set that is representative of the environment where your application will run. So: gather 20 friends, ask them to look sad and happy at a webcam for about 10 minutes each, in the same setup where testing will happen. Extract frames, segment the faces, perform classification and create a recognizer like the guide says. Then try again. These algorithms are heavily influenced by lighting, skin color, amount of training data, ...

StevenPuttemans ( 2013-04-10 02:27:31 -0600 )


Seen: 23,920 times

Last updated: Apr 03 '13