How to estimate a face's age/gender using cv::dnn APIs

asked 2018-04-23 04:32:18 -0600

Paul Kuo

updated 2018-04-23 05:13:20 -0600

berak

Hi, OpenCV developer,

This is a follow-up question to http://answers.opencv.org/question/186005/dnnreadnetfromtensorflow-fail-on-loading-pre-trained-network-on-age-gender-detection/?answer=186571#post-id-186571 . I am now able to load a deep age-gender detection model (developed in TensorFlow) into the OpenCV dnn module (using cv::dnn::readNetFromTensorflow(...)) -- thank you, @dkurt.

However, after loading the deep model, I called

cv::Mat blobImg = cv::dnn::blobFromImage(...);
ageGenderNet.setInput(blobImg);
cv::Mat result = ageGenderNet.forward();

to estimate age/gender (details of how I used these APIs are in the previous post), but the result is incorrect. I feel I am not far from the finish line; I just need to make sure the input arguments are right, e.g. the image channel order (RGB vs. BGR) and the parameters for cv::dnn::blobFromImage(), and whether some additional processing is needed before or after calling these APIs. Any good suggestions are welcome~~

Thank you in advance


Comments

First, it's a good idea to compare the results of TensorFlow and OpenCV DNN to see if the results are valid. Normally they should give the same result for the same data.

Then, check whether the image should be normalized. This is often the case with DNNs (generally (image - mean)/std.dev or (image - 128)/256); see the original TF code. Normally the channel order (RGB/BGR) is handled correctly by OpenCV.

kbarni ( 2018-04-23 08:10:08 -0600 )

Please explain more clearly what you mean by "the result is incorrect" ...

StevenPuttemans ( 2018-04-25 06:24:30 -0600 )

OK, let me explain in more detail. I am using dlib to detect the face and facial landmarks, then I crop and normalise the face (this is the pre-processing for age/gender estimation). I used the image provided by the original author https://www.dropbox.com/s/0nx4qo2ved43w8o/demo.jpg?dl=0 and was able to pre-process it: https://www.dropbox.com/s/7n0v6fh0cgvqmhe/faceNorm.bmp?dl=0

After pre-processing, the input image (faceNorm) is 160x160x3 and should be ready for the following:

 cv::Mat blobImg = cv::dnn::blobFromImage(faceNorm, 1.0, cv::Size(), cv::Scalar(), true, false);
 ageGenderNet.setInput(blobImg);
 cv::Mat result = ageGenderNet.forward();
Paul Kuo ( 2018-04-26 04:05:32 -0600 )
They run fine. The result is a 1-by-2 matrix, and I suppose the 1st element is age and the 2nd is gender. However, the answer is strange: [-1.636259, 2.316289].

By comparison, running the same image through the Python+TensorFlow version (the one the author provided), I get [23.088894, 1], which indicates the age is 23.088894 and the gender is 1 (male).

The author's Python+TensorFlow version can be found at https://www.dropbox.com/s/e0b0d22qeyle0kz/eval.py?dl=0

Could anyone advise me on what I did wrong or what I missed in my C/C++ version? Thank you.

Paul Kuo ( 2018-04-26 04:16:28 -0600 )

Hi @StevenPuttemans, @dkurt,

Any suggestions on this? Thanks.

Paul Kuo ( 2018-05-01 20:13:48 -0600 )

1 answer


answered 2018-10-04 12:28:38 -0600

noli

cv::Mat result = ageGenderNet.forward("logits/age/MatMul"); does the trick on this net. Note that the layer names are given in the pbtxt, or can be retrieved by something like:

 std::vector<cv::String> layerNames = ageNet_.getLayerNames();
 for (const auto& name : layerNames)
     std::cout << name << " --- Id : " << ageNet_.getLayerId(name) << std::endl;

As already mentioned, the submitted image should be centered and normalized (std.dev).

