
Facemark Kazemi Training

asked 2018-01-14 13:33:02 -0600 by phillity

updated 2018-01-19 18:49:58 -0600

Hi everyone,

I am trying to train a Facemark Kazemi model. I am following this guide and sample code.

Is there any way to train the model without loading all training images at once, as the sample and guide do? I run out of memory after creating Mats for ~300 of the 2000 images in the HELEN training dataset :(

EDIT: I was able to load all training data using a 64-bit process, as StevenPuttemans suggested! I trained with the 2000-image training set from the HELEN dataset and the model ended up being 39.5MB. The training took about 1.5 days to complete.

Although the model seemed to train okay, I am not getting very good results and, when I try to detect landmarks in real-time (video/webcam stream), the model is very slow :( The results I get using the LBF facemark class and its pretrained model are far better in terms of speed and accuracy. This makes me nervous that I did something incorrectly when training the Kazemi model.
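For reference, my real-time test follows the usual video sample pattern, roughly like the sketch below (illustrative rather than my exact code; the model and cascade file names are assumed):

#include "opencv2/face.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/objdetect.hpp"

#include <vector>

using namespace std;
using namespace cv;
using namespace cv::face;

int main()
{
    CascadeClassifier face_cascade;
    face_cascade.load("haarcascade_frontalface_default.xml");

    Ptr<FacemarkKazemi> facemark = FacemarkKazemi::create(FacemarkKazemi::Params());
    facemark->loadModel("model.dat"); // the model trained with the code below

    VideoCapture cap(0);
    Mat frame, gray;
    while (cap.read(frame)) {
        // detect faces on an equalized grayscale frame
        cvtColor(frame, gray, COLOR_BGR2GRAY);
        equalizeHist(gray, gray);
        vector<Rect> faces;
        face_cascade.detectMultiScale(gray, faces, 1.4, 2, CASCADE_SCALE_IMAGE, Size(30, 30));

        // fit landmarks in every detected face and draw them
        vector< vector<Point2f> > shapes;
        if (!faces.empty() && facemark->fit(frame, faces, shapes))
            for (size_t i = 0; i < shapes.size(); i++)
                drawFacemarks(frame, shapes[i], Scalar(0, 0, 255));

        imshow("landmarks", frame);
        if (waitKey(1) == 27) // ESC to quit
            break;
    }
    return 0;
}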

One possible problem I noticed was that, when training the model, I get a message saying "[ INFO:0] Initialize OpenCL runtime..." rather than the "Training with 3080 samples" message which the tutorial lists. I also get this "[ INFO:0] Initialize OpenCL runtime..." message when loading the model I created. Is this message a sign that something is wrong with how I trained the model? I don't receive this message when using the LBF and AAM facemark classes. Furthermore, did the author of the tutorial use 3080 training images rather than 2000? If anyone sees how I can improve my model's accuracy/speed, please let me know!

Here is my training code. I use this sample config file and haarcascade_frontalface_default.xml:

#include "opencv2/face.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/objdetect.hpp"

#include <iostream>
#include <vector>
#include <string>

using namespace std;
using namespace cv;
using namespace cv::face;

static bool myDetector(InputArray image, OutputArray faces, CascadeClassifier *face_cascade)
{
    Mat gray;

    // the cascade works on a single-channel, histogram-equalized image
    if (image.channels() > 1)
        cvtColor(image, gray, COLOR_BGR2GRAY);
    else
        gray = image.getMat().clone();

    equalizeHist(gray, gray);

    vector<Rect> faces_;
    face_cascade->detectMultiScale(gray, faces_, 1.4, 2, CASCADE_SCALE_IMAGE, Size(30, 30));
    Mat(faces_).copyTo(faces);
    return true;
}

int main(int argc, char** argv) {
    string annotations = "annotations.txt";
    string imagesList = "images.txt";
    string configfile_name = "sample_config_file.xml";
    string modelfile_name = "model.dat";
    string cascade_name = "haarcascade_frontalface_default.xml";
    Size scale(460, 460);

    CascadeClassifier face_cascade;
    face_cascade.load(cascade_name);
    FacemarkKazemi::Params params;
    params.configfile = configfile_name;
    Ptr<FacemarkKazemi> facemark = FacemarkKazemi::create(params);
    facemark->setFaceDetector((FN_FaceDetector)myDetector, &face_cascade);

    std::vector<String> images;
    std::vector<std::vector<Point2f> > facePoints;
    loadTrainingData(imagesList, annotations, images, facePoints, 0.0);

    vector<Mat> Trainimages;
    std::vector<std::vector<Point2f> > Trainlandmarks;

    Mat src;
    for (unsigned long i = 0; i < images.size(); i++) {
        src = imread(images.at(i));
        std::cout << "Image " << i << " " << src.rows << " " << src.cols << endl;

        if (src.empty()) {
            cout << images.at(i) << endl;
            cerr << string("Image not found or unreadable, skipping...") << endl;
            continue;
        }

        // collect the image and its annotated landmarks for training
        Trainimages.push_back(src);
        Trainlandmarks.push_back(facePoints.at(i));
    }

    // train and write the model file (the remainder follows the linked sample code)
    facemark->training(Trainimages, Trainlandmarks, configfile_name, scale, modelfile_name);
    cout << "Training complete" << endl;
    return 0;
}

Comments


I pointed the original developer of the code to this topic and he will have a look at it soon. In the meantime, can you at least specify the system you are using, just so we know what limits you are facing?

StevenPuttemans ( 2018-01-15 04:20:47 -0600 )

Thanks for your reply @StevenPuttemans! I am using OpenCV 3.4.0, Windows 10, and the Visual Studio 2017 32-bit Release compiler. My machine has 16GB of RAM and an Intel Core i7-6500U CPU.

In the for loop from the tutorial/sample that loads each training image and adds it to the vector, I am able to load about 300 images and then I receive this error: OpenCV Error: Insufficient memory (Failed to allocate 23887872 bytes) in cv::OutOfMemoryError. (23,887,872 bytes is exactly a 3456x2304 3-channel Mat, so each decoded image is fairly large.)

I also receive the same error when training the AAM or LBF models with the addTrainingSample() method, after adding around 300 images.

phillity ( 2018-01-15 13:10:45 -0600 )

Oh wow ... How do you hit 16GB of RAM with 300 images? Are you using HD quality? It would mean your images are about 50 MB a piece... That's just ridiculous, because the HELEN dataset is not that heavy as I remember. Are you sure you are not using a 32-bit process? Those are limited to 4 GB of memory.

StevenPuttemans ( 2018-01-16 00:56:24 -0600 )
1

Yeah, I am running the code with the VS 2017 32-bit Release compiler. I did not realize 32-bit had this limitation! I will rebuild and run it with the 64-bit compiler to see if that resolves the issue!

phillity ( 2018-01-16 01:34:09 -0600 )

I was trying to train on a dataset using the Facemark Kazemi algorithm, but I am unsure what the config file (here: sample_config_file.xml) means. Is it some pre-defined file? If not, is there any tutorial showing how to generate such a file?

krshrimali ( 2018-03-23 16:16:46 -0600 )
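For what it's worth, sample_config_file.xml is an OpenCV FileStorage file holding the FacemarkKazemi training parameters; its path is passed via FacemarkKazemi::Params::configfile as in the question's code. A sketch of how such a file could be generated (the field names mirror FacemarkKazemi::Params, but whether the trainer expects exactly these names is an assumption; compare with the sample_config_file.xml shipped with opencv_contrib, and the values below are only illustrative):

#include "opencv2/core.hpp"

using namespace cv;

int main()
{
    // write the Kazemi training parameters as an OpenCV FileStorage XML file;
    // field names follow FacemarkKazemi::Params, values are illustrative only
    FileStorage fs("sample_config_file.xml", FileStorage::WRITE);
    fs << "cascade_depth" << 15;
    fs << "tree_depth" << 5;
    fs << "num_trees_per_cascade_level" << 500;
    fs << "learning_rate" << 0.1;
    fs << "oversampling_amount" << 20;
    fs << "num_test_coordinates" << 400;
    fs << "lambda" << 0.1;
    fs << "num_test_splits" << 20;
    fs.release();
    return 0;
}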

2 answers


answered 2018-01-16 02:42:57 -0600 by StevenPuttemans

Yep, in principle any 32-bit process cannot address more than 4 GB of memory; that is why 64-bit systems, which do not have that limitation, were invented. It is also the reason your OS needs to be 64-bit from the moment you have more than 4 GB of RAM installed.
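A quick way to confirm which kind of process a build actually produces (just a convenience check, not part of the original answer):

#include <iostream>

int main()
{
    // sizeof(void*) is 4 in a 32-bit process and 8 in a 64-bit one
    std::cout << "This is a " << sizeof(void*) * 8 << "-bit process" << std::endl;
    return 0;
}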


Comments


Using a 64-bit process allowed me to load all the training data, thanks!! Memory usage peaked at about 12 GB. I have been training for ~8 hours now and should be done later today :)

phillity ( 2018-01-18 12:53:07 -0600 )
1

Yep 12 GB should be about the memory needed for processing the HELEN dataset.

StevenPuttemans ( 2018-01-19 02:40:02 -0600 )

answered 2018-01-20 10:45:07 -0600 by sukhad

The problem is not the message you are getting. If you are not using a GPU, you should turn off all OpenCL and CUDA modules when building OpenCV; then you will surely not get this message.

The "Training with 3080 samples" message in the tutorial appears because I used 3080 images for training: while training, I flipped each image from the training dataset (HELEN) to augment it, and I also increased the oversampling amount to 100. To reproduce the results of the pretrained model file, you would have to keep these parameters. Also, if you are training with 194 landmarks, the results will not be as accurate as expected; you should try training with 68 landmarks instead.

As for the speed issues, make sure you build OpenCV with TBB=ON (TBB is a parallel processing library). Also, please specify the number of cores your CPU has. If you could post your results, I would be better able to solve your issues. Sorry for the delay.
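If rebuilding OpenCV without OpenCL is not convenient, the OpenCL code paths can also be switched off at runtime; a minimal sketch (this uses cv::ocl::setUseOpenCL from opencv2/core/ocl.hpp, and that it silences the INFO message is an assumption, not something confirmed in this thread):

#include "opencv2/core/ocl.hpp"

int main()
{
    // globally disable the OpenCL execution branch so cv:: calls
    // fall back to the plain CPU implementations
    cv::ocl::setUseOpenCL(false);

    // ... load the training data and call facemark->training(...) as usual ...
    return 0;
}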


Comments


Hi. Thank you for your response! My machine has 2 cores and no GPU. I will also flip each training image as you did. I am training using 68 landmarks. I will rebuild using your tips and see if I can get rid of the message and get better results!

phillity ( 2018-01-20 11:19:15 -0600 )

When flipping images, keep this in mind: http://blog.dlib.net/2018/01/correctl...

sturkmen ( 2018-01-21 13:38:49 -0600 )

Thank you! I only did the naive point flip, so I will go back and fix them!

phillity ( 2018-01-21 13:49:16 -0600 )
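For anyone else fixing the naive flip: mirroring an image horizontally also swaps left and right landmarks, so the point order has to be remapped, not just the x coordinates. A minimal sketch (mirror_idx is a hypothetical placeholder table; fill it in according to your 68-point annotation scheme, as discussed in the dlib post above):

#include "opencv2/core.hpp"
#include <vector>

using namespace cv;
using namespace std;

// Flip an image and its landmarks horizontally.
// mirror_idx[i] is the index of the landmark that position i should take
// after mirroring (e.g. left eye corner <-> right eye corner) -- a
// placeholder here, to be filled in for the 68-point scheme.
static void flipSample(const Mat& src, const vector<Point2f>& pts,
                       const vector<int>& mirror_idx,
                       Mat& dst, vector<Point2f>& flipped)
{
    flip(src, dst, 1); // 1 = flip around the vertical axis

    flipped.resize(pts.size());
    for (size_t i = 0; i < pts.size(); i++) {
        const Point2f& p = pts[mirror_idx[i]];         // take the mirrored partner point
        flipped[i] = Point2f(src.cols - 1 - p.x, p.y); // mirror its x coordinate
    }
}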
