
Emotion recognition

asked 2017-12-19 08:14:50 -0600

Mateusz

updated 2017-12-19 10:07:12 -0600

Hi there!

I am working on a facial emotion recognition project on a Raspberry Pi, and I have already done quite a lot. I tried several approaches, including TensorFlow, but the Raspberry Pi is too slow for that; neural networks in general need a lot of computational power. So I came up with another approach. My algorithm is the following:

  1. Find the face in the frame (Haar cascade, which is quite fast on the Raspberry Pi).
  2. Transform to grayscale.
  3. Find facial landmarks.
  4. Cut out the face region (only the face region) using the facial landmarks.
  5. Extract features (SURF, BRIEF and so on) and categorize them using the k-means clustering algorithm.
  6. Record the frequency of occurrence of every single feature; in other words, build a feature histogram.
  7. Normalize the feature histogram.
  8. Put every single feature histogram into a features vector and add the labels to the vector.
  9. Reduce dimensionality using PCA.
  10. Divide the data set into a testing and a training set (proportion 0.2/0.8).
  11. Do the testing. AND IT WORKS. But problems begin when:

I want to classify a single sample. I repeat the procedure and everything goes right until I reach step 9, the dimensionality reduction. The feature histogram of the single sample has 1000 columns, but I need to reduce it to 214 columns (as in the training data set) in order to be able to classify the sample. Could you please help me reduce the dimensionality?

Below are some code snippets to give you an overview of my algorithm.

    for (int i = 0; i < numberOfImages; i++)
    {
        string pathToImage = "";
        inputFile["img_" + to_string(i) + "_face"] >> pathToImage;
        cout << "img_" + to_string(i) + "_face" << endl;
        cout << pathToImage << endl;

        Mat face = imread(pathToImage, IMREAD_GRAYSCALE);

        resize(face, face, Size(80, 80));

        Mat extractedFeature = extractFeature(face);

        bowTrainer.add(extractedFeature);

        temp.clear(); // clear per image, otherwise the debug info accumulates
        temp.push_back("Image " + to_string(i));
        temp.push_back("Cols: " + to_string(extractedFeature.cols));
        temp.push_back("Rows: " + to_string(extractedFeature.rows));

        temp1.push_back(temp);

        cout << "pathToImage:  " << pathToImage << endl;

        outputFile << "image_path_" + to_string(i) << pathToImage;

        featuresVector.push_back(extractedFeature);
    }

    vector<Mat> descriptors = bowTrainer.getDescriptors();

    Mat dictionary = bowTrainer.cluster();

    bowDE.setVocabulary(dictionary);

    Ptr<Feature2D> surf_1 = xfeatures2d::SURF::create();
    vector<Mat> histogramsVector(0);
    for(int i = 0; i < numberOfImages; i++)
    {
        string pathToImage = "";
        inputFile["img_" + to_string(i) + "_face"] >> pathToImage;

        Mat face = imread(pathToImage, IMREAD_GRAYSCALE);
        resize(face, face, Size(80,80));

        Mat descriptors_1;

        vector<KeyPoint> keypoints;


        surf_1->detect(face, keypoints, Mat());
        bowDE.compute(face, keypoints, descriptors_1);

        histogramsVector.push_back(descriptors_1);
    }

Then I run k-means:

kmeans(rawFeatureData, bins, labels, TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 100, 1.0), 3, KMEANS_PP_CENTERS, centers);

then PCA:

PCA pca(featuresDataOverBins_BOW, Mat(), PCA::DATA_AS_ROW);

It works. Below is the procedure for a single sample, to extract features from it. Unfortunately, it crashes.

    Mat FeatureExtractor::extractFeaturesFromSingleFrame(Mat & face)
    {
        FileStorage in(outputFileName, FileStorage::READ);
        //FlannBasedMatcher matcher;
        Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
        Ptr<Feature2D> surf = xfeatures2d::SURF::create();

        /*************************************************************************************************
        *************************************************************************************************/
        //C++: BOWKMeansTrainer::BOWKMeansTrainer(int clusterCount, const TermCriteria& termcrit=TermCriteria()
        //                                        , int attempts=3, int flags=KMEANS_PP_CENTERS )
        BOWKMeansTrainer bowTrainer(bins, TermCriteria ...

1 answer


answered 2017-12-19 10:28:14 -0600

berak

updated 2017-12-19 10:29:19 -0600

Again, computing a PCA of a 1-row Mat does not make any sense, and even trying to reduce that to 0.9 retained variance leads to your error. You need at least as many rows in your PCA training data as your desired feature size.

A simple example:

    Mat m(1,200,CV_32F);
    PCA pca(m,Mat(),0,0); // retain all
    Mat n = pca.project(m);
    cout << n.size() << endl;
    // prints: [1 x 1]

It would work like this:

    Mat m(100,200,CV_32F);
    PCA pca(m,Mat(),0,50);
    Mat n = pca.project(m);
    cout << n.size() << endl;
    // prints: [50 x 100]

So, either skip the PCA idea entirely and use only the desired 214 clusters in your BOW, or:

  • make a PCA from the whole SVM traindata set (a [1000 x nImages] Mat)
  • project the traindata set, so it becomes [214 x nImages]
  • train the SVM on that
  • for testing later, project any BOW feature using the PCA from before (from [1000 x 1] to [214 x 1])
  • predict with the SVM

But again, IMHO none of it makes much sense. Do some profiling, and you'll see that the major bottleneck is the feature detection and the BOW matching, not the SVM prediction (which is what you are trying to optimize here).

