
Real time head segmentation using opencv

asked 2015-03-23 12:46:02 -0600

john_1282

I am using OpenCV 2.4.10 to implement a project that segments the head from a video sequence captured from a camera. First, I detect the head region and then apply a segmentation method to that ROI. For high segmentation accuracy I chose the GrabCut method, but it is very slow: I only achieve about 2 frames/second (even though I downsample the image first). I have two questions:

1. Is there a faster method than GrabCut with similar accuracy? Alternatively, is there another way to segment the head region?

2. Could you look at my code and suggest ways to make it faster?

Thank you in advance

    #include <iostream>
    #include <string>
    #include <time.h>
    //include opencv core
    #include "opencv2/core/core.hpp"
    #include "opencv2/contrib/contrib.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/objdetect/objdetect.hpp"
    #include "opencv2/opencv.hpp"

    //file handling
    #include <fstream>
    #include <sstream>

    using namespace std;
    using namespace cv;

    //Functions
    int VideoDisplay();
    Mat GrabCut(Mat image);

    const unsigned int BORDER = 5;
    const unsigned int BORDER2 = BORDER + BORDER;

    int main()
    {   
        int value=VideoDisplay();
        system("pause");
        return 0;
    }
    Mat GrabCut(Mat image)
    {
        clock_t tStart_all = clock();
        cv::Mat result; // segmentation result (4 possible values)
        cv::Mat bgModel,fgModel; // the models (internally used)
        // downsample the image
        cv::Mat downsampled;
        cv::pyrDown(image, downsampled); // default dst size ((cols+1)/2, (rows+1)/2) also handles odd dimensions
        cv::Rect rectangle(BORDER,BORDER,downsampled.cols-BORDER2,downsampled.rows-BORDER2);

        clock_t tStart = clock();
        // GrabCut segmentation
        cv::grabCut(downsampled,    // input image
            result,   // segmentation result
            rectangle,// rectangle containing foreground
            bgModel,fgModel, // models
            1,        // number of iterations
            cv::GC_INIT_WITH_RECT); // use rectangle
        printf("Time taken by GrabCut with downsampled image: %f s\n", (clock() - tStart)/(double)CLOCKS_PER_SEC);

        // Get the pixels marked as likely foreground
        cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
        // upsample the resulting mask
        cv::Mat resultUp;
        cv::pyrUp(result, resultUp, image.size()); // upsample directly to the original frame size so the mask matches
        // Generate output image
        cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
        image.copyTo(foreground,resultUp); // bg pixels not copied
        return foreground;
    }

    int  VideoDisplay(){

        cout << "start recognizing..." << endl;
        //lbpcascades/lbpcascade_frontalface.xml
        string classifier = "C:/opencv/sources/data/haarcascades/haarcascade_frontalface_default.xml";

        CascadeClassifier face_cascade;
        string window = "Capture - face detection";

        if (!face_cascade.load(classifier)){
            cout << " Error loading file" << endl;
            return -1;
        }
        VideoCapture cap(0);
        //VideoCapture cap("C:/Users/lsf-admin/Pictures/Camera Roll/video000.mp4");

        if (!cap.isOpened())
        {
            cout << "exit" << endl;
            return -1;
        }

        //double fps = cap.get(CV_CAP_PROP_FPS);
        //cout << " Frames per seconds " << fps << endl;
        namedWindow(window, 1);
        long count = 0;
        int fps=0;
        //Start and end times
        time_t start,end;
        //Start the clock
        time(&start);
        int counter=0;



        while (true)
        {
            vector<Rect> faces;
            Mat frame;
            Mat grayScaleFrame;
            Mat original;

            cap >> frame;

            time(&end);
            ++counter;
            double sec = difftime(end, start);
            if (sec > 0)
                fps = (int)(counter / sec); // guard against division by zero during the first second

            if (!frame.empty()){

                //clone from original frame
                original = frame.clone();

                //convert image to gray scale and equalize
                cvtColor(original, grayScaleFrame, CV_BGR2GRAY);
                //equalizeHist(grayScaleFrame, grayScaleFrame);

                //detect faces in the gray image
                face_cascade.detectMultiScale(grayScaleFrame, faces, 1.1, 3, 0, cv::Size(90, 90));

                //number of faces detected
                //cout << faces.size() << " faces ...
                // (the rest of the code was truncated in the original post)

1 answer


answered 2015-03-23 16:47:36 -0600

fedor

Hi

For fast face pose estimation you can use algorithms based on 68 facial landmark points.

Dlib has an implementation of an algorithm that finds the 68 points in about 1 ms (given the ROI of the face). Dlib has a standard face detector too, but you can combine OpenCV's detection with Dlib's estimation. I haven't used the GrabCut method myself, so I can't say whether it is what you are looking for :)

Plus, you don't have to detect the face in every frame, because over a span of, say, 5 frames it stays pretty much the same. Maybe you can match consecutive frames and only search for a new face pose when they differ too much... or maybe that's just my fantasy.
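The "don't detect in every frame" idea above can be sketched with a tiny tracker-like helper. The struct, its names, and the interval of 5 are illustrative choices of mine, not from the answer; it is plain C++ with no OpenCV dependency so it drops into any capture loop:

```cpp
// Minimal sketch of the re-detection strategy suggested above: run the
// expensive cascade detector only every N frames (or when no face is
// known yet) and reuse the last detected ROI in between.
struct Roi { int x, y, w, h; };

struct FaceTracker
{
    int  redetectEveryN;      // e.g. 5, as the answer suggests
    long frameIndex = 0;
    bool hasFace    = false;
    Roi  lastFace{};

    // True when the caller should run detectMultiScale on this frame.
    bool shouldDetect() const
    {
        return !hasFace || frameIndex % redetectEveryN == 0;
    }

    // Call once per frame with the detection result (if any).
    void update(bool detected, Roi face = {})
    {
        if (detected) { lastFace = face; hasFace = true; }
        ++frameIndex;
    }
};
```

In the capture loop this means calling `shouldDetect()` before `detectMultiScale`, passing `lastFace` to the GrabCut step on the skipped frames, and `update(...)` at the end of each iteration.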


Comments

Thanks fedor. Have you tried the Dlib method? It looks like a fast algorithm.

john_1282 ( 2015-03-23 22:23:43 -0600 )

Yes, I tried it :) And I tried it together with OpenCV face detection. It's a really fast algorithm which takes only ~1 ms for pose estimation per detected face!

fedor ( 2015-03-24 13:33:18 -0600 )

Could you share your source code with me by email? It would save me a lot of time. I want to compare that method with GrabCut in terms of both accuracy and computational time. Thank you so much. My email is [email protected]

john_1282 ( 2015-03-25 00:57:26 -0600 )


Last updated: Mar 23 '15