How to use Edge Orientation histogram for object detection?

asked 2015-08-29 08:27:06 -0500 by Islam Alaa

I am working on object detection code, and I chose the edge orientation histogram as the descriptor for matching.

I am facing a problem with the back projection: the matching is poor and the back-projected image is mostly white, which means I cannot use meanShift (or similar) to detect the object.

Please help me figure out what is wrong.

Here is what I've done so far:

  1. Take an initial ROI (the target to be detected in the video stream).
  2. Convert the ROI to grayscale.
  3. Apply the Sobel operator for both the x and y derivatives.
  4. Compute the orientation with OpenCV's phase function (from the x and y derivatives).
  5. Build a histogram of the resulting orientations: range 0 to 2π, single channel, 256 bins.
  6. Normalize the histogram.

The code for these steps is the following:

Mat ROI_grad_x, ROI_grad_y, ROI_grad, ROI_gray;
Mat ROI_abs_grad_x, ROI_abs_grad_y;

cvtColor(ROI, ROI_gray, CV_BGR2GRAY);

/// Gradient X (CV_32F, because phase() asserts on integer types such as CV_16S)
Sobel( ROI_gray, ROI_grad_x, CV_32F, 1, 0, 3 );
/// Gradient Y
Sobel( ROI_gray, ROI_grad_y, CV_32F, 0, 1, 3 );

convertScaleAbs( ROI_grad_x, ROI_abs_grad_x );
convertScaleAbs( ROI_grad_y, ROI_abs_grad_y );

addWeighted( ROI_abs_grad_x, 0.5, ROI_abs_grad_y, 0.5, 0, ROI_grad );

Mat ROI_orientation; // gradient orientations (CV_32F, allocated by phase)

// angleInDegrees = false, so the output range is [0, 2*pi)
phase(ROI_grad_x, ROI_grad_y, ROI_orientation, false);

Mat ROI_orientation_hist;
// the range must cover the full [0, 2*pi) output of phase(), not just [0, pi)
float ROI_orientation_range[] = { 0, (float)(2 * CV_PI) };
const float *ROI_orientation_histRange[] = { ROI_orientation_range };
int ROI_orientation_histSize = 256;
calcHist( &ROI_orientation, 1, 0, Mat(), ROI_orientation_hist, 1,
          &ROI_orientation_histSize, ROI_orientation_histRange, true, false );

normalize( ROI_orientation_hist, ROI_orientation_hist, 0, 255, NORM_MINMAX, -1, Mat() );

Then, for each captured camera frame, I do the following steps:

  1. Convert to grayscale.
  2. Apply the Sobel operator for both the x and y derivatives.
  3. Compute the orientation with OpenCV's phase function (from the two derivatives above).
  4. Back-project the histogram onto the frame's orientation matrix to get the matches.

The code for this part is the following:

Mat grad_x, grad_y, grad;
Mat abs_grad_x, abs_grad_y;

/// Gradient X (CV_32F again, so that phase() accepts the inputs)
Sobel( frame_gray, grad_x, CV_32F, 1, 0, 3 );
/// Gradient Y
Sobel( frame_gray, grad_y, CV_32F, 0, 1, 3 );

convertScaleAbs( grad_x, abs_grad_x );
convertScaleAbs( grad_y, abs_grad_y );

addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );

Mat orientation; // gradient orientations, allocated by phase()

phase(grad_x, grad_y, orientation, false);

Mat EOH_backProj;
calcBackProject( &orientation, 1, 0, ROI_orientation_hist, EOH_backProj,
                 ROI_orientation_histRange, 1, true );

So, what seems to be the problem with my approach?

Thanks a lot.
