
altella's profile - activity

2020-09-28 16:43:18 -0600 received badge  Notable Question (source)
2017-12-14 12:29:17 -0600 received badge  Popular Question (source)
2014-05-22 08:52:50 -0600 asked a question HOG person detection and setting up SVM classifiers

Hello all;

I am using HOG descriptors and an SVM, as in the examples provided in the OpenCV C++ samples, to detect people. Following the sample, if I use:


I obtain a processing rate of approximately 150 ms per image, which is good performance. The drawback is that I also obtain many false positives. I have used the MIT person database (128×64-pixel images, each containing a person) to train a one-class SVM classifier, as I only have positive examples. I have obtained the HOG feature vector (3780 values) for each of the 924 images in the database. I have trained the SVM as follows:

    CvSVMParams params;
    params.svm_type = CvSVM::ONE_CLASS;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 1e-6);

    Mat ClassOutput = Mat::ones(1, numImagesDB, CV_32F);
    CvSVM SVM;
    SVM.train(trainingSVMData, ClassOutput, Mat(), Mat(), params);

    int x = SVM.get_support_vector_count();
    const float *v = SVM.get_support_vector(0);
    vector<float> descriptorVector;

    for (unsigned int i = 0; i < trainingSVMData.cols; i++)
        descriptorVector.push_back(v[i]);
This yields one support vector of 3780 components. Inserting this SVM into the HOG descriptor:


Problems I have:

  • Processing is now extremely slow: almost 10 seconds per frame!
  • The algorithm is more robust, but the rectangle around the detected person is twice the person's height and width.

Could anyone help me or give me some advice about this? Why is it so slow?

Thank you very much in advance,

Best regards,

2013-06-17 09:07:46 -0600 received badge  Student (source)
2013-06-17 07:51:37 -0600 received badge  Editor (source)
2013-06-17 04:30:31 -0600 asked a question pattern recognition to detect object position?

Hello all;

I am trying to program a pattern-recognition system using the features2d module together with the nonfree module. My main objective is to detect the position of an object in a scene, given 5 models of the object in different positions. The algorithm must be translation-, rotation- and scale-invariant. As a first try I am using the SURF detector, adjusting its parameters, and I obtain correct matches when the position of the model and the position in the scene coincide. This can be seen in the following image:

(image)

However, when I use the same algorithm with another position, I also obtain matches, which are obviously incorrect:

(image)

I want to detect the position of the object in the scene, but if I obtain matches in all cases, it is impossible to know which is the real position. Is this approach correct for what I am trying to do? Any other good ideas?

Thank you all very much in advance,

Best regards, Alberto

PS: I attach the code:

    #include <cstdio>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/nonfree/features2d.hpp>

    using namespace cv;

    int main( int argc, char** argv )
    {
        Mat img_object = imread( "Pos2Model_Gray.png", CV_LOAD_IMAGE_GRAYSCALE );
        Mat img_scene = imread( "Kinect_grayscale_36.png", CV_LOAD_IMAGE_GRAYSCALE );

        //-- Step 1: Detect the keypoints using the SURF detector
        int minHessian = 800;
        std::vector<KeyPoint> keypoints_object, keypoints_scene;
        SurfFeatureDetector detector( minHessian );
        detector.detect( img_object, keypoints_object );
        detector.detect( img_scene, keypoints_scene );

        //-- Step 2: Calculate descriptors (feature vectors)
        SurfDescriptorExtractor extractor;
        Mat descriptors_object, descriptors_scene;
        extractor.compute( img_object, keypoints_object, descriptors_object );
        extractor.compute( img_scene, keypoints_scene, descriptors_scene );

        //-- Step 3: Match descriptor vectors using the FLANN matcher
        FlannBasedMatcher matcher;
        std::vector< DMatch > matches;
        matcher.match( descriptors_object, descriptors_scene, matches );

        //-- Quick calculation of max and min distances between keypoints
        double max_dist = 0; double min_dist = 100;
        for( int i = 0; i < descriptors_object.rows; i++ )
        {
            double dist = matches[i].distance;
            if( dist < min_dist ) min_dist = dist;
            if( dist > max_dist ) max_dist = dist;
        }
        printf( "-- Max dist : %f \n", max_dist );
        printf( "-- Min dist : %f \n", min_dist );

        //-- Keep only "good" matches (i.e. whose distance is less than 1.5*min_dist)
        std::vector< DMatch > good_matches;
        for( int i = 0; i < descriptors_object.rows; i++ )
        {
            if( matches[i].distance < 1.5 * min_dist )
                good_matches.push_back( matches[i] );
        }

        Mat img_matches;
        drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
                     good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                     vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

        //-- Localize the object
        std::vector<Point2f> obj;
        std::vector<Point2f> scene;
        for( size_t i = 0; i < good_matches.size(); i++ )
        {
            //-- Get the keypoints from the good matches
            obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
            scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
        }

        Mat H = findHomography( obj, scene, CV_RANSAC );

        //-- Get the corners from the image_1 ( the object to be "detected" )
        std::vector<Point2f> obj_corners(4);
        obj_corners[0] = cvPoint( 0, 0 );
        obj_corners[1] = cvPoint( img_object.cols, 0 );
        obj_corners[2] = cvPoint( img_object.cols, img_object.rows );
        obj_corners[3] = cvPoint( 0, img_object.rows );
        std::vector<Point2f> scene_corners(4);

        perspectiveTransform( obj_corners, scene_corners, H );

        //-- Draw lines between the corners (the mapped object in the scene - image_2)
        Point2f offset( (float)img_object.cols, 0 );
        line( img_matches, scene_corners[0] + offset, scene_corners[1] + offset, Scalar( 0, 255, 0 ), 4 );
        line( img_matches, scene_corners[1] + offset, scene_corners[2] + offset, Scalar( 0, 255, 0 ), 4 );
        line( img_matches, scene_corners[2] + offset, scene_corners[3] + offset, Scalar( 0, 255, 0 ), 4 );
        line( img_matches, scene_corners[3] + offset, scene_corners[0] + offset, Scalar( 0, 255, 0 ), 4 );

        //-- Show detected matches
        imshow( "Good Matches & Object detection", img_matches );
        waitKey( 0 );

        return 0;
    }