Using an SVM with HOG to detect objects: CPU and GPU give different results

asked 2017-08-16 06:32:36 -0600


Hi everyone,

I trained my own SVM on the CPU and use it to detect objects in an image. I find that the detection results from the CPU and the GPU are different. However, when I use the default people detector, the CPU and GPU give the same result.
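
For reference, by "default people detector" I mean something like the following (a sketch; both detectors use the built-in 64 x 128 pedestrian coefficients):

// CPU: default people detector on a default 64 x 128 HOGDescriptor.
cv::HOGDescriptor cpu_people_hog;
cpu_people_hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

// GPU: same idea, the coefficients come from the cuda::HOG instance itself.
cv::Ptr<cv::cuda::HOG> gpu_people_hog = cv::cuda::HOG::create();
gpu_people_hog->setSVMDetector(gpu_people_hog->getDefaultPeopleDetector());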

This is the CPU detection code:

// Load the trained SVM.
svm = StatModel::load<SVM>( "/Users/BorisLok/Desktop/1.yml" );
// Set the trained svm to my_hog
vector< float > hog_detector;
get_svm_detector( svm, hog_detector );
my_hog.setSVMDetector( hog_detector );
vector< Rect > locations;
Mat src = imread("/Users/BorisLok/Desktop/1.jpg");
// my_hog is a cv::HOGDescriptor; with no extra arguments detectMultiScale
// runs with its default winStride, padding, scale and grouping.
my_hog.detectMultiScale( src, locations );
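
For reference, get_svm_detector just flattens the linear SVM into a weight vector with -rho appended as the bias, roughly as in the OpenCV train_HOG sample (a sketch, assuming a linear SVM with a single compressed support vector):

void get_svm_detector( const Ptr<SVM>& svm, vector< float >& hog_detector )
{
    // Support vectors: a linear SVM stores one compressed vector.
    Mat sv = svm->getSupportVectors();
    const int sv_total = sv.rows;
    // Decision function: alpha weights, support vector indices and bias rho.
    Mat alpha, svidx;
    double rho = svm->getDecisionFunction( 0, alpha, svidx );

    CV_Assert( alpha.total() == 1 && svidx.total() == 1 && sv_total == 1 );
    CV_Assert( sv.type() == CV_32F );

    // Detector = weight vector followed by -rho as the bias term.
    hog_detector.clear();
    hog_detector.resize( sv.cols + 1 );
    memcpy( &hog_detector[0], sv.ptr(), sv.cols * sizeof( hog_detector[0] ) );
    hog_detector[sv.cols] = (float)-rho;
}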

This is the GPU detection code:

// cuda::HOG works on grayscale (or BGRA) input; imread returns BGR, so convert with COLOR_BGR2GRAY.
cvtColor(src, src, cv::COLOR_BGR2GRAY);
cv::cuda::GpuMat gpu_img(src);
// GPU HOG: 64 x 128 window, 16 x 16 blocks, 8 x 8 block stride, 8 x 8 cells, 9 bins.
cv::Ptr<cv::cuda::HOG> gpu_hog = cv::cuda::HOG::create(cv::Size(64,64 * 2),
                                                       cv::Size(16,16),
                                                       cv::Size(8,8),
                                                       cv::Size(8,8),
                                                       9);

std::vector<cv::Rect> gpu_found;
//vector< float > gpu_hog_detector;
//get_svm_detector(svm, gpu_hog_detector);
gpu_hog->setSVMDetector(hog_detector);
gpu_hog->setGroupThreshold(0);
gpu_hog->setNumLevels(64);
gpu_hog->setHitThreshold(0);
gpu_hog->setWinStride(Size(48,96));
gpu_hog->setScaleFactor(1.05);
std::vector<double> confidences;
gpu_hog->detectMultiScale(gpu_img, gpu_found, &confidences);
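
In case the parameters matter, here is a sketch of the CPU detector configured with the same values as the GPU one above (the window/block sizes, winStride, scale and thresholds are copied from the GPU call; my real my_hog setup is not shown above):

// Sketch: CPU HOG with the same parameters as the GPU detector.
cv::HOGDescriptor cpu_hog( cv::Size(64, 128), cv::Size(16, 16),
                           cv::Size(8, 8), cv::Size(8, 8), 9 );
cpu_hog.setSVMDetector( hog_detector );

std::vector<cv::Rect> cpu_found;
cpu_hog.detectMultiScale( src, cpu_found,
                          0.0,                // hitThreshold, same as setHitThreshold(0)
                          cv::Size(48, 96),   // winStride, same as setWinStride
                          cv::Size(0, 0),     // padding
                          1.05,               // scale, same as setScaleFactor
                          0.0 );              // finalThreshold, same as setGroupThreshold(0)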

I found that the detection results differ between CPU and GPU. Could the problem be that my training data is too small? I trained the SVM with only 3 positive images and 3 negative images.
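
For context, the training step looks roughly like this (a minimal sketch following the OpenCV train_HOG sample; the loop over my 3 positive and 3 negative samples is omitted):

// Minimal training sketch; image loading and paths are placeholders.
cv::HOGDescriptor hog( cv::Size(64, 128), cv::Size(16, 16),
                       cv::Size(8, 8), cv::Size(8, 8), 9 );

cv::Mat train_data;            // one row of HOG features per sample
std::vector<int> labels;       // +1 = positive, -1 = negative

auto add_sample = [&]( const cv::Mat& img, int label ) {
    std::vector<float> desc;
    hog.compute( img, desc );  // img should be exactly the 64 x 128 window size
    train_data.push_back( cv::Mat(desc).reshape(1, 1).clone() );
    labels.push_back( label );
};

// ... call add_sample() for each of the 3 positive (+1) and 3 negative (-1) images ...

cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
svm->setType( cv::ml::SVM::C_SVC );
svm->setKernel( cv::ml::SVM::LINEAR );   // setSVMDetector only works with a linear kernel
svm->setC( 0.01 );
svm->train( train_data, cv::ml::ROW_SAMPLE, labels );
svm->save( "/Users/BorisLok/Desktop/1.yml" );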

My computer spec: Intel i7 CPU, NVIDIA GTX 750M GPU, OpenCV 3.2.0, CUDA 8.0.

Please help. Thanks!
