How to Detect People using OpenCV HOG descriptor - getDefaultPeopleDetector?

asked 2013-12-05 02:48:23 -0500

Manoj Patil

Hi Guys,

I am using OpenCV for people detection with the HOG descriptor and getDefaultPeopleDetector. Below is the sample code I am using:

UIImage * img = ImageVw.image;
cv::Mat cvImg = [self CVGrayscaleMat:img];
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
cv::vector<cv::Rect> found;
double t = (double)cv::getTickCount();
hog.detectMultiScale(cvImg, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);
t = (double)cv::getTickCount() - t;
printf("Detection time = %gms\n", t*1000./cv::getTickFrequency());

for (int i = 0; i < (int)found.size(); i++) {
    cv::Rect r = found[i];
    // shrink the detected rectangle a little before showing it
    r.x += cvRound(r.width * 0.1);
    r.y += cvRound(r.height * 0.1);
    r.width = cvRound(r.width * 0.8);
    r.height = cvRound(r.height * 0.8);
    [self addViewAtCvRect:r];
}
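
For comparison, this is roughly how the stock OpenCV peopledetect.cpp sample calls the same API (a plain C++ sketch, not taken from the code above; the parameter values are the sample's and may differ between OpenCV versions). Note that it passes a hit threshold of 0, whereas the snippet above uses 0.2, which discards weaker detections:

#include <opencv2/opencv.hpp>
#include <vector>

void detectPeople(const cv::Mat &img, std::vector<cv::Rect> &found)
{
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
    hog.detectMultiScale(img, found,
                         0,                 // hitThreshold: SVM decision margin, higher = fewer but stronger hits
                         cv::Size(8, 8),    // winStride: step of the sliding detection window
                         cv::Size(32, 32),  // padding: border added around the image
                         1.05,              // scale: image pyramid scale factor
                         2);                // grouping threshold for merging overlapping detections
}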

-(void)addViewAtCvRect:(cv::Rect)r {
NSLog(@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height);

CGRect frame = CGRectMake(r.x, r.y, r.width, r.height);
UIView *personView = [[UIView alloc] initWithFrame:frame];
personView.backgroundColor = [UIColor colorWithWhite:1 alpha:0.5];
[self.view addSubview:personView]; // without this the highlight view is created but never shown

UIImageView *placeMarkView = [[UIImageView alloc] initWithFrame:CGRectZero];
placeMarkView.frame = CGRectMake(r.x, r.y-40, r.width, 40);

placeMarkView.image = [UIImage imageNamed:@"PlaceMark"];

UILabel *lbl = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, frame.size.width, 40)];
lbl.numberOfLines = 0;
lbl.font =  [UIFont systemFontOfSize:12];

NSString *foundStr = [NSString stringWithFormat:@"%d,%d,%d,%d",r.x, r.y, r.width, r.height];

lbl.text = foundStr;
lbl.backgroundColor = [UIColor clearColor];
lbl.textColor = [UIColor blackColor];

[placeMarkView addSubview:lbl];
placeMarkView.tag = TAG;

[self.view addSubview:placeMarkView];

[self.view bringSubviewToFront:detectPeopleBtn];
[self.view bringSubviewToFront:chooseImageBtn];

}

-(cv::Mat)CVGrayscaleMat:(UIImage *)img {

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGFloat cols = img.size.width;
CGFloat rows = img.size.height;

NSLog(@"cols = %f, rows = %f",cols,rows);

cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                cols,                      // Width of bitmap
                                                rows,                     // Height of bitmap
                                                8,                          // Bits per component
                                                cvMat.step[0],              // Bytes per row
                                                colorSpace,                 // Colorspace
                                                kCGImageAlphaNone |
                                                kCGBitmapByteOrderDefault); // Bitmap info flags

CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), img.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);

return cvMat;

}
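
One thing worth noting about the helper above: CGContextDrawImage renders img.CGImage exactly as stored and ignores the UIImage's imageOrientation, so portrait photos straight from the camera come out rotated inside the cv::Mat, and the default people detector only finds upright people. Below is a minimal sketch (a hypothetical helper, not part of the original code) that re-renders the image through UIKit, which does honour the orientation, before passing it to CVGrayscaleMat:

// Hypothetical helper: bake imageOrientation into the pixel data before the
// CGImage-based grayscale conversion above.
-(UIImage *)normalizedImage:(UIImage *)img {
    if (img.imageOrientation == UIImageOrientationUp)
        return img; // already upright, nothing to do

    UIGraphicsBeginImageContextWithOptions(img.size, NO, 1.0);
    [img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
    UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return upright;
}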

Resources: I am using an iPhone 4s with iOS 7. The above code doesn't give me accurate detections on images taken with the iPhone 4s. When I instead use images captured with a Google Nexus 5 camera, about 60% of the detections are right. I don't understand how to use it correctly. Can anyone please help me?
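
A detail that may matter here: getDefaultPeopleDetector works with a 64x128 detection window (OpenCV's documented window size, not something stated in the question), so multi-megapixel camera frames are scanned over many pyramid levels, which is slow and makes results hard to compare between devices. A rough sketch of one common experiment, downscaling the grayscale image before detectMultiScale (the helper name shrinkForDetection is hypothetical):

#include <opencv2/opencv.hpp>

// Hypothetical pre-processing step: shrink very large camera frames before
// running the detector; scaleOut records the factor that was applied.
cv::Mat shrinkForDetection(const cv::Mat &gray, double &scaleOut, int maxWidth = 640)
{
    scaleOut = 1.0;
    if (gray.cols <= maxWidth)
        return gray;
    scaleOut = (double)maxWidth / gray.cols;
    cv::Mat shrunk;
    cv::resize(gray, shrunk, cv::Size(), scaleOut, scaleOut, cv::INTER_AREA);
    return shrunk;
}

Each cv::Rect returned by detectMultiScale on the shrunken image then needs its x, y, width and height divided by scaleOut before being passed to addViewAtCvRect:.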
