Hello,
I have written a Python script with OpenCV 3.2.0 that computes HOG features for sample images with HOGDescriptor and feeds them to an SVM. The trained SVM vector is then plugged back into the HOGDescriptor to detect objects.
My training images are 19x19 face samples from an MIT database: 2429 positive and 4548 negative samples. The HOGDescriptor settings are:
winSize=(19, 19), blockSize=(4, 4), blockStride=(3, 3), cellSize=(2, 2)
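For reference, constructing the descriptor with these settings looks roughly like this (a sketch; nbins=9 is the OpenCV default and an assumption on my part):

import cv2

# Descriptor with the settings listed above; nbins=9 is assumed (OpenCV default).
descriptor = cv2.HOGDescriptor(
    (19, 19),  # winSize
    (4, 4),    # blockSize
    (3, 3),    # blockStride
    (2, 2),    # cellSize
    9)         # nbins
# (19 - 4) / 3 + 1 = 6 block positions per axis, 2x2 cells per block:
# 6 * 6 * 4 * 9 = 1296 features per window.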
Training completes, and SVM predict gives me a 1.5% error rate on ~24000 test images, again from the same MIT database.
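The error rate is measured with SVM predict over the test features; a minimal sketch of what I mean (the function and variable names here are placeholders, not from my script):

import numpy

def error_rate(svm, test_gradients, test_labels):
    # cv2.ml.SVM.predict returns (retval, results); results is an Nx1
    # float32 column of predicted labels.
    samples = numpy.array(test_gradients, dtype=numpy.float32)
    _, predictions = svm.predict(samples)
    return numpy.mean(predictions.ravel() != numpy.ravel(test_labels))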
So far so good.
When I use a HOGDescriptor with the trained SVM vector on the same ~24000-image test set, calling detectMultiScale with winStride=(8, 8) and scale=1.0, I get no detections at all.
Are my detectMultiScale params winStride=(8, 8) and scale=1.0 correct?
I also tested on a generic photo of about 500x500 pixels with a face that is around 75x75 pixels. Will my trained detector match that face, and what winStride should I use? Do the training images have to be the same size as the images processed during detection?
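For reference, the detection call I mean is roughly this (a sketch; the image path is a placeholder, and descriptor is the HOGDescriptor above after setSVMDetector has been called with the trained vector):

import cv2

# Sketch of the detectMultiScale call in question; 'photo.jpg' is a
# placeholder and 'descriptor' already has the trained SVM detector set.
image = cv2.imread('photo.jpg', cv2.IMREAD_GRAYSCALE)
rects, weights = descriptor.detectMultiScale(
    image,
    winStride=(8, 8),  # step between scanned window positions
    scale=1.0)         # pyramid scale factor between levels
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 2)  # draw hits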
Thanks!
Here are parts of the training code:
import cv2
import numpy

# Computing gradients:
image = cv2.imread(sample, cv2.IMREAD_GRAYSCALE)
if resize:
    image = cv2.resize(image, self.size)
if equalize:
    image = cv2.equalizeHist(image)
# Original image.
gradient = self.descriptor.compute(image).ravel()
gradients.append(gradient)
# Mirror image (horizontal flip, to double the training samples).
image = cv2.flip(image, 1)
gradient = self.descriptor.compute(image).ravel()
...
# Training
self.svm = cv2.ml.SVM_create()
self.svm.setType(cv2.ml.SVM_C_SVC)
self.svm.setKernel(cv2.ml.SVM_LINEAR)
# cv2.ml expects a float32 sample matrix and int32 labels.
samples = numpy.array(gradients, dtype=numpy.float32)
responses = numpy.array(labels, dtype=numpy.int32)
self.svm.train(samples, cv2.ml.ROW_SAMPLE, responses)
# Build the single detector vector HOG expects: the weights plus -rho.
rho, alpha, supportVectorIndices = self.svm.getDecisionFunction(0)
supportVectors = self.svm.getSupportVectors().ravel()
# setSVMDetector() requires float32; numpy.append upcasts to float64.
supportVectors = numpy.append(supportVectors, -rho).astype(numpy.float32)
self.descriptor.setSVMDetector(supportVectors)