Out of desperation, I tried training and running on grayscale images only, and for reasons I don't yet understand, this worked. As far as I can tell, gpu::HOGDescriptor should handle BGRA images fine, so I'm not sure why it failed in this case, but I haven't explored it thoroughly yet.
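For reference, this is roughly what the grayscale path looks like; just a minimal sketch assuming OpenCV 2.4's gpu module, with a placeholder file name:

```
// Minimal sketch of the grayscale workaround, assuming OpenCV 2.4's gpu module.
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    cv::Mat bgr = cv::imread("sample.png");   // placeholder image
    cv::Mat gray;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);     // CV_8UC1 input avoids the BGRA issue I hit

    cv::gpu::GpuMat d_gray(gray);             // upload to the GPU

    cv::gpu::HOGDescriptor hog;               // default 64x128 window, 8x8 cells, 9 bins
    // ... hog.getDescriptors(...) for training, hog.detectMultiScale(...) for detection
    return 0;
}
```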

So, for anyone who finds this via Google in the future, here's my best practice right now:

0) Try grayscale images first (see the sketch above)

1) getDescriptors works fine for generating a large positive/negative corpus; the default descr_format parameter works (first sketch after this list)

2) Train the SVM with libsvm ("svm-train -s 0 -t 0", i.e. linear C-SVC), then combine the support vectors into a single 'w' weight vector with the bias/rho term

3) Passing the result to HOGDescriptor as a vector<float> with the bias/rho term at the end is fine (second sketch after this list)

4) As many others have said, expect false positives and plan on bootstrapping (hard-negative mining) from the initial results

5) Tweak the detection threshold parameter (hit_threshold) to reduce the detection count. It can get pretty high in some cases (currently as high as 3 for me), which might say something about the separation margin in the SVM, though I'm not sure.
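To illustrate steps 1 and 2, here's a rough sketch of dumping getDescriptors output in libsvm's sparse text format so it can be fed to svm-train. The helper name, the label convention, and the assumption that crops are already resized to the window size are mine, not part of the original workflow:

```
// Sketch only: write one GPU HOG descriptor per labelled grayscale crop in
// libsvm's sparse "label idx:value" format. Assumes each crop is already
// resized to hog.win_size; looping over your corpus is up to you.
#include <fstream>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

void appendSample(std::ofstream& out, cv::gpu::HOGDescriptor& hog,
                  const cv::Mat& grayCrop, int label)   // label: +1 or -1
{
    cv::gpu::GpuMat d_img(grayCrop), d_descr;
    // Default descr_format (DESCR_FORMAT_COL_BY_COL) is what worked for me.
    hog.getDescriptors(d_img, hog.win_size, d_descr);

    cv::Mat descr;
    d_descr.download(descr);                  // one row of floats per window

    out << label;
    const float* p = descr.ptr<float>(0);
    for (int i = 0; i < descr.cols; ++i)
        out << ' ' << (i + 1) << ':' << p[i]; // libsvm feature indices are 1-based
    out << '\n';
}
```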
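And for steps 3 and 5, a sketch of loading the combined detector and detecting with a raised threshold. I'm assuming the weight vector plus bias term has already been exported to a plain text file of floats (parsing libsvm's model file and the exact bias sign convention are left out), and hit_threshold is what I mean by "detection threshold":

```
// Sketch only: feed the combined w + bias vector to the GPU HOG and detect
// with a raised hit_threshold. "detector.txt" is a placeholder for a plain
// text file of whitespace-separated floats (w followed by the bias/rho term)
// that you produce yourself when combining the libsvm model.
#include <fstream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

static std::vector<float> loadDetector(const char* path)
{
    std::vector<float> v;
    std::ifstream in(path);
    float f;
    while (in >> f) v.push_back(f);
    return v;
}

int main()
{
    cv::gpu::HOGDescriptor hog;
    hog.setSVMDetector(loadDetector("detector.txt"));

    cv::Mat gray = cv::imread("scene.png", CV_LOAD_IMAGE_GRAYSCALE); // placeholder
    cv::gpu::GpuMat d_gray(gray);

    std::vector<cv::Rect> found;
    double hit_threshold = 3.0;               // step 5: raise this to cut false positives
    hog.detectMultiScale(d_gray, found, hit_threshold);
    return 0;
}
```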