OpenCV training: difference between one image and a set of images.
Hi all, I have created a road sign classifier for four signs (stop, turn left, turn right and roundabout). I created the .vec file using one image per sign (one for stop, one for left, etc.) with opencv_createsamples, and trained the classifier with that file. I use the resulting .xml file in an Android app to detect and track the signs, but I have a problem: training stops at 14 stages at most and never reaches 20, and when I try to detect a photo of one of the signs (taken outdoors), the classifier doesn't detect it. My question is: is it better to create a classifier from one image (with createsamples) or from many photos of the object? And why does the classifier not work on real photos? Thanks for the help!
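For reference, this is roughly how I test the trained .xml cascade on a photo (a minimal Python sketch for a desktop sanity check; the file names are placeholders, not my actual files):

```python
import cv2

# Minimal sanity check of the trained cascade on a single photo
# (placeholder file names).
cascade = cv2.CascadeClassifier("stop_sign_cascade.xml")
img = cv2.imread("road_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)

# detectMultiScale returns a list of (x, y, w, h) rectangles.
signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3,
                                 minSize=(24, 24))
for (x, y, w, h) in signs:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("result.jpg", img)
print("detections:", len(signs))
```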
I have never seen a good classifier that was trained on a single image only. Rather, get a few hundred images.
Also, I doubt that running four separate cascade detections (one per sign) is a good idea at all. Since a single detection on Android already takes almost a second, this approach won't scale.
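To illustrate the cost: with one cascade per sign, every frame has to be scanned once per classifier, roughly like this (a minimal Python sketch just to show the idea; the .xml file names are placeholders):

```python
import cv2

# One cascade per sign type (placeholder file names).
cascades = {name: cv2.CascadeClassifier(name + ".xml")
            for name in ("stop", "left", "right", "roundabout")}

frame = cv2.imread("frame.jpg")                 # in the app this is a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=0.5, fy=0.5)   # downscaling helps, but...

# ...the full multi-scale scan still runs once per classifier,
# so the per-frame cost grows linearly with the number of signs.
for name, cascade in cascades.items():
    hits = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3)
    print(name, len(hits))
```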
Do your signs have the same shape?
The stop sign is octagonal and the others are circular. The stop, turn left and turn right classifiers work well enough (they give me a few small problems, but they work); only the roundabout one gives me trouble. I think the error is in the images used to train the classifier. How can I get a few hundred images? Is that possible with createsamples?
Grab a camera, get on the road, and collect sign images in real-life conditions ... that is about the only thing that will work decently!
I think I'll do that. How many images do you think I should take? Do they have to be the same size, or doesn't that matter? And should the images contain only the sign?
You grab around 500 images, then you use the opencv_annotation tool to select the regions of interest, which will in turn get resized to a fixed width and height by the opencv_createsamples tool.
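Roughly, the pipeline looks like this (a sketch that just wraps the command-line tools from Python; paths, sample counts and window sizes are placeholders you would adapt to your own data):

```python
import subprocess

# 1) Mark the signs in the collected photos; writes one line per image
#    with its bounding boxes to annotations.txt.
subprocess.run(["opencv_annotation",
                "--annotations=annotations.txt",
                "--images=positives/"], check=True)

# 2) Pack the annotated regions into a .vec file, resized to a fixed
#    detection window (24x24 here as an example).
subprocess.run(["opencv_createsamples",
                "-info", "annotations.txt",
                "-vec", "signs.vec",
                "-num", "500",
                "-w", "24", "-h", "24"], check=True)

# 3) Train the cascade with the .vec file plus a list of negative images.
subprocess.run(["opencv_traincascade",
                "-data", "cascade_out/",
                "-vec", "signs.vec",
                "-bg", "negatives.txt",
                "-numPos", "450", "-numNeg", "900",
                "-numStages", "20",
                "-w", "24", "-h", "24"], check=True)
```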
OK, I'll take 500 photos. For the tool I'll use Haartraining_Stuff. I think it's the same, no?