Hi!
I'm creating my own car-wheel detection cascade as a fun project. My attempt so far is based on a few different tutorials, and I've described the whole process below:
Positive data: 40 images of a car wheel, cropped from photos of cars and downsized to 50x50 PNG (approx. 7 KB each).
Negative data: 600 random outdoor photos containing no cars or wheels, resized to 500x500 JPEG (approx. 100 KB each).
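For reference, here is roughly how I prepared the negatives (paths are examples; the resize assumes ImageMagick's mogrify is available):

```shell
# Resize all negatives in place to 500x500 (requires ImageMagick).
mogrify -resize '500x500!' negatives/*.jpg

# Build the background file opencv_traincascade expects:
# one image path per line.
find negatives -name '*.jpg' > negatives.txt
```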
Used Naotoshi Seo's Perl script to generate 1500 positive samples (same settings as in his tutorial, except -w and -h set to 50).
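Roughly what I ran (assuming the createtrainsamples.pl script from Seo's tutorial; the quoted string with his default distortion settings is passed through to opencv_createsamples):

```shell
# Generate 1500 distorted positives from the 40 cropped wheel images.
perl createtrainsamples.pl positives.txt negatives.txt samples 1500 \
  "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 \
   -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 50 -h 50"
```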
Used his mergevector.py script to merge all the generated .vec files.
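The merge step, sketched (flags assume the commonly used mergevec.py interface that takes a directory of .vec files; adjust to whatever your copy of the script expects):

```shell
# Merge the per-image .vec files under samples/ into one file
# for opencv_traincascade.
python mergevec.py -v samples/ -o positives.vec
```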
Used the same training parameters with opencv_traincascade, except with -featureType LBP and -w and -h set to 50.
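A sketch of the training call (numPos/numNeg values are illustrative, not what I claim is optimal). One thing I learned from the OpenCV docs: numPos must be below the number of samples in the .vec file, because each stage consumes roughly (1 - minHitRate) * numPos extra samples:

```shell
# Rule of thumb:
#   vec_count >= numPos + (numStages - 1) * (1 - minHitRate) * numPos
# With 1500 samples, minHitRate 0.995 and 20 stages,
# numPos ~ 1350 leaves enough headroom (1350 + 19*0.005*1350 ~ 1478).
opencv_traincascade -data classifier -vec positives.vec -bg negatives.txt \
  -numPos 1350 -numNeg 600 -numStages 20 \
  -featureType LBP -w 50 -h 50 \
  -minHitRate 0.995 -maxFalseAlarmRate 0.5
```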
Well, training is super fast (a couple of minutes for 20 stages), but when I tested the resulting cascade it produced a lot of false positives. I suspect something's wrong with the data, or that I could tweak some parameters/settings.
Does anyone have ideas or tips on what parameters, settings, or data tweaks I could use for better performance?
Thanks!
//Nick