Opencv_traincascade training too fast?
Hi!
I'm creating my own car-wheel-detection cascade as a fun project. My attempt so far is based on different tutorials, and I've described the whole process below:
Positive data: 40 images of a car wheel, cropped from photos of cars and downsized to 50x50 PNG (approx. 7 KB each).
Negative data: 600 random outdoor photos containing no cars or wheels, resized to 500x500 JPG (approx. 100 KB each).
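In case the details matter, the negatives are listed in a background description file, which I built roughly like this (the directory and file names here are just examples, not my exact layout):

    # opencv_createsamples / opencv_traincascade expect the negatives (-bg)
    # as a plain text file with one image path per line
    find ./negatives -iname '*.jpg' > negatives.txt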
Used Naotoshi Seo's Perl script to generate 1500 positive samples (same settings, except -w and -h set to 50x50).
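As far as I understand it, that script just loops opencv_createsamples over every positive image against random negatives, so each run looks something like this (the distortion angles, counts and paths are my guesses at the tutorial defaults, not commands I typed myself):

    # one run per positive image; the script varies -img and -vec for each
    opencv_createsamples -img positives/wheel_01.png -bg negatives.txt \
        -vec samples/wheel_01.vec -num 40 \
        -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 \
        -bgcolor 0 -bgthresh 0 -maxidev 40 -w 50 -h 50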
Used his mergevector.py script to merge all the .vec files generated.
Used the same training parameters with opencv_traincascade, except with LBP features and the -w and -h parameters set to 50x50.
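The call looked roughly like this (numPos is kept below the 1500 samples in the vec to leave headroom for rejected samples, and the other numbers are approximate, not copied from my shell history):

    opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt \
        -numStages 20 -numPos 1200 -numNeg 600 \
        -featureType LBP -w 50 -h 50 \
        -minHitRate 0.995 -maxFalseAlarmRate 0.5 \
        -precalcValBufSize 1024 -precalcIdxBufSize 1024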
Well, training is super fast (a couple of minutes for 20 stages), and when I tested the result, it detected a lot of false positives. I suspect something's wrong with the data, or that I can tweak some parameters/settings.
Does anyone have any ideas or tips on what parameters, settings, or data tweaks I can use for better performance?
Thanks!
//Nick
"Positive data: 40 images of a carwheel," -- come back with 10X or even 100x of th that
I thought the Perl script generated more samples? I followed this tutorial: http://coding-robin.de/2013/07/22/tra... where the author uses 40 images and then uses the script to generate 1500 samples.
Forget the Perl script (or any silly attempt at generating synthetic data from single images).