Haartraining uses Haar-like wavelets to define features of the images to be trained on. Considering that a single 24 x 24 pixel sample already yields about 180,000 features of this kind, you can see that the selection step where AdaBoost picks the best feature for the next weak classifier can be quite computationally expensive.
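To see where a number of that magnitude comes from, here is a small sketch that enumerates the five classic Viola-Jones feature types (two two-rectangle, two three-rectangle, one four-rectangle) at every scale and position inside a 24 x 24 window. The basic set alone already contains 162,336 features; the extended set with tilted features pushes the count past 180,000:

```python
def count_haar_features(win=24):
    # Base shapes of the five classic Viola-Jones feature types:
    # two-rectangle (horizontal, vertical), three-rectangle (both
    # orientations), and the four-rectangle checkerboard.
    base_shapes = [(2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]
    total = 0
    for bw, bh in base_shapes:
        # Every integer scaling of the base shape that still fits the window.
        for w in range(bw, win + 1, bw):
            for h in range(bh, win + 1, bh):
                # Every placement of a w x h feature inside the window.
                total += (win - w + 1) * (win - h + 1)
    return total

print(count_haar_features(24))  # 162336 for the basic feature set
```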
That said, 3 days is still quite fast for haartraining, given that you used quite a large training set. I have seen trainings with 1000 positives and 5000 negatives take more than a week to complete. So I would say: be patient.
If you decide to stop training at this level, kill the process and then restart the training, pointing it at the same destination folder but with one stage less than the number of stages already trained. It will then just read the existing stage parameter files and assemble a new XML cascade model from them. This gives you the chance to actually try your detector.
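As a sketch, with hypothetical file names (samples.vec, negatives.txt and the haarcascade output folder are placeholders), the restart could look like this: if 15 stages had finished when you killed the process, re-run with -nstages 14 and the tool reads the trained stage folders and writes out the XML cascade instead of training further:

```shell
# Original run (example parameters), killed partway through:
# opencv_haartraining -data haarcascade -vec samples.vec -bg negatives.txt \
#     -npos 1000 -nneg 5000 -nstages 20 -w 24 -h 24

# Restart with one stage less than the number already completed; the tool
# finds the finished stage folders inside "haarcascade" and only assembles
# the XML cascade from them:
opencv_haartraining -data haarcascade -vec samples.vec -bg negatives.txt \
    -npos 1000 -nneg 5000 -nstages 14 -w 24 -h 24
```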
As for traincascade, it is the newer training application in OpenCV for cascade models. The big difference is that you can also train your model with LBP and HOG features. The advantage of LBP is that training is much faster: large datasets finish roughly 10 times sooner.
So my suggestion: first use the LBP trainer to work out a good amount of data for your detector. Then, once your sample set is good enough to reach a robust detector, spend some time training with HAAR features.
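A minimal opencv_traincascade invocation for the LBP route could look like this (file names and sample counts are placeholders; note that -numPos is usually set somewhat below your total number of positives, because traincascade consumes extra positive samples at each stage):

```shell
# Same vec/bg inputs as for haartraining; -featureType selects the feature set.
opencv_traincascade -data lbpcascade -vec samples.vec -bg negatives.txt \
    -numPos 900 -numNeg 5000 -numStages 20 \
    -featureType LBP -w 24 -h 24
```

Once your dataset is tuned, swapping -featureType LBP for -featureType HAAR reruns the same pipeline with Haar features.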
Any more questions, feel free to ask.