Cascade training parameters and time

asked 2017-06-28 10:07:31 -0600

I have been fiddling around with OpenCV's cascade trainer in an attempt to train my own classifier. The problem is that it has been training for 25+ hours now and has yet to pass even stage 1.

Initially, I ran it with the following command:

nohup opencv_traincascade -data data -vec board.vec -bg bg.txt -numPos 580 -numNeg 1160 -numStages 2 -w 115 -h 153 -featureType LBP &

After about 24 hours it hadn't got through even stage 1. Looking into the nohup.out file, I realized that the default precalcValBufSize was set to 1024 MB. I figured that increasing this to 4096 MB might help with the processing, so I went ahead and restarted the training with the following command:

nohup opencv_traincascade -data data -vec board.vec -bg bg.txt -numPos 580 -numNeg 1160 -numStages 2 -w 115 -h 153 -featureType LBP -precalcIdxBufSize 4096 -precalcValBufSize 4096 &

The training has now been running for almost 25 hours and it hasn't even produced the XML file for stage 0.
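If I have read the LBP feature generator right, it creates one feature for every placement of a 3x3 grid of equal-sized cells inside the training window, so the feature pool explodes with window size. A quick back-of-envelope under that assumption (bash):

# rough LBP feature count for a WxH training window:
# one feature per placement of a 3x3 grid of wxh-pixel cells
W=115; H=153; total=0
for ((w=1; 3*w<=W; w++)); do
  for ((h=1; 3*h<=H; h++)); do
    total=$(( total + (W - 3*w + 1) * (H - 3*h + 1) ))
  done
done
echo "$total"  # ~8.47 million placements, vs 8464 for a 24x24 window

If that count is right, a 115x153 window has roughly a thousand times as many features to precalculate and scan as the standard 24x24 one, which would go a long way towards explaining the runtime.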

A look at the process itself shows that it is using 8284 MB of virtual memory (presumably dominated by the two 4096 MB buffers I asked for) but only 930 MB of physical memory, and a listing of its open files shows everything the process currently has in use. It is doing a great job of burning through my cores, but none at producing any results or even letting me know how far it has got.
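For reference, the only progress indicators I have found so far are the per-stage table traincascade prints as it adds weak classifiers, and the checkpoint files it drops into the -data folder:

# follow the per-stage output as weak classifiers are added
tail -f nohup.out

# params.xml appears up front; a stageN.xml checkpoint appears as each stage completes
ls -l data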

My question(s) is/are: is there any way of making it use more of my actual physical memory in an attempt to speed it up? If not, are there any adjustments I need to make to my training dataset?

Side note: I know the general standard for sample size is 24x24, but I already tried that out and the results were really horrible even after 10 stages.

At that size my object no longer retains its features correctly. This is down to the nature of the object itself: resizing it to 24x24 or even 48x48 just makes it look like a giant, horizontally distorted blob of black pixels without even some of its unique features being visible.
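If adjusting the dataset is the answer, one option I am considering is keeping the object's roughly 3:4 aspect ratio (115:153) but at a much smaller size such as 30x40, rebuilding the vec to match and retraining; positives.txt below is just a stand-in for my annotation file:

# rebuild the positives at 30x40, preserving the ~3:4 aspect ratio of 115x153
opencv_createsamples -info positives.txt -vec board.vec -num 580 -w 30 -h 40

# retrain with a window matching the new sample size
nohup opencv_traincascade -data data -vec board.vec -bg bg.txt -numPos 580 -numNeg 1160 -numStages 2 -w 30 -h 40 -featureType LBP &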
