OpenCV 2.4.13 cascade training error at stage 4: "Train dataset for temp stage can not be filled"
I built OpenCV 2.4.13 on Ubuntu 16.04 x64 from the source zip downloaded from the OpenCV website.
I prepared the required libraries and packages according to the "OpenCV Installation in Linux" guide, then configured the release build with:
cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local ..
then:
make -j4
and then:
make install
The installation was successful, and I could use "opencv_createsamples" to generate the desired vector file.
The problem is with "opencv_traincascade". Let me show you some info:
mohammad@ThinkPad-Xenial:~$ ls /home/mohammad/Documents -all
total 3219104
drwxr-xr-x 10 mohammad mohammad 4096 Aug 23 12:11 .
drwxr-xr-x 26 mohammad mohammad 4096 Aug 23 10:30 ..
-rw-rw-r-- 1 mohammad mohammad 2482 Aug 23 11:41 bg.txt
drwxrwxrwx 2 mohammad mohammad 4096 Aug 22 21:27 mohammadspic
drwxrwxrwx 2 mohammad mohammad 20480 Aug 22 15:01 negative_images
-rw-rw-r-- 1 mohammad mohammad 10389 Aug 23 10:32 pos.vec
-rw-rw-r-- 1 mohammad mohammad 341 Aug 23 10:30 ps.txt
drwxrwxr-x 2 mohammad mohammad 4096 Aug 23 11:44 xm
Here is my training command. If you look at the end of the training output, you will see a similar error. But this time the negative-images description file is correct, and "opencv_traincascade" works properly until stage 4!
mohammad@ThinkPad-Xenial:~$ opencv_traincascade -data /home/mohammad/Documents/xm -vec /home/mohammad/Documents/pos.vec -bg /home/mohammad/Documents/bg.txt -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 9 -numNeg 50 -w 24 -h 24 -mode ALL -precalcValBufSize 1024 -precalcIdxBufSize 1024
PARAMETERS:
cascadeDirName: /home/mohammad/Documents/xm
vecFileName: /home/mohammad/Documents/pos.vec
bgFileName: /home/mohammad/Documents/bg.txt
numPos: 9
numNeg: 50
numStages: 20
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 24
sampleHeight: 24
boostType: GAB
minHitRate: 0.999
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: ALL
Number of unique features given windowSize [24,24] : 261600
===== TRAINING 0-stage =====
<BEGIN
POS count : consumed 9 : 9
NEG count : acceptanceRatio 50 : 1
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 1 seconds.
===== TRAINING 1-stage =====
<BEGIN
POS count : consumed 9 : 9
NEG count : acceptanceRatio 50 : 0.0773994
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 2 seconds.
===== TRAINING 2-stage =====
<BEGIN
POS count : consumed 9 : 9
NEG count : acceptanceRatio 50 : 0.0162496
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 3 seconds.
===== TRAINING 3-stage =====
<BEGIN
POS count : consumed 9 : 9
NEG count : acceptanceRatio 50 : 0.00854701
Precalculation time: 1
+----+---------+---------+
| N | HR | FA |
+----+---------+---------+
| 1| 1| 0|
+----+---------+---------+
END>
Training until now has taken 0 days 0 hours 0 minutes 4 seconds.
===== TRAINING 4-stage =====
<BEGIN
POS count : consumed 9 : 9
Train dataset for temp stage can not be filled. Branch training terminated.
Here are the contents of "/home/mohammad/Documents/bg.txt", which lists the paths of the 50 negative images:
/home/mohammad/Documents/negative_images ...
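For reference, a bg.txt like this can be regenerated with a one-liner. This is a sketch under the assumption that the negatives live in ./negative_images, as in the listing above; adjust the path and extension to your data:

```shell
# Assumption: negatives are .jpg files in ./negative_images (as in the
# directory listing above). opencv_traincascade's -bg option expects one
# image path per line.
mkdir -p negative_images            # no-op if the folder already exists
find "$PWD/negative_images" -name '*.jpg' | sort > bg.txt
wc -l < bg.txt   # sanity check: should equal the number of negative images
```

Using absolute paths ($PWD) avoids "cannot open" failures when the trainer is launched from a different working directory.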
What is the size of your negatives?
Very small: 50×28 pixels.
That's the issue. Negatives that small yield only a limited set of samples for your model dimensions and parameters. At some point the trainer simply cannot find new negative samples to continue training, so it stops. Simple solution: provide more negatives in your bg.txt file!
I increased the negative images to 600, but I get the same error; it just took much more time: http://pastebin.com/J6vtArrX. You explained it, but I still don't understand the relation between the number of negatives and the training stopping.
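To see the relation, here is a very rough back-of-the-envelope sketch. It assumes, simplistically, that the trainer harvests 24×24 windows from each negative at a handful of scales; the scale step and sliding policy here are illustrative assumptions, not the exact traincascade sampler:

```shell
# Rough estimate of how many candidate 24x24 patches one negative image
# can supply when a window is slid over it at several scales.
# (Assumption: ~1.25x downscaling per step; the real sampler differs.)
count_windows() {  # usage: count_windows WIDTH HEIGHT
  w=$1; h=$2; win=24; total=0
  while [ "$w" -ge "$win" ] && [ "$h" -ge "$win" ]; do
    total=$(( total + (w - win + 1) * (h - win + 1) ))
    w=$(( w * 4 / 5 )); h=$(( h * 4 / 5 ))   # downscale by ~1.25
  done
  echo "$total"
}
count_windows 50 28     # a 50x28 negative -> 135 candidates
count_windows 640 480   # a VGA negative -> hundreds of thousands
```

Each stage only keeps negatives that the previous stages still misclassify, so a pool of roughly 135 candidate windows per image is exhausted after a few stages, while larger negatives keep supplying fresh hard samples.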
You need to increase the actual number of samples!
I did; I added 550 JPEGs to the negatives folder and updated the bg.txt file: http://pastebin.com/8fxCwrxu. :(
Actually, did you even test your classifier? It seems that with 9 positives and 50 negatives, the first stage is already able to perfectly separate the given data!
Yes, but the accuracy was very bad! Is that because of the training failure or the low number of positive images?
The low number of samples will never get you decent accuracy. Stage 1 clearly states that, given your data, the separation is perfect on the training data. It cannot do better than perfect!
I increased the negatives to 1200 and then 4800 samples: no change!
I regenerated the vector file with both smaller and larger width and height (24 and 100): no change!
I doubled the positives (18 images): no change!
But with LBP feature training, it passed stage 4 and stopped at stage 5.
However, training with 9 positive samples is just a test; I will prepare more than a thousand images for the final XML file.