opencv_traincascade stuck on precalculation when running LBP mode

asked 2015-07-01 10:31:30 -0500 by jackwayneright

updated 2015-07-06 10:09:59 -0500

My cascade training works when using Haar but not LBP. The problem seems to occur during the precalculation phase. For example, when running:

opencv_traincascade -data classifier -vec positive_samples.vec -featureType LBP -bg negative_image_list.txt -precalcValBufSize 1024 -precalcIdxBufSize 1024 -numPos 315 -numNeg 458 -numStages 20 -w 40 -h 40

The output I receive is:

cascadeDirName: classifier
vecFileName: positive_samples.vec
bgFileName: negative_image_list.txt
numPos: 315
numNeg: 458
numStages: 20
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
stageType: BOOST
featureType: LBP
sampleWidth: 40
sampleHeight: 40
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100

===== TRAINING 0-stage =====
POS count : consumed   315 : 315
NEG count : acceptanceRatio    458 : 1

And it stalls at this point without moving forward (even after waiting 30+ minutes). If I run the same command with HAAR instead of LBP, the precalculation finishes within 10 seconds or so. I've tried fiddling with minHitRate and similar parameters, but with no change. When other people's opencv_traincascade stalls, it seems to happen before the NEG count : acceptanceRatio line is displayed, which leads me to believe I'm having a different problem. Can anyone explain why I might be hitting this wall?
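For context on why the LBP precalculation phase can be heavy (this is my reading of OpenCV's lbpfeatures.cpp, not something stated in the thread): the LBP evaluator generates one feature for every rectangle position and block size where a 3x3 grid of cells fits inside the training window, so the feature count grows quickly with window size. A rough counting sketch:

```python
# Rough count of the LBP features opencv_traincascade precalculates for a
# given training window, following the generation loop in lbpfeatures.cpp:
# a feature exists for every (x, y, w, h) with x + 3*w <= winW and
# y + 3*h <= winH (each feature is a 3x3 grid of w-by-h cells).
def lbp_feature_count(win_w, win_h):
    count = 0
    for w in range(1, win_w // 3 + 1):
        for h in range(1, win_h // 3 + 1):
            count += (win_w - 3 * w + 1) * (win_h - 3 * h + 1)
    return count

print(lbp_feature_count(40, 40))  # 67600 features for a 40x40 window
```

So a 40x40 window already yields tens of thousands of features per sample, though that alone should take seconds rather than hang indefinitely.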


I've found that the program is definitely doing work of some kind, as Activity Monitor shows it consuming a huge amount of CPU.

Another user seems to have had this problem a year ago on StackOverflow and tried many things, but they seem to have had no success. Their plight can be found here. It may be worth noting that we are both on OS X.


Trying the exact same dataset and command on an Ubuntu machine, the training runs correctly. So the problem seems to be related to the OS X installation of OpenCV in some way.



It could be the memory management that is going loco. Can you try increasing -precalcValBufSize and -precalcIdxBufSize to 2048 and see if anything changes?

StevenPuttemans ( 2015-07-01 15:09:53 -0500 )

@StevenPuttemans: Thanks, but unfortunately, bumping it up to 3GB on each of the buffers doesn't help.

jackwayneright ( 2015-07-01 21:00:16 -0500 )
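As a side note on the buffer discussion (a back-of-the-envelope estimate of my own, not from the thread, assuming the value buffer stores one 4-byte float per sample/feature pair, and taking 67,600 as the LBP feature count for a 40x40 window): the default 1024 MB buffers should be far more than enough for this dataset, which is consistent with larger buffers not helping.

```python
# Rough estimate of the precalculated-values buffer this run needs:
# one 4-byte float per (sample, feature) pair. 315 + 458 are the
# positive and negative sample counts from the question; 67600 is the
# LBP feature count for a 40x40 training window.
samples = 315 + 458
features = 67600
needed_mb = samples * features * 4 / (1024 * 1024)
print(round(needed_mb))  # ~199 MB, well under the 1024 MB supplied
```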

Hmm, weird ... can you change -data classifier to -data classifier/? It might be that it cannot write the calculation details.

StevenPuttemans ( 2015-07-02 02:12:34 -0500 )

@StevenPuttemans: Tried it, but with no luck. The Haar training runs fine, so I doubt it's a path issue, though I could be wrong.

jackwayneright ( 2015-07-02 10:28:30 -0500 )

It's weird, because my trainings run just fine here. If you are able to send me your training data (vec file and negatives folder) and parameters through WeTransfer, I can have a look at it.

StevenPuttemans ( 2015-07-03 03:26:36 -0500 )

@StevenPuttemans: Sorry for the delay since the last update. See the latest update above for relevant information.

jackwayneright ( 2015-07-06 10:09:01 -0500 )

Will have a look tomorrow!

StevenPuttemans ( 2015-07-06 13:44:38 -0500 )