
Cascade Training Error OpenCV 3.1 - Train dataset for temp stage can not be filled. Branch training terminated. Cascade classifier can't be trained. Check the used training parameters.

asked 2016-03-28 23:22:36 -0500

Matheusft

updated 2016-03-29 07:05:05 -0500

"Train dataset for temp stage can not be filled. Branch training terminated. Cascade classifier can't be trained. Check the used training parameters."

I'm really struggling with this error. It's taking me forever to solve.

I've already read the related questions and I'm following the same steps as the linked tutorials, but I couldn't find any real solution, hence I'm updating the question.

I'm using OpenCV 3.1.0 with Python on a Macbook with 8 GB of memory (Unix).

Here you can find the folder structure and files of my project. (Negative and Positive folders, the .vec file, and a sample of both images database).

I have in total 500 positive images (640×240) and 2988 negative images (640×480).

The command that I used to create the .vec file (in the directory Positivas) was:

opencv_createsamples -vec Placas_Positivas.vec -info Training.txt -num 500 -w 200 -h 50 (just as here)
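For reference, the file passed to `-info` is expected to list one image per line as `<path> <object count> <x> <y> <width> <height>` for each object, with fields separated by single spaces. A minimal sketch (the paths and boxes below are made-up placeholders, not the asker's real data):

```shell
# Minimal example of a createsamples -info file: one image per line,
# "<path> <count> <x y w h>", space-separated. Placeholder data only.
cat > sample_info.txt <<'EOF'
/data/Positivas/placa_001.jpg 1 0 0 640 240
/data/Positivas/placa_002.jpg 1 10 20 600 200
EOF
grep -c . sample_info.txt   # 2 entries
```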

The command that I'm using (in the directory Cascade_Training) is:

opencv_traincascade -data Data -vec Positivas/Placas_Positivas.vec -bg Negativas/Training_Negatives.txt -numPos 400 -numNeg 2500 -numStages 15 -w 200 -h 50 -featureType LBP

and no matter what I do, I always see the result shown in the following screenshot: [screenshot of the error output]

Any help?




Okay, some initial questions:

  1. How much memory does your PC have? With a model of 200x50 dimensions, the underlying memory needs can run into this problem. With a setup like that (number of images) compared to the model size, I am guessing you need at least 4-5 GB of available RAM.
  2. Can you specify how you made your positive vec file and your negative txt file, and what the structure inside the negative file looks like? In any normal case this error is generated because the software is incapable of reading the negative files to grab the negative windows for training.
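For reference, a `-bg` (negatives) file is expected to be just one image path per line, nothing else; absolute paths are safest. A minimal sketch (the paths are placeholders, not the real data):

```shell
# Minimal example of a traincascade -bg (negatives) file: one image path
# per line. Placeholder paths only.
cat > sample_negatives.txt <<'EOF'
/data/Negativas/Images/UMD_001.jpg
/data/Negativas/Images/UMD_002.jpg
EOF
grep -c . sample_negatives.txt   # 2 entries
```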

Update your question and I will get back to you to help you solve this.

StevenPuttemans ( 2016-03-29 03:56:10 -0500 )

Thanks for adding the data. I am going to try it myself now and report back to you!

StevenPuttemans ( 2016-03-29 06:57:21 -0500 )

I've updated the question with answers to all your questions. Thanks for your fast response. As I posted, you can check my folders and files here.

I hope we can solve this problem now =)

Matheusft ( 2016-03-29 07:06:59 -0500 )

1 answer


answered 2016-03-29 07:38:09 -0500

updated 2016-03-30 07:01:42 -0500

OK, after downloading and inspecting your data, I have already found a ton of problems:

  1. When inspecting your Training_Negativas.txt file, I see the following structure: Images/UMD_001.jpg. That is asking for trouble. Start by changing those to absolute paths, so that the software reading the file will always find the exact image. For example, it could be /data/Images/UMD_001.jpg. Relative paths always generate problems...
  2. The same goes for Training.txt, which has the same problem, but somehow you seem to have built the *.vec file, so it might have worked anyway.
  3. The data inside Training.txt is separated using tabs, while it is stated that data should be separated by spaces. If you do this with tabs, I am afraid your vec file may actually be filled with rubbish.
  4. More of a tip: avoid capitals in filenames. If you forget them somewhere, some OSes will not handle your data correctly.
  5. The file training.txt has 500 entries but the folder has only 490 images; where are the other 10?
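The path and separator fixes from points 1-3 can be scripted; a sketch, assuming the images live under /data (the base directory and the example entry are assumptions, not the asker's real data):

```shell
BASE=/data   # assumption: project root that contains Images/

# Example of a broken entry: tab-separated fields and a relative path
printf 'Images/UMD_001.jpg\t1\t12 8 200 50\n' > broken_info.txt

# tabs -> spaces, then relative -> absolute paths
tr '\t' ' ' < broken_info.txt | sed "s|^Images/|$BASE/Images/|" > fixed_info.txt
cat fixed_info.txt   # /data/Images/UMD_001.jpg 1 12 8 200 50
```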

Could you supply the missing 10 images, so I can run tests to see if it works once all this is fixed?


After changing the data structure as follows [WILL ADD HERE LATER], I first ran the opencv_createsamples tool, with the following result:

[screenshot of the opencv_createsamples output]

Then I ran the training command like this:

opencv_traincascade -data cascade/ -vec positivas.vec -bg negativas.txt -numPos 400 -numNeg 2500 -numStages 15 -w 200 -h 50 -featureType LBP -precalcValBufSize 4048 -precalcIdxBufSize 4048 -numThreads 24

Note that I increased the memory consumption, because your system can take more than the standard 1 GB per buffer, AND I set the number of threads to take advantage of that.

Training starts for me and features are being evaluated. However, due to the number of unique features and the size of the training samples, this will take a long time...

Looking at the memory consumption, this data uses a full 8.6 GB, so you might want to lower those buffers to ensure that no swapping happens and cripples the system.

[screenshot of the memory consumption]

I will update once a first stage is successfully trained, and will increase memory to speed up the process.


I increased my buffers to 8 GB each; since I have 32 GB available, using both fully would lead to a maximum allowed memory usage of 16 GB. Looking at memory, it now hovers around 13 GB, the space needed to represent all the multiscale features calculated for a single training stage...

I am guessing this is one of the main reasons why your training is running extremely slowly! I would suggest reducing the dimensions of your model to something like -w 100 -h 25 for a start, which will reduce the memory footprint drastically. Otherwise it will indeed take ages to train.
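To see why the model size matters so much, you can count the multi-block LBP features for a given window. The sketch below is my own back-of-the-envelope count (not output from the OpenCV tools), assuming the standard 3×3-cell LBP grid evaluated at every cell size and position inside the window:

```shell
# Count multi-block LBP features for a WxH training window: every 3x3 grid
# of equal cells (cell size cw x ch) at every position inside the window.
lbp_count() {
  W=$1; H=$2; total=0
  cw=1
  while [ $((3 * cw)) -le "$W" ]; do
    ch=1
    while [ $((3 * ch)) -le "$H" ]; do
      total=$(( total + (W - 3*cw + 1) * (H - 3*ch + 1) ))
      ch=$((ch + 1))
    done
    cw=$((cw + 1))
  done
  echo "$total"
}

lbp_count 200 50   # 2706264 features
lbp_count 100 25   # 165000 features -- roughly 16x fewer
lbp_count 24 24    # 8464 features, the classic face-detector window
```

Halving both window dimensions cuts the feature count (and hence the precalculated-feature buffers) by about a factor of 16, which is why shrinking the model helps so dramatically.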

With this memory you can see that weak classifiers and stages are being constructed:

[screenshot of the training progress]


Training finished on this 32 GB RAM, 24-core system in about 1 hour and ... (more)



Ok, sorry for that. The Positive Images folder now has 500 images.

I'll have a look at what you said about the *.txt files and post the outcome later.

Matheusft ( 2016-03-29 11:27:22 -0500 )

Perfect. Will have a look at it tomorrow at work!

StevenPuttemans ( 2016-03-29 11:37:20 -0500 )

Thank you for your answer. It is very complete and detailed. Apparently I solved the problem that I described in this topic by changing the image paths in the *.txt file in the Negative folder from relative to absolute.

Now my training is able to read the *.txt file from the negative folder. However, I still need to fix my training time as you described, but that is another question.

Would you be able to answer that too?

I'm using the size 200x50. Why? I don't know; I'm a beginner =) It's just a dimension that came to mind.

How will the height and width of the output samples influence the results? What if I use 60x15 instead of 200x50? I couldn't find any answer to that.

Matheusft ( 2016-03-29 22:41:53 -0500 )

Well, as you can read in OpenCV 3 Blueprints, Chapter 5, I experimentally derived that if you have about 4 GB of RAM you should keep your largest dimension below roughly 100 pixels; in that case the memory consumption stays reasonable. Basically, with each extra pixel the number of features rises steeply, and thus so does the memory footprint. That's one of the main reasons why a face detector is only 24x24 pixels: the necessary information is still maintained at that size. Will update the answer soon with a link to a trained model for you!

StevenPuttemans ( 2016-03-30 05:53:23 -0500 )

Added final comments and data! Good luck with it!

StevenPuttemans ( 2016-03-30 06:12:52 -0500 )

Added a last link to the best detection for each training window... which is not ideal, but it is better than nothing. If you have test data available, I suggest running it on that.

StevenPuttemans ( 2016-03-30 06:51:16 -0500 )
