How to train classifiers the best way possible

Hello,

I am trying to train a cascade classifier with opencv_traincascade. There are a few tutorials and posts about that, but all of them seem to be either outdated or contradictory. I am also interested in doing things the best way possible, not just getting it to work. So I am going to describe what I have learned so far and what is still unclear to me.


Acquiring positive samples

There are two methods:

  • Using a single image (or a few) and OpenCV's createsamples utility to generate a large number of samples from it. In the case of more than one image, the resulting *.vec files are merged with a special utility.

  • Using multiple annotated images and feeding them to createsamples with the -info option. This does not create more samples, it just extracts them.

The first approach seems to be used by a lot of people, mainly because you don't need annotated images and can create a large number of samples from just a few images. But I have read that the first approach is actually quite bad and that one should use the second approach instead, because it provides real-world samples that are more likely to match the input images in a real application. Some people say that a smaller number of real-world images is far better than a really large number of artificially generated samples.

I currently have about 300 annotated samples at hand. I could feed them directly to createsamples (with -info).

  • 300 samples are not a huge amount, but they are "real world" samples. Going by the statements above, I might not need more. By how much are real-world samples actually better than generated ones? Are 300 samples enough?
  • Otherwise, would it make sense to create about 10 artificial samples per real-world sample using createsamples and merge the resulting *.vec files (see the command sketch below)? That way I would have 3000 samples.
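
For reference, the two methods correspond to two different createsamples invocations. A sketch with placeholder file names; the window size and distortion angles are just example values:

    # Method 1: synthesize distorted samples from a single object image
    opencv_createsamples -img object.png -bg backgrounds.txt -vec object.vec \
        -num 1000 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -w 40 -h 40

    # Method 2: cut out the annotated regions listed in positives.txt (no synthesis)
    opencv_createsamples -info positives.txt -vec positives.vec -num 300 -w 40 -h 40

The info file for the second method lists one image per line: the path, the number of objects, then x y w h for each bounding box.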

Acquiring negative samples

Most people use random images from the internet as negatives for training. But I read through this article, which suggests a better approach:

Creating negatives from the backgrounds of the positives is much more “natural” and will give far better results, than using a wild list of background images taken from the Internet.

Also:

I would avoid leaving OpenCV training algorithm create all the negative windows. To do that, create the background images at the final training size so that it cannot subsample but only take the entire negative image as a negative.

If I understand this correctly, the author suggests taking the images that contain the object and extracting negative samples from the regions around it, while making sure those extracted negatives have the same size as the training window.

  • Does that make sense? I looked at his git repository and his negatives were a lot larger than the training window. Am I missing something?

  • Also, should I extract regions at the window size from the images, or could I use larger portions with the same aspect ratio and scale them down to the window size afterwards (which I already tried, see below)?


Number of negative samples

  • I found sources that suggest a positive-to-negative ratio of around 1 to 4, and others suggesting more positives than negatives, so it seems no one really knows?

  • With the second quote above in mind: if one used these small negative samples so that OpenCV can't subsample anymore, wouldn't that mean you need a LOT more of those small samples? After all, OpenCV would normally generate multiple negative windows out of a single negative image (see the sketch below).
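
To put a rough number on that last point, here is a back-of-the-envelope sketch. It assumes a simple sliding-window model with the step and scale values shown; this is my own illustration, not the actual sampling logic inside opencv_traincascade:

    def count_windows(img_w, img_h, win=40, step=8, scale=1.2):
        # Count win x win sliding windows over a shrinking image pyramid.
        total = 0
        w, h = img_w, img_h
        while w >= win and h >= win:
            total += ((w - win) // step + 1) * ((h - win) // step + 1)
            w, h = int(w / scale), int(h / scale)
        return total

    print(count_windows(640, 480))  # thousands of candidate windows
    print(count_windows(40, 40))    # exactly 1

A single full-size background image can therefore stand in for thousands of window-sized crops, so pre-cropping everything to the window size means supplying that multiplicity yourself.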


What I have done so far

I created a program that displays an image containing the object I want to detect. I can then draw a box around that object; the box is automatically constrained to a fixed aspect ratio. The program then divides the rest of the image into negative regions of the same size as the marked object. After that, the object region is saved to my list of positives (to use with the -info argument). I also subsample the image using the generated negative regions, scale those samples down to the training window size and save them in another folder (to use with -bg).
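
The core of the extraction step looks roughly like the following. This is a minimal sketch using OpenCV's Python bindings with hypothetical names; the drawing UI and the -info bookkeeping for the positives are omitted:

    import os
    import cv2

    WIN = (40, 40)  # training window size (width, height)

    def extract_negatives(img_path, box, out_dir, neg_list_file):
        # box = (x, y, w, h): the object region marked in the UI,
        # already constrained to the training window's aspect ratio.
        img = cv2.imread(img_path)
        x, y, w, h = box
        idx = 0
        for ty in range(0, img.shape[0] - h + 1, h):
            for tx in range(0, img.shape[1] - w + 1, w):
                # Skip tiles that overlap the positive region.
                if tx < x + w and tx + w > x and ty < y + h and ty + h > y:
                    continue
                patch = cv2.resize(img[ty:ty + h, tx:tx + w], WIN)
                name = os.path.join(out_dir, "neg_%05d.png" % idx)
                cv2.imwrite(name, patch)
                neg_list_file.write(name + "\n")  # one path per line, for -bg
                idx += 1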

Results:

I tried to train an LBP classifier with 300 annotated positive images and 1200 (4 times 300) negative samples obtained as described above, using opencv_traincascade. I used a variety of different arguments for the training runs. None of them finished successfully. Some get stuck while collecting the negative samples for the next stage, others fail with:

Train dataset for temp stage can not be filled. Branch training terminated.

This just shows that something is wrong with the assumptions I made above. Maybe this way of generating negative samples doesn't work at all and the article is wrong (but I doubt that), or I just executed it poorly. It is also possible that my number of positive samples is too low or that my ratio of positives to negatives is wrong.

Or my samples are bad, but let's not assume that; I think they are decent.


Thank you in advance for all answers and suggestions. Training a classifier is a tricky task for sure, and I think doing it as well as possible is not covered enough. I would like to hear your experiences and suggestions. Maybe some of you have already tried what I am currently doing? Or have other tips for training classifiers?

Best regards,

eTicket

Edit:

I gathered a few more samples. I now have around 500 positives (still the same error).

I am training with the following parameters:

    opencv_traincascade -data cascade -vec data.vec -numPos 400 -numNeg 2000 \
        -precalcValBufSize 1024 -precalcIdxBufSize 1024 \
        -minHitRate 0.999 -weightTrimRate 0.95 -maxFalseAlarmRate 0.5 \
        -maxDepth 1 -w 40 -h 40 -bg negatives.txt \
        -maxWeakCount 100 -featureType LBP -numStages 16

negatives.txt contains a little more than 2000 samples; positives.txt contains more than 500. With those parameters, training fails with the error above after stage 5. I also tried training with -minHitRate 0.995 and -maxFalseAlarmRate 0.3; that run gets stuck (endless loop) after collecting the positive samples for stage 4.
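
On the positive side at least, the numbers seem fine. One rule of thumb I keep seeing on the OpenCV Q&A forum (I cannot vouch for it being exact across versions) is that the vec file must hold roughly numPos plus the positives discarded at each stage transition:

    # Commonly cited estimate, not verified against the traincascade source:
    #   needed = numPos + (numStages - 1) * (1 - minHitRate) * numPos + S,
    # where S counts vec samples rejected as misclassified by earlier stages.
    num_pos, num_stages, min_hit_rate = 400, 16, 0.999
    needed = num_pos + (num_stages - 1) * (1 - min_hit_rate) * num_pos
    print(needed)  # ~406, comfortably below my ~500 vec samples (ignoring S)

If that estimate holds, my positive count should be fine, which points the finger back at the negatives.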

I also made sure that I am not running out of memory.

The object that I am going to detect is not rigid. As far as complexity goes, it is probably comparable to a fried egg, and the background is a plate with salad. Fried eggs are always a little different (still nicely cooked, not those unsuccessful ones ;) ).

Edit2:

Ok, a quick status update: I temporarily used normal (full-size) negative images instead of the extracted samples, and training works a lot better with those. But why? I thought OpenCV internally does the same thing that I was attempting to do manually: extract pieces from the negative images at the size of the training window.