2016-10-06 11:16:48 -0600 | commented answer | cascade training best practices for lit sign
Really appreciate the detailed answers, Steven! The OpenCV community would not be the same without you. I have yet to perform the additional annotations. However, a good friend of mine was able to create an excellent classifier, which I tested last night. He did so from just one image using the following commands:

opencv_createsamples -img sign.png -bg negatives.txt -info pos/info.lst -pngoutput pos -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 500 -w 48 -h 24
opencv_createsamples -info pos/info.lst -w 48 -h 24 -vec VeniceLeft.vec -bg negatives.txt -num 500
opencv_traincascade -data cascade/ -vec VeniceLeft.vec -bg negatives.txt -numNeg 1000 -numPos 450 -w 48 -h 24

Do you happen to know what kinds of objects this training methodology is suited for?
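Both opencv_createsamples and opencv_traincascade read the file passed to -bg as a plain background-description file: one negative-image path per line. A minimal sketch of generating that file, assuming the extracted negative frames sit in a neg/ directory (the directory name and frame filenames here are placeholders, not from the original post):

```shell
#!/bin/sh
# Build a background-description file (negatives.txt) for -bg.
# Assumes negative frames, e.g. extracted from video with ffmpeg,
# live in ./neg/. The touch lines stand in for real frames.
mkdir -p neg
touch neg/frame_001.jpg neg/frame_002.jpg

# One relative image path per line, as -bg expects.
ls neg/*.jpg > negatives.txt
cat negatives.txt
```

Paths in negatives.txt are interpreted relative to the directory where the training tools are run, so it is simplest to generate the file from that same directory.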
2016-10-04 16:06:27 -0600 | received badge | ● Editor (source) |
2016-10-04 15:37:02 -0600 | asked a question | cascade training best practices for lit sign
I was hoping to get some guidance on a few issues. Here is the sign I would like to be able to recognize at night: http://67.media.tumblr.com/035fa2a4d9...

Here are my questions/issues:

- To generate the training images, I used ffmpeg to extract frames from a video that I recorded. It produced roughly 500 images, all taken from the left-hand side of the street. I painstakingly annotated all 500 images, only to have the training stall at stage 3. Should I not be using ffmpeg? As an alternative, I could use the burst capture on the iPhone, which takes a rapid series of pictures. Should I capture images from all angles? Should blurry images be omitted?
- The negative images were also extracted from a video with ffmpeg. That video covers the surrounding area, minus the sign of course.
- I have been able to train a model successfully on a soda can (La Croix), but for whatever reason I cannot get through the training for this type of object.

Any help would be greatly appreciated. Here are my commands and their corresponding output: