Training Detectors - Image Sample Preparation
Recently I have been experimenting with training my own cascade classifiers to build my own detector. I am using OpenCV 2.4.10. I have seen many different tutorials on this process (and am always looking for more), but I have not been successful to date. In some cases I have gotten through the training process, but the resulting detector does not detect the object in the scene.
I ran into a couple of questions when preparing my samples that I would like to ask:
For positive and negative images, do the sizes (pixel width and height) matter? When I take sample pictures with my tablet, I get sizes like 2592 pixels (width) by 1458 pixels (height). I have been resizing these large pictures down to smaller sizes like 48x26. Does the size of the pictures matter?
For creating positive samples, I have experimented two ways: cropping the object of interest out of a busy scene (in which case I provide the coordinates of the region I cropped), and taking pictures where the object of interest fills the entire frame. Does it matter how these samples get created? Would either way work, or will one work better than the other?
Do the negative and positive sample images need to be the same or similar in size? For example, can my positive samples be small (48x26) while my negative images are large (2592x1458), or should both sets be similar in size?
I would appreciate any suggestions, best practices, or guidance to help me get through this process and build a successful classifier.
Thank you
First, you should read the documentation, and especially this part, where you will see that the negative images should be greater than or equal in size to the positives (the positives should all be the same size, the size at which the detector will detect). Then see this video, which explains how detection is done. You will then understand that each negative image is split into many windows (just as during detection), so please be careful not to have small instances of your positive object hiding in your negative images.
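To make the workflow concrete, here is a minimal sketch of the 2.4.x training pipeline. The file names, sample counts, stage count, and the 24x13 sample size are assumptions chosen only for illustration (24x13 roughly matches the 48x26 aspect ratio mentioned in the question), not values recommended by this thread:

```shell
# Hypothetical layout: positives.txt lists annotated positives, one per line:
#   img/scene1.jpg 1 140 100 48 26    (path, object count, x, y, w, h)
# negatives.txt lists plain paths to background images, one per line.

# Pack the annotated positives into a .vec file at a fixed sample size.
opencv_createsamples -info positives.txt -vec positives.vec \
    -num 500 -w 24 -h 13

# Train the cascade using the SAME -w and -h as above.
opencv_traincascade -data cascade_out -vec positives.vec \
    -bg negatives.txt -numPos 450 -numNeg 900 \
    -numStages 15 -w 24 -h 13
```

Note that -numPos is set a little below the number of samples in the .vec file, since opencv_traincascade consumes extra positives as stages reject some of them.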
Thanks, I will take a look at these links. According to the documentation, opencv_createsamples takes -w and -h parameters, and the same -w and -h values should be used in training (opencv_traincascade). A couple of follow-up questions:
Does it matter what the starting size of your positive images is, prior to running opencv_createsamples? For example, as in the original post, the default pixel size of my positive images is 2592x1458. Should I resize these to something smaller before running opencv_createsamples?
Can I assume that the -w and -h parameters tell opencv_createsamples to create positive samples at that size?
Is there a preferred value that works best, or does this depend on what you are trying to detect?
opencv_createsamples resizes your positives to a fixed size (if I remember correctly, but you can test it on a few images to see).
Preferred values? I do not think so; you will have to test. It depends on your object, how many features (Haar, LBP, etc.) it has, and their sizes. Detection speed also depends on the sample size: the larger, the slower. But performance depends on size too: the larger, the more accurate. You should test a few sizes like 30x(h/w*30), 20x..., 10x... (even though I think 10 is too small). You can also try other sizes.
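To pick candidate -w/-h pairs that keep your photos' aspect ratio, as the 30x(h/w*30) suggestion above describes, a small helper like this can compute them. The function name is hypothetical; the 2592x1458 example size is taken from this thread:

```python
def sample_size(orig_w, orig_h, target_w):
    """Scale (orig_w, orig_h) down to target_w wide, keeping the aspect ratio.

    Returns a (w, h) pair suitable for the -w/-h flags of
    opencv_createsamples and opencv_traincascade.
    """
    h = max(1, round(orig_h / orig_w * target_w))
    return target_w, h

# Candidate sample sizes for a 2592x1458 source photo:
for w in (30, 20, 10):
    print(sample_size(2592, 1458, w))
```

You can then run one training per candidate size and compare detection rate against speed on a held-out set of test images.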