Recently I have been experimenting with training my own cascade classifiers to build my own detector. I am using OpenCV 2.4.10. I have seen many different tutorials on this process (and am always looking for more), but I have not been successful so far. In some cases I have gotten all the way through training, but the resulting detector does not find the object in the scene.
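For reference, here is roughly how I test a trained cascade; the file names and detection parameters below are just placeholders for what I have been trying:

```python
import cv2

# Load the cascade produced by opencv_traincascade and a test scene.
cascade = cv2.CascadeClassifier('cascade.xml')
scene = cv2.imread('test_scene.jpg')
gray = cv2.cvtColor(scene, cv2.COLOR_BGR2GRAY)

# Parameters here are placeholders; I have tried several combinations.
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
print('detections: %d' % len(hits))  # typically 0, i.e. nothing is found

for (x, y, w, h) in hits:
    cv2.rectangle(scene, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('result.jpg', scene)
```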
I ran into a few questions while preparing my samples that I would like to ask:
For positive and negative images, does the size (width and height in pixels) matter? When I take sample pictures with my tablet I get sizes like 2592 (width) x 1458 (height) pixels, and I have been resizing these large pictures down to much smaller sizes like 48x26. Does the size of the pictures matter?
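This is a minimal sketch of that resizing step, assuming a plain cv2.resize (the file names are placeholders):

```python
import cv2

img = cv2.imread('tablet_photo.jpg')   # roughly 2592x1458 straight from the tablet
small = cv2.resize(img, (48, 26))      # target size is given as (width, height)
cv2.imwrite('positive_0001.png', small)
```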
For creating positive samples I have experimented with two approaches: cropping the object of interest out of a busy scene (in which case I supply the coordinates of the cropped region), and taking pictures in which the object of interest fills the entire frame. Does it matter how the samples are created? Will either way work, and is one likely to work better than the other?
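To make the two approaches concrete, here is a rough sketch; the coordinates and file names are placeholders. In the first approach I keep the full scene and record the object's bounding box in the positives description file that opencv_createsamples expects (one line per image: path, object count, then x y width height for each object); in the second approach the crop itself becomes the positive image:

```python
import cv2

x, y, w, h = 120, 80, 48, 26             # placeholder coordinates of the object

# First approach: keep the whole scene and record where the object is.
with open('positives.info', 'a') as f:
    f.write('scene_0001.jpg 1 %d %d %d %d\n' % (x, y, w, h))

# Second approach: crop the object so it fills the entire picture frame.
scene = cv2.imread('scene_0001.jpg')
crop = scene[y:y + h, x:x + w]
cv2.imwrite('object_0001.png', crop)
```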
Do the negative and positive sample images need to be the same or similar size? For example, can my positive samples be small (48x26) while my negative images stay large (2592x1458), or should both sets be similar in size?
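For completeness, this is roughly how I list the negatives for training (the background file is just one image path per line; the directory name is a placeholder):

```python
import glob

# Write bg.txt for opencv_traincascade: one negative image path per line.
with open('bg.txt', 'w') as f:
    for path in sorted(glob.glob('negatives/*.jpg')):  # full-size 2592x1458 photos
        f.write(path + '\n')
```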
I would appreciate any suggestions, best practices or guidance to help me get through this process and build a successful classifier.
Thank you