3) It is because your classifier starts to "learn" your dataset, thus making fewer mistakes on it.
2) The negative grabber functions use a sliding window over the negative images, sized according to your model size (-w -h). This is done each time the image gets resized, and the resizing maintains the image's original aspect ratio; otherwise you would end up with an artificial distortion that your original image didn't contain.
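To make the sliding-window idea concrete, here is a minimal sketch of how negative patches can be harvested from an image: build an aspect-ratio-preserving image pyramid, then slide a model-sized (-w -h) window over each scale. This is an illustration of the technique only, not OpenCV's actual grabber code; the function names, the scale factor, and the nearest-neighbour resize are my own simplifications.

```python
import numpy as np

def pyramid(image, scale=1.25, min_size=(24, 24)):
    """Yield progressively downscaled copies of `image`.
    Both dimensions shrink by the same factor, so the original
    aspect ratio is preserved at every level."""
    h, w = image.shape[:2]
    factor = 1.0
    while True:
        nh, nw = int(h / factor), int(w / factor)
        if nh < min_size[1] or nw < min_size[0]:
            break
        # nearest-neighbour resize; crude, but enough to show the idea
        rows = np.arange(nh) * h // nh
        cols = np.arange(nw) * w // nw
        yield image[rows[:, None], cols[None, :]]
        factor *= scale

def sliding_windows(image, win_w=24, win_h=24, step=8):
    """Yield (x, y, patch) for every model-sized window in `image`."""
    h, w = image.shape[:2]
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            yield x, y, image[y:y + win_h, x:x + win_w]

# Example: count how many 24x24 negative patches one image yields
neg = np.random.randint(0, 256, (96, 128), dtype=np.uint8)  # stand-in negative image
n_patches = sum(1 for scaled in pyramid(neg)
                for _ in sliding_windows(scaled))
print(n_patches)
```

Because every pyramid level keeps the original width/height ratio, the grabbed patches sample the negative image at multiple scales without introducing any stretching that the source image never contained.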