How do I build a HOG training dataset from images larger than 64 x 128 pixels?

asked 2012-10-22 09:38:55 -0600

sub_o

I'm writing my own HOG implementation so I can modify it later, and I'm experimenting with different approaches. But I've stumbled upon the following issue.

I have downloaded the INRIA person dataset, and some of its images are 320 x 240 pixels, while the default HOG training window is 64 x 128.

How should I handle this?

The positive images are around 96 x 160 pixels, and what I did was resize them down to 64 x 128. But for the larger images, should I resize them, run a sliding window that moves pixel by pixel, or compute features on 64 x 128 patches cropped from the large image?
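For reference, one common strategy for the larger (negative) images is to randomly sample a fixed number of 64 x 128 windows from each one and compute HOG features per window, rather than resizing the whole frame. A minimal NumPy sketch of that sampling step (the function name and patch count are illustrative, not from any particular library):

```python
import numpy as np

def sample_negative_patches(image, num_patches=10, win=(128, 64), seed=0):
    """Randomly crop fixed-size (height 128, width 64) windows from a
    larger negative image. Each crop can then be fed to the HOG feature
    extractor as one negative training sample.
    (Hypothetical helper -- names and defaults are illustrative.)"""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    win_h, win_w = win
    patches = []
    for _ in range(num_patches):
        # pick a top-left corner so the window stays inside the image
        y = int(rng.integers(0, h - win_h + 1))
        x = int(rng.integers(0, w - win_w + 1))
        patches.append(image[y:y + win_h, x:x + win_w])
    return patches

# e.g. a 240 x 320 grayscale negative frame from the dataset
neg = np.zeros((240, 320), dtype=np.uint8)
patches = sample_negative_patches(neg)
```

Each returned patch has the same 64 x 128 geometry as the resized positives, so both classes go through an identical feature pipeline.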

What's the best way to approach this?
