Let's do some maths:

Your camera has a 640x480 resolution, sits 6 inches away from the scene, and has a lens with some focal length (I don't know it, check the spec sheet). Imagine that, given these parameters, 1 pixel on the image represents a 1x1 mm square (about 0.04'') in reality. (This is a value I picked for illustration; the real figure depends on the focal length of your lens, the size of the sensor, etc. If you don't want to calculate it, just print a calibration grid with squares of a known dimension and count how many pixels cover a single square. A back-of-envelope version of the calculation is sketched below.)
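If it helps, here is that calculation under the standard pinhole model. The focal length and pixel pitch are made-up example values, not your camera's actual specs:

```python
# Back-of-envelope ground resolution from the pinhole model.
# All values below are illustrative assumptions, not real specs.
focal_length_mm = 3.6    # assumed lens focal length
pixel_pitch_mm = 0.003   # assumed sensor pixel pitch (3 um)
distance_mm = 152.4      # 6 inches

# Similar triangles: one pixel covers (distance / focal_length) * pixel_pitch.
mm_per_pixel = distance_mm * pixel_pitch_mm / focal_length_mm
print(f"{mm_per_pixel:.3f} mm per pixel at 6''")
# ~0.127 mm with these made-up inputs; the 1 mm figure above
# is just another illustrative value.
```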

Now, you move the exact same camera to a distance of 48'' (4 feet), which is 8 times farther. One pixel now represents a region of 8x8 mm (about 0.32''), so your number of pixels per inch has actually decreased: you are no longer able to see fine details. When you run ORB detection, you are effectively working on a different image than in the first setup, and the pixels per inch may simply be too low to represent meaningful keypoints.
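A quick way to convince yourself of this, assuming you have OpenCV's Python bindings: downscale your close-up image by 8x (which roughly mimics moving the camera 8 times farther away) and compare how many ORB keypoints survive. The file name is just a placeholder:

```python
import cv2

# Load the close-up image (placeholder file name).
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Downscaling by 8x roughly mimics moving 8 times farther away.
small = cv2.resize(img, None, fx=1 / 8, fy=1 / 8,
                   interpolation=cv2.INTER_AREA)

orb = cv2.ORB_create()
kp_close = orb.detect(img, None)
kp_far = orb.detect(small, None)

print(f"keypoints at 6'':  {len(kp_close)}")
print(f"keypoints at 48'': {len(kp_far)}")
```

You should see far fewer keypoints on the downscaled image, for the same reason your 4-feet setup underperforms.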

So, this seems to be the cause of the problem. Now, how to solve it? It is not possible if you keep the same setup. I assume you cannot place the camera close enough (6''), so you may need a lens providing the relevant level of zoom, or another camera whose sensor gives a higher number of pixels per inch (a rough way to size the lens is sketched below). Note that I assume all of this makes some sense to you; if not, just ask in a comment and I'll try to point you to the right learning resources...
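As a rough sizing aid, still under the same assumed pinhole model and with a made-up focal length: to keep the same pixels per inch after moving from 6'' to 48'', the focal length has to grow by the same factor as the distance:

```python
# Distances in inches; the focal length is an assumed example value in mm.
old_distance_in = 6
new_distance_in = 48
old_focal_mm = 3.6  # made-up focal length of the current lens

# Since mm_per_pixel = distance * pixel_pitch / focal_length,
# keeping mm-per-pixel constant means scaling f with distance.
scale = new_distance_in / old_distance_in  # 8x farther
required_focal_mm = old_focal_mm * scale   # 8x longer lens

print(f"need roughly a {required_focal_mm:.1f} mm lens "
      f"(or {scale:.0f}x the sensor resolution)")
```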