Increasing the accuracy of a moment-based computer vision system
I am using OpenCV to build a computer vision system that works in conjunction with an x-y-z robot. The program grabs a frame from the camera, thresholds it for a particular color, and computes the moments of the resulting binary image. The moments give the centroid of the object, and the robot's end effector is then sent to that position. Currently, the maximum error across the field of view is about 5 mm, and we are not doing any distortion correction; enabling distortion correction changes the error very little, if at all.
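For reference, the per-frame pipeline looks roughly like the sketch below. The HSV range and camera index are placeholders, not our actual values:

```python
import cv2
import numpy as np

# Placeholder HSV range for the target color -- ours differs.
LOWER = np.array([35, 80, 80])
UPPER = np.array([85, 255, 255])

cap = cv2.VideoCapture(0)  # camera index assumed
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

# Threshold for the target color in HSV space.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, LOWER, UPPER)

# Compute image moments of the binary mask; the centroid is
# (m10/m00, m01/m00) in pixel coordinates.
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]
    print(f"centroid: ({cx:.1f}, {cy:.1f}) px")
```

The pixel centroid is then mapped to robot coordinates and the end effector is commanded to that point.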
Does anyone have any broad suggestions for decreasing the maximum error? Also, does anyone have specific recommendations on how many frames to use, and from what viewpoints to take them, for the CalibrateCamera function?
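For context, the calibration flow I have in mind is the standard chessboard procedure, sketched below. The 9x6 pattern and the image paths are placeholder assumptions:

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard; adjust to the actual target.
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib/*.png"):  # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)

# A captured frame would then be corrected with:
# undistorted = cv2.undistort(frame, K, dist)
```

What I am unsure about is how many such views are enough, and whether tilted or off-center views of the board matter more than the total count.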
Thank you very much, Barry