Hi all, I've been working on a quadrotor project whose state estimation is done with stereo visual odometry (libviso2). I'm using two Logitech C525 cameras for 3D reconstruction. I calibrated the pair with the OpenCV sample stereo calibration code at several arbitrary baseline distances, and the calibration accuracy seems to depend on the baseline. What baseline distance should I use for the best results?

libviso2's stereo odometry function expects rectified images, and I use adaptive histogram equalization for image pre-processing (a rough sketch of my current pipeline is below). Since the cameras are mounted on a quadrotor, motion blur has been a problem. If I added a motion-blur rejection / deblurring step, I'm worried it would distort the images and hurt feature-matching performance. What has your experience been with visual odometry and image pre-processing?
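For context, this is roughly what my pre-processing looks like: rectify both frames using the maps from the OpenCV calibration, then apply CLAHE before handing the images to the odometry. This is only a sketch; the calibration file name, the node names inside it, the frame size, and the CLAHE parameters below are placeholders for my setup, not anything from libviso2 itself.

```python
# Pre-processing sketch: rectification + CLAHE before stereo visual odometry.
# Assumptions: calibration saved as stereo_calib.yml with the node names below;
# CLAHE parameters are starting points, not tuned values.
import cv2

# --- Load the stereo calibration produced by the OpenCV sample code ---
fs = cv2.FileStorage("stereo_calib.yml", cv2.FILE_STORAGE_READ)  # assumed file name
K1 = fs.getNode("K1").mat(); D1 = fs.getNode("D1").mat()         # left intrinsics / distortion
K2 = fs.getNode("K2").mat(); D2 = fs.getNode("D2").mat()         # right intrinsics / distortion
R  = fs.getNode("R").mat();  T  = fs.getNode("T").mat()          # rotation / translation between cameras
fs.release()

img_size = (640, 480)  # frame size used during calibration (assumption)

# --- Compute rectification maps once, reuse for every frame ---
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, img_size, R, T, alpha=0)   # alpha=0 keeps only valid pixels
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, img_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, img_size, cv2.CV_32FC1)

# Adaptive histogram equalization (CLAHE); clipLimit / tileGridSize need tuning
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def preprocess(frame_left, frame_right):
    """Rectify both frames and equalize contrast before visual odometry."""
    gray_l = cv2.cvtColor(frame_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(frame_right, cv2.COLOR_BGR2GRAY)
    rect_l = cv2.remap(gray_l, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(gray_r, map2x, map2y, cv2.INTER_LINEAR)
    return clahe.apply(rect_l), clahe.apply(rect_r)

# The returned rectified, equalized pair is what I pass to the libviso2 wrapper.
```

Regarding the baseline: my understanding is that depth resolution improves with a wider baseline (for a disparity error of δd pixels, δZ ≈ Z²·δd / (f·B)), but a wider baseline also shrinks the overlapping field of view and makes it harder to keep the calibration board fully visible in both cameras, which may be why my calibration quality varies. I'm not sure where the sweet spot is for my working distance.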
The objective of the quad is autonomous takeoff, landing, and navigation. Visual odometry will be used during the takeoff and landing phases, and the cameras will be turned off while navigating.