Handling motion-blur problem in ArUco augmented reality toolkit [closed]
I'm trying to handle the motion-blur problem in the ArUco augmented reality toolkit. The main idea of ArUco is to track a board of multiple markers in order to compute the camera pose relative to the board. The position at which the virtual object is rendered depends on the camera pose, which is computed from as many detected markers as possible. ArUco fails to detect markers when the board makes fast or sudden movements; however, the few markers it does detect under blur are localized precisely, without any jittering error. To handle the motion-blur problem, I applied a meanshift-FAST algorithm, and it succeeds in detecting all the markers that ArUco fails to detect. The markers that ArUco detects are detected by my algorithm with the same accuracy, but the markers that ArUco misses are detected by my algorithm with jittering errors.
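For context, here is a minimal sketch of the board-based pose estimation pipeline described above, using OpenCV's aruco module (the pre-4.7 contrib API; the dictionary choice, board geometry, intrinsics, and input file are placeholder assumptions, not values from my setup):

```python
import cv2
import numpy as np

# Marker dictionary and detector parameters (placeholder choices).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()

# A 5x7 grid board: 4 cm markers with 1 cm separation (assumed geometry).
board = cv2.aruco.GridBoard_create(5, 7, 0.04, 0.01, dictionary)

# Intrinsics would come from camera calibration; stand-in values here.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

frame = cv2.imread("frame.png")  # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect individual markers, then estimate one pose for the whole board;
# the pose uses however many markers were found (fewer under motion blur).
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary,
                                                 parameters=params)
if ids is not None:
    n_used, rvec, tvec = cv2.aruco.estimatePoseBoard(
        corners, ids, board, camera_matrix, dist_coeffs, None, None)
    print(f"pose estimated from {n_used} markers")
```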
My question is: can my algorithm be considered an enhancement of the ArUco toolkit, i.e. have I handled its motion-blur problem or not? In other words, is it better to detect a small number of markers without jittering errors, or to detect all markers with jittering errors on the blurred ones?
Note: jittering error means that the marker position I calculate deviates from the true position.
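One way to make the trade-off concrete is to quantify that deviation as a reprojection error against a reference pose. This is a sketch under the assumption that a ground-truth pose and the marker's object points are available (e.g. from a static, calibrated recording); the function name and argument shapes are my own:

```python
import cv2
import numpy as np

def corner_jitter(detected_corners, object_points, rvec_gt, tvec_gt,
                  camera_matrix, dist_coeffs):
    """RMS distance (pixels) between detected corners (Nx2) and the
    corners reprojected from a known ground-truth pose."""
    projected, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt,
                                     camera_matrix, dist_coeffs)
    diff = np.asarray(detected_corners).reshape(-1, 2) - projected.reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))
```

Comparing this metric for ArUco's sparse-but-precise detections against the meanshift-FAST detections over the same blurred frames would directly show which option yields the more stable pose.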
Thanks in advance.
Is this OpenCV related?
Yes, the ArUco toolkit is based on OpenCV.
Please read the FAQs. ArUco is not an official OpenCV package, though it may use OpenCV internally. Therefore I think you should ask in ArUco-specific forums.
Some things to consider ...