Not quite sure of the problem; obviously make sure your calibration etc. is correct. Here's another way of tackling it, assuming the placement of the markers in the fixed area is known:
To get the best pose estimation possible, I would dynamically build the object points and image points for each frame depending on which markers are detected in that frame, and then pass those to solvePnP. The more data solvePnP has, the more accurate the result.
First you need to settle on a fixed point in your tracking area, probably the center. Then for every frame:

- Run Aruco detection only, you don't need pose estimation.
- Get a list of all corner points Aruco returns (2D points). This will be from 4 to 48 points in your case; these are the image points.
- Dynamically build your object points (3D points) from your known marker placements, with (0,0,0) being the fixed point you settled on.
- Make sure the order of points is the same in both lists (sometimes this matters, sometimes not, depending on the algorithm).
- Run solvePnP with the data, and you'll get an rvec and tvec.
- Run projectPoints if you need screen coordinates for the chosen fixed point.

There's a rough sketch of this below.
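A minimal Python sketch of the idea, assuming the newer `cv2.aruco.ArucoDetector` API (OpenCV 4.7+; with older versions use `cv2.aruco.detectMarkers` directly). The marker size and the `MARKER_CENTRES` layout are placeholder values you'd replace with your own measured positions relative to your chosen fixed point:

    import cv2
    import numpy as np

    MARKER_LENGTH = 0.05  # marker side length in metres (example value)
    # Example layout: 3D centre of each marker relative to the fixed origin (0,0,0).
    MARKER_CENTRES = {
        0: (0.0, 0.0, 0.0),
        1: (0.3, 0.0, 0.0),
        2: (0.0, 0.3, 0.0),
        # ... fill in all of your markers here
    }

    def marker_corners_3d(centre, length):
        """4 corners of a flat marker (z = 0 plane), in the same order ArUco
        reports them: top-left, top-right, bottom-right, bottom-left."""
        cx, cy, cz = centre
        h = length / 2.0
        return [(cx - h, cy + h, cz), (cx + h, cy + h, cz),
                (cx + h, cy - h, cz), (cx - h, cy - h, cz)]

    def estimate_pose(frame, detector, camera_matrix, dist_coeffs):
        # Detection only -- no per-marker pose estimation needed.
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is None:
            return None

        image_points, object_points = [], []
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            if marker_id not in MARKER_CENTRES:
                continue  # skip markers that aren't part of the known layout
            # marker_corners has shape (1, 4, 2): the 4 detected corners in pixels.
            image_points.extend(marker_corners.reshape(4, 2))
            # Matching 3D corners, in the same order, relative to the fixed origin.
            object_points.extend(
                marker_corners_3d(MARKER_CENTRES[marker_id], MARKER_LENGTH))

        if len(object_points) < 4:
            return None

        ok, rvec, tvec = cv2.solvePnP(
            np.array(object_points, dtype=np.float32),
            np.array(image_points, dtype=np.float32),
            camera_matrix, dist_coeffs)
        if not ok:
            return None

        # Optional: project the fixed point (the origin) back into the image.
        origin_2d, _ = cv2.projectPoints(
            np.zeros((1, 3), dtype=np.float32), rvec, tvec,
            camera_matrix, dist_coeffs)
        return rvec, tvec, origin_2d.reshape(2)

Because all detected corners from every visible marker feed into one solvePnP call, the pose stays stable even when some markers are occluded, and it improves as more markers come into view.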
Not sure any of that helps, good luck.