
Aruco Markers point position estimation

asked 2018-04-12 10:03:06 -0600

Bobaz

I need to detect and track an area of fixed size (let's say 1 m by 1 m).

I am using 4 sets of 3 markers (one set for each corner), so that by reading the marker ID I know which corner it belongs to. Each set of markers is arranged in an L shape (oriented to match the shape of its corner).

If I can detect all the markers, I can successfully identify the target area. But if one or more markers are not detected (e.g. because they are outside the frame), I would like to use the others to estimate the missing points.

Unfortunately, pose estimation is accurate locally around the marker but gives very poor results if I try to estimate the position of farther points.

The code I am using at the moment is something like this:

Marker m;                   // filled; imagine this is the bottom-left marker and I want the top-right one
CameraParameters camParams; // filled

// From the center of the marker I want to move 1 m along the x axis and 1 m along the y axis
vector<Point3f> objectPoints{ Point3f(1.0f, 1.0f, 0.0f) };

vector<Point2f> imagePoints;
projectPoints(objectPoints, m.Rvec, m.Tvec, camParams.CameraMatrix, camParams.Distorsion, imagePoints);
Point2f estimate = imagePoints[0];

At this point I expect estimate to be the point I wanted, expressed in screen pixels, but the result is often very bad. I also tried using PoseTrackers and "homemade tracking" by saving the previous positions of the markers and trying to predict the next ones.
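By "homemade tracking" I mean something along the lines of the following constant-velocity extrapolation (illustrative only, with names of my own choosing, not the exact code I used):

#include <opencv2/core.hpp>

// Illustrative only: given the last two observed positions of a marker corner,
// predict where it should appear in the next frame (constant-velocity assumption).
cv::Point2f predictNext(const cv::Point2f &prev, const cv::Point2f &curr)
{
    return curr + (curr - prev);
}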

Am I doing something wrong? Is there a better way?

I am using OpenCV 3.4.1 and Aruco 3.0.6.

Thanks in advance.


1 answer


answered 2018-04-13 10:37:35 -0600

dpizzle

I'm not quite sure what the problem is; obviously make sure your calibration, etc. is correct. Here's another way of tackling it, assuming the placement of the markers within the fixed area is known:

To get the best pose estimation possible, I would dynamically build the object points and image points for each frame, depending on which markers are detected in that frame, and then pass those to solvePnP. The more data solvePnP has, the more accurate the result.

First you need to settle on a fixed point in your tracking area, probably the center. Then, for every frame:

1. Run ArUco detection only; you don't need per-marker pose estimation.
2. Get the list of all corner points ArUco returns (2D points). This will be 4-48 points in your case; these are the image points.
3. Dynamically build your object points (3D points) from your known marker placements, with (0, 0, 0) being the fixed point you settled on.
4. Make sure the order of points is the same in both lists (sometimes this matters, sometimes not, depending on the algorithm).
5. Run solvePnP with that data, and you'll get an Rvec and Tvec.
6. Run projectPoints if you need the screen coordinates of the chosen fixed point.
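A rough sketch of that per-frame procedure, using the same standalone ArUco types the question already uses (aruco::Marker, aruco::CameraParameters). The marker-ID-to-position table, the marker size, and the helper names below are placeholders, not part of the original answer; verify the corner ordering and include path against your own setup.

#include <map>
#include <vector>
#include <opencv2/opencv.hpp>
#include <aruco/aruco.h>   // standalone ArUco library; include path may differ per install

// Hypothetical table: known center of each marker (in meters) in the board frame,
// with (0, 0, 0) at the fixed reference point chosen for the area.
// Fill this with your real marker IDs and layout.
static const std::map<int, cv::Point2f> kMarkerCenters = {
    {10, {-0.50f, -0.50f}},   // e.g. part of the bottom-left corner set
    {11, {-0.40f, -0.50f}},
    // ... one entry per marker ID
};
static const float kMarkerSize = 0.08f;  // marker side length in meters (assumed)

// Build 2D/3D correspondences from whatever markers were detected this frame
// and estimate a single board pose with solvePnP.
bool estimateBoardPose(const std::vector<aruco::Marker> &detected,
                       const aruco::CameraParameters &cam,
                       cv::Mat &rvec, cv::Mat &tvec)
{
    std::vector<cv::Point3f> objectPoints;
    std::vector<cv::Point2f> imagePoints;
    const float h = kMarkerSize / 2.0f;

    for (const aruco::Marker &m : detected) {
        auto it = kMarkerCenters.find(m.id);
        if (it == kMarkerCenters.end()) continue;   // marker is not part of the board
        const cv::Point2f c = it->second;

        // ArUco usually returns the four corners in the order
        // top-left, top-right, bottom-right, bottom-left; keep the 3D points
        // in the same order so the correspondences match.
        objectPoints.push_back({c.x - h, c.y + h, 0.0f});
        objectPoints.push_back({c.x + h, c.y + h, 0.0f});
        objectPoints.push_back({c.x + h, c.y - h, 0.0f});
        objectPoints.push_back({c.x - h, c.y - h, 0.0f});
        for (int i = 0; i < 4; ++i) imagePoints.push_back(m[i]);
    }

    if (objectPoints.size() < 4) return false;      // not enough data this frame
    return cv::solvePnP(objectPoints, imagePoints,
                        cam.CameraMatrix, cam.Distorsion, rvec, tvec);
}

// Project any board-frame point (e.g. a corner of the 1 m x 1 m area)
// back into the image using the single board pose.
cv::Point2f projectBoardPoint(const cv::Point3f &p,
                              const aruco::CameraParameters &cam,
                              const cv::Mat &rvec, const cv::Mat &tvec)
{
    std::vector<cv::Point2f> out;
    cv::projectPoints(std::vector<cv::Point3f>{p}, rvec, tvec,
                      cam.CameraMatrix, cam.Distorsion, out);
    return out[0];
}

Because the Rvec/Tvec come from one solvePnP over all visible markers, projecting the four area corners stays consistent even when some markers leave the frame.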

Not sure any of that helps, good luck.


Comments

This definitely helped a lot, thank you very much. By any chance, do you know the best way to calibrate the camera? I mean, what kinds of viewpoints should I consider? How many photos? Is the ArUco board provided with the sources the best way? Thanks in advance.

Bobaz ( 2018-04-16 04:55:28 -0600 )
