Accuracy of cv2.aruco.estimatePoseSingleMarkers

asked 2019-10-29 10:06:41 -0600 by cv_usr_mji
updated 2019-10-29 10:21:44 -0600

Hello everyone,

I have been trying to estimate the accuracy of cv2.aruco.estimatePoseSingleMarkers in Python. To do so, I have generated a pixel image with a perfectly centered AprilTag. I also created a synthetic camera matrix assuming no astigmatism (fx = fy) and a perfectly centered principal point (cx, cy). The distortion coefficients were set to zero. Using this synthetic data, I have been running the pose estimation.
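
For reference, the focal length entry in the camera matrix below is simply the physical focal length divided by the pixel pitch; a quick sketch of that arithmetic (all values taken from the comment in the code, nothing assumed beyond that):

f_mm = 50.0                  # physical focal length
pixel_pitch_mm = 0.00345     # sensor pixel size ("chipsize" in the code comment)
f_px = f_mm / pixel_pitch_mm # = 14492.7536... px, the fx = fy value used below
cx, cy = 2048 / 2, 2448 / 2  # principal point at the center of a 2048x2448 px image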

With this setup, the optical axis should be perfectly aligned with the marker/image center.

I therefore expected a tvec with X/Y coordinates close to zero. However, I keep getting X/Y values of around 10 length units (in my case: mm).

I also tried it with ArUco markers: same result.

Please find my code and output attached below.

Is my assumption that X/Y should be close to zero correct? If so, what might be the reason for this offset? Is it related to the PnP algorithm, or is there a bug in my code?

Thanks in advance!

import cv2 as cv
import numpy as np #imports

# set dictionary and detector parameters
dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_APRILTAG_16H5)
para = cv.aruco.DetectorParameters_create()
para.cornerRefinementMethod = cv.aruco.CORNER_REFINE_APRILTAG
para.aprilTagDeglitch = 0
para.aprilTagMinWhiteBlackDiff = 30
para.aprilTagMaxLineFitMse = 20
para.maxErroneousBitsInBorderRate = 0.35
para.errorCorrectionRate = 1.0
para.minMarkerPerimeterRate = 0.05
para.maxMarkerPerimeterRate = 4
para.polygonalApproxAccuracyRate = 0.05
para.minCornerDistanceRate = 0.05
para.aprilTagCriticalRad = 0.1745329201221466 * 6
para.aprilTagMinClusterPixels = 5

img = cv.imread("testimg.png")
camMat = np.array([[14492.753623188406, 0, 1024],
                   [0, 14492.753623188406, 1224],
                   [0, 0, 1]])  # camera matrix for f = 50 mm, pixel size 0.00345 mm, 2048x2448 px image
distCoeffs = np.zeros((5, 1))  # distortion coefficients set to zero

marker_length = 30.00 #mm

corners, ids = cv.aruco.detectMarkers(img, dictionary, parameters=para)[:2]
corners = np.array(corners)

pose = cv.aruco.estimatePoseSingleMarkers(corners, marker_length, camMat, distCoeffs)

tvec = pose[1][0][0].tolist()  # pose[1] holds the tvecs; take the first marker's tvec
print(tvec)

My output is:

Tvec for Apriltag Detection = [10.714285714225264, -10.714285714225268, 776.397515563502]
Tvec for ArUco Detection = [10.689704341562148, -10.743286819239351, 776.5576507340221]

Used picture: testimg.png

My system is running: OpenCV 4.1.1, Python 3.7.3, Windows 10.
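
One quick way to see where such an offset comes from (this check is not part of the original post; it just reuses corners and camMat from the code above) is to compare the detected marker center in pixel coordinates with the principal point stored in the camera matrix:

marker_center = corners[0].reshape(4, 2).mean(axis=0)  # mean of the four corners = marker center (px)
principal_point = camMat[:2, 2]                         # (cx, cy) from the camera matrix
print("marker center (px):  ", marker_center)
print("principal point (px):", principal_point)
print("difference (px):     ", marker_center - principal_point)

Any pixel difference here shows up in tvec as roughly difference * Z / f, so it is worth ruling out before suspecting the PnP solver.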


1 answer


answered 2019-11-04 10:10:37 -0600 by csCV

Your assumption concerning the X and Y coordinates of the tvec is right: if you supply a picture corresponding to a perfectly perpendicular, undistorted view and the marker is centered in the frame, X and Y should come out close to zero.

It seems that cx and cy in your camera matrix are swapped: since camera sensor formats are usually defined in landscape orientation (horizontal x vertical), cx should be the larger value. If I change this in your code, I get

  [-2.276270217259004e-38, 4.629643948357638e-24, 776.3975155294337]

which is pretty much what you expected.
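
For completeness, a minimal sketch of the corrected camera matrix (only cx and cy are swapped relative to the question's code; all values come from the question), together with a back-of-envelope check of the original ~10.7 mm offset:

import numpy as np

f_px = 14492.753623188406                     # focal length in pixels, as in the question
camMat_fixed = np.array([[f_px, 0, 1224],     # cx: larger half-dimension (landscape sensor)
                         [0, f_px, 1024],     # cy: smaller half-dimension
                         [0, 0, 1]])

# A principal-point error of 200 px (1224 - 1024) at a distance Z of ~776.4 mm
# projects to a lateral offset of about 200 * Z / f_px:
print(200 * 776.4 / f_px)                     # ~10.71 (mm)

In other words, the ~10.7 mm X/Y values in the question are just the 200 px principal-point error mapped back to the marker distance, not an inaccuracy of the PnP solver.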

