Accuracy of cv2.aruco.estimatePoseSingleMarkers

Hello everyone,

I have been trying to estimate the accuracy of cv2.aruco.estimatePoseSingleMarkers in Python. To do so, I generated a synthetic image with a perfectly centered AprilTag. I also created a synthetic camera matrix with fx = fy (no astigmatism) and the principal point (cx, cy) exactly at the image center. The distortion coefficients were set to zero. With this synthetic data, I ran the pose estimation.
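
For reference, here is a minimal sketch of how such a centered test image can be generated (the marker size of 600 px is an assumed example value; the 2448x2048 px canvas matches the image size implied by the camera matrix below):

import cv2 as cv
import numpy as np

dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_APRILTAG_16H5)
marker_px = 600 #assumed marker size in pixels (example value)
marker = cv.aruco.drawMarker(dictionary, 0, marker_px)

canvas = np.full((2448, 2048), 255, dtype=np.uint8) #white background, rows x cols
y0 = (canvas.shape[0] - marker_px) // 2 #top-left corner chosen so that the
x0 = (canvas.shape[1] - marker_px) // 2 #marker center coincides with the image center
canvas[y0:y0 + marker_px, x0:x0 + marker_px] = marker
cv.imwrite("testimg.png", canvas)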

With this setup, the optical axis should be perfectly aligned with the marker/image center.

I therefore expected a tvec with X/Y coordinates close to zero. However, I keep getting X/Y values of around 10 units of length (in my case: mm).
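
To make the expectation explicit: if the marker lies on the optical axis and parallel to the image plane, a pose of tvec = (0, 0, Z) reprojects the marker center exactly onto the principal point (cx, cy). A quick sanity check of that reasoning (Z = 776 mm is just an example value, the camera matrix is the synthetic one from below):

import cv2 as cv
import numpy as np

camMat = np.array([[14492.753623188406, 0, 1024],
                   [0, 14492.753623188406, 1224],
                   [0, 0, 1]])
centre = np.array([[0.0, 0.0, 0.0]]) #marker center in the marker frame
rvec = np.zeros(3) #fronto-parallel marker
tvec = np.array([0.0, 0.0, 776.0]) #marker on the optical axis
proj, _ = cv.projectPoints(centre, rvec, tvec, camMat, np.zeros(5))
print(proj.ravel()) #-> [1024. 1224.], i.e. exactly (cx, cy)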

I also tried it with ArUco markers: same result.

Please find my code and output attached below.

Is my assumption that X/Y should be close to zero correct? If so, what might be the reason for this offset? Is it related to the PnP algorithm, or is there a bug in my code?

Thanks in advance!

import cv2 as cv
import numpy as np #imports

dictionary = cv.aruco.getPredefinedDictionary(cv.aruco.DICT_APRILTAG_16H5) 
para = cv.aruco.DetectorParameters_create()                   
para.cornerRefinementMethod = cv.aruco.CORNER_REFINE_APRILTAG
para.aprilTagDeglitch = 0                                  
para.aprilTagMinWhiteBlackDiff = 30
para.aprilTagMaxLineFitMse = 20                          
para.maxErroneousBitsInBorderRate = 0.35                 
para.errorCorrectionRate = 1.0                             
para.minMarkerPerimeterRate = 0.05                      
para.maxMarkerPerimeterRate = 4                             
para.polygonalApproxAccuracyRate = 0.05                     
para.minCornerDistanceRate = 0.05                          
para.aprilTagCriticalRad = 0.1745329201221466 *6
para.aprilTagMinClusterPixels = 5 #set dictionary and parameter object

img = cv.imread("testimg.png")
camMat = np.array([[14492.753623188406, 0, 1024],
                   [0, 14492.753623188406, 1224],
                   [0, 0, 1]]) #camera matrix for f = 50 mm, pixel size 0.00345 mm, image size 2048x2448 px
distCoeffs = np.zeros((5, 1)) #distortion coefficients, all zero

marker_length = 30.00 #mm

corners, ids = cv.aruco.detectMarkers(img, dictionary, parameters=para)[:2]
corners = np.array(corners)

pose = cv.aruco.estimatePoseSingleMarkers(corners, marker_length, camMat, distCoeffs) #pose[0] = rvecs, pose[1] = tvecs

tvec = pose[1][0][0].tolist() #tvec of the first (and only) marker, in the units of marker_length (mm)
print(tvec)
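
As a cross-check (not part of the original script), the detected corners can also be passed directly to cv.solvePnP, using the object-point layout the aruco module documents for a square marker (corners at +/- marker_length/2 in the marker plane, ordered top-left, top-right, bottom-right, bottom-left). If this reproduces the ~10 mm offset, the PnP step is at least consistent with the detected corner positions:

half = marker_length / 2.0
obj_points = np.array([[-half,  half, 0.0],
                       [ half,  half, 0.0],
                       [ half, -half, 0.0],
                       [-half, -half, 0.0]], dtype=np.float32) #TL, TR, BR, BL
img_points = corners[0].reshape(-1, 2).astype(np.float32) #corners of the first marker
ok, rvec2, tvec2 = cv.solvePnP(obj_points, img_points, camMat, distCoeffs)
print(ok, tvec2.ravel()) #should roughly match the tvec printed above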

My output is:

tvec for AprilTag detection = [10.714285714225264, -10.714285714225268, 776.397515563502]
tvec for ArUco detection = [10.689704341562148, -10.743286819239351, 776.5576507340221]
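
For reference: the marker center expressed in camera coordinates is the tvec itself, so the reported X/Y offsets can be converted into an equivalent pixel offset of the reprojected marker center via the pinhole model, u - cx = fx * X / Z. Plugging in the numbers above gives roughly +/- 200 px:

fx = 14492.753623188406
X, Y, Z = 10.714285714225264, -10.714285714225268, 776.397515563502
print(fx * X / Z, fx * Y / Z) #-> about +200 px in u and -200 px in v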

My system: OpenCV 4.1.1, Python 3.7.3, Windows 10.
