Hello
I can currently identify an ArUco marker with my webcam. I would like to know if it is possible to calculate the distance "D" between the webcam and the ArUco marker, as in the following image. Is it also possible to find the angle "a" between the blue axis of the ArUco marker and the center of the camera image?
Note: the 3 m distance is an example; I can get it through a rangefinder that measures the distance from the center of the camera to the center of the ArUco marker. The camera is fixed at the center of a differential robot, and the robot can align itself with the center of the ArUco marker. I would like to calculate the distance "D" to position the robot (approximately) at the center of the ArUco marker.
I'm using kyle-bersani's code (https://github.com/kyle-bersani/opencv-examples):
import numpy
import cv2
import cv2.aruco as aruco
import os
import pickle

# Check for camera calibration data
if not os.path.exists('./calibration.pckl'):
    print("You need to calibrate the camera you'll be using. See calibration project directory for details.")
    exit()
else:
    f = open('calibration.pckl', 'rb')
    (cameraMatrix, distCoeffs, _, _) = pickle.load(f)
    f.close()
    if cameraMatrix is None or distCoeffs is None:
        print("Calibration issue. Remove ./calibration.pckl and recalibrate your camera with CalibrateCamera.py.")
        exit()

# Constant parameters used in Aruco methods
ARUCO_PARAMETERS = aruco.DetectorParameters_create()
# ARUCO_DICT = aruco.Dictionary_get(aruco.DICT_6X6_1000)  # original
ARUCO_DICT = aruco.Dictionary_get(aruco.DICT_5X5_1000)

# Create grid board object we're using in our stream
board = aruco.GridBoard_create(
    markersX=2,
    markersY=2,
    markerLength=0.09,
    markerSeparation=0.01,
    dictionary=ARUCO_DICT)

# Create vectors we'll be using for rotations and translations for postures
rvecs, tvecs = None, None

cam = cv2.VideoCapture(0)
while cam.isOpened():
    # Capture each frame of our video stream
    ret, QueryImg = cam.read()
    if ret:
        # Convert to grayscale for detection
        gray = cv2.cvtColor(QueryImg, cv2.COLOR_BGR2GRAY)

        # Detect Aruco markers
        corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, ARUCO_DICT, parameters=ARUCO_PARAMETERS)

        # Refine detected markers:
        # eliminates markers not part of our board, adds missing markers to the board
        corners, ids, rejectedImgPoints, recoveredIds = aruco.refineDetectedMarkers(
            image=gray,
            board=board,
            detectedCorners=corners,
            detectedIds=ids,
            rejectedCorners=rejectedImgPoints,
            cameraMatrix=cameraMatrix,
            distCoeffs=distCoeffs)

        QueryImg = aruco.drawDetectedMarkers(QueryImg, corners, borderColor=(0, 0, 255))

        if ids is not None:
            try:
                # Note: the marker length here (10.5) sets the units of tvec
                rvec, tvec, _objPoints = aruco.estimatePoseSingleMarkers(corners, 10.5, cameraMatrix, distCoeffs)
                QueryImg = aruco.drawAxis(QueryImg, cameraMatrix, distCoeffs, rvec, tvec, 5)
            except cv2.error:
                print("Pose estimation failed, carrying on")  # was: "Deu merda segue o baile"

        cv2.imshow('QueryImage', QueryImg)

    # Exit on the 'q' keypress
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
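For reference, here is a minimal sketch of what I am hoping to compute from the pose estimate. It assumes a single detected marker and uses the `tvec` returned by `estimatePoseSingleMarkers` (its units follow the marker length passed to that call). I am not sure whether the angle "a" I want should come from `tvec` (the bearing to the marker center) or from `rvec` (the marker's own orientation); this sketch uses `tvec`, and `distance_and_angle` is just a name I made up:

```python
import numpy as np

def distance_and_angle(tvec):
    """Distance D and horizontal bearing from one ArUco pose estimate.

    tvec is the (x, y, z) translation of the marker in the camera
    frame, e.g. tvec[0][0] from estimatePoseSingleMarkers. Units
    follow the marker length used for pose estimation.
    """
    x, y, z = np.asarray(tvec).ravel()[:3]
    D = float(np.linalg.norm([x, y, z]))     # straight-line camera-to-marker distance
    a = float(np.degrees(np.arctan2(x, z)))  # angle off the optical axis, in degrees
    return D, a

# Example with a made-up pose: marker 0.30 left/right and 3.0 ahead of the camera
D, a = distance_and_angle(np.array([[0.30, 0.0, 3.0]]))
```

Would something along these lines be the right approach?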
Thanks!