Opencv - Camera position relative to Aruco Markers

I've been playing around with ArUco markers for a while and now I want to use them for a real purpose. I created a 2×2 board of markers, calibrated my camera, and printed out Rvec and Tvec (the rotation and translation vectors) of the detected board:

    detectedBoard = TheBoardDetector.getDetectedBoard();
    rvec = detectedBoard.Rvec;
    tvec = detectedBoard.Tvec;

After that I used a code snippet I found online to get the real position of the camera relative to the markers:

    cv::Point3f CameraParameters::getCameraLocation(cv::Mat Rvec, cv::Mat Tvec) {
        cv::Mat m33(3, 3, CV_32FC1);
        cv::Rodrigues(Rvec, m33);

        cv::Mat m44 = cv::Mat::eye(4, 4, CV_32FC1);
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                m44.at<float>(i, j) = m33.at<float>(i, j);

        // now, add translation information
        for (int i = 0; i < 3; i++)
            m44.at<float>(i, 3) = Tvec.at<float>(0, i);
        // invert the matrix
        m44.inv();
        return cv::Point3f(m44.at<float>(0, 0), m44.at<float>(0, 1), m44.at<float>(0, 2));
    }

I tested that code, but I got what looked like random results. The values changed when I moved the camera, but I don't know what unit of measurement they are in. My questions:

1. Is that function correct? I don't know anything about the solvePnP or Rodrigues functions, and when I tried reading about them I didn't understand a thing.

2. Can I get the (x, y, z) coordinates of the camera relative to the markers, in meters?

3. How?

I know I might be a bit lazy for not testing those functions myself, but I guess this question will be a piece of cake for someone experienced in computer vision and OpenCV.

Thank you!

Cheers.
