How to transform a 3D model for Augmented Reality application using OpenCV Viz and ARUCO

asked 2017-01-05 07:17:52 -0600 by mkocabas

I'm developing a simple marker-based augmented reality application with OpenCV Viz and ARUCO. I just want to visualize a 3D object (in PLY format) on a marker.

I can run marker detection and pose estimation (returning rotation and translation vectors) with ARUCO without a problem, and I can visualize any 3D object (PLY format) and the camera frames in the Viz window. However, I'm stuck on using the rotation and translation vectors returned by ARUCO to localize the 3D model on the marker.

I'm creating an affine transformation from the rotation and translation vectors and applying it to the 3D model. Is this correct? How should I make use of the translation and rotation vectors?

Below is my code snippet.

// Camera calibration outputs
   cv::Mat cameraMatrix, distCoeffs;
   loadIntrinsicCameraParameters(cameraMatrix, distCoeffs);

// Marker dictionary
   Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

   viz::Viz3d myWindow("Coordinate Frame");

   cv::Mat image;

// Webcam frame pose; without this the frame appears upside-down
   Affine3f imagePose(Vec3f(3.14159,0,0), Vec3f(0,0,0));

// Viz viewer pose to see the whole webcam frame
  Vec3f cam_pos(0.0f, 0.0f, 900.0f), cam_focal_point(0.0f, 0.0f, 0.0f), cam_y_dir(0.0f, -1.0f, 0.0f); // up direction must be non-zero
  Affine3f viewerPose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);

// Video capture from source
   VideoCapture cap(camID);
   int frame_width = cap.get(CV_CAP_PROP_FRAME_WIDTH);
   int frame_height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
   cap >> image;

// Load mesh data
    viz::WMesh batman(viz::Mesh::load("../data/bat.ply"));
    viz::WImage3D img(image, Size2d(frame_width, frame_height));

// Show camera frame, mesh and a coordinate widget (for debugging)
    myWindow.showWidget("Image", img);
    myWindow.showWidget("Batman", batman);
    myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem(5.0));

    myWindow.setFullScreen(true);
    myWindow.setViewerPose(viewerPose);

 // Rotation vector of 3D model
    Mat rot_vec = Mat::zeros(1,3,CV_32F);
    cv::Vec3d rvec, tvec;

 // ARUCO outputs
    float roll, pitch, yaw;
    float x, y, z;

    while (!myWindow.wasStopped()) {

if (cap.read(image)) {
    cv::Mat imageCopy;
    image.copyTo(imageCopy);

    // Marker detection
    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f> > corners;
    cv::aruco::detectMarkers(image, dictionary, corners, ids);

    if (ids.size() > 0){

        // Draw a green line around markers
        cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
        vector<Vec3d> rvecs, tvecs;

        // Get rotation and translation vectors for each marker
        cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);

        for(int i=0; i<ids.size(); i++){

            cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1);

            // Take only the first marker's rotation and translation to visualize 3D model on this marker
            rvec = rvecs[0];
            tvec = tvecs[0];

            roll = rvec[0];
            pitch = rvec[1];
            yaw = rvec[2];

            x = tvec[0];
            y = tvec[1];
            z = tvec[2];

            qDebug() << rvec[0] << "," << rvec[1] << "," << rvec[2] << "---" << tvec[0] << "," << tvec[1] << "," << tvec[2];
        }
    }
    // Show camera frame in Viz window
    img.setImage(imageCopy);
    img.setPose(imagePose);
}

// Create affine pose from rotation and translation vectors
rot_vec.at<float>(0,0) = roll;
rot_vec.at<float>(0,1) = pitch;
rot_vec.at<float>(0,2) = yaw;

Mat rot_mat;

Rodrigues(rot_vec, rot_mat);

Affine3f pose(rot_mat, Vec3f(x, y, z));

// Set the pose ...
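The snippet is truncated here. For illustration, a minimal sketch of how the rest of the loop body might look, assuming the pose built above is simply applied to the mesh widget each frame (model scaling and any composition with imagePose are left out):

    // Apply the marker pose to the mesh and redraw the Viz window.
    // Note: tvec is in the marker's metric units (marker length 0.05 above),
    // so the PLY model may also need scaling to match.
    batman.setPose(pose);

    myWindow.spinOnce(1, true);
    }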

Comments

"I'm creating an affine transformation" -- if your input comes from a webcam - shouldn't that be a perspective transform ?

berak (2017-01-05 10:56:21 -0600)

The affine transformation is for the 3D model, and its inputs are the rotation and translation vectors. In 3D, an affine transform can represent the rotation-translation as a pose. My problem is converting the translation and rotation vectors into a 3D pose.

mkocabas (2017-01-06 00:15:38 -0600)
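For illustration, a minimal sketch of that conversion, assuming the cv::Vec3d rvec and tvec filled from estimatePoseSingleMarkers above: cv::Affine3d has a two-vector constructor that treats its first argument as a Rodrigues rotation vector, so the explicit Rodrigues() call and the float copies are not strictly needed.

    // Build the model pose directly from ARUCO's outputs.
    // cv::Affine3d interprets its first argument as a Rodrigues rotation vector.
    cv::Affine3d markerPose(rvec, tvec);

    // Place the mesh widget at that pose (camera coordinate frame).
    batman.setPose(markerPose);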

ah, misread it then, sorry.

berak (2017-01-06 05:00:07 -0600)