OpenCV Android. Error (assertion failed) in projectPoints() in Calib3d.

I'm using OpenCV 3.0.0 for Android. Basically, I'm trying to use the chessboard pattern to find the pose of the smartphone camera, and to simply display x, y, z axes on the chessboard pattern.

To sum up,

  1. I approximate all internal camera parameters by assuming (1) focal length = image width in pixels, (2) principal point = center of image, (3) no radial distortion (the resulting camera matrix is shown right after this list).
  2. I define an arbitrary world coordinate system centered at the lower-left corner of the chessboard pattern, with the pattern lying in the x-y plane, and create the world coordinates of all chessboard corners.
  3. In every video frame, I use Calib3d.findChessboardCorners() to locate all chessboard corners in the 2D frame.
  4. Then, I use Calib3d.solvePnP() to solve for the pose, and Calib3d.projectPoints() to project several points into the current 2D frame.
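
For reference, with assumptions (1) and (2) the camera matrix built in the code below is just

    f   0   cx
    0   f   cy
    0   0   1

where f is the image width in pixels and (cx, cy) is the image center.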

However, the call to Calib3d.projectPoints() fails with the following error:

CvException [org.opencv.core.CvException: cv::Exception: /home/maksim/workspace/android-pack/opencv/modules/calib3d/src/fisheye.cpp:77: error: (-215) _tvec.total() * _tvec.channels() == 3 && (_tvec.depth() == CV_32F || _tvec.depth() == CV_64F) in function void cv::fisheye::projectPoints(cv::InputArray, cv::OutputArray, cv::InputArray, cv::InputArray, cv::InputArray, cv::InputArray, double, cv::OutputArray)

I know some other people have encountered this error, either in their own projects or in the Camera Calibration sample provided by OpenCV, but there hasn't been much follow-up from the community, so I'm asking again with slightly more info.

Below is the code snippet for the two overridden methods onCameraViewStarted() and onCameraFrame().

@Override
public void onCameraViewStarted(int width, int height) {
    // Initialization: Mat
    mImg = new Mat(height, width, CvType.CV_8UC4); // Current input frame (Mat takes rows, cols)
    mImgOut = new Mat(height, width, CvType.CV_8UC4); // Output frame
    mCameraMat = new Mat(3, 3, CvType.CV_32FC1); // The camera matrix

    // Approximate camera intrinsics
    mFocalLength = width; // Approximate focal length, i.e. image width in no. of pixels
    mCenter = new Point(width / 2.0, height / 2.0); // Approximate principal point, i.e. center of image

    // Construct camera matrix
    mCameraMat.put(0, 0, mFocalLength); mCameraMat.put(0, 1, 0); mCameraMat.put(0, 2, mCenter.x);
    mCameraMat.put(1, 0, 0); mCameraMat.put(1, 1, mFocalLength); mCameraMat.put(1, 2, mCenter.y);
    mCameraMat.put(2, 0, 0); mCameraMat.put(2, 1, 0); mCameraMat.put(2, 2, 1);

    // Assume no lens distortion
    mDistCoeffs = new MatOfDouble(0, 0, 0, 0); // Distortion coefficients

    // Create coordinates for all chessboard corners in the world space (sketched after this snippet)
    createChessboardPoints();
}

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    mImg = inputFrame.rgba(); // Current input frame
    mImgOut = mImg.clone(); // Output frame

    MatOfPoint2f boardCorners = new MatOfPoint2f(); // Chessboard corners in the current 2D frame

    // Find the image coordinates of all chessboard corners in the current frame
    boolean cornersFound = Calib3d.findChessboardCorners(mImg, mBoardSize, boardCorners,
            Calib3d.CALIB_CB_ADAPTIVE_THRESH | Calib3d.CALIB_CB_NORMALIZE_IMAGE);

    if (cornersFound) {
        // The rotation and translation vectors (the pose)
        Mat rVec = new Mat(), tVec = new Mat();

        // Solve for the pose
        Calib3d.solvePnP(mChessboardPoints, boardCorners, mCameraMat, mDistCoeffs, rVec, tVec, false, Calib3d.SOLVEPNP_ITERATIVE);

        // Several 3D points to be projected to the 2D image
        MatOfPoint3f axisTips = new MatOfPoint3f(new Point3(0, 0, 0), new Point3(3, 0, 0), new Point3(0, 3, 0), new Point3(0, 0, 3));

        MatOfPoint2f axisTipsInImage = new MatOfPoint2f();   

        Calib3d.projectPoints(axisTips, rVec, tVec, mCameraMat, mDistCoeffs, axisTipsInImage);

        // Draw some lines to connect the 2D points (code omitted)
    }

    return mImgOut;
}
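
For completeness, createChessboardPoints() just fills mChessboardPoints (a MatOfPoint3f) with the corner coordinates in my world frame. A rough sketch of what it does, assuming one chessboard square per world unit (so the axis tips at 3 units above span three squares):

private void createChessboardPoints() {
    // mBoardSize holds the number of inner corners per row and column
    List<Point3> corners = new ArrayList<>();
    for (int row = 0; row < mBoardSize.height; row++) {
        for (int col = 0; col < mBoardSize.width; col++) {
            corners.add(new Point3(col, row, 0)); // z = 0: the board lies in the world x-y plane
        }
    }
    mChessboardPoints = new MatOfPoint3f();
    mChessboardPoints.fromList(corners);
}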

I see that Calib3d.projectPoints() calls the native method projectPoints_1(), which eventually ends up in the C++ function void cv::fisheye::projectPoints(). Some people have mentioned that the Java method signature differs from the C++ function signature (the position of the imagePoints parameter). I'm not sure whether that causes problems or whether it is handled inside the native method.

Another thing: I have logged and checked, right before calling Calib3d.projectPoints(), that

tVec.total() * tVec.channels() == 3 && (tVec.depth() == CvType.CV_32F || tVec.depth() == CvType.CV_64F)

is indeed true, so I can't see why the assertion in the C++ function (fisheye.cpp:77) fails.
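
Concretely, right before the call I log something like this (TAG is just my log tag), and it prints true:

boolean tVecOk = tVec.total() * tVec.channels() == 3
        && (tVec.depth() == CvType.CV_32F || tVec.depth() == CvType.CV_64F);
Log.d(TAG, "tVec check before projectPoints: " + tVecOk);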

One more thing: if I don't use Calib3d.projectPoints(), how can I transform 3D points in world space to 2D points in image space myself? I know that world space to camera space is [X Y Z] = R * [U V W] + T, but I'm not sure how to go from the rotation and translation vectors to the 2D image coordinates.
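
My current idea would be to turn rVec into a rotation matrix with Calib3d.Rodrigues(), transform each world point into camera space, and then apply the pinhole model with my approximate intrinsics (ignoring distortion, which I assume to be zero anyway). A rough sketch of what I have in mind, though I'm not sure it is equivalent to projectPoints():

Mat R = new Mat();
Calib3d.Rodrigues(rVec, R); // 3x1 rotation vector -> 3x3 rotation matrix

double[] world = {0, 0, 3}; // e.g. the tip of the z axis

// World space to camera space: Xc = R * Xw + t
double[] cam = new double[3];
for (int i = 0; i < 3; i++) {
    cam[i] = tVec.get(i, 0)[0];
    for (int j = 0; j < 3; j++) {
        cam[i] += R.get(i, j)[0] * world[j];
    }
}

// Pinhole projection with the approximate intrinsics (no distortion)
double u = mFocalLength * cam[0] / cam[2] + mCenter.x;
double v = mFocalLength * cam[1] / cam[2] + mCenter.y;
Point imagePoint = new Point(u, v);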

Any ideas would be appreciated.