solvePnP: large (~100 pixel) re-projection error

asked 2016-03-31 11:23:22 -0600 by dtvsilva, updated 2016-04-04 04:10:34 -0600

Hi,

I have been trying to find a camera pose in relation to an object frame; however, I'm getting unstable results and large re-projection errors (100 pixels or more in total).

I know that object points and image points are correct. Intrinsic parameters and distortion coefficients were obtained with OpenCV's calibrateCamera with minimal re-projection error (0.5 pixel).

I have tried CV_EPNP and solvePnPRansac; both return about the same results, or worse.

The code:

// Includes assumed for a self-contained build (not shown in the original post):
#include <opencv2/opencv.hpp>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

cv::Mat intrinsic_matrix = (cv::Mat_<double>(3, 3) <<
                          502.21, 0, 476.11,
                          0, 502.69, 360.73,
                          0, 0, 1);

cv::Mat distortion_coeffs = (cv::Mat_<double>(1, 5) <<
                          -3.2587021051876525e-01, 1.1137886872576558e-01,
                          -8.0030372520954252e-04, 1.4677531243862570e-03,
                          -1.6824659875846807e-02);

// Intrinsic matrix and distortion coefficients are read from a file

// Cloud declarations (omitted in the original post): the Ptr clouds receive
// the loaded files, the plain clouds hold working copies of the points
pcl::PointCloud<pcl::PointXYZ>::Ptr Lms1PointCloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr SingleCamCloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ> lms1PointCloud, singleCamCloud;

vector<cv::Point3f> objectPoints;
vector<cv::Point2f> imagePoints;

if (pcl::io::loadPCDFile<pcl::PointXYZ> ("lms1.pcd", *Lms1PointCloud) == -1)    //* load the file
{
    PCL_ERROR ("Couldn't read file lms1.pcd \n");
    return (-1);
}
if (pcl::io::loadPCDFile<pcl::PointXYZ> ("singleCamCloud.pcd", *SingleCamCloud) == -1)    //* load the file
{
    PCL_ERROR ("Couldn't read file singleCamCloud.pcd \n");
    return (-1);
}

lms1PointCloud.points = Lms1PointCloud->points;
singleCamCloud.points = SingleCamCloud->points;

// Fill vectors objectPoints and imagePoints
for (size_t i = 0; i < singleCamCloud.points.size(); i++)
{
    imagePoints.push_back(cv::Point2f(singleCamCloud.points[i].x, singleCamCloud.points[i].y));
    objectPoints.push_back(cv::Point3f(lms1PointCloud.points[i].x, lms1PointCloud.points[i].y, lms1PointCloud.points[i].z));
}

cv::Mat rotation_vector;
cv::Mat translation_vector;

// Distortion deliberately passed as cv::noArray(): the image points were
// measured on an already-undistorted image (see the discussion below)
solvePnP(objectPoints, imagePoints, intrinsic_matrix, cv::noArray(), rotation_vector, translation_vector, false, CV_ITERATIVE);

// Projection of objectPoints according to solvePnP
cv::Mat test_image = cv::Mat::zeros( 720, 960, CV_8UC3 );
vector<cv::Point2f> reprojectPoints;
cv::projectPoints(objectPoints, rotation_vector, translation_vector, intrinsic_matrix, cv::noArray(), reprojectPoints);

// Total L2 norm over all residuals at once (not a per-point average)
double sum = cv::norm(reprojectPoints, imagePoints);

std::cout << "sum=" << sum << std::endl;
// Draw projected points (red) and real image points (green)
int myradius=5;
for (size_t i = 0; i < reprojectPoints.size(); i++)
{
    cv::circle(test_image, cv::Point(reprojectPoints[i].x, reprojectPoints[i].y), myradius, cv::Scalar(0,0,255),-1,8,0);
    cv::circle(test_image, cv::Point(imagePoints[i].x, imagePoints[i].y), myradius, cv::Scalar(0,255,0),-1,8,0);
}
imwrite( "test_image.jpg", test_image );

Object points file and image points file (dropbox links).

In these conditions I get a re-projection error of 94.43. The image below shows the original image points (green) and the re-projected image points (red).

[Image: original image points (green) vs. re-projected points (red)]

I'm also not sure how I should use the distortion coefficients. Since the image points are already obtained from an undistorted image, I opted not to use them in solvePnP and projectPoints; is this correct? I don't think this is where the large re-projection error comes from, though, since the error doesn't change much whether I use them or not.
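
For reference, a sketch of the two self-consistent ways to handle the coefficients; "rawImagePoints" is a hypothetical name for points measured on the original (distorted) image, everything else follows the snippet above:

// Option A: image points measured on the raw (distorted) image ->
// pass the coefficients to BOTH solvePnP and projectPoints.
solvePnP(objectPoints, rawImagePoints, intrinsic_matrix, distortion_coeffs,
         rotation_vector, translation_vector, false, CV_ITERATIVE);
cv::projectPoints(objectPoints, rotation_vector, translation_vector,
                  intrinsic_matrix, distortion_coeffs, reprojectPoints);

// Option B: undistort the raw points once, then use cv::noArray() everywhere.
// Note the last argument: without passing the camera matrix as P,
// undistortPoints returns normalized coordinates instead of pixels.
std::vector<cv::Point2f> undistortedPoints;
cv::undistortPoints(rawImagePoints, undistortedPoints, intrinsic_matrix,
                    distortion_coeffs, cv::noArray(), intrinsic_matrix);
solvePnP(objectPoints, undistortedPoints, intrinsic_matrix, cv::noArray(),
         rotation_vector, translation_vector, false, CV_ITERATIVE);

Mixing the two (undistorted points plus distortion coefficients, or raw points with none) shifts every projection and inflates the error.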

I can't seem to find an explanation for such a large error...

If you need any more details feel free to ask. Thanks in advance.

EDIT: An image to help visualize the problem. See comments below.

[Image: camera Z axis (green) and both coordinate systems visualized in 3D]

Green is the Z camera axis; orange, with the frame overlapped, is my reference frame ...


1 answer


answered 2016-03-31 16:01:02 -0600 by Tetragramm, updated 2016-04-03 21:42:29 -0600

My impression is that you're getting bad results from solvePnP. The points all lie in a very narrow line, so there could be an ambiguity that is causing the pose to be flipped or rotated. Do you have ground truth to check it against?

Second possibility: are your points in the same order? I.e., object point 1 is image point 1, object point 2 is image point 2, and so forth.
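
One quick way to check is to label each point with its index and confirm that matching labels sit on corresponding blobs. A sketch, reusing the variable names from your code (std::to_string needs C++11 and <string>):

// Sketch: draw each correspondence index next to both point sets.
for (size_t i = 0; i < imagePoints.size(); i++)
{
    cv::putText(test_image, std::to_string(i),
                cv::Point(imagePoints[i]) + cv::Point(6, -6),
                cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(0, 255, 0));
    cv::putText(test_image, std::to_string(i),
                cv::Point(reprojectPoints[i]) + cv::Point(6, -6),
                cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(0, 0, 255));
}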

I'm afraid I don't see anything obvious, but I can take a closer look later or tomorrow.

EDIT: Here's what I'm seeing from solvePnP and drawAxis. Red is the x-axis, green is y, blue is z. Is this what the coordinate system would look like from the camera location?

[Image: drawAxis rendering of the solvePnP pose]


Comments

I don't have exact ground truth, although the resulting transformation matrix from solvePnP is:

T = [-0.764, -0.642,  0.055, -1.435;
     -0.113,  0.050, -0.992, -0.012;
      0.635, -0.765, -0.111,  2.200;
      0,      0,      0,      1]

and its inverse:

T^-1 = [-0.764, -0.113,  0.635, -2.495;
        -0.643,  0.050, -0.765,  0.761;
         0.055, -0.992, -0.111,  0.311;
         0,      0,      0,      1]

I know that the rotation part of T^-1 is close to reality, as are the X and Y translations; however, the Z translation is off by about 0.5 meters.

Yes, points are in the same order.
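
For completeness, a sketch of how such a 4x4 transform and its inverse can be assembled from solvePnP's output (rotation_vector and translation_vector as in the code above):

// Sketch: build the 4x4 transform T (object frame -> camera frame).
cv::Mat R;
cv::Rodrigues(rotation_vector, R);                  // axis-angle -> 3x3 rotation
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
R.copyTo(T(cv::Rect(0, 0, 3, 3)));
translation_vector.copyTo(T(cv::Rect(3, 0, 1, 3)));

// Inverse (camera frame -> object frame): rotation R^T, translation -R^T * t.
cv::Mat Tinv = cv::Mat::eye(4, 4, CV_64F);
cv::Mat Rt = R.t();
Rt.copyTo(Tinv(cv::Rect(0, 0, 3, 3)));
cv::Mat tinv = -(Rt * translation_vector);
tinv.copyTo(Tinv(cv::Rect(3, 0, 1, 3)));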

dtvsilva (2016-04-01 03:50:34 -0600)

What kind of shape is this? I just don't have a model in my head for what this should look like.

Sorry I didn't get anything done; something broke and I spent all evening on that. If I may suggest, find the drawAxis function in cv::aruco and check your results (see the sketch below). That function draws a 3-axis marker at the location and orientation of the coordinate system in the image. You can use it to get an idea of whether the coordinates are where you expect them to be, and whether they are oriented how you expect.
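
A sketch of what that call might look like, assuming the variable names from your code and the opencv_contrib aruco module; the 0.5 axis length is an arbitrary choice in object units:

// Sketch: visualize the solvePnP pose with cv::aruco::drawAxis.
// Requires the opencv_contrib aruco module: #include <opencv2/aruco.hpp>
cv::Mat axis_image = test_image.clone();
cv::aruco::drawAxis(axis_image, intrinsic_matrix, cv::noArray(),
                    rotation_vector, translation_vector, 0.5f);
cv::imwrite("axis_image.jpg", axis_image);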

Tetragramm (2016-04-01 06:48:19 -0600)

It's not an actual shape; it's the centroid of a sphere in 25 images. The camera is always at the same position, so I was hoping to take 25 images with a ball in different positions and calculate the transformation from my reference frame to the camera using solvePnP. No need to apologize; I'm grateful for your help.

dtvsilva (2016-04-02 13:53:36 -0600)

Hmm, and how do you get the actual ball position?

Tetragramm (2016-04-02 19:17:46 -0600)

With a laser range scanner.

dtvsilva (2016-04-02 19:19:56 -0600)

Does that give you x and y also? Or just depth?

Tetragramm (2016-04-02 19:31:07 -0600)

It gives me X and Y (the error is 30 mm). Z is calculated based on the arc that the laser scanner "sees"; i.e., if Z is 0.3, it means that the centroid is 0.3 meters above the laser scanner.

dtvsilva (2016-04-02 19:44:03 -0600)

Ok, so I got drawAxis working (it helps if you use the right camera matrix) and it looks like a reasonable solution to me. Unfortunately, I don't know how your camera is oriented relative to your coordinate system. I'm editing my answer to include the picture.

That seems like a fairly reliable way to get the location, but would it be possible to get a larger Z-coordinate span? You're looking right along the plane you're moving the ball in, so there's a lot of opportunity for ambiguities there.

Tetragramm (2016-04-03 21:40:49 -0600)

Looking at the picture, the orientation looks right, but the coordinate system should be lower. I'm going to add an image to my post; it's basically the same result you got, but with both coordinate systems shown in 3D.

About increasing the span... unfortunately it's not possible; the camera and laser positions are static.

dtvsilva (2016-04-04 03:47:17 -0600)

Well, I meant the Z-coordinate range of the ball. Would it be possible to get more points over a larger height range?

Another possibility is that the camera matrix is wrong. The principal point in your camera matrix is pretty far from the center of your image; that's not a guarantee of a bad calibration, but it is a warning sign. I would recommend doing a proper calibration by printing out a chessboard and taking a short video of yourself waving it around in front of the camera, then running the sample code from the tutorial (a sketch follows below) to see if what you get is different.
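
A sketch of such a calibration run, roughly following the OpenCV tutorial; the board dimensions, square size, and video filename are assumptions to adjust:

// Sketch: chessboard calibration. Board size (9x6 inner corners), square
// size (25 mm) and the video filename are assumptions, not the real setup.
cv::Size board(9, 6);
float square = 0.025f;

std::vector<cv::Point3f> model;                      // one board's 3D corners
for (int r = 0; r < board.height; r++)
    for (int c = 0; c < board.width; c++)
        model.push_back(cv::Point3f(c * square, r * square, 0.f));

std::vector<std::vector<cv::Point3f> > objPts;
std::vector<std::vector<cv::Point2f> > imgPts;

cv::VideoCapture cap("chessboard.avi");              // hypothetical filename
cv::Mat frame, gray;
while (cap.read(frame))
{
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(gray, board, corners))
    {
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.01));
        objPts.push_back(model);
        imgPts.push_back(corners);
    }
}

cv::Mat K, dist;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::calibrateCamera(objPts, imgPts, gray.size(), K, dist, rvecs, tvecs);
std::cout << "RMS = " << rms << "\nK = " << K << std::endl;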

Tetragramm (2016-04-04 06:56:10 -0600)
