
dtvsilva's profile - activity

2020-04-26 02:36:25 -0600 received badge  Popular Question (source)
2016-04-05 03:57:46 -0600 commented answer solvePnP large (~100) pixel re-projection error

I now think the points from the laser are probably wrong. When detecting the ball arc it might have caught part of my leg; as a consequence the arc seems bigger than it actually is, and the laser position is higher than it actually is.

I'm going to do some tests soon and report back.

2016-04-05 03:56:07 -0600 commented answer solvePnP large (~100) pixel re-projection error
  • Camera calibration is done with a typical chessboard and OpenCV's calibrateCamera function. This is done to obtain the intrinsic matrix and distortion coefficients.
  • Now, I want to know where the camera is in relation to the laser scanner. Using a ball, I get the X, Y and Z coordinates of its centroid in the laser coordinate system (world points).
  • With the camera I get the centroid's position in the image (X and Y in pixels - image points).
  • solvePnP is used with these points; however, the results have large errors and the resulting transformation is wrong by about 0.5 meters along Z.
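For reference, the projection model solvePnP is fitting can be sketched in numpy. K below uses the intrinsics posted later in this thread; R, t and the test point are made-up illustrative values, not the actual calibration:

```python
import numpy as np

# Pinhole model: pixel = K * (R * X + t), then divide by depth.
K = np.array([[502.21, 0.0, 476.11],
              [0.0, 502.69, 360.73],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                  # hypothetical rotation, laser frame -> camera frame
t = np.array([0.0, 0.0, 2.0])  # hypothetical translation (metres)

def project(X):
    """Project a 3D point in the laser frame into pixel coordinates."""
    Xc = R @ X + t             # laser frame -> camera frame
    return K[:2] @ np.append(Xc[:2] / Xc[2], 1.0)

px = project(np.array([0.0, 0.0, 0.0]))  # the laser origin, 2 m in front of the camera
```

solvePnP searches for the R and t that make these projections match the measured image points; a large residual means either the correspondences or the model are off.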
2016-04-05 03:31:13 -0600 received badge  Enthusiast
2016-04-04 17:14:44 -0600 commented answer solvePnP large (~100) pixel re-projection error

I don't think it's possible unless I get a bigger ball, which is pretty hard since the one I'm using is already 95 cm in diameter. The other alternatives would be a non-flat ground, which would mess with the rest of the program, or moving the equipment, which is also not possible.

I can redo the calibration with that sample code; however, the current matrix was obtained with code very similar to what that link shows, and it returned an RMS of ~0.5 pixels.

2016-04-04 04:45:17 -0600 received badge  Student (source)
2016-04-04 03:47:17 -0600 commented answer solvePnP large (~100) pixel re-projection error

Looking at the picture, the orientation looks right, but the coordinate system should be lower. I'm going to add an image to my post; it's basically the same result that you got, but with both coordinate systems and in 3D.

About increasing the span... unfortunately, it's not possible. The camera and laser positions are static.

2016-04-02 19:44:03 -0600 commented answer solvePnP large (~100) pixel re-projection error

It gives me X and Y (the error is 30 mm). Z is calculated based on the arc that the laser scanner "sees", i.e., if Z is 0.3 it means that the centroid is 0.3 meters above the laser scanner.
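The geometry behind that Z estimate can be sketched as follows. This is my reading of the setup, not the author's actual code: the scanner slices the ball in a circle, and with a known ball radius the centre's distance from the scan plane follows from Pythagoras (the sign has to come from context):

```python
import math

def centroid_height(ball_radius, arc_radius):
    """Distance of the sphere centre from the laser's scan plane.

    The scanner sees a circular cross-section of radius arc_radius;
    the centre then sits sqrt(R^2 - r^2) away from the scan plane.
    """
    return math.sqrt(ball_radius**2 - arc_radius**2)

# Ball from this thread: 95 cm diameter -> R = 0.475 m; r is hypothetical.
h = centroid_height(0.475, 0.30)
```

This also shows how a leg caught in the scan could bias Z: an over-estimated arc radius shrinks the computed centre height.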

2016-04-02 19:19:56 -0600 commented answer solvePnP large (~100) pixel re-projection error

With a laser range scanner.

2016-04-02 13:53:36 -0600 commented answer solvePnP large (~100) pixel re-projection error

It's not an actual shape; it's the centroid of a sphere in 25 images. The camera is always at the same position, so I was hoping to take 25 images of a ball in different positions and calculate the transformation from my reference frame to the camera using solvePnP. No need to apologize, I'm grateful for your help.

2016-04-01 03:50:34 -0600 commented answer solvePnP large (~100) pixel re-projection error

I don't have the exact ground truth, but the transformation matrix returned by solvePnP is:

T =    [-0.764, -0.642,  0.055, -1.435;
        -0.113,  0.050, -0.992, -0.012;
         0.635, -0.765, -0.111,  2.200;
         0,      0,      0,      1]

and its inverse:

T^-1 = [-0.764, -0.113,  0.635, -2.495;
        -0.643,  0.050, -0.765,  0.761;
         0.055, -0.992, -0.111,  0.311;
         0,      0,      0,      1]

I know that the rotation part of T^-1 is close to reality, as are the X and Y translations; however, the Z translation is off by about 0.5 meters.

Yes, points are in the same order.
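As a sanity check on the matrices above: for a rigid transform the inverse is [Rᵀ | −Rᵀt], which can be verified quickly in numpy:

```python
import numpy as np

# T as returned by solvePnP in this thread (rounded to 3 decimals).
T = np.array([[-0.764, -0.642,  0.055, -1.435],
              [-0.113,  0.050, -0.992, -0.012],
              [ 0.635, -0.765, -0.111,  2.200],
              [ 0.000,  0.000,  0.000,  1.000]])

# Rigid-transform inverse: the rotation transposes, the translation becomes -R^T t.
R, t = T[:3, :3], T[:3, 3]
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -R.T @ t
```

The translation column of T_inv reproduces the posted T^-1 (≈ [-2.495, 0.761, 0.311]) to rounding error, so the two posted matrices are at least internally consistent; the 0.5 m Z offset must come from the fit itself.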

2016-03-31 11:31:43 -0600 received badge  Editor (source)
2016-03-31 11:28:07 -0600 asked a question solvePnP large (~100) pixel re-projection error

Hi,

I have been trying to find the camera pose in relation to an object frame, but I'm getting unstable results and large re-projection errors (100 pixels or more in total).

I know that the object points and image points are correct. The intrinsic parameters and distortion coefficients were obtained with OpenCV's calibrateCamera with minimal re-projection error (0.5 pixels).

I have tried CV_EPNP and solvePnPRansac; both return about the same results, or worse.

The code:

#include <opencv2/opencv.hpp>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

cv::Mat intrinsic_matrix = (cv::Mat_<double>(3, 3) <<
                          502.21, 0, 476.11,
                          0, 502.69, 360.73,
                          0, 0, 1);

cv::Mat distortion_coeffs = (cv::Mat_<double>(1, 5) <<
    -3.2587021051876525e-01, 1.1137886872576558e-01,
    -8.0030372520954252e-04, 1.4677531243862570e-03,
    -1.6824659875846807e-02);

// Intrinsic matrix and distortion coefficients are read from a file

vector<cv::Point3f> objectPoints;
vector<cv::Point2f> imagePoints;

if (pcl::io::loadPCDFile<pcl::PointXYZ> ("lms1.pcd", *Lms1PointCloud) == -1)    //* load the file
{
    PCL_ERROR ("Couldn't read file lms1.pcd \n");
    return (-1);
}
if (pcl::io::loadPCDFile<pcl::PointXYZ> ("singleCamCloud.pcd", *SingleCamCloud) == -1)    //* load the file
{
    PCL_ERROR ("Couldn't read file singleCamCloud.pcd \n");
    return (-1);
}

lms1PointCloud.points = Lms1PointCloud->points;
singleCamCloud.points = SingleCamCloud->points;

// Fill vectors objectPoints and imagePoints
for (size_t i = 0; i < singleCamCloud.points.size(); i++)
{
    imagePoints.push_back(cv::Point2f(singleCamCloud.points[i].x, singleCamCloud.points[i].y));
    objectPoints.push_back(cv::Point3f(lms1PointCloud.points[i].x, lms1PointCloud.points[i].y, lms1PointCloud.points[i].z));
}

cv::Mat rotation_vector;
cv::Mat translation_vector;

solvePnP(objectPoints, imagePoints, intrinsic_matrix, cv::noArray(), rotation_vector, translation_vector, false, CV_ITERATIVE);

// Projection of objectPoints according to solvePnP
cv::Mat test_image = cv::Mat::zeros( 720, 960, CV_8UC3 );
vector<cv::Point2f> reprojectPoints;
cv::projectPoints(objectPoints, rotation_vector, translation_vector, intrinsic_matrix, cv::noArray(), reprojectPoints);

double sum = cv::norm(reprojectPoints, imagePoints);

std::cout << "sum=" << sum << std::endl;
// Draw projected points (red) and real image points (green)
int myradius=5;
for (size_t i = 0; i < reprojectPoints.size(); i++)
{
    cv::circle(test_image, cv::Point(reprojectPoints[i].x, reprojectPoints[i].y), myradius, cv::Scalar(0,0,255),-1,8,0);
    cv::circle(test_image, cv::Point(imagePoints[i].x, imagePoints[i].y), myradius, cv::Scalar(0,255,0),-1,8,0);
}
imwrite( "test_image.jpg", test_image );
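One note on the error figure: cv::norm(a, b) over two point vectors returns the L2 norm of the stacked difference, i.e. a total over all points, not a per-point error. Dividing by sqrt(N) gives the per-point RMS. A numpy illustration with made-up points:

```python
import numpy as np

# Two small made-up point sets standing in for reprojectPoints / imagePoints.
reprojected = np.array([[100.0, 200.0], [310.0, 420.0]])
measured    = np.array([[103.0, 204.0], [310.0, 416.0]])

# Equivalent of cv::norm(reprojected, measured): L2 norm of all coordinate differences.
total = np.linalg.norm(reprojected - measured)
rms_per_point = total / np.sqrt(len(measured))
```

By that reading, 94.43 over 25 points is roughly 18.9 px RMS per point, still far too large, but smaller than the headline number suggests.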

Object points file and image points file (dropbox links).

In these conditions I get a re-projection error of 94.43. The image below shows the original image points (green) and the re-projected image points (red).

(image: original vs. re-projected points)

I'm also not sure how I should use the distortion coefficients. Since the image points are already obtained from an undistorted image, I opted not to use them in solvePnP and projectPoints; is this correct? I don't think this is where the large re-projection error comes from, though, since the error doesn't change much whether I use them or not.
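For context, OpenCV's (k1, k2, p1, p2, k3) coefficients describe radial and tangential distortion of the normalised coordinates ((u - cx)/fx, (v - cy)/fy); if the image points really come from an undistorted image whose intrinsics match the matrix used, passing empty coefficients is consistent. A numpy-free sketch of the forward model, using the coefficients posted above (the test point is hypothetical):

```python
# Brown-Conrady distortion as used by OpenCV, applied to a normalised point.
# Coefficients are the ones posted in this question (rounded).
k1, k2, p1, p2, k3 = (-0.32587021, 0.11137887,
                      -0.00080030, 0.00146775, -0.01682466)

def distort(x, y):
    """Apply radial + tangential distortion to normalised coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

xd, yd = distort(0.1, 0.05)  # hypothetical normalised point
```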

I can't seem to find an explanation for such a large error...

If you need any more details feel free to ask. Thanks in advance.

EDIT: An image to help visualize the problem. See comments below.

(image: both coordinate frames in 3D)

Green is the Z camera axis, orange with the frame overlapped is my reference frame ... (more)