
solvePnPRansac gave unstable results (has video)

asked 2017-10-17 20:19:58 -0500

sonnguyen512


I want to calculate the camera pose corresponding to the chessboard. Currently, I place 3 markers at the top-left, top-right, and bottom-left of the chessboard and use them to estimate the 3D world coordinates of the chessboard corners. The 2D chessboard corners are detected with OpenCV's findChessboardCorners function.

I tested solvePnPRansac with the different algorithms SOLVEPNP_ITERATIVE, SOLVEPNP_DLS, SOLVEPNP_EPNP, SOLVEPNP_UPNP, and SOLVEPNP_P3P, but I always got unstable results.

I also followed this article to initialize the rvec and tvec, but the result didn't improve.

Do you have any ideas for improving the situation?

Here is the current unstable result:

Thank you in advance.



Can you try drawing the axes on the video? The ArUco module has a function for that HERE.

It is possible for more than one rvec value to refer to the same orientation, which could be the problem.

Secondly, what do the numbers mean? Are they locations relative to the camera?

It looks like the numbers are just negative numbers of the same magnitude, so you might be able to get away with just taking the absolute value, depending on what they represent.

Tetragramm ( 2017-10-17 20:42:37 -0500 )

Can you explain what points you use for the pose estimation method? If you use findChessboardCorners, there is no need for a RANSAC approach, nor for the 3 markers.

Also, 3 marker points are not enough for the PnP problem (the minimum is 4, and RANSAC is useless in this case).

Eduardo ( 2017-10-18 05:01:19 -0500 )

Hi Tetragramm,

My OpenCV version doesn't have the ArUco module yet, so I draw the axes of the chessboard by projecting the XYZ axes using the rvec and tvec from solvePnP.

Here's the result with solvePnPRansac

Here's the result with solvePnP

About the numbers: they are the world coordinates of the 3 markers, and the negative values are correct because the chessboard lies in the negative region. The 3D world coordinates of the chessboard corners are calculated based on these marker positions. Is it possible to run solvePnP with negative coordinates?

sonnguyen512 ( 2017-10-18 20:54:40 -0500 )


My camera is attached to an HTC Vive HMD, and I want to calculate the relative pose between the camera and the Vive HMD. The chessboard is used to calculate this correspondence. After getting the relative position between the camera and the Vive HMD, the chessboard will be removed.

I use the 3 markers to calculate the chessboard corners (9x6) in world coordinates, so I have 54 points in total.

sonnguyen512 ( 2017-10-18 21:04:59 -0500 )

1 answer


answered 2017-10-18 22:40:38 -0500

Tetragramm

updated 2017-10-18 23:51:53 -0500

This is an answer because it's so long.

Those videos are listed as unavailable.

Wait, so those are Vive tracking pucks in the corners? And those are the XYZ points displayed in between the images? Is the display just cutting off the negative sign that's always in those boxes?

I think what you want to be doing is forgetting the Vive tracking pucks entirely and using the stereoCalibrate function. With a collection of images with chessboard detections, it will give you the translation and rotation of the cameras relative to each other. If you set the Vive as the first camera, it will be at (0,0,0) in both rvec and tvec, and the R and T outputs will be the relative location of the second camera.

Then you can use the triangulatePoints function to find the 3D world points of the chessboard, and verify them against what you measure from the Vive tracking pucks.

Do be careful, because the rotation and translation you get from the Vive feedback (if you're reading them directly from the API) are in an OpenGL coordinate system, which takes some altering to match the OpenCV coordinate system. Specifically, multiply the rotation by

[1  0  0]
[0 -1  0]
[0  0 -1]

Then you have the location and orientation of the Vive relative to the world coordinate system, which is the inverse of what rvec and tvec store.


EDIT: There are three separate problems I see.

  1. The findChessboardCorners function is returning very messy results. The problem isn't with solvePnP; it's with the detected chessboard corners. You can see they don't match up in the frames where it goes bad.
  2. Finding the relative location of the second camera. This is exactly what stereoCalibrate is for. Use that, and you can see it gives you the relative transformation from the Vive to the other camera, if the Vive is in imagePoints1, and the other camera in imagePoints2. Importantly, this does not care about what coordinate system you put the objectPoints in. Just use the chessboard corners all at z = 0 like any normal camera calibration.
  3. Finding the location of things in the real world. Once you've run stereoCalibrate, you can read the absolute location of the Vive and calculate the absolute location of the camera from the calibration values. From there, you triangulate to get the absolute location of points in the images. This is properly another question entirely, since finding the relative location of the cameras shouldn't require dealing in absolute coordinates.


Sorry about the videos; they were set to private mode. Please check again, the links work now.

There are some points I'm unclear on. How can I set the Vive as the first camera? Do you mean setting the Vive coordinate system's (0,0,0) at the first camera? Since there is a distance between the Vive HMD and the stereo camera mounted on the front, I need to calculate this difference (rotation and translation).

Second, is the (0,0,0) in both rvec and tvec, as you said, the rvec and tvec between the HMD and the stereo camera?

I admit that the coordinate systems in OpenGL and OpenCV confused me at first, but I did convert all the coordinates into one common coordinate system in the end, as you mentioned.

sonnguyen512 ( 2017-10-18 23:21:28 -0500 )

And do the 3D world points of the chessboard from triangulatePoints (in the first camera's coordinates) match the 3D world points from the Vive tracking pucks (world coordinates = Vive coordinates)?

sonnguyen512 ( 2017-10-18 23:22:24 -0500 )

See the EDIT in the answer, it got long again.

Tetragramm ( 2017-10-18 23:45:30 -0500 )
  1. For finding chessboard corners: all the corners were found correctly by the findChessboardCorners function, as you can see in the 2 bottom images in the video.

  2. I didn't think about using the Vive camera on the Vive HMD. But your suggestion of using the Vive camera and doing stereo calibration with another camera to find R and T is a good approach. I'll try it.

Regarding the unstable result of solvePnP (the 2 top images in the video), do you know how to improve it?

sonnguyen512 ( 2017-10-19 01:22:25 -0500 )

Well, for step 2 it doesn't matter which two cameras you use, or whether they are tracked, so just use the ones you have. The Vive camera isn't that great.

Something is wrong with the corners in the bottom video. Why are they changing? You can clearly see that the drawn chessboard corners in the top video are not on the actual corners.

Tetragramm ( 2017-10-19 17:36:41 -0500 )

I call findChessboardCorners in real time, and the bottom video is the result of that. For the top video, after getting the rvec and tvec from solvePnP, I reprojected the chessboard corners onto the camera image to check the error. Because of that error, the drawn chessboard corners didn't match the actual corners.

sonnguyen512 ( 2017-10-19 21:23:02 -0500 )

Seen: 262 times

Last updated: Oct 18 '17