solvePnP for camera pose - Am I missing a step?

asked 2020-10-08 15:08:02 -0600


updated 2020-10-08 15:16:24 -0600

Hello! I'm attempting to use solvePnP to obtain the camera's extrinsic pose, with little luck so far. I have a camera with calibrated intrinsics, a set of 3D world coordinates, and the corresponding 2D detections of those points from that camera.

Running solvePnP to get the object pose works great! I can verify the results with projectPoints (placing the camera at the origin and transforming the 3D points into the camera frame myself): everything projects quite nicely and looks good.
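
Here is a minimal sketch of that verification step (camera_matrix, dist_coeffs, object_points, and image_points stand in for my actual calibration data and correspondences):

    import cv2
    import numpy as np

    # object_points: Nx3 world coordinates, image_points: Nx2 detections
    ret, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                   camera_matrix, dist_coeffs)

    # Transform the 3D points into the camera frame myself...
    R, _ = cv2.Rodrigues(rvec)
    points_cam = (R @ object_points.T + tvec).T

    # ...then project with an identity pose (camera at the origin).
    reprojected, _ = cv2.projectPoints(points_cam, np.zeros((3, 1)),
                                       np.zeros((3, 1)),
                                       camera_matrix, dist_coeffs)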

For reference, the solvePnP results are:

    T = [[-58.40485524]
         [-85.97754376]
         [186.19652683]]

    R (after Rodrigues conversion) =
        [[ 0.9492994  -0.15497146 -0.2735224 ]
         [-0.00140185 -0.87213378  0.48926548]
         [-0.3143703  -0.46407599 -0.82813331]]

I've seen a number of answers describing how to invert these values to obtain the camera pose, such as here and here. I've tried several different formulations of the inverse and they all produce exactly the same numbers, so I think I'm doing that part correctly. But...
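
The inversion I'm computing is essentially this (a sketch, with R and tvec being the solvePnP results from above):

    # Camera pose: invert the world->camera transform returned by solvePnP.
    R_inv = R.T                  # the inverse of a rotation matrix is its transpose
    T_inv = -R.T @ tvec          # camera position in world coordinates
    rvec_inv, _ = cv2.Rodrigues(R_inv)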

I would expect that running projectPoints with the inverted solvePnP values and the original world positions of my 3D points would produce a result visually indistinguishable from my pre-inversion projectPoints test, but instead the visual result is completely wrong.

The inverted results are:

    T_inv = [[113.85782537]
             [  2.37433238]
             [180.28635527]]

    R_inv = [[ 0.9492994  -0.00140185 -0.3143703 ]
             [-0.15497146 -0.87213378 -0.46407599]
             [-0.2735224   0.48926548 -0.82813331]]
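
And the test that fails looks roughly like this (world_points being the original, untransformed 3D coordinates):

    # Project the original world points using the inverted pose --
    # this is the step whose output looks completely wrong.
    reprojected_inv, _ = cv2.projectPoints(world_points, rvec_inv, T_inv,
                                           camera_matrix, dist_coeffs)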

Is it more likely that my inverse calculation is incorrect, or am I missing a step and need to prepare the camera R and T values in some other way before feeding them into projectPoints? Alternatively, is there a better method than projectPoints for validating the result?
