solvePnP camera coordinates.

asked 2015-06-18 08:34:41 -0600

stillNovice

updated 2015-06-22 01:41:33 -0600

I am running solvePnPRansac using arbitrary points, found with a FAST detector and triangulated to 3D. I get my rvec and tvec Mats back, which give the pose of the object (world) frame relative to the camera. When I print them, though, they all stay at around zero, no matter how I move the camera.

I invert the transform to get camera-centric coordinates and see a similar thing: the camera pose hovers between -0.01 and 0.1, regardless of the motion.

Could this be because the coordinates are relative to a random one of the sampled points, which changes between frames?

How can I get the actual camera world coordinates to update as I move it?

Thanks!


Comments

Well, solvePnPRansac is quite robust if you give it enough iterations, so your problem is somewhere in your input data or in how you are processing it: 1) Be aware that the input and output matrices (camera matrix, rvec, tvec) are of type CV_64F (64-bit floating point), so do not try to read their values as float when typecasting. 2) Your triangulation method is not giving you good results. If your 3D points aren't good, solvePnP won't give you a good movement. Additionally, the 3D points should be in metres and in world coordinates. 3) The tracking method is giving you improper results, so the 2D input points are wrong. 4) The matching between the 2D and 3D points passed as input is somehow wrong. Each i-th element in the 2D vector corresponds to the i-th element in the 3D vector. They MUST match.

R.Saracchini ( 2015-06-22 03:18:31 -0600 )
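As an illustration of point (1), here is a minimal sketch of calling solvePnPRansac and reading the outputs as double. The helper name and its parameters are placeholders for whatever the asker's pipeline produces; only the cv:: calls are actual OpenCV API.

// A sketch of point (1): solvePnPRansac returns rvec/tvec as CV_64F,
// so their elements must be read as double, not float.
#include <opencv2/calib3d/calib3d.hpp>
#include <iostream>
#include <vector>

// Hypothetical helper: objectPoints are the triangulated 3D points and
// imagePoints the matching 2D detections, in the same order (point 4).
void estimatePose(const std::vector<cv::Point3f>& objectPoints,
                  const std::vector<cv::Point2f>& imagePoints,
                  const cv::Mat& K,          // 3x3 camera matrix, CV_64F
                  const cv::Mat& distCoeffs) // distortion coefficients
{
    cv::Mat rvec, tvec;
    cv::solvePnPRansac(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

    // Correct: the output Mats hold 64-bit doubles.
    std::cout << "tvec: " << tvec.at<double>(0) << " "
              << tvec.at<double>(1) << " "
              << tvec.at<double>(2) << std::endl;

    // Wrong: tvec.at<float>(0) would reinterpret the raw bytes of a
    // double and typically print tiny, near-zero garbage values.
}

Reading CV_64F data with at&lt;float&gt; silently reinterprets the bytes and usually yields tiny near-zero values, which is consistent with the symptom described in the question.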

Thank you! I have checked my 3D points in a 3D programme, and they look good. They are not in metres, though, so I will look into that. When I loop through the feature points and triangulate, I push the corresponding 2D point into a new vector at the same time, so I am pretty sure they match. Also, I think the 3D coordinates I am getting use the camera sensor as the zero point... is this maybe the issue? What do you mean by world coordinates? Do I need to offset the points to a different zero point? Thanks again.

stillNovice ( 2015-06-22 10:25:00 -0600 )

If you are using the camera centre as the origin, it means you are working in camera space instead of world space, and you won't be able to compute the motion of the camera that way. Since you NEED a reference, set the world coordinate space to the camera pose of the first triangulation. Then compute the camera pose and rotation relative to that first pose. This means that after every triangulation you must convert the triangulated points to world-space coordinates. For this, just use the matrix T = E^(-1) (the inverse of the camera extrinsic matrix given by solvePnP) to transform every triangulated point. Note that transforming the point (0,0,0) by T gives you the camera centre in world coordinates.

R.Saracchini ( 2015-06-23 04:33:12 -0600 )
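A minimal sketch of the inversion described above, assuming rvec/tvec are the CV_64F outputs of solvePnP; the helper names are hypothetical, while cv::Rodrigues and the Mat algebra are standard OpenCV.

// solvePnP's (rvec, tvec) map world to camera coordinates:
// x_cam = R * x_world + t. Inverting gives x_world = R^T * (x_cam - t),
// so the camera centre (x_cam = 0) sits at C = -R^T * t in world space.
#include <opencv2/calib3d/calib3d.hpp>

// Hypothetical helper: camera centre in world coordinates.
cv::Mat cameraCenter(const cv::Mat& rvec, const cv::Mat& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);  // 3x3 CV_64F rotation from the Rodrigues vector
    return -R.t() * tvec;    // C = -R^T * t
}

// Hypothetical helper: transform one camera-space point (3x1, CV_64F)
// into world space, e.g. to convert freshly triangulated points.
cv::Mat cameraToWorld(const cv::Mat& rvec, const cv::Mat& tvec,
                      const cv::Mat& pCam)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    return R.t() * (pCam - tvec); // x_world = R^T * (x_cam - t)
}

Calling cameraCenter on each frame's pose yields a trajectory in the fixed world frame, which is what you would plot to see the camera actually move.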

Thank you! That helps. I now have values that make sense, BUT they are jumping around a LOT. I am trying to use feature matching between the previous and current frames, but it seemingly makes no difference. I have opened a new question about this, as I think I may be making a mistake somewhere: http://answers.opencv.org/question/64...

Thanks again.

stillNovice ( 2015-06-23 07:03:59 -0600 )