Converting triangulation output to the world coordinate system

asked 2017-06-10 10:45:00 -0600 by Ibra

I have a two-camera stereo setup. I calibrated each camera using OpenCV's initCameraMatrix2D, then ran stereoCalibrate to get the camera matrices K1, K2 and the fundamental matrix F.
I tested the calibration with a checkerboard and the correspondence (epipolar constraint) equation x'^T * F * x = 0, and the residuals came out around 0.00#, so I believe this means the intrinsic calibration is correct?
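(For reference, a minimal sketch of that residual check in Python; pts1/pts2 and the function name are illustrative, not from the original post:)

    import numpy as np

    # x2^T . F . x1 should be ~0 for each corresponding pair
    # (homogeneous pixel coordinates). pts1, pts2: Nx2 matched corners,
    # F: 3x3 fundamental matrix from stereoCalibrate.
    def epipolar_residuals(pts1, pts2, F):
        ones = np.ones((len(pts1), 1))
        x1 = np.hstack([pts1, ones])                 # N x 3
        x2 = np.hstack([pts2, ones])                 # N x 3
        return np.einsum('ij,jk,ik->i', x2, F, x1)   # one residual per pair

    # e.g. print(np.abs(epipolar_residuals(pts1, pts2, F)).mean())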

Next, I fixed an ArUco marker at the desired world origin and used estimatePoseSingleMarkers to get the rvec and tvec for each camera, then used them to build the projection matrices P1 and P2, where P = K[R|t], R comes from Rodrigues(rvec, R), and t is tvec directly. This should make my marker center my world origin?
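(A minimal sketch of building those projection matrices, assuming OpenCV in Python; projection_matrix and the variable names are illustrative:)

    import cv2
    import numpy as np

    # P = K [R | t] for the marker pose seen by one camera.
    # rvec, tvec: the pose returned by cv2.aruco.estimatePoseSingleMarkers.
    def projection_matrix(K, rvec, tvec):
        R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> 3x3
        Rt = np.hstack([R, np.asarray(tvec).reshape(3, 1)])
        return K @ Rt                                 # 3x4 projection matrix

    # P1 = projection_matrix(K1, rvec1, tvec1)
    # P2 = projection_matrix(K2, rvec2, tvec2)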

Now I want to do triangulation, so I used the triangulatePoints method.

I detected the 4 marker corners in both cameras and triangulated them. My ArUco marker is 9 cm, so each corner should be located at (±0.045, ±0.045, 0), since the marker center is supposed to be my world origin. But as the result of triangulation, I get:
(-0.0482, 0.0678, 0.0028)
( 0.0420, 0.0659, 0.0061)
( 0.0414,-0.0167, 0.0059)
(-0.0476,-0.0167, 0.0047)
This is of course after converting the points from homogeneous coordinates using convertPointsFromHomogeneous. The Y values seem way off, by a few centimeters.
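(For concreteness, a sketch of that triangulation step under the same assumptions; pts1/pts2 hold the detected corner pixels and P1/P2 are the projection matrices built above:)

    import cv2
    import numpy as np

    # Triangulate the 4 detected corner pixels (pts1, pts2: 4x2 float arrays);
    # triangulatePoints expects 2xN inputs and returns 4xN homogeneous points.
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    corners3d = cv2.convertPointsFromHomogeneous(pts4d.T).reshape(-1, 3)
    # expected with the marker center as origin: (+-0.045, +-0.045, 0)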

I created 5 points located at (0, 0, 0, 1) and (±0.045, ±0.045, 0, 1), projected them by multiplying by the projection matrices, and drew the projected pixel coordinates, but they are shifted a bit in the image from the expected locations.
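(A sketch of that projection test; world, project, and half are illustrative names:)

    import numpy as np

    # Project the marker center and the four corners (homogeneous world
    # points) through each projection matrix and compare with the detections.
    half = 0.045  # half the 9 cm marker size
    world = np.array([[0.0,   0.0,  0, 1],
                      [-half,  half, 0, 1], [half,  half, 0, 1],
                      [ half, -half, 0, 1], [-half, -half, 0, 1]])

    def project(P, Xh):
        x = (P @ Xh.T).T            # N x 3 homogeneous image points
        return x[:, :2] / x[:, 2:]  # divide by w -> pixel coordinates

    # px1 = project(P1, world); px2 = project(P2, world)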

[images: the projected points drawn on the left and right camera views, slightly shifted from the marker corners]

To test the triangulation:

  • I detected the 4 marker corners in both cameras, triangulated them, then reprojected the triangulated points back onto the image plane using projectedPoints = P * triangulatedPoints. I get back almost exactly the same marker pixel coordinates that I detected before triangulation and reprojection. This should mean that my calibration, projection matrices, and triangulation are correct (or not?); see the sketch after the images below.

[images: left and right views; red is the detected corners, blue is the reprojection of the triangulated corners]
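(A sketch of that reprojection check, reusing pts4d from the triangulation sketch above:)

    import numpy as np

    # Reproject the triangulated homogeneous points and compare with the
    # detected corners; small residuals only prove internal consistency,
    # not that the world frame is where we think it is.
    reproj = (P1 @ pts4d).T                 # 4 x 3 homogeneous pixel coords
    reproj = reproj[:, :2] / reproj[:, 2:]  # divide by w
    err = np.linalg.norm(reproj - pts1, axis=1)  # per-corner pixel error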

So I don't get what is wrong. Maybe the triangulation result is correct but in a different coordinate system, not one with the marker at the origin? If so, how can I get the triangulated points relative to my marker's origin? Does it have to do with needing to undistort the points? Is using the rvec and tvec output from the ArUco marker a good way to get a correct extrinsic matrix?


Comments

I don't use stereo cameras, but does it possibly output in the rectified coordinate system? Dunno, but that could be the cause of the small differences.

More likely, the poses might not be estimated quite perfectly. Check your poses to see if the transform between them is the same as you got from stereoCalibrate. I'll bet there's a difference. I think you can use stereoCalibrate with USE_INTRINSIC_GUESS to get better poses, but I've never done it.

Tetragramm ( 2017-06-10 16:22:37 -0600 )

I used the USE_INTRINSIC_GUESS flag after estimating the K matrices with initCameraMatrix2D. After my triangulation test and the correspondence-equation test, shouldn't that mean my calibration, triangulation, and projection matrices are correct?

Ibra ( 2017-06-10 23:36:25 -0600 )

I read somewhere that triangulatePoints gives points in the coordinate system of the left camera. Do I need to multiply by the transformation from camera coordinates to world coordinates, which is the inverse of what I get from estimatePoseSingleMarkers? But this doesn't make sense, since the output values are just a few centimeters while the marker is over a meter away from the camera.

Ibra ( 2017-06-10 23:43:30 -0600 )

What I mean is that by using estimatePoseSingleMarker, you get the pose of one camera, then the other separately. If you use stereoCalibrate with USE_INTRINSIC_GUESS instead of estimatePose, then you get a result for both cameras simultaneously.

What I mean is take your camera poses from estimatePoseSingleMarker and see if they match the R and T from your original stereoCalibrate. I'm guessing there's uncertainty in the results of estimatePoseSingleMarker, and the results are slightly off the stereo baseline. Probably closer together since the Z-values are positive.

Tetragramm ( 2017-06-11 08:41:08 -0600 )

stereoCalibrate returns the R and T between the 1st and 2nd cameras, while estimatePoseSingleMarkers returns the R and T from the marker to each camera. So do I multiply the output of estimatePoseSingleMarkers from the 1st camera by the output of stereoCalibrate and compare it to the output from the 2nd camera? (See the sketch after this comment.)

Also, I didn't do rectification/undistortion of the points. Could this be related?

Ibra ( 2017-06-11 09:21:42 -0600 )
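(A sketch of that consistency check, assuming R1/t1 and R2/t2 are the marker poses in each camera, as 3x3 rotations and 3x1 translations, and R_stereo/T_stereo came from stereoCalibrate; all names illustrative:)

    import cv2
    import numpy as np

    # Relative transform camera1 -> camera2 implied by the two marker poses:
    # X_c1 = R1 X_w + t1 and X_c2 = R2 X_w + t2  =>  X_c2 = R_rel X_c1 + t_rel
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1

    # Compare with the R, T returned by stereoCalibrate:
    angle_deg = np.degrees(np.linalg.norm(cv2.Rodrigues(R_rel @ R_stereo.T)[0]))
    trans_err = np.linalg.norm(t_rel - T_stereo.reshape(3, 1))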

.... Huh. You are entirely correct. I don't know what I was reading that made me think it returned individual rvec and tvec for each camera too.

Yes, you can try that; it should fix the distance, though maybe not the overall accuracy.

Yes, you should undistort the points. That information is not contained in the projection matrix.

Tetragramm ( 2017-06-11 11:58:23 -0600 )

I undistorted the points before triangulation, and it solved the problem. Thanks!

Ibra ( 2017-06-27 22:51:31 -0600 )
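(For later readers, a minimal sketch of that fix: passing P=K to cv2.undistortPoints keeps the output in pixel coordinates, so the same K[R|t] projection matrices still apply. dist1/dist2 and the other names are illustrative:)

    import cv2
    import numpy as np

    # Undistort the detected corner pixels before triangulating.
    # pts1, pts2: Nx1x2 float32 arrays of detected corners; dist1, dist2
    # are the distortion coefficients from calibration.
    u1 = cv2.undistortPoints(pts1, K1, dist1, P=K1)
    u2 = cv2.undistortPoints(pts2, K2, dist2, P=K2)
    pts4d = cv2.triangulatePoints(P1, P2,
                                  u1.reshape(-1, 2).T, u2.reshape(-1, 2).T)
    corners3d = cv2.convertPointsFromHomogeneous(pts4d.T).reshape(-1, 3)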