I'm posting this as an answer because it's too long for a comment.
Those videos are listed as unavailable.
Wait, so those are Vive tracking pucks in the corners? And those are the XYZ points displayed in between the images? Is the display just cutting off the negative sign that's always in those boxes?
I think what you want to be doing is forgetting the Vive tracking pucks entirely and using the stereoCalibrate function. With a collection of images with chessboard detections, that will give you the translation and rotation of the cameras relative to each other. If you set the Vive as the first camera, it will be at (0,0,0) in both rvec and tvec, and the R and T outputs will be the relative location of the second camera.
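To give a rough idea, here is a minimal Python sketch of that call. It assumes you have already detected chessboard corners in matched image pairs and calibrated each camera's intrinsics separately; names like objpoints, imgpoints_l, imgpoints_r, K1, D1, K2, D2 and image_size are placeholders for your own data:

import cv2

# objpoints: list of (N, 3) float32 arrays of chessboard corner positions in board units
# imgpoints_l / imgpoints_r: lists of (N, 1, 2) float32 corner detections from the two cameras
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
flags = cv2.CALIB_FIX_INTRINSIC  # keep the per-camera intrinsics from the individual calibrations

ret, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    K1, D1, K2, D2, image_size,
    criteria=criteria, flags=flags)

# The first camera (the Vive) is the origin; R and T place the second camera relative to it.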
Then you can use the triangulatePoints function to find the 3D world points of the chessboard, and verify those against what you measure from the Vive tracking pucks.
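Again as a rough sketch, assuming K1, D1, K2, D2, R, T from the stereoCalibrate call above and one pair of detected corner sets pts_l, pts_r as (N, 2) float32 arrays (placeholder names):

import cv2
import numpy as np

# Projection matrices: first camera at the origin, second camera at [R | T]
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

# Undistort the corner detections, keeping them in pixel coordinates (P=K)
pts_l_ud = cv2.undistortPoints(pts_l.reshape(-1, 1, 2), K1, D1, P=K1).reshape(-1, 2)
pts_r_ud = cv2.undistortPoints(pts_r.reshape(-1, 1, 2), K2, D2, P=K2).reshape(-1, 2)

# triangulatePoints takes 2xN arrays and returns 4xN homogeneous points
pts4d = cv2.triangulatePoints(P1, P2, pts_l_ud.T, pts_r_ud.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # Nx3 points in the first camera's frame

Checking pts3d against the known chessboard square size is a quick sanity check before bringing the Vive measurements into it.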
Do be careful, because the rotation and translation you get from the Vive feedback (if you're reading that directly from the API) are in an OpenGL-style coordinate system, which needs some conversion to match the OpenCV coordinate system. Specifically, multiply the rotation by
[1 0 0]
[0 -1 0]
[0 0 -1]
Then you have the location and orientation of the Vive relative to the world coordinate system, which is the inverse of what rvec and tvec store.
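As a sketch of that fix-up (R_vive and t_vive are placeholder names for the rotation matrix and translation you read back from the Vive API; I'm right-multiplying by the flip matrix here, so check the order against your own data):

import cv2
import numpy as np

flip = np.diag([1.0, -1.0, -1.0])  # the matrix above
R_cv = R_vive @ flip               # Vive orientation in the OpenCV convention

# The Vive pose is camera-to-world; rvec/tvec are world-to-camera, so invert
# the transform before comparing the two:
R_ext = R_cv.T
t_ext = -R_cv.T @ t_vive
rvec_ext, _ = cv2.Rodrigues(R_ext)  # now directly comparable to an OpenCV rvec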
EDIT:
So there are three separate problems I see.