I'm playing with stereo 3D reconstruction using one camera and rotating the object, but I can't seem to get a proper Z mapping.
I follow the typical process: calibrate the camera with a chessboard to get the camera matrix and distortion coefficients, take left and right images by rotating the object, then undistort the images. All this seems fine to me; the images look good.
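For reference, the calibration/undistortion step looks roughly like this (a minimal sketch; the 9x6 board size and file names are placeholders):

```python
import glob
import cv2
import numpy as np

# Board inner-corner count and file pattern are placeholder assumptions.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort the two shots of the rotated object.
left = cv2.undistort(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE), K, dist)
right = cv2.undistort(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE), K, dist)
```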
I get the disparity map with StereoBM.compute on the left and right images. There are some black areas, but it is mostly gray, so Z seems to be computed for most of the image.
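Something like this (numDisparities and blockSize are just values I'm tuning; note that StereoBM.compute returns a 16-bit fixed-point disparity scaled by 16, so it needs dividing before reprojection):

```python
import cv2
import numpy as np

# left/right are the undistorted grayscale images from the snippet above.
# numDisparities must be a multiple of 16, blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp16 = stereo.compute(left, right)           # 16-bit fixed-point (x16)
disparity = disp16.astype(np.float32) / 16.0   # real disparity in pixels
```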
Then I use stereoRectify to get the Q matrix. I build the rotation matrix with Rodrigues from a rotation vector. My rotation is only about the Y axis, so the rotation vector is [0, angle, 0] (angle being the angle by which the object was rotated). The rotation matrix seems right as far as I can tell: I tried trivial angles and got what I expected.
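For concreteness, the rotation part (sketch; ~20 degrees is roughly my setup):

```python
import cv2
import numpy as np

angle = np.deg2rad(20)              # ~20 degree rotation between the shots
rvec = np.array([0.0, angle, 0.0])  # rotation about Y only
R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation matrix
```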
I also need the translation vector, so I used [cos(angle), 0, sin(angle)]: since I rotate only about Y, the camera is effectively translated along the arc of the rotation.
I call stereoRectify with the same camera matrix and distortion coefficients for both cameras, since it is the same camera.
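The call looks roughly like this (sketch; K and dist are from the calibration above, and T is the unit-circle translation I described, which is one of the assumptions I'm unsure about):

```python
import cv2
import numpy as np

# K, dist, R, angle, left come from the snippets above.
T = np.array([np.cos(angle), 0.0, np.sin(angle)])  # my translation guess

h, w = left.shape[:2]
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K, dist, K, dist, (w, h), R, T)
```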
When I call reprojectImageTo3D with Q and the disparity map, I get a result that looks OK in MeshLab from the right angle, but the depth seems way off when I move around (i.e. the Z depth of the object comes out at ~2x its width, when the real object is about 1/10th of its width).
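Roughly this (sketch; the PLY writer is just a quick ASCII dump so MeshLab can load the points):

```python
import cv2
import numpy as np

# disparity and Q come from the snippets above.
points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 float
mask = disparity > disparity.min()             # drop pixels with no match

# Quick ASCII PLY dump for viewing the point cloud in MeshLab.
xyz = points[mask].reshape(-1, 3)
with open("cloud.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n"
            f"element vertex {len(xyz)}\n"
            "property float x\nproperty float y\nproperty float z\n"
            "end_header\n")
    for x, y, z in xyz:
        f.write(f"{x} {y} {z}\n")
```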
So I'm wondering whether this is normal and expected because it's only 2 images with a ~20 degree angle difference, or whether I'm messing up somewhere. In particular, I wonder if I need to account somewhere for the distance from the camera to the object's center of rotation: I believe the rotation matrix and translation vector should take care of that, but they are on a unit basis.
I know this is a bit of a vague question, but I hope someone can confirm or refute my assumptions here.
Thanks