
3D Reconstruction with 1 camera

Playing with stereo 3D reconstruction using one camera and rotating the object, but I can't seem to get proper Z mapping.

EDITED following comments:

I follow the typical process:

  • calibrate the camera with the chessboard,

  • get the camera matrix and distortion.

  • Take left and right images by rotating the object: fixed camera, the object rotates on itself about the Y axis (the background is a green screen that I remove).

  • undistort the images.

All this seems fine to me. The images look good.
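In code, those steps are roughly the following (a minimal sketch in Python; the file names, board size and values are placeholders, not my exact setup):

    import glob
    import cv2
    import numpy as np

    board_size = (9, 6)  # inner corners of the chessboard (placeholder)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):  # chessboard shots (placeholder names)
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # camera matrix K and distortion coefficients dist
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

    # the two views of the object, taken before and after rotating it
    left = cv2.undistort(cv2.imread("left.png"), K, dist)
    right = cv2.undistort(cv2.imread("right.png"), K, dist)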

I get the disparity map with StereoBM.compute on the left and right images.

There are some black areas but mostly gray, so the Z seems to be computed for most of the image.
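The disparity step looks roughly like this (continuing from the snippet above; the StereoBM parameters are placeholders I am still tuning):

    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0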

Then I use stereoRectify to get the Q matrix:

I use a rotation matrix which I built using Rodrigues on a rotation vector.

My rotation is only about the Y axis, so the rotation vector is [0, angle, 0] (angle being the angle by which the object was rotated).

The Rotation matrix seems right as far as I can tell: I tried with trivial angles and I get what is expected.
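That part is just (continuing the sketch above; the angle is hard-coded here as an example):

    angle = np.deg2rad(20.0)  # ~20 degree rotation between the two shots
    rvec = np.array([0.0, angle, 0.0])  # rotation about Y only
    R, _ = cv2.Rodrigues(rvec)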

I also need the translation vector, so I used [cos(angle), 0, sin(angle)]: since I rotate only about Y, I then have a unit-less translation of the camera along the arc of the rotation. From my reading, rotation and translation matrices are unit-less. I have tried applying a scale factor to the translation vector (with [d*cos(angle), 0, d*sin(angle)]) to account for the distance from camera to center of rotation, but it only seems to scale the object (in X, Y, and Z, not just one dimension).

I use stereoRectify with the same camera matrix and distortion for both cameras since it is the same camera.
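So the translation vector and the stereoRectify call look roughly like this (d would be the camera-to-rotation-center distance mentioned above; the commented-out line is the scaled variant I tried):

    T = np.array([np.cos(angle), 0.0, np.sin(angle)])  # unit-less guess
    # T = d * np.array([np.cos(angle), 0.0, np.sin(angle)])  # scaled by distance d

    image_size = gray_l.shape[::-1]  # (width, height)
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K, dist, K, dist, image_size, R, T)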

When I use reprojectImageTo3D with Q and the disparity map, I get a result that looks OK in MeshLab when viewed from the right angle, but the depth seems way off when I move around (i.e. the Z depth of the object is ~2x the width, when the object is really 1/10th of the width).
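That last step is roughly:

    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    mask = disparity > disparity.min()  # keep only pixels with a valid disparity
    valid_points = points_3d[mask]      # these are the points I load into MeshLab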

So I'm just wondering if this is normal and expected because it's only 2 images with a ~20 degree angle difference, or if I'm just messing up somewhere.

In particular I wonder:

  • If I need to account somewhere for the distance from the camera to the center of rotation of the object: as I mentioned, I tried to apply that factor to the translation vector, but it only seems to scale the whole thing.

  • If it may be a problem with the application of the colors: I use one of the 2 images to get the colors, because I wasn't sure how I could use both. I am not sure how the disparity map maps to the original images: does it map to Left or Right or neither? I could see that if the color assignment to the disparity map is wrong, the rendering would look wrong, so that may be my problem. (What I currently do is sketched below.)
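For reference, the color assignment I currently do is roughly this (taking the colors from the left image, on the assumption that StereoBM's disparity is registered to the left view):

    colors = cv2.cvtColor(left, cv2.COLOR_BGR2RGB)[mask]
    # valid_points and colors now line up one-to-one and get written out
    # together (as a colored PLY) for MeshLab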

I know it is a bit of a vague question, but I hope someone can confirm or refute my assumptions here.

Thanks