OpenCV Q&A Forum — RSS feed — http://answers.opencv.org/questions/
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Feed built: Wed, 13 Jan 2016 10:47:19 -0600

**3D Reconstruction with 1 camera**
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/

Playing with stereo 3D reconstruction using **one camera** and *rotating* the object, but I can't seem to get a proper Z mapping.
**EDITED following comments:**
I follow the typical process:
- **calibrate** the camera with the **chessboard**,
- get the camera **matrix** and **distortion** coefficients,
- take **left** and **right** images by *rotating* the object: the camera is fixed, and the object rotates on itself about the Y axis (the background is a green screen that I remove),
- **undistort** the images.
All this seems fine to me. The images look good.
I get the **disparity** map with `StereoBM.compute` on the left and right images.
There are some black areas, but it is mostly gray, so Z seems to be computed for most of the image.
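For intuition, the per-scanline search that `StereoBM.compute` performs can be sketched with a toy SAD block matcher (plain stdlib Python with made-up pixel values; the real implementation works on whole images with larger windows and extra filtering):

```python
# Toy 1-D SAD block matcher: for a pixel x in the left scanline, find the
# shift d (the disparity) such that left[x] best matches right[x - d].
def match_disparity(left, right, x, win=1, max_disp=4):
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - win < 0:          # window would fall off the image
            break
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-win, win + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left  = [0, 0, 0, 0, 5, 9, 5, 0, 0, 0]
right = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]  # same feature, shifted 2 px left
d = match_disparity(left, right, x=5)   # disparity at the feature centre -> 2
```

Pixels where no shift matches well come out as invalid in the real disparity map, which is what the black areas are.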
Then I use `stereoRectify` to get the **Q matrix**:
I use a **rotation matrix** built with `Rodrigues` from a rotation vector. My rotation is only about the Y axis, so the rotation vector is `[0, angle, 0]` (`angle` being the angle by which the object was rotated).
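For reference, the matrix `Rodrigues` produces for a Y-only rotation vector can be checked against the closed-form Y-rotation matrix; here is a stdlib-Python sketch of Rodrigues' formula (not OpenCV's implementation, but the same math):

```python
import math

def rodrigues(rvec):
    """Rotation matrix from an axis-angle rotation vector (Rodrigues' formula)."""
    theta = math.sqrt(sum(v * v for v in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    k = [v / theta for v in rvec]        # unit rotation axis
    c, s = math.cos(theta), math.sin(theta)
    K = [[0.0, -k[2], k[1]],             # cross-product matrix [k]x
         [k[2], 0.0, -k[0]],
         [-k[1], k[0], 0.0]]
    # R = cos(theta)*I + (1 - cos(theta))*k*k^T + sin(theta)*[k]x
    return [[c * (i == j) + (1.0 - c) * k[i] * k[j] + s * K[i][j]
             for j in range(3)] for i in range(3)]

angle = math.radians(20)          # ~20-degree turntable step (assumed value)
R = rodrigues([0.0, angle, 0.0])  # rotation about Y only
# Expected closed form: [[cos, 0, sin], [0, 1, 0], [-sin, 0, cos]]
```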
The Rotation matrix seems right as far as I can tell: I tried with trivial angles and I get what is expected.
I also need the translation vector. Since I rotate only about Y, I used `[cos(angle), 0, sin(angle)]`, which gives a *unit-less* translation of the camera along the arc of the rotation. From my reading, rotation and translation matrices are unit-less. I have tried applying a scale factor to the translation vector (`[d*cos(angle), 0, d*sin(angle)]`) to account for the distance from the *camera* to the *center of rotation*, but it only seems to scale the object (in X, Y, and Z, not just one dimension).
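As a side note, the camera displacement implied by a turntable rotation can be worked out geometrically: if the camera sits at some distance `d` from the rotation axis (the value below is made up), the two equivalent camera positions are separated by the chord `2*d*sin(angle/2)`, so the baseline length depends on both `d` and the angle rather than being constant:

```python
import math

d = 0.5                     # assumed camera-to-rotation-axis distance (scene units)
theta = math.radians(20)    # turntable rotation between the two shots

# Rotating the object by +theta about Y is equivalent to swinging the
# camera by -theta around the axis, so the two (virtual) camera centres
# sit on a circle of radius d:
p1 = (0.0, 0.0, d)
p2 = (-d * math.sin(theta), 0.0, d * math.cos(theta))

# The baseline is the chord between them, not a unit vector:
baseline = math.dist(p1, p2)   # equals 2 * d * sin(theta / 2)
```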
I use `stereoRectify` with the same camera **matrix** and **distortion** for both cameras since it is the same camera.
When I `reprojectImageTo3D` with **Q** and the **disparity map**, I get a result that looks OK in Meshlab when viewed from the right angle, but the depth seems way off when I move around (i.e. the Z depth of the object is ~2x its width, when it is really about 1/10th of the width).
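For a sanity check on the scale, the reprojection encoded by the **Q matrix** for a rectified pair boils down to `Z = f * B / disparity` (with `X` and `Y` proportional to `Z`); a tiny sketch with assumed intrinsics and baseline:

```python
# Rectified-pair reprojection:
#   Z = f * B / disparity,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
# Assumed toy intrinsics (pixels) and baseline (scene units):
f, cx, cy = 800.0, 320.0, 240.0
B = 0.12

def reproject(u, v, disp):
    Z = f * B / disp
    return ((u - cx) * Z / f, (v - cy) * Z / f, Z)

X, Y, Z = reproject(400.0, 300.0, 16.0)   # disparity of 16 px -> Z = 6.0
# Halving the disparity doubles the depth; scaling B rescales X, Y and Z
# together, which is why adjusting the translation only resizes the cloud.
```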
So I'm just wondering whether this is normal and expected because it's only 2 images with a ~20-degree angle difference, or whether I'm just messing up somewhere.
In particular, I wonder:
- whether I need to account somewhere for the distance from the camera to the center of rotation of the object: as I mentioned, I tried to apply that factor to the translation vector, but it only seems to scale the whole thing.
- whether it may be a problem with how I apply the colors: I use one of the 2 images to get the colors, because I wasn't sure how I could use both. I am not sure how the *disparity map* maps to the original images: does it map to the left image, the right image, or neither? I can see that if the color assignment to the disparity map is wrong, the rendering would look wrong, so that may be my problem.
I know it is a bit of a vague question, but I hope someone can confirm or refute my assumptions here.
Thanks

*Mon, 11 Jan 2016 20:26:42 -0600*

**Comment by MrE** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83567#post-id-83567

OK, I got it... I was going to answer my own question, but I can't post.
I understand why this does not work: the vanishing point of the cameras is supposed to be at infinity, while mine is at the object's rotation center, and `stereoRectify` can't transform that. Also, the disparity map assumes a horizontal shift to recalculate Z, and I really only have a rotation. So, I get it, this won't work. I'll work on a rig with translation. I assumed my rotation implied the camera translation I needed, but the vanishing-point issue makes it unworkable.

*Wed, 13 Jan 2016 10:47:19 -0600*

**Comment by MrE** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83454#post-id-83454

@Eduardo I do not understand your comment about my **translation** not being right because I **always** have a distance of 1: I *only* have 2 images, so why do you say **always**?
Then, in your flow, you say "rectify the left and right images to have a fronto-parallel view". This seems to be the step I am missing, so how would I do that? Thanks for the help.

*Tue, 12 Jan 2016 13:40:33 -0600*

**Comment by MrE** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83427#post-id-83427

From what I have been reading, the rotation and translation matrices are unit-less. Correct me if I am wrong. So the distance to the object is constant, yes, with unit 1. I did try adding a multiplication factor to account for the distance to the object, but all that seems to do is change the scale (in X, Y, and Z), not change the Z depth.
If I scale the disparity map, it does change the Z scale but does not seem to map right. So I'm confused about what I am supposed to adjust, or even whether it is possible to get a reasonable Z depth with just 2 images.

*Tue, 12 Jan 2016 10:56:39 -0600*

**Comment by MrE** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83339#post-id-83339

I have only used 2 images right now. I understand that if I wanted to move around the object 360 degrees I would need to move each frame by the exact same angle, but it would be the same if I were to move the camera instead. With 2 images, I only need to know one angle. My question really is: are 2 frames enough to get a reasonable Z, or am I supposed to go 360 around? If I have to take multiple frames, how am I supposed to match the Z from one reprojection to another?

*Tue, 12 Jan 2016 00:21:34 -0600*

**Comment by Balaji R** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83338#post-id-83338
Yes, but the translation/rotation (baseline) has to be constant. Can you move the object by exactly the same distance for a given frame?

*Tue, 12 Jan 2016 00:13:05 -0600*

**Comment by Eduardo** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83448#post-id-83448
Check this sample: [stereo_match.cpp](https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/stereo_match.cpp).
It is for stereo cameras but the principle should be the same for you.
The pipeline when using a stereo rig is:
- Calibrate the left and right cameras once with `stereoCalibrate` using multiple images
- Take left and right pictures
- undistort the left and right images
- rectify the left and right images in order to have a fronto-parallel view
- compute the disparity map
- convert the disparity map to the depth map
I don't think that your translation is good, as you always have a distance of 1 between the two camera positions regardless of the angle of rotation. If you imagine the inverse, two cameras and a static object, the distance between the cameras should change.

*Tue, 12 Jan 2016 12:58:56 -0600*

**Comment by MrE** — http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83444#post-id-83444

You say the views are fronto-parallel, so what does that mean for the rotation matrix? Does that mean there is NO rotation between the cameras, only translation? In my case, since I rotate the object, the 2 cameras point at the same point, which is also the axis of rotation of the object. This is why I used the rotation matrix as I explained. If this is not correct, what should I do?

*Tue, 12 Jan 2016 12:30:07 -0600*

**Comment by Eduardo** (comment truncated in the feed)
<p>So, I'm just wondering if this is normal, and expected because it's only 2 images from a ~20degree angle difference, or if I'm just messing up somewhere. </p>
<p>Especially I wonder:</p>
<ul>
<li><p>If I need to account somewhere for the distance from the camera to the center of rotation of the object: as I mentioned I tried to apply that factor to the translation vector but it only seems to scale the whole thing. </p></li>
<li><p>I wonder also if it may be a problem with the application of the colors: I use one of the 2 images to get the colors, because I wasn't sure how I could use both. I am not sure how the <em>disparity map</em> maps to the original images: does it map to Left or Right or neither? I could see that if color assignment to the disparity map is wrong, the ...</p></li></ul><span class="expander"> <a>(more)</a></span></div>http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83442#post-id-83442What I meant is for the computation of the Q matrix by [stereoRectify](http://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga617b1685d4059c6040827800e72ad2b6):
- you have to supply the rotation matrix between the camera frame at image 1 and the camera frame at image 2
- you have to supply the translation vector between the camera frame at image 1 and the camera frame at image 2
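The rotation input described above can be sketched as follows. This is a minimal stdlib-only sketch, not code from the thread; the 20° angle is an assumed value, and the matrix below is what `cv2.Rodrigues` would return for the rotation vector `[0, theta, 0]`:

```python
import math

def rotation_y(theta):
    """Rotation matrix for an angle theta (radians) about the Y axis.
    Equivalent to cv2.Rodrigues applied to the rotation vector [0, theta, 0]."""
    c, s = math.cos(theta), math.sin(theta)
    return [[ c,  0.0,  s ],
            [0.0, 1.0, 0.0],
            [-s,  0.0,  c ]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

angle = math.radians(20.0)   # assumed rotation of the object between the two shots
R = rotation_y(angle)

# Sanity check with a trivial angle, as the poster did: rotating the +Z axis
# by 90 degrees about Y should give the +X axis.
check = mat_vec(rotation_y(math.radians(90.0)), [0.0, 0.0, 1.0])
```

Checking trivial angles like this confirms the rotation convention, but says nothing about the translation vector, which is where the scale of the reconstruction comes from.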
In Q, you basically have the distance between the left and right camera frames (the baseline). Usually the stereo rig is built so that the views are fronto-parallel; otherwise the images are rectified first. Tue, 12 Jan 2016 12:16:55 -0600
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83442#post-id-83442
Comment by Eduardo for "3D Reconstruction with 1 camera": Are you sure your translation vector between the camera at frame 1 and the camera at frame 2 is OK?
If I understand your formula correctly: if the object rotates by 20°, the translation vector between the two positions of the camera is [x y z] = [0.939692621, 0, 0.342020143], so a distance of always 1 m, whatever the angle is? Tue, 12 Jan 2016 04:11:40 -0600
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83352#post-id-83352
Comment by MrE for "3D Reconstruction with 1 camera": Well, it's all relative, right? The camera rotates around the object, or the object rotates in front of the camera. I use a green screen to remove the background, so effectively I rotate the object on itself, but in fact it is as if the camera moved around the object, by the same angle, at a given distance from the object. Mon, 11 Jan 2016 23:47:38 -0600
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83336#post-id-83336
Comment by Balaji R for "3D Reconstruction with 1 camera": I don't understand "Take left and right images by rotating the object": what do you mean by rotating the object? If you want to use a stereo algorithm for 3D reconstruction, you are supposed to move the camera, not the object. Mon, 11 Jan 2016 23:06:56 -0600
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83332#post-id-83332
Comment by MrE for "3D Reconstruction with 1 camera": For stereoRectify, the docs say the Rotation is the rotation between the coordinate systems of the cameras, and the Translation is the translation between the two cameras. This is what I am doing.
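On the translation point raised above: if the camera effectively orbits the rotation axis at a fixed distance d, the two camera centers sit on a circle of radius d, and the baseline between them is the chord 2·d·sin(θ/2). The vector [cos θ, 0, sin θ] from the question has unit norm for every θ, which is consistent with the "always 1 m" observation. A small stdlib-only check (d and θ are assumed values, not from the thread):

```python
import math

d = 0.5                       # assumed camera-to-rotation-axis distance, in metres
theta = math.radians(20.0)    # assumed rotation between the two shots

# Camera center orbiting the Y axis at radius d, expressed in the object frame;
# it starts on the +Z axis and sweeps by theta.
p1 = (0.0, 0.0, d)
p2 = (d * math.sin(theta), 0.0, d * math.cos(theta))

baseline = math.dist(p1, p2)            # chord length between the two centers
chord = 2.0 * d * math.sin(theta / 2)   # closed-form equivalent

# The vector [cos(theta), 0, sin(theta)] has unit length for every theta,
# which is why scaling it only rescales the whole reconstruction uniformly.
unit_norm = math.hypot(math.cos(theta), math.sin(theta))
```

The chord shrinks to zero as θ goes to zero and depends on d, so a baseline built this way behaves very differently from a constant-length translation vector.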
Now, if the disparity-map computation expects the images to be taken from a fronto-parallel setup, that's a different story: it means I should account for some "distortion" when computing it. I had calibrated my camera on a chessboard facing the camera, and used that distortion for both cameras in stereoRectify, but maybe I need to calibrate the two views by rotating the chessboard? I'm starting to wonder if this can even work with rotation; I guess I should then "translate" the camera to simulate a fronto-parallel setup. Tue, 12 Jan 2016 12:38:33 -0600
http://answers.opencv.org/question/83317/3d-reconstruction-with-1-camera/?comment=83446#post-id-83446
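For what it's worth, the depth that reprojectImageTo3D produces comes straight out of Q: with the standard Q layout from the OpenCV docs (identical principal points in both rectified views), Z = f·B/disparity, so any error in the baseline B fed into stereoRectify rescales Z one-for-one. A minimal sketch of that relationship, with assumed values for f, B, the principal point, and the disparity (not numbers from the thread):

```python
f = 800.0              # assumed focal length, in pixels
B = 0.35               # assumed baseline, in the unit you want Z in
disp = 40.0            # assumed disparity at one pixel, in pixels
cx, cy = 320.0, 240.0  # assumed principal point

# Q as produced by stereoRectify in the cx == cx' case:
# [X Y Z W]^T = Q * [x y disparity 1]^T, then divide by W.
Q = [[1.0, 0.0, 0.0,     -cx],
     [0.0, 1.0, 0.0,     -cy],
     [0.0, 0.0, 0.0,      f ],
     [0.0, 0.0, 1.0 / B,  0.0]]

x, y = 400.0, 300.0                       # an example pixel
vec = [x, y, disp, 1.0]
X, Y, Z, W = (sum(Q[i][k] * vec[k] for k in range(4)) for i in range(4))
X, Y, Z = X / W, Y / W, Z / W             # Z equals f * B / disp
```

Doubling B here doubles every Z while X and Y scale with it too, which matches the observation that scaling the translation vector only rescales the whole object rather than fixing the depth proportions.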