Is it necessary to use stereoRectify and stereoCalibrate if I want to find the 3D coordinates of an image point with two cameras?

asked 2019-11-19 11:56:43 -0500 by Chun, updated 2019-11-19 12:00:20 -0500

Hi, I am new to computer vision.

Right now, I am trying to use two cameras (possibly the same model, possibly different) to find the 3D coordinates of an object from two images, one taken by each camera.

Camera A might sit only about 30 cm from camera B; I have not decided yet.

I know I can use calibrateCamera() to find the camera matrix, distortion coefficients, rotation vector, and translation vector of cameras A and B. I also know I can:

  1. Use Rodrigues() to convert the rotation vector into a rotation matrix.
  2. Append the translation vector to the rotation matrix to form the extrinsic matrix.
  3. Compute the projection matrix by multiplying the camera matrix by the extrinsic matrix.
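The three steps above can be sketched in plain NumPy. The rodrigues() helper below implements the same axis-angle formula as cv::Rodrigues, and the camera matrix and vectors are made-up placeholder values, not real calibration output:

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix
    (the same axis-angle formula cv::Rodrigues uses)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = (rvec / theta).reshape(3)
    # Cross-product (skew-symmetric) matrix of the unit axis.
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Hypothetical calibration results for one camera (placeholders only):
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
rvec = np.array([0.0, 0.0, 0.0])        # rotation vector from calibrateCamera()
tvec = np.array([[0.0], [0.0], [0.0]])  # translation vector from calibrateCamera()

R = rodrigues(rvec)               # step 1: rotation vector -> rotation matrix
extrinsic = np.hstack([R, tvec])  # step 2: 3x4 extrinsic matrix [R | t]
proj = camera_matrix @ extrinsic  # step 3: 3x4 projection matrix P = K [R | t]
print(proj.shape)                 # (3, 4)
```

Doing the same for camera B with its own rvec/tvec gives the second projection matrix.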

Therefore, I have the projection matrices of cameras A and B. So, if I want to find the 3D coordinates of an object with the two cameras, I think I just need to:

  1. Take two pictures, imageA and imageB, with cameras A and B.
  2. Find the object's image coordinates in imageA and imageB.
  3. Use undistortPoints() to find the object's image coordinates in the undistorted images: undistortPointsA, undistortPointsB.
  4. Finally, use triangulatePoints(projMatA, projMatB, undistCoordsA, undistCoordsB, triangCoords4D) to find the object's coordinates.

Actually, what I really want is the z distance between the object and the cameras (assuming the two cameras are at the same z). Could this approach work?
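Assuming the two projection matrices are correct, step 4 can be sketched with plain NumPy using linear (DLT) triangulation, which is the same idea cv::triangulatePoints implements. The intrinsics, the 30 cm baseline, and the test point below are made-up toy values:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point seen in two cameras.
    Each observation contributes two rows of A; the homogeneous 3D
    point is the right singular vector of A with the smallest
    singular value."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # dehomogenize to (x, y, z, 1)

# Toy setup: two identical cameras, B shifted 0.3 m along x (hypothetical values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_A = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_B = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])

X_true = np.array([0.1, 0.05, 2.0, 1.0])       # a point 2 m in front of the rig
ptA = P_A @ X_true; ptA = ptA[:2] / ptA[2]     # its (already undistorted) pixels in A
ptB = P_B @ X_true; ptB = ptB[:2] / ptB[2]     # and in B

X = triangulate(P_A, P_B, ptA, ptB)
print(X[2])  # recovered z distance, ~2.0
```

Note that the real cv::triangulatePoints takes 2xN arrays of points and returns a 4xN array of homogeneous coordinates, which you likewise divide by the fourth component; the z you are after is then the third component.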

Also, I have seen many people use stereoRectify(), stereoCalibrate(), and solvePnP(). I have read the documentation, but I still don't understand them.

So, is it necessary to use stereoRectify(), stereoCalibrate(), or solvePnP()? Or is the method I described above enough to find the z distance between the object and the cameras?
