Is it necessary to use stereoRectify and stereoCalibrate to find the 3D coordinates of an image point with two cameras?
Hi, I am new to computer vision.
Right now, I am trying to use two cameras (possibly the same model, possibly different) to find the 3D coordinates of an object from two images, one taken by each camera.
Camera A may be about 30 cm from camera B; I have not decided on the baseline yet.
I know I can use calibrateCamera() to find the camera matrix, distortion coefficients, rotation vector, and translation vector of cameras A and B. I also know I can:
- Use Rodrigues() to convert the rotation vector into a rotation matrix
- Append the translation vector to the rotation matrix to form the extrinsic matrix [R|t]
- Compute the projection matrix by multiplying the camera matrix and the extrinsic matrix: P = K[R|t]
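The three steps above can be sketched like this (a minimal NumPy-only sketch; the intrinsics `K_cam`, rotation vector `rvec`, and translation vector `tvec` below are made-up placeholders standing in for calibrateCamera() output, and the Rodrigues conversion is written out by hand instead of calling OpenCV):

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Placeholder intrinsics and pose (in practice, from calibrateCamera)
K_cam = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
rvec = np.array([0.0, 0.1, 0.0])          # rotation vector
tvec = np.array([[0.3], [0.0], [0.0]])    # translation vector (e.g. a 30 cm offset)

R = rodrigues(rvec)
Rt = np.hstack([R, tvec])                 # 3x4 extrinsic matrix [R | t]
P = K_cam @ Rt                            # 3x4 projection matrix P = K [R | t]
print(P.shape)                            # (3, 4)
```

The same construction would be done once per camera, giving projMatA and projMatB.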
Therefore, I have the projection matrices of cameras A and B. So, to find the 3D coordinates of an object with the two cameras, I think I just need to:
- Take two pictures, imageA and imageB, with cameras A and B.
- Find the object's image coordinates in imageA and imageB.
- Use undistortPoints() to find the object's coordinates in the undistorted images: undistortedPointA and undistortedPointB.
- Finally, call triangulatePoints(projMatA, projMatB, undistCoordsA, undistCoordsB, triangCoords4D) to find the object's 3D coordinates.

Actually, what I really want is the z distance between the object and the cameras (assuming the two cameras have the same z). Could this approach work?
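As a sanity check, the whole pipeline can be sketched with synthetic data: build two projection matrices, project a known 3D point into both views, then recover it with linear (DLT) triangulation, which is essentially what triangulatePoints does. All numbers here are made up for illustration:

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; pt1, pt2: (x, y) pixel coordinates."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # least-squares null vector of A
    X = Vt[-1]
    return X / X[3]                       # homogeneous -> (X, Y, Z, 1)

K_cam = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
# Camera A at the origin; camera B shifted 0.3 m along x (a 30 cm baseline)
P_A = K_cam @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_B = K_cam @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])

X_true = np.array([0.1, -0.05, 2.0, 1.0])    # a point 2 m in front of the rig
xa = P_A @ X_true; xa = xa[:2] / xa[2]       # pixel coordinates in image A
xb = P_B @ X_true; xb = xb[:2] / xb[2]       # pixel coordinates in image B

X_est = triangulate_dlt(P_A, P_B, xa, xb)
print(X_est[2])                              # recovered z distance (2.0)
```

If the real pipeline works, the z component of the triangulated point is exactly the object-to-camera distance being asked about. One caveat worth checking in the OpenCV docs: undistortPoints() by default returns normalized coordinates, so the projection matrices passed to triangulatePoints() must match the coordinate convention of the points.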
Also, I have seen many people use stereoRectify(), stereoCalibrate(), and solvePnP(). I have read the documentation, but I still don't understand them.
So, is it necessary to use stereoRectify(), stereoCalibrate(), or solvePnP()? Or is the method I described above enough to find the z distance between the object and the cameras?