
Connection between pose estimation, epipolar geometry and depth map

Hi, I am an undergraduate student working on a graduate project, and a beginner in computer vision.

I went through the tutorial "Camera Calibration and 3D Reconstruction" provided by OpenCV: https://docs.opencv.org/master/d9/db7/tutorial_py_table_of_contents_calib3d.html

I failed to see the connection between the second part and the final part. What I understand is:

  • The intrinsic and extrinsic parameters of a camera are required to estimate the position of the camera and of the captured object.
  • To reconstruct a 3D model, multiple point clouds are needed, and to generate a point cloud, a disparity map is required.
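To show my current understanding of the disparity-to-point-cloud step, here is a toy numpy sketch I wrote myself (the focal length and baseline are made-up numbers, not values from the tutorial). It uses the rectified-stereo relation Z = f * B / d:

```python
import numpy as np

# Toy example (my own numbers, not from the tutorial): for a rectified
# stereo pair, depth follows from disparity as Z = f * B / d, where
# f is the focal length in pixels and B is the baseline in metres.
f = 700.0   # assumed focal length [px]
B = 0.12    # assumed baseline [m]

disparity = np.array([[14.0, 28.0],
                      [35.0, 70.0]])   # toy 2x2 disparity map [px]

depth = f * B / disparity              # per-pixel depth [m]
# the full (X, Y, Z) point cloud then comes from back-projecting each
# pixel through the intrinsics, which I believe is what
# cv2.reprojectImageTo3D does with the Q matrix
```

If my understanding of that formula is wrong, please correct me.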

What I do not understand is:

  • The importance of estimating the position of the camera or the object for computing the epiline or epipole in either image plane.
  • The importance of epipolar geometry, and of finding the epilines and epipoles, for computing the disparity map.
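For context, here is how far I got with the epipolar constraint itself, as a toy numpy sketch (my own example, not code from the tutorial; the fundamental matrix below is the simple form I believe holds for a purely horizontal baseline):

```python
import numpy as np

# Toy sketch of the epipolar constraint x2^T F x1 = 0.
# For a purely horizontal stereo baseline the fundamental matrix
# reduces (up to scale) to this skew-symmetric form:
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

x1 = np.array([320.0, 240.0, 1.0])   # a point in the left image (homogeneous)
line = F @ x1                         # its epiline (a, b, c) in the right image

# Any correct match must lie on that line; here the line is the horizontal
# row y = 240, so a match shifted only in x (i.e. by the disparity)
# satisfies the constraint:
x2 = np.array([300.0, 240.0, 1.0])
residual = x2 @ line                  # 0 for a true correspondence
```

What I cannot connect is how this constraint feeds into the disparity computation below.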

As far as I am aware, the code below generates a disparity map:

import cv2  # the Python constructor is cv2.StereoBM_create, not cv2.createStereoBM
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)  # imgL, imgR: rectified 8-bit grayscale images

and its inputs are a pair of stereo images plus numDisparities and blockSize, but not the position of the camera, nor the epilines/epipoles.
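My current guess, which I would like confirmed, is that block matching can ignore the epipoles only because it assumes the images are already rectified, so every epiline is an image row. Here is my own toy 1-D block matcher (not OpenCV's algorithm) that only ever searches along the same row, which seems to be exactly that assumption:

```python
import numpy as np

# Toy 1-D block matcher (my own sketch, not OpenCV's StereoBM): on
# RECTIFIED images the epilines are the image rows, so the search for a
# match runs only along the same row of the right image.
def row_disparity(rowL, rowR, block=3, max_d=4):
    half = block // 2
    disp = np.zeros(len(rowL))
    for x in range(half, len(rowL) - half):
        patch = rowL[x - half:x + half + 1]
        costs = []
        for d in range(max_d + 1):
            if x - d - half < 0:          # candidate window off the image
                costs.append(np.inf)
                continue
            cand = rowR[x - d - half:x - d + half + 1]
            costs.append(np.sum(np.abs(patch - cand)))  # SAD cost
        disp[x] = np.argmin(costs)        # best disparity for this pixel
    return disp

# one textured scanline; the right row is the left row shifted by 2 px
rowL = np.array([0, 0, 9, 9, 9, 0, 0, 0, 0, 0], dtype=float)
rowR = np.roll(rowL, -2)
disp = row_disparity(rowL, rowR)          # recovers disparity 2 on the texture
```

Is this row-only search where the epipolar geometry is hiding, with the extrinsics entering earlier through the rectification step?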

Any help would be greatly appreciated.