Fundamental matrix and epipolar lines from known camera parameters

asked 2020-08-19 02:25:23 -0600

I want to visualize epipolar lines in a given 3-D scene with two cameras. I have code that works when the two cameras are only offset along the X axis. If the cameras are offset along the Y or Z axis, or if they are rotated in any way, the epipolar lines are far off. I assume this is because I am missing a conversion of units, and that the special case of an X-only offset works because the epipolar lines are horizontal there, i.e., they have a slope of 0 regardless of any such conversion.

My starting point is two Viz3d instances whose cameras are offset relative to one another, e.g., by 0.1 units. I compute the essential matrix from their rotations and translations (extrinsic camera parameters) using the sfm module like this:

  // Viewer poses of the two visualizations (cv::Affine3d)
  const auto left_camera_pose = left_visualization.getViewerPose();
  const auto right_camera_pose = right_visualization.getViewerPose();
  // Essential matrix from the two rotations and translations (cv::sfm)
  Mat essential_matrix;
  essentialFromRt(left_camera_pose.rotation(), left_camera_pose.translation(),
                  right_camera_pose.rotation(), right_camera_pose.translation(),
                  essential_matrix);
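
As far as I understand, essentialFromRt reduces the two absolute motions to one relative motion and then builds E = [t]_x * R, assuming the world-to-camera convention x_cam = R * x_world + t. As a cross-check (this is only a sketch based on that assumption, not something the documentation confirms), I also build the essential matrix by hand:

  // Cross-check sketch: reduce the two motions to one relative motion and build
  // E = [t]_x * R by hand (assumes the sfm module's world-to-camera convention).
  Mat relative_rotation, relative_translation;
  relativeCameraMotion(left_camera_pose.rotation(), left_camera_pose.translation(),
                       right_camera_pose.rotation(), right_camera_pose.translation(),
                       relative_rotation, relative_translation);
  const auto tx = relative_translation.at<double>(0);
  const auto ty = relative_translation.at<double>(1);
  const auto tz = relative_translation.at<double>(2);
  const Matx33d translation_cross( 0, -tz,  ty,
                                  tz,   0, -tx,
                                 -ty,  tx,   0);
  const Mat essential_manual = Mat(translation_cross) * relative_rotation;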

From the essential matrix I compute the fundamental matrix using the two cameras' intrinsic camera parameters like this:

  // Intrinsic matrices of the two viz cameras
  const auto left_camera = left_visualization.getCamera();
  const auto left_camera_matrix = GetIntrinsicCameraMatrix(left_camera);
  const auto right_camera = right_visualization.getCamera();
  const auto right_camera_matrix = GetIntrinsicCameraMatrix(right_camera);
  // Fundamental matrix from the essential matrix and the intrinsics (cv::sfm)
  Mat fundamental_matrix;
  fundamentalFromEssential(essential_matrix, left_camera_matrix,
                           right_camera_matrix, fundamental_matrix);
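
If I understand fundamentalFromEssential correctly, this should be equivalent to composing F = K_right^(-T) * E * K_left^(-1) by hand (assuming F maps points in the left image to epipolar lines in the right image). A minimal sketch of that cross-check:

  // Cross-check sketch: F = K_right^(-T) * E * K_left^(-1), assuming the
  // fundamental matrix maps left-image points to right-image epipolar lines.
  const Mat fundamental_manual = Mat(right_camera_matrix.inv()).t()
                                 * essential_matrix
                                 * Mat(left_camera_matrix.inv());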

I construct the intrinsic camera matrices from the cameras' focal lengths and principal point coordinates like this:

static Matx33d GetIntrinsicCameraMatrix(const Camera &camera)
{
  // Build K = [fx 0 cx; 0 fy cy; 0 0 1] from the viz camera's parameters
  const auto focal_length = camera.getFocalLength();
  const auto principal_point = camera.getPrincipalPoint();
  const Matx33d intrinsics(focal_length[0], 0, principal_point[0],
                           0, focal_length[1], principal_point[1],
                           0, 0, 1);
  return intrinsics;
}
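
For completeness, this is a sketch of how I inspect the result (just dumping both matrices to check that the focal lengths and principal points look plausible for the viz windows):

  // Usage sketch: print both intrinsic matrices to verify focal lengths and
  // principal points before passing them to fundamentalFromEssential.
  std::cout << "K_left:\n"  << GetIntrinsicCameraMatrix(left_visualization.getCamera())
            << "\nK_right:\n" << GetIntrinsicCameraMatrix(right_visualization.getCamera())
            << std::endl;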

With the fundamental matrix and a given point in the left camera image, I compute the epipolar line parameters using computeCorrespondEpilines (the full source code is available here, with some additional documentation here). When the two cameras are only offset in the X direction, the result appears correct when visualized, but it is completely off when the cameras are offset in the Y or Z direction or rotated with respect to each other.
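
For reference, the epipolar-line step boils down to something like the following sketch (the pixel coordinates and right_image are placeholders, not my actual code; each returned line is (a, b, c) with a*x + b*y + c = 0 in the right image):

  // Sketch of the epipolar-line step: map one left-image pixel to a line in the
  // right image and draw it. The pixel and right_image are placeholders.
  std::vector<Point2f> left_points{Point2f(320.f, 240.f)};
  std::vector<Vec3f> right_lines;
  computeCorrespondEpilines(left_points, 1, fundamental_matrix, right_lines);
  const auto a = right_lines[0][0], b = right_lines[0][1], c = right_lines[0][2];
  if (b != 0.f)  // skip exactly vertical lines in this simple sketch
  {
    const Point line_start(0, cvRound(-c / b));
    const Point line_end(right_image.cols, cvRound(-(c + a * right_image.cols) / b));
    line(right_image, line_start, line_end, Scalar(0, 255, 0));
  }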

As stated above, I think this is related to the units of the matrices and that one or more conversions are required. Unfortunately, I cannot find any documentation for the viz or sfm modules that states whether world coordinates, relative coordinates, or some other type of coordinates are used by the individual functions. I already tried normalizeFundamental from the sfm module; although it does change the fundamental matrix, it does not make the result look any more plausible (which would be expected if it only rescales F, since the epipolar lines are invariant to a scaling of F).

Am I missing any conversion here? Is there any documentation about the units of the respective input and output matrices of the functions above so that I can build some custom conversion functions? Any hint is appreciated.


Comments

if docs.opencv.org doesn't say, your best bet is to read the source code. hopefully it's not too optimized/obfuscated. apart from that, for the "multi view geometry" math, there's a well known book by that name.

crackwitz ( 2020-08-22 17:03:59 -0600 )