
Refining perspective transformation in epipolar geometry

asked 2013-02-19 10:22:51 -0600


Given two pictures of the same scene taken from (slightly) different angles, and given corresponding points in the two pictures that were manually curated (ground truth), one can calculate the perspective transformation matrix (homography) between the two camera planes.

However, when applying this transformation to a new point in one of the images to calculate the coordinates of the corresponding point in the other image, the calculated coordinates often have a (small) offset from the real location (resulting from noise, approximation error, etc.).

What can be done to improve the accuracy of the calculated point's location? What is the state of the art in this regard? Are there any post-refinement techniques based on local descriptors and local search that could help improve the accuracy?
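For context, here is a minimal numpy-only sketch of the usual pipeline: estimating the homography from the curated correspondences with the direct linear transform (DLT) and then mapping a new point through it. The function names and the toy translation-only correspondences are illustrative assumptions, not taken from the question; in practice one would use a robust estimator (e.g. RANSAC) over many correspondences.

```python
import numpy as np

def find_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The smallest right singular vector of the stacked system is H, flattened.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

def apply_homography(H, pt):
    """Map a single (x, y) point through H, dividing out the projective scale."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Four hand-picked correspondences (here a pure translation by (5, -2)).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src + np.array([5.0, -2.0])

H = find_homography(src, dst)
print(apply_homography(H, (0.5, 0.5)))   # close to [5.5, -1.5]
```

The residual offset the question describes is the difference between `apply_homography(H, p)` and where the point actually appears; local refinement (e.g. template matching or descriptor matching in a small window around the predicted location) searches near that prediction rather than the whole image.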


1 answer


answered 2020-02-14 10:41:49 -0600


When you compute the homography from correspondences between two images, you are computing the true point-to-point transformation if and only if all the points lie on the same plane, or if the camera motion is a pure rotation. Otherwise the homography is only an approximation. The only way to map all points exactly is to compute a dense 3D map between the two views using stereopsis.
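The point above can be demonstrated numerically: fit a homography to correspondences coming from a planar scene, then map a point that lies off that plane. The camera setup below (two pinhole cameras separated by a small translation, a fronto-parallel plane at z = 5) is an assumed toy example, not from the answer itself.

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct linear transform: smallest singular vector of the 2N x 9 system.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.asarray(rows, dtype=float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def project(P, X):
    # Pinhole projection of a 3D point X with a 3x4 camera matrix P.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def warp(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Two cameras: identity pose, and the same camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[0.2], [0.0], [0.0]])])

# Fit the homography from four points on the plane z = 5.
plane_pts = [np.array([x, y, 5.0]) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
src = np.array([project(P1, X) for X in plane_pts])
dst = np.array([project(P2, X) for X in plane_pts])
H = dlt_homography(src, dst)

on_plane = np.array([0.5, 0.5, 5.0])    # lies on the plane: mapped exactly
off_plane = np.array([0.5, 0.5, 8.0])   # off the plane: parallax error
err_on = np.linalg.norm(warp(H, project(P1, on_plane)) - project(P2, on_plane))
err_off = np.linalg.norm(warp(H, project(P1, off_plane)) - project(P2, off_plane))
print(err_on, err_off)
```

The on-plane point maps with essentially zero error, while the off-plane point shows a clear residual: that residual is the parallax the homography cannot represent, and it is exactly the kind of systematic offset the question observes.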

