Refining perspective transformation in epipolar geometry

Given two images of the same scene taken from (slightly) different angles, and given manually curated corresponding points in the two images (ground truth), one can estimate the perspective transformation (homography) matrix between the two camera planes.
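For reference, this is roughly what I mean by the estimation step. A minimal sketch using OpenCV's findHomography; the point coordinates here are just placeholders standing in for my curated correspondences:

    import numpy as np
    import cv2

    # Manually curated (ground-truth) correspondences between the two images,
    # as Nx2 arrays of pixel coordinates (placeholder values).
    pts_img1 = np.array([[100, 150], [320, 140], [300, 400], [90, 380]], dtype=np.float32)
    pts_img2 = np.array([[110, 160], [335, 150], [310, 415], [95, 395]], dtype=np.float32)

    # Estimate the 3x3 perspective transformation (homography) between the two planes.
    # RANSAC is optional since the points are hand-picked, but it guards against
    # the occasional mislabeled pair.
    H, mask = cv2.findHomography(pts_img1, pts_img2, cv2.RANSAC, 3.0)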

However, when applying this transformation to a new point in one of the images to predict the coordinates of the corresponding point in the other image, the calculated coordinates often have a (small) offset from the true location (due to noise, approximation error, etc.).
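Concretely, the mapping step looks like this (continuing from the snippet above, so H is the homography estimated there; the coordinates are made up):

    import numpy as np
    import cv2

    # Project a new point from image 1 into image 2 using the estimated homography.
    # perspectiveTransform expects an array of shape (N, 1, 2).
    new_pt = np.array([[[250.0, 220.0]]], dtype=np.float32)
    predicted = cv2.perspectiveTransform(new_pt, H)[0, 0]

    # 'predicted' typically lands a few pixels away from where the point actually
    # appears in image 2 -- this is the offset described above.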

What can be done to improve the accuracy of the predicted point's location? What is the state of the art in this regard? Are there post-refinement techniques based on local descriptors and local search that could help improve the localization?
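To illustrate the kind of post-refinement I have in mind, here is a sketch of a simple local search: take a patch around the original point in image 1 and search a window around the predicted location in image 2 with normalized cross-correlation. The function name, patch size, and search radius are my own placeholders, and it assumes grayscale images with the point far enough from the border:

    import cv2

    def refine_by_local_search(img1_gray, img2_gray, pt1, predicted, patch=10, search=20):
        """Snap 'predicted' to the best-matching location in a local window of image 2."""
        x1, y1 = int(round(pt1[0])), int(round(pt1[1]))
        px, py = int(round(predicted[0])), int(round(predicted[1]))

        # Template around the original point in image 1, search window around
        # the homography-predicted location in image 2.
        template = img1_gray[y1 - patch:y1 + patch + 1, x1 - patch:x1 + patch + 1]
        window = img2_gray[py - search:py + search + 1, px - search:px + search + 1]

        # Normalized cross-correlation; the peak gives the best local match.
        result = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)

        # Convert the best-match (top-left) location back to full-image
        # coordinates of the patch center.
        refined_x = px - search + max_loc[0] + patch
        refined_y = py - search + max_loc[1] + patch
        return refined_x, refined_y

Is something along these lines a reasonable approach, or are descriptor-based methods (or a proper bundle-adjustment-style refinement) considered better practice?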