FundamentalMat with correspondences from a set of images?

I have a set of training images of an object, taken from different points of view and at different scales, rotations, etc.

Then I have a query image in which the object is present, and a set of correspondences between keypoints of the object in the query image and keypoints in the training images.

Considering that the training images can have different sizes, scales, and/or orientations, does it make sense to compute the fundamental matrix from this set of correspondences?

Or is it possible to compute a homography to correctly identify the object, given that the matches point to different training images?
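For context, both a fundamental matrix and a homography model a relation between exactly *two* views, so mixing correspondences that point into several different training images breaks the model. In practice one would estimate a separate transform per query/training-image pair (e.g. with OpenCV's `cv2.findHomography(src, dst, cv2.RANSAC, 3.0)`). Below is a minimal NumPy-only sketch, under that two-view assumption, of what the homography estimation does internally: the `estimate_homography` helper is hypothetical, implementing the standard Direct Linear Transform (DLT) from one set of query-to-training correspondences.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the Direct Linear Transform (needs >= 4 correspondences).
    NOTE: hypothetical helper for illustration; in OpenCV you would
    call cv2.findHomography with RANSAC instead."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the A h = 0 system.
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The solution h is the right singular vector of A with the
    # smallest singular value (the null space for exact data).
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

# Synthetic check: generate points related by a known homography,
# as if they all came from a single query/training image pair.
H_true = np.array([[1.2, 0.1,  5.0],
                   [0.0, 0.9, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 25]], float)
dst_h = np.hstack([src, np.ones((len(src), 1))]) @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]          # back to inhomogeneous coords

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

If the correspondences instead spanned several training images, no single `H` could satisfy all the constraints, and a RANSAC-based estimator would simply keep the inliers of whichever training image dominates and discard the rest.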