A reliable matching criterion for object identification in features2d
Hi,
I am trying to use features2d for object identification, but I cannot find a clear way to decide whether two objects match. Someone on the forums suggested using the ratio of inliers to total keypoints to decide match/no match, but I find it doesn't always work: whatever percentage I use as a threshold, I either miss correct matches or get false matches, because the number of keypoints and matches varies with each new image. Are there better approaches? I am using distance-based filtering, symmetric filtering, and RANSAC to arrive at the final keypoints. I understand that this is not an exact method, but I am looking to improve my results.
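For reference, the filtering steps named above (distance-based ratio test, then a symmetric cross-check, with the inlier ratio computed at the end) can be sketched with a brute-force matcher on toy descriptors. This is an illustrative numpy sketch of the logic, not the OpenCV `features2d` API; the synthetic descriptors and function names are assumptions made for the example.

```python
import numpy as np

def ratio_test(d_query, d_train, ratio=0.75):
    """Distance-based filtering (Lowe's ratio test): keep a match only
    if the best neighbour is clearly closer than the second best."""
    matches = []
    for i, q in enumerate(d_query):
        dists = np.linalg.norm(d_train - q, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

def symmetric_filter(fwd, bwd):
    """Symmetric filtering (cross-check): keep (query, train) pairs
    that also appear as (train, query) when matching the other way."""
    bwd_set = set(bwd)
    return [(q, t) for (q, t) in fwd if (t, q) in bwd_set]

# toy data: query descriptors are small perturbations of the
# first five train descriptors, so five true matches exist
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 8))
query = train[:5] + rng.normal(scale=0.01, size=(5, 8))

fwd = ratio_test(query, train)
bwd = ratio_test(train, query)
good = symmetric_filter(fwd, bwd)

# the criterion under discussion: surviving matches / total query keypoints
inlier_ratio = len(good) / len(query)
```

In a real pipeline the surviving matches would then go through `cv2.findHomography(..., cv2.RANSAC)` and the inlier mask it returns would replace the simple count here; that RANSAC step is omitted to keep the sketch short.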
Thanks.
"I understand that it is not an exact method, but am looking to improve my results." The only way to improve would be to generalize descriptors for an object over multiple instances and use that generalization as the matching reference. Then individual object descriptors will have a smaller influence.
Hi Steven. Thanks for the response. By "generalize descriptors" do you mean not using the features2d module at all and switching to a classification-based approach? Or is there some way to keep the feature keypoints and generalize their descriptors? I am a little new to this field, so it would be helpful if you could point me to the relevant literature on the topic.
Generalizing descriptors can be done by averaging. For example, put the descriptors for all classes in a pool, apply k-means clustering (with k equal to the number of output labels/classes), and then take the cluster centers as the representative descriptor for each class to compare against.
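A minimal sketch of that pooling-and-clustering idea, assuming the clustering meant is k-means (cluster the pooled descriptors, keep the k cluster centres as per-class templates, then classify a new descriptor by its nearest centre). The tiny Lloyd-style k-means, the deterministic farthest-point initialisation, and the synthetic two-class pool are all illustrative assumptions:

```python
import numpy as np

def kmeans_centres(X, k, iters=20):
    """Minimal Lloyd-style k-means on a descriptor pool X (n x d).
    Deterministic farthest-point initialisation keeps the sketch stable."""
    centres = [X[0].copy()]
    while len(centres) < k:
        # next centre: the descriptor farthest from all chosen centres
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[dists.argmax()].copy())
    centres = np.array(centres)
    for _ in range(iters):
        # assign every descriptor to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

# toy pool: descriptors from two "classes", tightly clustered
# around 0.0 and 5.0 respectively
rng = np.random.default_rng(1)
class_a = rng.normal(loc=0.0, scale=0.1, size=(20, 4))
class_b = rng.normal(loc=5.0, scale=0.1, size=(20, 4))
pool = np.vstack([class_a, class_b])

centres = kmeans_centres(pool, k=2)

# classify a new descriptor by its nearest cluster centre
new_desc = np.full(4, 5.0)
nearest = int(np.linalg.norm(centres - new_desc, axis=1).argmin())
```

In practice you would cluster real SIFT/ORB descriptors rather than synthetic vectors, but the matching step stays the same: compare an incoming descriptor against the class centres instead of against every stored instance.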
Thanks. Shall try it out.