
Retrieving the coordinates of pixels identified by SURF

Hi there, I am trying this tutorial here:

http://docs.opencv.org/3.0-beta/doc/tutorials/features2d/feature_homography/feature_homography.html

It works reasonably well when the object I'm looking for is in the scene, but when it isn't, I end up with a collection of "good" matches that aren't actually good at all. For that reason and others, I'd like to find the coordinates of the matched points in my scene, but I can't figure out how to get them. The homography clearly carries enough information to do this, because this line:

perspectiveTransform( obj_corners, scene_corners, H);

does a transformation that gets 3/4 of the way there when the target object is in the scene (I say 3/4 because usually one of the four corners is projected oddly -- it can even end up with negative coordinates).

What I'd like to do is get the set of matched coordinates and filter them based on my knowledge of the scene -- in many cases there are matches in places where the object will never be. I'd also like to take the best matches and do an actual distance calculation on them -- since the scene will never show the object wildly out of scale, I can use that as another filter. All of this depends on being able to take the good matches and get their x/y coordinates back.

Alternatively, I could use a different object-detection routine; I'm not attached to SURF. I have a fairly simple case: an object that won't change its orientation, won't be dramatically scaled up or down, and will only undergo mild lighting changes. I considered HOG, but I don't know how to train it on an object that isn't humanoid.

TIA for any help. Pointers to other tutorials or even basic material would be welcome; this is my first exposure to OpenCV and CV in general. (And by the way, isn't it fairly ironic that captchas are being used to protect this site, given the topic?!)