Object position detection
I am trying to do object detection inside an image: I have a template image and search a larger image for matches to the template. The matching needs to be rotation-independent; scale independence is less important.
Template matching seems to be slow and is orientation-dependent. I think FLANN-based matching will work, and I have started from this example: http://answers.opencv.org/question/983/object-detection-using-surf-flann/
Can anyone help me modify this code? I need to detect the object's position and orientation, but I don't really need to draw the keypoints or the match lines.
Using OpenCV 2.4.7 and Visual Studio 2012.
Actually, this forum is a Q&A forum, not a "solve my project, please" page. Basically, you adapt the code yourself, and if problems arise, then you ask for help. If people here had to solve every personal project, this forum would become a disaster. So please give it an effort and then report back.
I have made an attempt, but I have not been successful. Basically, I find keypoints using SurfFeatureDetector, then compute descriptors for each set of keypoints. After this I compute the distances between matched descriptors.
My question is how to derive a position within the image from these distances. Once I have the keypoints, I am not sure how to proceed.
If it helps, I am basically trying to draw an ROI rectangle within the larger image. I am not sure how to get from a vector of distances to an actual position and orientation of the object. I tried cv::minAreaRect, but there are often a few stray points outside the true ROI, which makes the rectangle too large. I have also looked at Mat::locateROI, but I do not think it helps here.
Any suggestions would be appreciated.
You can try to find a set of matches that all agree on a single homography. Once you have that set, use the locations of the matched keypoints in the scene image (not the template image) to estimate the position of the object.