
Best way to match images in an iPhone and iPad app

I am a newbie in the field of computer vision. I have gone through a lot of documentation, including the OpenCV docs, and tried to understand the algorithms. I have applied FAST, SURF, SIFT, ORB and others to detect keypoints and compute descriptors, and used the BruteForce and FLANN matchers (including the knnMatch method) to match them against an image. They all work fine for a single image or a group of 10 images, in terms of both time and match quality. On iOS I store each image's keypoints and descriptors in .yml format using the FileManager, and later fetch them in a loop, matching against each stored image one by one. When I try to match against 100 images, it takes approximately 10 seconds to get through the whole loop.
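For reference, this is roughly how I save and reload the descriptors with cv::FileStorage (a minimal sketch, not my exact code; the file paths and the "descriptors" key name are just placeholders):

```cpp
#include <opencv2/core.hpp>
#include <string>

// Write one image's descriptors to a .yml file.
void saveDescriptors(const std::string& path, const cv::Mat& descriptors)
{
    cv::FileStorage fs(path, cv::FileStorage::WRITE);
    fs << "descriptors" << descriptors;  // keypoints can be stored the same way
    fs.release();
}

// Read them back later.
cv::Mat loadDescriptors(const std::string& path)
{
    cv::Mat descriptors;
    cv::FileStorage fs(path, cv::FileStorage::READ);
    fs["descriptors"] >> descriptors;
    fs.release();
    return descriptors;
}
```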

I am working on a scenario where I have about 1000 images: when I take a picture with the camera, the app should search through all of them and return the best matching image.
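My current matching loop looks roughly like this (again a sketch; descriptorPath() is a placeholder for however the .yml files are located, and loadDescriptors() is from the sketch above):

```cpp
#include <opencv2/features2d.hpp>
#include <string>
#include <vector>

std::string descriptorPath(int i);             // placeholder: .yml path for image i
cv::Mat loadDescriptors(const std::string&);   // from the sketch above

// Match the query against each stored image in turn and return the
// index of the one with the most good matches.
int findBestMatch(const cv::Mat& queryDescriptors, int numImages)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);   // NORM_HAMMING for ORB; NORM_L2 for SIFT/SURF
    int bestIndex = -1;
    size_t bestGoodMatches = 0;

    for (int i = 0; i < numImages; ++i)
    {
        cv::Mat trainDescriptors = loadDescriptors(descriptorPath(i));
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(queryDescriptors, trainDescriptors, knn, 2);

        size_t good = 0;
        for (const auto& m : knn)
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
                ++good;                        // Lowe's ratio test

        if (good > bestGoodMatches)
        {
            bestGoodMatches = good;
            bestIndex = i;
        }
    }
    return bestIndex;
}
```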

I have some doubts:

1. Is it correct to extract the keypoints and descriptors of all the images up front and store them in a file, or should I do something else?
2. Is retrieving the stored descriptors in a loop, as above, OK, or should I use something else? (One alternative I am wondering about is sketched below.)
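For doubt 2, this is the kind of alternative I mean but have not tried: adding all the stored descriptors to a single matcher up front, training it once, and matching the query in one call. This is an untested sketch with the same placeholders as above, not something I know to be faster:

```cpp
#include <opencv2/features2d.hpp>
#include <algorithm>
#include <string>
#include <vector>

std::string descriptorPath(int i);             // placeholder, as above
cv::Mat loadDescriptors(const std::string&);   // from the first sketch

int findBestMatchBatch(const cv::Mat& queryDescriptors, int numImages)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);   // NORM_L2 for SIFT/SURF
    for (int i = 0; i < numImages; ++i)
        matcher.add(std::vector<cv::Mat>{ loadDescriptors(descriptorPath(i)) });
    matcher.train();                           // prepare the collection once

    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(queryDescriptors, knn, 2);

    // DMatch::imgIdx tells which stored image each match came from.
    std::vector<int> votes(numImages, 0);
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++votes[m[0].imgIdx];

    return (int)(std::max_element(votes.begin(), votes.end()) - votes.begin());
}
```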

Please help me from the perspective of an iPhone app. I am very thankful for this.