Hello all.
I am completely new to the computer vision field, but it fascinates me! I now have a challenge on my hands and I am looking for mentors/advisers to give me some guidance.
My project is: I take a picture of a video-game cover, search for that picture in a video-game cover database, and if there is a very good match the app returns a string with the name of the video game and the platform it is available for.
Example of the procedure/problem:
1. Take a photo of a cover similar to this: http://i.imgur.com/gpZMbRm.jpg
2. The cover matches this one in the database: http://i.imgur.com/WXhPwf8.png
3. The app returns the string: "Fifa 12 Playstation 2"
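To make the lookup concrete, this is the kind of loop I have in mind (just a sketch: `match_score` below is a crude nearest-neighbour placeholder with a made-up distance threshold, not a real matcher, and the threshold of 0.25 is a guess):

```python
import numpy as np

def match_score(query_desc, cover_desc):
    # Placeholder scoring: fraction of query descriptors whose nearest
    # neighbour in the cover is "close enough". A real version would use
    # a ratio test and geometric verification instead of a fixed 0.5 cutoff.
    if len(query_desc) == 0 or len(cover_desc) == 0:
        return 0.0
    good = 0
    for d in query_desc:
        dists = np.linalg.norm(cover_desc - d, axis=1)
        if dists.min() < 0.5:
            good += 1
    return good / len(query_desc)

def best_cover_match(query_desc, database, threshold=0.25):
    """Return (name, platform) of the best-scoring cover,
    or None if nothing clears the threshold."""
    best, best_score = None, 0.0
    for entry in database:  # entry: {"name", "platform", "url", "descriptors"}
        score = match_score(query_desc, entry["descriptors"])
        if score > best_score:
            best, best_score = entry, score
    if best is not None and best_score >= threshold:
        return best["name"], best["platform"]
    return None
```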
As I said, I have very little background in computer vision, but I have already done some research. So far I have read that I should store in my database the name of the game, the platform, the URL of the cover, and the keypoints/descriptors of the image.
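For storage, my current idea is plain sqlite3 with the descriptor array serialized to bytes, keeping the shape so it can be rebuilt on load (this schema and these helpers are just my plan, not tested against real SURF output):

```python
import sqlite3
import numpy as np

# Hypothetical schema for the cover database; descriptors are stored as raw
# bytes plus enough shape info to rebuild the numpy array on load.
conn = sqlite3.connect(":memory:")  # use a real file path in practice
conn.execute("""
    CREATE TABLE covers (
        name        TEXT,
        platform    TEXT,
        cover_url   TEXT,
        descriptors BLOB,
        n_rows      INTEGER,
        n_cols      INTEGER
    )
""")

def save_cover(name, platform, url, desc):
    # Force a known dtype so the bytes round-trip unambiguously.
    desc = np.asarray(desc, dtype=np.float32)
    conn.execute(
        "INSERT INTO covers VALUES (?, ?, ?, ?, ?, ?)",
        (name, platform, url, desc.tobytes(), desc.shape[0], desc.shape[1]),
    )

def load_covers():
    for name, platform, url, blob, r, c in conn.execute("SELECT * FROM covers"):
        desc = np.frombuffer(blob, dtype=np.float32).reshape(r, c)
        yield name, platform, url, desc
```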
I have chosen the SURF feature detector/extractor. The output of my first trials looks something like this: http://i.imgur.com/Rj9lcBr.png . But there are some concepts I am still confused about... I am not looking for similarity, right? I just need to check whether there are enough good keypoint pairs/matches, right? Because for the images above I get: "img1 - 1087 features (query image), img2 - 1755 features - 30 % - SIMILARITY - 321/333 inliers/matched".
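My current understanding of a "good match" is Lowe's ratio test: a query descriptor only counts if its nearest neighbour among the cover's descriptors is clearly closer than the second-nearest. A minimal numpy sketch of what I think that means (the 0.75 ratio is just the value I keep seeing quoted, not something I have tuned):

```python
import numpy as np

def ratio_test_score(query_desc, cover_desc, ratio=0.75):
    """Fraction of query descriptors that pass Lowe's ratio test:
    the nearest neighbour must clearly beat the runner-up."""
    query_desc = np.asarray(query_desc, dtype=np.float64)
    cover_desc = np.asarray(cover_desc, dtype=np.float64)
    if len(query_desc) == 0 or len(cover_desc) < 2:
        return 0.0
    good = 0
    for d in query_desc:
        dists = np.sort(np.linalg.norm(cover_desc - d, axis=1))
        if dists[0] < ratio * dists[1]:  # unambiguous nearest neighbour
            good += 1
    return good / len(query_desc)
```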
What are the inliers? My similarity calculation seems wrong to me... I would say these two images look about 70 % alike.
P.S.: I am using Python.
Thanks for your time and help, and sorry if I could have explained my problems/concerns better.