2014-01-31 05:48:50 -0600 | commented answer | Algorithm used in Descriptor Matcher trainer in OpenCV So what is happening with the other matchers? |
2014-01-29 12:20:07 -0600 | asked a question | Algorithm used in Descriptor Matcher trainer in OpenCV The code snippet below shows the basics of training a descriptor matcher used in object recognition. The code is not syntactically correct; however, I want to know how the |
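The question above concerns what `DescriptorMatcher::add`/`train`/`match` actually do. As a minimal sketch (not OpenCV's internals), a brute-force matcher "trains" by simply storing the train descriptors, and `match` finds the nearest stored descriptor for each query descriptor; the class and names below are illustrative only:

```python
import numpy as np

class BruteForceMatcher:
    """Toy stand-in for cv::BFMatcher: stores train descriptors, matches by L2."""

    def __init__(self):
        self.train_descriptors = []

    def add(self, descriptors):
        # Accumulate train descriptor sets, as DescriptorMatcher::add does.
        self.train_descriptors.append(descriptors)

    def train(self):
        # Brute force has nothing to precompute; index-based matchers
        # (e.g. FLANN) would build their search structure here instead.
        self.index = np.vstack(self.train_descriptors)

    def match(self, query):
        # Pairwise L2 distances, shape (num_query, num_train).
        dists = np.linalg.norm(query[:, None, :] - self.index[None, :, :], axis=2)
        return dists.argmin(axis=1)

rng = np.random.default_rng(0)
train = rng.random((50, 32)).astype(np.float32)
matcher = BruteForceMatcher()
matcher.add(train)
matcher.train()
# Querying with train descriptors themselves should match each to itself.
idx = matcher.match(train[:5])
print(idx)  # → [0 1 2 3 4]
```

The point of `train()` only becomes visible with index-based matchers, where it builds the k-d tree or LSH tables; for brute force it is effectively a no-op.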
2013-12-01 18:31:33 -0600 | asked a question | Error in calculating perspective transform for opencv in Matlab I am trying to recode feature matching and homography using mexopencv. Mexopencv ports the OpenCV vision toolbox into Matlab. My code in Matlab using the OpenCV toolbox: The error: Now, I have come across this, which replicates my error. The problem faced in that post is similar to mine, and I fixed the issue regarding the setting of object coordinates. How did I do that? I have working code which takes in a live feed from a webcam and matches images by checking for a homography in pure OpenCV, i.e. without the Matlab platform involved. The problem starts from checking out the |
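The error in this question came from how the object (point) coordinates were laid out before computing the perspective transform. As a hedged illustration of the math that `findHomography` solves once the correspondences are shaped as N-by-2 point arrays, here is a minimal DLT (direct linear transform) sketch without RANSAC; all names are illustrative:

```python
import numpy as np

def find_homography_dlt(src, dst):
    """Minimal DLT sketch of the linear system behind findHomography.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Returns the 3x3 homography H with dst ~ H @ src (homogeneous).
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=np.float64)
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Sanity check against a known transform (pure translation by (5, -3)).
H_true = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [2, 3]], dtype=np.float64)
src_h = np.hstack([src, np.ones((5, 1))])
dst = (src_h @ H_true.T)[:, :2]
H = find_homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-8))  # → True
```

In mexopencv, the same shape discipline applies: the point lists passed to `cv.findHomography` must be numeric N-by-2 (single/double) arrays or cell arrays of 2-element points, which is typically where coordinate-setup errors like this one originate.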
2013-11-03 18:41:21 -0600 | asked a question | Displaying meta-data of matched image to user after successful image matching via surf I managed to match two images using SURF; now I want to tell the user somehow, via a text string, that the matching succeeded. The rectangle that shows the match is visually appealing, but is there a way to store meta-data within the Mat struct, so that after a successful match it can show the user the meta-data? |
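Since `cv::Mat` holds only pixel data, the usual approach is a side table keyed by the train-image index (`DMatch::imgIdx` in OpenCV). A minimal sketch, with hypothetical file names and a list of `(img_idx, distance)` pairs standing in for `DMatch` records:

```python
# Hypothetical metadata table, keyed by train-image index (like DMatch.imgIdx).
metadata = {
    0: {"file": "box.png", "caption": "cereal box, front face"},
    1: {"file": "cover.png", "caption": "book cover"},
}

def announce_match(matches, min_matches=10):
    """Return a user-facing string for the train image with the most matches.

    `matches` is a list of (img_idx, distance) pairs standing in for DMatch.
    """
    counts = {}
    for img_idx, _dist in matches:
        counts[img_idx] = counts.get(img_idx, 0) + 1
    best, n = max(counts.items(), key=lambda kv: kv[1])
    if n < min_matches:
        return "no reliable match"
    info = metadata[best]
    return f"matched {info['file']}: {info['caption']} ({n} good matches)"

fake_matches = [(0, 0.2)] * 12 + [(1, 0.3)] * 3
print(announce_match(fake_matches))
# → matched box.png: cereal box, front face (12 good matches)
```

The same pattern works in C++: keep a `std::vector` or `std::map` of metadata parallel to the vector of train images passed to the matcher, and index it with the winning `imgIdx`.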
2013-11-03 18:23:05 -0600 | commented answer | Sample image as training image Would you add a few more lines? |
2013-11-01 07:02:14 -0600 | asked a question | Sample image as training image So, I was following this code sample from OpenCV about SURF and homography, and I was interested in the train sample required for such an experiment. I downloaded the two images at the bottom, box.png and box_in_scene.png, to validate the correctness of this code, and it was alright. Then I went to test the code with my own images: on the left is an image of a flash drive, and on the right is an image of a scissor with a USB drive. I failed to get any rectangular box on the test image (the scissor and USB drive). However, I know the code works when I take a different train sample, for example one with a paper box on the left and the paper box mixed in with a bed sheet. Now my question is: what sort of training images should I rely on to get a good response, or does it have something to do with the scenery I choose as my test sample? Also, had I chosen a video sample as my test case, would I receive a more responsive result? Thanks. |
2013-10-08 06:35:12 -0600 | asked a question | Native OpenCV C++ for Android Hi, I was interested in porting link text to Android. I got this link text so far. Is this good enough? I am particularly interested in porting native OpenCV C++ to Android, since many of the APIs are more easily obtainable in C++ than in Java. Plus, I am also looking for a Bag of Words feature extractor, for which I have received a lot of support from the C++ API. |
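The Bag-of-Words extractor mentioned above (`BOWKMeansTrainer` / `BOWImgDescriptorExtractor` in OpenCV's C++ API) boils down to two steps: cluster the train descriptors into a visual vocabulary, then describe each image as a histogram of nearest visual words. A hedged numpy-only sketch of that pipeline, with toy random descriptors standing in for real SURF output:

```python
import numpy as np

def kmeans(descriptors, k, iters=20, seed=0):
    """Tiny k-means to build a visual vocabulary (stand-in for BOWKMeansTrainer)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute centers.
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, vocabulary):
    """Quantize an image's descriptors into a normalized word-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(1)
train_desc = rng.random((200, 64))   # toy stand-in for SURF descriptors
vocab = kmeans(train_desc, k=8)
hist = bow_histogram(rng.random((30, 64)), vocab)
print(hist.shape)  # → (8,)
```

Because the whole pipeline is plain linear algebra over descriptor matrices, it ports to Android NDK C++ with no Java-side dependencies, which is one reason the C++ API is the more convenient target here.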