First of all, let me say that something does indeed seem to be going wrong in the wrapping of the C++ interface. One would expect the wrapped interface to produce exactly the same results as the C++ one; otherwise there would be no point in providing wrapper functionality at all. So there is a good chance this is indeed a bug. To answer your questions:
First question: does the order of the rows in a descriptor matrix fundamentally matter to the DescriptorMatcher.match() method? It appears it does, but I can't find any documentation that says it should.
The order of the rows, and thus the order of the points, should not influence the end result as long as you perform brute-force matching of your keypoints, which is a one-versus-all approach: each query descriptor is compared against every descriptor in the training set independently. Looking at the description of the match function, I would expect that to be the case, since each given point is checked against the known descriptor database of the training set. A quick way to convince yourself is the sketch below.
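This is a minimal sketch (not a definitive test) using random binary descriptors and `cv::BFMatcher` with the Hamming norm; the descriptor sizes and the fixed random seed are arbitrary choices of mine. It shuffles the query rows and checks that every row still gets paired with the same training row, only in a different output order.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    // Two sets of random binary descriptors (32 bytes per row, like ORB).
    cv::Mat train(100, 32, CV_8U), query(50, 32, CV_8U);
    cv::randu(train, cv::Scalar::all(0), cv::Scalar::all(256));
    cv::randu(query, cv::Scalar::all(0), cv::Scalar::all(256));

    cv::BFMatcher matcher(cv::NORM_HAMMING);

    std::vector<cv::DMatch> matchesOriginal;
    matcher.match(query, train, matchesOriginal);

    // Shuffle the query rows and match again.
    std::vector<int> perm(query.rows);
    std::iota(perm.begin(), perm.end(), 0);
    std::shuffle(perm.begin(), perm.end(), std::mt19937(42));

    cv::Mat shuffled(query.rows, query.cols, query.type());
    for (int i = 0; i < query.rows; ++i)
        query.row(perm[i]).copyTo(shuffled.row(i));

    std::vector<cv::DMatch> matchesShuffled;
    matcher.match(shuffled, train, matchesShuffled);

    // Every query row should still be paired with the same training row.
    int disagreements = 0;
    for (size_t i = 0; i < matchesShuffled.size(); ++i)
        if (matchesShuffled[i].trainIdx != matchesOriginal[perm[i]].trainIdx)
            ++disagreements;

    std::cout << "rows matched differently after shuffling: "
              << disagreements << std::endl;   // expected: 0
    return 0;
}
```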
Second question: does the order of keypoints from the FeatureDetector matter to the DescriptorExtractor.compute() method? Again, empirically, it seems so but I can't find any documentation that states it explicitly.
Same as above. If you have an identical image, or apply the feature detector multiple times to the same image, the order of the features should be identical. If the image has shifted slightly, it is possible the order changes, but with brute-force matching this should not create any problem at all, since matches are decided by descriptor distance, not by index. The sketch below shows one way to check both claims at once.
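This is another rough sketch, assuming the OpenCV 2.4-style C++ factory API (`FeatureDetector::create` / `DescriptorExtractor::create`), with ORB as an arbitrary choice of detector/extractor and `"scene.png"` as a placeholder path. It detects and extracts twice on the same image, then brute-force matches run A against run B; if both runs behave identically, every match should connect keypoints at (nearly) the same pixel location, whatever index order the detector happened to return them in.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("scene.png", 0);   // placeholder path, loaded as grayscale
    if (img.empty()) { std::cerr << "could not load image" << std::endl; return 1; }

    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("ORB");
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("ORB");

    // Run detection + extraction twice on the exact same image.
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    detector->detect(img, kpA);
    extractor->compute(img, kpA, descA);
    detector->detect(img, kpB);
    extractor->compute(img, kpB, descB);

    // Brute-force match run A against run B.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);

    // Count matches whose two keypoints do not sit on (nearly) the same pixel.
    int displaced = 0;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        cv::Point2f a = kpA[matches[i].queryIdx].pt;
        cv::Point2f b = kpB[matches[i].trainIdx].pt;
        float dx = a.x - b.x, dy = a.y - b.y;
        if (dx * dx + dy * dy > 0.25f)
            ++displaced;
    }
    std::cout << "matches landing on a different location: "
              << displaced << " / " << matches.size() << std::endl;
    return 0;
}
```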
Final question: Why would the C++ feature detector return keypoints in a different order for the exact same image? I've verified that the image, in Mat structure, is exactly the same in both implementations.
As I said, my guess is a bug in how the function gets wrapped. However, you need someone with Java and Android experience for that. @berak, do you maybe have a clue about what is happening here?