how to find the parameters needed in descriptorExtractor.match(..)

asked 2015-03-29 05:56:04 -0600 by RB

updated 2015-03-29 08:43:52 -0600 by berak

I want to find the similar features in two images. I wrote the code below and followed the required steps, but I do not know which parameters to pass to the method descripMatcher.match(par1, par2, par3).

As the docs read, I have to specify queryDescriptors and trainDescriptors. Is queryDescriptors the output of descExtract.compute(...)?

And what is trainDescriptors, and how can I obtain it?

Please have a look at the code below:

code:

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

                                /*Feature Detection*/
    FeatureDetector fDetect =  FeatureDetector.create(FeatureDetector.SIFT);

    MatOfKeyPoint matKeyPts_jf01 = new MatOfKeyPoint();
    fDetect.detect(path2Mat(path_jf01), matKeyPts_jf01);
    System.out.println("matKeyPts_jf01.size: " + matKeyPts_jf01.size());

    MatOfKeyPoint matKeyPts_jf01_rev = new MatOfKeyPoint();
    fDetect.detect(path2Mat(path_jf01_rev), matKeyPts_jf01_rev);
    System.out.println("matKeyPts_jf01_rev.size: " + matKeyPts_jf01_rev.size());

    Mat mat_jf01_OutPut = new Mat();
    Features2d.drawKeypoints(path2Mat(path_jf01), matKeyPts_jf01, mat_jf01_OutPut);
    Highgui.imwrite(path_jf01_DetectedOutPut, mat_jf01_OutPut);

    Mat mat_jf0_rev_OutPut = new Mat();
    Features2d.drawKeypoints(path2Mat(path_jf01_rev), matKeyPts_jf01_rev, mat_jf0_rev_OutPut);
    Highgui.imwrite(path_jf01_rev_DetectedOutPut, mat_jf0_rev_OutPut);

                                /*DescriptorExtractor*/
    DescriptorExtractor descExtract = DescriptorExtractor.create(DescriptorExtractor.SIFT);
    Mat mat_jf01_Descriptor = new Mat();
    descExtract.compute(path2Mat(path_jf01), matKeyPts_jf01, mat_jf01_Descriptor);
    System.out.println("mat_jf01_Descriptor.size: " + mat_jf01_Descriptor.size());

    Mat mat_jf01_rev_Descriptor = new Mat();
    descExtract.compute(path2Mat(path_jf01_rev), matKeyPts_jf01_rev, mat_jf01_rev_Descriptor);
    System.out.println("mat_jf01_rev_Descriptor.size: " + mat_jf01_rev_Descriptor.size());

                                /*DescriptorMatcher*/
    MatOfDMatch matDMatch_jf01 = new MatOfDMatch();
    MatOfDMatch matDMatch_jf01_rev = new MatOfDMatch();
    DescriptorMatcher descripMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
    descripMatcher.match(queryDescriptors, trainDescriptors, matDMatch_jf01);//is queryDescriptor = mat_jf01_Descriptor
                                                                      //what is trainDescriptor

}

private static Mat path2Mat(String path2File) {
    // TODO Auto-generated method stub
    return Highgui.imread(path2File);
}

1 answer

answered 2015-03-29 08:34:46 -0600 by Eduardo

Hi,

In your case, you have two images and you want to find the similar features between them. In my opinion you can use one of the two as the query descriptors and the other as the train descriptors, or try both combinations.
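
For your two-image case, a minimal sketch of the call, assuming mat_jf01_Descriptor is used as queryDescriptors and mat_jf01_rev_Descriptor as trainDescriptors (these are the variable names from your code; swapping them gives the other combination):

    // Match each descriptor of jf01 (query) to its nearest descriptor of jf01_rev (train).
    DescriptorMatcher descripMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
    MatOfDMatch matDMatch_jf01 = new MatOfDMatch();
    descripMatcher.match(mat_jf01_Descriptor, mat_jf01_rev_Descriptor, matDMatch_jf01);
    System.out.println("number of matches: " + matDMatch_jf01.toArray().length);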

If you deal with a reference or train image (for example a box) and you want to find it in a query image, you have to supply the correct query descriptors (computed from the query image) and train descriptors (computed from the train image, which contains only the object we want to detect).

For example (from this tutorial):

[image: keypoints detected on the box (train image) matched to keypoints in the scene (query image)]

I use this approach because, as you can see in the picture, you will match keypoints detected on the box with keypoints detected in the whole query image. If you do the reverse, you will match keypoints detected in the whole query image with keypoints detected on the box. Theoretically, this will lead to more false matches in the second case than in the first case.

Remember that a match for a query keypoint is the keypoint from the train keypoint set which is the closest in terms of descriptor distance. Also, the OpenCV match function matches each query descriptor against the train descriptors, so for the box example the queryDescriptors argument would receive the descriptors computed from the train image (the box) and the trainDescriptors argument would receive the descriptors computed from the query image (the scene)...
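
To make the query/train direction concrete, each resulting DMatch links one query descriptor index to its nearest train descriptor index. A small inspection sketch, assuming matDMatch_jf01 has been filled by a match(...) call as above:

    // queryIdx indexes into the query descriptor Mat, trainIdx into the train
    // descriptor Mat, and distance is the descriptor distance between the two.
    for (DMatch d : matDMatch_jf01.toArray()) {
        System.out.println("query " + d.queryIdx + " -> train " + d.trainIdx
                + " (distance " + d.distance + ")");
    }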

But there exist different strategies in the literature to eliminate false matches:

  • use a constant threshold
  • use the ratio between the first two best matches (see the sketch after this list)
  • etc.
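
As an illustration of the ratio test, here is a minimal sketch in Java, assuming the descriptor Mats from the question and a commonly used (but arbitrary) 0.75 threshold; it needs java.util.List, java.util.ArrayList and org.opencv.features2d.DMatch:

    // Take the two best matches for each query descriptor and keep the best one
    // only if it is clearly better than the second best.
    List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
    descripMatcher.knnMatch(mat_jf01_Descriptor, mat_jf01_rev_Descriptor, knnMatches, 2);

    List<DMatch> goodMatches = new ArrayList<DMatch>();
    for (MatOfDMatch m : knnMatches) {
        DMatch[] pair = m.toArray();
        if (pair.length >= 2 && pair[0].distance < 0.75f * pair[1].distance) {
            goodMatches.add(pair[0]);
        }
    }
    System.out.println("good matches after ratio test: " + goodMatches.size());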

And you should have approximately the same result.

This could be different if you have multiple train images of your object and one query image. In this case, I would use as the query descriptors the ones computed from the query image and as the train descriptors those computed from the train images.
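
For that multiple-train-image case, a rough sketch of the matcher's train-collection API; the descriptor variables here (queryDescriptors, mat_train01_Descriptor, mat_train02_Descriptor) are hypothetical placeholders:

    // Register one descriptor Mat per train image, then match the single query
    // descriptor Mat against the whole collection; DMatch.imgIdx tells which
    // train image each match came from.
    List<Mat> trainDescriptorsList = new ArrayList<Mat>();
    trainDescriptorsList.add(mat_train01_Descriptor);
    trainDescriptorsList.add(mat_train02_Descriptor);

    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
    matcher.add(trainDescriptorsList);
    matcher.train();

    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(queryDescriptors, matches);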


Comments

Thank you for your answer and the elaboration. But can I think of the query image as the image that has the key/main features I am looking for, and the train image as the image whose features will be matched against the features of the query image?

RB ( 2015-03-29 13:29:38 -0600 )
