The match method of the BFMatcher class takes two inputs, queryDescriptors and trainDescriptors, which are arrays of descriptors (normally each row of a cv::Mat corresponds to an individual descriptor), and produces a std::vector of cv::DMatch through its output parameter.
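
As a minimal sketch (assuming binary descriptors such as ORB, so NORM_HAMMING is the right distance; the variable names are placeholders for your own data):

```cpp
#include <opencv2/features2d.hpp>
#include <vector>

// Match each query descriptor against the train descriptors with brute force.
std::vector<cv::DMatch> matchDescriptors(const cv::Mat& queryDescriptors,
                                         const cv::Mat& trainDescriptors)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/false);
    std::vector<cv::DMatch> matches;
    matcher.match(queryDescriptors, trainDescriptors, matches); // one best match per query row
    return matches;
}
```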

Each element of this output vector describes the pairing of a query descriptor with a train descriptor. So matches[i] has three important attributes: queryIdx, trainIdx and distance. It states that row queryIdx of queryDescriptors matches row trainIdx of trainDescriptors, and distance measures how close the two descriptors are.
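
For illustration, this is how you would read those three fields from the result of the previous sketch:

```cpp
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

// Print the query/train row indices and the distance of each match.
void printMatches(const std::vector<cv::DMatch>& matches)
{
    for (const cv::DMatch& m : matches) {
        std::cout << "query row " << m.queryIdx
                  << " <-> train row " << m.trainIdx
                  << ", distance " << m.distance << std::endl;
    }
}
```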

After matching, you can assemble the inputs of solvePnP (an array of 2D image positions and an array with the corresponding 3D positions) using those indexes. How you do this depends on whether the 3D positions belong to the query or to the train descriptors. If you know the 3D positions of the train descriptors, you take the 2D positions of the matched query descriptors and compute the camera pose of the query image from these correspondences, as sketched below.
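
A sketch of that assembly, assuming the 3D points correspond to the train descriptors (trainPoints3d[i] is the 3D point of train row i), queryKeypoints are the keypoints the query descriptors were computed from, and cameraMatrix/distCoeffs come from your calibration; all names here are placeholders:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Build the 2D-3D correspondences from the match indexes and estimate the pose.
bool estimatePose(const std::vector<cv::DMatch>& matches,
                  const std::vector<cv::KeyPoint>& queryKeypoints,
                  const std::vector<cv::Point3f>& trainPoints3d,
                  const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                  cv::Mat& rvec, cv::Mat& tvec)
{
    std::vector<cv::Point3f> objectPoints;
    std::vector<cv::Point2f> imagePoints;
    for (const cv::DMatch& m : matches) {
        objectPoints.push_back(trainPoints3d[m.trainIdx]);    // 3D point of the train descriptor
        imagePoints.push_back(queryKeypoints[m.queryIdx].pt); // 2D point of the matched query descriptor
    }
    return cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
}
```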

Normally you use the distance attribute to filter out bad matches before the pose estimation, for example as in the sketch below.
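
One simple way to do this is to keep only the matches whose distance is below a threshold derived from the best (smallest) distance found; the factor 3.0 here is just an example value, not a recommendation:

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <limits>
#include <vector>

// Keep only matches whose distance is close to the best distance.
std::vector<cv::DMatch> filterMatches(const std::vector<cv::DMatch>& matches)
{
    double minDist = std::numeric_limits<double>::max();
    for (const cv::DMatch& m : matches)
        minDist = std::min<double>(minDist, m.distance);

    std::vector<cv::DMatch> good;
    for (const cv::DMatch& m : matches)
        if (m.distance <= 3.0 * minDist)
            good.push_back(m);
    return good;
}
```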

I hope that this helps.