
Feature matching 2d to 3d points. [closed]

asked 2015-06-16 10:08:40 -0600

stillNovice

I am attempting a solvePnPRansac implementation, and have coordinates saved as:

 std::vector<cv::KeyPoint> Keypoints; 
 std::vector<cv::Point3f> points3d;

I need to feature match these vectors to get the corresponding matches and store them in:

 std::vector<cv::DMatch> pnp_matches;

How best to go about this?

To feature-match two images I could do:

 FastFeatureDetector detector(threshold); 
 detector.detect(img1, keypoints1);     
 detector.detect(img2, keypoints2);

 OrbDescriptorExtractor extractor;          
 extractor.compute(img1, keypoints1, descriptors1);             
 extractor.compute(img2, keypoints2, descriptors2);

 BFMatcher matcher(NORM_HAMMING); // ORB descriptors are binary, so use Hamming distance rather than L2
 matcher.match(descriptors1, descriptors2, matches);

But what approach do I need when matching 2D to 3D correspondences?

Thanks.


Closed for the following reason: the question is answered, right answer was accepted by berak
close date 2015-06-16 12:24:40.443689

Comments

This demo: http://docs.opencv.org/master/dc/d2c/...

uses a Mat that holds descriptors of the 3D points. If I had that, I could use this method. But how do I get the descriptors of 3D points?

stillNovice ( 2015-06-16 11:28:56 -0600 )
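A common way to get them is to let each 3D point inherit the descriptor of the 2D keypoint it was triangulated from, stored row-aligned with the point list. A minimal sketch with placeholder names (this is the standard bookkeeping, not code from the linked demo):

 #include <opencv2/opencv.hpp>
 
 // Hypothetical bookkeeping: whenever a keypoint is triangulated into a
 // 3d point, keep its descriptor row, so that row i of descriptors3d
 // pairs with points3d[i].
 cv::Mat descriptors3d;              // one descriptor row per 3d point
 std::vector<cv::Point3f> points3d;
 
 void addTriangulatedPoint(const cv::Point3f& pt, const cv::Mat& descriptorRow)
 {
     points3d.push_back(pt);
     descriptors3d.push_back(descriptorRow); // e.g. descriptors.row(k)
 }
 // A new frame's descriptors can then be matched directly against descriptors3d:
 // matcher.match(frameDescriptors, descriptors3d, pnp_matches);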

Hello, I have a question. In step 5, pose estimation using PnP + RANSAC, it just gets rvec and tvec through solvePnPRansac. How can I compute the camera position? As I understand it, the camera position is just x, y and z.

zeb.Z ( 2016-03-03 03:02:12 -0600 )

1 answer


answered 2015-06-16 11:34:36 -0600

R.Saracchini

The match procedure of the BFMatcher class takes two inputs, queryDescriptors and trainDescriptors, which are arrays of descriptors (normally each row of a cv::Mat object corresponds to an individual descriptor), and returns a std::vector of cv::DMatch objects.

Each element of this output array corresponds to the correlation of a matched query descriptor to a train descriptor. So matches[i] has three important attributes: trainIdx, queryIdx and distance. This element states that row queryIdx of queryDescriptors matches row trainIdx of trainDescriptors with distance distance.

After matching, you can assemble the inputs of solvePnP (an array of 2D positions and an array with the corresponding 3D positions) using those indices. Which index you use depends on whether the 3D positions come from the query or the train descriptors. If you know the 3D positions of the train descriptors, you use the 2D positions of the matched query descriptors and then compute the camera pose of the query image from this matching data.
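A minimal sketch of that assembly, assuming the 3D positions are row-aligned with the train descriptors (the function and variable names are placeholders, not from the thread):

 #include <opencv2/opencv.hpp>
 
 // Hypothetical helper: builds the solvePnPRansac inputs from the match
 // indices. Assumes points3d[j] corresponds to row j of the train
 // descriptors, and keypoints[i] to row i of the query descriptors.
 void poseFromMatches(const std::vector<cv::DMatch>& matches,
                      const std::vector<cv::KeyPoint>& keypoints,
                      const std::vector<cv::Point3f>& points3d,
                      const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                      cv::Mat& rvec, cv::Mat& tvec)
 {
     std::vector<cv::Point2f> imagePoints;
     std::vector<cv::Point3f> objectPoints;
     for (const cv::DMatch& m : matches) {
         imagePoints.push_back(keypoints[m.queryIdx].pt); // 2d point in the query image
         objectPoints.push_back(points3d[m.trainIdx]);    // matched 3d point
     }
     cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                        rvec, tvec);
 }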

Normally you use the distance attribute to filter out bad matches.
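For example, a simple threshold filter; the cutoff below is an arbitrary assumption that should be tuned for the descriptor in use:

 // Keep only matches whose descriptor distance is below some cutoff;
 // the value 50 is an arbitrary example for binary (Hamming) descriptors.
 std::vector<cv::DMatch> good_matches;
 for (const cv::DMatch& m : matches) {
     if (m.distance < 50)
         good_matches.push_back(m);
 }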

I hope that this helps.


Comments

thank you!

stillNovice ( 2015-06-16 12:19:34 -0600 )

I got rvec and tvec from solvePnPRansac. How can I compute the camera position and draw its trajectory?

zeb.Z ( 2016-03-03 02:43:17 -0600 )
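For the camera-position questions in the comments: rvec and tvec from solvePnP/solvePnPRansac describe the world-to-camera transform, so the camera centre in world coordinates is C = -R^T * t. A minimal sketch, assuming rvec and tvec come straight from solvePnPRansac:

 // rvec and tvec describe the world-to-camera transform, so invert it:
 cv::Mat R;
 cv::Rodrigues(rvec, R);         // Rodrigues vector -> 3x3 rotation matrix
 cv::Mat camPos = -R.t() * tvec; // camera centre C = -R^T * t, in world coords
 // camPos is 3x1 (x, y, z); collect one per frame to plot the trajectory.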
