
Revision history

initial version

Match using three sets of descriptors?

I am having trouble with a feature-matching workflow, and am looking for some help.

I have a stereo camera and am triangulating points from it using feature matching. In frame 1, I match points between the left and right images and triangulate them. In frame 2, I match points between frame 1 and frame 2, in the left frame only.

Now I need to find correspondences between the matched frame 2 keypoints and the triangulated 3D points, for solvePnP.

My workflow is:

    //frame 1:

    DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent); //Left image, left descriptors, left keypoints
    DetectKeypointsR(imRight, descCurrentR, keyPntsCurrentR); //Right image, right descriptors, right keypoints

    std::vector<cv::DMatch> matchR = tPointMatching->matchPoints(descCurrent, descCurrentR); //match left / right

    //triangulate:
    std::vector<cv::Point3f> objectPointsTri = triangulate(keyPntsCurrent, keyPntsCurrentR);

    //copy descriptors to 'previous'
    cv::Mat descPrevious;
    descCurrent.copyTo(descPrevious);

    //frame 2

    //match previous to current frame
    DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent);
    std::vector<cv::DMatch> match = tPointMatching->matchPoints(descPrevious, descCurrent);

Now I need to find the 3D points that correspond to the points in keyPntsCurrent that are matched to the previous frame in the last step. This is bending my brain; any help would be greatly appreciated!




Match 2D keypoints to 3D triangulated points using three sets of descriptors?

I am having trouble with a feature matching workflow, and am looking for some help.

I have a stereo camera and am triangulating points from it using feature matching. In frame 1, I match points between the left and right images and triangulate them. In frame 2, I match points between frame 1 and frame 2, in the left frame only.

Now I need to find correspondences between the matched frame 2 keypoints and the triangulated 3D points, for solvePnP.

My workflow is:

//frame 1:

    DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent); //Left image, left descriptors, left keypoints
    DetectKeypointsR(imRight, descCurrentR, keyPntsCurrentR); //Right image, right descriptors, right keypoints

    std::vector<cv::DMatch> matchR = tPointMatching->matchPoints(descCurrent, descCurrentR); //match left / right

    //sort after matching

    cv::Mat tempDescriptor(matchR.size(), descCurrent.cols, descCurrent.depth());

    int count = 0;
    for (size_t i = 0; i < matchR.size(); i++)
    {
        keyPntsGoodL.push_back(keyPntsCurrent[matchR[i].queryIdx]);
        keyPntsGoodR.push_back(keyPntsCurrentR[matchR[i].trainIdx]);

        descCurrent.row(i).copyTo(tempDescriptor.row(count));
        count += 1;
    }

    //triangulate:
    std::vector<cv::Point3f> objectPointsTri = triangulate(keyPntsGoodL, keyPntsGoodR);

    //copy descriptors to 'previous'
    cv::Mat descPrevious;
    tempDescriptor.copyTo(descPrevious);

//frame 2 EDIT

    DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent);

    //match to previous frame
    match = tPointMatching->matchPointsOG(descPrevious, descCurrent);

    //start tracker loop
    if (match.size() >= 5)
    {
        objectPointsGood.clear();
        keyPntsGood.clear();

        for (cv::DMatch& m : match)
        {
            cv::Point3f pos = objectPointsTri[m.trainIdx];
            cv::KeyPoint img = keyPntsCurrent[m.queryIdx];

            objectPointsGood.push_back(pos);
            keyPntsGood.push_back(img);
        }

        //solve
        if (objectPointsGood.size() != 0)
        {
            projectedPoints = tPnPSolvers->CvPnp(keyPntsGood, objectPointsGood, cameraMatrix, distCoeffs, rvec, tvec);
        }
    }

My matching function is:

std::vector<DMatch> PointMatching::matchPoints(cv::Mat descriptors1Cpu, cv::Mat descriptors2Cpu)
{


    descriptors1GPU.upload(descriptors1Cpu);
    descriptors2GPU.upload(descriptors2Cpu);

    // matching descriptors
    Ptr<cv::cuda::DescriptorMatcher> matcher = cv::cuda::DescriptorMatcher::createBFMatcher(NORM_HAMMING);
    vector<cv::DMatch> matches;
    vector< vector< DMatch> > knn_matches;

    matcher->knnMatch(descriptors1GPU, descriptors2GPU, knn_matches, 2);

    //Filter the matches using the ratio test
    for (std::vector<std::vector<cv::DMatch> >::const_iterator it = knn_matches.begin(); it != knn_matches.end(); ++it) {
        if (it->size() > 1 && (*it)[0].distance / (*it)[1].distance < 0.8) {
            matches.push_back((*it)[0]);
        }
    }

    return matches;

}

This runs, but the projected points flip around and are very far from stable. I have checked the 3D points, and they look correct (although there are some extras). I assume the issue is still with the correspondences?

(I have run my code using findChessboardCorners instead of natural features, and it runs as expected, so the calibration / triangulation code seems valid.) Any help would be greatly appreciated!

EDIT: I notice that because I am filtering the matches before they are returned, the original descriptors no longer line up with the returned matches. Is this correct? How can I return the descriptors that match keyPntsGoodL in the code above?

Match 2D keypoints to 3D triangulated points using three sets of descriptors?

I am having trouble with a feature matching workflow, and am looking for some help.

I have a stereo camera and am triangulating points from it using feature matching. In frame 1, I match points between the left and right images and triangulate them. In frame 2, I match points between frame 1 and frame 2, in the left frame only.

Now I need to find correspondences between the matched frame 2 keypoints and the triangulated 3D points, for solvePnP.

My function is:

void Tracking::PnpTests()
{
    cv::Mat rvec, tvec, rvec2, tvec2;
    std::string frames = "00";

    //sequential
    boost::circular_buffer<cv::Mat> frameArray((2));

    //storage for keypoints/descriptors
    cv::Mat descCurrent;
    cv::Mat descCurrentR;
    cv::Mat descPrevious;
    std::vector<cv::KeyPoint> keyPntsCurrent;
    std::vector<cv::KeyPoint> keyPntsGoodL;
    std::vector<cv::KeyPoint> keyPntsGoodR;
    std::vector<cv::KeyPoint> keyPntsCurrentMatch;
    std::vector<cv::KeyPoint> keyPntsCurrentR;
    std::vector<cv::KeyPoint> keyPntsPrevious;
    std::vector<cv::Point3f> Points3d;
    cv::Mat descCurrentMatched;// = cv::Mat(descCurrent.rows, descCurrent.cols, cv::DataType<float>::type);

    // Retrieve paths to images
    vector<string> vstrImageLeft;
    vector<string> vstrImageRight;
    vector<double> vTimestamps;
    LoadImages2(vstrImageLeft, vstrImageRight, vTimestamps);

    const int nImages = vstrImageLeft.size();
    cv::Size boardSize(8, 6);

    //triangulation storage
    std::vector<cv::Point3f> objectPointsTri;
    std::vector<cv::Point3f> objectPointsGood;
    std::vector<cv::KeyPoint> keyPntsTriReturn;
    std::vector<cv::KeyPoint> keyPntsGood;
    std::vector<cv::Point2f> projectedPoints;
    std::vector<cv::DMatch> matchR;
    std::vector<cv::DMatch> match;

    // Main loop
    int frameNumber = 0;
    cv::Mat imLeft, imRight, imStored;
    for (int ni = 0; ni < nImages; ni++)
    {
        imLeft = cv::imread("frames/left/" + vstrImageLeft[ni], CV_LOAD_IMAGE_UNCHANGED);
        imRight = cv::imread("frames/right/" + vstrImageRight[ni], CV_LOAD_IMAGE_UNCHANGED);
        if (imLeft.empty())
        {
            cerr << endl << "Failed to load image at: "
                << string(vstrImageLeft[ni]) << endl;
        }

        if (bFirstRun == false) //every run
        {
            int64 t01 = cv::getTickCount();

            //use features
            tFeatures->DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent);

            //knn brute-force match to previous frame
            match = tPointMatching->matchPointsOG2(descPrevious, descCurrent);

            Mat img_matches2;
            cv::drawMatches(Mat(imStored), keyPntsPrevious, Mat(imLeft), keyPntsCurrent, match, img_matches2);
            cv::namedWindow("matches2", 0);
            cv::imshow("matches2", img_matches2);
            cv::waitKey(1);

            //start tracker loop
            if (match.size() >= 5)
            {
                objectPointsGood.clear();
                keyPntsGood.clear();

                for (cv::DMatch& m : match)
                {
                    //use matched keys
                    cv::Point3f pos = objectPointsTri[m.trainIdx];
                    cv::KeyPoint img = keyPntsCurrent[m.queryIdx];

                    objectPointsGood.push_back(pos);
                    keyPntsGood.push_back(img);
                }

                //solve
                if (objectPointsGood.size() != 0)
                {
                    projectedPoints = tPnPSolvers->CvPnp(keyPntsGood, objectPointsGood, cameraMatrix, distCoeffs, rvec, tvec);
                }

                //flip
                cv::Mat RotMat;
                cv::Rodrigues(rvec, RotMat);
                RotMat = RotMat.t();
                tvec = -RotMat * tvec;

                //project
                for (int i = 0; i < projectedPoints.size(); i++)
                {
                    cv::drawMarker(imLeft, cv::Point(projectedPoints[i].x, projectedPoints[i].y), cv::Scalar(0, 0, 255), cv::MARKER_CROSS, 50, 10);
                }
            }
        }

        if (bFirstRun == true) //first time, store previous frame and get keys
        {
            cameraMatrix.zeros(3, 3, cv::DataType<float>::type);
            R.zeros(3, 3, cv::DataType<float>::type);
            t.zeros(3, 1, cv::DataType<float>::type);

            cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
            fs["cameraMatrix"] >> cameraMatrix;
            fs["dist_coeffs"] >> distCoeffs;

            tFeatures->DetectKeypointsL(imLeft, descCurrent, keyPntsCurrent); //Left image, left descriptors, left keypoints
            tFeatures->DetectKeypointsR(imRight, descCurrentR, keyPntsCurrentR); //Right image, right descriptors, right keypoints

            //knn matching / filter results
            matchR = tPointMatching->matchPointsOG2(descCurrent, descCurrentR); //match left / right

            cv::Mat tempDescriptor(matchR.size(), descCurrent.cols, descCurrent.depth());

            //sort after matching
            int count = 0;
            for (size_t i = 0; i < matchR.size(); i++)
            {
                keyPntsGoodL.push_back(keyPntsCurrent[matchR[i].queryIdx]);
                keyPntsGoodR.push_back(keyPntsCurrentR[matchR[i].trainIdx]);

                //sort descriptors to match keyPntsGoodL
                descCurrent.row(i).copyTo(tempDescriptor.row(count));
                count += 1;
            }

            //triangulate:
            objectPointsTri = tTriangulation->PointsTo3d(keyPntsGoodL, keyPntsGoodR);

            //initial solve
            if (objectPointsTri.size() != 0)
            {
                projectedPoints = tPnPSolvers->CvPnp(keyPntsGoodL, objectPointsTri, cameraMatrix, distCoeffs, rvec, tvec);
            }

            //copy descriptors to 'previous'
            tempDescriptor.copyTo(descPrevious);
            keyPntsPrevious = keyPntsGoodL;
            bFirstRun = false;

            imLeft.copyTo(imStored);
        }

        cv::Mat DisplayMat;
        const float r = 2;
        cv::cvtColor(imLeft, DisplayMat, CV_BGR2GRAY);
        cv::cvtColor(DisplayMat, DisplayMat, CV_GRAY2BGR);
        for (int i = 0; i < match.size(); i++)
        {
            cv::Point2f pt1, pt2;
            pt1.x = keyPntsCurrent[match[i].trainIdx].pt.x - r;
            pt1.y = keyPntsCurrent[match[i].trainIdx].pt.y - r;
            pt2.x = keyPntsCurrent[match[i].trainIdx].pt.x + r;
            pt2.y = keyPntsCurrent[match[i].trainIdx].pt.y + r;
            cv::rectangle(DisplayMat, pt1, pt2, cv::Scalar(200, 100, 0));
        }

        cv::imshow("Camera", DisplayMat);
        cv::waitKey(1);
    }
}

This runs, but the projected points flip around and are very far from stable. I have checked the 3D points, and they look correct (although there are some extras). I assume the issue is still with the correspondences?

(I have run my code using findChessboardCorners instead of natural features, and it runs as expected, so the calibration / triangulation code seems valid.) Any help would be greatly appreciated!

EDIT: I notice that because I am filtering the matches before they are returned, the original descriptors no longer line up with the returned matches. Is this correct? How can I return the descriptors that match keyPntsGoodL in the code above?