
Revision history

Revision 1 (initial version)

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images (steps 1 and 2 are sketched below).
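
For reference, steps 1 and 2 in a minimal form look roughly like this (just a sketch against the OpenCV 2.4 API I'm using; estimateH, frame and mosaic are placeholder names, not part of my actual program):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Estimate the homography that maps `frame` onto `mosaic`.
Mat estimateH(const Mat& frame, const Mat& mosaic)
{
    // Step 1: detect ORB keypoints and compute their binary descriptors.
    OrbFeatureDetector detector;
    OrbDescriptorExtractor extractor;
    vector<KeyPoint> kpF, kpM;
    Mat descF, descM;
    detector.detect(frame, kpF);
    detector.detect(mosaic, kpM);
    extractor.compute(frame, kpF, descF);
    extractor.compute(mosaic, kpM, descM);

    // Match the binary ORB descriptors with the Hamming distance.
    BFMatcher matcher(NORM_HAMMING);
    vector<DMatch> matches;
    matcher.match(descF, descM, matches);

    // Step 2: robust homography from the matched point pairs.
    vector<Point2f> src, dst;
    for (size_t i = 0; i < matches.size(); i++) {
        src.push_back(kpF[matches[i].queryIdx].pt);
        dst.push_back(kpM[matches[i].trainIdx].pt);
    }
    if (src.size() < 4) return Mat(); // findHomography needs at least 4 pairs
    return findHomography(src, dst, CV_RANSAC);
}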

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

Revision 2

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

Revision 3

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

EDIT: Here's my new code:

// I create a big image for the mosaic
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);

// I take the last stitched frame
current_frame = final_img(Rect(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows));

// homography calculation
Mat H = findHomography(obj, scene, CV_RANSAC);

// check if I'm "out of bounds"
if (H.at<double>(0, 2) < 0) H.at<double>(0, 2) = 0;
if (H.at<double>(1, 2) < 0) H.at<double>(1, 2) = 0;

// warp and paste the images
static Mat rImg;
warpPerspective(current_frame, rImg, H, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);

Mat roi1(final_img, Rect(img_loop.cols, img_loop.rows, img_loop.cols, img_loop.rows));
Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
img_loop.copyTo(roi1);

but it's not working. I think my problem is the roi1 and roi2 definitions. How can I define them dynamically?
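
One idea for deriving the ROI dynamically (just a sketch, untested; it reuses img_loop, H and final_img from the code above) is to transform the frame's corners with H and take their bounding rectangle, clipped to the canvas:

// Where does the warped frame land on the mosaic canvas?
vector<Point2f> corners(4), warped_corners(4);
corners[0] = Point2f(0.0f, 0.0f);
corners[1] = Point2f((float)img_loop.cols, 0.0f);
corners[2] = Point2f((float)img_loop.cols, (float)img_loop.rows);
corners[3] = Point2f(0.0f, (float)img_loop.rows);
perspectiveTransform(corners, warped_corners, H);

// The paste ROI is the bounding rectangle of the warped corners,
// clipped to the canvas so copyTo can never run out of bounds.
Rect canvas(0, 0, final_img.cols, final_img.rows);
Rect paste_roi = boundingRect(warped_corners) & canvas;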

Revision 4

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

EDIT: Here's my new code:

// I create a big image for the mosaic
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);

// I take the last stitched frame
current_frame = final_img(Rect(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows));

// homography calculation
Mat H = findHomography(obj, scene, CV_RANSAC);

// check if I'm "out of bounds"
if (H.at<double>(0, 2) < 0) H.at<double>(0, 2) = 0;
if (H.at<double>(1, 2) < 0) H.at<double>(1, 2) = 0;

// warp and paste the images
static Mat rImg;
warpPerspective(current_frame, rImg, H, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);

Mat roi1(final_img, Rect(img_loop.cols, img_loop.rows, img_loop.cols, img_loop.rows));
Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
img_loop.copyTo(roi1);

but it's not working. I think my problem is the roi1 and roi2 definitions. How can I define them dynamically?

EDIT 2

Now, using the offset from the homography matrix, I obtain this: http://youtu.be/TI5hJRr7hkM

It's stitching the right way, but now I have the problem that a part of the mosaic moves and makes my program crash. I've found that the moving part is the Mat where warpPerspective saves the resulting image. How can I resolve this?
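
From what I've read, the usual way to keep the warp inside the destination Mat (a sketch of that idea, untested; tx and ty are placeholders for wherever the mosaic origin sits inside final_img) is to pre-multiply H by a translation and warp into a canvas-sized image:

// Translation that shifts the warped frame into the visible canvas.
double tx = img.cols;
double ty = img.rows;
Mat T = (Mat_<double>(3, 3) << 1, 0, tx,
                               0, 1, ty,
                               0, 0, 1);

// T * H maps the frame directly onto the canvas, so the result cannot
// move outside the destination image.
Mat warped;
warpPerspective(img_loop, warped, T * H, final_img.size(), INTER_NEAREST);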

Revision 5

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

EDIT: Here's my new code:

// I create a big image for the mosaic
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);

// I take the last stitched frame
current_frame = final_img(Rect(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows));

// homography calculation
Mat H = findHomography(obj, scene, CV_RANSAC);

// check if I'm "out of bounds"
if (H.at<double>(0, 2) < 0) H.at<double>(0, 2) = 0;
if (H.at<double>(1, 2) < 0) H.at<double>(1, 2) = 0;

// warp and paste the images
static Mat rImg;
warpPerspective(current_frame, rImg, H, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);

Mat roi1(final_img, Rect(img_loop.cols, img_loop.rows, img_loop.cols, img_loop.rows));
Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
img_loop.copyTo(roi1);

but it's not working. I think my problem is the roi1 and roi2 definitions. How can I define them dynamically?

EDIT 2

Here's my code; I hope someone can help me see the light at the end of the tunnel!!!

// I create the final image and copy the first frame into the middle of it
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);


// I take only a part of the complete final image
Rect current_frame_roi(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows);

while (true)
{

    //take the new frame
    cap >> img_loop;
    if (img_loop.empty()) break;

    //take a part of the final image
    current_frame = final_img(current_frame_roi);


    //convert to grayscale
    cvtColor(current_frame, gray_image1, CV_RGB2GRAY);
    cvtColor(img_loop, gray_image2, CV_RGB2GRAY);


    // First step: feature extraction with ORB
    static int minHessian = 400;
    OrbFeatureDetector detector(minHessian);


    vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);


    //Second step: descriptor extraction
    OrbDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);


    //Third step: match with BFMatcher
    BFMatcher matcher(NORM_HAMMING,false);
    vector< DMatch > matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;


    // distance between keypoints
    // with ORB it works better without it
    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    */


    // take just the good points
    // with ORB it works better without it
    vector< DMatch > good_matches;

    good_matches = matches;

    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance <= 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }*/
    vector< Point2f > obj;
    vector< Point2f > scene;


    //take the keypoints
    for (int i = 0; i < good_matches.size(); i++)
    {
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    //static Mat mat_match;
    //drawMatches(img_loop, keypoints_object, current_frame, keypoints_scene,good_matches, mat_match, Scalar::all(-1), Scalar::all(-1),vector<char>(), 0);


    // homography with RANSAC
    if (obj.size() >= 4)
    {

        Mat H = findHomography(obj, scene, CV_RANSAC,5);


        //take the x_offset and y_offset
        /*the offset matrix is of the type

        | 1 0 x_offset |
        | 0 1 y_offset |
        | 0 0 1        |
        */
        offset.at<double>(0, 2) = H.at<double>(0, 2);
        offset.at<double>(1, 2) = H.at<double>(1, 2);


        // use warpPerspective to find how to blend the images
        Mat rImg;
        warpPerspective(current_frame, rImg, H * offset, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);


        //find the new frame coordinates
        /*HERE'S SOMETHING WRONG FOR SURE*/
        vector<Point2f> corners(4);
        corners[0] = Point2f(0, 0);
        corners[1] = Point2f(0, rImg.rows);
        corners[2] = Point2f(rImg.cols, 0);
        corners[3] = Point2f(rImg.cols, rImg.rows);

        vector<Point2f> corner_trans(4);
        perspectiveTransform(corners, corner_trans, H);

        //get the new roi
        /*HERE'S SOMETHING WRONG FOR SURE*/
        //current_frame_roi = Rect(corner_trans[0], corner_trans[3]);
        //current_frame_roi = (final_img,Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));

        Mat roi1(final_img, Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));
        Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
        rImg.copyTo(roi2);
        img_loop.copyTo(roi1);

        cout << ++counter << endl;

        namedWindow("Img",WINDOW_NORMAL);
        imshow("Img",final_img);
        waitKey(10);

    }

}
imwrite("result.jpg",final_img(bound));

}

Revision 6

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

EDIT: Here's my new code:

// I create a big image for the mosaic
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);

// I take the last stitched frame
current_frame = final_img(Rect(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows));

// homography calculation
Mat H = findHomography(obj, scene, CV_RANSAC);

// check if I'm "out of bounds"
if (H.at<double>(0, 2) < 0) H.at<double>(0, 2) = 0;
if (H.at<double>(1, 2) < 0) H.at<double>(1, 2) = 0;

// warp and paste the images
static Mat rImg;
warpPerspective(current_frame, rImg, H, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);

Mat roi1(final_img, Rect(img_loop.cols, img_loop.rows, img_loop.cols, img_loop.rows));
Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
img_loop.copyTo(roi1);

but it's not working. I think my problem is the roi1 and roi2 definitions. How can I define them dynamically?

EDIT 2

Here's my code; I hope someone can help me see the light at the end of the tunnel!!!

// I create the final image and copy the first frame into the middle of it
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);


// I take only a part of the complete final image
Rect current_frame_roi(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows);

while (true)
{

    //take the new frame
    cap >> img_loop;
    if (img_loop.empty()) break;

    //take a part of the final image
    current_frame = final_img(current_frame_roi);


    //convert to grayscale
    cvtColor(current_frame, gray_image1, CV_RGB2GRAY);
    cvtColor(img_loop, gray_image2, CV_RGB2GRAY);


    // First step: feature extraction with ORB
    static int minHessian = 400;
    OrbFeatureDetector detector(minHessian);



    vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);



    //Second step: descriptor extraction
    OrbDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);



    //Third step: match with BFMatcher
    BFMatcher matcher(NORM_HAMMING,false);
    vector< DMatch > matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;



    // distance between keypoints
    // with ORB it works better without it
    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    */




    // take just the good points
    // with ORB it works better without it
    vector< DMatch > good_matches;

    good_matches = matches;

    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance <= 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }*/
    vector< Point2f > obj;
    vector< Point2f > scene;


    //take the keypoints
    for (int i = 0; i < good_matches.size(); i++)
    {
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    //static Mat mat_match;
    //drawMatches(img_loop, keypoints_object, current_frame, keypoints_scene,good_matches, mat_match, Scalar::all(-1), Scalar::all(-1),vector<char>(), 0);


    // homography with RANSAC
    if (obj.size() >= 4)
    {

        Mat H = findHomography(obj, scene, CV_RANSAC,5);


        //take the x_offset and y_offset
        /*the offset matrix is of the type

        | 1 0 x_offset |
        | 0 1 y_offset |
        | 0 0 1        |
        */
        offset.at<double>(0, 2) = H.at<double>(0, 2);
        offset.at<double>(1, 2) = H.at<double>(1, 2);



        // use warpPerspective to find how to blend the images
        Mat rImg;
        warpPerspective(current_frame, rImg, H * offset, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);



        //find the new frame coordinates
        /*HERE'S SOMETHING WRONG FOR SURE*/
        vector<Point2f> corners(4);
        corners[0] = Point2f(0, 0);
        corners[1] = Point2f(0, rImg.rows);
        corners[2] = Point2f(rImg.cols, 0);
        corners[3] = Point2f(rImg.cols, rImg.rows);

        vector<Point2f> corner_trans(4);
        perspectiveTransform(corners, corner_trans, H);

        //get the new roi
        /*HERE'S SOMETHING WRONG FOR SURE*/
        //current_frame_roi = Rect(corner_trans[0], corner_trans[3]);
        //current_frame_roi = (final_img,Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));



        Mat roi1(final_img, Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));
        Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
        rImg.copyTo(roi2);
        img_loop.copyTo(roi1);



        cout << ++counter << endl;

        namedWindow("Img",WINDOW_NORMAL);
        imshow("Img",final_img);
        waitKey(10);


    }

}
imwrite("result.jpg",final_img(bound));

}

Edit 3

I've integrated Eduardo's code into mine, and this is the result: https://youtu.be/EY0kumU3fmo. It looks like after a certain point the homography is miscalculated. Could someone confirm this? Does anyone have any idea how to avoid this behaviour?
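
One sanity check I'm considering (my own sketch, not part of Eduardo's code): reject homographies whose top-left 2x2 block has a near-zero or negative determinant, since that usually signals a degenerate or mirrored mapping caused by bad matches:

// Returns false for homographies that collapse or flip the image.
bool homographyLooksSane(const Mat& H)
{
    if (H.empty()) return false;
    double det = H.at<double>(0, 0) * H.at<double>(1, 1)
               - H.at<double>(0, 1) * H.at<double>(1, 0);
    return det > 0.1 && det < 10.0;
}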

Revision 7

Panorama mosaic from Aerial Images

I'm writing a program that creates a panorama mosaic in real time from a video. The steps I've implemented are:

  1. Find features between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
rImg.copyTo(roi2);
vImg[1].copyTo(roi1);

and it works like this: https://www.youtube.com/watch?v=nq0sdatQeFg

As you can see, from second 0:33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeLlDAxQ.

What can I do?

EDIT 2

Here's my code; I hope someone can help me see the light at the end of the tunnel!!!

// I create the final image and copy the first frame into the middle of it
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img, Rect(img.cols, img.rows, img.cols, img.rows));
img.copyTo(f_roi);


// I take only a part of the complete final image
Rect current_frame_roi(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows);

while (true)
{

    //take the new frame
    cap >> img_loop;
    if (img_loop.empty()) break;

    //take a part of the final image
    current_frame = final_img(current_frame_roi);


    //convert to grayscale
    cvtColor(current_frame, gray_image1, CV_RGB2GRAY);
    cvtColor(img_loop, gray_image2, CV_RGB2GRAY);


    // First step: feature extraction with ORB
    static int minHessian = 400;
    OrbFeatureDetector detector(minHessian);



    vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);



    //Second step: descriptor extraction
    OrbDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);



    //Third step: match with BFMatcher
    BFMatcher matcher(NORM_HAMMING,false);
    vector< DMatch > matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;



    // distance between keypoints
    // with ORB it works better without it
    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    */




    // take just the good points
    // with ORB it works better without it
    vector< DMatch > good_matches;

    good_matches = matches;

    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance <= 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }*/
    vector< Point2f > obj;
    vector< Point2f > scene;


    //take the keypoints
    for (int i = 0; i < good_matches.size(); i++)
    {
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    //static Mat mat_match;
    //drawMatches(img_loop, keypoints_object, current_frame, keypoints_scene,good_matches, mat_match, Scalar::all(-1), Scalar::all(-1),vector<char>(), 0);


    // homography with RANSAC
    if (obj.size() >= 4)
    {

        Mat H = findHomography(obj, scene, CV_RANSAC,5);


        //take the x_offset and y_offset
        /*the offset matrix is of the type

        | 1 0 x_offset |
        | 0 1 y_offset |
        | 0 0 1        |
        */
        offset.at<double>(0, 2) = H.at<double>(0, 2);
        offset.at<double>(1, 2) = H.at<double>(1, 2);



        // use warpPerspective to find how to blend the images
        Mat rImg;
        warpPerspective(current_frame, rImg, H * offset, Size(current_frame.cols, current_frame.rows), INTER_NEAREST);



        //find the new frame coordinates
        /*HERE'S SOMETHING WRONG FOR SURE*/
        vector<Point2f> corners(4);
        corners[0] = Point2f(0, 0);
        corners[1] = Point2f(0, rImg.rows);
        corners[2] = Point2f(rImg.cols, 0);
        corners[3] = Point2f(rImg.cols, rImg.rows);

        vector<Point2f> corner_trans(4);
        perspectiveTransform(corners, corner_trans, H);

        //get the new roi
        /*HERE'S SOMETHING WRONG FOR SURE*/
        //current_frame_roi = Rect(corner_trans[0], corner_trans[3]);
        //current_frame_roi = (final_img,Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));



        Mat roi1(final_img, Rect(img_loop.cols + H.at<double>(0, 2), img_loop.rows + H.at<double>(1, 2), img_loop.cols, img_loop.rows));
        Mat roi2(final_img, Rect(img_loop.cols, img_loop.rows, rImg.cols, rImg.rows));
        rImg.copyTo(roi2);
        img_loop.copyTo(roi1);



        cout << ++counter << endl;

        namedWindow("Img",WINDOW_NORMAL);
        imshow("Img",final_img);
        waitKey(10);


    }

}
imwrite("result.jpg",final_img(bound));

}

Edit 3

I've integrated Eduardo's code into mine. It looks like after a certain point the homography is miscalculated. Could someone confirm this? Does anyone have any idea how to avoid this behaviour?

EDIT 4: Sorry, but I had to remove the images and videos :S