
Panorama mosaic from Aerial Images

asked 2015-04-22 02:06:28 -0600 by bjorn89, updated 2015-05-12 15:09:34 -0600

I'm writing a program that creates a panorama mosaic in real time from a video. These are the steps I've taken:

  1. Find feature matches between the n-th frame and the (n-1)-th mosaic.
  2. Calculate the homography.
  3. Use the homography with warpPerspective to stitch the images.

I'm using this code to stitch the images together:

// Warp the previous mosaic into rImg; the destination has the same size
// as the input, so anything mapped outside those bounds is cropped
warpPerspective(vImg[0], rImg, H, Size(vImg[0].cols, vImg[0].rows), INTER_NEAREST);

 Mat final_img(Size(rImg.cols, rImg.rows), CV_8UC3);
 Mat roi1(final_img, Rect(0, 0, vImg[1].cols, vImg[1].rows));
 Mat roi2(final_img, Rect(0, 0, rImg.cols, rImg.rows));
 rImg.copyTo(roi2);    // warped mosaic first
 vImg[1].copyTo(roi1); // then the new frame on top, at (0,0)

As you can see, from second 0.33 it starts to lose part of the mosaic. I'm pretty sure that depends on the ROI I've defined. My program should work like this: https://www.youtube.co/watch?v=59RJeL....

What can I do?

EDIT 2

Here's my code; I hope someone can help me see the light at the end of the tunnel!!!

// Create the final mosaic image and copy the first frame into the middle of it
Mat final_img(Size(img.cols * 3, img.rows * 3), CV_8UC3);
Mat f_roi(final_img,Rect(img.cols,img.rows,img.cols,img.rows));
img.copyTo(f_roi);


//take only a part of the complete final image
Rect current_frame_roi(img.cols, img.rows, final_img.cols - img.cols, final_img.rows - img.rows);

while (true)
{

    //take the new frame
    cap >> img_loop;
    if (img_loop.empty()) break;

    //take a part of the final image
    current_frame = final_img(current_frame_roi);


    //convert to grayscale (VideoCapture frames are BGR, so use CV_BGR2GRAY)
    cvtColor(current_frame, gray_image1, CV_BGR2GRAY);
    cvtColor(img_loop, gray_image2, CV_BGR2GRAY);


    //First step: feature detection with ORB
    //(the parameter is the number of features to retain, not a SURF
    //Hessian threshold, so the old name "minHessian" was misleading)
    static int nFeatures = 400;
    OrbFeatureDetector detector(nFeatures);



    vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);



    //Second step: descriptor extraction
    OrbDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);



    //Third step: match with BFMatcher
    BFMatcher matcher(NORM_HAMMING,false);
    vector< DMatch > matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;



    //distance between keypoints
    //with ORB it works better without this filter
    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    */




    //take just the good matches
    //with ORB it works better without the filter, so keep them all
    vector< DMatch > good_matches;

    good_matches = matches;

    /*for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance <= 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }*/
    vector< Point2f > obj;
    vector< Point2f > scene;


    //take the keypoints
    for (int i = 0; i < good_matches.size(); i++)
    {
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    //static Mat mat_match;
    //drawMatches(img_loop, keypoints_object, current_frame, keypoints_scene,good_matches, mat_match, Scalar::all(-1), Scalar::all(-1),vector<char>(), 0);


    // homography with RANSAC
    if (obj.size() >= 4)
    {

        Mat H = findHomography(obj, scene, CV_RANSAC,5);


        //take the x_offset and y_offset
        /*the offset matrix has the form

        | 1 0 x_offset |
        | 0 1 y_offset |
        | 0 0 1        |
        */
        offset.at<double>(0, 2) = H.at<double>(0, 2);
        offset.at ...

Comments


You could check this blog post on panorama image stitching:

 cv::Mat result;
 // Warp image1 into a canvas wide enough for both images
 warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
 // Paste image2 unchanged into the left part
 cv::Mat half(result, cv::Rect(0, 0, image2.cols, image2.rows));
 image2.copyTo(half);
 imshow("Result", result);
Eduardo ( 2015-04-22 05:05:53 -0600 )

Thanks Eduardo, but that blog was my starting point; I modified the code you posted into the one I posted above! I really don't know how to solve this situation :S

bjorn89 ( 2015-04-22 07:29:03 -0600 )

I see the problem now. When you go right, you lose the left part. I don't know exactly how you could solve this.

Maybe you could try to always center the current image, to avoid losing parts of the mosaic?

Eduardo ( 2015-04-22 16:09:20 -0600 )

For your Edit 3, I think the problem is that you try to find the homography between the panorama image and the current image. Repetitive patterns in the global panorama image can corrupt the matching, and thus the homography matrix. You could try to:

  • calculate the homography only between two consecutive frames (img_at_n-1 and img_at_n), like you did in your first version, I think
  • somehow calculate the global homography between the panorama image and the current image, to stitch the new image correctly into the panorama

Also, you could try to check the quality of the matching (I use SIFT in my tests; I found it more robust than ORB, but much more time-consuming if you have a real-time constraint).

Eduardo ( 2015-05-01 07:56:14 -0600 )

I've already tried SIFT, but it fails too after a certain point. I think I'm going to calculate only the homography between two consecutive frames, and I'll let you know! Thanks again for all your support!

bjorn89 ( 2015-05-04 11:28:36 -0600 )

How can I proceed if I want to calculate the global homography? For now I'm trying to calculate the homography between two consecutive frames, but it doesn't seem to work at all :S One more question: is there a way, from the code you posted, to get the part of the panorama where you pasted the last frame? That way I can use that part as the previous frame!

bjorn89 ( 2015-05-05 04:57:49 -0600 )

The global homography can be calculated by multiplying the homographies between each pair of consecutive frames. You could try stitching the images from 00:00 to 00:30 (when it begins to fail), then from 00:30 to 01:00, etc., to see what happens.

If the source video is not private, maybe you could post it as a private link on YouTube. I may give it a try if I have time.

Eduardo ( 2015-05-08 07:58:35 -0600 )

Sorry, but I think I can't upload the original video :S I'm trying to get the part of the stitch where I paste the last frame, and calculate the homography just between that part and the current frame. By the way, if I take one frame out of every X (I tried X = 10, 20, 30, 50) it works! How is that possible?

bjorn89 ( 2015-05-11 10:38:26 -0600 )

Hi @bjorn89, I am also working on UAV image mosaicing, similar to BoofCV as you mention, but I am also getting problems while stitching the images. If you have solved your problem, can you help me with your code?

ak1 ( 2016-12-08 13:18:20 -0600 )

1 answer


answered 2015-04-22 13:34:03 -0600 by Eduardo, updated 2015-04-30 04:09:12 -0600

I tried to do the same using a basic video.

There may be a better solution, but this is how I stitched the images (maybe this could help you):

  • for all the images (image1, image2, image3, image4, ..., imageN)
  • we have the corresponding homography matrices (H_2to1, H_3to2, H_4to3, ..., H_NtoN-1), since we match between two consecutive frames (img_prev, img_curr)
  • and the homography of each new image in relation to the first image (H_3to1 = H_2to1*H_3to2)
  • so we can use warpPerspective to warp each new image into the panorama, in relation to image1 (a concrete sketch of this accumulation follows the pseudo-code below)

To copy/paste the images, in pseudo-code:

for each new image {
    //Get the new image
    capture >> img_cur

    //Copy the current panorama into panoramaCur
    cv::Mat panoramaCur;
    panorama.copyTo(panoramaCur);

    //panoramaSize: new panorama size
    //Warp and copy img_cur into panoramaCur using the homography H
    cv::warpPerspective(img_cur, panoramaCur, H_curto1, panoramaSize);        

    //ROI for the previous panorama
    cv::Mat half(panoramaCur, cv::Rect(0, 0, panorama.cols, panorama.rows));
    panorama.copyTo(half);

    //Get the new panorama result
    panoramaCur.copyTo(panorama);
}
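To make H_curto1 concrete, here is a minimal sketch of the accumulation. findHomographyBetween() is a hypothetical helper (not from the original post) that matches features between two frames and returns the result of cv::findHomography, like the question's code does:

    // Accumulate the global homography by chaining the frame-to-frame
    // homographies: H_curto1 = H_2to1 * H_3to2 * ... * H_curtoprev
    cv::Mat H_curto1 = cv::Mat::eye(3, 3, CV_64F); // the first frame maps to itself

    cv::Mat img_prev, img_cur;
    capture >> img_prev;

    while (true)
    {
        capture >> img_cur;
        if (img_cur.empty()) break;

        // Homography mapping the current frame into the previous one
        cv::Mat H_curtoprev = findHomographyBetween(img_cur, img_prev);

        // Chain with the accumulated transform to get cur -> frame1
        H_curto1 = H_curto1 * H_curtoprev;

        // ... warp img_cur into the panorama with H_curto1 as above ...

        img_cur.copyTo(img_prev);
    }

Note that estimation errors accumulate with each multiplication, which is one reason long sequences eventually drift.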

Finally, the result in video.

Edit:

First of all, this is the first time I have "played" with image stitching, so the method I present is not necessarily good or optimal. I think the problem you encounter is that some pixels of the original image are warped to negative coordinates.

In your case, it seems the view is shot from a UAV. I think the easiest solution is to divide the mosaic image into a 3x3 grid. The central cell will always show the current image, so there will always be some free space for the result of the warping; a sketch of this idea follows.
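A minimal sketch of that grid idea, assuming img is the first frame and H_curto1 the accumulated homography from the sketch above (variable names are illustrative): compose every homography with a translation T into the central cell, so the warp has room on all four sides:

    // 3x3 mosaic canvas with the first frame in the central cell
    cv::Mat panorama(img.rows * 3, img.cols * 3, img.type(), cv::Scalar::all(0));
    img.copyTo(panorama(cv::Rect(img.cols, img.rows, img.cols, img.rows)));

    // Translation moving frame coordinates into the central cell:
    // | 1 0 img.cols |
    // | 0 1 img.rows |
    // | 0 0 1        |
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, img.cols,
                                           0, 1, img.rows,
                                           0, 0, 1);

    // Warp the new frame with the offset composed in, so pixels that
    // would land at negative coordinates still fall inside the canvas
    cv::Mat panoramaCur;
    cv::warpPerspective(img_cur, panoramaCur, T * H_curto1, panorama.size());

    // Paste the existing mosaic on top, keeping only its non-black
    // pixels (a crude mask; blending would look nicer)
    cv::Mat gray, mask;
    cv::cvtColor(panorama, gray, CV_BGR2GRAY);
    mask = gray > 0;
    panorama.copyTo(panoramaCur, mask);
    panoramaCur.copyTo(panorama);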

Some tests I made (warning: long). For example, with the two images below:

img1

img2

If we warp image 2, some pixels will not be shown (e.g. the roof):

warp

The stitching will look like this:

panorama nok

In fact, if we print the homography matrix:

  double homography[3][3] = {
      {0.999953365335864, -0.0001669222180845182, 507.0299576823942},
      {5.718816824900338e-05, 0.9999404263126825, -191.9941904903286},
      {1.206803293748564e-08, -1.563550523469747e-07, 1},
  };

we can see that the translation in y is negative.

My solution would be to set t_x or t_y to 0 in the homography matrix if they are negative, and use that matrix to warp the image. Afterwards, I paste the first image not at (0,0) but at (offsetX, offsetY):

panorama

You can also calculate the new coordinates of the image after the warping using perspectiveTransform:

  std::vector<cv::Point2f> corners(4);
  corners[0] = cv::Point2f(0, 0);
  corners[1] = cv::Point2f(0, img2.rows);
  corners[2] = cv::Point2f(img2.cols, 0);
  corners[3] = cv::Point2f(img2.cols, img2.rows);

  std::vector<cv::Point2f> cornersTransform(4);
  cv::perspectiveTransform(corners, cornersTransform, H);

Finally, the result of the stitching I can successfully process:

panorama all

Edit 2:

In fact, setting the translation part of the homography matrix to zero is not right. It worked in my previous case because there was almost no rotation, only translation. The correct way is to first calculate the maximum offset in x ...
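Sketching that correction, building on the cornersTransform snippet above (img1, img2 and H as in the example; assumes the usual OpenCV headers): take the most negative warped coordinates as the offset, fold a translation into the homography, and size the canvas from the maxima:

    // Offsets from the warped corners of img2
    double minX = 0.0, minY = 0.0;
    double maxX = img1.cols, maxY = img1.rows;
    for (size_t i = 0; i < cornersTransform.size(); i++)
    {
        minX = std::min(minX, (double) cornersTransform[i].x);
        minY = std::min(minY, (double) cornersTransform[i].y);
        maxX = std::max(maxX, (double) cornersTransform[i].x);
        maxY = std::max(maxY, (double) cornersTransform[i].y);
    }

    // Translation shifting everything into positive coordinates
    double offsetX = -minX, offsetY = -minY;
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, offsetX,
                                           0, 1, offsetY,
                                           0, 0, 1);

    // Canvas big enough for the warped img2 and the translated img1
    cv::Size panoSize(cvCeil(maxX + offsetX), cvCeil(maxY + offsetY));
    cv::Mat pano;
    cv::warpPerspective(img2, pano, T * H, panoSize);

    // Paste img1 at (offsetX, offsetY) instead of (0, 0)
    cv::Mat roi(pano, cv::Rect(cvRound(offsetX), cvRound(offsetY), img1.cols, img1.rows));
    img1.copyTo(roi);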


Comments

Ok, I'm going to try this and let you know. Just two questions: does this also stitch images vertically (because I don't know whether the new frame goes up/down or left/right)? And how can I calculate H_curto1? I'm very sorry, but I'm very new to this topic and to OpenCV :D

bjorn89 ( 2015-04-23 02:34:36 -0600 )

I've tried, but it doesn't work. I'm pretty sure my problem is H_curto1 and panoramaSize. For H_curto1, do I have to multiply all the past homography matrices for each new image? And for panoramaSize, how can I know in advance how big the panorama will be?

bjorn89 ( 2015-04-23 04:34:28 -0600 )

I tried with H_curto1 = H_cur-1to_cur-2 * H_cur-2to_cur-3 etc., but it's still not working :S

bjorn89 ( 2015-04-23 08:58:21 -0600 )

Wow man, thanks very much for the explanation and for the work you've done! :D I'll try it as soon as possible and let you know!

bjorn89 ( 2015-04-23 14:09:31 -0600 )

Sorry if I'm annoying you, but I'm very much a noob xD In the perspectiveTransform code you posted, is img2 the stitch at step n-1 or the new frame? And is perspectiveTransform how I calculate the coordinates of the last stitched frame?

bjorn89 ( 2015-04-27 10:39:11 -0600 )

Yes, perspectiveTransform does almost the same thing as warpPerspective, but instead of warping an input image, it computes the new coordinates of points using a transformation matrix.

In my code, img2 is the new frame that is warped, and img1 is the result of the past stitching.

Maybe you could try, with a simple example, to stitch 2 images like I did (calculate the offset, calculate the size of the new image, ...) to see exactly what happens.

Eduardo ( 2015-04-28 10:18:23 -0600 )

Would it be a problem if I pasted my code on Pastebin and you checked it? I can't really figure out what's wrong. Of course, you can say no :P

bjorn89 ( 2015-04-28 11:29:53 -0600 )

If you have the right to share your code, you could post it and maybe someone else or I could help you. I will edit my answer with some pieces of code when I have some time.

Eduardo ( 2015-04-28 12:00:29 -0600 )

Yes, I have the rights; I'm simply modifying the OpenCV example to make things work. Now I'm going to edit my post with the code. Thanks a lot man! :D :D :D

bjorn89 ( 2015-04-28 15:15:11 -0600 )

I've tried your code as it is and it does indeed work. I'm trying to put the code in a loop to stitch all the video frames. I will let you know, and thanks again man!

bjorn89 ( 2015-04-30 03:00:50 -0600 )


4 followers

Stats

Asked: 2015-04-22 02:06:28 -0600

Seen: 9,243 times

Last updated: May 12 '15