0

Reconstruct the undefined area using mosaicking

asked 2014-04-16 07:17:13 -0600 by Jenny

updated 2014-04-19 09:21:56 -0600

I am working on a mosaicking problem. After warping each frame of a video, an undefined region appears near the edge of the frame, which produces unpleasant visual artifacts. Now I want to use the previous and next aligned frames to fill in the black region of the current frame by mosaicking. Could you explain how to do that, or give some example code? Thank you in advance.

Here is the example picture (image omitted).

An image before warping (image omitted).

An image after warping (image omitted).


Comments

Why don't you warp the frame all the way to the border?

Haris ( 2014-04-16 08:18:58 -0600 )

What do you mean?

Jenny ( 2014-04-18 06:36:39 -0600 )

Can you post the images before and after warping?

Haris ( 2014-04-18 06:42:08 -0600 )

Oh, because I am working on video stabilization. If I use the image before warping it is not correct, since the original image is unstable; I cannot do that.

Jenny ( 2014-04-19 08:29:31 -0600 )
1

What I am saying is: just apply a perspective transform between the bounding rectangle and the rotated rectangle of the image area in the frame.

Haris ( 2014-04-19 08:50:17 -0600 )

Sorry, I do not understand your idea. Please bear with me, as I am a newbie in this topic. I have posted the images before and after warping.

Jenny ( 2014-04-19 09:23:25 -0600 )

Ah, for video stabilization purposes I need a stable video. To do that, I need to take each image in the video and warp it into a stabilized frame.

Jenny ( 2014-04-19 10:37:44 -0600 )

But both images look the same except for the black region. Can you explain how you did the warping?

Haris ( 2014-04-19 10:39:12 -0600 )

2 answers

4

answered 2014-04-19 12:08:06 -0600 by Haris

updated 2014-04-22 08:16:30 -0600

One way is to apply a perspective transform to stretch the image and fill the black region.

Another is to apply inpainting, but that only gives a realistic result when the area to fill is small.

For the second method you need to create a mask by thresholding the source and then apply inpaint(), like this:

      Mat src = imread("a.jpg");                        // warped frame with the black border
      Mat thr, dst;
      cvtColor(src, thr, CV_BGR2GRAY);                  // imread() loads BGR, so convert BGR -> gray
      threshold(thr, thr, 20, 255, THRESH_BINARY_INV);  // mask = 255 where the frame is (near) black
      inpaint(src, thr, dst, 10, INPAINT_NS);           // fill masked pixels (needs opencv2/photo)

See the inpainting result (images omitted).

Using Perspective transform

Here you need to find the source and destination points (4 corners) for the transformation.

In the above image the source points are the four corners of the black boundary.

And the destination points are the four corners of the source image.

In the image below (omitted), the red points are the source points and the green ones are the points we are going to transform to.

So, to get the source coordinates:

-> Threshold -> find the biggest contour -> approxPolyDP

This should give you four coordinates, which are the source points for warpPerspective.

The destination points are simply the corners of the source image.

Sort the coordinates from top left to bottom right.

Apply the warpPerspective transform.

See the transformed result (image omitted).

Code:

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

// Sort points roughly from top-left to bottom-right: y is weighted heavily,
// so points are ordered by row first, then by column.
struct point_sorter
{
    bool operator ()( const Point2f a, const Point2f b )
    {
        return ( (a.x + 500*a.y) < (b.x + 500*b.y) );
    }
};
int main()
{

 Mat src=imread("warp.jpg");

 Mat thr;
 cvtColor(src,thr,CV_BGR2GRAY);
 threshold( thr, thr, 30, 255,CV_THRESH_BINARY_INV );
 bitwise_not(thr,thr);

 vector< vector <Point> > contours; // Vector for storing contour
 vector< Vec4i > hierarchy;
 int largest_contour_index=0;
 double largest_area=0;                      // contourArea() returns a double

 Mat dst(src.rows,src.cols,CV_8UC1,Scalar::all(0)); //create destination image
 findContours( thr.clone(), contours, hierarchy,CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image
 for( size_t i = 0; i < contours.size(); i++ ){
     double a = contourArea( contours[i], false );   // Find the area of the contour
     if( a > largest_area ){
         largest_area = a;
         largest_contour_index = (int)i;             // Store the index of the largest contour
     }
 }

 //drawContours( dst,contours, largest_contour_index, Scalar(255,255,255),CV_FILLED, 8, hierarchy );
 vector<vector<Point> > contours_poly(1);
 approxPolyDP( Mat(contours[largest_contour_index]), contours_poly[0],5, true );


 //Rect boundRect=boundingRect(contours[largest_contour_index]);

 if(contours_poly[0].size()==4){
    std::vector<Point2f> src_pts;
    std::vector<Point2f> dst_pts;

    src_pts.push_back(contours_poly[0][0]);
    src_pts.push_back(contours_poly[0][1]);
    src_pts.push_back(contours_poly[0][2]);
    src_pts.push_back(contours_poly[0][3]);

    dst_pts.push_back(Point2f(0,0));
    dst_pts.push_back(Point2f(0,src.rows));
    dst_pts.push_back(Point2f(src.cols,0));
    dst_pts.push_back(Point2f(src.cols,src.rows));

    sort(dst_pts.begin(), dst_pts.end(), point_sorter());
    sort(src_pts.begin(), src_pts.end(), point_sorter());


    Mat transmtx = getPerspectiveTransform(src_pts,dst_pts);
    Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
    warpPerspective(src, transformed, transmtx, src.size());

    imshow("transformed", transformed);
    imshow("src",src);
    waitKey();
   }
   else
    cout<<"Make sure that you are getting 4 corners from approxPolyDP.....adjust 'epsilon' and try again"<<endl;

    return 0;
}

Comments

Thank you so much, bro. But I have a question. We obtained a result image with the black area filled, but as you can see, the filled area is not consistent with the rest of the image, so the result does not look good. How do I get a more pleasant image? Once again, thank you in advance.

Jenny ( 2014-04-20 10:36:04 -0600 )

So the input image will always be like the one above?

Haris ( 2014-04-20 10:42:53 -0600 )

Yes. Actually, the input image is a warped frame from the video. There are many original images and warped images like the sample above. (The warped images are the ones that have the black area.)

Jenny ( 2014-04-20 11:06:33 -0600 )

I am a little confused: the source image above, before warping, looks fine by itself, so why did you warp it and create such a distorted image?

Haris ( 2014-04-21 12:57:48 -0600 )

The source image is a frame of an unstable video. I stabilize the video so that each frame is stabilized; each frame must be warped to align it with the other frames and produce a stable video.

Jenny ( 2014-04-22 06:18:03 -0600 )

So you need to warp the black-edged image to look like your source image?

Haris ( 2014-04-22 06:31:50 -0600 )

I need to fill in the black area of a warped image based on the previous and/or next warped images.

Jenny ( 2014-04-22 06:41:50 -0600 )
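A minimal sketch of that neighbour-frame fill (not from the thread; it assumes hypothetical Mats prev, cur and next holding frames that have already been warped into the same stabilized coordinate system):

      // Fill the black (undefined) pixels of the current warped frame from
      // neighbouring warped frames that are already aligned to it.
      Mat fillFromNeighbours( const Mat& prev, const Mat& cur, const Mat& next )
      {
          Mat gray, holeMask, result = cur.clone();

          cvtColor( cur, gray, CV_BGR2GRAY );
          threshold( gray, holeMask, 5, 255, THRESH_BINARY_INV );   // 255 where cur is (near) black
          prev.copyTo( result, holeMask );                          // take pixels from the previous frame first

          cvtColor( result, gray, CV_BGR2GRAY );
          threshold( gray, holeMask, 5, 255, THRESH_BINARY_INV );   // pixels still black after the first fill
          next.copyTo( result, holeMask );                          // fill the remainder from the next frame

          return result;
      }

Any pixel that is black in all three frames will stay black; those could be filled with inpaint() as in the answer above.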

Thank you in advance, bro.

Jenny ( 2014-04-23 02:02:16 -0600 )

You are welcome.....:)

Haris ( 2014-04-23 02:28:24 -0600 )
0

answered 2014-04-23 03:01:45 -0600 by Jenny

updated 2014-04-23 03:03:17 -0600

Bro, I ran into a small problem. I tested on some different frames, but the result image is completely black. Should I adjust the parameters in the code?

The image: (image omitted)

Result: (image omitted)


Comments

Just make sure that the transformation points are in the correct order (top left, top right, bottom left, bottom right); sometimes the sorting algorithm might not work well. And next time please post updates by editing your question instead of using the answer section.

Haris ( 2014-04-23 05:45:19 -0600 )

Also, just change the sorting algorithm like this and try:

struct point_sorter { bool operator ()( const Point2f a, Point2f b ) { return ( (1000*a.x + 1000*a.y) < (1000*b.x + 1000*b.y) ); } };

Haris ( 2014-04-23 06:19:52 -0600 )

Do you mean src_pts?

Jenny ( 2014-04-23 06:32:36 -0600 )

Yes, both src_pts and dst_pts, but dst_pts will always be in the same order, so you only have to take care of src_pts. Ultimately, src_pts and dst_pts should be in the same order.

Haris ( 2014-04-23 06:38:45 -0600 )
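If the weighted sort still misorders the corners, one more explicit option (a sketch, not from the thread) is to pick each corner from the coordinate sums and differences:

      // Order four corners as top-left, top-right, bottom-left, bottom-right:
      // the top-left corner has the smallest x+y, the bottom-right the largest,
      // the top-right has the smallest y-x and the bottom-left the largest.
      // Needs <cfloat> for FLT_MAX.
      void orderCorners( const vector<Point2f>& pts, vector<Point2f>& ordered )
      {
          ordered.resize(4);
          float minSum =  FLT_MAX, maxSum = -FLT_MAX;
          float minDif =  FLT_MAX, maxDif = -FLT_MAX;
          for( size_t i = 0; i < pts.size(); i++ ){
              float s = pts[i].x + pts[i].y;
              float d = pts[i].y - pts[i].x;
              if( s < minSum ){ minSum = s; ordered[0] = pts[i]; }  // top-left
              if( s > maxSum ){ maxSum = s; ordered[3] = pts[i]; }  // bottom-right
              if( d < minDif ){ minDif = d; ordered[1] = pts[i]; }  // top-right
              if( d > maxDif ){ maxDif = d; ordered[2] = pts[i]; }  // bottom-left
          }
      }

Running this on both src_pts and the four image corners gives matching orders for getPerspectiveTransform.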

Yes. Thank you so much, Haris bro. I tried it and it works very well on a single image. But when I applied it to the video, the resulting video seems stretched. Let me show you the result. The warped video: http://www.youtube.com/watch?v=zc4Yutph2Ck The filled-in video: http://www.youtube.com/watch?v=SRP-X3dnj7A

Jenny ( 2014-04-24 03:53:27 -0600 )

OK, so you stabilized your video using a perspective transform; then there is no point in applying warpPerspective again to fill the black area, since that will just un-stabilize the video again, right?

Haris ( 2014-04-24 05:11:56 -0600 )

Yes, you are right.

Jenny ( 2014-05-08 07:18:54 -0600 )


1 follower

Stats

Asked: 2014-04-16 07:17:13 -0600

Seen: 2,165 times

Last updated: Apr 23 '14