
Real-time video stitching from initial stitching computation

asked 2016-08-13 01:06:03 -0500

gregoireg

updated 2016-08-15 00:25:57 -0500

I have two fixed, identical, synchronized camera streams at 90 degrees to each other. I would like to stitch those two streams into a single stream in real time.

After getting the first frames on each side, I perform a full OpenCV stitching and I'm very satisfied with the result.

Stitcher stitcher = Stitcher::createDefault(false);
stitcher.stitch(imgs, pano);

I would like to continue the stitching on the video stream by reapplying the same parameters, avoiding recalculation (especially of the homography and so on...)

How can I get the maximum data from the stitcher class after the initial computation, such as:

- the homography and the rotation matrix applied to each side
- the zone in each frame that will be blended

I'm OK with keeping the same settings and applying them to the stream, as real-time performance is more important than stitching precision. When I say "apply them", I mean I want to apply the same transformation and blending, either in OpenCV or with a direct GPU implementation.

The cameras don't move, they have the same exposure settings, and the frames are synchronized, so keeping the same transformation/blending should provide a decent result.

Question: how to get all data from stitching for an optimized real-time stitcher?


I have found the class detail::CameraParams here:

I can then get the camera matrices of each image.

Now, how can I get all info about the blending zone between two images?



@gregoireg could you share your final code/result?

thealse ( 2016-12-22 02:30:07 -0500 )

2 answers


answered 2016-08-13 14:47:28 -0500

Tetragramm

updated 2016-08-13 14:47:57 -0500

I believe that composePanorama does exactly that. Simply create a new vector with your left and right images, call composePanorama(newVec, newPano), and there you have it.



Good idea, but not really, unless I have done something wrong. I timed the exact following code:

std::vector<Mat> imgX2;  // populated with the two camera frames (loading omitted)

Mat pano;
Stitcher stitcher = Stitcher::createDefault(false);
double t = (double)cv::getTickCount();
stitcher.stitch(imgX2, pano);
t = ((double)cv::getTickCount() - t)/cv::getTickFrequency();
std::cout << "Time passed in seconds: " << t << std::endl;

Mat panoFaster;
t = (double)cv::getTickCount();
stitcher.composePanorama(imgX2, panoFaster);
t = ((double)cv::getTickCount() - t)/cv::getTickFrequency();
std::cout << "Time Faster passed in seconds: " << t << std::endl;


Time passed in seconds: 1.79262
Time Faster passed in seconds: 1.67719

Shouldn't I see a major difference?

gregoireg ( 2016-08-14 13:18:49 -0500 )

For two images? Probably not. For many? Oh yes.

Try setting the exposure compensator to the NoExposureCompensator since you said the cameras are already synced in that regard.

Tetragramm ( 2016-08-14 18:05:52 -0500 )

Thanks for your help. Still not what I want. I have edited the question.

gregoireg ( 2016-08-15 00:21:46 -0500 )

You can turn everything off and just use the warper returned from this method. I'm not sure how precisely the dst image is formatted or what you have to do.

Tetragramm ( 2016-08-15 18:19:03 -0500 )

answered 2017-06-05 06:05:36 -0500

vin

If you want to build a panorama from videos or a live camera, try this link



Seen: 5,052 times

Last updated: Aug 15 '16