Real-time video stitching from initial stitching computation

I have two fixed, identical, synchronized cameras mounted at 90 degrees to each other. I would like to stitch the two streams into a single stream in real time.

After getting the first frame from each side, I perform a full OpenCV stitching and I'm very satisfied with the result:

std::vector<cv::Mat> imgs;
imgs.push_back(imgCameraLeft);
imgs.push_back(imgCameraRight);
cv::Mat pano;
cv::Stitcher stitcher = cv::Stitcher::createDefault(false); // try_use_gpu = false
cv::Stitcher::Status status = stitcher.stitch(imgs, pano);

I would like to continue stitching the video stream by reapplying the same parameters, avoiding any recalculation (especially of the homography).

How can I extract as much data as possible from the Stitcher class after the initial computation, such as:

- the homography and the rotation matrix applied to each side
- the zone in each frame that will be blended

I'm OK with keeping the same settings and applying them to the whole stream, as real-time performance matters more to me than stitching precision. By "apply them", I mean applying the same transformation and blending to every frame, either in OpenCV or in a direct GPU implementation, as in the sketch below.
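
For the fixed-transformation part, one approach I am considering (a sketch under my assumptions, not something the Stitcher class does for you) is to build the warp maps once with one of the detail warpers and then only call cv::remap per frame. Here scale, K, R and frameSize are placeholders for the warped-image scale, the 3x3 intrinsic matrix, the 3x3 rotation (both as CV_32F) and the camera frame size recovered from the initial stitch:

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/stitching/detail/warpers.hpp>

// Build the remap tables once, at startup, from the parameters of the
// initial stitch (scale, K, R, frameSize are placeholders, see above).
cv::detail::SphericalWarper warper(scale);
cv::Mat xmap, ymap;
cv::Rect roi = warper.buildMaps(frameSize, K, R, xmap, ymap);

// Then, for every incoming frame, only a cheap per-pixel lookup remains.
cv::Mat warped;
cv::remap(frame, warped, xmap, ymap, cv::INTER_LINEAR, cv::BORDER_CONSTANT);

As far as I can tell there is also a cv::detail::SphericalWarperGpu variant with the same interface if OpenCV was built with GPU support, which would keep the remap on the GPU.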

The cameras don't move and have the same exposure settings, and the frames are synchronized, so keeping the same transformation/blending should give a decent result.

Question: how can I retrieve all the data from the initial stitching in order to build an optimized real-time stitcher?
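
One thing I plan to try (a sketch based on my reading of the 2.4 API, not verified) is to split stitch() into its two phases: run estimateTransform() once on the first frame pair, then call composePanorama() for every following pair. This reuses the estimated camera parameters and skips feature detection, matching and bundle adjustment, although composition itself still performs warping, seam finding and blending each call. grabFrames() is a hypothetical capture helper:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/stitching/stitcher.hpp>

cv::Stitcher stitcher = cv::Stitcher::createDefault(false);

// Run the expensive registration (features, matching, bundle adjustment)
// once, on the first synchronized frame pair.
std::vector<cv::Mat> first;
first.push_back(imgCameraLeft);
first.push_back(imgCameraRight);
cv::Stitcher::Status status = stitcher.estimateTransform(first);
// status must be cv::Stitcher::OK before going on

// Every later pair reuses the estimated camera parameters; only the
// composition pipeline runs per frame.
cv::Mat pano;
std::vector<cv::Mat> frames(2);
while (grabFrames(frames[0], frames[1])) // hypothetical capture helper
{
    stitcher.composePanorama(frames, pano);
}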

EDIT 1

I have found the detail::CameraParams class here: http://docs.opencv.org/2.4/modules/stitching/doc/camera.html

I can then get the camera matrices for each image.
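
For reference, this is how I read them back (a short sketch; cameras() is the accessor on cv::Stitcher in 2.4). My understanding, still to be verified, is that these parameters are expressed at the stitcher's work scale (see stitcher.workScale()), so the intrinsics may need rescaling before being applied to full-resolution frames:

#include <opencv2/stitching/detail/camera.hpp>

// Read back the camera parameters estimated by the initial stitch.
std::vector<cv::detail::CameraParams> cams = stitcher.cameras();
for (size_t i = 0; i < cams.size(); ++i)
{
    cv::Mat K = cams[i].K(); // 3x3 intrinsic matrix, CV_64F
    cv::Mat R = cams[i].R;   // 3x3 rotation for camera i
    double focal = cams[i].focal;
}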

Now, how can I get all the info about the blending zone between two images?
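
My current idea (an assumption-based sketch mirroring what the composition pipeline appears to do internally, not a confirmed API for this) is to warp a full-white mask for each camera with the same warper and camera parameters. Once both warped masks are placed on the panorama canvas using their returned top-left corners, the region where they are both non-zero is the overlap that gets blended. K_left/R_left and K_right/R_right stand for the per-camera matrices read from detail::CameraParams above:

// Warp an all-white mask per camera with the stitch-time K and R.
cv::Mat mask(frameSize, CV_8U, cv::Scalar(255));
cv::Mat warpedMaskLeft, warpedMaskRight;
cv::Point tlLeft = warper.warp(mask, K_left, R_left,
                               cv::INTER_NEAREST, cv::BORDER_CONSTANT, warpedMaskLeft);
cv::Point tlRight = warper.warp(mask, K_right, R_right,
                                cv::INTER_NEAREST, cv::BORDER_CONSTANT, warpedMaskRight);
// After shifting both masks by tlLeft/tlRight into panorama coordinates,
// the blending zone is warpedMaskLeft & warpedMaskRight.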