
Real-time Image Stitching

asked 2013-02-14 18:44:33 -0600

Possede

Hello there,

I have a project in which I am trying to create a real-time stitched panorama from three webcams mounted as shown here: http://i1276.photobucket.com/albums/y475/sesj13/IMG_2959_zpsb710bae8.jpg

The idea is to stitch together the frames captured by the side cameras with the frame captured by the central camera to create a "seamless" video panorama. I have so far hacked together some code that uses SURF to find the keypoints and corresponding descriptors that lie within the overlapping regions, as shown here: http://i1276.photobucket.com/albums/y475/sesj13/ROI_zpsffce8c7d.png

The keypoints are then matched and filtered. A video demonstration of this can be seen here: http://s1276.beta.photobucket.com/user/sesj13/media/SURF_zpsa47b8d56.mp4.html

The resulting "good" matches are then used to compute the corresponding homography matrices. The two homography matrices are then used to warp the perspective of the frames captured by the side cameras. The problem is that the homography matrices change quite dramatically on every loop. The findHomography() function (using RANSAC) also slows the frame rate dramatically, from 30 fps to 1-2 fps. I have just been using the default settings so far, so I'm sure it could be sped up.

I am wondering if anyone knows of a different/better approach to this problem? I have read that if the cameras are in a fixed position relative to each other, then you should only need to calculate the homography matrices once (possibly using stereo calibration). If anyone has any general pointers or input, it would be much appreciated.


Comments

This looks like a question relevant to something I'm researching today, but is it really? The links to your photobucket images are dead. Could you upload the images directly to your post?

darenw ( 2015-08-05 17:38:34 -0600 )

3 answers


answered 2013-02-14 23:13:56 -0600

Maybe you can use a calibration target like the one used for stereo calibration (chessboard), move the target around the overlapping areas of the images, detect the chessboard on each pair of overlapping images and use that information to calculate the corresponding homography matrices. This process should be done only once or only when you suspect that the cameras might have moved with respect to each other.

If the cameras are fixed, then these homography matrices should remain constant, and there is no need to recalculate them with SURF feature matching on each new trio of frames, which is time-consuming and error-prone, as you have already experienced.


Comments

Hello, thank you for your input. I have decided to abandon the SURF implementation for now and go with your idea. What I have done is compute the homography matrices for the side cameras once, and then perform a normalised cross-correlation (computed each loop) to align the edges of the side images with that of the centre image. It gives a nice result at about 12 FPS. I think with some optimisation and some thinking I should be able to increase the frame rate.

Possede ( 2013-02-18 12:00:13 -0600 )

Excellent! Good job!

Martin Peris ( 2013-02-18 18:43:01 -0600 )

answered 2013-02-19 02:26:22 -0600

Other than Martin's suggestion, take a look at the warpPerspective function. It mainly consists of two parts: the first calculates a mapping matrix (or two, depending on the interpolation method) from the homography, and the second remaps the image using that mapping. If the homography hasn't changed, you only need to calculate the mapping once. So dig up the code and break it into two parts. I can guarantee it will be super fast.


Comments

Hello, sorry for not making it clear in my post, but I am already using the calculated homography matrices in the warpPerspective function to change the perspective of both side images. I agree that the captured frames can be transformed in real time. It's the normalised cross-correlation that is causing the frame rate to drop. I have scaled down the ROIs, which results in ~12 FPS. I think I will be able to reduce the width of the ROI (and therefore the correlation computation time) without hampering the quality of the alignment. Thanks anyway! :)

Possede ( 2013-02-19 08:37:44 -0600 )

@Possede are you saying that you specified the dest size in warpPerspective to be of the bounding rect of the corners of the object in your main image upon which you paste your warped image?

bad_keypoints ( 2015-10-08 01:24:25 -0600 )

answered 2013-06-11 15:30:23 -0600

Hi, I would like to develop the same application, but using 4 IP cameras. To get a fast result I'll use static homography matrices, but the target is exactly your first idea: to find keypoints frame by frame. Would you like to share your code and develop this application together?



Stats


Seen: 8,146 times

Last updated: Jun 11 '13