
# Real-time video stabilization

Hi,

We need a real-time method of video stabilization that does not hog the CPU too much. We also have a GPU in the machine and already do undistortion there, and soon background subtraction as well. The camera is fixed and many features are constantly visible, if that helps. Do you have any hints on where best to start? I have been playing with the stabilization module, but found it to be orders of magnitude too slow. Also, does it work online?

Very thankful for any hints, and great work btw! Without OpenCV we wouldn't be close to where we are now (soccercam.nl).

Daniel


## 2 answers


I never dealt with video stabilization myself, but I have dealt with the problem of aligning two similar images that differ by a not-very-big shift, and I had to perform that alignment really fast (~20 ms per match). So I hope the same concept will work for you. The sequence was:

1) Subsample the images.

2) Convert them to gray.

3) Find points of interest in the images using FAST. By the way, I got much better results without non-maximal suppression.

4) Perform Hough voting between the two sets of interest points to find the best match.

5) Shift one of the images.

Note that:

a) All steps except step 4 can be done with functions in OpenCV.

b) You can save half of the work in steps 1–3 if you keep the results from matching the previous pair of frames, i.e. only the new frame needs to be processed.

c) If an integer shift is enough, then step 5 is just the definition of a ROI, i.e. no time at all.

d) An efficient implementation of the Hough voting for step 4 is important. It may take 1 millisecond if you do it right, and it may take 1 second if you do it wrong. Be careful.
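Note (c) in code: cancelling an integer shift really is just slicing, provided you reserve a margin at least as large as the biggest shift you expect (a NumPy sketch; the function name and margin convention are my own choices):

```python
import numpy as np

def stabilize_crop(frame, dx, dy, margin):
    """Cancel an integer (dx, dy) shift by taking a margin-inset ROI.

    The output has the same size for every frame, so the cropped
    video stays aligned as long as |dx|, |dy| <= margin.
    No pixels are copied or resampled - this is just a view.
    """
    h, w = frame.shape[:2]
    x0, y0 = margin + dx, margin + dy
    return frame[y0:y0 + h - 2 * margin, x0:x0 + w - 2 * margin]
```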

Edit (to answer your questions):

A Hough transform that matches a shape to a set of points actually takes points on the shape to perform the match (the line match is the only exception). Sometimes this is not stated directly, but it is what actually happens. For example, when matching a circle you choose a discrete number of angles for the match, but that is essentially the same as choosing points on the circle. So any Hough transform (except the line match) boils down to matching two sets of points, and that is what you need to do here.

In order to match two sets of points, first allocate an array for voting and set its values to zero (as usual in voting algorithms). Then for each point from set A and each point from set B, calculate the dx and dy between them and increment the appropriate bin in the voting array. When this is done, find the bin with the maximum value; it corresponds to the best shift between the two point sets. In your case you can save part of the work by matching a point of A only to the points of B in its neighborhood, because consecutive frames should not be shifted too much.
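A compact NumPy sketch of this voting scheme (the function name, bin layout, and the neighborhood bound `max_shift` are my own choices, not from the answer):

```python
import numpy as np

def hough_shift(points_a, points_b, max_shift=32):
    """Vote for the (dx, dy) translation that best aligns two point sets.

    points_a, points_b: (N, 2) integer arrays of (x, y) keypoint positions.
    max_shift: largest shift (in pixels) considered in either direction.
    Returns the (dx, dy) with the most votes.
    """
    size = 2 * max_shift + 1
    votes = np.zeros((size, size), dtype=np.int32)
    for ax, ay in points_a:
        # Only compare against nearby points of B: consecutive frames
        # should not be shifted too much.
        d = points_b - (ax, ay)
        near = (np.abs(d) <= max_shift).all(axis=1)
        for dx, dy in d[near]:
            votes[dy + max_shift, dx + max_shift] += 1
    # The bin with the maximum value is the best shift.
    iy, ix = np.unravel_index(votes.argmax(), votes.shape)
    return ix - max_shift, iy - max_shift
```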

In my application I got good results without descriptors, so I didn't bother to check them. But they might be helpful in your case.


## Comments

Hi, thanks for the answer. I don't understand what you do in step 4. How is your voting set up? I have never seen Hough voting used to match point sets. Also, how do you match? Do you extract descriptors? I was thinking about using ORB. Is there a particular reason you do not use the matchers provided by OpenCV? 1 ms sounds very nice!

( 2013-02-04 04:34:21 -0600 )

Response to the edit: ah, OK, I get it. Though this only works for shifts; in reality there is also rotation and 3D effects. Maybe it is enough, though. Thanks a lot!

( 2013-02-04 07:26:54 -0600 )

You are welcome. And this doesn't have to be limited to a shift: you can repeat step 4 a number of times with different angles or scales in order to find the best one. This won't cost too much computation time if the number of angles/scales is small. If it gets bigger, then use descriptors to reduce the number of matchings that have to be performed.

( 2013-02-04 07:50:10 -0600 )
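The extension described in this comment can be sketched self-containedly: rotate one point set by each candidate angle, run the same (dx, dy) vote, and keep the angle whose vote peak is strongest (all names and the neighborhood bound are my own choices):

```python
import numpy as np

def best_rotation_and_shift(points_a, points_b, angles_deg, max_shift=32):
    """Repeat the shift vote for several candidate rotations of set A."""
    size = 2 * max_shift + 1
    best = (None, None, -1)  # (angle, (dx, dy), peak votes)
    for angle in angles_deg:
        t = np.deg2rad(angle)
        rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        rotated = points_a @ rot.T
        # The same (dx, dy) voting as before, on the rotated points.
        votes = np.zeros((size, size), dtype=np.int32)
        for p in rotated:
            d = np.rint(points_b - p).astype(int)
            near = (np.abs(d) <= max_shift).all(axis=1)
            for dx, dy in d[near]:
                votes[dy + max_shift, dx + max_shift] += 1
        peak = votes.max()
        # The correct angle produces the sharpest, highest peak.
        if peak > best[2]:
            iy, ix = np.unravel_index(votes.argmax(), votes.shape)
            best = (angle, (ix - max_shift, iy - max_shift), peak)
    return best[0], best[1]
```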

Yes, this should work. Since I am looking at frame-by-frame changes I can drastically reduce the search space. Wow, so simple, yet so beautiful :).

( 2013-02-04 11:35:11 -0600 )

Hi Daniel,

Concerning the Hough voting, I think these slides could help you:

They are built around SIFT, but the principle of Hough voting is much the same!

( 2013-09-23 09:29:17 -0600 )

I'm kind of late, but I've created VidGear, a powerful, threaded video-processing Python library that now provides real-time video stabilization with minimal latency and little to no additional computational cost through its Stabilizer class. The basic idea is to track and save a salient-feature array over a given number of frames, and then use these anchor points to cancel out all perturbations relative to them for the incoming frames in the queue. Here's a basic usage example for your convenience:

```python
# import libraries
from vidgear.gears import VideoGear
import cv2

# open any valid video stream (e.g. the device at index 0) with stabilization enabled
stream = VideoGear(source=0, stabilize=True).start()

# infinite loop
while True:

    # read stabilized frames
    frame = stream.read()

    # check if the frame is None, i.e. the stream has ended
    if frame is None:
        break

    # do something with the stabilized frame here

    # show the output window
    cv2.imshow("Stabilized Frame", frame)

    # break out on 'q' key-press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close the output window
cv2.destroyAllWindows()

# safely close the video stream
stream.stop()
```


More advanced usage can be found here: https://github.com/abhiTronix/vidgear...



## Stats

Asked: 2013-02-03 05:10:46 -0600

Seen: 6,273 times

Last updated: Jul 25 '19