
[Python] Real time image stabilization with Optical Flow

asked 2016-11-29 14:43:34 -0600 by Naustvik

Hi! I'm new here on this forum and would love some help with a project I'm working on!

I'm trying to write a small image-stabilization program in Python, but I can't get it to work the way I want.

First, my test program:

from stabilizer import Stabilizer
import cv2
import time

# open the default camera and give it a moment to warm up
imageCapture = cv2.VideoCapture(0)
time.sleep(2.0)

frame = 0      # previous frame; 0 means "no frame captured yet"
counter = 0

stabilizer = Stabilizer()

while True:
    # read() returns a (ret, frame) tuple
    image = imageCapture.read()
    frame, result = stabilizer.stabilize(image, frame)

    cv2.imshow("Result", result)
    cv2.imshow("Image", image[1])
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
    counter += 1
    print(counter)

print("[INFO] cleaning up...")
cv2.destroyAllWindows()
imageCapture.release()

...and this is my actual stabilization program:

import cv2


class Stabilizer:
    def stabilize(self, image, old_frame):
        # params for Shi-Tomasi corner detection
        feature_params = dict(maxCorners=100, qualityLevel=0.3,
                              minDistance=7, blockSize=7)

        # parameters for Lucas-Kanade optical flow
        lk_params = dict(winSize=(15, 15),
                         maxLevel=2,
                         criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

        # first call: old_frame is still the integer 0, so take the
        # current capture as the previous frame
        if isinstance(old_frame, int):
            ret, old_frame = image

        # find corners in the previous frame
        old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

        ret, frame = image
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # calculate optical flow between the previous and current frame
        p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)

        # select the points that were tracked successfully
        good_new = p1[st == 1]
        good_old = p0[st == 1]

        # make a 3x3 matrix from the point correspondences
        h, mask = cv2.findHomography(good_old, good_new)
        # h = cv2.getPerspectiveTransform(good_old, good_new)  # not working

        result = cv2.warpPerspective(frame, h, (frame.shape[1], frame.shape[0]))

        return frame, result

This is how I thought it would work (a cleaned-up sketch follows below):

  1. Capture one frame and find points (p0) to match. The first time, the old and the new frame will be the same, but on the next run they should be two different frames.
  2. Calculate the optical flow from these points.
  3. Build a 3x3 transformation matrix from this optical flow.
  4. Apply the transformation to the image.

Is there anyone who could help me with this? Thanks!
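
For reference, a minimal sketch of the flow described in the list above, with the previous frame carried as instance state rather than passed back and forth. The class and variable names here are illustrative, not from the code above, and note that the homography is computed from new points to old points so the warp cancels the inter-frame motion rather than doubling it:

import cv2

class FlowStabilizer:
    """Sketch: track features frame-to-frame and warp each frame
    back towards the previous one with a homography."""

    def __init__(self):
        self.prev_gray = None
        self.feature_params = dict(maxCorners=100, qualityLevel=0.3,
                                   minDistance=7, blockSize=7)
        self.lk_params = dict(winSize=(15, 15), maxLevel=2,
                              criteria=(cv2.TERM_CRITERIA_EPS |
                                        cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    def stabilize(self, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if self.prev_gray is None:
            self.prev_gray = gray          # first frame: nothing to align to yet
            return frame

        p0 = cv2.goodFeaturesToTrack(self.prev_gray, mask=None,
                                     **self.feature_params)
        if p0 is None:                     # no trackable corners found
            self.prev_gray = gray
            return frame

        p1, st, err = cv2.calcOpticalFlowPyrLK(self.prev_gray, gray,
                                               p0, None, **self.lk_params)
        good_new = p1[st == 1]
        good_old = p0[st == 1]
        self.prev_gray = gray

        if len(good_new) < 4:              # findHomography needs >= 4 points
            return frame

        # map the *new* positions back onto the *old* ones, so the
        # warp cancels the inter-frame motion instead of applying it again
        h, mask = cv2.findHomography(good_new, good_old)
        if h is None:
            return frame
        return cv2.warpPerspective(frame, h, (frame.shape[1], frame.shape[0]))

Note that this still only aligns each frame to its immediate predecessor; as the comments below point out, the motion estimate also needs to be smoothed over time.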


Comments


You need to smooth the transformation somehow; otherwise you will just be shaking one frame behind. Try limiting it to just translation to start, to see how different smoothing types affect the result and to check that the rest of the function works as expected. Then find different ways (there are many) to smooth homography transforms.

Tetragramm (2016-11-29 18:25:15 -0600)
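
For illustration, a minimal sketch of one possible smoothing type: an exponential moving average over the accumulated translation-only trajectory. The per-frame estimates dx, dy, the alpha value, and all names here are assumptions for the sketch, not code from the thread:

import numpy as np
import cv2

class TrajectorySmoother:
    """Sketch: smooth a translation-only camera trajectory with an
    exponential moving average and return the corrective shift."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha                 # higher alpha = smoother path
        self.x = self.y = 0.0              # raw accumulated trajectory
        self.sx = self.sy = 0.0            # smoothed trajectory

    def correction(self, dx, dy):
        self.x += dx
        self.y += dy
        self.sx = self.alpha * self.sx + (1.0 - self.alpha) * self.x
        self.sy = self.alpha * self.sy + (1.0 - self.alpha) * self.y
        # shift that moves the raw path onto the smoothed path
        return self.sx - self.x, self.sy - self.y

def apply_shift(frame, cx, cy):
    # 2x3 affine matrix for a pure translation
    m = np.float32([[1, 0, cx], [0, 1, cy]])
    return cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]))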

Also, you are aware that there is a video stabilizer in OpenCV?

StevenPuttemans (2016-11-30 04:21:23 -0600)

@Tetragramm: Thank you for your answer! Which elements should I apply the smoothing to? Is it good_new and good_old, or have I misunderstood the whole optical-flow idea? @StevenPuttemans: As far as I can tell, that is only for C++, isn't it? If it is also available for Python, it would be perfect!

Naustvik (2016-11-30 05:25:36 -0600)

It seems that the stabilizer class is marked CV_EXPORTS, which would mean the functionality is wrapped into Python... and if not, there are tons of samples out there on how to program it yourself, like this one.

StevenPuttemans (2016-11-30 05:52:58 -0600)

The built-in stabilizer isn't real time, unfortunately. It takes a video file as input.

Tetragramm (2016-11-30 12:12:21 -0600)

What you need to smooth is h, over time. But h is just your estimate of the motion from frame to frame, so whatever you use as the estimate of motion is what needs to be smoothed over time. I suggest you temporarily replace the homography with the median of your optical flow x and y vectors to get translation. Smoothing translation is simple, and you can test that the rest of your stabilization works.

Tetragramm (2016-11-30 12:15:54 -0600)
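
A minimal sketch of that suggestion, assuming good_old and good_new are the (N, 2) arrays produced by the st == 1 indexing in the question's code:

import numpy as np

def median_translation(good_old, good_new):
    """Robust per-frame translation: the median of the optical-flow
    vectors ignores a few badly tracked points."""
    flow = good_new - good_old             # shape (N, 2): per-point (dx, dy)
    dx = float(np.median(flow[:, 0]))
    dy = float(np.median(flow[:, 1]))
    return dx, dy

The resulting (dx, dy) pair can then be fed into a smoother such as the one sketched earlier and applied with a pure-translation affine warp.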

I've replaced the homography with the mean of the difference between the good points in both X and Y, and then applied a running-mean filter over the last N frames. The stabilization works for slow motion: if I pan the camera to the left, the image is translated to the right. But with high-frequency vibrations it has almost no effect. I've varied N from 2 to 20, but I don't quite understand why it won't remove the high frequencies. I thought the running mean would act as a low-pass FIR filter and therefore damp the vibrations. Do you have any suggestions for how I could make this better?

Naustvik (2016-11-30 17:08:20 -0600)

Well, think about it this way. By taking the running mean, you are getting rid of the high-frequency portion. So if you shift your image by the running mean, you are getting rid of the low-frequency portion but leaving the high-frequency part.

You need to shift the image by the motion you detect, minus the low-frequency part. Take a look at Eqs. 30-35 in this paper.

Tetragramm (2016-11-30 17:19:13 -0600)
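
That idea might look like the following sketch: accumulate the raw trajectory, low-pass it with a running mean, and shift each frame by the smoothed path minus the raw path, so only the high-frequency jitter is cancelled while intentional panning is kept. The window length n and the bookkeeping are illustrative choices, not from the paper:

from collections import deque
import numpy as np

class JitterFilter:
    """Sketch: cancel high-frequency motion, follow low-frequency motion."""

    def __init__(self, n=15):
        self.traj_x = self.traj_y = 0.0    # raw accumulated trajectory
        self.history = deque(maxlen=n)     # last n trajectory samples

    def correction(self, dx, dy):
        self.traj_x += dx
        self.traj_y += dy
        self.history.append((self.traj_x, self.traj_y))
        mean_x, mean_y = np.mean(self.history, axis=0)
        # smoothed (low-frequency) path minus the raw path: shifting by
        # this difference removes the jitter but not the intended motion
        return mean_x - self.traj_x, mean_y - self.traj_y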

1 answer


answered 2019-07-25 10:25:14 -0600 by AbhiTronix

I'm the author of VidGear, a powerful, threaded video-processing Python library that now provides real-time video stabilization with minimal latency and little to no extra computational cost through its Stabilizer class. Here's a basic usage example for your convenience:

# import libraries
from vidgear.gears import VideoGear
import cv2

# open any valid video stream (e.g. the device at index 0) with stabilization enabled
stream = VideoGear(source=0, stabilize=True).start()

# loop over the stabilized frames
while True:

    # read a stabilized frame
    frame = stream.read()

    # if the frame is None, the stream has ended, so break out
    if frame is None:
        break

    # do something with the stabilized frame here

    # show the output window
    cv2.imshow("Stabilized Frame", frame)

    # break out on 'q' key-press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close the output window
cv2.destroyAllWindows()

# safely close the video stream
stream.stop()

More advanced usage can be found here: https://github.com/abhiTronix/vidgear...



Stats

Asked: 2016-11-29 14:43:34 -0600

Seen: 9,700 times

Last updated: Jul 25 '19