
How to calculate the degree of camera shake

asked 2014-11-10 09:53:36 -0500 by Jenny

updated 2017-08-28 05:51:59 -0500

I have a shaky video. In order to stabilize it, I want to calculate the degree of shakiness, i.e. the magnitude of sudden changes in motion between two consecutive frames, so that I can smooth the motion. Can anyone suggest an approach?


2 answers


answered 2014-11-10 16:48:09 -0500 by juanmanpr

If you are trying to stabilize the video, there is a video stabilization module in OpenCV; you can check that out. But if it is not related to that, and you just want a measure of the shakiness of the camera, you could do something like:

  1. Detect and describe features in consecutive frames.
  2. Match those features for every pair of consecutive frames.
  3. Cross-validate the matches and eliminate bad ones.
  4. Find a homography transformation from the features in one frame to those of the next frame, rejecting outliers with RANSAC so the estimate is robust to moving objects and matching errors.
  5. Extract the rotation angle of the camera, or compute the magnitude of the rotation part of the homography matrix, and use it as a shakiness measure.

Every step has an OpenCV method, except step 3 I think; but you can cross-validate the matches yourself.


answered 2014-11-11 08:30:36 -0500 by wolfram79

There are a lot of ways to do that...

One way it can be done is shown at http://nghiaho.com/uploads/code/videostab.cpp

(frame processing snippet)

cap >> cur;

if(cur.data == NULL) {
    break;
}

cvtColor(cur, cur_grey, COLOR_BGR2GRAY);

// vector from prev to cur
vector <Point2f> prev_corner, cur_corner;
vector <Point2f> prev_corner2, cur_corner2;
vector <uchar> status;
vector <float> err;

goodFeaturesToTrack(prev_grey, prev_corner, 200, 0.01, 30);
calcOpticalFlowPyrLK(prev_grey, cur_grey, prev_corner, cur_corner, status, err);

// weed out bad matches
for(size_t i=0; i < status.size(); i++) {
    if(status[i]) {
        prev_corner2.push_back(prev_corner[i]);
        cur_corner2.push_back(cur_corner[i]);
    }
}

// translation + rotation only
Mat T = estimateRigidTransform(prev_corner2, cur_corner2, false); // false = rigid transform, no scaling/shearing

// in rare cases no transform is found. We'll just use the last known good transform.
if(T.data == NULL) {
    last_T.copyTo(T);
}

T.copyTo(last_T);

// decompose T
double dx = T.at<double>(0,2);
double dy = T.at<double>(1,2);
double da = atan2(T.at<double>(1,0), T.at<double>(0,0));

prev_to_cur_transform.push_back(TransformParam(dx, dy, da));

out_transform << k << " " << dx << " " << dy << " " << da << endl;

cur.copyTo(prev);
cur_grey.copyTo(prev_grey);

cout << "Frame: " << k << "/" << max_frames << " - good optical flow: " << prev_corner2.size() << endl;
k++;


(snippet)

Another way that is very robust is to take a few gray blocks of the image (e.g. 64x64) at fixed positions and correlate them with the following frame. The position of the maximum value of the resulting correlation matrix corresponds to the translation.

You might want to apply lens distortion correction beforehand if you want really nice results, especially when you're "zoomed out" (wide angle, where distortion is strongest).

W.
