martisan's profile - activity

2017-08-18 05:18:15 -0600 asked a question Video Stabilization using goodFeaturesToTrack and calcOpticalFlowPyrLK

Hi all,

I'm currently trying to achieve video stabilization using goodFeaturesToTrack and calcOpticalFlowPyrLK.

I detect good features in each frame, track them with optical flow, accumulate the frame-to-frame motion into an image trajectory, smooth that trajectory, and apply the corrected transforms to the video frames.

This all seems to be working correctly, but the problem I'm facing is that the processed frames contain large black portions. How can I keep the frames centered in the video without seeing the large black areas? Any help or suggestions would really be appreciated.
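From what I've read, the usual fix is "crop and zoom": crop each stabilized frame by a fixed border and scale it back up to the original size, at the cost of a slight magnification. A rough, untested sketch of what I mean, reusing the HORIZONTAL_BORDER_CROP constant from my code below (cur2 is a placeholder name for the warped output frame):

int vert_border = HORIZONTAL_BORDER_CROP * cur2.rows / cur2.cols; // scale the margin to the frame's aspect ratio
Mat cropped = cur2(Range(vert_border, cur2.rows - vert_border),
                   Range(HORIZONTAL_BORDER_CROP, cur2.cols - HORIZONTAL_BORDER_CROP));
Mat full;
resize(cropped, full, cur2.size());

The bigger the crop, the more shake it can hide, but the more the image is magnified. Is that the right direction, or is there a cleaner way?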

Below is the code I'm currently using:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <cassert>
#include <cmath>
#include <fstream>
#include <ctime>

using namespace std;
using namespace cv;

const int SMOOTHING_RADIUS = 30; // In frames. Larger values give a more stable video but react more slowly to sudden panning.
const int HORIZONTAL_BORDER_CROP = 60; // In pixels. Crops the border so the black edges introduced by stabilisation are less noticeable.

struct TransformParam {
    TransformParam() {}
    TransformParam(double _dx, double _dy, double _da)
    {
        dx = _dx;
        dy = _dy;
        da = _da;
    }

    double dx;
    double dy;
    double da; // angle
};

struct Trajectory {
    Trajectory() {}
    Trajectory(double _x, double _y, double _a)
    {
        x = _x;
        y = _y;
        a = _a;
    }

    double x;
    double y;
    double a; // angle
};

void stabilize(const vector<Mat> &images, vector<Mat> &resultImages) {
  vector<TransformParam> prev_to_cur_transform;
  Mat last_T;

  for (int i = 1; i < images.size(); i++) {
    Mat prev = images[i-1];
    Mat cur = images[i];

    // vector from prev to cur
    vector<Point2f> prevCorner, curCorner;
    vector<Point2f> prevCorner2, curCorner2;
    vector<uchar> status;
    vector<float> error;

    Mat curGrey;
    Mat prevGrey;

    cvtColor(cur, curGrey, COLOR_BGR2GRAY);
    cvtColor(prev, prevGrey, COLOR_BGR2GRAY);

    // NOTE: maxCorners here is one corner per pixel, which is effectively
    // unbounded; a cap of a few hundred corners is more typical and much faster
    size_t totalEntries = curGrey.cols * curGrey.rows;

    goodFeaturesToTrack(prevGrey, prevCorner, (int)totalEntries, 0.01, 1);
    calcOpticalFlowPyrLK(prevGrey, curGrey, prevCorner, curCorner, status, error);

    // weed out bad matches
    for (size_t j = 0; j < status.size(); j++) {
      if (status[j]) {
        prevCorner2.push_back(prevCorner[j]);
        curCorner2.push_back(curCorner[j]);
      }
    }

    // translation + rotation + uniform scale
    // false = partial affine (similarity transform), no shearing
    Mat T = estimateRigidTransform(prevCorner2, curCorner2, false);

    // in rare cases no valid transform is found; reuse the last known good one
    if (T.empty()) {
      last_T.copyTo(T);
    }

    T.copyTo(last_T);

    // decompose T
    double dx = T.at<double>(0, 2);
    double dy = T.at<double>(1, 2);
    double da = atan2(T.at<double>(1, 0), T.at<double>(0, 0));

    prev_to_cur_transform.push_back(TransformParam(dx, dy, da));
    cout << "i: " << i << " " << "dx: " << dx << " " << "dy: " << dy << " " << "da: " << da << endl;
    cout << endl;

    curGrey.copyTo(prevGrey); // redundant here: prevGrey is recomputed from images[i-1] each iteration
    cout << "Image: " << i << "/" << images.size() << " - good optical flow: " << prevCorner2.size() << endl;
  }

  // Step 2 - Accumulate transformations to get image trajectory
  // Accumulated frame to frame transformation
  double a = 0;
  double x = 0;
  double y = 0;

  vector<Trajectory> trajectory; // trajectory at all frames

  for (int i = 0; i < prev_to_cur_transform.size(); i++) {
    x += prev_to_cur_transform[i].dx;
    y += prev_to_cur_transform[i].dy;
    a += prev_to_cur_transform[i].da;

    trajectory.push_back(Trajectory(x, y, a));
    cout << "Trajectory " << (i+1) << ":" << " " << "x: " << x << " " << "y: " << y << " " << "a: " << a << endl;
  }

  // Step 3 - Smooth out trajectory using an averaging window
  vector<Trajectory> smoothed_trajectory; // trajectory at all frames

  for (int i = 0; i < trajectory.size(); i++) {
    double sum_x = 0;
    double sum_y ...
(more)
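For reference, applying one of these transforms to a frame looks roughly like this (my sketch with illustrative names; cur is the input frame, cur2 the warped output):

// 2x3 rigid-motion matrix: [ cos(da) -sin(da) dx ; sin(da) cos(da) dy ]
Mat T(2, 3, CV_64F);
T.at<double>(0, 0) =  cos(da);  T.at<double>(0, 1) = -sin(da);  T.at<double>(0, 2) = dx;
T.at<double>(1, 0) =  sin(da);  T.at<double>(1, 1) =  cos(da);  T.at<double>(1, 2) = dy;

Mat cur2;
warpAffine(cur, cur2, T, cur.size());

Since warpAffine shifts the image inside a canvas of the same size, the uncovered pixels come out black, which is exactly where the crop-and-zoom step sketched above comes in.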
2017-07-05 03:34:08 -0600 received badge  Supporter (source)
2017-07-05 03:33:28 -0600 received badge  Scholar (source)
2017-07-05 03:33:05 -0600 commented answer StabilizerBase with image sequence

This is exactly what I was looking for @berak. I've tested it and it's working great! Thank you for the help. Really appreciate it.

2017-07-04 04:41:10 -0600 asked a question StabilizerBase with image sequence

Hi all,

I'm currently trying to do video stabilization using the videostab module.

At the moment I have to set the frame source using:

Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath);

Is there any way I can use an image sequence instead of VideoFileSource?
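What I had in mind is something like a small IFrameSource subclass that serves frames from a list of image paths. A rough, untested sketch (ImageSequenceSource is a name I made up; this assumes the usual opencv2/opencv.hpp include plus using namespace cv and cv::videostab, as in my other code):

class ImageSequenceSource : public IFrameSource {
public:
    ImageSequenceSource(const vector<String> &paths) : paths_(paths), pos_(0) {}
    virtual void reset() { pos_ = 0; }
    virtual Mat nextFrame() {
        if (pos_ >= paths_.size()) return Mat(); // an empty Mat ends the stream
        return imread(paths_[pos_++]);
    }
private:
    vector<String> paths_;
    size_t pos_;
};

which could then be plugged in with stabilizer->setFrameSource(makePtr<ImageSequenceSource>(paths)). Would something along those lines work?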

2017-06-26 05:37:53 -0600 asked a question Video stabilization with videostab module

Hi all,

Really hope that someone can assist with the following problem. I'm currently trying to do video stabilization on iOS, but I'm getting an out-of-memory exception.

I have the following code that attempts to stabilize the video:

+ (int)videoStabFileAtURL:(NSURL *)inputURL writeToURL:(NSURL *)outputURL {
  Ptr<IFrameSource> stabilizedFrames;

  try {
    // 1 - Prepare input video and check it
    String inputPath(inputURL.path.UTF8String);   // construct in place; "*new String(...)" leaks
    String outputPath(outputURL.path.UTF8String);

    Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath);
    cout << "Frame count (rough): " << source->count() << endl;

    // 2 - Prepare the motion estimator
    double min_inlier_ratio = 0.1;

    Ptr<MotionEstimatorRansacL2> est = makePtr<MotionEstimatorRansacL2>(MM_TRANSLATION_AND_SCALE);
    RansacParams ransac = est->ransacParams();
    ransac.thresh = 5;
    ransac.eps = 0.5;

    est->setRansacParams(ransac);
    est->setMinInlierRatio(min_inlier_ratio);

    // 3 - Create a feature detector
    int nkps = 20;

    Ptr<GFTTDetector> feature_detector = GFTTDetector::create(nkps);

    // 4 - Create motion estimator
    Ptr<KeypointBasedMotionEstimator> motionEstBuilder = makePtr<KeypointBasedMotionEstimator>(est);
    motionEstBuilder->setDetector(feature_detector);

    Ptr<IOutlierRejector> outlierRejector = makePtr<NullOutlierRejector>();
    motionEstBuilder->setOutlierRejector(outlierRejector);

    // 5 - Prepare stabilizer
    StabilizerBase *stabilizer = 0;

    int radius_pass = 15;
    bool est_trim = true;

//    TwoPassStabilizer *twoPassStabilizer = new TwoPassStabilizer();
//    twoPassStabilizer->setEstimateTrimRatio(est_trim);
//    twoPassStabilizer->setMotionStabilizer(makePtr<GaussianMotionFilter>(radius_pass));

    OnePassStabilizer *onePassStabilizer = new OnePassStabilizer();
    onePassStabilizer->setMotionFilter(makePtr<GaussianMotionFilter>(radius_pass));

    stabilizer = onePassStabilizer;

    // Setup parameters
    int radius = 15;
    double trim_ratio = 0.1;
    bool incl_constr = false;

    stabilizer->setFrameSource(source);
    stabilizer->setMotionEstimator(motionEstBuilder);
    stabilizer->setRadius(radius);
    stabilizer->setTrimRatio(trim_ratio);
    stabilizer->setCorrectionForInclusion(incl_constr);
    stabilizer->setBorderMode(BORDER_REPLICATE);

    // Cast stabilizer to simple frame source interface to read stabilized frames
    stabilizedFrames.reset(dynamic_cast<IFrameSource*>(stabilizer));

    // 6 - Processing the stabilized frames. The results are saved
    processing(stabilizedFrames, outputPath);
  } catch (const exception &e) {
    cout << "Error: " << e.what() << endl;
    stabilizedFrames.release();
    return -1;
  }

  stabilizedFrames.release();
  return 0;
}

void processing(Ptr<IFrameSource> stabilizedFrames, String outputPath) {
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    @autoreleasepool {
      VideoWriter videoWriter;
      Mat stabilizedFrame;
      int nframes = 0;
      double outputFps = 30; // hard-coded; ideally read from the input video

      // For each stabilized frame
      while (!(stabilizedFrame = stabilizedFrames->nextFrame()).empty()) {
        nframes++;

        // Init writer (once) and save every stabilized frame
        if (!outputPath.empty()) {
          if (!videoWriter.isOpened()) {
            videoWriter.open(outputPath, VideoWriter::fourcc('H', '2', '6', '4'), outputFps, stabilizedFrame.size());
          }

          // write each frame; in my first version this line sat inside the
          // isOpened() check, so only the first frame was ever written
          videoWriter << stabilizedFrame;
        }
      }
    }
  });
}

I can see it processing the frames, but it's using a lot of memory on an iPhone 6, which eventually leads to the app crashing. Does anyone know what I can do to reduce memory usage?
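One idea I want to try: as far as I can tell the stabilizer buffers a sliding window of frames internally (on the order of 2 * radius + 1 frames), so at full iPhone video resolution that window alone is large. Feeding it downscaled frames should shrink it considerably. A rough, untested sketch of a decorator frame source (ResizedFrameSource is a name I made up):

class ResizedFrameSource : public IFrameSource {
public:
    ResizedFrameSource(Ptr<IFrameSource> src, double scale) : src_(src), scale_(scale) {}
    virtual void reset() { src_->reset(); }
    virtual Mat nextFrame() {
        Mat frame = src_->nextFrame();
        if (frame.empty()) return frame;
        Mat small;
        resize(frame, small, Size(), scale_, scale_, INTER_AREA); // shrink before stabilizing
        return small;
    }
private:
    Ptr<IFrameSource> src_;
    double scale_;
};

Plugged in with stabilizer->setFrameSource(makePtr<ResizedFrameSource>(source, 0.5)), halving each side should cut per-frame memory to a quarter, and a smaller radius would shrink the window further. Would that be a sensible approach, or is there a better lever?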

2017-06-20 06:14:48 -0600 asked a question Slow performance on iOS with video stabilization

Hi all,

I hope someone could please help me understand why processing would take so long on an iPhone 6. I'm currently trying to do video stabilization using the following:

+ (void)stabilizeVideoFileAtURL:(NSURL *)inputURL writeToURL:(NSURL *)outputURL {
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    String inputFile(inputURL.path.UTF8String);  // construct in place; "*new String(...)" leaks
    cout << "Input: " << inputFile << endl;

    String outputFile(outputURL.path.UTF8String);
    cout << "Output: " << outputFile << endl;

    VideoCapture cap(inputFile);
    assert(cap.isOpened());

    Mat cur, cur_grey, cur_orig;
    Mat prev, prev_grey, prev_orig;

    cap >> prev;
    cvtColor(prev, prev_grey, COLOR_BGR2GRAY);

    // Step 1 - Get previous to current frame transformation (dx, dy, da) for all frames
    vector<TransformParam> prev_to_cur_transform;

    int frames = 1;
    int max_frames = (int)cap.get(CAP_PROP_FRAME_COUNT);
    cout << "Max Frames: " << max_frames << endl;

    Mat last_T;

    while (true) {
      cap >> cur;

      if (cur.empty()) {
        cout << "No more frames, breaking out..." << endl;
        break;
      }

      cvtColor(cur, cur_grey, COLOR_BGR2GRAY);

      // Vector from prev to cur
      vector<Point2f> prev_corner, cur_corner;
      vector<Point2f> prev_corner2, cur_corner2;
      vector<uchar> status;
      vector<float> error;

      goodFeaturesToTrack(prev_grey, prev_corner, 200, 0.01, 30);
      calcOpticalFlowPyrLK(prev_grey, cur_grey, prev_corner, cur_corner, status, error);

      // Weed out bad matches
      for (size_t i = 0; i < status.size(); i++) {
        if (status[i]) {
          prev_corner2.push_back(prev_corner[i]);
          cur_corner2.push_back(cur_corner[i]);
        }
      }

      // Translation + rotation (+ uniform scale); use the filtered matches,
      // otherwise the weed-out loop above has no effect
      Mat T = estimateRigidTransform(prev_corner2, cur_corner2, false);

      if (T.empty()) {
        cout << "No transform was found, reusing the last good one" << endl;
        last_T.copyTo(T);
      }

      T.copyTo(last_T);

      // Decompose T
      double dx = T.at<double>(0,2);
      double dy = T.at<double>(1,2);
      double da = atan2(T.at<double>(1,0), T.at<double>(0,0));

      prev_to_cur_transform.push_back(TransformParam(dx, dy, da));

      cur.copyTo(prev);
      cur_grey.copyTo(prev_grey);

      frames++;
    }

    // Step 2 - Accumulate the transformations to get the image trajectory
    // Accumulated frame to frame transform
    double x = 0;
    double y = 0;
    double a = 0;

    vector<Trajectory> trajectory; // Trajectory at all frames

    for (size_t i = 0; i < prev_to_cur_transform.size(); i++) {
      x += prev_to_cur_transform[i].dx;
      y += prev_to_cur_transform[i].dy;
      a += prev_to_cur_transform[i].da;

      trajectory.push_back(Trajectory(x, y, a));
    }

    // Step 3 - Smooth out the trajectory using an averaging window
    vector<Trajectory> smoothed_trajectory; // Trajectory at all frames

    for (size_t i = 0; i < trajectory.size(); i++) {
      double sum_x = 0;
      double sum_y = 0;
      double sum_a = 0;
      int count = 0;

      for (int j = -SMOOTHING_RADIUS; j <= SMOOTHING_RADIUS; j++) {
        int k = (int)i + j; // use a signed index: i + j would wrap around for unsigned i
        if (k >= 0 && k < (int)trajectory.size()) {
          sum_x += trajectory[k].x;
          sum_y += trajectory[k].y;
          sum_a += trajectory[k].a;

          count++;
        }
      }

      double avg_x = sum_x / count;
      double avg_y = sum_y / count;
      double avg_a = sum_a / count;

      smoothed_trajectory.push_back(Trajectory(avg_x, avg_y, avg_a));
    }

    // Step 4 - Generate new set of previous to current transform, such that the trajectory ends up being the same as the smoothed trajectory
    vector<TransformParam> new_prev_to_cur_transform;

    // Accumulated frame to frame transform
    x = 0;
    y = 0;
    a = 0;

    for (size_t i = 0; i < prev_to_cur_transform.size(); i++) {
      x += prev_to_cur_transform[i].dx;
      y += prev_to_cur_transform[i].dy;
      a += prev_to_cur_transform[i].da;

      // Target - Current
      double diff_x = smoothed_trajectory[i].x - x;
      double diff_y = smoothed_trajectory[i].y - y;
      double diff_a = smoothed_trajectory[i].a - a;

      double dx = prev_to_cur_transform[i].dx + diff_x;
      double dy = prev_to_cur_transform[i].dy + diff_y;
      double da = prev_to_cur_transform[i].da + diff_a;

      new_prev_to_cur_transform ...
(more)
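To narrow down where the time goes, I've started timing the two heaviest per-frame calls with getTickCount (instrumentation only, not part of the pipeline above):

int64 t0 = getTickCount();
goodFeaturesToTrack(prev_grey, prev_corner, 200, 0.01, 30);
int64 t1 = getTickCount();
calcOpticalFlowPyrLK(prev_grey, cur_grey, prev_corner, cur_corner, status, error);
int64 t2 = getTickCount();
cout << "GFTT: " << (t1 - t0) / getTickFrequency()
     << " s, LK: " << (t2 - t1) / getTickFrequency() << " s" << endl;

If goodFeaturesToTrack turns out to dominate, reducing maxCorners or detecting on a downscaled grey image would be the first things I'd try.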
2017-04-06 01:24:20 -0600 commented question 360 Panorama around object

I have tried it, and that is how I can do a normal 360 panorama. But how do I actually create a 360 view around an object like a car?

2017-04-05 05:50:49 -0600 asked a question 360 Panorama around object

Hi guys,

I'm fairly new to OpenCV, so any help would be appreciated.
At the moment I can stitch a normal 360 Panorama on iOS with no problems.

What I'm actually trying to do is the following:

  • Capture images with the iPhone camera while walking around an object, a car for example.
  • Stitch all the images together to create a 360-degree rotation around the car.


Does anyone have an idea of how to achieve this or even point me in the right direction?
I've been battling with this for a long time now.

2017-03-22 10:12:07 -0600 commented question Did Opencv blending function well manage full 360 panorama ?

Hey @FMon,

How are you capturing your images and stitching them? I'm currently trying to stitch a 360 panorama, but it just doesn't want to work :(