
Request for an example of using videostab module APIs

Hi there,

Could anyone give me an example of using the videostab APIs in OpenCV 3.3.0 to estimate global motion between two adjacent frames? I couldn't find any, so I worked out my own approach (explained step by step below). I am trying estimateGlobalMotionRansac(), estimateGlobalMotionLeastSquares(), and MotionEstimatorRansacL2::estimate(), but none of them seems to work well.

Here are the two original gray frames (left: previous, right: current) between which I want to find the global motion matrix.

image description

After extracting keypoints (using cv::goodFeaturesToTrack):

image description

Getting the transformation matrix with estimateGlobalMotionRansac(), then using it to warp the previous frame into the coordinate space of the current one (left) and computing the difference image (right):

image description
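To make the mapping concrete: the videostab estimators return a 3x3 matrix that acts on homogeneous pixel coordinates, taking a previous-frame point into the current frame. A minimal pure-Python sketch (the matrix here is invented for illustration, a pure translation):

```python
# Invented 3x3 motion matrix of the kind the videostab estimators return
# (for MM_AFFINE the last row is [0, 0, 1]): a shift of +5 px in x, +3 px in y.
M = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]

def apply_motion(M, x, y):
    """Map a previous-frame point (x, y) into current-frame coordinates."""
    xh = M[0][0] * x + M[0][1] * y + M[0][2]
    yh = M[1][0] * x + M[1][1] * y + M[1][2]
    w  = M[2][0] * x + M[2][1] * y + M[2][2]
    return xh / w, yh / w

print(apply_motion(M, 10.0, 20.0))  # (15.0, 23.0)
```

This is exactly what warpPerspective does per pixel, which is why a 3x3 videostab matrix can be fed to it directly.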

Doing the same with estimateGlobalMotionLeastSquares():

image description

Continuing with MotionEstimatorRansacL2::estimate():

image description

You can see that the results are terrible. Below I attach my code; I would highly appreciate help spotting possible mistakes in it, or better still a working sample.

    //------------- READ 2 CONSECUTIVE FRAMES
    videoSrc.read(prev_frame);
    videoSrc.read(cur_frame);

    //------------- CONVERT TO GRAY SCALE & EXTRACT KEY POINTS
    // VideoCapture decodes frames as BGR, so BGR2GRAY is the correct code
    cvtColor(prev_frame, prev_gray, COLOR_BGR2GRAY);
    cvtColor(cur_frame, cur_gray, COLOR_BGR2GRAY);

    goodFeaturesToTrack(prev_gray, prev_keypoints, 500, 0.04, 10, Mat(), 5, false, 0.04);
    goodFeaturesToTrack(cur_gray, cur_keypoints, 500, 0.04, 10, Mat(), 5, false, 0.04);


    //------------- ESTIMATE GLOBAL MOTION BASED ON THE ABOVE SETS OF KEY POINTS
    /// Using the videostab module's APIs:
    /// estimateGlobalMotionRansac(), estimateGlobalMotionLeastSquares()
    /// and MotionEstimatorRansacL2::estimate()
    /// Each returns a 3x3 affine transformation matrix

    // 1. estimateGlobalMotionRansac()
    videostab::RansacParams ransacParams = videostab::RansacParams::default2dMotion(videostab::MM_AFFINE);
    Mat m_transform_1 = videostab::estimateGlobalMotionRansac(prev_keypoints, cur_keypoints, videostab::MM_AFFINE, ransacParams, nullptr, nullptr);

    // 2. estimateGlobalMotionLeastSquares()
    Mat m_transform_2 = videostab::estimateGlobalMotionLeastSquares(prev_keypoints, cur_keypoints, videostab::MM_AFFINE, nullptr);

    // 3. MotionEstimatorRansacL2::estimate()
    videostab::MotionEstimatorRansacL2 L2Estimator(videostab::MM_AFFINE);
    L2Estimator.setRansacParams(ransacParams);
    L2Estimator.setMinInlierRatio(0.5f);

    Mat m_transform_3 = L2Estimator.estimate(prev_keypoints, cur_keypoints);


    //------------- COMPUTE DIFFERENCE IMAGES
    // The estimated matrices are 3x3, so warpPerspective applies them directly
    Mat prev_gray_trans, diffImg, normImg;

    // 1. estimateGlobalMotionRansac()
    warpPerspective(prev_gray, prev_gray_trans, m_transform_1, Size(prev_gray.cols, prev_gray.rows));
    diffImg = cur_gray - prev_gray_trans;
    normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());

    // 2. estimateGlobalMotionLeastSquares()
    warpPerspective(prev_gray, prev_gray_trans, m_transform_2, Size(prev_gray.cols, prev_gray.rows));
    diffImg = cur_gray - prev_gray_trans;
    normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());

    // 3. MotionEstimatorRansacL2::estimate()
    warpPerspective(prev_gray, prev_gray_trans, m_transform_3, Size(prev_gray.cols, prev_gray.rows));
    diffImg = cur_gray - prev_gray_trans;
    normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());
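One note for anyone reading along: the three estimators all expect *index-matched* correspondences, i.e. points0[i] and points1[i] must be the same physical point seen in the two frames (the videostab pipeline itself obtains them by detecting features in the previous frame and tracking them into the current one with calcOpticalFlowPyrLK). Running the detector independently on each frame, as above, does not produce such pairs, which may explain the bad warps. The fit itself is straightforward; here is a pure-Python sketch (no OpenCV, synthetic points) of the affine model that MM_AFFINE solves for:

```python
# Sketch of the affine fit behind MM_AFFINE: solve
#   x' = a*x + b*y + tx,   y' = c*x + d*y + ty
# from index-matched point pairs. Three non-collinear pairs determine the
# model exactly (Cramer's rule); the real code least-squares over many pairs.

def fit_affine(pts0, pts1):
    (x0, y0), (x1, y1), (x2, y2) = pts0
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve_row(v0, v1, v2):
        # Cramer's rule for [a, b, t] in a*x + b*y + t = v
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        t = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, t

    a, b, tx = solve_row(*[p[0] for p in pts1])
    c, d, ty = solve_row(*[p[1] for p in pts1])
    return [[a, b, tx], [c, d, ty], [0.0, 0.0, 1.0]]

# A pure translation of (+5, +3): every pair shifts by the same amount.
prev_pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
cur_pts  = [(5.0, 3.0), (6.0, 3.0), (5.0, 4.0)]
M = fit_affine(prev_pts, cur_pts)
print(M[0])  # [1.0, 0.0, 5.0]
print(M[1])  # [0.0, 1.0, 3.0]
```

Feeding such garbage-free, genuinely matched pairs is the precondition for any of the estimators above to return a sensible matrix.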
