Hi there,
Could anyone give me an example of using the videostab APIs in OpenCV 3.3.0 to estimate the global motion between two adjacent frames? I couldn't find any, so I figured out my own way (explained step by step below). I am trying estimateGlobalMotionRansac(), estimateGlobalMotionLeastSquares(), and MotionEstimatorRansacL2::estimate(), but none of them seems to work well.
Here are the two original gray frames (left: previous, right: current):
After extracting keypoints (using cv::goodFeaturesToTrack):
Getting the transformation matrix with estimateGlobalMotionRansac(), then using it to warp the previous frame into the coordinate space of the current one (left), and computing the difference image (right):
Doing the same with estimateGlobalMotionLeastSquares():
And with MotionEstimatorRansacL2::estimate():
You can see that the results are terrible. Below I attach my piece of code; I would highly appreciate any help in finding possible mistakes in it, or better yet a working code sample.
//------------- READ 2 CONSECUTIVE FRAMES
videoSrc.read(prev_frame);
videoSrc.read(cur_frame);
//------------- CONVERT TO GRAY SCALE
cvtColor(prev_frame, prev_gray, COLOR_BGR2GRAY); // VideoCapture delivers BGR, not RGB
cvtColor(cur_frame, cur_gray, COLOR_BGR2GRAY);
getKeyPoints(prev_gray, prev_keypoints);
getKeyPoints(cur_gray, cur_keypoints);
//------------- ESTIMATE GLOBAL MOTION BASED ON THE ABOVE SETS OF KEY POINTS
/// Using the videostab module's APIs:
/// estimateGlobalMotionRansac(), estimateGlobalMotionLeastSquares(), and MotionEstimatorRansacL2::estimate()
/// Each method outputs a 3x3 affine transformation matrix
// 1. estimateGlobalMotionRansac()
videostab::RansacParams ransacParams = videostab::RansacParams::default2dMotion(videostab::MM_AFFINE);
Mat m_transform_1 = videostab::estimateGlobalMotionRansac(prev_keypoints, cur_keypoints, videostab::MM_AFFINE, ransacParams, NULL, NULL);
// 2. estimateGlobalMotionLeastSquares()
Mat m_transform_2 = videostab::estimateGlobalMotionLeastSquares(prev_keypoints, cur_keypoints, videostab::MM_AFFINE, NULL);
// 3. MotionEstimatorRansacL2::estimate()
videostab::MotionEstimatorRansacL2 L2Estimator(videostab::MM_AFFINE);
L2Estimator.setRansacParams(ransacParams);
L2Estimator.setMinInlierRatio(0.5f);
Mat m_transform_3 = L2Estimator.estimate(prev_keypoints, cur_keypoints);
//------------- COMPUTE DIFFERENCE IMAGES
Mat prev_gray_trans, diffImg, normImg;
// 1. estimateGlobalMotionRansac()
warpPerspective(prev_gray, prev_gray_trans, m_transform_1, Size(prev_gray.cols, prev_gray.rows));
diffImg = cur_gray - prev_gray_trans;
normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());
// 2. estimateGlobalMotionLeastSquares()
warpPerspective(prev_gray, prev_gray_trans, m_transform_2, Size(prev_gray.cols, prev_gray.rows));
diffImg = cur_gray - prev_gray_trans;
normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());
// 3. MotionEstimatorRansacL2::estimate()
warpPerspective(prev_gray, prev_gray_trans, m_transform_3, Size(prev_gray.cols, prev_gray.rows));
diffImg = cur_gray - prev_gray_trans;
normalize(diffImg, normImg, 0, 255, NORM_MINMAX, CV_8UC1, Mat());