# Motion estimation between 2 frames

Hi everyone:

I want to try motion estimation & compensation between two frames. For that, I need to extract two consecutive frames and then evaluate them.

My result has to be the image of the second frame, motion-compensated from the first one. Any ideas? Implementing the algorithm from scratch is really a pain.

I did it in MATLAB, but I couldn't implement it in OpenCV. Thank you!

This is my result using MATLAB:

UPDATE: I think I could interpolate the two frames to obtain what I need. Could someone give me a hand, please?



I have dealt with a similar problem before. Here is what you can do:

1) First, extract two consecutive frames (which I guess you already have).

2) Calculate the optical flow between the frames. Optical flow estimates, for the points in frame 2, the displacement from their positions in frame 1 — both the magnitude of the displacement and the direction in which it occurred. (Sparse variants track distinctive feature points, in the same spirit as feature matching with SIFT.)

3) Now you can interpolate the frame between the two frames using simple algebra. Interpolation predicts the position of an object in a frame located between the two given frames — for the midpoint, move each point along half of its flow vector.

There are built-in functions to calculate optical flow, i.e. motion estimation between two frames (e.g. calcOpticalFlowFarneback for dense flow, calcOpticalFlowPyrLK for sparse). You can find more details about the optical flow functions provided by OpenCV in its documentation.

With these functions, implementing motion estimation and generating the interpolated frame should not be a problem. I hope this helps.
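The two steps above (estimate the displacement, then compensate) can be sketched without OpenCV at all. The following is a minimal block-matching example in plain Java — not the optical-flow functions mentioned above, just an illustration of the same idea on small grayscale arrays; a real implementation would estimate a vector per block or per pixel with calcOpticalFlowFarneback or calcOpticalFlowPyrLK.

```java
import java.util.Arrays;

public class BlockMatchSketch {

    // Exhaustive search for the single (dy, dx) shift, within +/- search,
    // that minimizes the sum of absolute differences between frame 1 and frame 2.
    static int[] estimateShift(int[][] f1, int[][] f2, int search) {
        int h = f1.length, w = f1[0].length;
        int bestDy = 0, bestDx = 0;
        long bestSad = Long.MAX_VALUE;
        for (int dy = -search; dy <= search; dy++) {
            for (int dx = -search; dx <= search; dx++) {
                long sad = 0;
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int y2 = y + dy, x2 = x + dx;
                        int v2 = (y2 >= 0 && y2 < h && x2 >= 0 && x2 < w) ? f2[y2][x2] : 0;
                        sad += Math.abs(f1[y][x] - v2);
                    }
                }
                if (sad < bestSad) { bestSad = sad; bestDy = dy; bestDx = dx; }
            }
        }
        return new int[]{bestDy, bestDx};
    }

    // Motion-compensated prediction of frame 2 from frame 1:
    // each pixel is fetched from where it came from in frame 1.
    static int[][] compensate(int[][] f1, int dy, int dx) {
        int h = f1.length, w = f1[0].length;
        int[][] pred = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int ys = y - dy, xs = x - dx;
                pred[y][x] = (ys >= 0 && ys < h && xs >= 0 && xs < w) ? f1[ys][xs] : 0;
            }
        return pred;
    }

    public static void main(String[] args) {
        // frame 1: a bright 2x2 square at (2,2); frame 2: same square moved by (1,2)
        int[][] f1 = new int[8][8], f2 = new int[8][8];
        for (int y = 2; y <= 3; y++)
            for (int x = 2; x <= 3; x++) { f1[y][x] = 255; f2[y + 1][x + 2] = 255; }

        int[] shift = estimateShift(f1, f2, 3);
        System.out.println(Arrays.toString(shift));       // prints [1, 2]
        int[][] pred = compensate(f1, shift[0], shift[1]);
        System.out.println(Arrays.deepEquals(pred, f2));  // prints true
    }
}
```

For interpolation instead of compensation, sample halfway along the recovered vector (dy/2, dx/2) to synthesize the in-between frame.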


I understood that you want to indicate the areas of the frame which have motion. You may use a motion-history approach or a contour approach for this.

Here is a sample method using the motion-history approach (adapted from the OpenCV sample update_mhi), written in Java. Call this method in a loop, passing each retrieved video frame's Mat reference as the img parameter. A few gaps in the original snippet are filled in below and marked as assumptions in the comments: the orientation/segmentation calls that populate orient and the motion components, the maskROI submat, and the Constants.maxTimeDelta / Constants.minTimeDelta fields.

```java
// Fields...
Mat motion, mhi, orient, mask, segmask;
Mat[] buf;          // ring buffer of the last N grayscale frames
int last;           // index of the most recent frame in buf
double magnitude;
boolean isAnyTrack;

private void update_mhi(Mat img, Mat dst, int diff_threshold) {
    if (!videoSignalOkay) return;

    double timestamp = (System.nanoTime() - startTime) / 1e9;
    int idx1 = last, idx2;
    cvtColor(img, buf[last], COLOR_BGR2GRAY);
    double angle, count;

    idx2 = (last + 1) % Constants.fps; // index of (last - (N-1))th frame
    last = idx2;

    Mat silh = buf[idx2];
    if (silh == null || silh.empty()) {
        silh = buf[idx2] = Mat.zeros(size, CvType.CV_8UC1); // 8-bit, matching the grayscale buffer
    }
    absdiff(buf[idx1], buf[idx2], silh); // difference between the two frames

    threshold(silh, silh, diff_threshold, 1, THRESH_BINARY); // binarize it

    updateMotionHistory(silh, mhi, timestamp, Constants.mhiDuration); // update the MHI

    // convert the MHI to an 8-bit mask for display
    mhi.convertTo(mask, mask.type(), 255.0 / Constants.mhiDuration,
            (Constants.mhiDuration - timestamp) * 255.0 / Constants.mhiDuration);
    dst.setTo(new Scalar(0));
    List<Mat> list = new ArrayList<Mat>(3);
    list.add(mask); // show the motion history in the blue channel
    list.add(Mat.zeros(size, CvType.CV_8UC1));
    list.add(Mat.zeros(size, CvType.CV_8UC1));
    merge(list, dst);

    // compute the motion gradient orientation, then split the motion into components
    // (Constants.maxTimeDelta / Constants.minTimeDelta are assumed to exist
    // alongside the other Constants fields)
    calcMotionGradient(mhi, mask, orient,
            Constants.maxTimeDelta, Constants.minTimeDelta, 3);
    MatOfRect roi = new MatOfRect();
    segmentMotion(mhi, segmask, roi, timestamp, Constants.maxTimeDelta);

    Rect[] rois = roi.toArray();
    int total = rois.length;
    Rect comp_rect;
    Scalar color;

    for (int i = -1; i < total; i++) {
        if (i < 0) { // whole frame
            comp_rect = new Rect(0, 0, videoWidth, videoHeight);
            color = new Scalar(255, 255, 255);
            magnitude = 100;
        } else { // i-th motion component
            comp_rect = rois[i];
            if (comp_rect.width >= videoWidth / 2 || comp_rect.height >= videoHeight / 2 ||
                    comp_rect.width < Constants.recfactorx || comp_rect.height < Constants.recfactory ||
                    comp_rect.width + comp_rect.height < (Constants.recfactorx * Constants.recfactory)) // reject very small things
                continue;
            color = new Scalar(0, 0, 255);
            magnitude = 30;
        }

        Mat silhROI = silh.submat(comp_rect);
        Mat mhiROI = mhi.submat(comp_rect);
        Mat orientROI = orient.submat(comp_rect);
        Mat maskROI = mask.submat(comp_rect);

        angle = calcGlobalOrientation(orientROI, maskROI, mhiROI, timestamp, Constants.mhiDuration);
        angle = 360.0 - angle; // adjust for the top-left image origin
        count = Core.norm(silhROI, NORM_L1); // amount of motion inside the ROI

        silhROI.release();
        mhiROI.release();
        orientROI.release();
        maskROI.release();

        // reject components with too little motion, or covering the whole frame
        if (count < comp_rect.height * comp_rect.width * Constants.pixelFactor ||
                comp_rect.width == videoWidth || comp_rect.height == videoHeight) {
            continue;
        } else {
            isAnyTrack = true;
        }
        Point center = new Point(comp_rect.x + comp_rect.width / 2, comp_rect.y + comp_rect.height / 2);
        // Optimizer part! Compare the tracked thing in previous list to control empty area movement detection...
        /*if (isAnyTrack) {
            TrackingPojo trackPojo = new TrackingPojo(comp_rect, center, null);
            if (notNeededTracking(trackPojo)) {
                isAnyTrack = false;
                continue;
            } else {
                // Show the warning icon...
            }
        }*/
        circle(img, center, (int) Math.round(magnitude * 1.2), color, 3, LINE_AA, 0);
        Point thePoint = new Point(
                Math.round(center.x - magnitude * Math.sin(angle * Math.PI / 180)),
                Math.round(center.y + magnitude * Math.cos(angle * Math.PI / 180)));
        Point thePoint2 = new Point(thePoint.x, thePoint.y - (comp_rect.height / 2 + 10));
        Point thePoint3 = new Point(thePoint2.x, thePoint2.y - 15);
        Core.putText(img, "(" + center.x + ", " + center.y + ")", thePoint, 16, 0.50,
                new Scalar(255, 0, 0)); // call closed minimally; the original answer is truncated here
    }
}
```
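For reference, the core of the updateMotionHistory call used above is simple: every silhouette pixel gets stamped with the current timestamp, and pixels whose stamp is older than the history duration are cleared. A plain-Java sketch of that update rule (on bare arrays rather than Mat, purely to show the semantics):

```java
public class MhiSketch {

    // Same update rule as OpenCV's updateMotionHistory:
    // moving pixels get the current timestamp; stale pixels are cleared.
    static void updateMhi(double[][] mhi, int[][] silhouette, double timestamp, double duration) {
        for (int y = 0; y < mhi.length; y++) {
            for (int x = 0; x < mhi[0].length; x++) {
                if (silhouette[y][x] != 0) {
                    mhi[y][x] = timestamp;                 // fresh motion
                } else if (mhi[y][x] < timestamp - duration) {
                    mhi[y][x] = 0;                         // motion too old: forget it
                }
            }
        }
    }

    public static void main(String[] args) {
        double[][] mhi = new double[2][2];
        int[][] moving = {{1, 0}, {0, 0}};
        int[][] still = {{0, 0}, {0, 0}};

        updateMhi(mhi, moving, 5.0, 1.0);  // pixel (0,0) moves at t = 5
        System.out.println(mhi[0][0]);     // 5.0
        updateMhi(mhi, still, 5.5, 1.0);   // within duration: stamp is kept
        System.out.println(mhi[0][0]);     // 5.0
        updateMhi(mhi, still, 7.0, 1.0);   // older than t - duration: cleared
        System.out.println(mhi[0][0]);     // 0.0
    }
}
```

This decaying stamp is exactly why the convertTo call above can turn the MHI into a fading grayscale trail: newer motion maps to brighter values.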

Let me check it and I will give you an answer. Thanks for the reply, happy new year!

( 2015-12-30 07:56:00 -0600 )
