# Making use of consecutive frames

I am trying to get the coordinate differences of two detected objects in consecutive frames (typical tracking/filtering stuff), and I was wondering how I can make use of the previous frame. Here is what my program flow looks like:

captureIn >> sourceFrame;

// do some image processing
processedFrame = process(sourceFrame);
// give me the coordinates of the objects
detectedObjectCoordinates = getCoordinatesOfDetectedObjects(processedFrame);

// Now I want to have a function here as:
isSame = isSameObject(detectedObjectCoordinatesFirstFrame, detectedObjectCoordinatesSecondFrame);


So that I can know whether the detected object in both frames is the same object. What I don't know is how to process two consecutive frames together.

Any thoughts on this?

Note: Using OpenCV 2.4.11 with C++.
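For reference, a minimal sketch of what such an `isSameObject` could look like, based on a simple centre-distance threshold. The `Point2D` struct and the 20 px threshold are assumptions for illustration (in real OpenCV code this would be `cv::Point2f` and a value tuned to the frame rate and object speed):

```cpp
#include <cmath>

// Stand-in for cv::Point2f so the sketch is self-contained
struct Point2D { double x, y; };

// Two detections are treated as the same object when their centres
// are closer than maxDist pixels (an assumed, tunable threshold)
bool isSameObject(const Point2D& prev, const Point2D& curr, double maxDist = 20.0)
{
    double dx = curr.x - prev.x;
    double dy = curr.y - prev.y;
    return std::sqrt(dx * dx + dy * dy) <= maxDist;
}
```
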

EDIT: OK, I made some improvements to the code. What I am trying to do is eliminate the false positives of a detection algorithm over successive frames. To do that, I hold each detection in an object called "obstacle", which has a "frame consistency" field. If an obstacle reaches a frame consistency of 5 or more, it is a true positive; otherwise it is a false positive. I tried to implement this idea, but somehow I keep getting the same frame consistency values. Here is the code that I am using:

bool firstDetection = true;

while (true)
{
    // On the first iteration, get obstacles by doing image processing;
    // otherwise take them over from the previous frame's obstacle vector
    if (firstDetection)
    {
        captureIn >> sourceFrame;
        if (sourceFrame.empty())
            break;

        // Do some stuff
        process(sourceFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
        // Get the obstacles into the first vector
        detectedObstaclesPreviousFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);
        firstDetection = false;
    }
    else
    {
        // Not the first detection: the current detections become the previous ones
        detectedObstaclesPreviousFrame = detectedObstaclesCurrentFrame;
    }

    // Get the next frame
    captureIn >> nextFrame;
    if (nextFrame.empty())
        break;

    // Do the same processing on this frame as well
    process(nextFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
    // Get the obstacles into the second vector this time
    detectedObstaclesCurrentFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);

    // If there is a detection in both frames (i.e. neither vector is empty)
    if (!detectedObstaclesPreviousFrame.empty() && !detectedObstaclesCurrentFrame.empty())
    {
        // Loop over both detection vectors
        for (size_t a = 0; a < detectedObstaclesPreviousFrame.size(); a++)
        {
            for (size_t b = 0; b < detectedObstaclesCurrentFrame.size(); b++)
            {
                // Check whether the objects are the same; if not, it is a new detection
                // If they are the same obstacle, increase its frame consistency
                if (isSameObstacle(detectedObstaclesPreviousFrame[a], detectedObstaclesCurrentFrame[b]))
                {
                    detectedObstaclesCurrentFrame[b].setFrameConsistency(detectedObstaclesCurrentFrame[b].getFrameConsistency() + 1);
                    cout << "F.C: " << detectedObstaclesCurrentFrame[b].getFrameConsistency() << endl;
                }
            }
        }
    }
}


However the output I get is:

F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1


Which means the frame consistency of a detected object is not saved across frames. I am stuck at this point, any thoughts on it? I tried to ...
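One detail worth noting: since `createBoundingBoxesAroundContours` returns fresh obstacle objects every frame, their counters start from zero each time, so the counter has to be copied forward explicitly when a match is found. A minimal sketch of that step, using a simplified stand-in for the obstacle class (the struct, the matching rule, and the thresholds here are all assumptions, not the asker's real code):

```cpp
#include <vector>
#include <cstdlib>

// Hypothetical stand-in for the obstacle class: only a centre
// coordinate and the frame-consistency counter are modelled
struct Obstacle {
    int cx, cy;
    int frameConsistency;
};

// Placeholder match rule: centres within 20 px on each axis
bool isSameObstacle(const Obstacle& a, const Obstacle& b)
{
    return std::abs(a.cx - b.cx) <= 20 && std::abs(a.cy - b.cy) <= 20;
}

// Fresh detections start at zero, so a matched obstacle inherits
// the previous frame's counter plus one
void carryConsistencyForward(const std::vector<Obstacle>& previous,
                             std::vector<Obstacle>& current)
{
    for (Obstacle& cur : current)
        for (const Obstacle& prev : previous)
            if (isSameObstacle(prev, cur))
                cur.frameConsistency = prev.frameConsistency + 1;
}
```
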


I guess you will need to create a buffer variable that holds the values from the previous frame, which you compare with the current frame each time. After you have extracted your result from the comparison or whatever, you swap the current frame's values into the buffer variable, and the same thing again...

( 2015-05-12 17:10:37 -0500 )

So, I should have two vectors to hold the coordinate points, repeat the processing steps twice to fill those vectors, and then compare?

( 2015-05-15 04:06:32 -0500 )

Yup, you will need two vectors: one for the previous frame and one for the current frame. You will need to skip any comparison for the first frame in order to fill the two vectors. Once you've done that, you can start comparing the values and extracting whatever result you need. Then you swap the values of the two vectors and clear the vector corresponding to the current frame, so it can take the values from the next frame, which will be the current frame in the next retrieval, and so on...

( 2015-05-15 07:06:04 -0500 )
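The two-vector scheme described in the comments above can be sketched like this; `detect()` and `runTracking()` are hypothetical stand-ins for the real image-processing step and capture loop, and the detections are simplified to integers:

```cpp
#include <vector>
#include <utility>

// Stand-in for the real image-processing + bounding-box step:
// returns fresh detections for the given frame
std::vector<int> detect(int frame) { return {frame, frame + 1}; }

// Runs the buffered loop and returns how many frame pairs were compared
int runTracking(int numFrames)
{
    std::vector<int> previousDetections, currentDetections;
    int comparisons = 0;
    for (int frame = 0; frame < numFrames; ++frame) {  // stand-in for captureIn >> sourceFrame
        currentDetections = detect(frame);
        if (!previousDetections.empty())
            ++comparisons;  // compare previousDetections with currentDetections here
        // the current frame becomes the previous one for the next iteration
        previousDetections = std::move(currentDetections);
        currentDetections.clear();
    }
    return comparisons;
}
```

Note that the first frame only fills the buffer, so N frames yield N-1 comparisons.
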

Maybe you can have a look at opencv/samples/lkdemo.cpp

( 2015-05-19 02:39:18 -0500 )

@LBerger

fatal error: opencv2/videoio/videoio.hpp: No such file or directory


Are you sure that the example is suitable for OpenCV 2.4.11? It seems it was designed for the new OpenCV, which is 3.0.x

( 2015-05-19 10:20:45 -0500 )

Sorry, it's for OpenCV 3.0, but inside this sample you can find how to manage previous and new images (and keypoints).

( 2015-05-19 10:25:21 -0500 )