Making use of consecutive frames

I am trying to get the coordinate differences of two detected objects in consecutive frames (typical tracking/filtering work), and I was wondering how I can make use of the previous frame. Here is what my program flow looks like:

captureIn >> sourceFrame;

// do some image processing
processedFrame = process(sourceFrame);
// give me the coordinates of the objects
detectedObjectCoordinates = getCoordinatesOfDetectedObjects(processedFrame);

// Now I want to have a function here as:
isSame = isSameObject(detectedObjectCoordinatesFirstFrame, detectedObjectCoordinatesSecondFrame);

That way I can tell whether the detection in the second frame is the same object as in the first. What I am unsure about is how to process two consecutive frames together.

Any thoughts on this?

Note: Using OpenCV 2.8.11 with C++.
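To make the idea concrete, here is a minimal sketch of the two-buffer pattern I have in mind (all names and the distance threshold are made up for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Detection { double x, y; };

// Hypothetical matcher: two detections count as the same object when
// their centres are closer than some (made-up) pixel threshold
bool isSameObject(const Detection& a, const Detection& b, double thresh = 10.0)
{
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy) < thresh;
}

// The loop would keep the previous frame's detections alive by swapping
// vectors instead of recomputing them:
//
// std::vector<Detection> prev, curr;
// while (captureIn.read(frame)) {
//     prev.swap(curr);
//     curr = getCoordinatesOfDetectedObjects(process(frame));
//     for (const Detection& p : prev)
//         for (const Detection& c : curr)
//             if (isSameObject(p, c)) { /* same object across frames */ }
// }
```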

EDIT: OK, I made some improvements to the code. What I am trying to do is eliminate the false positives of a detection algorithm across frames. To do that, I hold each detection in an object called "obstacle" with a field called "frame consistency". If an obstacle has a frame consistency of 5 or more, it is a true positive; otherwise it is a false positive. I tried to implement this idea, but somehow I keep getting the same frame consistency. Here is the code I am using:

bool firstDetection = true;

while (true)
{
    // On the first pass, get the obstacles by doing the image processing;
    // afterwards, reuse the previous frame's obstacle vector
    if (firstDetection)
    {
        captureIn >> sourceFrame;
        if (sourceFrame.empty())
            break;

        // Do some stuff
        process(sourceFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
        // Get the obstacles into the first vector
        detectedObstaclesPreviousFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);
        firstDetection = false;
    }
    else
    {
        // If it is not the first detection, move the current detections into the previous slot
        detectedObstaclesPreviousFrame = detectedObstaclesCurrentFrame;
    }

    // Get the next frame
    captureIn >> nextFrame;
    if (nextFrame.empty())
        break;

    // Do the same processing on this frame as well
    process(nextFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
    // Get the obstacles into the second vector this time
    detectedObstaclesCurrentFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);

    // If there is a detection in both frames (so that neither vector is empty)
    if (!detectedObstaclesPreviousFrame.empty() && !detectedObstaclesCurrentFrame.empty())
    {
        // Loop over both detection vectors
        for (size_t a = 0; a < detectedObstaclesPreviousFrame.size(); a++)
        {
            for (size_t b = 0; b < detectedObstaclesCurrentFrame.size(); b++)
            {
                // Check whether the obstacles are the same; if not, we have a new detection.
                // If they are the same obstacle, increase its frame consistency
                if (isSameObstacle(detectedObstaclesPreviousFrame[a], detectedObstaclesCurrentFrame[b]))
                {
                    detectedObstaclesCurrentFrame[b].setFrameConsistency(detectedObstaclesCurrentFrame[b].getFrameConsistency() + 1);
                    cout << "F.C: " << detectedObstaclesCurrentFrame[b].getFrameConsistency() << endl;
                }
            }
        }
    }
}

However, the output I get is:

F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1

This means the frame consistency of a detected object is not preserved across frames. I am stuck at this point; any thoughts? I tried holding the value in a global variable, but that only works if you detect and track a single object. If 5 objects come into the screen, that approach collapses. All I am trying to do is eliminate the noisy detections that appear for a couple of seconds and then disappear, but it seems surprisingly hard to do this simple thing.
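Stripped of all the image processing, the bookkeeping I am after looks something like this (the `Obstacle` type and the `id` field are hypothetical; producing a stable id per object across frames is exactly the part I cannot get right):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Made-up type for illustration; assigning a stable id to each obstacle
// is the matching step that is still missing
struct Obstacle { double x, y; int id; };

// The counters live outside the per-frame vectors, so they survive
// the detection loop instead of being rebuilt every frame
std::map<int, int> frameConsistency; // obstacle id -> frames survived

void updateConsistency(const std::vector<Obstacle>& currentFrame)
{
    for (const Obstacle& o : currentFrame)
        ++frameConsistency[o.id]; // value-initialised to 0 on first sighting
}

bool isTruePositive(int id)
{
    return frameConsistency[id] >= 5; // the threshold from the description above
}
```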

Any help is appreciated.