karmaDet's profile - activity

2015-05-19 10:20:45 -0600 commented question Making use of consecutive frames

@LBerger

fatal error: opencv2/videoio/videoio.hpp: No such file or directory

Are you sure that the example is suitable for OpenCV 2.4.11? It seems it was written for the newer OpenCV, 3.0.x.
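
For reference, on 2.4.x the capture code builds with the highgui header instead; a minimal sketch (the file name is illustrative):

    // OpenCV 2.4.x: cv::VideoCapture is declared in highgui; the separate
    // videoio module (opencv2/videoio/videoio.hpp) only exists in 3.0+.
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::VideoCapture capture("input.avi"); // illustrative file name
        if (!capture.isOpened())
            return -1;

        cv::Mat frame;
        while (capture.read(frame))
        {
            // per-frame processing goes here
        }
        return 0;
    }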

2015-05-19 02:11:23 -0600 received badge  Enthusiast
2015-05-15 04:36:31 -0600 answered a question Why Kalman Filter keeps returning the same prediction?

I have the same problem; my output is:

#: 1
Detected at: [247, 264]
Corrected: [247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]

I did what was suggested, but it still doesn't work.

2015-05-15 04:30:12 -0600 commented answer Why Kalman Filter keeps returning the same prediction?

@Siegfried I am facing exactly the same problem; however, I could not quite understand the first answer you gave. Where should we place the correction step? I mean, the person is detected in the loop above; if I place the correction there, then the loop below that runs for 10 frames (as seen in the code above) will only be doing prediction?

#: 1
Detected at: [247, 264]
Corrected: [247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
Predicted Point:[247, 264]
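
For what it is worth, the prediction can only change over time if correct() feeds in new measurements or the transition matrix models motion. Below is a minimal self-contained sketch of the usual per-frame cycle (predict, then correct with that frame's detection); the drifting measurement values are fabricated for illustration:

    #include <opencv2/core/core.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <iostream>

    int main()
    {
        // Constant-velocity model: state = (x, y, vx, vy), measurement = (x, y).
        cv::KalmanFilter KF(4, 2, 0);
        KF.transitionMatrix = *(cv::Mat_<float>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1);
        cv::setIdentity(KF.measurementMatrix);
        cv::setIdentity(KF.processNoiseCov, cv::Scalar::all(1e-4));
        cv::setIdentity(KF.measurementNoiseCov, cv::Scalar::all(1e-1));
        cv::setIdentity(KF.errorCovPost, cv::Scalar::all(0.1));

        // Seed the posterior state with the first detection.
        KF.statePost.at<float>(0) = 247.0f;
        KF.statePost.at<float>(1) = 264.0f;

        cv::Mat_<float> measurement(2, 1);
        for (int frame = 1; frame <= 10; frame++)
        {
            // Predict first; if no correct() ever runs, the filter has
            // nothing new to propagate and keeps returning the same point.
            cv::Mat prediction = KF.predict();
            std::cout << "Predicted Point:[" << prediction.at<float>(0)
                      << ", " << prediction.at<float>(1) << "]" << std::endl;

            // Correct with this frame's (fabricated, drifting) detection.
            measurement(0) = 247.0f + frame;
            measurement(1) = 264.0f;
            KF.correct(measurement);
        }
        return 0;
    }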
2015-05-15 04:06:32 -0600 commented question Making use of consecutive frames

So I should have two vectors to hold the coordinate points, repeat the processing steps twice to fill those vectors, and then compare?

2015-05-15 03:57:30 -0600 asked a question Kalman Filter always predicts 0

I am trying to track a detected object, and the code I am using is as follows:

    // if there is a detection
    if (detectedObjectRectanglesPreviousFrame.size() != 0)
    {
        cout << "#: " << detectedObjectRectanglesPreviousFrame.size() << endl;

        //-----------------------------------------------------------
        KalmanFilter KF(4, 2, 0);
        Mat_<float> state(4, 1); /* (x, y, Vx, Vy) */
        Mat processNoise(4, 1, CV_32F);
        Mat_<float> measurement(2, 1);
        measurement.setTo(Scalar(0));
        //-----------------------------------------------------------

        for (int i = 0; i < detectedObjectRectanglesPreviousFrame.size(); i++)
        {
            // Get the center coordinates of the detected object
            int trackedX = detectedObjectRectanglesPreviousFrame[i].x + detectedObjectRectanglesPreviousFrame[i].width / 2;
            int trackedY = detectedObjectRectanglesPreviousFrame[i].y + detectedObjectRectanglesPreviousFrame[i].height / 2;
            cout << "P" << Point(trackedX, trackedY) << endl;

            // if this is the first detection, initialize kalman filter
            if (firstDetection == true)
            {
                //---------------------------------------------------------------------------------------------
                KF.statePre.at<float>(0) = trackedX;
                KF.statePre.at<float>(1) = trackedY;
                KF.statePre.at<float>(2) = 0;
                KF.statePre.at<float>(3) = 0;
                KF.transitionMatrix = *(Mat_<float>(4, 4) << 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1);
                //KF.transitionMatrix = *(Mat_<float>(4, 4) << 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1);

                setIdentity(KF.transitionMatrix);
                setIdentity(KF.measurementMatrix);
                setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
                setIdentity(KF.measurementNoiseCov, Scalar::all(1e-4));
                setIdentity(KF.errorCovPost, Scalar::all(.1));

                points_vector.clear();
                kalman_vector.clear();
                //---------------------------------------------------------------------------------------------

                cout << "First time-> Initialize: " << trackedX << "," << trackedY << endl;
                firstDetection = false;
            }
            // if not, start tracking
            else
            {
                //---------------------------------------------------------------------------------------------
                Mat prediction = KF.predict();
                Point predictPt(prediction.at<float>(0), prediction.at<float>(1));

                KF.statePre.copyTo(KF.statePost);
                KF.errorCovPre.copyTo(KF.errorCovPost);

                measurement(0) = trackedX;
                measurement(1) = trackedY;

                Point measPt(measurement(0), measurement(1));
                points_vector.push_back(measPt);

                Mat estimated = KF.correct(measurement);
                Point statePt(estimated.at<float>(0), estimated.at<float>(1));
                kalman_vector.push_back(statePt);

                for (int i = 0; i < kalman_vector.size() - 1; i++)
                {
                    line(sourceFrame, kalman_vector[i], kalman_vector[i + 1], Scalar(0, 255, 0), 1);
                }

                for (int j = 0; j < kalman_vector.size(); j++)
                {
                    cout << "Kalman Vector Element: " << kalman_vector[j] << endl;
                }
                //---------------------------------------------------------------------------------------------
            }
        }
    }

And unfortunately, the output I am getting is:

#: 1
P[261, 259]
First time-> Initialize: 261,259
#: 1
P[260, 259]
Kalman Vector Element: [0, 0]
#: 1
P[258, 260]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
#: 1
P[254, 263]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
#: 1
P[255, 263]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
#: 1
P[254, 262]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
#: 1
P[252, 262]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
#: 1
P[252, 263]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]
Kalman Vector Element: [0, 0]

As you can ... (more)
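
Assuming this block sits inside a per-frame loop, one likely cause is that KalmanFilter KF(4, 2, 0) is constructed inside the block, so every frame after the first starts from a fresh, zero-initialized filter and the setup done under firstDetection is lost; copying statePre over statePost each iteration also discards what correct() computed. A minimal self-contained sketch of the restructured loop, with fabricated detections standing in for the real ones:

    #include <opencv2/core/core.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // The filter must outlive the frame loop; re-constructing it
        // every frame throws away its state.
        cv::KalmanFilter KF(4, 2, 0);
        cv::Mat_<float> measurement(2, 1);
        bool firstDetection = true;

        // Stand-in for the per-frame detected centers (illustrative data).
        std::vector<cv::Point> detections;
        detections.push_back(cv::Point(261, 259));
        detections.push_back(cv::Point(260, 259));
        detections.push_back(cv::Point(258, 260));
        detections.push_back(cv::Point(254, 263));

        for (size_t f = 0; f < detections.size(); f++)
        {
            int trackedX = detections[f].x;
            int trackedY = detections[f].y;

            if (firstDetection)
            {
                cv::setIdentity(KF.transitionMatrix);
                cv::setIdentity(KF.measurementMatrix);
                cv::setIdentity(KF.processNoiseCov, cv::Scalar::all(1e-4));
                cv::setIdentity(KF.measurementNoiseCov, cv::Scalar::all(1e-4));
                cv::setIdentity(KF.errorCovPost, cv::Scalar::all(0.1));

                // Initialize statePost (not statePre): predict() propagates
                // the posterior state from the previous step.
                KF.statePost.at<float>(0) = (float)trackedX;
                KF.statePost.at<float>(1) = (float)trackedY;
                firstDetection = false;
            }
            else
            {
                KF.predict(); // no statePre -> statePost copy afterwards
                measurement(0) = (float)trackedX;
                measurement(1) = (float)trackedY;
                cv::Mat estimated = KF.correct(measurement);
                std::cout << "Estimated: [" << estimated.at<float>(0) << ", "
                          << estimated.at<float>(1) << "]" << std::endl;
            }
        }
        return 0;
    }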

2015-05-12 08:58:35 -0600 asked a question Making use of consecutive frames

I am trying to get the coordinate differences of two detected objects in consecutive frames (typical tracking-filtering stuff), and I was wondering how I can make use of the previous frame. Here is what my program flow looks like:

captureIn >> sourceFrame;

// do some image processing
processedFrame = process(sourceFrame);
// give me the coordinates of the objects
detectedObjectCoordinates = getCoordinatesOfDetectedObjects(processedFrame);

// Now I want to have a function here as:
isSame = isSameObject(detectedObjectCoordinatesFirstFrame, detectedObjectCoordinatesSecondFrame);

So that I can know whether the detected object is the same object. What I am wondering is how to process consecutive frames.

Any thoughts on this?

Note: Using OpenCV 2.4.11 with C++.
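
One common way to implement such an isSameObject check is to compare the two bounding boxes directly: treat the detections as the same object if the boxes overlap or if their centers are close. A minimal sketch under that assumption (the 20-pixel threshold is arbitrary):

    #include <opencv2/core/core.hpp>
    #include <cmath>

    // Heuristic: two detections are "the same object" if their bounding
    // boxes intersect, or if their centers lie within maxDist pixels.
    bool isSameObject(const cv::Rect& prev, const cv::Rect& curr,
                      double maxDist = 20.0)
    {
        if ((prev & curr).area() > 0) // cv::Rect intersection
            return true;

        double dx = (prev.x + prev.width / 2.0) - (curr.x + curr.width / 2.0);
        double dy = (prev.y + prev.height / 2.0) - (curr.y + curr.height / 2.0);
        return std::sqrt(dx * dx + dy * dy) <= maxDist;
    }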

EDIT: OK, I made some improvements to the code. What I am trying to do is eliminate the false positives of a detection algorithm over the frames. To do that, I hold each detection in an object called "obstacle" with a field "frame consistency". If an obstacle has a frame consistency of 5 or more, it is a true positive; otherwise it is a false positive. I tried to implement this idea, but somehow I keep getting the same frame consistency. Here is the code that I am using:

bool firstDetection = true;

while (true)
{
    // if it is the first launch, get obstacles by doing image processing, otherwise get it from the previous frame's obstacle vector
    if (firstDetection == true)
    {
        captureIn >> sourceFrame;
        if (sourceFrame.empty())
            break;

        // Do some stuff
        process(sourceFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
        // Get the obstacles into the first vector
        detectedObstaclesPreviousFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);
        firstDetection = false;
    }
    else
    {
        // If it is not the first detection, then put the current one into previous one
        detectedObstaclesPreviousFrame = detectedObstaclesCurrentFrame;
    }

    // Get the second frame
    captureIn >> nextFrame;
    if (nextFrame.empty())
        break;


    // Do the same stuff on this frame as well
    process(nextFrame, clearedFrame, tools, horizonROIFrame, horizonDetector, preProcessedFrame, absoluteGradientY, markedFrame, searchFrame, processFrame);
    // Get the obstacles into the second vector this time
    detectedObstaclesCurrentFrame = tools->createBoundingBoxesAroundContours(processFrame, clearedFrame);

    // If there is a detection for both frames ( so that their vectors aren't empty )
    if (detectedObstaclesPreviousFrame.size() != 0)
    {
        if (detectedObstaclesCurrentFrame.size() != 0)
        {
            // Loop over both detection arrays
            for (int a = 0; a < detectedObstaclesPreviousFrame.size(); a++)
            {
                for (int b = 0; b < detectedObstaclesCurrentFrame.size(); b++)
                {
                    // Check if the objects are the same, if not, it means we have another detection here
                    // If they are the same obstacle, increase one's frameconsistency
                    if (isSameObstacle(detectedObstaclesPreviousFrame[a], detectedObstaclesCurrentFrame[b]) == true)
                    {
                        detectedObstaclesCurrentFrame[b].setFrameConsistency(detectedObstaclesCurrentFrame[b].getFrameConsistency() + 1);
                        cout << "F.C: " << detectedObstaclesCurrentFrame[b].getFrameConsistency() << endl;
                    }
                }
            }
        }
    }
}

However, the output I get is:

F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1
F.C: 2
F.C: 2
F.C: 1
F.C: 1

Which means it does not even save the frame consistency of the detected object. I am stuck at this point; any thoughts on it? I tried to ... (more)
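
One explanation for the repeating values: detectedObstaclesCurrentFrame is rebuilt from scratch by createBoundingBoxesAroundContours every frame, so each current obstacle's counter starts at its default, and incrementing the obstacle's own counter can never accumulate across frames. Below is a sketch of carrying the count forward from the matched previous obstacle instead; the Obstacle struct is a hypothetical stand-in for the poster's class:

    // Minimal stand-in for the poster's Obstacle class (illustrative only).
    struct Obstacle
    {
        int frameConsistency;
        Obstacle() : frameConsistency(0) {}
        int getFrameConsistency() const { return frameConsistency; }
        void setFrameConsistency(int v) { frameConsistency = v; }
    };

    // On a match, continue the previous frame's count. Adding 1 to the
    // freshly constructed current obstacle's own counter, as in the code
    // above, can never produce more than the default value plus one.
    void propagateConsistency(const Obstacle& prev, Obstacle& curr)
    {
        curr.setFrameConsistency(prev.getFrameConsistency() + 1);
    }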

2015-05-11 09:37:51 -0600 commented question Camshift Algorithm and Grayscaled Images

There are some problems regarding tracking in my case.

My camera is not static/stable; it changes its location constantly (it is mounted on a moving vehicle). Would Camshift be robust to that?

2015-05-11 03:37:12 -0600 commented question Camshift Algorithm and Grayscaled Images

Sorry, I guess I phrased that wrongly. I am running my detection algorithm on grayscale input, since my camera is an infrared camera. However, when I am done with the detection, I create a red bounding box around the detected object, and for this I use an RGB image. In the end, you can safely assume that my image has 3 channels: the detected objects are surrounded by colored rectangles, and everything else is grayscale.

2015-05-11 03:30:48 -0600 commented question Camshift Algorithm and Grayscaled Images

I have my object surrounded by a red bounding box, and this red color is quite unique in my images, since all the other colors are non-red. Given this, do you think Camshift will be able to track the bounding box?

Edit: I want to do more or less the same as this: http://blog.christianperone.com/?p=2768

What is different is that my image is grayscale. I already have my object detection algorithm running; all I want to do now is track the detected object.

2015-05-11 02:53:05 -0600 asked a question Camshift Algorithm and Grayscaled Images

I was wondering whether we could use Camshift with non-HSV images, as every solution on the internet simply converts the image into HSV space, singles out the H channel, and goes from there. However, I am supposed to work with grayscale images, so I cannot go for HSV space.

Is there a way to use the Camshift algorithm in OpenCV without being bothered by HSV space at all? Or does the algorithm strictly require HSV conversion? I looked into the OpenCV example of Camshift, and HSV is used there as well.

What do you think?
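
As far as the OpenCV API goes, cv::CamShift only takes a back-projection (probability) image and a search window; HSV is just the usual way of producing that back projection. Here is a sketch of driving it from a plain intensity histogram instead (the function and its parameters are mine, and in practice the histogram would be computed once at initialization rather than every frame):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>

    // Track a window in a single-channel (grayscale) frame by histogramming
    // the intensities inside it and backprojecting over the whole image.
    cv::RotatedRect trackGray(const cv::Mat& gray, cv::Rect& window)
    {
        int histSize = 32;
        float range[] = { 0, 256 };
        const float* ranges[] = { range };
        int channels[] = { 0 };

        cv::Mat roi(gray, window), hist;
        cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
        cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

        cv::Mat backproj;
        cv::calcBackProject(&gray, 1, channels, hist, backproj, ranges);

        return cv::CamShift(backproj, window,
            cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));
    }

Whether an intensity histogram separates the object from its background is a different question: if the gray-level contrast is weak, the back projection, and hence the tracking, will be weak as well.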

2015-05-07 02:26:15 -0600 received badge  Editor (source)
2015-05-06 10:27:13 -0600 commented question undefined reference to cv::groupRectangles

@Guanta, do you have the function running? Without the function, everything is fine here too; just the function itself is causing the problem.

Anyway I tried LIBS as well, it still gives the same error.

2015-05-06 08:55:51 -0600 asked a question undefined reference to cv::groupRectangles

Hey all,

My code was working totally fine until I included the function

groupRectangles(boundRect, 1, 0.2);

and the header required for that, which is

#include <opencv2/objdetect/objdetect.hpp>

After that I got the following error:

lib/libfilters.so: undefined reference to `cv::groupRectangles(std::vector<cv::Rect_<int>, std::allocator<cv::Rect_<int> > >&, int, double)'

I have no idea why this error comes up all of a sudden. With other functions it was all fine; why is this one singled out?

Here is some information regarding the platform:

  • OpenCV 2.4.1
  • Ubuntu 12.04
  • CMake 2.8
  • Eclipse CDT

And here is my CMake file:

cmake_minimum_required(VERSION 2.8)

project(vessel-detector)

set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

find_package(OpenCV REQUIRED)

include_directories( 
  ${OpenCV_INCLUDE_DIRS} 
  include/
)

link_directories( 
  ${OpenCV_LINK_DIRS}
)

add_library(filters SHARED
  src/horizon-detector.cpp
  src/generic-tools.cpp
)

add_executable(vessel-detector
  src/vessel-detector.cpp
)

target_link_libraries(vessel-detector
  ${OpenCV_LIBRARIES}
  filters
)

What do you think is the problem here?

EDIT: I get exactly the same error when I use the following:

KalmanFilter kf(stateSize, measSize, contrSize, type);

which results in:

lib/libfilters.so: undefined reference to `cv::KalmanFilter::KalmanFilter(int, int, int, int)'

It is extremely frustrating. Some libraries work, some do not. How inconsistent can it get? I suspect this is a CMake problem, and I also suspect it is a simple thing like the order of declarations in CMake. Sigh.