Kalman filter

Hi everyone. I am currently working on a project where I have to track some blobs in a video sequence and determine when these blobs approach each other, or even predict a possible collision between them. For blob detection I am using background subtraction, specifically the built-in Mixture of Gaussians class (i.e. the BackgroundSubtractorMOG2 method). The results are quite nice, so I do not have any problems with that part, for now :-p. Finally, the information about each blob is extracted using the contour functions.

Then I have to track these blobs, taking the centroid of each blob as a reference point; I extract it from the moments of the contours. After searching the web and playing around, I managed to implement a data association algorithm that extracts the trajectory of each blob by using only the centroid point and saving that point, frame by frame, under the corresponding blob id. However, this method is quite weak and struggles to provide robust results in the case of occlusion, or when the distance between the current centroid and the previous one is bigger than the threshold I specified for mapping each new centroid to one of the blobs. When that mapping fails, the vector holding the blob's centroid history is reset, a new blob id is created, and everything starts again from the beginning. For that reason I turned to the Kalman filter, which, from what I am reading, can provide some robustness against this kind of problem and, through its prediction step, maybe also give me some prediction information.

After looking at the theory, and knowing that OpenCV provides a class for the Kalman filter, I looked around for examples that might clear up my doubts. However, there is almost nothing that describes, step by step, how to do it in OpenCV, especially with the new C++ API. The only good examples I found are these:

http://www.morethantechnical.com/2011/06/17/simple-kalman-filter-for-tracking-using-opencv-2-2-w-code/
https://code.ros.org/trac/opencv/browser/trunk/opencv_extra/learning_opencv_v2/ch10_ex10_2.cpp

which more or less describe the basic idea behind the Kalman filter without going into further detail. So, here are my questions:

  1. Do you know of, or have you come across, any example of the Kalman filter with the new C++ API of OpenCV that you could point me to? If it can handle multiple tracked objects, that would be great :-).
  2. How far ahead can the Kalman filter predict? Is it only the next time instance, or can I somehow get a prediction for two or three time instances ahead? Is there a way to customize this?
  3. In the links above, people use the current centroid point as the measurement each time. Somewhere I read (I cannot recall the link now) that I can use more measurement parameters if I want, ones not related to the centroid of the blob but to other blob information such as the area, bounding box points, height, width, etc., and that by doing this I will get more accurate estimation and prediction. Is that true, and if so, how is it explained?
  4. Besides the centroid point, I will probably need the bounding box of each blob. Should I add this information at the initialization of the Kalman filter by using more dynamic parameters, or would it be better to create a separate Kalman filter for each feature I want to estimate/predict?
  5. To be honest, I am quite confused: in the examples above, for instance, why do they not use the extracted velocity as an additional measurement parameter? If the answer is that the velocity is not available at the beginning, so I cannot use something I do not have, then in the case where I do get the velocity from another method, would it be possible to pass it to the Kalman filter as a measurement rather than leaving it as an unknown?

As I said, I know I am quite confused and a newbie on the subject, and I would appreciate it if someone could provide me with some further knowledge.

Thanks in advance.


edit: I wrote a small example where I track just one blob (preferably red), but it seems I cannot keep tracking the blob when I hide it behind an object. Does anyone have any suggestions on my code? The code below is a working example: just find a red object, change the input video source (e.g. to a webcam or another video), and see what I mean.

#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

using namespace std;
using namespace cv;

#define drawCross( img, center, color, d )\
line(img, Point(center.x - d, center.y - d), Point(center.x + d, center.y + d), color, 2, CV_AA, 0);\
line(img, Point(center.x + d, center.y - d), Point(center.x - d, center.y + d), color, 2, CV_AA, 0 )\

int main()
{
  Mat frame, thresh_frame;
  vector<Mat> channels;
  VideoCapture capture;
  vector<Vec4i> hierarchy;
  vector<vector<Point> > contours;

  capture.open("capture.avi");

  if(!capture.isOpened())
    {
      cerr << "Problem opening video source" << endl;
      return -1;
    }

  KalmanFilter KF(4, 2, 0); // state [x, y, vx, vy], measurement [x, y]
  Mat_<float> measurement(2, 1); measurement.setTo(Scalar(0));

  KF.statePre.at<float>(0) = 0;
  KF.statePre.at<float>(1) = 0;
  KF.statePre.at<float>(2) = 0;
  KF.statePre.at<float>(3) = 0;

  // Constant-velocity model: position advances by the velocity each frame
  KF.transitionMatrix = *(Mat_<float>(4, 4) << 1,0,1,0,   0,1,0,1,  0,0,1,0,  0,0,0,1);

  setIdentity(KF.measurementMatrix);
  setIdentity(KF.processNoiseCov, Scalar::all(1e-4));
  setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
  setIdentity(KF.errorCovPost, Scalar::all(.1));

  while((char)waitKey(1) != 'q' && capture.grab())
    {
      capture.retrieve(frame);

      split(frame, channels);
      add(channels[0], channels[1], channels[1]);
      subtract(channels[2], channels[1], channels[2]);
      threshold(channels[2], thresh_frame, 50, 255, CV_THRESH_BINARY);
      medianBlur(thresh_frame, thresh_frame, 5);

      findContours(thresh_frame, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

      Mat drawing = Mat::zeros(thresh_frame.size(), CV_8UC1);
      for(size_t i = 0; i < contours.size(); i++)
        {
//          cout << contourArea(contours[i]) << endl;
          if(contourArea(contours[i]) > 500)
            drawContours(drawing, contours, (int)i, Scalar::all(255), CV_FILLED, 8, vector<Vec4i>(), 0, Point());
        }
      thresh_frame = drawing;

      findContours(thresh_frame, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

      drawing = Mat::zeros(thresh_frame.size(), CV_8UC1);
      for(size_t i = 0; i < contours.size(); i++)
        {
//          cout << contourArea(contours[i]) << endl;
          if(contourArea(contours[i]) > 500)
            drawContours(drawing, contours, (int)i, Scalar::all(255), CV_FILLED, 8, vector<Vec4i>(), 0, Point());
        }
      thresh_frame = drawing;

// Get the moments
      vector<Moments> mu(contours.size() );
      for( size_t i = 0; i < contours.size(); i++ )
        { mu[i] = moments( contours[i], false ); }

//  Get the mass centers:
      vector<Point2f> mc( contours.size() );
      for( size_t i = 0; i < contours.size(); i++ )
        { mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }

      Mat prediction = KF.predict();
      Point predictPt(prediction.at<float>(0),prediction.at<float>(1));

      // Note: with several blobs only the centroid of the last contour
      // ends up in the measurement vector
      for(size_t i = 0; i < mc.size(); i++)
        {
          drawCross(frame, mc[i], Scalar(255, 0, 0), 5);
          measurement(0) = mc[i].x;
          measurement(1) = mc[i].y;
        }

      Point measPt(measurement(0),measurement(1));

      Mat estimated = KF.correct(measurement);
      Point statePt(estimated.at<float>(0),estimated.at<float>(1));

      drawCross(frame, statePt, Scalar(255, 255, 255), 5);

      vector<vector<Point> > contours_poly( contours.size() );
      vector<Rect> boundRect( contours.size() );
      for( size_t i = 0; i < contours.size(); i++ )
       { approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
         boundRect[i] = boundingRect( Mat(contours_poly[i]) );
       }

      for( size_t i = 0; i < contours.size(); i++ )
       {
         rectangle( frame, boundRect[i].tl(), boundRect[i].br(), Scalar(0, 255, 0), 2, 8, 0 );
       }


      imshow("Video", frame);
      imshow("Red", channels[2]);
      imshow("Binary", thresh_frame);
    }
  return 0;
}