No idea if it will work in your case, but you could try to separate foreground / background motion like in the paper mentioned above:
// 3.1 Step 3: Camera/background motion estimation
// pointsPrev, pointsCur are vector<Point2f> from LK
vector<Point2f> foreground;
Mat H = findHomography(pointsPrev, pointsCur, RANSAC);
cerr << H << endl;
// now backproject the previous points, and see how far off they are:
for (size_t i = 0; i < pointsCur.size(); i++)
{
    Point2f p0 = pointsPrev[i];
    Point2f p1 = pointsCur[i];
    // homogeneous point for the multiplication:
    Mat_<double> col(3, 1);
    col << p0.x, p0.y, 1;
    col = H * col;
    col /= col(2); // divide by W
    double dist = sqrt(pow(col(0) - p1.x, 2) +
                       pow(col(1) - p1.y, 2));
    // small distance == inlier  == camera motion
    // large distance == outlier == object motion
    if (dist >= 1.5) // some heuristic threshold value
    {
        foreground.push_back(p1);
    }
}
cerr << "fg " << pointsCur.size() << " " << foreground.size() << endl;
If the camera is moving, then it is impossible to know whether movement in the image is due to camera motion or to object motion. Optical flow does not work for your scenario.
@Pedro Batista, I would not say that this is entirely hopeless. E.g. one could try to find the homography between previous and current points and do a backprojection with the homography matrix. Then the "inliers" are most likely due to the camera motion, while the "outliers" are caused by object motion.
http://www.cv-foundation.org/openacce... , 3.1, step 3
I guess it may achieve some results if the camera movement is stable and uniform.