
how to improve my code for calcOpticalFlowPyrLK ?

Hi, I'm developing code for a moving camera that captures images and computes optical flow every time the camera moves a certain distance.

The second frame of the current time instant becomes the first frame of the next time instant, and the loop continues (two frames are needed for each flow computation).

Part of the code I am using is shown below:

        Image rawImage;                 // instantiate image
        error = cam.StartCapture();     // start capture; returns error if any
        // Retrieve an image
        error = cam.RetrieveBuffer( &rawImage );    // save image; returns error if any
        // Create a converted image
        Image convertedImage;           // instantiate class for converted image

        error = rawImage.Convert( PIXEL_FORMAT_RGB, &convertedImage );    // convert to RGB
        unsigned int rowBytes = (unsigned int)((double)convertedImage.GetReceivedDataSize() / (double)convertedImage.GetRows());
        Mat fr2 = Mat(convertedImage.GetRows(), convertedImage.GetCols(), CV_8UC3, convertedImage.GetData(), rowBytes);    // frame 2 as an RGB Mat


        cvtColor(fr2, mono_fr2, CV_RGB2GRAY);    // convert to gray
        equalizeHist( mono_fr2, mono_fr2 );      // histogram equalization


        // do optical flow

        Mat stat;   // per-point status (1 = flow found for that point)
        Mat erre;   // per-point tracking error

        calcOpticalFlowPyrLK(mono_fr1, mono_fr2, corn, corn2, stat, erre);


        // draw flow vectors: a circle at the original location, a line to the tracked location
        for (int i = 0; i < corn.size(); i++)
        {
            circle(mono_fr2, corn[i], 3, Scalar(200,200,100), 2, 3, 0);
            line(mono_fr2, Point(corn[i].x, corn[i].y), Point(corn2[i].x, corn2[i].y), Scalar(0,0,0), 1, 8, 0);
        }
        //imshow("flow",fr2); robot.setVel(0); waitKey(); destroyAllWindows();

        // remove non-matching features (keep only points the tracker found)
        vector<Point2f> cornc;
        vector<Point2f> corn2c;
        for (int i = 0; i < corn.size(); i++)
        {
            int fc = (int)stat.at<uchar>(i);
            if (fc == 1)
            {
                cornc.push_back(corn[i]);
                // rotation compensation using predicted flow
                // (note: corn2[i] - (corn[i] + flow[i]) + corn[i] simplifies to corn2[i] - flow[i])
                corn2c.push_back(corn2[i] - (corn[i] + flow[i]) + corn[i]);
            }
        }
        corn = cornc;
        corn2 = corn2c;


        // copy latest frame to previous
        mono_fr1 = mono_fr2.clone();

        goodFeaturesToTrack(mono_fr1, corn, 180, 0.05, 20);    // get features from image


    }

As you can see, I'm using the goodFeaturesToTrack(mono_fr1, corn, 180, 0.05, 20) function, and I have tried different parameters to improve the result, but my tracking algorithm performs poorly, due to either bad corners (weak corners that are hard to track) or corners biased toward one side of the image. I need the corners to be distributed evenly across the left and right parts of the image.
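One common workaround for uneven corner distribution is to detect more corners than needed and then cap how many you keep per cell of a coarse grid (or, equivalently, call goodFeaturesToTrack once per region using its mask argument). Below is a minimal sketch of the bucketing logic in plain C++; the `Pt` struct and the function name are stand-ins for illustration, not OpenCV API:

```cpp
#include <vector>

// Minimal stand-in for cv::Point2f, just for this sketch.
struct Pt { float x, y; };

// Keep at most maxPerCell features in each cell of a gridX x gridY grid,
// so strong corners on one side of the image cannot crowd out the rest.
// Features are assumed sorted by quality (goodFeaturesToTrack returns
// them strongest-first), so the best ones in each cell are kept.
std::vector<Pt> bucketFeatures(const std::vector<Pt>& pts,
                               int imgW, int imgH,
                               int gridX, int gridY, int maxPerCell)
{
    std::vector<int> count(gridX * gridY, 0);
    std::vector<Pt> kept;
    for (const Pt& p : pts) {
        int cx = (int)(p.x * gridX / imgW);   // grid column of this point
        int cy = (int)(p.y * gridY / imgH);   // grid row of this point
        if (cx < 0 || cx >= gridX || cy < 0 || cy >= gridY) continue;
        int cell = cy * gridX + cx;
        if (count[cell] < maxPerCell) {       // cell not yet full: keep point
            ++count[cell];
            kept.push_back(p);
        }
    }
    return kept;
}
```

With this approach you would ask goodFeaturesToTrack for more corners than you need (say 500 instead of 180) and then bucket them down, which guarantees a spread across the image at the cost of keeping some weaker corners.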

[images: frame 15, frame 16, frame 17]

Three frames are shown, with the optical flow vectors drawn as lines originating from circles that mark the original pixel locations. The motion between these frames is the same, yet the optical flow is not similar, and using it in the tracking algorithm yields poor results. Furthermore, some of the points chosen as corners do not actually look like corners.

Any ideas on how to modify my code, or the parameters given to goodFeaturesToTrack, to improve the features?
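Independent of the corner detector, a standard way to drop unreliable LK tracks is a forward-backward check: track corn to corn2 with calcOpticalFlowPyrLK as above, then run a second calcOpticalFlowPyrLK call from mono_fr2 back to mono_fr1 starting from corn2, and keep only points that return close to where they started. The filtering step itself is plain arithmetic; a sketch (the struct and function names are illustrative, not part of OpenCV):

```cpp
#include <cmath>
#include <vector>

// Minimal stand-in for cv::Point2f, just for this sketch.
struct Pt { float x, y; };

// Forward-backward consistency test. orig[i] is the original point in
// frame 1; back[i] is that point tracked frame1 -> frame2 -> frame1.
// A track passes if the round trip returns within maxDist pixels.
std::vector<bool> fbCheck(const std::vector<Pt>& orig,
                          const std::vector<Pt>& back,
                          float maxDist)
{
    std::vector<bool> ok(orig.size());
    for (size_t i = 0; i < orig.size(); ++i) {
        float dx = orig[i].x - back[i].x;
        float dy = orig[i].y - back[i].y;
        ok[i] = std::sqrt(dx * dx + dy * dy) <= maxDist;
    }
    return ok;
}
```

Points that fail the check would be removed alongside the ones already rejected by the stat output, which tends to eliminate the weak corners that drift between frames.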