Ask Your Question

Amr's profile - activity

2016-03-06 09:48:06 -0600 answered a question Detecting circles/ellipse/whatever in an image

I have not done exactly this before, but here is how I would approach it: capture the image, then threshold it to prepare for contour extraction. Once that is done, find the length of each contour. Depending on your preprocessing the contours may already be closed; otherwise you will need to merge contours that are close enough to each other. Next, find the centroid of each contour and check the result: if it sits at the centre of the shape, you can estimate the radius, and since the contour length approximates the circumference, divide the contour length by 2πr. For a circle the ratio should be close to 1.
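A rough sketch of that circularity test in plain C++ (no OpenCV; in a real pipeline the contour points would come from cv::findContours and the length from cv::arcLength, and the helper names here are illustrative):

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

struct Pt { double x, y; };

// Perimeter of a closed polygonal contour.
double contourLength(const std::vector<Pt>& c) {
    double len = 0.0;
    for (size_t i = 0; i < c.size(); ++i) {
        const Pt& a = c[i];
        const Pt& b = c[(i + 1) % c.size()];
        len += std::hypot(b.x - a.x, b.y - a.y);
    }
    return len;
}

// Centroid of the contour points.
Pt centroid(const std::vector<Pt>& c) {
    Pt m{0, 0};
    for (const Pt& p : c) { m.x += p.x; m.y += p.y; }
    m.x /= c.size(); m.y /= c.size();
    return m;
}

// Mean distance from the centroid estimates the radius r; the ratio
// length / (2*pi*r) is close to 1 only when the contour is a circle.
double circularity(const std::vector<Pt>& c) {
    Pt m = centroid(c);
    double r = 0.0;
    for (const Pt& p : c) r += std::hypot(p.x - m.x, p.y - m.y);
    r /= c.size();
    return contourLength(c) / (2.0 * kPi * r);
}

// Helper to sample an n-point circle contour for trying it out.
std::vector<Pt> makeCircle(double cx, double cy, double rad, int n) {
    std::vector<Pt> c;
    for (int i = 0; i < n; ++i) {
        double t = 2.0 * kPi * i / n;
        c.push_back({cx + rad * std::cos(t), cy + rad * std::sin(t)});
    }
    return c;
}
```

A square contour scores around 0.9 under this measure, so a threshold such as 0.95 separates the two.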

2016-03-06 09:07:45 -0600 commented question how to improve my code for calcOpticalFlowPyrLK ?

@berak thanks for the reply. I tried the Farnebäck method, but it took too long, so I can't employ it in real time. As far as I know, optical flow with the LK method needs features such as corners, so how do you suggest I add the features? Should I try random points, or use other corner-detection functions? I am only aware of the Harris corner detector in OpenCV, but it could not detect many corners.

2016-03-06 08:45:25 -0600 commented question how to improve my code for calcOpticalFlowPyrLK ?

@berak it is not important, as an internal sensor can provide it. I did try this, but found it too noisy to get results comparable to odometry. What I am trying to do is compute depth from motion: I have a function that compensates for camera rotation and cancels the flow due to rotation; the FOE is then computed, and depth can be found from it. The vectors need to be evenly distributed to get depth over the whole scene. Do you have any suggestions to improve the computation, or any preprocessing to apply to the image to enhance it, for example?

2016-03-06 08:30:57 -0600 asked a question how to improve my code for calcOpticalFlowPyrLK ?

Hi, I'm developing code for a moving camera that captures images and computes optical flow every time the camera moves a certain distance.

The second frame of the current time instant becomes the first frame of the next time instant, and the loop continues (two frames are needed for the flow computation).

Part of the code I am using is shown below:

    Image rawImage;                               // instantiate image
    error = cam.StartCapture();                   // start capture; returns an error if any
    error = cam.RetrieveBuffer(&rawImage);        // retrieve an image
    Image convertedImage;                         // holder for the converted image
    error = rawImage.Convert(PIXEL_FORMAT_RGB, &convertedImage);  // convert to RGB

    unsigned int rowBytes =
        (double)convertedImage.GetReceivedDataSize() / (double)convertedImage.GetRows();
    Mat fr2 = Mat(convertedImage.GetRows(), convertedImage.GetCols(), CV_8UC3,
                  convertedImage.GetData(), rowBytes);  // frame 2 as an RGB Mat

    cvtColor(fr2, mono_fr2, CV_RGB2GRAY);         // convert to grayscale
    equalizeHist(mono_fr2, mono_fr2);             // histogram equalisation

    // compute optical flow from the previous frame to the current one
    Mat stat;   // per-point tracking status (1 = found)
    Mat erre;   // per-point tracking error
    calcOpticalFlowPyrLK(mono_fr1, mono_fr2, corn, corn2, stat, erre);

    // draw the flow vectors
    for (int i = 0; i < corn.size(); i++)
    {
        circle(mono_fr2, corn[i], 3, Scalar(200, 200, 100), 2, 3, 0);
        line(mono_fr2, Point(corn[i].x, corn[i].y),
             Point(corn2[i].x, corn2[i].y), Scalar(0, 0, 0), 1, 8, 0);
    }

    // remove features that failed to track
    vector<Point2f> cornc;
    vector<Point2f> corn2c;
    for (int i = 0; i < corn.size(); i++)
    {
        int fc = (int)stat.at<uchar>(i);
        if (fc == 1)
        {
            cornc.push_back(corn[i]);
            // rotation compensation using the predicted flow
            // (flow[] is computed elsewhere in the program)
            corn2c.push_back(corn2[i] - (corn[i] + flow[i]) + corn[i]);
        }
    }
    corn = cornc;
    corn2 = corn2c;

    // the latest frame becomes the previous frame for the next iteration
    mono_fr1 = mono_fr2.clone();

    goodFeaturesToTrack(mono_fr1, corn, 180, 0.05, 20);  // get features from the image
    }

As you can see, I am using the goodFeaturesToTrack(mono_fr1, corn, 180, 0.05, 20); function, and I have tried different parameters to improve the result, but my tracking algorithm performs poorly, either because of bad corners (weak corners that are hard to track) or because the corners are biased towards one side of the image. I need corners that are distributed evenly between the left and right parts of the image.

(images: frame 15, frame 16, frame 17)

Three frames are shown with optical flow vectors drawn as lines originating from circles that mark the original pixel locations. The motion between these frames is the same, yet the optical flow is not similar, and using it in the tracking algorithm yields poor results. Furthermore, some of the points chosen as corners do not actually look like corners.

Any ideas how to modify my code, or the parameters given to goodFeaturesToTrack, to improve the features?
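One common remedy for the uneven-distribution part of this question (a suggestion, not something from the original post) is to bucket the detections: divide the image into a grid and keep only the strongest corners in each cell. A minimal plain-C++ sketch of the filtering idea (the Corner struct and its quality field are illustrative stand-ins for goodFeaturesToTrack output):

```cpp
#include <algorithm>
#include <vector>

struct Corner { float x, y, quality; };

// Split a width x height image into gridX x gridY cells and keep at
// most `perCell` strongest corners in each cell, so detections are
// spread across the frame instead of clustering on one side.
std::vector<Corner> bucketCorners(std::vector<Corner> corners,
                                  int width, int height,
                                  int gridX, int gridY, int perCell) {
    std::vector<std::vector<Corner>> cells(gridX * gridY);
    for (const Corner& c : corners) {
        int gx = std::min(gridX - 1, (int)(c.x * gridX / width));
        int gy = std::min(gridY - 1, (int)(c.y * gridY / height));
        cells[gy * gridX + gx].push_back(c);
    }
    std::vector<Corner> kept;
    for (auto& cell : cells) {
        // strongest first, then truncate to the per-cell budget
        std::sort(cell.begin(), cell.end(),
                  [](const Corner& a, const Corner& b) {
                      return a.quality > b.quality;
                  });
        if ((int)cell.size() > perCell) cell.resize(perCell);
        kept.insert(kept.end(), cell.begin(), cell.end());
    }
    return kept;
}
```

With OpenCV itself, an alternative with the same effect is to call goodFeaturesToTrack separately on each cell ROI with a small maxCorners, which forces detections into weakly textured regions too.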

2015-05-20 06:05:47 -0600 commented answer advice for a hand tracking algorithm

Yes, sure. A particle filter has advantages over a Kalman filter: it can be used for multi-hypothesis tracking and can model non-Gaussian distributions, but you still need a motion model. You could first learn the motion of the object, using a classifier for example, and then pass that model to the filter for tracking, under the assumption that the object's motion does not vary rapidly. A waving human hand is fairly predictable motion, for example, and people walking around can also be described by a learned motion model.

2015-05-20 05:59:47 -0600 commented question goodFeaturesToTrack crashing or producing one dimensional vector

@berak Thanks a lot. Yes, I managed to get VS2012 and it worked with OpenCV version 2.4.9; so far it is fine.

2015-05-10 12:31:29 -0600 commented question goodFeaturesToTrack crashing or producing one dimensional vector

@berak

Thanks for the help. I followed a tutorial on using CMake and tried it myself with OpenCV 2.4.9, but when I build the VS solution created by CMake it produces errors. I tried another version, 2.3.1, which had vc9 binaries, but then my exe crashes as soon as I declare a Mat instance: it reports an msvcr90d.dll error in fopen.c, an access violation. What could be the cause?

2015-05-10 12:24:41 -0600 commented answer advice for a hand tracking algorithm

That is right, but the Kalman filter has a prediction phase: you can rely on the previous motion (history) of the hand and use the model to predict its position until the hand is found again, though you need data association to reject wrong features. I tried this for mobile robot localisation: when features are not present, the Kalman filter relies on the robot's motion model until a feature is spotted again. I have not done image-based tracking, sorry, but I suppose there should be something similar.
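The predict/update split described here can be sketched for a 1-D constant-velocity track (plain C++; the noise values are illustrative): calling predict() alone coasts on the motion model while the hand is lost, and update() corrects the state once a measurement arrives again.

```cpp
struct Kalman1D {
    // State: position and velocity; p00..p11 form the 2x2 covariance P.
    double pos = 0, vel = 0;
    double p00 = 1, p01 = 0, p10 = 0, p11 = 1;
    double q = 0.01;   // process noise (simplified diagonal Q)
    double r = 0.5;    // measurement noise (position is measured)

    // Predict: advance the constant-velocity model by dt and inflate P.
    // This is what keeps the track alive when no measurement arrives.
    void predict(double dt) {
        pos += vel * dt;
        p00 += dt * (p10 + p01) + dt * dt * p11 + q;
        p01 += dt * p11;
        p10 += dt * p11;
        p11 += q;
    }

    // Update with a position measurement z (H = [1 0]).
    void update(double z) {
        double s = p00 + r;                  // innovation covariance
        double k0 = p00 / s, k1 = p10 / s;   // Kalman gain
        double y = z - pos;                  // innovation
        pos += k0 * y;
        vel += k1 * y;
        double n00 = (1 - k0) * p00, n01 = (1 - k0) * p01;
        double n10 = p10 - k1 * p00, n11 = p11 - k1 * p01;
        p00 = n00; p01 = n01; p10 = n10; p11 = n11;
    }
};
```

In OpenCV this corresponds to the predict()/correct() pair of cv::KalmanFilter.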

2015-05-07 04:40:45 -0600 commented question goodFeaturesToTrack crashing or producing one dimensional vector

Oh I see. Unfortunately I am new to OpenCV and programming, and I do not think I can do that. Do you suggest downloading an earlier version of OpenCV? If so, what version do you think will work? I have no idea what CMake is, and the only license I have is for VS2008. Do I need vc08?

2015-05-06 14:49:40 -0600 commented question goodFeaturesToTrack crashing or producing one dimensional vector

@berak No, but I need to extract features before using the LK method for optical flow. The code that yields the error (when running) is the vector one:

    Mat img = imread("t.JPeG", CV_LOAD_IMAGE_UNCHANGED);  // read the image data
    vector<Point2f> corn;
    goodFeaturesToTrack(img, corn, 100, 0.01, 0.01);

The code produces an error as soon as it reaches the function. The error is: Unhandled exception at 0x535451bf in P3DX.exe: 0xC0000005: Access violation writing location 0xcccccccc.

The call stack stops at opencv_core2411d.dll with a yellow arrow.

I am using VS2008 (32-bit, debug); my laptop is 64-bit and the OpenCV version is 2.4.11. I used the DLLs from opencv\build\x86\vc10\bin.

2015-05-06 14:32:52 -0600 commented question goodFeaturesToTrack crashing or producing one dimensional vector

@berak The problem is that the goodFeaturesToTrack function fails as soon as I pass a vector to it, but when I pass a Mat I see it is working. I did cout << corn.cols << " " << corn.rows; and I am getting 1 and 100, so the array is one-dimensional, but I thought it should be two-dimensional.

2015-05-06 14:28:23 -0600 answered a question advice for a hand tracking algorithm

I am not sure I understood your problem, as I am new to image processing. However, tracking an object can be improved by having a state estimator such as a Kalman filter. I searched the internet and found this; maybe it could help you:

http://opencvexamples.blogspot.com/20...

2015-05-06 14:21:44 -0600 asked a question goodFeaturesToTrack crashing or producing one dimensional vector

I am trying to implement optical flow in VS2008 using OpenCV 2.4.11. I use the following code:

    Mat img = imread("t.JPeG", CV_LOAD_IMAGE_UNCHANGED);  // read the image data
    vector<Point2f> corn;
    goodFeaturesToTrack(img, corn, 100, 0.01, 0.01);

When I run the code it crashes with an error pointing to opencv_core2411d.dll.

when I modify the code to:

 Mat corn instead of  vector<Point2f> corn;

the code runs; however, it produces the corn matrix as 100 rows by 1 column, i.e. there is no x and y, only one value per row, I assume. Is that normal? How do I infer the coordinates of the features then?
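For reference: when goodFeaturesToTrack writes into a cv::Mat, it produces an N x 1 matrix of type CV_32FC2, so each of the 100 "rows" actually carries both x and y as two interleaved float channels, and you read a point with corn.at<Point2f>(i). A plain-C++ mock of that layout (TwoChannelCol is an illustrative stand-in, not an OpenCV type):

```cpp
#include <vector>

// A Point2f-like pair, matching how OpenCV packs CV_32FC2 elements.
struct Pt2f { float x, y; };

// An N x 1 two-channel matrix is N interleaved (x, y) float pairs:
// rows() reports N and cols() reports 1, yet each element holds both
// coordinates. With a real cv::Mat you would call corn.at<cv::Point2f>(i).
struct TwoChannelCol {
    std::vector<float> data;  // interleaved x0, y0, x1, y1, ...
    int rows() const { return (int)data.size() / 2; }
    int cols() const { return 1; }
    Pt2f at(int i) const { return { data[2 * i], data[2 * i + 1] }; }
};
```

So the 100 x 1 shape is normal; the coordinates are in the channels, not in extra columns.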