aliciadominica's profile - activity

2020-07-01 12:02:25 -0600 received badge  Popular Question (source)
2019-05-26 19:10:20 -0600 received badge  Popular Question (source)
2018-03-27 11:21:44 -0600 received badge  Popular Question (source)
2014-11-09 04:24:30 -0600 received badge  Taxonomist
2013-05-23 06:04:55 -0600 asked a question Feature Counts in a Grid

Hello everyone, I am working on a feature detection project based on GridAdaptedFeatureDetector. I need to count the keypoints in each grid cell, but I'm getting inconsistent results: the count comes back 0 for some cells that clearly contain keypoints and matches. I am pretty sure I'm making a mistake in the counting part. Is there a simpler way to obtain the number of features in each grid cell?

The entire code is below; I'd appreciate any insight.

int main()
{
    CvCapture* cap = cvCreateFileCapture(VIDEO_NAME);
    int height    = (int) cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT);
    int width    = (int) cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH);

    int x = 300;
    int y = 300;

    int counts[4][4];
    int all[4][4];


    int inx = 200;


    IplImage* src1= cvQueryFrame(cap);

    Mat frame_prev(src1);

    cv::Mat imageROI1,imageROI2;
    imageROI1=frame_prev(Rect(x,y,width-x-inx,height-y));

    //ORB detector(500);
    BriefDescriptorExtractor extractor;
    //SURF extractor;

    //ORB extractor;
    vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;

    BFMatcher matcher(NORM_HAMMING);
    vector<vector<DMatch> > matches, good_matches;
    vector<DMatch> matches2, good_matches2;

    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");

    cv::GridAdaptedFeatureDetector det(detector,5000);


    IplImage* src2;
    det.detect(imageROI1,keypoints1);
    extractor.compute(imageROI1,keypoints1,descriptors1);



    while(src2 = cvQueryFrame(cap))
    {
        // reset the per-cell counters for this frame
        for(int a = 0; a < 4; a++)
            for(int b = 0; b < 4; b++)
            {
                counts[a][b] = 0;
                all[a][b] = 0;
            }

        Mat frame(src2);

        imageROI2=frame(Rect(x,y,width-x-inx,height-y));

        det.detect(imageROI2,keypoints2);
        extractor.compute(imageROI2,keypoints2,descriptors2);

        matcher.radiusMatch(descriptors2,descriptors1,matches,5);
        //matcher.match(descriptors2,descriptors1,matches2);

        for(int i=0; i<matches.size(); i++)
            {int num = matches[i].size();
            for(int k=0; k<num; k++)
             {if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(imageROI2.cols/4))
             counts[0][0]++;
            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(imageROI2.cols/2) && keypoints2[matches[i][k].queryIdx].pt.y>(imageROI2.cols/4) )
             counts[0][1]++;
            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(3*imageROI2.cols/4) && keypoints2[matches[i][k].queryIdx].pt.y>(imageROI2.cols/2) )
             counts[0][2]++;
            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(imageROI2.cols) && keypoints2[matches[i][k].queryIdx].pt.y>(3*imageROI2.cols/4) )
             counts[0][3]++;

            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/2) && keypoints2[matches[i][k].queryIdx].pt.x>(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(imageROI2.cols/4))
             counts[1][0]++;
            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/2) && keypoints2[matches[i][k].queryIdx].pt.x>(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(imageROI2.cols/2) && keypoints2[matches[i][k].queryIdx].pt.y>(imageROI2.cols/4) )
             counts[1][1]++;
            if(keypoints2[matches[i][k].queryIdx].pt.x<=(imageROI2.rows/2) && keypoints2[matches[i][k].queryIdx].pt.x>(imageROI2.rows/4) && keypoints2[matches[i][k].queryIdx].pt.y<=(3*imageROI2.cols/4) && keypoints2[matches[i][k].queryIdx].pt ...
(more)
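One simpler approach (a sketch against the code above, not tested): compute each keypoint's cell index directly instead of writing one if per cell. Note that pt.x is the column (width) coordinate and pt.y the row (height) coordinate, so the comparisons above, which test pt.x against rows and pt.y against cols, swap the two axes; that alone could explain cells that come back 0.

    // Sketch: count matched keypoints per cell of a 4x4 grid.
    // Assumes keypoints2, matches, counts and imageROI2 as defined above.
    const int cellW = imageROI2.cols / 4;   // pt.x runs along the columns
    const int cellH = imageROI2.rows / 4;   // pt.y runs along the rows
    for (size_t i = 0; i < matches.size(); i++)
        for (size_t k = 0; k < matches[i].size(); k++)
        {
            const Point2f& pt = keypoints2[matches[i][k].queryIdx].pt;
            int col = min((int)(pt.x / cellW), 3);   // clamp edge points into the last cell
            int row = min((int)(pt.y / cellH), 3);
            counts[row][col]++;
        }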
2013-05-23 05:55:45 -0600 answered a question error using StarFeatureDetector + GridAdaptedFeatureDetector

Maybe this could work? This is what I am using:

Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");

cv::GridAdaptedFeatureDetector det(detector,5000);

det.detect......
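For completeness, a minimal usage sketch with the OpenCV 2.4-era API (image and keypoints are placeholder names):

    // Wrap any detector so keypoints are distributed over a grid of cells.
    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");
    cv::GridAdaptedFeatureDetector det(detector, 5000 /* max total keypoints */,
                                       4, 4 /* grid rows, cols */);
    vector<KeyPoint> keypoints;
    det.detect(image, keypoints);   // "image" is a placeholder cv::Mat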
2013-03-13 06:14:47 -0600 commented question BoW + SVM won't work

Solved the problem: I was getting "Bad input argument" on the SVM.predict line; it turns out the file was corrupt, that's all.

2013-03-06 02:21:47 -0600 commented question BoW + SVM won't work

Solved the BowDE.compute issue, but the SVM still says the input argument is not a valid vector.

2013-03-04 15:33:35 -0600 received badge  Self-Learner (source)
2013-03-04 09:35:19 -0600 received badge  Nice Question (source)
2013-03-04 05:51:54 -0600 asked a question BoW + SVM won't work

Hello, I'm trying to implement a BoW + SVM method for classification that takes several images for classes one and two and then classifies a query image based on those. For some reason, after constructing the BoW descriptor, it returns an empty descriptor vector, so I can't train the SVM, let alone classify. The relevant part of my code is below; I'd appreciate any insight. Thanks a lot.

Ptr<FeatureDetector> features = FeatureDetector::create("SIFT");
Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create("SIFT");
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");

//defining terms for bowkmeans trainer
TermCriteria tc(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 10, 0.001);
int dictionarySize = 100;
int retries = 1;
int flags = KMEANS_PP_CENTERS;
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

BOWImgDescriptorExtractor bowDE(descriptor, matcher);

Mat features1, features2;
Mat bowDescriptor, bowDescriptor2;

Mat trainme(0, dictionarySize, CV_32FC1); 
Mat labels(0, 1, CV_32FC1);

.....
while (dirp = readdir( dp ))
    {

        filepath = dir + "/" + dirp->d_name;
        // If the file is a directory (or is in some way invalid) we'll skip it 
        if (stat( filepath.c_str(), &filestat )) continue;
        if (S_ISDIR( filestat.st_mode ))         continue;

        Mat img = imread(filepath);
        if (!img.data) {
            cout <<"Can't open file." << endl;
            continue;
        }
        features->detect(img, keypoints);
        descriptor->compute(img, keypoints, features1);
        bowDE.compute(img, keypoints, bowDescriptor);
        trainme.push_back(bowDescriptor);
        float label = 1.0;
        labels.push_back(label);
        bowTrainer.add(features1);
        cout << "." << endl;
    }

    while (dirp2 = readdir( dp2 ))
    {
        filepath2 = dir2 + "/" + dirp2->d_name;
        // If the file is a directory (or is in some way invalid) we'll skip it 
        if (stat( filepath2.c_str(), &filestat2 )) continue;
        if (S_ISDIR( filestat2.st_mode ))         continue;

        Mat img2 = imread(filepath2);
        if (!img2.data) {
            cout <<"Can't open file." << endl;
            continue;
        }
        features->detect(img2, keypoints2);
        descriptor->compute(img2, keypoints2, features2);
        bowDE.compute(img2, keypoints2, bowDescriptor2);
        trainme.push_back(bowDescriptor2);
        float label = 0.0;
        labels.push_back(label);
        bowTrainer.add(features2);
        cout << "." << endl;
    }




Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);

CvSVM SVM;
SVM.train(trainme,labels);

Mat tryme(0, dictionarySize, CV_32FC1);
Mat tryDescriptor;
Mat img3 = imread("c:\\Users\\Elvan\\Desktop\\frame_0118.jpg", 0);
vector<KeyPoint> keypoints3;
features->detect(img3, keypoints3);
bowDE.compute(img3, keypoints3, tryDescriptor);
tryme.push_back(tryDescriptor);

cout<<SVM.predict(tryme)<<endl;
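A likely cause of the empty BoW descriptors, from reading the code above (not a verified fix): bowDE.compute is called inside the training loops before bowDE.setVocabulary, so the extractor has no vocabulary to quantize against yet. The usual order is: collect the raw descriptors, cluster them, set the vocabulary, then make a second pass to compute the histograms. Note also that params is defined but never passed to SVM.train. A sketch of the reordered flow:

    // Pass 1: pool raw SIFT descriptors from every training image (as in the
    // loops above, but without calling bowDE.compute yet):
    //     bowTrainer.add(features1);

    // Cluster the pooled descriptors and install the vocabulary.
    Mat dictionary = bowTrainer.cluster();
    bowDE.setVocabulary(dictionary);

    // Pass 2: only now compute the per-image BoW histograms for training.
    //     bowDE.compute(img, keypoints, bowDescriptor);
    //     trainme.push_back(bowDescriptor);
    //     labels.push_back(1.0f);

    CvSVM SVM;
    SVM.train(trainme, labels, Mat(), Mat(), params);   // pass the params defined above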
2013-02-26 03:07:17 -0600 commented question SIFT Feature Descriptor Doesn't Work With ORB Keypoints

No, I couldn't.

2013-01-07 01:59:02 -0600 commented answer SURF slower than SIFT

This is part of a longer program and it already runs in a loop, so measuring the time outside the loop would time the entire program, but I'll check whether per-feature measurement gives better results. I already adjusted the parameters so that SURF and SIFT detect and extract approximately the same number of features, but even a small difference in counts can affect the timing. Thanks for the input.

Edit: Per feature it is still slower.

2013-01-04 07:32:11 -0600 asked a question SURF slower than SIFT

Hi everyone, I'm testing the performance of OpenCV feature detection and description algorithms, and even though the paper claims otherwise, SURF runs slower than SIFT by a millisecond or two. I couldn't make sense of it. I changed the cvRound input to float as suggested here, but it doesn't do anything. My code is below; it also contains other detectors and descriptors, and I uncomment whichever I want to test. Both match around 600-700 keypoints. I'd appreciate it if anyone can shed light on this:

//// DETECTION
    //OrbFeatureDetector detector(500);
    SurfFeatureDetector detector(1500,4);
    ////cv::FAST(imgB, keypointsB, 20);
    //SiftFeatureDetector detector;


    // DESCRIPTOR: uncomment ORB, SURF, BRIEF, or SIFT as needed.
    //OrbDescriptorExtractor extractor;
    SurfDescriptorExtractor extractor;
    //BriefDescriptorExtractor extractor;
    //SiftDescriptorExtractor extractor;

        // detect
    double t11 = (double)getTickCount();
    detector.detect( img1, keypointsB );
    t11 = ((double)getTickCount() - t11)/getTickFrequency();

    double t1 = (double)getTickCount();
    detector.detect( img2, keypointsA );
    t1 = ((double)getTickCount() - t1)/getTickFrequency();

        // extract
    double t22 = (double)getTickCount();
    extractor.compute( img1, keypointsB, descriptorsB);
    t22 = ((double)getTickCount() - t22)/getTickFrequency();

    double t2 = (double)getTickCount();
    extractor.compute( img2, keypointsA, descriptorsA);
    t2 = ((double)getTickCount() - t2)/getTickFrequency();


    // match
    double t3 = (double)getTickCount();
    matcher.match(descriptorsA, descriptorsB, matches);
    t3 = ((double)getTickCount() - t3)/getTickFrequency();
    //std::cout << "matching time [s]: " << t3 << std::endl;
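One way to make the comparison fairer (a rough sanity check, not a rigorous benchmark): since the two detectors rarely return exactly the same number of keypoints, normalize each time by the keypoint count:

    // Per-keypoint cost; assumes t11, t22 and keypointsB from the code above.
    if (!keypointsB.empty())
    {
        std::cout << "detect  [s/keypoint]: " << t11 / keypointsB.size() << std::endl;
        std::cout << "extract [s/keypoint]: " << t22 / keypointsB.size() << std::endl;
    }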
2013-01-04 07:26:47 -0600 commented answer SIFT Feature Descriptor Doesn't Work With ORB Keypoints

I tried it both ways, to no avail :/

2012-12-28 05:40:15 -0600 asked a question SIFT Feature Descriptor Doesn't Work With ORB Keypoints

As stated in the title, the SIFT feature descriptor doesn't work with ORB keypoints. I changed useProvidedKeypoints to true, but it still doesn't work and I get a "Vector subscript out of range" error on the extractor.compute line. Part of the code is below.

ORB detector(500);
detector.detect( img1, keypointsB );

SiftDescriptorExtractor extractor;
extractor.compute( img1, keypointsB, descriptorsB);
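A frequently suggested explanation, worth verifying against your OpenCV version: SIFT packs octave and layer information into KeyPoint::octave in its own format, while ORB stores the raw pyramid level there, so SIFT's compute can end up indexing a pyramid level that doesn't exist, which fits the "Vector subscript out of range" error. Resetting the octave field before compute is a common workaround:

    // Workaround sketch: normalize the ORB keypoints before handing them to SIFT.
    for (size_t i = 0; i < keypointsB.size(); i++)
        keypointsB[i].octave = 0;   // treat every keypoint as base scale

    SiftDescriptorExtractor extractor;
    extractor.compute( img1, keypointsB, descriptorsB);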
2012-12-27 06:11:24 -0600 commented answer FREAK selectPairs Error

Thanks a lot. I figured the syntax was incorrect, but since I'm not a computer scientist, I didn't know why.

2012-12-27 04:17:54 -0600 asked a question FREAK selectPairs Error

Hello, since I'm not happy with the results achieved via FREAK, I decided to use FREAK::selectPairs(). But I keep getting a compiler error: "illegal call of non-static member function". My code is below:

vector<Mat> images;
vector<vector<KeyPoint>> keypoints;

images[0].push_back(img1);
images[1].push_back(img2);
for (int i=0; i<keypointsB.size();i++)
{
    keypoints[0].push_back(keypointsB[i]);
}
for (int i=0; i<keypointsA.size();i++)
{
    keypoints[1].push_back(keypointsA[i]);
}

FREAK::selectPairs(images,keypoints,0.7,true);

Any help would be appreciated.
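For reference, two fixes based on the OpenCV 2.4 API (verify against your version): selectPairs is a non-static member of FREAK, so it has to be called on an instance, and images and keypoints start out empty, so indexing them with [0] and [1] is undefined behavior; push_back onto the outer vectors instead. A sketch:

    // Sketch: assumes img1, img2, keypointsA and keypointsB exist as above.
    vector<Mat> images;
    vector<vector<KeyPoint> > keypoints;
    images.push_back(img1);
    images.push_back(img2);
    keypoints.push_back(keypointsB);
    keypoints.push_back(keypointsA);

    FREAK freak;
    vector<int> pairs = freak.selectPairs(images, keypoints, 0.7, true);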

2012-12-17 07:57:17 -0600 marked best answer OpenCV Distance Metrics

Hello everyone, I'm doing a benchmark test on keypoint detectors, descriptors, and the matching between them for a research project. I'm aware there are great benchmark tests out there, but this one is for a specific experimental environment. The descriptors will be ORB and FREAK; the detectors are ORB, SURF, and maybe FAST. I will use BruteForceMatcher.

To make the test fair, I decided to use approximately the same number of keypoints and the same distances. But I can't seem to find a detailed explanation of the distance metrics used in OpenCV.

For instance, when I use "BruteForceMatcher<L2<float> > matcher;" with ORB as both detector and descriptor, it gives me a correct match between two points with coordinates point1(130,339) and point2(130,340). Obviously the distance between those two is 1, but when I look at the matches vector, the distance value is 272.71964, which is very confusing to me.

My question is: is there any documentation that explains why this is the case? I googled it but haven't found a decent explanation. If not, I would really appreciate it if you could explain.

Thank you
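For anyone with the same confusion, the short answer: DMatch::distance is the distance between the two descriptor vectors under the matcher's norm (L2 here), not the pixel distance between the keypoint coordinates, which is why two nearby points can report a large match distance. The two quantities side by side (m stands for one DMatch from the matches vector; the names are illustrative):

    // Descriptor-space distance: this is what the matcher reports in m.distance.
    double descDist = norm(descriptorsA.row(m.queryIdx),
                           descriptorsB.row(m.trainIdx), NORM_L2);

    // Pixel-space distance between the matched keypoint locations.
    Point2f d = keypointsA[m.queryIdx].pt - keypointsB[m.trainIdx].pt;
    double pixelDist = sqrt(d.x * d.x + d.y * d.y);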

2012-12-17 03:11:58 -0600 commented answer Code not working, not sure why

Yeah that solved it :) Thanks

2012-12-17 02:40:09 -0600 asked a question Code not working, not sure why

Hello, I'm trying to implement an optical flow (Farneback) project that takes frames of a video and matches them. But for some reason I can't seem to load the frames: it doesn't give me an error, but it also doesn't show the images, nor does it do the computations. Could you take a look at the code and tell me what's wrong? At this point I'm taking the first two consecutive frames from the video, and I will change the code to take random frames once I get it working. I'm not sure if the problem is in the part that loads the video or in the optical flow code. The code is below. Thanks in advance.

CvCapture* cap = cvCreateFileCapture("C:\\Users\\Elvan\\Documents\\Panic1000people.mpeg");
if (cap == NULL) 
{
    fprintf(stderr, "Error: Couldn't open image.");
    system("PAUSE");
    exit(-1);
}

int height = (int) cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT);
int width  = (int) cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH);

Mat prevgray, gray, flow, cflow;
namedWindow("flow", 1);
const int step = 10;   // drawing grid step (assumed; not declared in the original snippet)
IplImage *src1, *src2;

src1 = cvQueryFrame(cap);

Mat frameprev(src1);
cvtColor(frameprev, prevgray, CV_BGR2GRAY);

src2 = cvQueryFrame(cap);

Mat frame(src2);

cvtColor(frame, gray, CV_BGR2GRAY);
if (prevgray.data)
{
    calcOpticalFlowFarneback(prevgray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
    cvtColor(prevgray, cflow, CV_GRAY2BGR);
    //Draw the optical flow field.
    for(int y = 0; y < cflow.rows; y += step)
    {
        for(int x = 0; x < cflow.cols; x += step)
        {
            const Point2f& fxy = flow.at<Point2f>(y, x);
            line(cflow, Point(x,y), Point(cvRound(x+fxy.x),       cvRound(y+fxy.y)), CV_RGB(0, 255, 0),1);
            circle(cflow, Point(x,y), 0, CV_RGB(0, 255, 0), -1);
        }
        imshow("flow", cflow);
    }
    //imshow("image",gray); 
}

system("PAUSE");

return 0;
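A side note rather than a diagnosis: the snippet mixes the old C capture API (CvCapture, IplImage) with the C++ Mat API. Using cv::VideoCapture end to end removes the conversions; a sketch of the same first-two-frames setup:

    VideoCapture capture("C:\\Users\\Elvan\\Documents\\Panic1000people.mpeg");
    if (!capture.isOpened())
    {
        fprintf(stderr, "Error: Couldn't open video.");
        return -1;
    }

    Mat frame, prevgray, gray, flow;
    capture >> frame;                        // first frame
    cvtColor(frame, prevgray, CV_BGR2GRAY);
    capture >> frame;                        // second frame
    cvtColor(frame, gray, CV_BGR2GRAY);
    calcOpticalFlowFarneback(prevgray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);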
2012-12-13 07:08:51 -0600 commented question Optical Flow Arrow Tips Pointing the Wrong Way

The video is called 861-13_70.mov in the UCF Crowd Dataset

2012-12-03 04:58:57 -0600 commented answer cvCalcOpticalFlowLK issue

Thanks, I get the optical flow field now. Still no arrows to mark motion but at least now I know it works.

2012-12-03 04:49:31 -0600 commented answer cvCalcOpticalFlowLK issue

I am getting an unhandled exception when I do that.

2012-12-03 04:27:28 -0600 commented answer cvCalcOpticalFlowLK issue

When I show the Graysrc2 image, I can't see the arrows :/

2012-12-03 03:54:01 -0600 asked a question cvCalcOpticalFlowLK issue

So, I'm running simple cvCalcOpticalFlowLK code, but I can't seem to get any flow information, nor can I visualize it. Can anyone help? My code is below:

int main()
{
    // First get two images from an AVI file.
    CvCapture* pCapture = cvCaptureFromFile("C:\\Users\\Elvan\\Desktop\\UCF Crowd Dataset\\861-13_70.mov");
    IplImage* src1 = cvQueryFrame(pCapture);

    while (IplImage* src2 = cvQueryFrame(pCapture))
    {
        CvSize cvsize;
        cvsize.width  = src1->width;
        cvsize.height = src1->height;

        IplImage* Graysrc1 = cvCreateImage(cvsize, IPL_DEPTH_8U, 1);
        cvCvtColor(src1, Graysrc1, CV_RGB2GRAY);

        IplImage* Graysrc2 = cvCreateImage(cvsize, IPL_DEPTH_8U, 1);
        cvCvtColor(src2, Graysrc2, CV_RGB2GRAY);

        IplImage* flowX = cvCreateImage(cvsize, IPL_DEPTH_32F, 1);
        IplImage* flowY = cvCreateImage(cvsize, IPL_DEPTH_32F, 1);

        cvsize.width  = 3;
        cvsize.height = 3;

        cvCalcOpticalFlowLK(Graysrc1, Graysrc2, cvsize, flowX, flowY);

        for (int x = 0; x < cvsize.width; x = x + 10) {
            for (int y = 0; y < cvsize.width; y = y + 10) {
                int vel_x_here = (int)cvGetReal2D(flowX, y, x);
                int vel_y_here = (int)cvGetReal2D(flowY, y, x);
                cvLine(Graysrc2, cvPoint(x, y),
                       cvPoint(x + vel_x_here, y + vel_y_here), cvScalarAll(255));
            }
        }

        cvShowImage("Velx", flowX);
        cvShowImage("Vely", flowY);
        src1 = src2;
        cvWaitKey(1);
    }

    return 0;
}
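An observation from reading the code above (not a tested fix): cvsize is reused, first holding the frame size and then being reassigned to 3x3 for the LK window, so the drawing loops only scan x, y < 3 and almost nothing gets drawn; the image the lines are drawn on (Graysrc2) is also never displayed. Keeping two separate sizes would look like this:

    // Sketch: keep the LK aperture separate from the frame size.
    CvSize frameSize = cvSize(src1->width, src1->height);
    CvSize winSize = cvSize(3, 3);   // LK aperture
    cvCalcOpticalFlowLK(Graysrc1, Graysrc2, winSize, flowX, flowY);

    for (int y = 0; y < frameSize.height; y += 10) {
        for (int x = 0; x < frameSize.width; x += 10) {
            int vx = (int)cvGetReal2D(flowX, y, x);
            int vy = (int)cvGetReal2D(flowY, y, x);
            cvLine(Graysrc2, cvPoint(x, y), cvPoint(x + vx, y + vy), cvScalarAll(255));
        }
    }
    cvShowImage("Flow", Graysrc2);   // show the image the vectors were drawn on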
2012-12-03 02:31:50 -0600 received badge  Teacher (source)
2012-12-02 06:50:44 -0600 asked a question Optical Flow Arrow Tips Pointing the Wrong Way

This is an extremely trivial question but it kind of bugs me. I modified the optical flow algorithm from these notes to run over the entire video and display it: http://robots.stanford.edu/cs223b05/notes/CS%20223-B%20T1%20stavens_opencv_optical_flow.pdf

I extract the optical flow correctly and it shows the arrows and everything, but when the motion is from left to right, the arrows point left rather than right. In fact, the arrows always point left, no matter the motion. I tried changing the angle equation below, but it doesn't change anything. The original arrow calculation and drawing code is posted below; I hope someone can help.

Thanks in advance

      CvPoint p, q;
      /* "p" is the point where the line begins.
         "q" is the point where the line stops.
         "CV_AA" means antialiased drawing.
         "0" means no fractional bits in the center coordinate or radius. */
      p.x = (int) frame1_features[i].x;
      p.y = (int) frame1_features[i].y;
      q.x = (int) frame2_features[i].x;
      q.y = (int) frame2_features[i].y;
      double angle;
      angle = atan2( (double) p.y - q.y, (double) p.x - q.x );
      double hypotenuse;  hypotenuse = sqrt( square(p.y - q.y) + square(p.x - q.x) );
      q.x = (int) (p.x - 3 * hypotenuse * cos(angle));
      q.y = (int) (p.y - 3 * hypotenuse * sin(angle));
      cvLine( frame1, p, q, line_color, line_thickness, CV_AA, 0 );
      /* Now draw the tips of the arrow.  I do some scaling so that the
        * tips look proportional to the main line of the arrow.
        */   
      p.x = (int) (q.x + 9 * cos(angle + pi / 4));
      p.y = (int) (q.y + 9 * sin(angle + pi / 4));    
      cvLine( frame, p, q, line_color, line_thickness, CV_AA, 0 );
      p.x = (int) (q.x + 9 * cos(angle - pi / 4));
      p.y = (int) (q.y + 9 * sin(angle - pi / 4));    
      cvLine( frame, p, q, line_color, line_thickness, CV_AA, 0 );
    }
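Two observations on the fragment above (a reading of the code, not a confirmed diagnosis): the main line is drawn on frame1 while the two tip segments are drawn on frame, so if those are different images the tips won't appear where expected; and the angle is computed as atan2(p.y - q.y, p.x - q.x), i.e., pointing from the new position back toward the old one, which only comes out right because the endpoint is then extended with a minus sign. Computing the angle in the direction of motion makes the geometry easier to check (square and pi as defined in the linked Stanford notes):

      /* Sketch: draw from the old position p toward the new position q, with the
         angle measured in the direction of motion. */
      double angle = atan2( (double) q.y - p.y, (double) q.x - p.x );
      double hypotenuse = sqrt( square(q.y - p.y) + square(q.x - p.x) );
      CvPoint tip;
      tip.x = (int) (p.x + 3 * hypotenuse * cos(angle));
      tip.y = (int) (p.y + 3 * hypotenuse * sin(angle));
      cvLine( frame1, p, tip, line_color, line_thickness, CV_AA, 0 );
      /* Arrowhead: two short segments angled pi/4 back from the tip. */
      CvPoint barb;
      barb.x = (int) (tip.x - 9 * cos(angle + pi / 4));
      barb.y = (int) (tip.y - 9 * sin(angle + pi / 4));
      cvLine( frame1, tip, barb, line_color, line_thickness, CV_AA, 0 );
      barb.x = (int) (tip.x - 9 * cos(angle - pi / 4));
      barb.y = (int) (tip.y - 9 * sin(angle - pi / 4));
      cvLine( frame1, tip, barb, line_color, line_thickness, CV_AA, 0 );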
2012-12-02 06:25:34 -0600 answered a question Optical flow - color images
2012-11-27 06:41:54 -0600 commented question Windows installation WORKING step by step guide

I described how I did it in a reply to another thread, "How to install OpenCV under Windows". I'm not sure it's OK to post the same answer in two threads, so please check that answer and see if it works for you.

2012-11-27 06:38:40 -0600 commented answer OpenCV Distance Metrics

Thanks, I did use radiusMatch, and it improved the results over simple matching. I needed the optical flow info too so thanks for that :)