aripod's profile - activity

2020-05-01 08:19:00 -0600 received badge  Notable Question (source)
2019-07-28 18:23:58 -0600 received badge  Popular Question (source)
2018-08-20 10:43:02 -0600 received badge  Famous Question (source)
2018-01-11 23:46:51 -0600 received badge  Notable Question (source)
2017-10-22 09:02:38 -0600 received badge  Popular Question (source)
2016-06-07 04:11:49 -0600 commented question Coordinates of keypoints using FLANN matcher

I tried histogram comparison but I still get better results with keypoints. If I could just check that all the matched points lie at the same Y coordinate, that would be enough for my simple application...

2016-06-06 09:50:12 -0600 asked a question Coordinates of keypoints using FLANN matcher

I have two images (faces) and I want to check whether it is the same person or not. So far I have implemented it as in this tutorial and obtained these results:

(images: the drawn keypoint matches for both faces)

So far I could work with that. If I could just check that each matched point on the left and on the right lies on the same horizontal line, I could say it is the same face in both images. But how do I get the coordinates of each of the points shown in the images above? (See the sketch after the code for what I mean.)

This is the code:

    // Step 1: Detect the keypoints using the SURF detector.
    int minHessian = 400;
    SurfFeatureDetector detector(minHessian);
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect(roiFrameMaster, keypoints_1);
    detector.detect(roiFrameSlave, keypoints_2);

    // Step 2: Calculate descriptors (feature vectors).
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute(roiFrameMaster, keypoints_1, descriptors_1);
    extractor.compute(roiFrameSlave, keypoints_2, descriptors_2);

    // Step 3: Match descriptor vectors using the FLANN matcher.
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match(descriptors_1, descriptors_2, matches);

    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    { 
        double dist = matches[i].distance;
        if(dist < min_dist)
            min_dist = dist;
        if(dist > max_dist)
            max_dist = dist;
    }
    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist,
    //-- or a small arbitrary value (0.02) in case min_dist is very small).
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= max(2*min_dist, 0.02) )
            good_matches.push_back( matches[i] );
    }

    //-- Draw only the "good" matches.
    Mat img_matches;
    drawMatches( roiFrameMaster, keypoints_1, roiFrameSlave, keypoints_2,
                 good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show the detected matches.
    imshow( "Good Matches", img_matches );
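
What I have in mind is something like this sketch (assuming DMatch::queryIdx indexes keypoints_1 and trainIdx indexes keypoints_2, which is how matcher.match(descriptors_1, descriptors_2, matches) fills them):

    // Read the pixel coordinates of each "good" match.
    // queryIdx -> keypoints_1 (master), trainIdx -> keypoints_2 (slave).
    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        Point2f p1 = keypoints_1[ good_matches[i].queryIdx ].pt;
        Point2f p2 = keypoints_2[ good_matches[i].trainIdx ].pt;
        printf("match %d: (%.1f, %.1f) -> (%.1f, %.1f), dy = %.1f\n",
               (int)i, p1.x, p1.y, p2.x, p2.y, fabs(p1.y - p2.y));
    }

Then I could count how many matches have a small dy and decide whether the two faces line up.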

Thanks for the help.

2016-04-21 03:53:15 -0600 commented answer About opencv_traincascade....

Sorry for the delay... It did not train very well. If I move the camera around there are no false detections, but it also does not detect the phone itself, only the area next to it in the background, as you can see here. Even though I would like to make it work, I've been using the find_object_2d ROS package in parallel and managed to do what I needed, but that involves no trained model...

2016-04-19 05:32:30 -0600 commented answer About opencv_traincascade....

After leaving it training with these parameters:

PARAMETERS:
cascadeDirName: training
vecFileName: positives.vec
bgFileName: bg.txt
numPos: 220
numNeg: 2200
numStages: 25
precalcValBufSize[Mb] : 4096
precalcIdxBufSize[Mb] : 4096
stageType: BOOST
featureType: LBP
sampleWidth: 80
sampleHeight: 80
boostType: GAB
minHitRate: 0.998
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100

I see that it got stuck:

===== TRAINING 14-stage =====
<BEGIN
POS count : consumed   220 : 220

It's been like that for over a day......
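
For reference, the command line matching this dump would be roughly as follows (reconstructed from the parameters above, not copied from my shell):

    opencv_traincascade -data training -vec positives.vec -bg bg.txt -numPos 220 -numNeg 2200 -numStages 25 -featureType LBP -w 80 -h 80 -minHitRate 0.998 -maxFalseAlarmRate 0.5 -precalcValBufSize 4096 -precalcIdxBufSize 4096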

2016-04-14 02:18:01 -0600 commented answer About opencv_traincascade....

Should I give it a try with 2500 or 4000 numNeg? I'll try with 2500 and 4000 (LBP) and, why not, 2500 with HAAR as well... I changed to 25 stages and increased the buffers to 1024MB...

2016-04-13 11:11:11 -0600 commented answer About opencv_traincascade....

I re-trained with:

PARAMETERS:
cascadeDirName: dataPos220Neg4000LBP
vecFileName: positives.vec
bgFileName: bg.txt
numPos: 220
numNeg: 4000
numStages: 10
precalcValBufSize[Mb] : 256
precalcIdxBufSize[Mb] : 256
stageType: BOOST
featureType: LBP
sampleWidth: 80
sampleHeight: 80
boostType: GAB
minHitRate: 0.998
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100

And again I got:

===== TRAINING 5-stage =====
<BEGIN
POS count : consumed   220 : 220
NEG count : acceptanceRatio    4000 : 0.000638472
Required leaf false alarm rate achieved. Branch training terminated.

2016-04-13 03:56:25 -0600 commented answer About opencv_traincascade....

Next time I visit Belgium I'll buy you a beer!

2016-04-13 03:27:26 -0600 commented answer About opencv_traincascade....

I'll let it train then, and will try with minHitRate 0.998. I'll let you know when I have some news.

2016-04-13 02:12:42 -0600 commented answer About opencv_traincascade....

As you can see, I wanted 10 stages but got Required leaf false alarm rate achieved. Branch training terminated. at stage 6. Plus, looking at NEG count : acceptanceRatio 2500 : 0.000851414, if I kept going for more stages I would reach the ~10e-5 value that I should avoid so as not to have an overtrained classifier. Am I right?

2016-04-13 01:58:24 -0600 commented answer About opencv_traincascade....

After that I tested with numNeg 4000 and it made it even worse. I've also been playing with minNeighbours but I still have false detections. I believe I should stick with numNeg 2500 and gather more positive samples so it can differentiate the object better?

2016-04-11 08:42:02 -0600 commented answer About opencv_traincascade....

I tested with:

PARAMETERS:
cascadeDirName: dataPos220Neg2500LBP
vecFileName: positives.vec
bgFileName: bg.txt
numPos: 220
numNeg: 2500
numStages: 10
precalcValBufSize[Mb] : 256
precalcIdxBufSize[Mb] : 256
stageType: BOOST
featureType: LBP
sampleWidth: 80
sampleHeight: 80
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100

===== TRAINING 5-stage =====
<BEGIN
POS count : consumed   220 : 221
NEG count : acceptanceRatio    2500 : 0.000851414
Required leaf false alarm rate achieved. Branch training terminated.

and I'm getting more false detections now....how's that possible? I guess it needs a bigger numPos?

2016-04-11 07:51:11 -0600 commented answer About opencv_traincascade....

And I guess the bigger the numNeg, the better and more robust, right? I'll keep testing with that number and the negative samples I have for now, and get back to you with the results. Is there a way to specify how it picks the negative samples? It might be nice to specify a number of windows per image, to make the process more "controlled" and cover the full picture rather than sampling randomly. As for rotation, I don't need 360 degrees, just around 45 degrees to each side, which for now seems correct... and in case I want to increase this, the solution is ALWAYS to get more samples, right?

2016-04-11 07:19:18 -0600 commented answer About opencv_traincascade....

Should I get more negative images, or are the ones I have enough? Is just increasing numNeg enough? And do you think I should run all these tests with LBP until I get the right training values and then retrain with HAAR? And what about the rotation of the object? It keeps detecting it when I rotate up to about 45 degrees to each side...

2016-04-11 07:05:24 -0600 commented answer About opencv_traincascade....

I changed minNeighbours to 5 and then 10 but I still have a lot of false positives. As for the temporal approach, these false detections also persist over time, so they would still be detected and never discarded... I'm running the training again with maxFalseAlarmRate set to 0.25 (half the default), and I switched back to GAB. In parallel I'm running the same but with LBP rather than HAAR, to see which one works better... what do you think?

Should I get more positive samples to avoid the false negatives....? Or increase the negative windows?
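
For reference, the call where I change it is roughly this (a sketch rather than my exact code; phone_cascade and frame_gray are placeholder names):

    // minNeighbours is the 4th argument of CascadeClassifier::detectMultiScale.
    std::vector<Rect> detections;
    phone_cascade.detectMultiScale( frame_gray, detections,
                                    1.1,            // scaleFactor
                                    10,             // minNeighbors (tried 5, then 10)
                                    0,              // flags (ignored for new-style cascades)
                                    Size(40, 40) ); // minSize, to skip tiny hits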

2016-04-11 04:52:04 -0600 commented answer About opencv_traincascade....

I found the issue! In the detection I'm downscaling the frame so it runs faster, and I had commented out the rescaling of the bounding box. The code is in the question, as EDIT 1. Now the detection works, but I still have a lot of false positives. Have a look at this short video.

2016-04-11 03:02:28 -0600 commented answer About opencv_traincascade....

I took 250 new images of the object (with different backgrounds) and 305 images of the office without the object, and left it training with opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 220 -numNeg 1000 -numStages 10 -w 80 -h 80 -bt RAB. After 10 and a half hours I got:

POS count : consumed   220 : 221
NEG count : acceptanceRatio    1000 : 0.000370403
Required leaf false alarm rate achieved. Branch training terminated.

The result of the detection: without the object there are false detections, and with the object there are also false detections plus no good ones...

2016-04-07 07:14:12 -0600 commented answer About opencv_traincascade....

Ok, got it. So I don't need more negative images, just more windows for the training algorithm; and for numPos 250 I would need around 280 positives, right? I chose 24 based on the frontal_face cascade... plus, if I make it bigger, detection would be slower? I don't mind slow training, but detection should be fast: there will be more than 5 objects to detect, and with a few cascades per object to handle rotation it might take a long time to run the detection (it needs to be ~real time).

2016-04-07 07:07:32 -0600 commented answer About opencv_traincascade....

Sorry for the dumb question... A window would be a section of an image, right?

2016-04-07 06:55:48 -0600 commented answer About opencv_traincascade....

I don't believe there's anything wrong with the line I use to run the training: opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 150 -numNeg 300 -numStages 10 -w 24 -h 24 ....

2016-04-07 04:27:34 -0600 commented answer About opencv_traincascade....

Maybe I don't use a rotating table, but instead "I" rotate around the object so the background varies? Here is my dataset. The object is rarely detected. Once again, thanks for your big help!

2016-04-06 08:16:40 -0600 commented answer About opencv_traincascade....

I've just gathered 60 images of the phone from different angles (a few degrees...) and heights, with 300 negative samples from randomly walking around the office... training took 1 or 2 minutes, got a Required leaf false alarm rate achieved with NEG count : acceptanceRatio 250 : 0.000752328, but still got around 25 false positives... I trained it with RAB...

2016-04-06 07:28:26 -0600 commented answer About opencv_traincascade....

I thought the viewpoint variation was handled by how the cascade was trained. So if I want to detect a phone, I should capture pictures of it from the front, train a model, rotate it a few degrees and train a new model again... but say I end up with 3 models per object and I have 5 or 6 objects. That leads to ~15 models which I have to run on every frame to see if any of my objects is there. I guess that will take a lot of time and it won't be possible to have real-time detection, right? Of course, I take it that the idea of using a turning table to capture as many pictures as possible of the object is off the table...

And how do you calculate the precision/recall values after each stage to see if they increase or decrease ... (more)

2016-04-06 04:49:30 -0600 commented answer About opencv_traincascade....

Forgot to ask: by default the boosting algorithm is GAB. Which one gives better results? I believe they chose GAB as the default because it is the one that consumes the least RAM? The computer I'm using has 32GB of RAM plus another 32GB of swap, so should I go with RAB?

2016-04-06 04:30:28 -0600 commented answer About opencv_traincascade....

I've started reading the book, but in the meantime I'd like to ask a few things so I can leave it training while I read. I read in one of your answers that it's better to have 50 good positive images than to take one and generate 50 with opencv_createsamples. Therefore, I can take 50 images of my object from different angles and use them as positives; would this be better? The other thing is the negatives: as I want to detect the objects in a controlled environment (e.g. my office), I can do a "random" walk gathering images without the object, right?

I also read that I should aim for a NEG count : acceptanceRatio around 0.0004 for a good cascade, and that ~5.3557e-05 means it is overtrained?

2016-04-04 11:25:09 -0600 received badge  Student (source)
2016-04-04 07:20:55 -0600 asked a question About opencv_traincascade....

Hello,

I am currently trying to train my own cascade based on Naotoshi Seo's tutorial and Coding Robin's tutorial. I still have a couple of questions that I haven't found answers to.

I will have some objects to detect in a controlled scenario (it will always be in the same room and the objects will not vary). Therefore, my idea is to grab my camera and record the room for a certain amount of time with NONE of the objects present, to gather the negative images. Then I would put each object I want to detect on a turning table (one at a time, of course...), set the camera on a tripod and, for different heights, choose a ROI surrounding the object and choose when to start and stop saving frames while the object rotates. Thus I would have several views of the same object from different angles, and since I know the X, Y position plus the size of the bounding box, I can easily save a file with the path, the number of objects in the scene, and these four parameters to create the .vec file.
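
The annotations file would just follow the usual opencv_createsamples -info format: one line per image with the path, the number of objects, and then x y width height for each object. Something like this (the file names are only illustrative):

    positives/img0001.png 1 120 95 80 80
    positives/img0002.png 1 118 97 80 80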

My questions are:

  1. I should save the images as grey scale, right?
  2. At which resolution should I save the negative images? (My camera is 1280x1024.) Original or resized to...?
  3. Should I save the entire image or just the ROI for the positive images?

I'd like to test this because, as a first approach, I took a picture of an object with my phone, cropped it and removed the background (a 50x50 grey-scale image), and used opencv_createsamples together with the negatives I captured as described before (saved as 100x100 grey scale).

Then, to get my positive samples for training, I ran:

opencv_createsamples -img mouse1Resized.png -bg bg.txt -info pos/info.lst -jpgoutput pos/ -maxxangle 0.5 -maxyangle -0.5 -maxzangle 0.5 -num 1690 -bgcolor 0 -bgthresh 0

where 1690 is the number of negative images that I captured. Then I created the .vec file with:

opencv_createsamples -info pos/info.lst -num 1690 -w 20 -h 20 -vec positives.vec

And started training with:

opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1400 -numNeg 700 -numStages 15 -w 20 -h 20

When this finished, I tried the detector and got a LOT of false positives, even when the object was not in the scene.

So here are some more questions.

  1. Should the negatives be 100x100?
  2. Should the positive be 50x50?
  3. When I create the .vec file, how large can -w and -h be?

I would like to test both approaches to see which gives the best results... or, based on your experience, which one should I follow?

Thanks for the help.

EDIT 1:

This is the code I use for detections:

void detect(Mat frame, std::vector<Rect> &objects)
{
    int i, div=2;
    Mat frame_gray;
    resize(frame, frame_gray, Size(frame.cols/div,frame.rows/div));
    cvtColor(frame_gray, frame_gray ...
(more)
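
The snippet gets cut off above; the rest of the function goes roughly like this (a sketch rather than the exact code — cascade is a placeholder for my CascadeClassifier, and the loop at the end is the bounding-box rescaling I had commented out):

    cvtColor(frame_gray, frame_gray, CV_BGR2GRAY);
    equalizeHist(frame_gray, frame_gray);
    cascade.detectMultiScale(frame_gray, objects, 1.1, 5, 0, Size(30, 30));
    // Scale the boxes back up to the full-resolution frame;
    // skipping this step was what broke the detections.
    for( i = 0; i < (int)objects.size(); i++ )
    {
        objects[i].x      *= div;
        objects[i].y      *= div;
        objects[i].width  *= div;
        objects[i].height *= div;
    }
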
2016-03-29 07:25:30 -0600 marked best answer Autotools with opencv (undefined reference to cv::meanShift)

Hello,

I'm using autotools to build a library of several custom functions based on OpenCV, which I will use in another project.

So first I build this library with the following structure:

src/ (all .cpp files)
dpf-template/ (all .h files)
test/
configure.ac
Makefile.am
dpf_template.pc.in

configure.ac:

AC_PREREQ([2.69])

AC_INIT([calc_mean], [1.0])
AM_INIT_AUTOMAKE([foreign])
AM_MAINTAINER_MODE([enable])

AC_CONFIG_MACRO_DIR([m4])

# Checks for programs.
AC_PROG_CXX
AC_PROG_LIBTOOL

#PKG_CHECK_MODULES([calc_mean])

AC_OUTPUT([Makefile
    src/Makefile
    test/Makefile
    dpf_template.pc])

Makefile.am:

ACLOCAL_AMFLAGS = -I m4

AUTOMAKE_OPTIONS = foreign
SUBDIRS = src test

pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = dpf_template.pc

src/Makefile.am

lib_LTLIBRARIES = libdpf_template.la

libdpf_template_la_SOURCES = \ (plus all the files in src/*.cpp and dpf-template/*.h)

AM_CPPFLAGS = -I$(top_srcdir) `pkg-config --cflags opencv`
AM_CFLAGS = -g -Wall `pkg-config --cflags opencv` -I/usr/include/eigen3
AM_CXXFLAGS=`pkg-config --cflags opencv`

libdpf_templateincludedir = $(includedir)/dpf_template
libdpf_templateinclude_HEADERS = \ (plus all the files in dpf-template/*.h)

I also checked where opencv.pc is and made sure that directory is in PKG_CONFIG_PATH.

With these, make and make install run with no errors. So far so good, but when I build a simple project that links against libdpf_template.so (through the .pc file), I get a single error:

libdpf_template.so: undefined reference to `cv::meanShift(cv::_InputArray const&, cv::Rect_<int>&, cv::TermCriteria)'
collect2: error: ld returned 1 exit status

Shouldn't I have been warned about this when building libdpf_template itself? Thanks for the help.

2016-03-25 06:39:18 -0600 commented question Autotools with opencv (undefined reference to cv::meanShift)

Solved it by adding AM_LDFLAGS = `pkg-config --libs opencv` to the Makefile.am.
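
In other words, the relevant lines end up roughly like this (a sketch; presumably in src/Makefile.am, where the library is built — the backquotes are literal and get expanded by the shell at link time):

    # Link against the OpenCV libraries so that libdpf_template.so
    # resolves symbols such as cv::meanShift for downstream users.
    AM_LDFLAGS = `pkg-config --libs opencv`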