2020-05-01 08:19:00 -0600 | received badge | ● Notable Question (source) |
2019-07-28 18:23:58 -0600 | received badge | ● Popular Question (source) |
2018-08-20 10:43:02 -0600 | received badge | ● Famous Question (source) |
2018-01-11 23:46:51 -0600 | received badge | ● Notable Question (source) |
2017-10-22 09:02:38 -0600 | received badge | ● Popular Question (source) |
2016-06-07 04:11:49 -0600 | commented question | Coordinates of keypoints using FLANN matcher I tried with histogram comparison but I still have better results with keypoints; if I could just check that all the points lie on the same Y coordinate, that would be enough for my simple application... |
2016-06-06 09:50:12 -0600 | asked a question | Coordinates of keypoints using FLANN matcher I have two images (faces) and I want to check if it is the same person or not. So far I implemented it as in this tutorial and I obtained these results:
So far I could work with that. If I could just check that each pair of matched points from the left and right images lies on the same horizontal line, I could say it is the same face in both images. But the thing is, how do I get the coordinates of each of the points shown in the images above? This is the code: Thanks for the help. |
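The coordinate lookup asked about above can be sketched as follows. With OpenCV, the coordinates come from the `KeyPoint` lists via the match indices (`kp1[m.queryIdx].pt` and `kp2[m.trainIdx].pt`); here plain tuples stand in for `cv2.KeyPoint` and `cv2.DMatch` so the sketch is self-contained, and the names `kp1`, `kp2`, `matches` and the 3-pixel tolerance are assumptions, not the question's actual code.

```python
# Sketch: get (x, y) coordinates of matched keypoints and check that every
# matched pair lies on (nearly) the same horizontal line.
# In real OpenCV code: p1 = kp1[m.queryIdx].pt, p2 = kp2[m.trainIdx].pt.

def matched_coordinates(kp1, kp2, matches):
    """Return (point_in_img1, point_in_img2) coordinate pairs per match."""
    return [(kp1[q], kp2[t]) for q, t in matches]

def same_horizontal_lines(coord_pairs, tol=3.0):
    """True if every matched pair's Y coordinates differ by at most tol px."""
    return all(abs(p1[1] - p2[1]) <= tol for p1, p2 in coord_pairs)

# Toy stand-ins: keypoints as (x, y) tuples, matches as (queryIdx, trainIdx).
kp1 = [(10.0, 20.0), (30.0, 45.0)]
kp2 = [(12.0, 21.5), (33.0, 44.0)]
matches = [(0, 0), (1, 1)]

pairs = matched_coordinates(kp1, kp2, matches)
print(same_horizontal_lines(pairs))  # True: all Y differences within 3 px
```

For the rectified face images described in the question, running this check on the good matches would implement exactly the "same horizontal line" test the author wants.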
2016-04-21 03:53:15 -0600 | commented answer | About opencv_traincascade.... Sorry for the delay... It did not train very well... If I move the camera around there are no false detections, but it also does not detect the phone itself, only the area next to it in the background, as you can see here. Even though I would like to make it work, I've been using the find_object_2d ROS package in parallel and managed to do what I needed, but that involves no trained model... |
2016-04-19 05:32:30 -0600 | commented answer | About opencv_traincascade.... After leaving it training with these parameters: I see that it got stuck: It's been like that for over a day... |
2016-04-14 02:18:01 -0600 | commented answer | About opencv_traincascade.... Should I give it a try with 2500 or 4000 numNeg? I'll try with 2500 and 4000 (LBP) and, why not, 2500 HAAR as well... I changed to 25 stages and increased the buffers to 1024 MB... |
2016-04-13 11:11:11 -0600 | commented answer | About opencv_traincascade.... I re-trained with: And again I got: |
2016-04-13 03:56:25 -0600 | commented answer | About opencv_traincascade.... Next time I visit Belgium I'll buy you a beer! |
2016-04-13 03:27:26 -0600 | commented answer | About opencv_traincascade.... I'll leave it training then, and will try with minHitRate 0.998. I'll let you know when I have some news |
2016-04-13 02:12:42 -0600 | commented answer | About opencv_traincascade.... As you can see, I wanted 10 stages but I got the |
2016-04-13 01:58:24 -0600 | commented answer | About opencv_traincascade.... After that I tested with numNeg 4000 and it made it even worse. I've also been playing with minNeighbors but I still have false detections. I believe I should stick with numPos 2500 and gather more positive samples so it can differentiate the object better? |
2016-04-11 08:42:02 -0600 | commented answer | About opencv_traincascade.... I tested with: and I'm getting more false detections now....how's that possible? I guess it needs a bigger numPos? |
2016-04-11 07:51:11 -0600 | commented answer | About opencv_traincascade.... And I guess the bigger the numNeg the better, more robust, right? I'll keep testing with that number with the negative samples I have for now and get back to you with the results. Is there a way to specify how it gets the negative samples? It would be nice to specify a number of windows per image, to make this process more "controlled" and cover the full picture rather than sampling randomly. For the rotation, I don't need 360 degrees, just around 45 degrees to each side... which for now seems correct... In case I want to increase this, the solution is ALWAYS to get more samples, right? |
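The "controlled" negative sampling wished for above can be done manually: cut windows from each negative image on a regular grid, save the crops, and list them in the negatives file. This is a hypothetical sketch (not a traincascade feature); the window size of 24 and the 100x100 image dimensions are assumed example values.

```python
# Sketch: enumerate a deterministic grid of fixed-size windows over a
# negative image, instead of relying on traincascade's random sampling.
# Each (x, y, w, h) tuple would then be cropped and saved as its own
# negative sample.

def grid_windows(img_w, img_h, win, stride):
    """Return (x, y, win, win) windows covering the image on a regular grid."""
    return [(x, y, win, win)
            for y in range(0, img_h - win + 1, stride)
            for x in range(0, img_w - win + 1, stride)]

# A 100x100 negative image tiled with non-overlapping 24x24 windows:
windows = grid_windows(100, 100, 24, 24)
print(len(windows))  # 16 windows: a 4x4 grid of 24x24 crops
```

A smaller stride (e.g. half the window size) would give overlapping windows and many more negatives per image, at the cost of more redundancy between samples.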
2016-04-11 07:19:18 -0600 | commented answer | About opencv_traincascade.... Should I get more negative images or are the ones I have enough? Will just increasing numNeg be enough? And do you think I should do all these tests with LBP until getting the right training values and then retrain with HAAR? And what about the rotation of the object? It keeps detecting it when I rotate up to about 45 degrees to each side... |
2016-04-11 07:05:24 -0600 | commented answer | About opencv_traincascade.... I changed minNeighbors to 5 and then 10 but I still have a lot of false positives. For the temporal approach, these false detections also persist over time, so they would still be detected and never discarded... I'm running the training again with maxFalseAlarmRate set to 0.25 (half the default), and I switched back to GAB. In parallel I'm running the same but with LBP rather than HAAR to see which one works better... What do you think? Should I get more positive samples to avoid the false negatives? Or increase the negative windows? |
2016-04-11 04:52:04 -0600 | commented answer | About opencv_traincascade.... I found the issue! In the detection I'm downscaling so it is faster, and I had commented out the rescaling of the bounding box. The code is in the question, as EDIT 1. Now the detection works but I still have a lot of false positives. Look at this short video |
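The bug described above (detecting on a downscaled frame without mapping the result back) comes down to simple rectangle arithmetic. A minimal sketch, where the 0.5 scale factor and the example rectangle are assumptions rather than the question's actual code:

```python
# Sketch: when detectMultiScale runs on a frame resized by `scale`, every
# detected rectangle must be scaled back by 1/scale before it is drawn on
# the original full-resolution frame.

scale = 0.5  # example: frame resized to half size before detection

def rescale_rect(rect, scale):
    """Map a rect detected on the downscaled frame back to full resolution."""
    x, y, w, h = rect
    inv = 1.0 / scale
    return (int(x * inv), int(y * inv), int(w * inv), int(h * inv))

detected = (40, 30, 24, 24)           # rect in downscaled coordinates
print(rescale_rect(detected, scale))  # (80, 60, 48, 48) in full-frame coords
```

Forgetting (or commenting out) this step makes the boxes land in the top-left quarter of the frame, which matches the symptom reported in the comment.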
2016-04-11 03:02:28 -0600 | commented answer | About opencv_traincascade.... I took 250 new images of the object (with different backgrounds) and 305 images of the office without the object and left it training using The result of the detection is: without the object there are false detections and with the object there are also false detections plus no good ones.... |
2016-04-07 07:14:12 -0600 | commented answer | About opencv_traincascade.... Ok, got it. So I don't need to get more negatives but increase the windows for the training algorithm, but for the 250 positives I would need around 280 samples in the .vec, right? I chose 24 based on the frontal_face cascade... plus if I make it bigger, the detection would be slower? I don't mind if the training is slow, but detection should be fast, as there will be more than 5 objects to detect, and considering a few cascades per image to make it rotation invariant, it might take a long time to run the detection (it needs to be ~real time) |
2016-04-07 07:07:32 -0600 | commented answer | About opencv_traincascade.... Sorry for the dumb question... A window would be a section of an image, right? |
2016-04-07 06:55:48 -0600 | commented answer | About opencv_traincascade.... I don't believe there's anything wrong with the command line I use to run the training |
2016-04-07 04:27:34 -0600 | commented answer | About opencv_traincascade.... Maybe I don't use a rotating table but "I" rotate around the object to get the background? Here is my dataset. The object is rarely detected. Once again, thanks for your big help! |
2016-04-06 08:16:40 -0600 | commented answer | About opencv_traincascade.... I've just gathered 60 images of the phone from different angles (a few degrees...) and heights, with 300 negative samples from randomly walking around the office... Training took 1 or 2 minutes, got a Required leaf false alarm rate achieved with NEG count : acceptanceRatio 250 : 0.000752328, but still got around 25 false positives... I trained it with RAB... |
2016-04-06 07:28:26 -0600 | commented answer | About opencv_traincascade.... I thought the viewpoint variation was handled by how the cascade was trained. So if I want to detect a phone, I should capture pictures of it from the front, train a model, rotate it a few degrees and train a new model again... But let's say I end up with 3 models per object and I have 5 or 6 objects. This leads to ~15 models which I have to run on each frame to see if any of my objects is there. I guess it will take a lot of time and it won't be possible to have real-time detection, right? Of course, I take it that the idea of using a turning table to capture as many pictures as possible of the object is ruled out... And how do you calculate the precision and recall values after each stage and see if it increases or decreases ... (more) |
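The precision/recall question above reduces to counting detections against a labelled test set after each stage. A minimal sketch (the counts are made-up example values; how you label true/false detections is up to the evaluation protocol):

```python
# Sketch: precision and recall from detection counts on a labelled test set.
# precision = TP / (TP + FP): how many reported detections are correct.
# recall    = TP / (TP + FN): how many real objects were found.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 40 correct detections, 10 false positives, 5 missed objects:
p, r = precision_recall(40, 10, 5)
print(round(p, 3), round(r, 3))  # 0.8 0.889
```

Evaluating the partially trained cascade on the same test set after each added stage and plotting these two numbers shows whether another stage helps or starts hurting recall.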
2016-04-06 04:49:30 -0600 | commented answer | About opencv_traincascade.... Forgot to ask: by default the boosting algorithm is GAB. Which one gives better results? I believe they chose GAB as the default because it is the one that consumes the least RAM? The computer I'm using has 32 GB of RAM plus another 32 GB of swap, so should I go with RAB? |
2016-04-06 04:30:28 -0600 | commented answer | About opencv_traincascade.... I've started reading the book, but in the meantime I'd like to ask a few things so I can leave it training while I read. I read in one of your answers that it's better to have 50 good positive images than to take one and generate 50 with opencv_createsamples. Therefore, since I can take 50 images of my object from different angles, would using them as positives be better? The other thing is the negatives. As I want to detect the objects in a controlled environment (e.g. my office), I can do a 'random' walk gathering images without the object, right? I also read that I should aim for a NEG count : acceptanceRatio around 0.0004 to consider it a good cascade, and that ~5.3557e-05 would mean it is overtrained? |
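The acceptanceRatio figures discussed above are easier to interpret through the compounding of the per-stage targets: a cascade with `numStages` stages, per-stage `maxFalseAlarmRate` f and `minHitRate` h aims for roughly `f**numStages` overall false alarm rate and `h**numStages` overall hit rate, and the printed acceptanceRatio approximates the former on the negative pool. A sketch with opencv_traincascade's default values (0.995 and 0.5, which are documented defaults; the 20-stage count is an assumed example):

```python
# Sketch: overall targets implied by the per-stage training parameters.
# These are design targets, not measured real-world performance.

def overall_rates(num_stages, min_hit_rate=0.995, max_false_alarm=0.5):
    """Return (overall_hit_rate, overall_false_alarm_rate) targets."""
    return min_hit_rate ** num_stages, max_false_alarm ** num_stages

hit, fa = overall_rates(20)
print(round(hit, 3))  # ~0.905: overall hit-rate target after 20 stages
print(fa)             # ~9.5e-07: overall false-alarm target (0.5 ** 20)
```

This also explains the over-training heuristic: if the measured acceptanceRatio drops far below the intended overall false alarm rate for the stage count reached, later stages are fitting the negative pool rather than learning anything general.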
2016-04-04 11:25:09 -0600 | received badge | ● Student (source) |
2016-04-04 07:20:55 -0600 | asked a question | About opencv_traincascade.... Hello, I am currently trying to train my own cascade based on Naotoshi Seo's tutorial and Coding Robin's tutorial. I still have a couple of questions that I haven't found answers to. I will have some objects to detect in a controlled scenario (it will always be in the same room and the objects will not vary). Therefore, my idea is to grab my camera and save frames of the room for a certain amount of time with NONE of the objects present, to gather the negative images. Then I would put the objects I want to detect on a turning table (one at a time, of course...), set the camera on a tripod and, for different heights, choose a ROI surrounding the object; by choosing when to start and stop saving images while the object rotates, I would have several views of the same object from different angles. I can also get the X, Y position plus the size of the bounding box, and easily save a file with the path, the number of objects in the scene and these four parameters to create the .vec file. My questions are:
I'd like to test this because, as a first approach, I took a picture of an object with my phone, cropped it and removed the background (50x50 greyscale image), and used opencv_createsamples plus the negatives that I took as described before (saved as greyscale 100x100). Then, to get my positive samples for training, I ran: where 1690 is the number of negative images that I captured. Then I created the vec file with: And started training with: When this finished, I tried the detector and I got a LOT of false positives, even when the object was not in the scene. So here are some more questions.
I would like to test both approaches to see which gives the best results... or, based on your experience, which one should I follow? Thanks for the help. EDIT 1: This is the code I use for detections: (more) |
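The "path, number of objects, plus these four parameters" file the question plans to save is the positives info file that opencv_createsamples reads: one line per image in the form `path count x y w h [x y w h ...]`. A sketch of generating it (the image paths and bounding boxes are made-up examples):

```python
# Sketch: build the opencv_createsamples "info" file from per-image
# annotations of (path, list of bounding boxes).

annotations = [
    ("img/obj_001.jpg", [(140, 100, 45, 45)]),
    ("img/obj_002.jpg", [(120, 80, 50, 50)]),
]

lines = []
for path, boxes in annotations:
    parts = [path, str(len(boxes))]
    for x, y, w, h in boxes:
        parts += [str(x), str(y), str(w), str(h)]
    lines.append(" ".join(parts))

info_text = "\n".join(lines)
print(info_text)
# img/obj_001.jpg 1 140 100 45 45
# img/obj_002.jpg 1 120 80 50 50
```

Writing `info_text` to e.g. `positives.info` gives the file that `opencv_createsamples -info positives.info -vec positives.vec ...` consumes to build the .vec file.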
2016-03-29 07:25:30 -0600 | marked best answer | Autotools with opencv (undefined reference to cv::meanShift) Hello, I'm using autotools to build a library that incorporates several custom OpenCV-based functions that I will use in another project. So first I build this library with the following structure: configure.ac: Makefile.am: src/Makefile.am I also checked where opencv.pc is and that it is in PKG_CONFIG_PATH. With these, I run make and make install with no errors. So far so good, but when I build a simple project that includes this dpf_template.so (through the .pc file) I get only one error: Shouldn't I have been prompted with something when I built the libdpf_template? Thanks for the help. |
2016-03-25 06:39:18 -0600 | commented question | Autotools with opencv (undefined reference to cv::meanShift) Solved it by adding AM_LDFLAGS = `pkg-config --libs opencv` to the Makefile.am |
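For reference, the fix above corresponds to a src/Makefile.am along these lines. This is only a sketch: the library name `dpf_template` comes from the question, while the source file name and the cflags line are assumptions.

```makefile
# src/Makefile.am (sketch): link the shared library against OpenCV so that
# symbols like cv::meanShift are resolved at library build time rather than
# surfacing as "undefined reference" in downstream projects.
lib_LTLIBRARIES = libdpf_template.la
libdpf_template_la_SOURCES = dpf_template.cpp

AM_CPPFLAGS = `pkg-config --cflags opencv`
AM_LDFLAGS  = `pkg-config --libs opencv`
```

Note that without the LDFLAGS line the library itself still builds (unresolved symbols are allowed in shared objects), which is why the error only appeared when linking the final project.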