2020-09-22 01:59:37 -0600 | received badge | ● Student (source) |
2015-07-18 02:39:05 -0600 | commented answer | Traincascade is stuck over 3 weeks I haven't dug into the C++, and I'm not sure how you would use the classifier halfway through training. If I ask it to do 20 iterations, it only spits out a cascade.xml file at the end. How can I make it emit cascade.xml files earlier? Or is there a way to string the stage.xml files together that would give me the same result? |
2015-07-17 12:11:12 -0600 | asked a question | opencv_traincascade: what resizing is done by opencv? I am training a cascade to detect objects. Some of my samples do not have the same width-to-height ratio as the window I am planning to use for training/detection. I assume that OpenCV does some sort of resizing to the images when I feed them in. Does anyone know what type of resizing the algorithm performs? |
2015-07-16 10:29:30 -0600 | asked a question | opencv_traincascade preprocessing I have seen folks apply equalizeHist to gray, the image being tested during the detection phase of Viola-Jones. Are people also applying equalizeHist to the positive and negative examples during training? I wasn't doing it, and I'm wondering if that's what's getting me poor results (amongst other things, I'm sure). Also, is this recommended in general? When I don't apply it, the image looks more understandable to the human eye, but the algorithm may still prefer the extra contrast. Below is an example of what a training example looks like after equalization: a lot of the patterns in the central area of the image are no longer visible to the human eye. I'm basically wondering whether information can be lost during equalization. |
2015-07-16 10:19:40 -0600 | received badge | ● Scholar (source) |
2015-07-16 07:23:02 -0600 | commented answer | opencv_traincascade detection phase: obtain confidence of each detection window Yep, it looks like OpenCV 2.4 just leaves the vectors empty. :( The sad truth is I don't know much C++ and I don't think I have the time in this project to figure it out. :( Does the Python API in OpenCV 3.0 expose this, or should I not bother installing it? |
2015-07-16 07:00:57 -0600 | commented answer | opencv_traincascade detection phase: obtain confidence of each detection window The link is broken? Sorry, it worked (and works) fine on my end. The version I'm using is 2.4, and the function I include in my question is in the docs, but I'm not sure what I should pass in for the required parameters rejectLevels and levelWeights. Also, what do you mean that groupRectangles is better? I don't know if it's relevant for my purpose. My goal is a recall-precision curve, so I want to accept/reject more or fewer rectangles depending on some threshold; I was hoping the algorithm's confidence level would be the threshold I could vary. |
2015-07-16 06:57:23 -0600 | commented question | opencv_traincascade detection phase: obtain confidence of each detection window Thanks for the comments. What do you mean by 'there were some numbers' but you 'couldn't use them'? Do you mean that confidence levels were provided but it wasn't clear what they meant, so you chose not to use them? Also, this functionality was added in 3.0 then, right? |
2015-07-16 06:42:59 -0600 | commented question | opencv_traincascade detection phase: obtain confidence of each detection window Help on CascadeClassifier object: class CascadeClassifier(__builtin__.object); Methods defined here: ... |
2015-07-16 06:24:18 -0600 | asked a question | opencv_traincascade detection phase: obtain confidence of each detection window Has anyone created a recall-precision curve for traincascade? I am thinking of doing this at the detection stage. Looking at the docs, OpenCV provides this method: My questions are: |
2015-07-16 06:16:02 -0600 | commented answer | Return confidence factor from detectMultiScale 1.5 years later: was this solved? Is there a way to get confidence levels from Python? :) |
2015-07-16 06:00:19 -0600 | asked a question | opencv_traincascade: obtaining number of features per stage I'm using opencv_traincascade: How can I check how many features are being learned at each stage of the training cascade? |
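One way to answer this after the fact, assuming the new-format cascade.xml that opencv_traincascade writes (each stage's weakClassifiers node holds one child per weak classifier, and with the default stump-based boosting each weak classifier uses one feature):

```python
import xml.etree.ElementTree as ET

def features_per_stage(cascade_xml_path):
    """Count the weak classifiers in each stage of a cascade.xml
    written by opencv_traincascade. With default depth-1 (stump)
    weak learners, each weak classifier evaluates one feature."""
    root = ET.parse(cascade_xml_path).getroot()   # <opencv_storage>
    stages = root.find("./cascade/stages")
    return [len(stage.find("weakClassifiers")) for stage in stages]
```

During training itself, the per-stage table that opencv_traincascade prints (N | HR | FA) appears, as far as I can tell, to grow one row per weak classifier added, which gives the same count live.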
2015-07-11 05:33:09 -0600 | received badge | ● Enthusiast |
2015-07-10 14:13:06 -0600 | commented question | Normalizing images for opencv_traincascade I thought about this a little longer, and it doesn't seem possible to normalize the test data according to the training data. Let's suppose I do normalize all horizontal roof training patches and store the normalization factors (the mean and the standard deviation). My test example will be a large image, not just a small patch. If I want to apply the transformation I learnt from the training-set patches to the test example, I can't: the transformation only makes sense for small patches, not for the entire image in which I am searching for a roof. So, what to do about this? Does opencv_traincascade already take care of subtracting the mean and dividing by the standard deviation of the training and testing sets? |
2015-07-10 13:24:05 -0600 | commented question | Normalizing images for opencv_traincascade Thanks! I will definitely look into that. I have actually trained 3 different detectors with Viola-Jones (one for roofs aligned diagonally, one for vertical and one for horizontally aligned ones). They give me a lot of false positives, but miss almost no roofs. I then feed these detections to a neural network, which classifies them as a non-roof or as one of two other roof types. If I want to continue with this approach, how do you recommend I handle the normalization? Should I be normalizing at all? Similar to what I did with the detectors, I'm thinking I could normalize the horizontal, diagonal and vertical roofs separately. Does traincascade prefer something else? |
2015-07-10 05:49:16 -0600 | asked a question | Normalizing images for opencv_traincascade I have a set of satellite images with roofs in them. I am using opencv_traincascade to train a Viola-Jones detector for roofs. Because I want to apply transformations (rotations, flips, etc.) to the original roofs, I have cut the roofs out of the original images (with 10 pixels of padding) and am using those patches as my positive examples. This is in contrast to using the entire image (with multiple roofs in it) and telling OpenCV where the roofs are located. I'm wondering what the right way is to normalize the images: |
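On the train/test mismatch raised in the comments above: as far as I understand, OpenCV's cascade detector already variance-normalizes each candidate window internally (via integral images) during both training and detection, which is exactly a per-patch scheme. If extra normalization is wanted, e.g. for a downstream classifier, a per-patch standardization needs no training-set statistics at all (numpy sketch, names illustrative):

```python
import numpy as np

def standardize_patch(patch):
    """Zero-mean, unit-variance normalization computed per patch.
    Because the statistics come from the patch itself, the same
    operation applies unchanged to any test window, however large
    the image it was cut from."""
    p = patch.astype(np.float32)
    std = p.std()
    return (p - p.mean()) / (std if std > 0 else 1.0)

normalized = standardize_patch(np.arange(16, dtype=np.uint8).reshape(4, 4))
```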
2015-06-25 04:23:08 -0600 | commented question | Detections too small with OpenCV Cascade classifier Thanks for your suggestion! I added some of the images used in training. Does your first suggestion still apply? The issue with setting the minSize to avoid small false positives is that I do have a few roofs that are very small and need to be detected. I guess I could have multiple detectors: one for smaller roofs and one for larger roofs, but that seems rather messy. |
2015-06-25 04:20:29 -0600 | received badge | ● Editor (source) |
2015-06-25 03:32:20 -0600 | asked a question | Detections too small with OpenCV Cascade classifier Goal: I am trying to detect roofs in aerial images using the Viola-Jones implementation provided by OpenCV. I have trained a classifier with the following parameters: The problem: Roofs are being recognized, but if the roof is large, the detection window is drawn around a part of the roof rather than around the entire roof. The image shows detections in red, green and blue. Green corresponds to the ground truth: notice how the green rectangle covers the whole roof, whereas all the other rectangles do not. Training images: Below are 3 examples of the training images I have used: What I have tried:
Question: I'm wondering if anyone knows which parameters I can modify to ensure larger roofs are detected as a single roof rather than as small patches. |
2015-06-23 14:57:49 -0600 | commented answer | How does the parameter scaleFactor in detectMultiScale affect face detection? Thanks for the explanation. If the scaleFactor is small, does the algorithm still go through the same number of scalings as when the scaleFactor is large? Or does it adapt, so that by the last iteration it has shrunk the image down as much as the larger scaleFactor would have? If the number of scalings stays the same, that would imply that with a small scaleFactor the algorithm does not shrink the image as much. Is that correct? |
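To make the trade-off concrete, here is a back-of-the-envelope count of pyramid levels, under the assumption (matching my reading of the OpenCV implementation) that the factor is applied repeatedly until the down-scaled image no longer fits the training window:

```python
import math

def num_scales(image_side, window_side, scale_factor):
    """Pyramid levels available to detectMultiScale: the factor is
    applied until the image is smaller than the detection window."""
    return int(math.log(image_side / window_side, scale_factor)) + 1

# Both settings reach the same smallest scale; the smaller factor
# simply steps through the range more finely, so it yields MORE
# levels, not fewer.
fine = num_scales(1024, 24, 1.05)
coarse = num_scales(1024, 24, 1.30)
```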
2015-06-23 13:50:08 -0600 | commented answer | opencv mergevec haartraining issues Just to make sure I understand: are you advising against data augmentation? I'm working on detecting roofs in aerial images, and there is quite a bit of variation. I don't have a lot of annotated data (around 300 examples), so I was planning to augment it by flipping, rotating and changing the contrast. Do you think this is a bad idea? I expect to see this sort of variation in the testing data. Also, if augmentation is advisable, is there a good way to merge .vec files? |
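On merging .vec files: the container layout isn't formally documented, but community mergevec scripts assume a 12-byte header (int32 sample count, int32 sample size, two int16s) followed by one reserved byte plus sample-size int16 pixels per sample. A sketch under that assumption; verify against your OpenCV version before relying on it:

```python
import struct

def merge_vec_files(paths, out_path):
    """Concatenate .vec sample files, summing the sample counts.
    All inputs must have been created with the same -w/-h (i.e. the
    same sample size), otherwise the merged file is meaningless."""
    total, sample_size, tail, bodies = 0, None, b"", []
    for path in paths:
        with open(path, "rb") as f:
            count, size = struct.unpack("<ii", f.read(8))
            tail = f.read(4)            # the two int16 header fields
            if sample_size is None:
                sample_size = size
            elif size != sample_size:
                raise ValueError("sample size mismatch: %d vs %d"
                                 % (size, sample_size))
            total += count
            bodies.append(f.read())     # raw sample data, unchanged
    with open(out_path, "wb") as f:
        f.write(struct.pack("<ii", total, sample_size) + tail)
        for body in bodies:
            f.write(body)
```

After merging, pass a -numPos somewhat below the merged count to opencv_traincascade, since later stages consume extra positives.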