
angela's profile - activity

2020-09-22 01:59:37 -0600 received badge  Student (source)
2015-07-18 02:39:05 -0600 commented answer Traincascade is stuck over 3 weeks

I haven't dug into the C++, and I'm not sure how you would use the classifier halfway through training. If I ask it to do 20 iterations, it will only spit out a cascade.xml file at the end. How can I make it spit out cascade.xml files earlier? Or is there a way to string the stage.xml files together that will give me the same result?

2015-07-17 12:11:12 -0600 asked a question opencv_traincascade: what resizing is done by opencv?

I am training a cascade to detect objects. Some of the samples I have do not have the same width-to-height ratio as the window I am planning to use for training/detection.

I assume that OpenCV does some sort of resizing to the images when I feed them in. Does anyone know what type of resizing the algorithm performs?

2015-07-16 10:29:30 -0600 asked a question opencv_traincascade preprocessing

I have seen folks use equalizeHist as follows:

gray = cv2.equalizeHist(gray) 
detections = cascade.detectMultiScale(gray)

where gray is the image being tested during the detection phase of Viola-Jones. Are people also applying equalizeHist to the positive and negative image examples during training? I wasn't doing it, and I'm wondering if that's what's giving me poor results (amongst other things, I'm sure).

Also, is this recommended in general? When I don't apply it, the image seems more understandable to the human eye. However, the algorithm may still prefer having more contrast. Below is an example of what my training example looks like after equalization. A lot of the patterns in the central area of the image are no longer visible to the human eye. I'm basically wondering if information can be lost during equalization.


2015-07-16 10:19:40 -0600 received badge  Scholar (source)
2015-07-16 07:23:02 -0600 commented answer opencv_traincascade detection phase: obtain confidence of each detection window

Yep, it looks like OpenCV 2.4 will just leave the vectors empty. :(

Sad truth is I don't know much C++ and I don't think I have the time for this project to figure it out. :(

Do the OpenCV 3.0 Python bindings work, or should I not bother installing it?

2015-07-16 07:00:57 -0600 commented answer opencv_traincascade detection phase: obtain confidence of each detection window

The link is broken? Sorry, it worked (and works) fine on my end.

The version I'm using is 2.4, and the function I include in my question is in the docs. But I'm not sure what I should pass in for the required parameters rejectLevels and levelWeights.

Also, what do you mean that groupRectangles is better? I don't know if it's relevant for my purpose. My goal is to have a Recall Precision curve, and therefore I want to be able to accept/reject more or less rectangles depending on some threshold. I was hoping the confidence level of the algorithm would be the threshold I could change.

2015-07-16 06:57:23 -0600 commented question opencv_traincascade detection phase: obtain confidence of each detection window

Thanks for the comments. What do you mean by 'there were some numbers' but you 'couldn't use them'? Do you mean that confidence levels were provided, but it wasn't clear what they meant, so you chose not to use them? Also, this functionality was added in 3.0 then, right?

2015-07-16 06:42:59 -0600 commented question opencv_traincascade detection phase: obtain confidence of each detection window

Help on CascadeClassifier object:

class CascadeClassifier(__builtin__.object)
 |  Methods defined here:
 |
 |  __repr__(...)
 |      x.__repr__() <==> repr(x)
 |
 |  detectMultiScale(...)
 |      detectMultiScale(image[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize]]]]]) -> objects
 |      or
 |      detectMultiScale(image, rejectLevels, levelWeights[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize[, outputRejectLevels]]]]]]) -> objects
 |
 |  empty(...)
 |      empty() -> retval
 |
 |  load(...)
 |      load(filename) -> retval
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __new__ = <built-in method __new__ of type object>
 |      T.__new__(S, ...) -> a new object with type S, a subtype of T

2015-07-16 06:24:18 -0600 asked a question opencv_traincascade detection phase: obtain confidence of each detection window

Has anyone created a recall-precision curve for traincascade? I am thinking of doing this at the detection stage:

Looking at the docs, OpenCV provides this method:

Python: cv2.CascadeClassifier.detectMultiScale(image, rejectLevels, levelWeights[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize[, outputRejectLevels]]]]]]) → objects

My questions are:

  • What 'objects' are being returned by this method?
  • Assuming rejectLevels refers to the confidence level for each detection window, why do I need to pass it in as a parameter?
  • What do I need to pass in for rejectLevels and levelWeights?
  • What is outputRejectLevels? Is it related to how certain the cascade should be to accept a detection?
2015-07-16 06:16:02 -0600 commented answer Return confidence factor from detectMultiScale

1.5 years later: was this solved? Is there a way to get confidence levels from Python? :)

2015-07-16 06:00:19 -0600 asked a question opencv_traincascade: obtaining number of features per stage

I'm using opencv_traincascade:

How can I check how many features are being learned at each stage of the training cascade?
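In case it helps anyone, here is a stdlib-only sketch of counting weak classifiers per stage by parsing the trained cascade.xml. The fragment below is synthetic and only mimics the layout of the new cascade format as I understand it, so check it against a real file:

```python
import xml.etree.ElementTree as ET

# Count the <_> entries under each stage's <weakClassifiers> node,
# which (in the new cascade format) correspond to learned features.
def features_per_stage(xml_text):
    counts = []
    root = ET.fromstring(xml_text)
    for weak in root.iter("weakClassifiers"):
        counts.append(len(weak.findall("_")))
    return counts

# Tiny synthetic fragment mimicking the cascade layout (not a real model).
sample = """<opencv_storage><cascade><stages>
  <_><weakClassifiers><_/><_/><_/></weakClassifiers></_>
  <_><weakClassifiers><_/><_/></weakClassifiers></_>
</stages></cascade></opencv_storage>"""
print(features_per_stage(sample))  # [3, 2]
```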

2015-07-11 05:33:09 -0600 received badge  Enthusiast
2015-07-10 14:13:06 -0600 commented question Normalizing images for opencv_traincascade

I thought about this a little longer, and it doesn't seem possible to normalize the test data according to the training data. Let's suppose I do normalize all horizontal roof training patches, and store the normalization factors (the mean and the standard deviation). My test example will be a large image, not just a small patch. If I want to apply the transformation I learnt from the training set patches to the test example I can't do it -- the transformation makes sense only for small patches, but not for the entire image in which I am searching for a roof.

So, what to do about this? Does opencv_traincascade already take care of subtracting the mean and dividing by the std. dev. of the training and testing sets?
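My current understanding (please correct me if wrong) is that the Viola-Jones detector normalizes each candidate window on the fly, subtracting the window mean and dividing by the window standard deviation, so whole-image normalization shouldn't be needed. A minimal numpy sketch of that per-window step:

```python
import numpy as np

def normalize_window(window):
    # Per-window variance normalization: zero mean, unit std. dev.
    mean, std = window.mean(), window.std()
    if std == 0:
        return window - mean            # flat window: nothing to scale
    return (window - mean) / std

w = np.array([[100., 110.], [120., 130.]])
n = normalize_window(w)
print(round(n.mean(), 6), round(abs(n.std()), 6))
```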

2015-07-10 13:24:05 -0600 commented question Normalizing images for opencv_traincascade

Thanks! I will definitely look into that.

I have actually trained 3 different Viola-Jones detectors (one for roofs that are aligned diagonally, one for vertically aligned and one for horizontally aligned roofs). They give me a lot of false positives, but miss almost no roofs. I then feed these detections to a neural network that classifies them as either non-roof or one of two roof types. If I want to continue with this approach, how do you recommend I handle the normalization? Should I be normalizing at all? Similar to what I did with the detectors, I'm thinking I could normalize the horizontal, diagonal and vertical roofs separately. Does traincascade prefer something else?

2015-07-10 05:49:16 -0600 asked a question Normalizing images for opencv_traincascade

I have a set of satellite images with roofs in them. I am using opencv_traincascade to train a Viola-Jones detector for roofs.

Because I want to apply transformations (rotations, flips, etc.) to the original roofs, I have cut the roofs out of the original images (with 10 pixels of padding) and am using those patches as my positive examples. This is in contrast to using the entire image (with multiple roofs in it) and then telling opencv where the roofs are located in the image.

I'm wondering what the right way is to normalize the images:

  • Is it enough to simply divide the pixels of my roof patches by 255 or does the algorithm perform better if I do something more complicated?
  • When I perform testing on a held out test set, I assume I will also want to divide the test satellite images by 255, correct?
2015-06-25 04:23:08 -0600 commented question Detections too small with OpenCV Cascade classifier

Thanks for your suggestion! I added some of the images used in training. Does your first suggestion still apply?

The issue with setting the minSize to avoid small false positives is that I do have a few roofs that are very small and need to be detected. I guess I could have multiple detectors: one for smaller roofs and one for larger roofs, but that seems rather messy.

2015-06-25 04:20:29 -0600 received badge  Editor (source)
2015-06-25 03:32:20 -0600 asked a question Detections too small with OpenCV Cascade classifier

Goal

I am trying to detect roofs in aerial images using the Viola-Jones implementation provided by OpenCV. I have trained a classifier with the following parameters:

opencv_traincascade -vec roof_samples.vec -bg bg.txt -numStages 20
-minHitRate 0.99999 -numPos 393 -numNeg 700 -w 25 -h 25

The problem

Roofs are being recognized, but if the roof is large, the detection window is drawn around a part of the roof rather than around the entire roof. The image shows detections in red, blue and white, with the ground truth in green - notice how the green rectangle covers the whole roof, whereas the detection rectangles do not.

Detections are in red, blue, white. Green is ground truth.

Training images:

Below are 3 examples of training images I have used:

(images: Training_1, Training_3, and a third example)

What I have tried

  • I tried altering the scale parameter, but it didn't help
  • I also modified -w and -h: I increased them, and the training phase is currently running, but as expected, it is extremely slow. I don't expect this change to help either.

Question:

I'm wondering if anyone knows which parameters I can modify to ensure larger roofs are detected as a single roof rather than being detected in small patches.

2015-06-23 14:57:49 -0600 commented answer How does the parameter scaleFactor in detectMultiScale affect face detection?

Thanks for the explanation.

If the scaleFactor is small, does the algorithm still only go through the same number of scalings as when the scaleFactor is large? Or does it adapt to ensure that it shrinks the image down as much as the larger scaleFactor in the last iteration? If the number of scalings remains the same, it would imply that, if the scaleFactor is small, the algorithm does not shrink the image as much. Is that correct?
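To make the question concrete, here is how I picture the scaling loop, assuming the detector keeps dividing the image size by scaleFactor until it is smaller than the training window (my own sketch, not OpenCV's actual code). Under that assumption, a smaller scaleFactor means more iterations, not the same number with less total shrinkage:

```python
def num_scales(img_size, window_size, scale_factor):
    # Count how many times the image can be shrunk by scale_factor
    # before it becomes smaller than the detection window.
    n, size = 0, float(img_size)
    while size >= window_size:
        n += 1
        size /= scale_factor
    return n

# A smaller scaleFactor yields a finer, longer scale pyramid.
print(num_scales(640, 25, 1.1), num_scales(640, 25, 1.5))
```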

2015-06-23 13:50:08 -0600 commented answer opencv mergevec haartraining issues

Just to make sure I understand: are you advising against data augmentation? I'm working on detecting roofs as seen on aerial images and there is quite a bit of variation. I don't have a lot of annotated data (around 300 examples). I was planning to augment the data by: flipping it, rotating it and changing the contrast. Do you think this is a bad idea? I expect to see this sort of variation in the testing data. Also, if this is advisable, is there a good way to merge .vec files?