asaakius's profile - activity

2015-01-18 08:32:25 -0600 commented answer OpenCV and Latent SVM Detector

What are the existing vehicle models available in OpenCV? I'm looking for cars, trucks, and buses. And is it still the case that OpenCV doesn't support training our own models?

2014-03-19 10:27:24 -0600 asked a question Createsamples creating no samples

I'm using this tutorial, and I'm at the stage of creating lots of samples from my positive images. I'm on Windows.

This is the command (using absolute paths!):

C:\opencv_built\bin\Release\opencv_createsamples.exe -bgcolor 0 -bgthresh 0 -maxxangle 1.1\   -maxyangle 1.1 maxzangle 0.5 -maxidev 40 -w 80 -h 40 -img C:\work_asaaki\code\opencv-haar-classifier-training-master\positive_images\60inclination_315azimuth.jpg -bg tmp -vec samples\60inclination_315azimuth.jpg.vec -num 62

And this is the kind of output I get:

Info file name: (NULL)
Img file name: 60inclination_315azimuth.jpg
Vec file name: samples0inclination_315azimuth.jpg.vec
BG  file name: tmp
Num: 62
BG color: 0
BG threshold: 0
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 80
Height: 40
Create training samples from single image applying distortions...
Done

But when I check the samples folder, there are none! What am I doing wrong?
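For what it's worth, building the argument list programmatically can make this kind of slip easier to spot: in the command above, maxzangle is missing its leading dash, and in the echoed output the Vec file name has lost its directory separator — whether either of these explains the empty folder is only a guess. A minimal Python sketch, mirroring the paths from the question:

```python
# Each option is its own list element, so a dropped dash or a mangled
# backslash path stands out. Paths mirror the command in the question.
args = [
    r"C:\opencv_built\bin\Release\opencv_createsamples.exe",
    "-bgcolor", "0",
    "-bgthresh", "0",
    "-maxxangle", "1.1",
    "-maxyangle", "1.1",
    "-maxzangle", "0.5",   # note the leading dash, missing in the original command
    "-maxidev", "40",
    "-w", "80",
    "-h", "40",
    "-img", r"C:\work_asaaki\code\opencv-haar-classifier-training-master\positive_images\60inclination_315azimuth.jpg",
    "-bg", "tmp",
    "-vec", r"samples\60inclination_315azimuth.jpg.vec",
    "-num", "62",
]

# The output directory for the .vec file must already exist before the run:
# os.makedirs("samples", exist_ok=True)
# subprocess.run(args, check=True) would then launch the tool.
```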

2014-01-23 08:21:56 -0600 asked a question Using the -w and -h parameters of the createsamples utility for cascaded training

So I've come across lots of tutorials about OpenCV's haartraining and cascade training tools. In particular, I'm interested in training a car classifier using the createsamples tool, but there seem to be conflicting statements all over the place about the -w and -h parameters, so I'm confused. I'm referring to the command:

$ createsamples -info samples.dat -vec samples.vec -w 20 -h 20

I have the following three questions:

  • I understand that the aspect ratio of the positive samples should match the aspect ratio implied by the -w and -h parameters above. But do ALL of the positive samples also have to be the same size? E.g. I have close to 1000 images — do all of them have to be the same size after cropping?

  • If it is not the size but the aspect ratio that matters, then how precisely must the aspect ratio of the positive samples match the -w and -h parameters given to the OpenCV tools? I mean, is the classifier so sensitive that even a few pixels off here and there would affect its performance? Or would you say it's safe to work with images as long as they're all approximately the same ratio by eye?

  • I have already cropped several images to the same size. But in making them all the same size, some of them include a bit more background in the bounding box than others, and some have slightly different margins. (For example, see the two images below: the bigger car takes up more of the image, while there's a wider margin around the smaller car.) I'm wondering whether a collection of images like this is fine, or whether it will lower the accuracy of the classifier, in which case I should ensure tighter bounding boxes around all objects of interest (in this case, cars).

[two example images: a larger car filling most of the frame, and a smaller car with a wider margin around it]
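On the second bullet: as far as I understand, createsamples rescales every annotated region to -w by -h anyway, so it is the aspect ratio rather than the exact pixel size that determines how much the objects get distorted. A small helper for sanity-checking crops — the 5% tolerance here is an arbitrary assumption, not an OpenCV recommendation:

```python
def aspect_matches(width, height, w_param, h_param, tol=0.05):
    """Return True if width:height is within `tol` (relative) of w_param:h_param."""
    target = w_param / h_param
    actual = width / height
    return abs(actual - target) / target <= tol

# e.g. crops intended for -w 20 -h 20 (target ratio 1:1)
assert aspect_matches(100, 100, 20, 20)      # exact match
assert aspect_matches(102, 100, 20, 20)      # 2% off -> within tolerance
assert not aspect_matches(120, 100, 20, 20)  # 20% off -> flagged
```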

2013-11-11 23:48:08 -0600 asked a question About image backgrounds while preparing training dataset for cascaded classifier

I have a question about preparing the dataset of positive samples for a cascaded classifier that will be used for object detection.

As positive samples, I have been given 3 sets of images:

  1. a set of colored images in full size (about 1200x600) with a white background and with the object displayed at a different angle in each image
  2. another set with the same images in grayscale and with a white background, scaled down to the detection window size (60x60)
  3. another set with the same images in grayscale and with a black background, scaled down to the detection window size (60x60)

My question is: in set 1, should the background really be white? Should it not instead be an environment the object is likely to appear in within the testing dataset? Or should I have a fourth set in which the images are in their natural environments? How does the environment figure into the training samples?
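Should natural backgrounds turn out to be preferable, here is a rough sketch of pasting a white-background object onto scene pixels — plain Python over nested grayscale lists to stay dependency-free; the threshold of 250 for "white" is an assumption:

```python
WHITE_THRESH = 250  # grayscale values at or above this are treated as background

def composite(obj, scene, thresh=WHITE_THRESH):
    """Replace near-white pixels of `obj` with the corresponding `scene` pixels.
    Both arguments are same-sized 2D lists of grayscale values (0-255)."""
    return [
        [s if o >= thresh else o for o, s in zip(obj_row, scene_row)]
        for obj_row, scene_row in zip(obj, scene)
    ]

obj   = [[255, 10], [255, 255]]   # one object pixel (10), the rest white background
scene = [[70, 80], [90, 100]]
assert composite(obj, scene) == [[70, 10], [90, 100]]
```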

2013-11-11 09:17:42 -0600 asked a question Training a cascade classifier with image labels also as features

Does OpenCV's cascade classifier allow training not only on features that the classifier itself extracts from the image, but also on the tags or annotations attached to each image? For example, I have around 600 images to use as positive samples to train the classifier, but I need to annotate each image with a short vector of additional features, such as ("aerial view", "planar view", "city background", or "landscape background").

Is it possible to build a classifier that combines dynamically discovered features and annotations?
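As far as I know, opencv_traincascade itself has no way to consume such tags, but in a custom pipeline the usual trick is to concatenate a one-hot encoding of the annotations onto the extracted feature vector — a minimal sketch, with a made-up vocabulary taken from the tags above:

```python
# Hypothetical annotation vocabulary -- the tags come from the question above.
TAGS = ["aerial view", "planar view", "city background", "landscape background"]

def one_hot(tags):
    """Encode a list of tags as a 0/1 vector over the fixed vocabulary."""
    return [1 if t in tags else 0 for t in TAGS]

def combined_features(image_features, tags):
    """Append the annotation encoding to the extracted feature vector."""
    return list(image_features) + one_hot(tags)

feats = combined_features([0.3, 0.7], ["aerial view", "city background"])
assert feats == [0.3, 0.7, 1, 0, 1, 0]
```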

2013-11-11 09:01:10 -0600 received badge  Supporter (source)
2013-11-11 09:00:54 -0600 commented answer Time to train a cascaded classifier?

Oh yes, I should have clarified - I was referring to the images used as training samples for the classifier, not for testing and final detection. The reason I ask is that, in my understanding, training on large images takes longer. So I was wondering about 480x320 as the resolution for training samples - is that unnecessarily large, given that the objects I will be detecting in the end are unlikely to be that big?

2013-11-11 08:23:43 -0600 commented answer Time to train a cascaded classifier?

Thanks very much! I just noticed something else: you understood my 60x60 resolution requirement as being related to my sliding detection window, which is correct. But is my image resolution reasonable? Is 480x320 reasonable or unnecessarily large? The final testing data won't have objects as big as 480x320, although the images and videos themselves may be large.

2013-11-11 07:35:39 -0600 commented answer Time to train a cascaded classifier?

Thank goodness, I am already on a 64-bit architecture. Being able to train 6 classifiers in parallel is some consolation - but approximately how long would the training take for one classifier? Say I only have 6 to train: roughly what time range should I expect for a single training session?

2013-11-11 04:25:59 -0600 received badge  Editor (source)
2013-11-11 02:58:18 -0600 asked a question Time to train a cascaded classifier?

I have about 600 positive images and 1500 negative images from which I need to train a cascaded GentleBoost classifier (using very simple decision stumps as weak classifiers). About two-thirds of my images are 60x60, but the big ones are 480x320.

I'm going to need to train approximately 500 different classifiers and if possible would like to be doing parallel training.

I haven't started the training yet, but I'm really worried about how much time it might take. Should I go for another implementation (like MATLAB's)?

I'm on a workstation with 16GB RAM.

Help!
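On the parallel-training point, a hedged sketch of farming the runs out to a pool — train_classifier here is a dummy placeholder, not an OpenCV call, and for genuinely CPU-bound cascade training you would normally use separate processes rather than threads:

```python
from concurrent.futures import ThreadPoolExecutor

def train_classifier(task_id):
    """Stand-in for one cascade training run -- a hypothetical placeholder.
    Replace the body with the real training invocation."""
    return f"classifier_{task_id} trained"

# Launch several training sessions concurrently; the worker count of 6
# matches the parallel sessions discussed in the comments above.
with ThreadPoolExecutor(max_workers=6) as pool:
    results = list(pool.map(train_classifier, range(6)))

assert len(results) == 6
```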

2013-11-11 02:39:00 -0600 commented question How to view positives during Haar Cascade classification

Hi - this is completely irrelevant, but if you don't mind: how long did your training take? And how many images did you train it on?