
get images from opencv_createsamples, annotations, png files

asked 2015-04-24 04:39:37 -0600

Tonystark124

I followed the documentation from here and created samples using opencv_createsamples for positive images. I am trying to get these samples as image files rather than storing them in a .vec file.

From the documentation, there's a way to get them as PNG or JPG files, but I am not able to understand a few things:

1. It asks for the background or negative image information. Is it going to use that to rotate and create samples of the positive images? How exactly does the negative image information help here? For example, let's say I want to detect images of cars. Can the background images be of a ship, or should they literally be the images that appear in the cars' background?

2. For creating **.vec** files and samples, the background image information was not strictly required. Does that mean that, even here, for PNG or JPG output, it won't be used to manipulate the positive samples?

3. I have included the background images as objects that can appear in the scene, but not around the car or as its background. Is that right, or does "background" just mean the foreground/background of an image, in image-processing terms?

4. Should annotations.lst be created by me, or will it be generated by the executable?

Please help me. I am really confused here.




-1: Let me add that these parameters have been discussed more than 100 times in this forum ... it should be possible to get an answer to all those questions by simply using the search button at the top of this forum ... I know that the documentation is terribly outdated, and that it is the next ToDo on my OpenCV list, but I don't have the time yet.

StevenPuttemans ( 2015-04-24 05:53:57 -0600 )

2 answers


answered 2015-04-24 05:04:30 -0600

la lluvia

You have to create your own training set (positive and negative/background images). opencv_createsamples prepares your positive images for opencv_traincascade. If you want to detect cars in images, you should crop N images of cars saved as .png or .jpg, make a list of them, and use opencv_createsamples to build the .vec file that opencv_traincascade uses. Negative (background) images are images that don't contain the object you want to detect, in this case a car. They can be anything you want (flowers, an empty road, sky, wood, people...), but it's better if they show the environment where you expect to find your cars (roads, parking places...).
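The workflow above can be sketched as a shell session. Everything here is hypothetical (directory layout, image names, object coordinates), and the opencv_createsamples / opencv_traincascade invocations are shown commented out since they assume the OpenCV apps are installed and on your PATH:

```shell
# Hypothetical layout: cropped car images in demo/pos, backgrounds in demo/neg.
mkdir -p demo/pos demo/neg
touch demo/pos/car1.png demo/pos/car2.png demo/neg/road1.jpg demo/neg/road2.jpg

# Info file for positives: "path num_objects x y w h" per line
# (coordinates are placeholders for real bounding boxes).
for f in demo/pos/*.png; do
  echo "$f 1 0 0 24 24"
done > positives.txt

# Background file for traincascade: one negative image path per line.
ls demo/neg/*.jpg > bg.txt

# Then (requires the OpenCV apps; parameter values are illustrative):
# opencv_createsamples -info positives.txt -vec cars.vec -num 2 -w 24 -h 24
# opencv_traincascade -data model -vec cars.vec -bg bg.txt \
#     -numPos 2 -numNeg 2 -w 24 -h 24

cat positives.txt
```

The point of the two text files is simply to tell the tools where the images are: positives with bounding boxes, negatives as bare paths.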

So to answer your questions:

1. It's not used for creating positive samples; it's used to help traincascade learn what is not a car.

2. Background images are not used to create the .vec file of positive images. Only positive images (images of cars) are needed.

3. I'm not sure what you are trying to say here, but negative images should not contain the object you are trying to detect.

4. You have to create your own positive and negative images, and make a list of each in a .txt file with correct paths.
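To illustrate point 4, both files are plain text lists you write yourself. A minimal sketch of what they might contain (file names and coordinates are made up), including a line with two objects in one image and a quick consistency check:

```shell
# Positives info file: each line is "image num_objects" followed by
# num_objects groups of "x y w h". Here: one image with two cars, one with one.
printf '%s\n' \
  'pos/street.png 2 30 40 50 50 120 45 55 55' \
  'pos/lot.png 1 10 10 80 60' > info.dat

# Negatives list: just file paths, nothing else.
printf '%s\n' 'neg/sky.jpg' 'neg/forest.jpg' > bg.txt

# Sanity check: a valid line has exactly 2 + 4 * num_objects fields.
awk '{ if (NF != 2 + 4 * $2) print "bad line: " $0 }' info.dat
```

If the awk check prints nothing, every annotation line has a consistent field count, which is worth verifying before handing the file to opencv_createsamples.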




Let me add here that I discourage people from using the createsamples rotation and translation parameters. Just use the tool with previously selected original positive training samples. The deformation is unnatural and will reduce the efficiency of your classifier... it also requires you to define a single background color, which is quite difficult in real-life applications ...

StevenPuttemans ( 2015-04-24 05:58:48 -0600 )

answered 2015-04-24 05:00:00 -0600

LorenaGdL

A great tutorial for understanding everything about the createsamples app is here (first Google result, btw): . Be careful though, as it also refers to the old haartraining app.

  1. Supplied background images will be used to paste positive images on top of them. The background can contain anything but the positive objects to detect.
  2. The option to create .vec files does not generate a new dataset from a single/few samples; it only takes a previously created dataset and converts it to the right format. Therefore it does not need any extra background images.
  3. I don't really understand the question. Once again, background images are images containing anything but the object to detect.
  4. When creating a new dataset from a single image, the annotations are generated for you to use later.
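The single-image mode in point 4 might look like the sketch below. The flags are standard opencv_createsamples options, but the file names are hypothetical and the command is commented out since it needs OpenCV installed; the annotation line being parsed is mocked up by hand in the same "image num x y w h" format the tool writes:

```shell
# Generate distorted samples from one positive image pasted over negatives,
# writing the images plus an annotation file (requires OpenCV's apps):
# opencv_createsamples -img car.png -bg bg.txt -info annotations.lst \
#     -num 100 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5

# Mock up one annotation line to show the format the tool produces:
echo '0001_0032_0045_0100_0080.jpg 1 32 45 100 80' > annotations.lst

# Extract the bounding box of the (single) object on the first line:
read -r img n x y w h < annotations.lst
echo "image=$img box=${x},${y} size=${w}x${h}"
```

So in this mode you do not write annotations.lst yourself; the executable generates it, and you only need to supply the cropped positive image and the background list.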


I am sorry, but that tutorial is one of the main reasons why people do not grasp the new interface :D

StevenPuttemans ( 2015-04-24 05:54:58 -0600 )

It is perfectly clear for understanding the createsamples app. I already warned about the training app being different. Anyway, I don't think I answered anything wrong at all.

LorenaGdL ( 2015-04-24 05:56:17 -0600 )

... the "create training samples from one" part is what leads to people getting completely terrible models. Also, the explanation of the positive/negative images before training is plain wrong (just above step 4). On top of that, mergevec is a very weird approach; it is not available in the original OpenCV interface at all. But that is all from experience here: I notice it is very misleading if you do not fully grasp the interface.

StevenPuttemans ( 2015-04-24 06:02:24 -0600 )


Last updated: Apr 24 '15