# Detection of objects on the side

Hello everybody,

The problem I'm having is that objects at the side of the frame are not detected, as seen in the uploaded image. The same problem occurs in the sample application "OpenCV face detection": faces at the edge of the camera frame are not detected.

I think one of these could solve the problem:

1. Train the positive set with a smaller size: opencv_createsamples -w -h.
2. Train a new positive set to detect "half of the object".

But I'm not experienced with OpenCV, so any suggestion on how to fix the problem would be much appreciated!



Okay, let's respond to this; it's kind of my thing here!

1. First of all, please supply your full detection command. The result depends heavily on your model size and the scale range you have defined for detections. Also add the commands you used for cascade classifier sample creation and training. Things can go wrong early on but only become noticeable during processing.
2. Detections not occurring at the edges even though the object is visible means your training data had too much background information around the object, and thus your model depends heavily on it. Avoid this by making bounding boxes that hug the edges of your actual object.
3. The undetected one is due to the fact that the classifier can only find full objects. There is simply not enough information for that one, which makes it a completely different object.

As to the size of your model: keep in mind that the -w and -h parameters define the smallest object size you can find. Therefore it is indeed better to train a smaller model so that smaller instances are found as well. The reason is that upscaling an image introduces artifacts that ruin detection results; therefore the original input image is only ever downscaled, and thus only objects larger than or equal to the model size will be found.
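To illustrate the point above, here is a small sketch (not OpenCV's actual internals) of the scale pyramid: the model window stays fixed while the image is repeatedly shrunk by the scale factor, which is equivalent to searching the original image only for objects at or above the model size. The numbers and method name are my own, chosen for the example.

```java
import java.util.ArrayList;
import java.util.List;

public class ScalePyramid {
    // List the object sizes a w x h model can find in an imgW x imgH image
    // when the image is only ever downscaled by scaleFactor per step.
    static List<int[]> detectableSizes(int w, int h, int imgW, int imgH,
                                       double scaleFactor) {
        List<int[]> sizes = new ArrayList<>();
        double scale = 1.0;
        while (w * scale <= imgW && h * scale <= imgH) {
            sizes.add(new int[] { (int) Math.round(w * scale),
                                  (int) Math.round(h * scale) });
            scale *= scaleFactor;
        }
        return sizes;
    }

    public static void main(String[] args) {
        // A 24x24 model on a 640x480 frame with scaleFactor 1.2:
        List<int[]> sizes = detectableSizes(24, 24, 640, 480, 1.2);
        System.out.println("smallest: " + sizes.get(0)[0] + "x" + sizes.get(0)[1]);
        System.out.println("scale steps: " + sizes.size());
    }
}
```

Note that the smallest entry is always exactly the model size; nothing below 24x24 is ever searched, which is why a smaller -w/-h is the only way to find smaller instances.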

There is a reason why the standard Viola-Jones model is 24x24 pixels. It is a tradeoff between accuracy and the smallest detectable face size.



Hi Steven,

thanks for the response. Could you please clarify the second point, especially how to "avoid this by making bounding boxes"?

Thank you

( 2015-01-27 09:40:54 -0500 )

To train a face detector, you need to specify the face regions in your positive image set. If you define them tightly inside the head sculpture, then your model will not look for face edges. But if you include about 10-15 pixels around the head sculpture, then your model will expect that context to be there as well.

( 2015-01-27 09:55:22 -0500 )

Hi Steven,

Regarding the way the training set is created: each of the many positive images is merged into an image with a negative background to produce a positive sample. Each sample also corresponds to a line in the positive info file: imgname 1 x y w h.

That way the positive samples end up larger because of the negative background. The classifier would then not be able to detect an object near the image's edge, due to the missing background information.

I'm not sure if that's right. If it is, no object near the edge could be detected.

My purpose is to detect objects all along the edges of an image. It would be nice if that's possible.

( 2015-01-27 10:17:09 -0500 )
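For reference, a positive info file of the kind discussed above looks like this (the file names and coordinates here are made up; each line is `imagename numObjects x y w h [x y w h ...]`):

```
positives/img001.jpg 1 140 100 45 45
positives/img002.jpg 2 100 200 50 50 300 220 44 44
```

opencv_createsamples consumes such a file via its -info option to build the .vec file used by opencv_traincascade.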

I think you have completely misunderstood my statement, but let's try to explain it a bit more clearly.

• The traincascade software allows you to create artificial training data as you described, by pasting a positive object onto random backgrounds and applying rotation, translation and shearing effects. However, this generates artificial training data, and my experience is that it does not work properly in real-life applications. Therefore I suggest the second approach, where you take original object images, objects in their natural context, and apply bounding box annotations that you specify through an info file. This will yield much better training data.
• When manually specifying that training data, do make sure that your box is close to the object.
( 2015-01-28 02:23:54 -0500 )

And to continue:

• Making sure that it is close to the object results in a model that doesn't need too much background information to identify an object candidate in a test image.
• But partial objects will NEVER be detected this way, since cascade classifiers are NOT robust to occlusion. I have no immediate idea how to tackle that problem.
• So all cases in your image are detectable with cascade classifiers, except for the bottom-right one!
( 2015-01-28 02:25:55 -0500 )

Hi Steven,

To tackle/detect the partial objects, I think a new classifier with a new training set is needed, where only partial objects are used as positives.

The problem with training a cascade classifier is the large number of positive samples required. In some papers, they need more than 5000 positive samples. Such a number would not be easy, if at all possible, to achieve manually. There are some tools to help, but you still have to go through all the images to draw the bounding boxes.

Last night I tested my classifier with different parameters. The classifier works well on frontal objects. Objects tilting right were also detected, but objects tilting left were not. How can I modify the current classifier.xml file without re-training the classifier, so that those left-tilting objects are also recognized?

( 2015-01-28 04:29:02 -0500 )

@mucdev, a quick response:

1. On your first idea of training partial classifiers: YES, but the downside is that partial models will also yield false positive detections inside the image region. So you might want to limit their detection masks to the image side corresponding to each model.
2. The large amount of training data all depends on how much energy you want to put in. I annotate data on a monthly basis; 5000 positives is a rather small set. I have sometimes had cases with 25000 positives...
3. A cascade classifier is NOT rotation invariant, so in principle the detected tilts are just plain luck! A Viola-Jones model has about 5-10 degrees of rotational robustness according to my research, so I suggest rotating the input image in 20-degree steps and combining the detections.
( 2015-01-28 04:48:09 -0500 )
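The rotate-and-combine idea above needs one supporting piece: detections found in a rotated copy of the frame must be mapped back into the original frame's coordinates before being merged. A minimal sketch of that mapping (class and method names are made up; the sign convention depends on how you rotated the image, so treat it as a starting point):

```java
public class RotateBack {
    // Map a point detected in an image that was rotated by angleDeg about
    // the image center (cx, cy) back to the original, unrotated frame.
    static double[] toOriginal(double x, double y,
                               double cx, double cy, double angleDeg) {
        double a = Math.toRadians(angleDeg);
        double dx = x - cx, dy = y - cy;
        // Undo the rotation: rotate by -angle about the center.
        double ox = cx + dx * Math.cos(-a) - dy * Math.sin(-a);
        double oy = cy + dx * Math.sin(-a) + dy * Math.cos(-a);
        return new double[] { ox, oy };
    }

    public static void main(String[] args) {
        // A detection centered at (200, 100) found in a 640x480 frame
        // that had been rotated by 20 degrees about its center:
        double[] p = toOriginal(200, 100, 320, 240, 20);
        System.out.printf("original center: (%.1f, %.1f)%n", p[0], p[1]);
    }
}
```

In practice you would map the center of each detection rectangle back this way, then run a grouping step (e.g. non-maximum suppression) over all rotation steps to merge duplicates.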
1. I have the same idea, that the partial classifier should operate only around the side edges.

2. I will try <- some Belgian beers should help :)

3. It would be nice if it were possible to manipulate the classifier.xml directly without re-training on the new dataset. I think it should somehow be possible.

4. I think the reason why an object near the image edge is not always detected is the minNeighbors parameter. Sliding windows are applied over a multiscale input image, and an object at the edge becomes a partial object when the input image is zoomed in at the next iteration!

( 2015-01-28 05:03:32 -0500 )
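The "limit the partial classifier to the side edges" idea in point 1 amounts to discarding any detection from the partial model whose rectangle does not touch a border strip of the frame. A possible filter (class, method and margin value are my own, for illustration only):

```java
public class EdgeMask {
    // True if a detection rectangle (x, y, w, h) lies within `margin`
    // pixels of one of the borders of an imgW x imgH frame - a cheap way
    // to keep a "partial object" model from firing in the interior.
    static boolean touchesBorderStrip(int x, int y, int w, int h,
                                      int imgW, int imgH, int margin) {
        return x <= margin || y <= margin
                || x + w >= imgW - margin || y + h >= imgH - margin;
    }

    public static void main(String[] args) {
        System.out.println(touchesBorderStrip(5, 200, 40, 40, 640, 480, 10));   // near the left edge
        System.out.println(touchesBorderStrip(300, 200, 40, 40, 640, 480, 10)); // frame interior
    }
}
```

Running the full-object classifier everywhere and the partial-object classifier only where this filter accepts would suppress the interior false positives mentioned above.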

Here is the call to the classifier:

```java
myClassifier.detectMultiScale(mGray, things, 1.2, 3, 2,
        new Size(minWidth, minHeight), new Size(maxWidth, maxHeight));
```

( 2015-01-28 05:10:02 -0500 )
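One workaround worth trying for edge objects, not suggested in this thread but a common trick: pad the frame on all sides (e.g. with OpenCV's Imgproc.copyMakeBorder) before calling detectMultiScale, so the sliding window can fully cover objects at the original borders, then shift the detections back. A sketch of the coordinate mapping after such padding (names are hypothetical):

```java
public class PadMap {
    // After detecting on a frame padded by `pad` pixels on each side,
    // shift a detection (x, y, w, h) back into the coordinates of the
    // original imgW x imgH frame, clamping it to the frame bounds.
    static int[] unpad(int x, int y, int w, int h,
                       int pad, int imgW, int imgH) {
        int nx = Math.max(0, x - pad);
        int ny = Math.max(0, y - pad);
        int nw = Math.min(imgW, x - pad + w) - nx;
        int nh = Math.min(imgH, y - pad + h) - ny;
        return new int[] { nx, ny, nw, nh };
    }
}
```

Whether this helps depends on the model: as discussed above, a classifier trained with tight bounding boxes tolerates synthetic border content far better than one trained with lots of background context.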
• Images will never lose information at the edges when being scaled down; that is absurd :)
• Belgian beers do the trick! I apply the same technique during my evening annotations!
• Manipulating the model's parameters directly seems quite impossible to me. There is no way of knowing how the threshold parameters vary in relation to the orientation of a feature.
( 2015-01-28 05:58:00 -0500 )
