Okay, I just did it following your steps. First I extracted the HOG features and trained a linear SVM classifier.
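Roughly, the training step looks like this (a minimal sketch only; the file layout, the HOG parameters and the C value are placeholders from my setup, not something to copy verbatim):

    import glob
    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC
    from sklearn.externals import joblib  # plain `import joblib` on newer sklearn

    def extract_hog(path):
        # load, convert to grayscale and resize to the training window size
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        img = cv2.resize(img, (48, 48))
        return hog(img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # positive (sign) and negative (background) samples -- placeholder paths
    pos = [extract_hog(p) for p in glob.glob('pos/*.png')]
    neg = [extract_hog(p) for p in glob.glob('neg/*.png')]

    X = np.vstack(pos + neg)
    y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])

    clf = LinearSVC(C=0.01)   # C chosen by trial and error
    clf.fit(X, y)
    joblib.dump(clf, 'svm.pkl')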
To classify, I wrote a Python script that slides a window over the image and predicts each window. First I load the classifier I created, then I load the test image. I downscale the image and iterate over it with the sliding window. For each window I calculate the HOG features and call predict(). The detections are stored in a list. The detector works, but I have two problems.
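The prediction part of the script looks roughly like this (a trimmed-down sketch; sliding_window is my own helper, and the step size, downscale factor and file names are placeholders):

    import cv2
    from skimage.feature import hog
    from skimage.transform import pyramid_gaussian
    from sklearn.externals import joblib  # plain `import joblib` on newer sklearn

    clf = joblib.load('svm.pkl')                            # the trained linear SVM
    image = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)    # placeholder test image

    def sliding_window(img, step, size):
        # yield (x, y, window) for every window position
        for y in range(0, img.shape[0] - size[1] + 1, step):
            for x in range(0, img.shape[1] - size[0] + 1, step):
                yield x, y, img[y:y + size[1], x:x + size[0]]

    detections = []
    # iterate over a downscaled image pyramid
    for level, scaled in enumerate(pyramid_gaussian(image, downscale=1.5)):
        if scaled.shape[0] < 48 or scaled.shape[1] < 48:
            break
        for x, y, window in sliding_window(scaled, step=8, size=(48, 48)):
            fd = hog(window, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2))
            pred = clf.predict(fd)      # this is the line that triggers the warning
            if pred[0] == 1:
                detections.append((x, y, level))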
The first problem is that it's very slow. Is there an alternative to sliding windows, maybe some kind of contour detection to find the signs? The second problem is that I get the following DeprecationWarning:
Traceback (most recent call last):
File "classify.py", line 79, in <module>
pred = clf.predict(fd)
File "/home/pi/.virtualenvs/py2cv3/local/lib/python2.7/site-packages/sklearn/linear_model/base.py", line 336, in predict
scores = self.decision_function(X)
File "/home/pi/.virtualenvs/py2cv3/local/lib/python2.7/site-packages/sklearn/linear_model/base.py", line 312, in decision_function
X = check_array(X, accept_sparse='csr')
File "/home/pi/.virtualenvs/py2cv3/local/lib/python2.7/site-packages/sklearn/utils/validation.py", line 395, in check_array
DeprecationWarning)
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
I don't know why it appears, and it's annoying because it shows up on every iteration (>100 times per image).
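Reading the last line of the warning again, I guess it's because fd is a single 1-D sample, so presumably something like this would silence it (untested guess), but I'd like to understand whether that's the right thing to do here:

    # fd is one sample, so reshape it to a (1, n_features) row as the warning suggests
    pred = clf.predict(fd.reshape(1, -1))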
Please clarify, do you want to:
I want to distinguish between several signs. I'm going to look into SVM Light and compute the vector to pass to detectMultiScale. In my implementation I would use one detectMultiScale per sign, but I wonder whether that would still run in real time. What's the difference between the two options? Or, put differently, how is the second one different from mine?
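What I have in mind for each sign class is roughly the following (a sketch only: I'm assuming the primal weights can be taken straight from my existing linear SVM, that its features have the same layout as cv2.HOGDescriptor computes -- which is not guaranteed if it was trained on skimage HOG features -- and the sign of the bias term may need flipping depending on how the library defines its decision function):

    import cv2
    import numpy as np
    from sklearn.externals import joblib  # plain `import joblib` on newer sklearn

    clf = joblib.load('svm.pkl')   # the linear SVM trained for one sign class

    # HOG parameters must match the ones used for training (48x48 window here)
    hog = cv2.HOGDescriptor((48, 48), (16, 16), (8, 8), (8, 8), 9)

    # detector vector = primal weights followed by the bias term
    # (assumption: sklearn-style coef_/intercept_; with SVM Light the layout/sign may differ)
    detector = np.append(clf.coef_.ravel(), clf.intercept_[0]).astype(np.float32)
    hog.setSVMDetector(detector)

    frame = cv2.imread('test.jpg')   # placeholder test image
    found, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    for (x, y, w, h) in found:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)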
I just ran a test with one street sign and it worked quite well. I used 48 x 48 images with a narrow border, with 180 positive and 4000 negative samples. I've found that close signs are not recognized; is this due to the image size?
detectMultiScale can only be used to detect a single object class.
If you use an SVM as a multi-class classifier, you will need some other tool for detection / segmentation, e.g. findContours().
Okay, I'll look at this. But if I use it as a multi-class classifier, the training process differs from my variant, right?
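Just to check my understanding: instead of binary pos/neg labels, each sign type would get its own label, plus one extra label for background? Something like this (random arrays stand in for my real HOG features, and the class count is made up):

    import numpy as np
    from sklearn.svm import LinearSVC

    # placeholder HOG feature matrices -- in reality these come from my extraction step
    features_per_class = [np.random.rand(180, 900) for _ in range(3)]  # 3 sign types
    background = np.random.rand(4000, 900)                             # negative patches

    # one integer label per sign type, plus one extra label for "no sign"
    X = np.vstack(features_per_class + [background])
    y = np.hstack([np.full(len(f), i) for i, f in enumerate(features_per_class)]
                  + [np.full(len(background), len(features_per_class))])

    clf = LinearSVC()          # one-vs-rest internally
    clf.fit(X, y)
    print(clf.predict(X[:1]))  # -> class index of the first sample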
So if I understand you correctly, the first step would be to find the contours of the signs in the frame. In the second step I would calculate the HOG features of the contour rectangles and then use predict() to classify each sign?
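In other words, something along these lines? (A sketch of my understanding only; the Canny step to get candidate contours is just a guess, a colour threshold tuned to the signs would probably work better, and the file names and size threshold are placeholders.)

    import cv2
    from skimage.feature import hog
    from sklearn.externals import joblib  # plain `import joblib` on newer sklearn

    clf = joblib.load('svm_multiclass.pkl')   # placeholder: the multi-class SVM

    frame = cv2.imread('test.jpg')            # placeholder test image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # step 1: segment candidate regions (edge detection + contours as a first guess)
    edges = cv2.Canny(gray, 50, 150)
    contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 2/3/4

    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w < 20 or h < 20:                  # skip tiny contours (threshold is a guess)
            continue
        # step 2: classify the candidate rectangle, same HOG setup as in training
        patch = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        fd = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        pred = clf.predict(fd.reshape(1, -1))[0]
        print(pred, (x, y, w, h))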