Interpreting the SVM predict score
I'm performing object detection on an image using an SVM and sliding windows (with OpenCV 3 and Python).
When testing a region of an image with svm predict I get a classification and a score (e.g. Class: 1, Score: -1.035665), which I obtain with svm.predict( features, flags=cv2.ml.STAT_MODEL_RAW_OUTPUT ).
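For reference, this is roughly how I call it (a minimal sketch; it assumes svm is an already trained cv2.ml.SVM and features is a 1xN float32 row vector):

```python
import cv2
import numpy as np

# features: 1xN float32 row describing the current window (assumption)
features = np.float32(features).reshape(1, -1)

_, label = svm.predict(features)                                    # class label
_, raw = svm.predict(features, flags=cv2.ml.STAT_MODEL_RAW_OUTPUT)  # raw decision value

print("Class: %d, Score: %f" % (int(label[0, 0]), raw[0, 0]))
```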
I want to apply non-maximum suppression to the overlapping regions, but I'm unsure how to rank the scores: higher values don't seem to correspond to better detections, and empirically the most accurate windows score around -0.5 (though that's just a conclusion I drew from observation). Documentation on this is scarce; any clarification would be great.
The score is the signed distance to the separating hyperplane between the two classes. Therefore, the larger the score (in absolute value, I mean), the more confident the SVM is in its prediction. So I'd say your empirical conclusion is off: a score of -1.03 should indicate a more confident detection than -0.5.
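If it helps, here's a minimal greedy NMS sketch that ranks windows by the absolute value of the raw score, which follows from the interpretation above; the [x1, y1, x2, y2] box format and the IoU threshold are my assumptions, not something from your code:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression.

    boxes:  (N, 4) array of [x1, y1, x2, y2] window coordinates (assumption)
    scores: (N,) confidences; here use np.abs(raw_score), i.e. the
            unsigned distance to the hyperplane
    Returns the indices of the boxes to keep.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = np.argsort(scores)[::-1]          # most confident first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the most confident remaining box with all the others
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        iou = w * h / (areas[i] + areas[order[1:]] - w * h)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

# usage sketch: raw_scores collected from svm.predict(..., STAT_MODEL_RAW_OUTPUT)
# kept = nms(np.array(boxes), np.abs(np.array(raw_scores)))
```

Note that since all your detections are on the same side of the hyperplane (class 1, negative scores), taking the absolute value just ranks them by distance from the boundary.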