Quick answer: the larger, the better (the more confident the classification).
A little more explanation:
The weight returned by the method for each ROI is the distance from the sample to the SVM separating hyperplane (in the corresponding kernel space). A larger distance therefore indicates a more confident classification, as the sample lies farther away from the samples of the other class. There is no upper limit on this distance, so you cannot normalize it to obtain probabilities directly. However, there are methods to perform such a conversion (e.g. the LIBSVM library implements some probability estimates; refer to its manual for details and related papers).
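To make this concrete, here is a minimal sketch for the linear case. The weight vector `w`, bias `b`, and the Platt-scaling parameters `A` and `B` are made-up values for illustration, not output of any real training run; in practice they would come from a trained SVM and a sigmoid fit on held-out data (this is the kind of probability estimate LIBSVM implements).

```python
import math

# Hypothetical linear SVM already trained: f(x) = w . x + b.
# The decision value f(x) is proportional to the signed distance
# from x to the separating hyperplane (exact distance = f(x) / ||w||).
w = [2.0, -1.0]   # hypothetical weight vector
b = 0.5           # hypothetical bias

def decision_value(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def distance_to_hyperplane(x):
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    return decision_value(x) / norm_w

# Platt scaling: map the unbounded decision value to a probability
# with a sigmoid. A and B are made-up here; normally they are fitted
# on held-out decision values.
A, B = -1.0, 0.0

def platt_probability(x):
    f = decision_value(x)
    return 1.0 / (1.0 + math.exp(A * f + B))

x_near = [0.1, 0.6]   # close to the hyperplane -> low confidence
x_far = [3.0, -2.0]   # far from the hyperplane -> high confidence
print(decision_value(x_near), platt_probability(x_near))
print(decision_value(x_far), platt_probability(x_far))
```

Note how the sample far from the hyperplane gets a probability close to 1, while the sample near it gets a probability close to 0.5, matching the "larger distance = more confident" intuition.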
Also, every SVM is normalized during training so that the distance from the separating hyperplane to each support vector is 1.0. Therefore, every sample with a weight between 0.0 and 1.0 lies inside the SVM margin. If you want to know more about SVMs in detail, refer to
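The margin check above can be sketched directly on the decision values; the values below are made up for illustration, assuming they were produced by an already-trained SVM:

```python
# Sketch, assuming decision values f(x) were already computed by a trained
# SVM. Training scales the classifier so that |f(x)| = 1.0 exactly at the
# support vectors, so any sample with |f(x)| < 1.0 lies inside the margin.
decision_values = {"a": 0.4, "b": -0.7, "c": 1.9, "d": -2.3}  # made-up values

def inside_margin(f):
    return abs(f) < 1.0

for name, f in decision_values.items():
    region = "inside margin" if inside_margin(f) else "outside margin"
    print(name, f, region)
```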
Christopher J. C. Burges. 1998. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery 2, 2 (June 1998), 121-167.