Training an SVM with 68 Point2f data
I am looking into the OpenCV SVM implementation and have a couple of questions.
My data is the landmark points from dlib, which I have as std::vector<cv::Point2f> resultPnts; each such vector contains 68 Point2f.
Each of my training samples is one of these vectors. So for example, with the label '1', I might have 200 vectors of 68 points, and the same for label '2'.
In the SVM example, the Mat for training is set up as follows:
int labels[4] = { 1, -1, -1, -1 };
float trainingData[4][2] = { { 501, 10 },{ 255, 10 },{ 501, 255 },{ 10, 501 } };
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
Mat labelsMat(4, 1, CV_32SC1, labels);
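For context, the rest of that example then trains on those Mats roughly like this (I'm paraphrasing the tutorial from memory, so the exact parameters may differ):
// Sketch of how the tutorial trains on the Mats above.
Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);              // n-class classification
svm->setKernel(ml::SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
svm->train(trainingDataMat, ml::ROW_SAMPLE, labelsMat);   // one sample per row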
In my case, should I do:
int labels[27200] = { 1, 1, 1, 2, 2, 2, ... };                    // 400 * 68 labels
float trainingData[27200][2] = { { 501, 10 }, { 255, 10 }, ... }; // 400 * 68 points
Mat trainingDataMat(27200, 2, CV_32FC1, trainingData);
Mat labelsMat(27200, 1, CV_32SC1, labels);
Or is there a cleaner way?
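What I was imagining as the cleaner way is something like the sketch below, where each face becomes one row of 68 * 2 = 136 floats and there is one label per face rather than per point (untested, and buildTrainingMat is just a name I made up):
// Sketch: flatten each 68-point landmark vector into a single row of
// 136 floats, so every face is one training sample.
Mat buildTrainingMat(const std::vector<std::vector<cv::Point2f>>& faces)
{
    Mat data((int)faces.size(), 68 * 2, CV_32FC1);
    for (int i = 0; i < (int)faces.size(); ++i)
        for (int j = 0; j < 68; ++j)
        {
            data.at<float>(i, 2 * j)     = faces[i][j].x;
            data.at<float>(i, 2 * j + 1) = faces[i][j].y;
        }
    return data;   // labels would then be a faces.size() x 1 CV_32SC1 Mat
}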
Additionally, is it possible to return the 'percentage of a label'? For example, if the result is halfway between the '1' and '2' labels, it would return 50% for each. Or is it just an 'on'/'off' classifier?
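The closest thing I have found in the docs is the RAW_OUTPUT flag, which (if I read it correctly) makes predict() return the raw decision-function value instead of the label; a sketch of what I mean:
// Sketch: predict() normally returns the class label; with RAW_OUTPUT my
// understanding is that a 2-class SVM returns the raw decision-function
// value (a signed distance, not a percentage).
Mat sample = trainingDataMat.row(0);   // any single row with the same layout
float label  = svm->predict(sample);
float rawVal = svm->predict(sample, noArray(), ml::StatModel::RAW_OUTPUT);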
thanks!
Hmm, you want to classify 400 sets of landmarks, not single points, right?
Yup! Probably more, as I add more emotions...