
ok, assuming you get a `std::vector<cv::Point2f> resultPnts;`

from dlib, you'd make a single 1d row Mat of those for both training and testing, and add a single label per landmark set:

```
Mat trainData;   // initially empty
Mat trainLabels;
// for each set of landmarks:
std::vector<cv::Point2f> resultPnts = ...;
Mat row(resultPnts);                    // a CV_32FC2 header over the points (push_back copies the data)
trainData.push_back(row.reshape(1,1));  // one flat row per set
trainLabels.push_back(theLabel);
```

so, for 200 sets of 96 landmarks, you get a [96*2 x 200] (cols x rows) trainData Mat with one flat row per set, and a [1 x 200] labels Mat. prediction is similar:

```
std::vector<cv::Point2f> resultPnts = ...;
Mat row(resultPnts);
float label = svm->predict(row.reshape(1,1));
```

while you cannot get a percentage confidence instead of the label here, you *can* get the signed distance to the margin:

```
float dist = svm->predict(query, noArray(), ml::StatModel::RAW_OUTPUT);
```


Copyright OpenCV foundation, 2012-2018. Content on this site is licensed under a Creative Commons Attribution Share Alike 3.0 license.