I am training a classifier and I have a function that computes the error on groups; in other words, I have several classes for the positives and several classes for the negatives. I did this because the negatives differ too much from each other, and if I put them all into a single class I risk training the classifier badly. Because the positives also differ enough, I have split them into 2 classes as well. What I want is to compute the error between the positives and the negatives (I do not care if the positives are misclassified among themselves, nor the negatives among themselves, which is more likely). This is what I have so far:
double ImagesDLN::evaluatePerGroup(const cv::Mat& resultsIn) // resultsIn holds the predictions
{
    cv::Mat errorsPerImage = m_labels - resultsIn;
    double minLbl = 0, maxLbl = 0;
    cv::minMaxLoc(m_labels, &minLbl, &maxLbl);
    double groupTh = (minLbl + maxLbl) / 2;
    double groupError = 0;
    for (int i = 0; i < errorsPerImage.rows; i++)
    {
        const float* row = errorsPerImage.ptr<float>(i); // fetch the row pointer once per row
        for (int j = 0; j < errorsPerImage.cols; j++)
        {
            if (std::abs(row[j]) > groupTh)
            {
                ++groupError;
            }
        }
    }
    return groupError / (errorsPerImage.rows * errorsPerImage.cols);
}
The labels are something like this: 101, 102, 103, 104 for the negatives and 201, 202 for the positives, so with these values groupTh = (101 + 202) / 2 = 151.5. The idea is that a big difference between the prediction and the label means a group error.
What I am asking is: is there another way that may be faster? It is not very slow, but just as cv::countNonZero is much faster than counting in explicit for loops, maybe there is something similar for this case. Thanks in advance.