
Neural network predict access violation

Hello,

I've been trying to use the ANN_MLP class for learning and predicting objects based on their shape, but I'm getting an access violation as soon as I try to predict a test sample. My input matrix with the training samples looks like this for 2 different classes:

[35546, 0.62142855, 1125, 5, 145, 50;
40080, 0.60714287, 1202, 4, 151, 5;
38718, 0.60000002, 1136, 4, 150, 7;
39375, 0.60714287, 1182, 4, 148, 42;
39408, 0.61428571, 1217, 4, 151, 9;
38437, 0.60144925, 1129, 4, 148, 8;
39152, 0.62857145, 1433, 8, 124, 0;
39090, 0.58571428, 1191, 4, 150, 10;
37558, 0.60431653, 1090, 4, 149, 8;
38442, 0.55319148, 1125, 4, 150, 2;
42014, 0.61428571, 1249, 5, 145, 72;
37994, 0.55714285, 1037, 4, 149, 1;
38091, 0.62142855, 1396, 4, 149, 24]

My corresponding classes vector looks like this:

[1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
0, 1;
0, 1;
0, 1;
0, 1;
0, 1;
0, 1]
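
For reference, this is roughly how I assemble the two matrices before training (just a sketch with two hard-coded rows as an example; in my real code the rows come from the feature extraction):

#include <opencv2/core.hpp>

// Sketch: one training sample per row (6 features, CV_32F).
// Only two example rows are hard-coded here.
cv::Mat cvTrainingData = (cv::Mat_<float>(2, 6) <<
    35546.f, 0.62142855f, 1125.f, 5.f, 145.f, 50.f,   // a sample of the first class
    38442.f, 0.55319148f, 1125.f, 4.f, 150.f,  2.f);  // a sample of the second class

// one-hot encoded class labels, also CV_32F: column 0 = class 1, column 1 = class 2
cv::Mat cvTrainingDataClassification = (cv::Mat_<float>(2, 2) <<
    1.f, 0.f,
    0.f, 1.f);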

I think something goes wrong during training, because the weights in the .XML of the saved neural network look almost identical to each other and extremely small compared to the values in a .XML I found online. Extract from my .XML:

<weights><_>
1.1756590589982415e+127 1.1757670944776390e+127
1.1650475546531125e+127 1.1789157016611397e+127
1.1883370026274758e+127 1.1731971573174209e+127
1.1804587352134008e+127 1.1793558914094777e+127

This seems odd to me, since the values just look ridiculous. Here is the part of the code where I do the setup and the training:

cv::Ptr<cv::ml::ANN_MLP> m_cvNeuronalNetwork = cv::ml::ANN_MLP::create();

// layer sizes: 6 input neurons (one per feature), 14 hidden, 2 output (one per class)
cv::Mat cvLayer(3, 1, CV_32S);
cvLayer.at<int>(0, 0) = 6;
cvLayer.at<int>(1, 0) = 14;
cvLayer.at<int>(2, 0) = 2;
m_cvNeuronalNetwork->setLayerSizes(cvLayer);

// symmetric sigmoid activation with alpha = 0.3 and beta = 0.3
m_cvNeuronalNetwork->setActivationFunction(cv::ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM, 0.3, 0.3);

// backpropagation with weight gradient scale 0.01 and momentum scale 0.01
m_cvNeuronalNetwork->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.01, 0.01);

// stop after 1000 iterations or when the error change drops below 1e-6
m_cvNeuronalNetwork->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 1000, 0.000001));

// samples are the rows of cvTrainingData, responses are the one-hot rows shown above
m_cvTrainData = cv::ml::TrainData::create(cvTrainingData, cv::ml::SampleTypes::ROW_SAMPLE, cvTrainingDataClassification);
m_cvNeuronalNetwork->train(m_cvTrainData);
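
Afterwards I write the trained network to disk so I can look at the weights, roughly like this (the filename is just an example, not from my real code):

// Sketch: dump the trained network to XML so the weights can be inspected
if (m_cvNeuronalNetwork->isTrained())
    m_cvNeuronalNetwork->save("mlp.xml");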

I make the prediction with a row of values, for example:

37994, 0.55714285, 1037, 4, 149, 1

cv::Mat cvTest = (cv::Mat_<float>(1, 6) << 37994.f, 0.55714285f, 1037.f, 4.f, 149.f, 1.f);
float fResult = m_cvNeuronalNetwork->predict(cvTest); // ACCESS VIOLATION happens here
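
In case it helps, I would eventually like to look at the raw activations of the two output neurons as well. I would expect something like the following to work once the crash is gone (just a sketch; cvOutput is only an illustrative name):

cv::Mat cvOutput; // should receive one row with the raw values of the 2 output neurons
float fIndex = m_cvNeuronalNetwork->predict(cvTest, cvOutput);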

Both the classification and the training matrix are of type CV_32F (float). I have tried different settings and layer sizes, even though I would expect at least a moderate result with almost any settings (the ones above are simply the last I tried). I also normalized the values in the training matrix (although I think the algorithm does this by itself), with no success. Maybe you have a clue what is going on.
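
For completeness, the normalization I tried was a simple column-wise min-max scaling into [0, 1], roughly like this (a sketch of what I did, not the literal code):

// Sketch: scale every feature column of the training matrix into [0, 1]
// before train() is called; cvTrainingData is CV_32F with one sample per row.
for (int c = 0; c < cvTrainingData.cols; ++c)
    cv::normalize(cvTrainingData.col(c), cvTrainingData.col(c), 0.0, 1.0, cv::NORM_MINMAX);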

Thank you so much.