
Neural network predict access violation

asked 2015-05-06 10:30:36 -0600

borks

Hello,

I've been trying to use the ANN_MLP class to learn and predict objects based on their shape, but I get an access violation as soon as I try to predict a test sample. My matrix of training samples looks like this for 2 different classes:

[35546, 0.62142855, 1125, 5, 145, 50;
40080, 0.60714287, 1202, 4, 151, 5;
38718, 0.60000002, 1136, 4, 150, 7;
39375, 0.60714287, 1182, 4, 148, 42;
39408, 0.61428571, 1217, 4, 151, 9;
38437, 0.60144925, 1129, 4, 148, 8;
39152, 0.62857145, 1433, 8, 124, 0;
39090, 0.58571428, 1191, 4, 150, 10;
37558, 0.60431653, 1090, 4, 149, 8;
38442, 0.55319148, 1125, 4, 150, 2;
42014, 0.61428571, 1249, 5, 145, 72;
37994, 0.55714285, 1037, 4, 149, 1;
38091, 0.62142855, 1396, 4, 149, 24]

My corresponding classes (responses) matrix looks like this:

[1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
1, 0;
0, 1;
0, 1;
0, 1;
0, 1;
0, 1;
0, 1]
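(For reference, this responses matrix is just a one-hot row per sample. Building it from plain integer class labels can look like the sketch below; the `labels` vector is only illustrative.)

    #include <opencv2/core.hpp>
    #include <vector>

    // Sketch: one-hot CV_32F responses for 2 classes from integer labels (0 or 1).
    std::vector<int> labels = { 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1 };
    cv::Mat responses = cv::Mat::zeros((int)labels.size(), 2, CV_32F);
    for (size_t i = 0; i < labels.size(); ++i)
        responses.at<float>((int)i, labels[i]) = 1.0f;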

I think something goes wrong during training, because the weights in the XML of the saved neural network are almost identical and look nonsensical when I compare them to a trained XML I found online. Extract from my XML:

<weights><_> 1.1756590589982415e+127 1.1757670944776390e+127
1.1650475546531125e+127 1.1789157016611397e+127
1.1883370026274758e+127 1.1731971573174209e+127
1.1804587352134008e+127 1.1793558914094777e+127

This seems odd to me; values like these can't be right. Anyway, here is the part of the code where I do the setup and training:

    cv::Ptr<cv::ml::ANN_MLP> m_cvNeuronalNetwork = cv::ml::ANN_MLP::create();

    // 3 layers: 6 inputs, 14 hidden neurons, 2 outputs
    cv::Mat cvLayer(3, 1, CV_32S);
    cvLayer.at<int>(0,0) = 6;
    cvLayer.at<int>(1,0) = 14;
    cvLayer.at<int>(2,0) = 2;
    m_cvNeuronalNetwork->setLayerSizes(cvLayer);

    m_cvNeuronalNetwork->setActivationFunction(cv::ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM, 0.3, 0.3);
    m_cvNeuronalNetwork->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.01, 0.01);
    m_cvNeuronalNetwork->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 1000, 0.000001));

    m_cvTrainData = cv::ml::TrainData::create(cvTrainingData, cv::ml::SampleTypes::ROW_SAMPLE, cvTrainingDataClassification);
    m_cvNeuronalNetwork->train(m_cvTrainData);

I make the prediction with a row of values, for example:

37994, 0.55714285, 1037, 4, 149, 1

    cv::Mat cvTest(1, 6, CV_32F);
    // cvTest gets filled with the six feature values above (omitted here)
    float fResult = m_cvNeuronalNetwork->predict(cvTest);   // <-- ACCESS VIOLATION

Both the classification and train matrices have the type CV_32F (float). I tried different settings and layer sizes; even with these (they are just the last ones I tried) I would expect at least a rough result. I also tried normalizing the values in the train matrix (although I think the algorithm does that by itself), with no success. Maybe you have a clue about it.
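By normalizing I mean per-column scaling into [0,1], something like this sketch (not my exact code):

    // Sketch: min-max scale every feature column of the training matrix into [0,1].
    // cvTrainingData is the CV_32F sample matrix shown above; the same scaling
    // would have to be applied to any sample passed to predict().
    cv::Mat scaled(cvTrainingData.size(), CV_32F);
    for (int c = 0; c < cvTrainingData.cols; ++c)
        cv::normalize(cvTrainingData.col(c), scaled.col(c), 0.0, 1.0, cv::NORM_MINMAX);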

Thank you so much.


Comments

How many samples are you using for training?

Gino Strato ( 2015-05-09 02:42:23 -0600 )

About 15. That should be enough to give me at least a rough, even if imprecise, network.

borks ( 2015-05-12 03:53:30 -0600 )

2 answers


answered 2015-05-12 04:34:26 -0600

berak

hmm, tried your code, works fine here.

    Mat_<float> trainData(13,6); trainData <<
    35546, 0.62142855, 1125, 5, 145, 50,
    40080, 0.60714287, 1202, 4, 151, 5,
    38718, 0.60000002, 1136, 4, 150, 7,
    39375, 0.60714287, 1182, 4, 148, 42,
    39408, 0.61428571, 1217, 4, 151, 9,
    38437, 0.60144925, 1129, 4, 148, 8,
    39152, 0.62857145, 1433, 8, 124, 0,
    39090, 0.58571428, 1191, 4, 150, 10,
    37558, 0.60431653, 1090, 4, 149, 8,
    38442, 0.55319148, 1125, 4, 150, 2,
    42014, 0.61428571, 1249, 5, 145, 72,
    37994, 0.55714285, 1037, 4, 149, 1,
    38091, 0.62142855, 1396, 4, 149, 24;

    Mat_<float> trainClass(13,2); trainClass <<
    1, 0,
    1, 0,
    1, 0,
    1, 0,
    1, 0,
    1, 0,
    1, 0,
    0, 1,
    0, 1,
    0, 1,
    0, 1,
    0, 1,
    0, 1;

    cv::Ptr<cv::ml::ANN_MLP>  m_cvNeuronalNetwork = cv::ml::ANN_MLP::create();
    cv::Mat cvLayer(3, 1, CV_32S);
    cvLayer.at<int>(0,0) = 6;
    cvLayer.at<int>(1,0) = 14;
    cvLayer.at<int>(2,0) = 2;
    m_cvNeuronalNetwork->setLayerSizes(cvLayer);
    m_cvNeuronalNetwork->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 0.3,0.3);
    m_cvNeuronalNetwork->setTrainMethod(cv::ml::ANN_MLP::BACKPROP,0.01,0.01);
    m_cvNeuronalNetwork->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 1000, 0.000001));
    m_cvNeuronalNetwork->train(cv::ml::TrainData::create(trainData, cv::ml::SampleTypes::ROW_SAMPLE, trainClass));


    Mat_<float> cvTest(1,6); cvTest << 37994, 0.55714285, 1037, 4, 149, 1;
    float fResult = m_cvNeuronalNetwork->predict(cvTest);
    cerr << fResult << endl;
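if you want the raw output-layer activations instead of the scalar return value, something like this should do it (untested sketch):

    Mat response;
    m_cvNeuronalNetwork->predict(cvTest, response);   // 1x2 row, one activation per output neuron
    Point maxLoc;
    minMaxLoc(response, 0, 0, 0, &maxLoc);
    int predictedClass = maxLoc.x;                    // index of the strongest output
    cerr << response << " -> class " << predictedClass << endl;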

Comments

@berak, thanks to your example I found an uninitialized memory access which probably causes this problem.

Fix is here: https://github.com/Itseez/opencv/pull...

mshabunin ( 2015-05-13 09:29:20 -0600 )

oh, nice. ;) just saw it.

well, it did not crash for me, but the debug build returns -2 and the release build 0, so you probably hit the spot.

berak ( 2015-05-13 09:41:07 -0600 )

@mshabunin Maybe that was the reason. I don't have the time to check it myself. Thanks anyway!

borks ( 2015-05-19 08:11:33 -0600 )

answered 2015-05-12 03:50:26 -0600

borks

I switched to the FANN library; I couldn't find the reason for the crash.
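Roughly, the equivalent setup with FANN looks like this; just a sketch, the training-file name and values are only examples:

    #include <fann.h>

    // Same topology as the OpenCV net: 6 inputs, 14 hidden neurons, 2 outputs.
    struct fann *ann = fann_create_standard(3, 6, 14, 2);
    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    // "shapes.data" is a placeholder for a file in FANN's training-data format.
    fann_train_on_file(ann, "shapes.data", 1000, 100, 0.000001f);

    fann_type input[6] = { 37994, 0.55714285f, 1037, 4, 149, 1 };
    fann_type *out = fann_run(ann, input);   // out[0], out[1]: the two output activations

    fann_destroy(ann);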

