
Training a neural net

asked 2018-04-01 04:33:44 -0600 by Shivanshu, updated 2018-04-01 06:47:14 -0600

I am using OpenCV 2.4 with Visual Studio 2012. Here is the code:

#include <opencv2/opencv.hpp>
#include <conio.h>
#include <iostream>

using namespace std;
using namespace cv;
using namespace ml;

float a[] = {0, 0, 1, 1};
float b[] = {0, 1, 0, 1};
float C[] = {0, 0, 0, 1};

int main()
{
    Ptr<ANN_MLP> neural_net = ANN_MLP::create();              // empty neural model
    neural_net->setActivationFunction(ANN_MLP::SIGMOID_SYM);  // neural activation function
    neural_net->setTrainMethod(ANN_MLP::BACKPROP);            // training algorithm

    Mat layers(3, 1, CV_32FC1);
    layers.row(0) = 2;  // input layer: 2 neurons
    layers.row(1) = 2;  // hidden layer: 2 neurons
    layers.row(2) = 1;  // output layer: 1 neuron
    std::cout << "neural layers\n" << layers;
    neural_net->setLayerSizes(layers);

    TermCriteria Term(CV_TERMCRIT_ITER, 10000, 0.1);
    neural_net->setTermCriteria(Term);

    Mat trainDat(4, 2, CV_32FC1);   // input samples
    Mat sampleDat(4, 1, CV_32FC1);  // target outputs

    for (int r = 0; r < trainDat.rows; r++)
    {
        trainDat.at<float>(r, 0) = a[r];
        trainDat.at<float>(r, 1) = b[r];
    }
    std::cout << "\n" << trainDat;

    for (int r = 0; r < sampleDat.rows; r++)
        sampleDat.at<float>(r, 0) = C[r];
    std::cout << "\n" << sampleDat;

    Ptr<TrainData> TrainingData = TrainData::create(trainDat, ROW_SAMPLE, sampleDat);
    printf("training...");
    neural_net->train(TrainingData);
    printf("..done!");

    Mat test(1, 2, CV_32FC1);
    test.at<float>(0, 0) = 1;
    test.at<float>(0, 1) = 1;
    printf("neural output %f", neural_net->predict(test));

    _getch();
    return 0;
}

This is the code for my neural net implementation. The program is crashing at runtime when I call the predict() function: it encounters a memory access violation, and I don't know why. Thank you.


Comments

Note that your code is using OpenCV 3, and won't run with 2.4.

berak ( 2018-04-01 04:38:00 -0600 )

I didn't get that???

Shivanshu ( 2018-04-01 04:44:29 -0600 )

You mean to say this network design can't implement the OR operation...?

Shivanshu ( 2018-04-01 04:49:04 -0600 )

No, only that your code won't compile on OpenCV 2.4.

berak ( 2018-04-01 04:52:08 -0600 )

You know what... I'm stuck at prediction. Training completes, but when I call the predict() function, the program encounters a runtime memory access violation... what could it be???

Shivanshu ( 2018-04-01 06:29:19 -0600 )

Your code above is incomplete.

berak ( 2018-04-01 06:39:20 -0600 )

I have updated the problem above to the most recent one, which I am facing currently... have a look.

Shivanshu ( 2018-04-01 06:44:54 -0600 )

neural output 0.000000

(can't reproduce the crash)

berak ( 2018-04-01 06:53:46 -0600 )

Mine is crashing, though?? Also, the neural output should be 1 for the given test data...

Shivanshu ( 2018-04-01 07:10:07 -0600 )

Shall I also check my DLL configuration??

Shivanshu ( 2018-04-01 07:24:51 -0600 )

1 answer


answered 2018-04-01 10:32:10 -0600 by sjhalayka, updated 2018-04-01 11:01:39 -0600

Below is code to train the neural network to solve the XOR problem using OpenCV 3.x. Changing it to the OR problem is a trivial matter.

// https://stackoverflow.com/questions/37500713/opencv-image-recognition-setting-up-ann-mlp

#include <opencv2/opencv.hpp>
#include <iostream>
#include <iomanip>

#pragma comment(lib, "opencv_world331.lib")

using namespace cv;
using namespace ml;
using namespace std;

void print(Mat& mat, int prec)
{
    for (int i = 0; i < mat.rows; i++)
    {
        cout << "[";
        for (int j = 0; j < mat.cols; j++)
        {
            cout << fixed << setw(2) << setprecision(prec) << mat.at<float>(i, j);
            if (j != mat.cols - 1)
                cout << ", ";
            else
                cout << "]" << endl;
        }
    }
}

int main(void)
{
    const int hiddenLayerSize = 4;
    float inputTrainingDataArray[4][2] = {
        { 0.0, 0.0 },
        { 0.0, 1.0 },
        { 1.0, 0.0 },
        { 1.0, 1.0 }
    };
    Mat inputTrainingData = Mat(4, 2, CV_32F, inputTrainingDataArray);

    float outputTrainingDataArray[4][1] = {
        { 0.0 },
        { 1.0 },
        { 1.0 },
        { 0.0 }
    };
    Mat outputTrainingData = Mat(4, 1, CV_32F, outputTrainingDataArray);

    Ptr<ANN_MLP> mlp = ANN_MLP::create();

    Mat layersSize = Mat(3, 1, CV_16U);
    layersSize.row(0) = Scalar(inputTrainingData.cols);
    layersSize.row(1) = Scalar(hiddenLayerSize);
    layersSize.row(2) = Scalar(outputTrainingData.cols);
    mlp->setLayerSizes(layersSize);

    mlp->setActivationFunction(ANN_MLP::ActivationFunctions::SIGMOID_SYM);

    TermCriteria termCrit = TermCriteria(TermCriteria::Type::COUNT + TermCriteria::Type::EPS, 1, 0.000001);
    mlp->setTermCriteria(termCrit);

    mlp->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP);

    Ptr<TrainData> trainingData = TrainData::create(inputTrainingData, SampleTypes::ROW_SAMPLE, outputTrainingData);

    mlp->train(trainingData);

    for (int i = 0; i < 10000; i++)
        mlp->train(trainingData, ANN_MLP::TrainFlags::UPDATE_WEIGHTS);

    for (int i = 0; i < inputTrainingData.rows; i++) 
    {
        Mat sample = Mat(1, inputTrainingData.cols, CV_32F, inputTrainingDataArray[i]);
        Mat result;
        mlp->predict(sample, result);
        cout << sample << " -> ";
        print(result, 0);
        cout << endl;
    }
    return 0;
}

Comments

Thank you for your help... but can you please tell me why yours works while mine crashes?

Shivanshu ( 2018-04-01 11:00:02 -0600 )

Not really, no. I have no time to debug someone else's code. I recommend starting again from the ground up.

sjhalayka ( 2018-04-01 11:02:44 -0600 )

If you find that the answer was helpful, please mark it as correct, and upvote it. Thank you.

sjhalayka ( 2018-04-01 11:03:21 -0600 )

I am going out for Easter today... maybe I'll have time to debug your code tonight or tomorrow.

sjhalayka ( 2018-04-01 11:18:06 -0600 )

Good luck... enjoy your Easter with a free mind, because I've got it! It's because I first created the neural layers, then the activation function, then the term criteria, and then did the training. The program was crashing because something was being accessed before it was even declared. Now it's fine. Thanks!!

Shivanshu ( 2018-04-01 11:31:32 -0600 )

Right on!!

sjhalayka ( 2018-04-01 11:32:32 -0600 )

I set it up for the AND operation... but for test data like the input [1,1], it outputs [0.0243243]. I also don't really know what the term criteria mean here, because I never saw such a thing in any backpropagation tutorial...

Shivanshu ( 2018-04-01 11:33:54 -0600 )

An answer like 0.02 is not really a problem... you can't always expect perfect answers. However, I tried the AND problem with the code I posted, and it works just fine. Maybe start a new question and post your most recent code. Below are the changes I made to the XOR code to solve the AND problem.

float outputTrainingDataArray[4][1] = {
    { 0.0 },
    { 0.0 },
    { 0.0 },
    { 1.0 }
};
sjhalayka ( 2018-04-01 14:18:22 -0600 )

P.S. The first criterion is the number of iterations (learning epochs) per training session. I set this to 1 -- you set it to 10000, which is fine. The second criterion is the maximum error (mean squared error? I don't know). I set this to 0.000001 -- you set this to 0.1, which is not really small enough.

See: https://docs.opencv.org/3.4.1/d9/d5d/...

sjhalayka ( 2018-04-01 14:30:48 -0600 )

Well, then why do we need to train the neural net in a loop even if I set the criteria to 10000 iterations??? Isn't calling the train() function once enough? If I tell the neural net to iterate 10000 times, then one call to the train function should train for 10000 iterations... is that so or not?? Or is it actually doing the same, but from scratch...!

Shivanshu ( 2018-04-01 23:47:22 -0600 )
