
ANN_MLP huge training sets

I am having difficulty understanding the artificial neural network in OpenCV.

I found a code sample that trains the ANN to perform the XOR operation. The code looks something like this:

TermCriteria termCrit = TermCriteria(TermCriteria::Type::COUNT + TermCriteria::Type::EPS, 10000, 0.000001);
mlp->setTermCriteria(termCrit);

Ptr<TrainData> trainingData = TrainData::create(inputTrainingData, SampleTypes::ROW_SAMPLE, outputTrainingData);

mlp->train(trainingData);

… which, as I understand it, trains the network for up to 10000 iterations, stopping early if the error drops below 0.000001.

This suggests that all of the training data must be available to the ANN before training begins. Is that true, and if so, what happens when the training data consists of thousands of images?

I tried the following code, but it appears that the training is reset whenever mlp->train() is called:

TermCriteria termCrit = TermCriteria(TermCriteria::Type::COUNT + TermCriteria::Type::EPS, 1, 0.000001);
mlp->setTermCriteria(termCrit);

Ptr<TrainData> trainingData = TrainData::create(inputTrainingData, SampleTypes::ROW_SAMPLE, outputTrainingData);

for(size_t i = 0; i < 10000; i++)
    mlp->train(trainingData);

Is there no way to continue the training instead of resetting it each time?