
majkel's profile - activity

2017-12-16 04:21:38 -0600 commented question Setting up MLP for image recognition

@sjhalayka, since these are binary flags, it has no effect on the result - 0100 + 0010 is 0110, the same as using the OR operator
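
A minimal sketch of what I mean (assuming the TrainFlags keep their usual distinct bit values):

int withPlus = ANN_MLP::TrainFlags::NO_INPUT_SCALE
             + ANN_MLP::TrainFlags::NO_OUTPUT_SCALE;  // 0010 + 0100 = 0110
int withOr   = ANN_MLP::TrainFlags::NO_INPUT_SCALE
             | ANN_MLP::TrainFlags::NO_OUTPUT_SCALE;  // 0010 | 0100 = 0110
// withPlus == withOr, because no bit is set twice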

2017-12-15 20:02:18 -0600 received badge  Student (source)
2016-06-03 04:38:38 -0600 received badge  Necromancer (source)
2016-06-02 10:42:37 -0600 answered a question Neural Networks UPDATE_WEIGHTS does not work

You can't train the NN with the UPDATE_WEIGHTS flag from the start. First, you have to run one training call without this flag - the initial weights will be computed with the Nguyen-Widrow algorithm.

Then you can train the NN with UPDATE_WEIGHTS in a loop.
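
A minimal sketch of the whole sequence (mlp and trainingData are placeholders for your network and training data):

// First call without UPDATE_WEIGHTS: this run computes the initial
// weights (Nguyen-Widrow) and performs the first training pass.
mlp->train(trainingData);

// Further calls keep the existing weights and continue training.
for (int i = 0; i < 100; i++) {
    mlp->train(trainingData, ANN_MLP::TrainFlags::UPDATE_WEIGHTS);
}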

2016-06-02 09:58:44 -0600 commented question MLP Train iteration limit - bug?

I'm not optimising yet, just checking the learning process - if I get complete garbage and nonsense results from the training data, the NN is useless. And the question was only about the training bug - I just want to increase the training iterations so I can increase the hidden layer size...

2016-06-02 08:58:31 -0600 received badge  Enthusiast
2016-06-01 13:18:36 -0600 commented question MLP Train iteration limit - bug?

If the NN can't recognize the training images, it also can't recognize the testing images ;)

2016-06-01 09:30:46 -0600 commented question MLP Train iteration limit - bug?

But when I train the NN with 5 images, it can't recognize those 5 images again! I'm not talking about generalization and learning; the problem is with the training iterations - I decreased the hidden layer size to 50 and it can learn and recognize all 4 images per class ;)

2016-06-01 06:42:13 -0600 asked a question MLP Train iteration limit - bug?

I am trying to train my neural network to classify images. The whole training data preparation and other stuff works OK; the NN can learn to recognize 5 images (1 per class).

But when I give it 2 images per class, I get quite bad results - I think it's because the NN is undertrained. So I updated the term criteria from 1000 to 10 000 iterations:

mlp->setTermCriteria(TermCriteria(
    TermCriteria::Type::MAX_ITER, 
    10000,
    0.0001
));

But with no success - still the same training time and recognition results. It looks like the max iteration parameter only has an effect in the 1-3000 range - higher values don't make a difference.

So I tried to update the weights in a loop:

mlp->train(trainingData);

for (int i = 0; i < 100; i++) {
    cout << "Traning iteration: " << i << endl;
    mlp->train(trainingData
        , ANN_MLP::TrainFlags::UPDATE_WEIGHTS
    );
}

But only the first 2-3 iterations take time; the others aren't learning anything and I just get text spamming the console.

My layer sizes are 1250-300-5. When I set the hidden layer to 100 I get better results, and with 50 the results are perfect, so the NN itself is working OK, but I can't force OpenCV to extend the training time.

So my question is: how do I force the OpenCV ANN_MLP to perform longer training? Any tips will be helpful ;)

2016-05-29 09:41:33 -0600 commented question Setting up MLP for image recognition

Thanks, the backprop weight scale was too big - I set it to 0.0001 or 0.001 and got better results. Then I changed the hidden layer size to 100 and I was able to train the NN on my images ;)
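
For reference, a rough sketch of the settings I ended up with (the exact values are just what happened to work for my data; 5 output classes in my case):

// Smaller backprop weight scale - the default was too large here.
mlp->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP);
mlp->setBackpropWeightScale(0.001);  // also tried 0.0001

// Smaller hidden layer: input - 100 - classes
Mat layersSize = (Mat_<int>(3, 1) << IMAGE_DATA_SIZE, 100, 5);
mlp->setLayerSizes(layersSize);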

2016-05-29 02:11:30 -0600 asked a question Setting up MLP for image recognition

Hi! I am new to the OpenCV world and to neural networks, but I have some coding experience in C++/Java.


I created my first ANN MLP and taught it XOR:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/ml/ml.hpp>

#include <iostream>
#include <iomanip>

using namespace cv;
using namespace ml;
using namespace std;

void print(Mat& mat, int prec)
{
    for (int i = 0; i<mat.size().height; i++)
    {
        cout << "[";
        for (int j = 0; j<mat.size().width; j++)
        {
            cout << fixed << setw(2) << setprecision(prec) << mat.at<float>(i, j);
            if (j != mat.size().width - 1)
                cout << ", ";
            else
                cout << "]" << endl;
        }
    }
}

int main()
{
    const int hiddenLayerSize = 4;
    float inputTrainingDataArray[4][2] = {
        { 0.0, 0.0 },
        { 0.0, 1.0 },
        { 1.0, 0.0 },
        { 1.0, 1.0 }
    };
    Mat inputTrainingData = Mat(4, 2, CV_32F, inputTrainingDataArray);

    float outputTrainingDataArray[4][1] = {
        { 0.0 },
        { 1.0 },
        { 1.0 },
        { 0.0 }
    };
    Mat outputTrainingData = Mat(4, 1, CV_32F, outputTrainingDataArray);

    Ptr<ANN_MLP> mlp = ANN_MLP::create();

    Mat layersSize = Mat(3, 1, CV_16U);
    layersSize.row(0) = Scalar(inputTrainingData.cols);
    layersSize.row(1) = Scalar(hiddenLayerSize);
    layersSize.row(2) = Scalar(outputTrainingData.cols);
    mlp->setLayerSizes(layersSize);

    mlp->setActivationFunction(ANN_MLP::ActivationFunctions::SIGMOID_SYM);

    TermCriteria termCrit = TermCriteria(
        TermCriteria::Type::COUNT + TermCriteria::Type::EPS,
        100000000,
        0.000000000000000001
    );
    mlp->setTermCriteria(termCrit);

    mlp->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP);

    Ptr<TrainData> trainingData = TrainData::create(
        inputTrainingData,
        SampleTypes::ROW_SAMPLE,
        outputTrainingData
    );

    mlp->train(trainingData
        /*, ANN_MLP::TrainFlags::UPDATE_WEIGHTS
        + ANN_MLP::TrainFlags::NO_INPUT_SCALE
        + ANN_MLP::TrainFlags::NO_OUTPUT_SCALE*/
    );

    for (int i = 0; i < inputTrainingData.rows; i++) {
        Mat sample = Mat(1, inputTrainingData.cols, CV_32F, inputTrainingDataArray[i]);
        Mat result;
        mlp->predict(sample, result);
        cout << sample << " -> ";// << result << endl;
        print(result, 0);
        cout << endl;
    }

    return 0;
}

It works very well for this simple problem; I also taught this network the 1-10 to binary conversion.
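
(A rough sketch of how such a training set can be laid out - the exact encoding here is only an illustration, not necessarily what my code does:)

// 10 one-hot input rows for the numbers 1..10, 4 binary output bits each,
// e.g. 3 -> input with a 1 at index 2, output { 0, 0, 1, 1 }.
Mat binIn  = Mat::zeros(10, 10, CV_32F);
Mat binOut = Mat::zeros(10, 4,  CV_32F);
for (int n = 1; n <= 10; n++) {
    binIn.at<float>(n - 1, n - 1) = 1.0f;
    for (int bit = 0; bit < 4; bit++)
        binOut.at<float>(n - 1, 3 - bit) = (float)((n >> bit) & 1);
}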


But I need to use the MLP for simple image classification - road signs. I wrote the code for loading the training images and preparing the matrix for training, but I'm not able to train the network - it "learns" in one second even with 1 000 000 iterations! And it produces garbage results, the same for all inputs!

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/ml/ml.hpp>

#include <iostream>
#include <chrono>
#include <memory>
#include <iomanip>
#include <climits>

#include <Windows.h>

using namespace cv;
using namespace ml;
using namespace std;
using namespace chrono;

const int WIDTH_SIZE = 50;
const int HEIGHT_SIZE = (int)(WIDTH_SIZE * sqrt(3)) / 2;
const int IMAGE_DATA_SIZE = WIDTH_SIZE * HEIGHT_SIZE;

void print(Mat& mat, int prec)
{
    for (int i = 0; i<mat.size().height; i++)
    {
        cout << "[ ";
        for (int j = 0; j<mat.size().width; j++)
        {
            cout << fixed << setw(2) << setprecision(prec) << mat.at<float>(i, j);
            if (j != mat.size().width - 1)
                cout << ", ";
            else
                cout << " ]" << endl;
        }
    }
}

bool loadImage(string imagePath, Mat& outputImage)
{
    // load image in grayscale
    Mat image = imread(imagePath, IMREAD_GRAYSCALE);
    Mat temp;

    // check for invalid input
    if (image.empty()) {
        cout << "Could not open or find the image" << std::endl;
        return false;
    }

    // resize the image
    Size size(WIDTH_SIZE, HEIGHT_SIZE ...