Angulu's profile - activity

2016-10-27 08:07:11 -0600 commented question Predicting MLP with single sample

Thank you for the help. Your advice has been so helpful to me...

2016-10-27 07:55:48 -0600 commented question Predicting MLP with single sample

Thank you. Now I am doing my prediction as shown in the predictANN function from my question, at the line float pred = ann->predict(hist, results). This gives me predicted labels between 0 and 3, but my class labels are 1, 2, 3, 4. My question is: can I add 1 to the predicted labels before I check my cumulative scores? Thank you
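
For reference, a minimal sketch of that offset (assuming the one-hot layout discussed in the comments below, with label k marking column k - 1):

    float pred = ann->predict(hist, results); //index of the winning output node, 0..3
    int label = (int)pred + 1;                //shift back to class labels 1..4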

2016-10-27 07:46:33 -0600 commented question Predicting MLP with single sample

Thank you berak. I just realized I had a mistake in my code: all along I was loading the wrong model file before prediction. I have corrected this, and now there is no exception.

2016-10-27 07:22:28 -0600 commented question Predicting MLP with single sample

I did responses.at<float>(i, (x-1)) = 1.f because my class labels are 1, 2, 3, 4 and matrix indexing starts at 0, so class 1 labels are found in the column with index 0 and class 4 labels in the column with index 3. I have also updated the train function; now my output layer has 4 nodes.

2016-10-27 07:12:23 -0600 commented question Predicting MLP with single sample

I have created my responses using the code below and predicted with an empty results matrix, but it still throws the exception "vector subscript out of range".

int classes = remove_dups(labels).size(); //number of distinct class labels
Mat responses = Mat::zeros(rows, classes, CV_32F);
Mat labs = Mat(labels).reshape(0, rows);
//prepare responses where the number of images is not the same in all classes
for (int i = 0; i < hists.rows; i++)
{
    int x = labs.at<int>(i);
    responses.at<float>(i, (x - 1)) = 1.f; //one-hot: label x marks column x - 1
}
2016-10-27 03:29:08 -0600 commented question Predicting MLP with single sample

Thank you for the insight. So should I prepare my responses to look like this:

1 0 0 0
1 0 0 0
0 1 0 0
1 0 0 0

where 1 means the sample belongs to that respective class (columns = classes, rows = number of training samples)?

2016-10-26 12:17:24 -0600 asked a question Predicting MLP with single sample

I have trained an MLP as shown in the code below to classify images into 4 classes. My class labels are 1, 2, 3, 4. This trains the model successfully, but it throws a "vector out of range" exception during prediction.

void train::trainANN(Mat hists, vector<int> labels)
{
    int cols = hists.cols;//size of input layer must be equal to this number of cols
    int rows = hists.rows;//used to determine rows in Mat of responses
    Mat_<float> responses = Mat(labels).reshape(0, rows);

    Ptr<TrainData> trainData = TrainData::create(hists, ROW_SAMPLE, responses);
    Ptr<ANN_MLP> ann = ml::ANN_MLP::create();
    vector<int> layers = { cols, 500, 1 };
    ann->setLayerSizes(layers);
    ann->setActivationFunction(ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM, 1.0, 1.0);
    ann->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP);
    ann->setBackpropMomentumScale(0.1); 
    ann->setBackpropWeightScale(0.1); 
    ann->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 10000, 0.00001));
    ann->train(trainData);
}

This is how I predict with the model

float train::predictANN(Mat hist)//hist is a row matrix(one sample)
{
    Mat results(1, 4, CV_32FC1);
    float pred = ann->predict(hist, results);//This is the line that throws vector out of range exception
    return pred;
}

I have tried to debug this code but have not fixed the error. Kindly help with why the prediction throws a "vector out of range" exception. Thank you. NB: I am using a different number of images per class during training: class 1 = 300 images, class 2 = 340 images, etc.
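
Putting the comment thread above together, a corrected version might look like the sketch below (my reconstruction under the assumptions discussed in the comments: 4 classes one-hot encoded, one output node per class, and ann pointing at the correctly loaded model; not the exact code from the thread):

    void train::trainANN(Mat hists, vector<int> labels)
    {
        int cols = hists.cols; //input layer size
        int rows = hists.rows;
        int classes = 4;       //labels are 1..4
        //one-hot responses: label k marks column k - 1
        Mat responses = Mat::zeros(rows, classes, CV_32F);
        for (int i = 0; i < rows; i++)
            responses.at<float>(i, labels[i] - 1) = 1.f;

        Ptr<TrainData> trainData = TrainData::create(hists, ROW_SAMPLE, responses);
        Ptr<ANN_MLP> ann = ml::ANN_MLP::create();
        vector<int> layers = { cols, 500, classes }; //output layer: one node per class
        ann->setLayerSizes(layers);
        ann->setActivationFunction(ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM, 1.0, 1.0);
        ann->setTrainMethod(ANN_MLP::TrainingMethods::BACKPROP);
        ann->setBackpropMomentumScale(0.1);
        ann->setBackpropWeightScale(0.1);
        ann->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 10000, 0.00001));
        ann->train(trainData);
    }

    float train::predictANN(Mat hist) //hist is a single row sample
    {
        Mat results; //let predict allocate the 1 x classes output row
        float pred = ann->predict(hist, results); //index of the strongest output node, 0..3
        return pred + 1; //shift back to class labels 1..4
    }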

2016-10-21 05:58:50 -0600 commented answer Preparing LBP Texture Feature for SVM

I will take that advice. Thank you for your help, sir.

2016-10-21 05:28:09 -0600 commented answer Preparing LBP Texture Feature for SVM

Thank you. I will give it a shot in a short while and get back here

2016-10-21 04:56:53 -0600 commented answer Preparing LBP Texture Feature for SVM

Yes, I know the theory of uniform patterns, but I have no clue how to use it to compress a 256-bin histogram to a 59-bin histogram. I have tried to follow your answer at Uniform LBP but I am not getting it clearly. I will appreciate any further explanation, especially how to do the look-up table with LUT. Regards
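
A minimal sketch of the 256-to-59 mapping with cv::LUT (assuming the LBP image, lbpImage below, is CV_8U as produced by OLBP; the bin numbering is arbitrary as long as uniform codes get distinct bins and all non-uniform codes share one):

    //a byte is a "uniform" pattern if its bits show at most 2 circular 0/1 transitions
    static int transitions(int p)
    {
        int t = 0;
        for (int i = 0; i < 8; i++)
            t += ((p >> i) & 1) != ((p >> ((i + 1) % 8)) & 1);
        return t;
    }

    //256-entry table: the 58 uniform codes get bins 0..57, all others share bin 58
    Mat uniformLUT()
    {
        Mat lut(1, 256, CV_8U);
        int next = 0;
        for (int i = 0; i < 256; i++)
            lut.at<uchar>(i) = (transitions(i) <= 2) ? (uchar)next++ : (uchar)58;
        return lut;
    }

    //usage: remap the LBP image, then build 59-bin histograms instead of 256-bin ones
    Mat mapped;
    LUT(lbpImage, uniformLUT(), mapped);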

2016-10-21 03:34:22 -0600 commented answer Preparing LBP Texture Feature for SVM

Thank you berak. This solves the problem, but the resultant data has so many zeros, which I think is OK.

2016-10-21 03:19:40 -0600 commented question Preparing LBP Texture Feature for SVM

I have edited and added the part where I am calculating LBP and histograms and populating the vector. Thank you too

2016-10-21 02:46:12 -0600 asked a question Preparing LBP Texture Feature for SVM

I have extracted LBP texture features from an image by splitting the image into small cells, calculating LBP for each sub-image, and then concatenating the spatial histograms of the sub-images, following Philipp Wagner's code. I push these spatial histograms into a vector<Mat>. I understand my spatial texture features are now contained in the histograms.

The problem I have is that I want to prepare this texture feature vector for training an SVM. I convert the vector<Mat> hists of histograms to a row Mat where each row represents a sample, using the code below.

Mat cv::asRowMatrix(const vector<Mat>& src, int rtype, double alpha, double beta)
{
    // number of samples
    size_t n = src.size();
    // return empty matrix if no matrices given
    if (n == 0)
        return Mat();
    // dimensionality of (reshaped) samples
    size_t d = src[0].total();
    // create data matrix
    Mat data((int)n, (int)d, rtype);
    // now copy data
    for (int i = 0; i < (int)n; i++) {
        // make sure data can be reshaped, throw exception if not!
        if (src[i].total() != d) {
            string error_message = format("Wrong number of elements in matrix #%d! Expected %d was %d.", i, (int)d, (int)src[i].total());
            CV_Error(CV_StsBadArg, error_message);
        }
        // get a hold of the current row
        Mat xi = data.row(i);
        // make reshape happy by cloning for non-continuous matrices
        if (src[i].isContinuous()) {
            src[i].reshape(1, 1).convertTo(xi, rtype, alpha, beta);
        }
        else {
            src[i].clone().reshape(1, 1).convertTo(xi, rtype, alpha, beta);
        }
    }
    return data;
}

When I do this as Mat data = asRowMatrix(hists, CV_32F, 1, 0), the matrix data comes back with the same value repeated down each column, as if all the images were identical. I get the same result if I use the LBP image itself instead of its histogram. I would like to prepare the LBP texture features for training an SVM. I have spent weeks figuring this out but am now stuck. Kindly help. Regards

This is how I am filling the vector, using the code below

string path = ".\\Images\\*.JPG"; //path to my images
vector<String> names; //hold all URLs to images
cv::glob(path, names, false); //Reading URLs to names
//Loop through names, load image and calculate lbp and hists
vector<Mat> hists;
Mat img;
Mat lbp;
Mat hist;
for(int i = 0; i < names.size(); i++)
{
      img = imread(names[i]);
      cvtColor(img, img, CV_BGR2GRAY);
      lbp::OLBP(img, lbp); //call the OLBP function in Philipp's code
      lbp::spatial_histogram(lbp, hist, 256, Size(10, 10)); //call the spatial_histogram function in Philipp's code
      hists.push_back(hist); //push back histogram of this image to vector of mat
}
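
One possible cause of the identical rows reported above (an assumption on my part, not something confirmed in the thread): cv::Mat headers share reference-counted pixel data, so if spatial_histogram reuses hist's buffer across iterations, every element of hists ends up viewing the same memory. Cloning before pushing rules that out:

    hists.push_back(hist.clone()); //deep copy so each sample keeps its own buffer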
2016-09-20 03:41:50 -0600 commented question Support Vector Regression prediction values

This is the training code I am using for multi-class classification. Kindly advise how to improve it for better training accuracy.

void train::trainSVM(Mat hists, vector<int> labels){

    Ptr<TrainData> trainData = TrainData::create(hists, ml::ROW_SAMPLE, labels);
    Ptr<SVM> svm = SVM::create();
    svm->setKernel(SVM::KernelTypes::RBF);//RBF needs only 2 parameters C and gamma
    svm->setType(SVM::Types::C_SVC);
    svm->setC(1.0);
    svm->setGamma(0.5);
    svm->train(trainData);  
}
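
One thing worth trying (my suggestion, not something from the thread) is letting OpenCV pick C and gamma by cross-validation instead of fixing them by hand:

    Ptr<TrainData> trainData = TrainData::create(hists, ml::ROW_SAMPLE, labels);
    Ptr<SVM> svm = SVM::create();
    svm->setKernel(SVM::KernelTypes::RBF);
    svm->setType(SVM::Types::C_SVC);
    svm->trainAuto(trainData, 10); //grid-searches C, gamma, etc. with 10-fold cross-validation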
2016-09-20 03:38:42 -0600 commented question Support Vector Regression prediction values

OK. Thank you. Maybe I need to ask a different question, because I have done multi-class classification and got very low accuracy. I am grateful.

2016-09-20 03:20:08 -0600 commented question Support Vector Regression prediction values

Thank you. I have tried multi-class classification using One-vs-All, but I am confused about how to combine these N classifiers for prediction. Could you advise me on how to approach that? I am grateful for your input.
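
A rough sketch of one way to combine them, assuming one binary SVM per class trained with +1 for that class and -1 for the rest (note that OpenCV's raw decision value has a sign convention tied to the label order, so the comparison may need flipping):

    //pick the class whose one-vs-all SVM gives the largest decision value
    int predictOneVsAll(const vector<Ptr<SVM>>& models, const Mat& sample)
    {
        int best = -1;
        float bestScore = -FLT_MAX;
        for (int c = 0; c < (int)models.size(); c++)
        {
            //RAW_OUTPUT returns the signed decision-function value instead of a label
            float score = models[c]->predict(sample, noArray(), StatModel::RAW_OUTPUT);
            if (score > bestScore) { bestScore = score; best = c; }
        }
        return best; //index of the winning classifier/class
    }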

2016-09-20 02:27:18 -0600 commented question Support Vector Regression prediction values

What would you advise I use? My feature vector is a 2D Mat with 5 samples from each of the 10 classes. My labels are of the form 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, ..., 10, 10, 10, 10, 10

2016-09-19 12:14:32 -0600 asked a question Support Vector Regression prediction values

I am training a regression model to predict a label given a feature vector. Training and testing samples are drawn from 10 classes. I have trained an SVM for regression as shown in the code below.

void train::trainSVR(Mat data, vector<int> labels)
{
    Ptr<TrainData> trainData = TrainData::create(data, ml::ROW_SAMPLE, labels);
    Ptr<SVM> svr = SVM::create();
    svr->setKernel(SVM::KernelTypes::POLY);
    svr->setType(SVM::Types::NU_SVR);//For n-class classification problem with imperfect class separation
    //svr->setC(5);
    //svr->setP(0.01);
    svr->setGamma(10.0);//for poly
    svr->setDegree(0.1);//for poly
    svr->setCoef0(0.0);//for poly
    svr->setNu(0.1);

    cout << "Training Support Vector Regressor..." << endl;
    //svr->trainAuto(trainData);
    svr->train(trainData);
    bool trained = svr->isTrained();
    if (trained)
    {
        cout << "SVR Trained. Saving..." << endl;
        svr->save(".\\Trained Models\\SVR Model.xml");
        cout << "SVR Model Saved." << endl;
    }
}

I predict with the model as shown in the function below

void train::predictSVR(Mat data, vector<int> labels){
    Mat_ <float> output;
    vector<int> predicted;
    //Create svm smart pointer and load trained model
    Ptr<ml::SVM> svr = Algorithm::load<ml::SVM>(".\\Trained Models\\SVR Model.xml");
    ofstream prediction;
    prediction.open("SVR Prediction.csv", std::fstream::app);//open file for writing predictions in append mode
    for (int i = 0; i < labels.size(); i++)
    {
        float pred = svr->predict(data.row(i));
        prediction << labels[i] << "," << pred << endl;

    }
}

The prediction gives me some value against every label. My questions are:

  1. What does the value against the label signify?
  2. How do I retrieve the mean squared error (MSE), the support vectors, and the constants of the regression function y = wx + b? How do I get b and w? Kindly advise.
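
On question 2, a sketch of my own (not from the thread): the MSE is not stored in the model and has to be computed from predictions, and w and b are only directly recoverable for a LINEAR kernel; with POLY, as used above, the regressor is not a single y = wx + b:

    //support vectors, one per row
    Mat sv = svr->getSupportVectors();
    //weights alpha and offset rho of the decision function
    Mat alpha, svidx;
    double rho = svr->getDecisionFunction(0, alpha, svidx);
    alpha.convertTo(alpha, CV_64F); //the returned type varies across OpenCV versions

    //LINEAR kernel only: y = w.x + b with w = sum(alpha_i * sv_i) and b = -rho
    Mat w = Mat::zeros(1, sv.cols, CV_64F);
    for (int i = 0; i < sv.rows; i++)
    {
        Mat svRow;
        sv.row(i).convertTo(svRow, CV_64F);
        w += alpha.at<double>(i) * svRow;
    }
    double b = -rho;

    //MSE over a labelled set (data and labels as in predictSVR above)
    double se = 0;
    for (int i = 0; i < (int)labels.size(); i++)
    {
        double e = svr->predict(data.row(i)) - labels[i];
        se += e * e;
    }
    double mse = se / labels.size();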
2016-08-29 00:46:21 -0600 received badge  Enthusiast
2016-08-28 11:51:20 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

When I write to a file as image.at<float>(i, j), I see very small values, but if I write as image.at<int>(i, j) I get values between 0 and 255. My question is: should I use the CV_8UC1 type instead of CV_32FC1 for preparing training data? Converting to CV_32FC1 seems to make the values minuscule, and normalization after PCA makes them all 0.

2016-08-28 11:35:41 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

Those values I am getting from a gray-scale image before LBP feature extraction. I just read the image from a file, convert it to gray-scale, apply a Gaussian blur, and then dump it to a CSV file. I was shocked to see those values. Does Gaussian blur really change gray-level values to that extent?

2016-08-28 11:08:42 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

Typically, what should be the range of gray values in a gray-scale image? I have dumped my original image to a CSV file after performing a Gaussian blur on it. The values in the file are as small as 1.0E-43! I was expecting the pixel values to be between 0 and 255. Kindly advise.
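
An aside that may explain the 1.0E-43 values: they are a classic symptom of reading 8-bit pixels through a float accessor, not of the blur itself; reinterpreting raw uchar bytes as float yields denormal numbers of exactly that magnitude. A sketch of the explicit conversion (assuming the image came from imread; name stands for the image path):

    Mat img = imread(name), gray, f;
    cvtColor(img, gray, CV_BGR2GRAY);         //gray is CV_8UC1, values 0..255
    GaussianBlur(gray, gray, Size(5, 5), 0);  //blur keeps the 0..255 range
    gray.convertTo(f, CV_32FC1, 1.0 / 255.0); //f holds proper floats in [0, 1]
    //f.at<float>(i, j) is now meaningful; gray.at<float>(i, j) would not be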

2016-08-28 05:29:16 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

I tried this simple approach with the trainData.push_back(samples[i].reshape(1, 1)); modification on the last line, but the same data still appears in the matrices.

2016-08-28 04:53:19 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

Below is sample data from Mat trainData, for 4 rows and 6 columns:

0.0111769 0.000657462 0.000657462 0 0 0.00197239
0.0111769 0.000657462 0.000657462 0 0 0.00197239
0.0111769 0.000657462 0.000657462 0 0 0.00197239
0.0111769 0.000657462 0.000657462 0 0 0.00197239

The value 0.000657462 appears in more columns than any other value. Kindly advise what could be wrong. Thank you.

2016-08-28 04:47:35 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

I just dumped Mat trainData, Mat projection (the PCA data), and Mat ldaProjected to separate CSV files to visualize the data. Mat trainData, the one I get after converting vector<Mat> to Mat, has different values in each of the 4096 columns, but a good number of columns have the value 0 and some values appear in several columns. Mat projection from the PCA operation has the same value, 2.79E-06, in all rows and columns. Mat ldaProjected from the LDA operation has 0 in all rows and columns. So probably my data preparation pipeline could be the problem?

2016-08-28 04:13:22 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

Thank you berak. I have more than one sample per class (at least 4 and at most 7). Should I use the same number of images per class? For 40 classes, I have a total of 220 images that I am using to train the SVM.

2016-08-28 03:16:09 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

Even in my LDA model, the eigenvectors consist of many 0s and only a few 1s, with no other values in the XML file. Kindly advise on the logic of converting from vector<Mat> to Mat, and generally on how to prepare a training matrix. Thank you

2016-08-28 03:07:16 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

This is how I am converting from the vector<Mat> of histograms to Mat trainData

void convertToMat(vector<Mat> &samples, Mat &trainData){
    int rows = samples.size();
    int cols = max(samples[0].cols, samples[0].rows);
    Mat tmp(1, cols, CV_32FC1); //used for transposition if needed
    trainData = Mat(rows, cols, CV_32FC1);
    vector< Mat >::const_iterator itr = samples.begin();
    vector< Mat >::const_iterator end = samples.end();
    for (int i = 0; itr != end; ++itr, ++i){
        CV_Assert(itr->cols == 1 || itr->rows == 1); //each sample must be a row or column vector
        if (itr->cols == 1){
            transpose(*(itr), tmp); //column vector -> row vector
            tmp.copyTo(trainData.row(i));
        }
        else if (itr->rows == 1){
            itr->copyTo(trainData.row(i));
        }
    }
}
2016-08-28 03:04:56 -0600 commented question Using PCA and LDA for dimensionality reduction for SVM

I have tried both methods but things are still not right. Could I be preparing my data wrongly? When I look at my trained model, I have 3304 support vectors, but they are all populated with 0s. I am extracting LBP features from images, then pushing the LBP histograms to a vector of Mat, and then converting this vector<Mat> of histograms to Mat trainData as shown in the convertToMat function above. Kindly advise whether my logic is OK.