Problem using Bayes Classifier

asked 2017-12-28 05:05:22 -0500

XanderC_

Hello, I'm a first-timer with OpenCV, and although I have read the documentation and the developer tutorials found here and there, I still haven't managed to get the machine learning library to work in any way. I'm trying to train a Bayes classifier on a dataset of pothole images (on which I have manually marked regions using the OpenCV annotation tool).

What I specifically don't understand is what Bayes->train(...) expects as the training set and the class/label set. A vector of Mats? A Mat of Mats? The latter doesn't work, since the images all have different sizes and some of them aren't that big, so having to resize them wouldn't be a very nice option. The former doesn't work either, giving me the following error:

OpenCV Error: Assertion failed (0 <= i && i < (int)v.size()) in cv::debug_build_guard::_InputArray::getMat_, file D:\opencv\sources\modules\core\src\matrix.cpp, line 1275

The code I'm executing is the following:

enum CLASSES {
    BACKGROUND, // 0
    POTHOLE     // 1
};

struct Box {
    Point2i bottom_right;
    Point2i top_left;

    bool contains(Point2i);
};

struct Annotation {
    string image_path;
    string image_name;
    vector<Box> objects;
};


const string data_root_path = "D:/Xander_C/Downloads/sctm/potehole_detection_dataset/";

bool Box::contains(Point2i p) {
    return p.x <= this->bottom_right.x && p.x >= this->top_left.x
        && p.y <= this->bottom_right.y && p.y >= this->top_left.y;
}

void hold() {
    // wait for a keypress before exiting
    char c;
    scanf_s(" %c", &c, 1);
}

void load_annotations(const string & filename, vector<Annotation> & annotations)
{
    ifstream infile(filename);
    string line;
    while (std::getline(infile, line))
    {
        istringstream iss(line);
        vector<string> results(istream_iterator<string>{iss}, istream_iterator<string>());
        vector<Box> locators;

        for (size_t i = 2; i < results.size(); i += 4)
        {
            Point2i bottom_right(stoi(results[i]), stoi(results[i + 1]));
            Point2i top_left(stoi(results[i + 2]), stoi(results[i + 3]));

            Box b = { bottom_right, top_left };
            locators.push_back(b);
        }

        string image_name = results[0].substr(results[0].find_last_of("\\") + 1);

        clog << image_name << endl;

        Annotation a = { results[0], image_name, locators };
        annotations.push_back(a);
    }
}


void images_load(const vector<Annotation> & annotations, vector< Mat > & loaded_images)
{
    for (size_t i = 0; i < annotations.size(); ++i)
    {
        Mat img = imread(annotations[i].image_path, IMREAD_COLOR);
        if (img.empty())
            cout << annotations[i].image_path << " is invalid!" << endl;
        else
            loaded_images.push_back(img);
    }
}

void mask_extraction(const vector<Mat> & images, const vector<Annotation> & annotations, vector<Mat> & training_set, vector<Mat> & classes_set)
{
    for (size_t i = 0; i < images.size(); ++i)
    {
        Mat image = images[i];
        vector<Box> windows = annotations[i].objects;
        Mat classes(image.rows, image.cols, CV_32SC1); // integer labels, so at<int>() is valid

        for (int row = 0; row < image.rows; ++row)
        {
            for (int col = 0; col < image.cols; ++col)
            {
                classes.at<int>(row, col) = CLASSES::BACKGROUND;

                for (size_t w = 0; w < windows.size(); ++w)
                {
                    // Point2i is (x, y), i.e. (col, row)
                    if (windows[w].contains(Point2i(col, row)))
                        classes.at<int>(row, col) = CLASSES::POTHOLE;
                }
            }
        }

        image.convertTo(image, CV_32FC1);

        training_set.push_back(image.reshape(1, 1));
        classes_set.push_back(classes.reshape(1, 1));
    }
}

int main(int argc, char*argv[]) {

    string annotations_path = data_root_path + "positive_annotations.txt";
    string algorithm_save_location = data_root_path + "trained_bayes.xml";

    //Load images from annotation file
    vector<Annotation> ...


can you try to explain what your data is, and what you're trying to achieve here?

opencv's ml classes need a single Mat (with stacked row-features) for the data, and a single-column Mat with integer or float labels (one per row-feature)

you seem to be doing something completely different, so can you explain, please?

berak ( 2017-12-28 06:19:13 -0500 )

Thank you for your answer!

My data are images of potholes, like this one

What I wanted to achieve is to train a classifier that is able to say "this image contains potholes, this one does not".

But the way I thought of doing this was pixel by pixel, not image by image: assigning a label to each pixel of the image depending on whether the pixel was inside a marked region (pothole, i.e. positive) or outside one (negative).

So, from what I understand, with this library you can only classify whole images, assigning labels 1:1. I'll try it that way by adding negative images, but I'll probably need to cut off the regions outside the road with some pre-processing to make this work.

XanderC_ ( 2017-12-28 06:42:44 -0500 )

PS: Will having images of different sizes be a problem? Or do I have to resize the images so they all have the same dimensions?

XanderC_ ( 2017-12-28 06:44:57 -0500 )

"Having images with different sizes will be a problem" -- yes, that is problem 1 also, you need 1 label per image, not many, that's problem 2

and again, this is not an answer so far; that will follow once we know your exact situation / requirements.

berak ( 2017-12-28 06:52:32 -0500 )

hmm, can it be that you wanted a pothole detector (is there one, and where?), not a classifier?

if so, maybe the HOGDescriptor would be more adequate than Bayes here

berak ( 2017-12-28 06:56:13 -0500 )

My requirements are to build an application that will run on a Raspberry Pi 2+, analyze in real time the video captured with the Pi Camera, and detect the presence of potholes on the road ahead. So yes, I need to detect potholes, but I was trying to figure out whether it was possible to detect them by applying a classifier to the video frames.

If the detection is positive, the GPS coordinates of the vehicle will be stored on a remote DB.

XanderC_ ( 2017-12-28 07:14:13 -0500 )

As far as I know there are some papers on this; in general they use different image-processing techniques, some to detect the dark edges and shadows of potholes on the road surface, while others just work on frontal images of potholes, like this one, and do further processing. The latter lack applicability in the real world, since you will never have images like that.

I hope I have been more precise this time.

XanderC_ ( 2017-12-28 07:14:41 -0500 )