
Unable to generate Freeman chain code using OpenCV

asked 2018-10-29 01:30:41 -0600

I have data for some 77 vertices. I am trying to generate the Freeman chain code for this contour using OpenCV in C++.

I have written the following function. It basically takes the x and y values as inputs and generates a Freeman chain code.

void GenerateFreemanChainCode(std::vector<double> X, std::vector<double> Y, std::vector<char> &freemanChainCode)
{
    cv::Mat img = cv::Mat::zeros(2, 2, CV_8UC1);
    for (int Idx = 0; Idx < X.size()-1; Idx++)
    {
        cv::line(img, Point(X.at(Idx), Y.at(Idx)), Point(X.at(Idx+1), Y.at(Idx+1)), cv::Scalar(255), 1);
    }

    imshow("Test", img);

    vector<vector<Point> > contours;

    findContours(img, contours, RETR_EXTERNAL, CV_CHAIN_CODE);
    //cout << Mat(contours[0]) << endl;

    findContours(img, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    //cout << "CHAIN_APPROX_SIMPLE" << endl;
    //cout << Mat(contours[0]) << endl;

    CvChain* chain = 0;
    CvMemStorage* storage = 0;
    storage = cvCreateMemStorage(0);
    cvFindContours(&IplImage(img), storage, (CvSeq**)(&chain), sizeof(*chain), CV_RETR_EXTERNAL, CV_CHAIN_CODE);

    for (; chain != NULL; chain = (CvChain*)chain->h_next)
    {
        //chain=(CvChain*)chain ->h_next; 
        //if(chain==NULL){break;}
        CvSeqReader reader;
        int i, total = chain->total;
        cvStartReadSeq((CvSeq*)chain, &reader, 0);
        //printf("--------------------chain\n");

        for (i = 0; i<total; i++)
        {
            char code;
            CV_READ_SEQ_ELEM(code, reader);
            //printf("%d", code);
            freemanChainCode.push_back(code);
        }
    }
}

My vertices data looks as follows (X and Y values separated by a space):

0 0
0 0.00144928
0.00144928 0.00144928
0.00289855 0.00144928
0.00724638 0.00434783
0.015942 0.00434783
0.0275362 0.00434783
0.0434783 0.00434783
0.0637681 0.00434783
0.0782609 0.00434783
0.0869565 0.00434783
0.102899 0.0057971
0.111594 0.0057971
0.126087 0.0057971
0.136232 0.0057971
0.15942 0.0057971
0.172464 0.0057971
0.182609 0.00434783
0.198551 0.00434783
0.214493 0.00434783
0.23913 0.00434783
0.250725 0.00434783
0.269565 0.00434783
0.284058 0.00434783
0.305797 0.00434783
0.331884 0.00434783
0.344928 0.00434783
0.371014 0.0057971
0.386957 0.0057971
0.402899 0.00724638
0.423188 0.00724638
0.436232 0.00724638
0.45942 0.00724638
0.47971 0.00724638
0.5 0.00724638
0.513043 0.00724638
0.524638 0.00724638
0.553623 0.00724638
0.578261 0.00724638
0.592754 0.00724638
0.602899 0.00724638
0.62029 0.00724638
0.634783 0.00724638
0.646377 0.00724638
0.672464 0.00724638
0.686957 0.00724638
0.701449 0.00724638
0.718841 0.00724638
0.737681 0.00724638
0.763768 0.00724638
0.773913 0.00724638
0.797101 0.00724638
0.817391 0.00724638
0.824638 0.00724638
0.826087 0.00724638
0.831884 0.00724638
0.846377 0.00724638
0.862319 0.00724638
0.886957 0.00724638
0.895652 0.00724638
0.902899 0.00724638
0.917391 0.00724638
0.928986 0.00724638
0.949275 0.00724638
0.965217 0.00724638
0.997101 0.00869565
1 0.00869565

I would be really glad if someone can help me identify the mistake. Thanks in advance.


Comments

which opencv version are you using here ?

also: cv::line() expects integer coords, not double. where do you get those contours from, originally ?

berak ( 2018-10-29 01:48:00 -0600 )

I am using 3.4.3. I have a software which records the vertices data while the user draws a certain geometrical figure, so I get the vertices data from that. What do you suggest here then?

Programming_Enthusiast ( 2018-10-29 02:03:58 -0600 )

i can only guess, but your lines probably end up in a single point at (1,0).

you could try to scale the vertices with the image size.

then, using deprecated 1.0 c-api code is mostly a bad idea. not sure even, if findContours() still calculates a freeman chain. (maybe you're better off, doing that manually)

berak ( 2018-10-29 02:07:18 -0600 )

Basically, from my raw vertices data, I am scaling the data such that both x and y coordinate values lie between 0 and 1. Could you please elaborate a bit more as to how I can modify my code to scale my vertices data with the image size? In the above code, I guess the image size is 2x2, right? Could you please explain the changes to be done exactly?

Programming_Enthusiast ( 2018-10-29 02:12:26 -0600 )

such that both x and y coordinate values lie between 0 and 1

well, that's problem #1. make it : x=[0..w] y=[0..h] instead.

also, the image you draw to, should be significantly larger, like 256x256.

do you understand, that pixel coords are integers ?

berak ( 2018-10-29 02:15:07 -0600 )

ah I see! So, since currently I have my x and y values between 0 and 1, I would multiply each x value with W and each y value with H, and then cast each value to integer. I will also make the following change in the code: cv::Mat img = cv::Mat::zeros(256, 256, CV_8UC1);

That should be enough, right?

Programming_Enthusiast ( 2018-10-29 04:07:12 -0600 )
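The scaling fix agreed on in the comments could be sketched as a small helper (the function name is hypothetical; it assumes the input coordinates are already normalised to [0, 1] as described above, and rounds to the integer pixel coordinates that cv::line() expects):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical helper: map coordinates normalised to [0, 1] onto integer
// pixel positions of a W x H image, rounding to the nearest pixel.
std::vector<std::pair<int, int>> toPixels(const std::vector<double>& X,
                                          const std::vector<double>& Y,
                                          int W, int H) {
    std::vector<std::pair<int, int>> px;
    for (std::size_t i = 0; i < X.size(); ++i) {
        int x = static_cast<int>(std::lround(X[i] * (W - 1)));  // stays in [0, W-1]
        int y = static_cast<int>(std::lround(Y[i] * (H - 1)));  // stays in [0, H-1]
        px.push_back({x, y});
    }
    return px;
}
```

The drawing loop would then pass Point(px[i].first, px[i].second) to cv::line() on the 256x256 image discussed above.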

that would at least solve the 1st half of your program.

i'm not sure if findContours() still calculates freeman chains correctly, and the c-api code you're trying with is definitely no longer maintained.

it might be better / easier, to calculate the chains from scratch

berak ( 2018-10-29 04:10:49 -0600 )
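Calculating the chains "from scratch", as suggested, could look roughly like this. It assumes the contour is already an 8-connected pixel path (e.g. findContours() output with CHAIN_APPROX_NONE), and the direction numbering (0 = step east, then counter-clockwise in image coordinates where y grows downwards) is one common convention, not necessarily the one the old C API used:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Map a single step (dx, dy) between 8-neighbouring pixels to a Freeman code.
// Assumed convention: 0=(+1,0), 1=(+1,-1), 2=(0,-1), 3=(-1,-1),
//                     4=(-1,0), 5=(-1,+1), 6=(0,+1), 7=(+1,+1).
int freemanCode(int dx, int dy) {
    static const int dxs[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    static const int dys[8] = { 0, -1, -1, -1, 0, 1, 1, 1 };
    for (int k = 0; k < 8; ++k)
        if (dx == dxs[k] && dy == dys[k]) return k;
    return -1;  // the two points are not 8-neighbours
}

// Chain code of a whole path: one code per step between consecutive pixels.
std::vector<int> chainFromPath(const std::vector<std::pair<int, int>>& path) {
    std::vector<int> chain;
    for (std::size_t i = 1; i < path.size(); ++i)
        chain.push_back(freemanCode(path[i].first - path[i - 1].first,
                                    path[i].second - path[i - 1].second));
    return chain;
}
```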

Yes, I quickly tested with some of my sample data. The code is able to generate the chain code for each data set (I mean, for each set of vertices I now have one Freeman chain code). I now want your suggestion regarding my approach. I basically have training data for some geometrical figures such as Line, Circle, Ellipse, Square and Arc. For any given data set, I want to classify my geometrical figure as one of these 5 types. So I generate the Freeman chain code for my training set as well as my testing data set. As we know, a Freeman chain code consists of values from 0-7, so I am calculating the count of 0's, 1's, ... and 7's for the testing and training data sets and finally just calculating the distance as {|x0-s0| + ... + |x7-s7|}, where x0 is the no. of zeros in dataset 1, s0 is the no. of ... (more)

Programming_Enthusiast ( 2018-10-29 05:01:10 -0600 )

yea, now it gets interesting ;)

what you're doing is basically a "histogram" comparison, right ?

since your contours / chains will differ in length, you probably have to normalize the histograms before the comparison. also, there is compareHist(), which offers some other distance metrics apart from L2 (e.g. chi-sqr)

my objection to this would be that it loses the order of the chain values. e.g. a triangle and its upside-down variant would yield the same result.

berak ( 2018-10-29 05:27:32 -0600 )
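The normalisation berak suggests could be sketched like this (helper names are hypothetical): the histogram is divided by the chain length so chains of different lengths become comparable, and two histograms are compared with the plain L1 distance described in the comment above.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Count occurrences of codes 0..7 and divide by the chain length, so that
// chains of different lengths yield comparable histograms.
std::array<double, 8> codeHistogram(const std::vector<int>& chain) {
    std::array<double, 8> h{};  // zero-initialised
    for (int c : chain) h[c] += 1.0;
    if (!chain.empty())
        for (double& v : h) v /= chain.size();
    return h;
}

// L1 distance between two normalised histograms: sum of |a[k] - b[k]|.
double l1Distance(const std::array<double, 8>& a,
                  const std::array<double, 8>& b) {
    double d = 0;
    for (int k = 0; k < 8; ++k) d += std::fabs(a[k] - b[k]);
    return d;
}
```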

Yes, I am okay with that. My main point is: I need to detect these shapes at any orientation (0, 45, 90, 180, 270 and even 360 degrees). Can you please briefly describe how I can classify my shape once I have all my Freeman chain codes?

Programming_Enthusiast ( 2018-10-29 05:31:48 -0600 )

1 answer


answered 2018-10-29 06:58:21 -0600

berak

here is another idea: the $1 shape recognizer

//
// mostly from: https://github.com/roxlu/ofxOneDollar/
//

#include "opencv2/opencv.hpp"
using namespace cv;
using std::vector;

namespace onedollar {

    double length(const vector<Point> &points) {
        double len = 0;
        for (int i=1; i<points.size(); ++i) {
            len += norm(points[i-1] - points[i]);
        }
        return len;
    }

    Rect2d boundingBox(const vector<Point2d> &pts) {
        double min_x = DBL_MAX, min_y = DBL_MAX, max_x = -DBL_MAX, max_y = -DBL_MAX; // note: -DBL_MAX, not FLT_MIN (which is the smallest positive float)
        std::vector<Point2d>::const_iterator it = pts.begin();
        while (it != pts.end()) {
            Point2d v = (*it);
            if(v.x < min_x) min_x = v.x;
            if(v.x > max_x) max_x = v.x;
            if(v.y < min_y) min_y = v.y;
            if(v.y > max_y) max_y = v.y;
            ++it;
        }

        Rect2d rect;
        rect.x = min_x;
        rect.y = min_y;
        rect.width = (max_x - min_x);
        rect.height = (max_y - min_y);
        return rect;
    }

    void resample(const vector<Point> &points, int n, vector<Point2d> &pts) {
        double I = length(points)/(n - 1);
        double D = 0;

        for (int i = 1; i < points.size(); ++i) {
            Point2d curr = points[i];
            Point2d prev = points[i-1];
            Point2d dir = prev - curr;
            double d = norm(dir);
            if ( (D + d) >= I) {
                double qx = prev.x + ((I-D)/d) * (curr.x - prev.x);
                double qy = prev.y + ((I-D)/d) * (curr.y - prev.y);
                Point2d resampled(qx, qy);
                pts.push_back(resampled);
                D = 0.0;
            }
            else {
                D += d;
            }
        }
        // we had to do some freaky resizing because of rounding issues.
        while (pts.size() <= (n - 1)) {
            pts.push_back(points.back());
        }
        if (pts.size() > n) {
            pts.erase(pts.begin() + n, pts.end()); // drop the excess tail, keep the first n samples
        }
    }

    Point2d centroid(const vector<Point2d> &pts) {
        Point2d center(0,0);
        vector<Point2d>::const_iterator it = pts.begin();
        while (it != pts.end()) {
            center += (*it);
            ++it;
        }
        center /= double(pts.size());
        return center;
    }

    vector<Point2d> rotateBy(vector<Point2d> &pts, double nRad, const Point &c) {
        vector<Point2d> rotated;
        double cosa = cos(nRad);
        double sina = sin(nRad);
        vector<Point2d>::iterator it = pts.begin();
        while (it != pts.end()) {
            Point2d v = (*it);
            double dx = v.x - c.x;
            double dy = v.y - c.y;
            v.x = dx * cosa - dy * sina + c.x;
            v.y = dx * sina + dy * cosa + c.y;
            rotated.push_back(v);
            ++it;
        }
        return rotated;
    }

    void rotateToZero(vector<Point2d> &pts, const Point &c) {
        // rotate about the centroid so that the first point lies at angle 0
        double angle = atan2(c.y - pts[0].y, c.x - pts[0].x);
        pts = rotateBy(pts, -angle, c);
    }

    void scaleTo(vector<Point2d> &pts, double nSize = 250.0) {
        Rect2d rect = boundingBox(pts);
        vector<Point2d>::iterator it = pts.begin();
        while (it != pts.end()) {
            Point2d* v = &(*it);
            v->x = v->x * (nSize/rect.width);
            v->y = v->y * (nSize/rect.height);
            ++it;
        };
    }

    // translates to origin.
    void translate(vector<Point2d> &pts, const Point &c) {
        vector<Point2d>::iterator it = pts.begin();
        while (it != pts.end()) {
            Point2d* v = &(*it);
            v->x = v->x - c.x;
            v->y = v->y - c.y;
            ++it;
        };
    }

    void normalize(const vector<Point> &points, int nNumSamples, vector<Point2d> &pts) {
        resample(points, nNumSamples, pts);
        Point2d c = centroid(pts);
        rotateToZero(pts, c);
        scaleTo(pts);
        translate(pts, c);
    }

    // distance between two paths.
    double pathDistance(const vector<Point2d> &p, const vector<Point2d> &q) {
        // sizes are not equal (?)
        if (p.size() != q.size()) {
            return -1.0;
        }
        double d = 0;
        for (int i = 0; i < q.size(); ++i) {
             d += norm(p[i] - q[i]);
        }
        return d / q.size();
    }
(more)

Comments

Thanks a lot for the suggestion, mate! I would like to describe my problem statement once again. In my case, geometrical figures such as line, circle, ellipse, arc and square can be drawn in any orientation and also from any starting point (I mean, if you consider a circle, in order to complete the 360 degrees you can start at 0 degrees (or) 90 degrees (or) even 180 degrees etc. ... similarly for an ellipse too). So, the classification algorithm needs to take care of the size (i.e. scaling), the orientation, as well as the starting point. Do you think the above algo does that? Can you please provide some info so that it can be helpful for me to test the above code which you have posted?

Programming_Enthusiast ( 2018-10-29 07:44:39 -0600 )

yes, it can handle arbitrary rotations / scaling / different point counts.

you would use the distance() function at the bottom to compare 2 contours (e.g. from findContours())

berak ( 2018-10-29 07:53:06 -0600 )

As far as I understand, you are suggesting this: let's say first I collect sample training data for the classes Line (R1), Circle (R2), Ellipse (R3), Square (R4) and Arc (R5). Then I take one of the testing data sets (T1) and use the distance() function like this: distance(R1, T1) -> val1, ..., distance(R5, T1) -> val5. I then compare the values val1, val2, ..., val5. Whichever category has the lowest value, I can categorize the trajectory into that particular category. Right? (or) do you have something different in your mind?

Programming_Enthusiast ( 2018-10-29 08:04:23 -0600 )

yes, that's about it ;)

berak ( 2018-10-29 08:12:00 -0600 )

Thanks a lot again! I will try this algorithm and update you with the results, mate! I really hope this works. As usual, the main challenge lies in finding the threshold :) My main motive is to somehow classify the known entities right. In case of an unknown entity, I do not want to classify it into anything! :)

Programming_Enthusiast ( 2018-10-29 08:16:04 -0600 )
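The nearest-template-with-rejection scheme discussed in the comments might be sketched generically like this (all names are hypothetical; distFn stands in for whichever distance is used, e.g. the histogram L1 distance or the $1 distance(), and samples below the rejection threshold stay "unknown"):

```cpp
#include <cmath>
#include <functional>
#include <limits>
#include <string>
#include <utility>
#include <vector>

// Hypothetical nearest-template classifier: compare a test sample against one
// reference per class and return the label with the smallest distance, or
// "unknown" if even the best distance exceeds the rejection threshold.
template <typename Sample>
std::string classify(const Sample& test,
                     const std::vector<std::pair<std::string, Sample>>& refs,
                     std::function<double(const Sample&, const Sample&)> distFn,
                     double rejectThreshold) {
    std::string best = "unknown";
    double bestDist = std::numeric_limits<double>::max();
    for (const auto& r : refs) {
        double d = distFn(test, r.second);
        if (d < bestDist) { bestDist = d; best = r.first; }
    }
    return (bestDist <= rejectThreshold) ? best : "unknown";
}
```

With the five templates R1..R5 from the comment above, this is exactly the "take the category with the lowest distance" loop, plus the rejection threshold mentioned for unknown entities.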

@berak: just tried with a simple dataset, mate! It's not even able to detect a line :( I would definitely need some more inputs from you!

Programming_Enthusiast ( 2018-10-29 09:30:53 -0600 )


Stats

Asked: 2018-10-29 01:30:41 -0600

Seen: 485 times

Last updated: Oct 29 '18