# Writing your own Freeman chain code without using OpenCV

Hi,

Since the OpenCV Freeman chain code is a bit ambiguous, I am trying to write my own 8-direction Freeman chain code.

I am planning to follow these steps:

- Fix the image size as [W, H]. Scale all my vertex data to this image size, i.e. multiply the x coordinates by W and the y coordinates by H, casting both to integers.

- Find the angle between consecutive vertices v1 and v2 from atan2(y2 - y1, x2 - x1).
- If this angle lies between 337.5 and 22.5 degrees, the code is 0; if it lies between 22.5 and 67.5 degrees, the code is 1; and so on in 45-degree steps.
- For a contour whose Freeman chain code is abcdefghab, we form a closed chain by prepending the last symbol:

```
babcdefghab
```

Then we find the differences in the Freeman chain code, like: b - a, a - b, b - c, ..., a - b, giving a final code such as cdefabcd. I then compare the final codes of various contours by summing the differences between their counts of 0s, 1s, 2s, 3s, etc. Whichever class has the least sum, I take as the classification.
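The angle-binning, difference-code, and histogram-comparison steps above could be sketched roughly like this (only my reading of the procedure; `atan2` handles the full angle range, and the helper names are made up for illustration):

```python
import math

def freeman_chain_code(points):
    """8-direction Freeman chain code from a list of (x, y) vertices.

    Direction 0 = east; codes advance in 45-degree steps, so angles
    in [337.5, 22.5) map to 0, [22.5, 67.5) to 1, and so on.
    """
    code = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0
        code.append(int((angle + 22.5) // 45) % 8)
    return code

def difference_code(code):
    """First difference of a chain code (rotation invariant).

    Prepend the last symbol (the "babcdefghab" trick above) so the
    difference chain keeps the same length, then subtract mod 8.
    """
    closed = [code[-1]] + code
    return [(b - a) % 8 for a, b in zip(closed, closed[1:])]

def histogram_distance(code_a, code_b):
    """Sum of absolute differences between the counts of each symbol."""
    ha = [code_a.count(s) for s in range(8)]
    hb = [code_b.count(s) for s in range(8)]
    return sum(abs(a - b) for a, b in zip(ha, hb))
```

For a unit square traversed counter-clockwise this gives the chain code `[0, 2, 4, 6]` and the difference code `[2, 2, 2, 2]`, which stays the same under rotation by multiples of 90 degrees.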

I would be glad if someone can tell me whether my approach is fine.

@berak: Could you please have a look at the above procedure and suggest changes? Thanks.

http://answers.opencv.org/question/14... — the 2nd answer there should work for non-discrete points, too.

Yeah, thanks! The above procedure is fine, right? Also, I have one query regarding your suggestion for yesterday's problem. When we resample points, some of the points come out identical for the contour with the smaller number of points. In that case, if we try to generate the Freeman chain code for that contour, what would we evaluate for two points having the same data? It does not make any sense, right?

Don't you think I need to consider the minimum number of points while sampling? I mean, the minimum of the point counts across the training contours.

if you duplicate points this way, either your N is too large, or you could try to upscale the whole thing (so more points fit in between)

Could you elaborate a bit more? Our main aim is to get the same chain-code length when we generate the Freeman chain code, right?

As of now, I am doing this:

- Resampling all my data to size N (N is the minimum number of points in a contour among the training sets).
- Applying PCA to my data.
- Scaling such that all points lie between 0 and 1.
- Scaling again to the image size [W, H] (I don't think I require this, right? Because I have written my own chain-code function and am not getting it from an image).
- Finally, finding the Freeman chain codes of the updated contours.
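Leaving the resampling step aside, the PCA-alignment and 0–1 scaling steps might look roughly like this (a sketch assuming NumPy and plain covariance eigenvectors; the function names are mine, not from any library):

```python
import numpy as np

def pca_align(points):
    """Rotate a contour so its principal axis lies along x.

    points: (N, 2) array-like. Returns a centred, rotated copy,
    which makes the later chain code rotation-normalised.
    """
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Eigenvectors of the covariance matrix are the principal axes;
    # eigh returns eigenvalues ascending, so the last column is the
    # major axis.
    _, vecs = np.linalg.eigh(np.cov(centred.T))
    major = vecs[:, -1]
    angle = np.arctan2(major[1], major[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return centred @ rot.T

def scale_to_unit(points):
    """Scale a contour so both coordinates lie in [0, 1]."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi - lo > 0, hi - lo, 1.0)  # avoid divide-by-zero
    return (pts - lo) / span
```

For example, a vertical line of points ends up lying along the x axis after `pca_align`, and `scale_to_unit` maps any contour into the unit box without needing the [W, H] rescale.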

Please suggest changes to the workflow. I want the preprocessing to be as good as I can make it.

the "scaling to image size" was from when you tried to use findContours() for this (which needed integer points). what's the PCA for? to find the rotation?

Right. So now I don't have to rescale to image coordinates; I will remove that step. Yes, I use PCA to take care of rotation (it works at least for lines, circles, and ellipses).

Regarding resampling, could you share some other good function? The function used in the $1 code is really bad and gives duplicate points. I think in our case we need something like interpolation. As you said, taking the maximum number of points is better.
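An interpolating resampler along these lines might do; this is only a sketch (not the function from the $1 code), spacing the new points evenly by arc length so duplicates don't occur even when the target count exceeds the input size:

```python
import numpy as np

def resample(points, n):
    """Resample a polyline to n points, evenly spaced by arc length.

    New points are linearly interpolated between the original
    vertices via np.interp, parameterised by cumulative distance.
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])  # arc length at each vertex
    targets = np.linspace(0.0, dist[-1], n)
    xs = np.interp(targets, dist, pts[:, 0])
    ys = np.interp(targets, dist, pts[:, 1])
    return np.stack([xs, ys], axis=1)
```

Resampling the segment from (0, 0) to (2, 0) to 5 points, for instance, yields x coordinates 0, 0.5, 1, 1.5, 2 with no repeats.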

@berak: Please do provide code that resamples to a given number of points.

ah, cmon, i probably won't. (we simply can't spoonfeed everyone here)