I need to use C for a project, and I saw this screenshot in a PDF which gave me the idea: http://i983.photobucket.com/albums/ae313/edmoney777/Screenshotfrom2013-11-10015540_zps3f09b5aa.png
It says you can treat each pixel of an image as a graph node (or vertex, I guess), so I was wondering how
I would do this using OpenCV and the CvGraph set of functions. I'm trying to do this to learn about graphs and how
to use them in computer vision, and I think this would be a good starting point.
I know I can add a vertex to a graph with
int cvGraphAddVtx(CvGraph* graph, const CvGraphVtx* vtx=NULL, CvGraphVtx** inserted_vtx=NULL )
and the documentation says, for the above function's vtx parameter:
"Optional input argument used to initialize the added vertex (only user-defined fields beyond sizeof(CvGraphVtx) are copied)"
Is this how I would represent a pixel as a graph vertex, or am I barking up the wrong tree? I would love to learn more about graphs, so if someone could help me by posting code, links, or good ol' fashioned advice, I'd be grateful. =)