Feature extraction 3D
Please, my case is as follows: I have an adjacency matrix (a 0/1 graph data structure) that originally comes from a mesh with 3D coordinates (XYZ). The idea is that I want to extract features, but I read that I have to obtain interest points first, and then obtain a feature vector for each one. Is that correct?
So if I would like to read the matrix, how could this be done? I'm only finding tutorials that read images (.jpg files); how would it work in my case? And would the process then follow normally, i.e. showing the feature points on my object and so on? Hopefully I'm in the correct place to ask this.
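Reading the data is the easy part: unlike an image, a mesh-as-graph is just two arrays, an adjacency matrix and a per-vertex coordinate table, so plain numpy is enough. A minimal sketch (the arrays are toy data built in memory; in practice you might load them with something like `np.loadtxt`):

```python
import numpy as np

# 0/1 adjacency matrix of a tiny 4-vertex mesh (toy data).
adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=np.uint8)

# Per-vertex XYZ coordinates; row i belongs to graph node i.
coords = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.5, 1.0, 1.0],
])

# For an undirected graph, the edge list is the set of nonzero
# entries in the upper triangle of the adjacency matrix.
edges = np.transpose(np.nonzero(np.triu(adjacency)))
print(edges)  # one (i, j) pair per edge
```

With the vertex indices and `coords` linked like this, anything computed per vertex can later be drawn at the right XYZ position.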
Thank you
It's unclear what you have, what you need, or what the general context / purpose of it would be.
What I have is a mesh (XYZ coordinates) represented as a graph data structure (an adjacency matrix). What I need is to obtain descriptors for it and visualize them on the mesh. The purpose is to build featured connected components from the obtained descriptors (feature points), and then to solve a matching problem on them.
Sorry, but just repeating it isn't too helpful.
PCL might no longer be actively maintained, but it has 3D feature extraction, while OpenCV has no such thing.
I'm just trying to clarify; it doesn't seem to be working. I wasn't lucky enough to find a tutorial on using PCL's 3D feature extraction from Python, and unfortunately I don't know C++. This might not be the correct place, but I'm hoping you, or anyone who sees this, may assist. If I obtained a 3D descriptor like Zernike moments or a Fourier transform (they are represented as vectors and arrays), how might I find the corresponding XYZ points to be able to visualize them? Or is the case reversed: do I have to extract feature points first, and then obtain a descriptor for each one?
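On the ordering question: the usual pipeline is detect first, then describe, so each descriptor is attached to a keypoint from the start. Since a keypoint on a mesh is just a vertex index, finding its XYZ position for visualization is a row lookup in the coordinate array. A minimal sketch (the detector output here is hypothetical; the coordinates are toy data):

```python
import numpy as np

# Per-vertex XYZ coordinates of the mesh (toy data).
coords = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.5, 1.0, 1.0],
])

# Hypothetical output of some keypoint detector: vertex indices.
keypoint_indices = np.array([0, 2])

# The positions to draw are the corresponding coordinate rows;
# descriptor i then describes the neighbourhood of keypoint_xyz[i].
keypoint_xyz = coords[keypoint_indices]
print(keypoint_xyz)
```

If you instead compute one global descriptor for the whole shape (as global Zernike moments do), there is no per-point correspondence to visualize; only local, per-keypoint descriptors map back to XYZ points.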
Again, WHY do you need to do that? What is the purpose?
Could it also be that you're confusing things, and that what you actually need are 3D keypoints, while the feature dimensionality is somewhat irrelevant?
As I mentioned above, I'm trying to do matching; I'm trying to organize an approach I'm using in my PhD studies. I don't know if this is what you meant to know!
Matching what, and why, again?
What I intend to obtain is 3D descriptors, like the ones I mentioned before: 3D Zernike moments, 3D shape context... But what I read lately is that I should do detection, then extraction, then matching. After obtaining the descriptors, I have to build featured connected components that let me match two objects... I don't know what the unclear thing is: matching the featured connected components of two objects.
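The matching step itself, once you have one descriptor vector per keypoint for each object, can be as simple as brute-force nearest-neighbour search between the two descriptor sets. A minimal sketch with made-up 2D descriptors (real Zernike or shape-context descriptors would just be longer rows):

```python
import numpy as np

# Hypothetical descriptors: one row per keypoint, one column per feature dim.
desc_a = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
desc_b = np.array([[1.0, 0.1], [0.4, 0.6], [0.1, 0.9]])

# Pairwise Euclidean distances between every descriptor in A and B,
# then the nearest neighbour in B for each keypoint of A.
dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
matches = dists.argmin(axis=1)
print(matches)  # matches[i] = index in desc_b matched to keypoint i of desc_a
```

In practice you would usually add a ratio test or cross-check to reject ambiguous matches, but the core operation is this distance-and-argmin step.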
My object (the mesh) will be represented as a graph (nodes and edges), and when I obtain the features I should show them on the mesh: colorize the vertices, obtain the connected components, and continue with the rest.
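Since the mesh is already an adjacency matrix, the "featured connected components" step can be done directly on the graph: flag the feature vertices, then run a component labelling that only walks edges between flagged vertices. A minimal sketch with a toy graph and a hypothetical feature flag array (a real pipeline would derive `is_feature` from descriptor scores):

```python
import numpy as np

# Toy 0/1 adjacency matrix of a 5-vertex graph: edges 0-1, 2-3, 3-4.
adjacency = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=np.uint8)

# Hypothetical detector output: which vertices are feature points.
is_feature = np.array([True, True, False, True, True])

# Label connected components among feature vertices with a simple BFS,
# following only edges whose both endpoints are flagged.
labels = -np.ones(len(adjacency), dtype=int)  # -1 = not a feature vertex
current = 0
for start in np.flatnonzero(is_feature):
    if labels[start] != -1:
        continue
    stack = [start]
    labels[start] = current
    while stack:
        v = stack.pop()
        for u in np.flatnonzero(adjacency[v]):
            if is_feature[u] and labels[u] == -1:
                labels[u] = current
                stack.append(u)
    current += 1
print(current, labels)  # number of featured components, per-vertex label
```

The `labels` array then doubles as a vertex colouring: assign one colour per component id and paint the mesh vertices accordingly.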
Look - what berak wants is a USE CASE: what exactly you want to achieve with this solution. If it's secret, that's fine. Usually, when you really know what people want to do, you can help them, even with suggestions they haven't thought about yet ("thinking outside the box").
You mentioned a PhD study; it could be that the whole purpose is just to show and document your technology skills and your ability to learn new things. If that's the case, I'm sorry for the upper case.
Sir, I mentioned it's a Ph.D. thesis, and I'm trying to figure out whether I can do what I need with OpenCV or not. The situation is that I find OpenCV is a great library when dealing with images, but in my case I don't have an image, I have a mesh... The idea is that an image and a mesh (in my case modeled as a graph data structure: a matrix) both have the same structure, so I was wondering, logically, whether it fits in this case or not. Thanks for dealing with my situation and spending time to check it.