Hey guys,
I am currently working on my thesis, and for that I am trying to recognize different hand postures for controlling an embedded system. My first implementation used simple skin color segmentation, calculation of contours, convex hull, convexity defects and so on. This works, but I am not satisfied since it is not very robust: the approach also detects other skin-colored objects, and I don't want to compute all of these things every frame.
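Roughly, the per-frame processing currently looks like the sketch below (the HSV thresholds are only example values, not my tuned ones, and depending on the OpenCV version findContours returns two or three values):

```python
import cv2

def detect_hand(frame):
    # skin segmentation in HSV; these threshold values are only examples
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 48, 80), (20, 255, 255))

    # OpenCV 2.x returns (contours, hierarchy); 3.x adds the image as a first value
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # assume the largest skin-colored contour is the hand
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    return hand, hull, defects
```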
Because of that, I started to think about machine learning algorithms and considered using a Support Vector Machine for the task. My problem is that I don't really know what to use: is an SVM a good choice for hand posture recognition, or is another machine learning algorithm more appropriate? I am also unsure about which features to use for training my model. Currently I use SURF features, since SIFT is too slow. Are other features more suitable? My last problem is the training set: what kind of images do I need to train the model?
So my questions are:
- Is a Support Vector Machine a good choice for hand posture recognition?
- What are good features to train the model?
- What kind of images do I need for my training? How many for each class? Grayscale, binary, edges?
I know these are not very specific questions, but there is so much literature out there, and I need some advice on where to look.
I am working with the OpenCV Python bindings for the image processing and use scikit-learn for the machine learning part.
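To make the question more concrete, this is roughly how I imagine combining the SURF descriptors with an SVM via a bag-of-visual-words. The vocabulary size, the SVM parameters and the `train_images` / `train_labels` variables are just placeholders, and I haven't verified that this is the right approach:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# depending on the OpenCV version this is cv2.SURF(...) or
# cv2.xfeatures2d.SURF_create(...)
surf = cv2.SURF(400)

def surf_descriptors(gray):
    # one 64-dimensional descriptor per detected keypoint
    _, desc = surf.detectAndCompute(gray, None)
    return desc if desc is not None else np.zeros((0, 64), np.float32)

# train_images: list of grayscale images, train_labels: list of class ids
# (placeholders for however the training data is loaded)
all_desc = [surf_descriptors(img) for img in train_images]

# build a visual vocabulary by clustering all descriptors (k = 100 is a guess)
vocab = KMeans(n_clusters=100).fit(np.vstack(all_desc))

def bow_histogram(desc):
    # normalized histogram over the visual words -> fixed-length feature vector
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=100).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

X = np.array([bow_histogram(d) for d in all_desc])
clf = SVC(kernel='rbf').fit(X, train_labels)
```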
Best regards,
Missing