Proper ways of detection and tracking. I am confused.
Could someone explain to me what the purpose of tracker algorithms is, and when I should use them? My current thinking is that if I need to track an object I can use, for example, thresholding + contours + moments to obtain its position, but for more robust applications I would go with a neural-network-based detection algorithm such as a CNN.
And here is my question: what is the purpose of a tracker? I know I can use it to track an object in a supervised mode, where I specify the ROI to track.
Does it make sense to combine a tracker with an object detector, I guess to accelerate tracking, so that I first detect an object and then pass its coordinates to the tracker?
And what is the purpose of cv::calcOpticalFlowPyrLK then? I could extract good features of an object and then use them to track it, and for some applications the result would still be suitable, right?
So what are the proper ways of detecting, tracking, and obtaining the orientation of tracked objects?
For example, if I track good features using the Lucas-Kanade method, I assume there is a way to calculate the average rotation from all the tracked points, using the correlation between them.
Could object descriptors like moments or ORB descriptors help me with detection and tracking?
Everything is mixing together in my head: feature vectors, descriptors, feature detection, feature extraction, moments, object detectors, tracker algorithms.
Can someone help put me back on the right track?
Thanks for your help, I appreciate that.
Detection can be quite expensive, so you do that only now and then, and in the meantime use a tracker to follow the object.
That's exactly what I figured out. I could run detection only when my tracker gets lost, or every specific number of frames, to make sure it stays accurate.
Thanks.
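The re-detection loop described above is just control flow; here is a minimal sketch where `detect()` and `DummyTracker` are hypothetical stand-ins (not real OpenCV calls) so the logic stays visible:

```python
REDETECT_EVERY = 30  # re-run the expensive detector every N frames

def detect(frame):
    # Stand-in for an expensive detector (CNN, HOG+SVM, ...): returns a box
    return (10, 10, 50, 50)

class DummyTracker:
    # Stand-in for e.g. a KCF tracker from opencv_contrib
    def init(self, frame, box):
        self.box = box
    def update(self, frame):
        return True, self.box  # (success flag, new box)

tracker, log = None, []
for frame_idx, frame in enumerate(range(100)):  # frames would be images in practice
    if tracker is None or frame_idx % REDETECT_EVERY == 0:
        box = detect(frame)              # expensive, only occasionally
        tracker = DummyTracker()
        tracker.init(frame, box)
        log.append("detect")
    else:
        ok, box = tracker.update(frame)  # cheap, every frame
        if not ok:
            tracker = None               # lost: force re-detection next frame
        log.append("track")

print(log.count("detect"), log.count("track"))  # 4 detections, 96 tracked frames
```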
Here I found an interesting video showing real-time tracking implemented using key points: https://www.youtube.com/watch?v=PVWyZ...
I think you should not build your own LK-based tracker, but use one of those in opencv_contrib/tracking.
@berek sorry it doesn't help me ;/
https://docs.opencv.org/master/d9/df8...
@berek I already knew that. The thing is how to track a recognized object; that is what this whole question is about.
Thanks for the help, guys; writing the question here was worth it...
Answering my own question... So yes, I was right: tracking and detection are two different topics. Detection is in most cases CPU-intensive, while trackers are generally less CPU-consuming. A simple centroid tracker, the implementations you can find in opencv_contrib/tracking such as KCF and MOSSE, or more advanced ones that use a neural network, like GOTURN, all address the tracking problem. You can detect the object with any object detector, e.g. Haar cascades, HOG + linear SVM, or just thresholding + contours; it doesn't make a difference. The main thing is that you initialize the tracker with a ROI or box (points), and for each updated frame it returns a new bounding box that should contain your tracked object. A tracker can make mistakes over time, so it is good practice to supervise it again with a new ROI of the object, provided by an object detector algorithm or in any other way, even by simply selecting the ROI on the image. I hope that helps, and that I was not the only one who had this problem. ;)