Proper ways of detection and tracking. I am confused.

Could someone explain to me what the purpose of tracker algorithms is and when I should use them? My current thinking is that if I need to track an object I can use, for example, threshold + contours + moments to obtain its position, but for more robust applications I would go with a neural-network-based detector such as a CNN.
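
To make it concrete, this is roughly what I mean by threshold + contours + moments (just a rough sketch, assuming a single bright object on a dark background and a placeholder file name):

```cpp
// Rough sketch: binarize the frame, take the largest contour, and read the
// object's position from the centroid of its moments.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("frame.png");        // placeholder input image
    cv::Mat gray, mask;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (!contours.empty()) {
        auto largest = std::max_element(contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                return cv::contourArea(a) < cv::contourArea(b);
            });
        cv::Moments m = cv::moments(*largest);
        cv::Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);  // object position
        std::cout << "centroid: " << centroid << std::endl;
    }
    return 0;
}
```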

And here is my question: what is the purpose of a tracker? I know I can use one in a supervised way, where I specify the ROI to track.
Does it make sense to combine a tracker with an object detector, I guess to accelerate tracking, so that I first detect an object and then pass its coordinates to the tracker?
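
Something like this is what I imagine (just a sketch, assuming the opencv_contrib tracking module is available; the exact Rect type expected by init/update differs between OpenCV versions, and the "detector" here is only a placeholder box):

```cpp
// Detect-then-track sketch: run a (slow) detector every N frames and let a
// tracker follow the bounding box on the frames in between.
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>

int main() {
    cv::VideoCapture cap(0);                         // placeholder video source
    cv::Ptr<cv::Tracker> tracker;
    cv::Rect box;
    cv::Mat frame;
    int frameIdx = 0;

    while (cap.read(frame)) {
        if (frameIdx % 30 == 0 || tracker.empty()) {
            // Placeholder for the real detector (CNN, cascade, ...);
            // here it just "detects" a box in the middle of the frame.
            box = cv::Rect(frame.cols / 4, frame.rows / 4,
                           frame.cols / 2, frame.rows / 2);
            tracker = cv::TrackerKCF::create();
            tracker->init(frame, box);               // re-seed tracker from the detection
        } else {
            tracker->update(frame, box);             // cheap tracking between detections
        }
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::imshow("detect + track", frame);
        if (cv::waitKey(1) == 27) break;             // ESC to quit
        ++frameIdx;
    }
    return 0;
}
```
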
What is the purpose of cv::calcOpticalFlowPyrLK then? I could extract good features from an object and then use them to track it, and for some applications the result would still be suitable.
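
This is the kind of Lucas-Kanade tracking I mean (again just a sketch with a placeholder video file: pick corners once, then follow them frame to frame):

```cpp
// Sparse Lucas-Kanade tracking sketch: pick Shi-Tomasi corners once, then
// follow them with pyramidal LK optical flow, dropping points that get lost.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("video.mp4");               // placeholder input
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, nextPts;

    if (!cap.read(frame)) return 1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
    cv::goodFeaturesToTrack(prevGray, prevPts, 100, 0.01, 10);   // initial "good features"

    while (cap.read(frame) && !prevPts.empty()) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);

        // keep only the points that were tracked successfully
        std::vector<cv::Point2f> kept;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) kept.push_back(nextPts[i]);

        prevPts = kept;                              // would re-detect here when too few remain
        prevGray = gray.clone();
    }
    return 0;
}
```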

So what are the proper ways of detecting objects, tracking them, and obtaining their orientation? For example, if I track good features using the Lucas-Kanade method, I assume there is a way to calculate an average rotation from all of those points using the correlation between them.
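
One way I imagine doing this (just a sketch, assuming the two point sets come straight out of calcOpticalFlowPyrLK with the lost points already filtered out) is to fit a similarity transform between the old and new point sets and read the angle out of it:

```cpp
// Estimate an overall rotation from tracked point pairs by fitting a
// rotation + scale + translation model, instead of averaging per-point angles.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// prevPts/nextPts: matching points before and after calcOpticalFlowPyrLK
double rotationFromTrackedPoints(const std::vector<cv::Point2f>& prevPts,
                                 const std::vector<cv::Point2f>& nextPts) {
    cv::Mat inliers;
    cv::Mat M = cv::estimateAffinePartial2D(prevPts, nextPts, inliers, cv::RANSAC);
    if (M.empty()) return 0.0;                       // not enough points to estimate
    // M = [ s*cos(a)  -s*sin(a)  tx ;  s*sin(a)  s*cos(a)  ty ]
    return std::atan2(M.at<double>(1, 0), M.at<double>(0, 0));  // angle in radians
}
```
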
Could object descriptors like moments or ORB descriptors help me with detection and tracking?
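
For ORB I imagine something like this (just a sketch with placeholder images: describe the object once, then match its descriptors against each new frame):

```cpp
// ORB sketch: compute binary descriptors for an object template and a frame,
// then match them with a Hamming-distance brute-force matcher.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat objectImg = cv::imread("object.png", cv::IMREAD_GRAYSCALE);  // object template
    cv::Mat frame     = cv::imread("frame.png",  cv::IMREAD_GRAYSCALE);  // current frame

    cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> kpObj, kpFrame;
    cv::Mat descObj, descFrame;
    orb->detectAndCompute(objectImg, cv::noArray(), kpObj, descObj);
    orb->detectAndCompute(frame, cv::noArray(), kpFrame, descFrame);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descObj, descFrame, matches);
    std::cout << "matches found: " << matches.size() << std::endl;
    return 0;
}
```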

Everything is getting mixed up in my head: feature vectors, descriptors, feature detection, feature extraction, moments, object detectors, tracker algorithms. Can someone help put me back on the right track?

Thanks for your help, I appreciate it.