Hey friends
I'm developing an object detection and tracking algorithm. The available CPU resources are low, so I'm using simple blob analysis; no heavy tracking algorithms.
The detection framework is built and working according to my needs. It uses information from background subtraction and 3D depth data to create a binary image with white blobs as the objects to detect. Then a simple matching algorithm assigns an ID to each object and keeps tracking it. So far so good.
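For context, the working part looks roughly like this (a minimal Python/OpenCV sketch; `depth_mask` stands in for my depth-based mask, and the matching rule is simplified to nearest-centroid):

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2()

def step(frame, depth_mask, tracks, next_id, max_dist=50.0):
    """One frame: build the binary blob mask, then carry IDs over by
    matching each blob centroid to the nearest previous centroid."""
    fg = backsub.apply(frame)
    binary = cv2.bitwise_and(fg, depth_mask)   # fuse motion + depth evidence
    binary = cv2.medianBlur(binary, 5)         # cheap noise cleanup

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    updated = {}
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Nearest previous centroid keeps its ID; otherwise assign a new one.
        best_id, best_d = None, max_dist
        for tid, (px, py) in tracks.items():
            d = np.hypot(px - cx, py - cy)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        updated[best_id] = (cx, cy)
    return updated, next_id
```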
The problem:
The problem arises when objects are too close together. The algorithm just detects them as one big object, and that's the problem I need to solve. In the example image above there are clearly 3 distinct objects, so how can I separate them?
Things I've tried
I've tried a distance transform + adaptiveThreshold approach, which gives fairly good results at separating the objects. However, this approach is only robust for circular or square objects. If a rectangle-shaped object (such as in the example) shows up in the image, it just doesn't work: the distance map of an elongated rectangle is a long ridge of nearly constant height rather than a single peak, so no threshold yields exactly one seed per object. So the distance transform approach is out.
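For reference, this is roughly what I tried (a minimal Python/OpenCV sketch; it uses a fixed fraction of the maximum distance instead of the adaptiveThreshold I actually used, and `blobs.png` is a placeholder for my binary mask):

```python
import cv2
import numpy as np

# Placeholder: the 8-bit mask with white blobs from the detection stage.
binary = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)

# Distance transform: every foreground pixel gets its distance to the
# nearest background pixel, so blob centres become local maxima.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# Keep only pixels near the maximum; for round-ish blobs this isolates
# one peak per object, which splits touching blobs.
_, peaks = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
peaks = peaks.astype(np.uint8)

# Each connected component of `peaks` becomes one object seed.
n_seeds, labels = cv2.connectedComponents(peaks)
```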
Stuff that won't work
- Watershed on the original image is not an option, mainly because the original image is very noisy due to the setup configuration.
- Approaches based solely on morphological operations are very unlikely to be robust.
My general idea to solve the problem (please comment on this)
I thought about detecting the connection points between the objects, erasing the pixels between them by drawing a line, and then letting the detector work as it is.
The challenge is detecting those points. My thought is that it might be possible to do this by computing distances between the points of a blob's contour, and identifying connection points as pairs that have a low Euclidean distance between each other but are far apart in the contour point vector (so that sequential contour points are not matched). This is easy to say but not so easy to implement and test; a rough sketch of the idea follows.
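Here is a sketch of what I mean (Python/OpenCV; the parameters `min_index_gap` and `max_pixel_dist` are made-up values that would need tuning for my setup):

```python
import cv2
import numpy as np

def find_pinch_pairs(contour, min_index_gap=20, max_pixel_dist=8.0):
    """Return pairs of contour points that are close in the image but
    far apart along the contour: candidate connection points to cut."""
    pts = contour.reshape(-1, 2)
    n = len(pts)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            # Index separation along the closed contour (with wrap-around),
            # so neighbouring contour points are never matched.
            gap = min(j - i, n - (j - i))
            if gap < min_index_gap:
                continue
            d = float(np.linalg.norm(pts[i] - pts[j]))
            if d < max_pixel_dist:
                pairs.append((tuple(int(v) for v in pts[i]),
                              tuple(int(v) for v in pts[j]), d))
    return pairs

# Usage: cut the mask at each pinch pair, then run the existing detector.
mask = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)
for c in contours:
    for p, q, _ in find_pinch_pairs(c):
        cv2.line(mask, p, q, color=0, thickness=2)
```

Two caveats I can already see: the naive double loop is O(n²) in the contour length, which may be too expensive on my CPU budget (subsampling the contour, or pre-selecting concave candidates with cv2.convexityDefects, would cut the cost), and nearby pairs will cluster around the same pinch, so only the closest pair per region should be kept.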
I welcome ideas and thoughts :)