Ask Your Question

Single blob, multiple objects (Ideas on how to separate objects)

asked 2015-09-25 11:16:09 -0600

updated 2015-09-25 11:21:36 -0600

Hey friends

I'm developing an object detection and tracking algorithm. The available CPU resources are low, so I'm using simple blob analysis; no heavy tracking algorithms.

The detection framework is created and working according to my needs. It uses information from background subtraction and 3D depth data to create a binary image with white blobs as the objects to detect. Then a simple matching algorithm assigns an ID to each object and keeps tracking it. So far so good.

The problem:


The problem arises when objects are too close together. The algorithm just detects them as one big object, and that's the problem I need to solve. In the example image above there are clearly 3 distinct objects, so how can I solve this?

Things I've tried

I've tried a distance transform + adaptiveThreshold approach, which gives fairly good results in individualizing objects. However, this approach is only robust with circular or square objects. If a rectangle-shaped object (such as in the example) shows up in the image, the approach just doesn't work because of how the distance transform is computed. So the distance transform approach is out.

Stuff that won't work

  • Watershed on the original image is not an option, firstly because the original image is very noisy due to the setup configuration, and secondly because of the strain on the CPU.
  • Approaches solely based on morphological operations are very unlikely to be robust.

My generic idea to solve the problem (please comment on this)


I thought about a way to detect the connection points between the objects, erase the pixels between them by drawing a line, and finally let the detector work as it is.

The challenge is to detect those points. It may be possible to do so by calculating the distance between all contour points of a blob and identifying connection points as pairs that have a low Euclidean distance between each other but are far apart in the contour point vector (so that sequential points are not validated). This is easy to say, but not so easy to implement and test.

I welcome ideas and thoughts :)



Take a look at my answer as a hint: analyse the distance between points. It would be nice if you could post the original image.

sturkmen ( 2015-09-25 14:11:48 -0600 )

@PedroBatista I would also like to see the original image, because it might be easier to discriminate the objects before you reach the binary image. The point is to somehow preserve and sharpen the edges of each object before you use distanceTransform(), for example.

theodore ( 2015-09-26 12:16:46 -0600 )

There is no "normal" image in this project because I use an Asus Xtion 3D sensor (instead of a usual camera) and use the infrared image as one of the inputs. The infrared image is good because it is resistant to illumination changes (good for background subtraction), but it is bad for almost everything else because it's very noisy.

The other input is the 3D data, so I guess this binary image is really the starting point.

PedroBatista ( 2015-09-28 04:10:33 -0600 )

Am I missing why the following would not work?

  • Distance transform to find centers combined with peak thresholding
  • Give those centers to the watershed algorithm
  • You now have 3 separated blobs and can look for neighboring pixels

More info on a similar problem: it is a Python tutorial, but it does essentially the same things as the C++ interface.

StevenPuttemans ( 2015-10-20 04:49:38 -0600 )

Even assuming that the distance transform + threshold outputs perfect seeds for all scenarios (which is not the case, mainly for non-round objects), it still requires the original image to perform watershed, am I right? I really don't know what happens inside the watershed algorithm, so there might be a misconception here, but I'm assuming that it computes the edges of the image and then "fills" the image with different labels according to those edges.

My original image is really noisy, and no coherent edges can be computed from it.

PedroBatista ( 2015-10-20 05:53:50 -0600 )

You have the binary image, don't you? What watershed does is: taking a pouring center for each blob, plus one center in the background, it starts pouring water until the edges bounce into each other; there a separation is made. The binary image is used to define borders on how far the fluid can flow.

StevenPuttemans ( 2015-10-21 03:29:31 -0600 )

Oh, now I get it. I had the wrong idea about watershed then, thanks. I'll give it a try.

PedroBatista gravatar imagePedroBatista ( 2015-10-21 04:29:31 -0600 )edit

Yep, I use it for fruit segmentation where fruit hangs close together and a detector yields 1 big blob... it works perfectly fine for me.

StevenPuttemans ( 2015-10-21 06:17:55 -0600 )

2 answers


answered 2015-10-19 10:06:40 -0600

I developed an algorithm that performs my task well.

It assumes that within a blob the connection areas are thinner than the non-connection areas. So the algorithm performs the following steps:

1 - Measure the distance between all contour points.

2 - For each contour point, select the corresponding non-sequential point that is separated by the least distance, with the condition that there are white pixels between the pair (shown as the small red and blue dots in the image).

3 - Cluster the pairs into groups corresponding to the same separation zone, and select the pair separated by the least distance (the bigger colored circles).

4 - Draw a black line between selected pairs.

[image: connection point pairs (small red and blue dots) and selected pairs (bigger colored circles)]



Can you share the code? I can't work out which functions you used for the steps mentioned above.

Vikas Ranga ( 2016-07-30 04:18:41 -0600 )

answered 2015-09-25 22:26:45 -0600

Here is trial code based on convexityDefects; maybe you can improve it.

Result image:

[image]

#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    const char* filename = argc >= 2 ? argv[1] : "Obj_Sep.png";
    Mat src = imread(filename);
    if (src.empty())
        return -1;

    Mat bw;
    cvtColor( src, bw, COLOR_BGR2GRAY );
    bw = bw > 127;

    // Find external contours of the white blobs
    vector<vector<Point> > contours;
    vector<int> contoursHull;
    vector<Vec4i> defects;
    findContours( bw, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE );

    for ( size_t i = 0; i < contours.size(); i++)
    {
        if( contourArea(contours[i]) > 500 )
        {
            // convexityDefects needs the hull as indices, not points
            convexHull(contours[i], contoursHull, true);
            convexityDefects(contours[i], contoursHull, defects);

            for ( size_t j = 0; j < defects.size(); j++)
            {
                Vec4i defpoint = defects[j];
                // defpoint[2] is the index of the deepest point of the defect
                circle( src, contours[i][defpoint[2]], 3, Scalar(0, 0, 255), -1 );
            }
        }
    }
    imshow("bw", src);
    waitKey(0);
    return 0;
}


I'll give it a try, thank you for the suggestion :)

PedroBatista ( 2015-09-28 04:10:50 -0600 )

Question Tools



Asked: 2015-09-25 11:14:50 -0600

Seen: 11,242 times

Last updated: Oct 19 '15