
PedroBatista's profile - activity

2020-10-24 03:01:47 -0600 received badge  Good Answer (source)
2020-03-26 00:35:58 -0600 received badge  Notable Question (source)
2019-08-13 18:50:24 -0600 received badge  Nice Answer (source)
2019-01-21 08:20:40 -0600 received badge  Famous Question (source)
2018-06-14 03:00:15 -0600 received badge  Teacher (source)
2018-04-13 02:30:08 -0600 received badge  Popular Question (source)
2017-07-05 13:40:02 -0600 received badge  Notable Question (source)
2016-12-09 12:24:33 -0600 received badge  Popular Question (source)
2016-05-23 12:49:16 -0600 received badge  Nice Question (source)
2016-02-14 06:05:55 -0600 received badge  Self-Learner (source)
2015-12-04 05:37:45 -0600 asked a question Selective hole filling

Hello everyone

At the moment I use cv::floodFill to fill holes in binary images. Suppose I have the following image (these images are examples produced with a painting tool):

image description

After using cv::floodFill I get:

image description

However, I'd like to know if there is any way to get this instead:

image description

I can't think of a possible solution, so this is a shot in the dark... no harm in trying though :)

Best regards!

2015-10-30 05:56:57 -0600 commented answer Why are all the images in uint8 data type?

You got your first reasoning backwards: the RGB/BGR values are in [0, 255] BECAUSE uint8 has that range :) It turns out that 256 possible values per channel is enough resolution for a clear image while not consuming much memory at the same time.

As for the other question, a cv::Mat isn't only used to display images. A cv::Mat may store any kind of data in the form of a matrix. For example, if you need to perform matrix calculations with floating-point precision, uint8 isn't enough. If you have a sensor that outputs data in the range [0, 10000] and need to read it into a cv::Mat, you need the uint16 data type.

Basically OpenCV tries to be as flexible as possible :)

2015-10-29 09:49:15 -0600 answered a question Why are all the images in uint8 data type?

In OpenCV, an "image" is represented by the cv::Mat data type.

Not all Mats are uint8 data type.

Mat initialization can be done in the following way:

cv::Mat test = cv::Mat(rows, cols, type);

As an example, to initialize a 640x480 Mat of type uint8 with one channel:

cv::Mat test = cv::Mat(480, 640, CV_8UC1);

In which "CV" is the prefix of all data types, 8 is the number of bits per channel (not bytes), U stands for unsigned, and C1 means the Mat has one channel.

Other examples:

CV_16UC1 : Mat of type unsigned short (uint16) with one channel

CV_32FC3 : Mat of type float with three channels

CV_8UC3: Mat of type unsigned char (uint8) with 3 channels (commonly used for color images; note that OpenCV's default channel order is BGR)

CV_32SC2: Mat of type signed integer (int32) with two channels

The reason uint8 is so common is that it is the standard way to store images, with each pixel channel ranging between 0 and 255. In a grayscale image, a pixel with value 0 is black, a pixel with value 255 is white, and values in between are shades of gray.

2015-10-29 05:48:26 -0600 commented answer Create mask to select the black area

Share your code, otherwise I won't be able to help you.

2015-10-28 12:01:31 -0600 received badge  Scholar (source)
2015-10-27 05:55:49 -0600 answered a question Create mask to select the black area

OpenCV allows for easy indexation to create masks.

So imagine you want a cv::Mat with white pixels in the black zones of the original image:

cv::Mat mask = cv::Mat::zeros(Original.size(), CV_8UC1);

mask.setTo(255, Original == 0);

Or, equivalently, in a single line:

mask = (Original == 0);

This also works with the > and < operators, so if you have

mask = (Original < 5);

the mask contains white pixels wherever Original has values lower than 5.

2015-10-21 04:29:31 -0600 commented question Single blob, multiple objects (Ideas on how to separate objects)

Oh, now I get it. I had the wrong idea about watershed then, thanks. I'll give it a try.

2015-10-20 05:53:50 -0600 commented question Single blob, multiple objects (Ideas on how to separate objects)

Even assuming that the distance transform + threshold outputs perfect seeds for all scenarios (which is not the case, mainly for non-round objects), it still requires the original image to perform watershed, am I right? I really don't know what happens inside the watershed algorithm, so there might be a misconception here, but I'm assuming it computes the edges of the image and then "fills" it with different labels according to those edges.

My original image is really noisy, and no coherent edges can be computed from it.

2015-10-19 10:06:40 -0600 answered a question Single blob, multiple objects (Ideas on how to separate objects)

I developed an algorithm that performs my task well.

It assumes that, within a blob, the connection areas are thinner than the non-connection areas. The algorithm performs the following steps:

1 - Measure the distance between all contour points.

2 - For each contour point, select the corresponding non-sequential point separated by the least distance, with the condition that there are white pixels between the pair (shown as the small red and blue dots in the image).

3 - Cluster pairs in groups corresponding to the same separation zone, and select the pair separated by the least distance (bigger colored circles).

4 - Draw a black line between selected pairs.

image description

2015-10-06 04:35:12 -0600 received badge  Enthusiast
2015-09-28 04:10:50 -0600 commented answer Single blob, multiple objects (Ideas on how to separate objects)

I'll give it a try, thank you for the suggestion :)

2015-09-28 04:10:33 -0600 commented question Single blob, multiple objects (Ideas on how to separate objects)

There is no "normal" image in this project because I use an Asus Xtion 3D sensor (instead of a usual camera) and use the infrared image as one of the inputs. The infrared image is good because it is resistant to illumination changes (good for background subtraction), but it is bad for almost everything else because it's very noisy.

The other input is the 3D data, so I guess this binary image is really the starting point.

2015-09-28 04:09:25 -0600 received badge  Supporter (source)
2015-09-25 14:12:15 -0600 received badge  Student (source)
2015-09-25 11:19:37 -0600 received badge  Editor (source)
2015-09-25 11:16:15 -0600 asked a question Single blob, multiple objects (Ideas on how to separate objects)

Hey friends

I'm developing an object detection and tracking algorithm. The available CPU resources are low, so I'm using simple blob analysis; no heavy tracking algorithms.

The detection framework is created and works according to my needs. It uses information from background subtraction and 3D depth data to create a binary image with white blobs as the objects to detect. Then, a simple matching algorithm assigns an ID to an object and keeps tracking it. So far so good.

The problem:


The problem arises when objects are too close together. The algorithm just detects them as one big object, and that's the problem I need to solve. In the example image above, there are clearly 3 distinct objects, so how do I solve this?

Things I've tried

I've tried a distance transform + adaptiveThreshold approach, which obtains fairly good results in individualizing objects. However, this approach is only robust for circular or square objects. If a rectangular object (such as in the example) shows up in the image, the approach just doesn't work, due to how the distance transform is computed. So the distance transform approach is invalid.

Stuff that won't work

  • Watershed on the original image is not an option: firstly because the original image is very noisy due to the setup configuration, and secondly because of the strain on the CPU.
  • Approaches solely based on morphological operations are very unlikely to be robust.

My generic idea to solve the problem (comments on this are welcome)


I thought about a way to detect the connection points of the objects, erase the pixels between them with a line, and finally let the detector work as it is.

The challenge is detecting those points. It may be possible to do this by calculating the distance between all contour points of a blob and identifying connection points as pairs that have a low Euclidean distance between each other but are far apart in the contour point vector (so that sequential points are not matched). This is easy to say, but not so easy to implement and test.

I welcome ideas and thoughts :)