
# How to remove unwanted areas/pixels from an image?

Hi guys,

I don't know whether this question has been asked before. Imagine you have a leaf image in hand. The image contains not only the leaf but also some lines and fuzzy artefacts. I hope this is clear so far. Now, you have to remove those lines and fuzzy things in order to do further processing. You want to keep only the leaf in the image, and in fact only the external contour of the leaf. Do you have an idea? Which functions or steps would you use for this? Finally, the result must be very clean: there should not be any fuzzy things left in the image, only the leaf.

I've attached an example showing before and after. Please give me an opinion or a suggestion.

As you can see, that is the problem. I just want to extract the external contour of the main leaf in the image, or, if possible, to clean up the image. The second important point is that the process should work the same way for all images.

I'm waiting for your ideas. Thanks in advance!


## 2 answers


This is not a very easy problem, but by looking at the image I have a few ideas:

• Do a texture analysis: apply Gabor filters to the image, then train an SVM to discriminate the leaf from the rest. You could also use the Haralick descriptors (but you would have to implement them yourself).
• Do a frequency analysis; the leaves have higher-frequency components than the background. Probably the easiest option is a wavelet decomposition, where the dH, dD and dV sub-bands contain the high-frequency components. You could also use frequency-domain transforms such as the DCT or FFT.
• As there are a lot of edges in the image that don't belong to the leaf, I don't really recommend edge detection like Canny, or area-based segmentation like Watershed. However, you could try to detect the spikes of this kind of leaf using a Harris corner detector, then use those points as a starting point for contour detection.

If you use either of the first two techniques, the algorithm is as follows:

• Apply the desired filters to the image; this gives you several "descriptor" images. For wavelets you get 4, for Haralick 6-10 (depending on the number of features used), and for Gabor filters you get one image per filter (direction and frequency). You can also combine these (e.g. Haralick + wavelet).
• Manually label a few images. I suggest using 3 classes: leaf, background and leaf edge. As edges have different textural characteristics, you need them for a correct result.
• For each labelled pixel, collect the features at that pixel, so for each pixel you get:

[ f1  f2  f3  ... fN ] [ l1 ]  (f=features, l=label)

• Use this matrix to train an SVM.

Then, for each new image, compute the descriptors and use the trained SVM to classify each pixel. You should get an image containing the 3 classes: background, leaf and edge!


I wrote a few blog posts earlier this year on how to do some of this: creating thresholds, Canny edge detection, finding contours, etc.

In case my site ever goes down, here is a bit of example code taken from those pages. But I strongly suggest you read through the posts, if possible, before blindly trying to use the code.

```cpp
#include <opencv2/opencv.hpp>

#include <iostream>
#include <string>
#include <vector>

int main(void)
{
    cv::Mat original_image = cv::imread("capture.jpg", cv::IMREAD_COLOR);
    cv::namedWindow("Colour Image", cv::WINDOW_AUTOSIZE);
    cv::imshow("Colour Image", original_image);

    // Compare several Canny thresholds side by side.
    for (double canny_threshold : { 40.0, 90.0, 140.0 })
    {
        cv::Mat canny_output;
        cv::Canny(original_image, canny_output, canny_threshold, 3.0 * canny_threshold, 3, true);

        const std::string name = "Canny Output Threshold " + std::to_string((size_t)canny_threshold);
        cv::namedWindow(name, cv::WINDOW_AUTOSIZE);
        cv::imshow(name, canny_output);
    }

    // ---------------

    cv::Mat canny_output;
    const double canny_threshold = 100.0;
    cv::Canny(original_image, canny_output, canny_threshold, 3.0 * canny_threshold, 3, true);

    typedef std::vector<cv::Point> Contour; // a single contour is a vector of many points
    typedef std::vector<Contour> VContours; // many of these are combined to create a vector of contours

    VContours contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(canny_output, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (auto & c : contours)
    {
        std::cout << "contour area: " << cv::contourArea(c) << std::endl;
    }

    const cv::Scalar green(0, 255, 0);
    cv::Mat output = original_image.clone();
    for (auto & c : contours)
    {
        cv::polylines(output, c, true, green, 1, cv::LINE_AA);
    }
    cv::namedWindow("Contours Drawn Onto Image", cv::WINDOW_AUTOSIZE);
    cv::imshow("Contours Drawn Onto Image", output);

    // ---------------

    // Blur, erode and dilate first to suppress the small "fuzzy" details,
    // then run Canny and findContours again on the cleaned-up image.
    cv::Mat blurred_image;
    cv::GaussianBlur(original_image, blurred_image, cv::Size(3, 3), 0, 0, cv::BORDER_DEFAULT);

    const size_t erosion_and_dilation_iterations = 3;

    cv::Mat eroded;
    cv::erode(blurred_image, eroded, cv::Mat(), cv::Point(-1, -1), erosion_and_dilation_iterations);

    cv::Mat dilated;
    cv::dilate(eroded, dilated, cv::Mat(), cv::Point(-1, -1), erosion_and_dilation_iterations);

    cv::Canny(dilated, canny_output, canny_threshold, 3.0 * canny_threshold, 3, true);
    cv::findContours(canny_output, contours, hierarchy, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat better_output = original_image.clone();
    for (auto & c : contours)
    {
        cv::polylines(better_output, c, true, green, 1, cv::LINE_AA);
    }
    cv::namedWindow("Another Attempt At Contours", cv::WINDOW_AUTOSIZE);
    cv::imshow("Another Attempt At Contours", better_output);

    cv::waitKey(0);

    return 0;
}
```



## Stats

Asked: 2018-07-15 08:28:00 -0500

Seen: 1,713 times

Last updated: Jul 19 '18