Measuring lanes or stripes on noisy and underexposed images

asked 2020-09-24 02:23:29 -0500 by aliakseis

updated 2020-11-11 12:50:37 -0500

I've been given the task of measuring lanes or stripes on severely varied, often noisy and underexposed images, using C++. An example of the input image, and of what should be measured, is below:

An example of what should be measured on images provided

I've tried a couple of approaches using OpenCV so far. The first one basically consisted of the following steps:

Filtering, background subtraction -> adaptiveThreshold -> thinning -> HoughLinesP -> and then filtering and merging of lines.

Please see the illustration image below:

The first attempt result

The second approach comprised searching for the beginnings of the short stripes with SURF, then moving left and up along the long lines.

Please see the illustration image below; note that SURF was run on the original halftone image:

The second attempt result

The third approach I've tried: applying the Fourier transform to frames (image fragments), which yields a 4-dimensional matrix, then finding basic patterns using PCA. Got the result below:

The third attempt result

I'm not sure what to do with that PCA output. I have tried selecting lines with adaptiveThreshold on the original image, then training a multilayer perceptron on this threshold and the PCA result so that it would yield a "refined" threshold. An attempt was made to select parameters that produce a cleaner threshold for further processing - it works occasionally, but the result is very unstable.

Unfortunately, all the approaches above work only on a few selected "good" images.

I presume that an ML approach would be the way to go. Unfortunately, I have only a few images for training.

I would greatly appreciate any suggestions on moving forward to solving this task.

Some test source images can be found here:

Please find an update here:

Any suggestions would be highly appreciated.



I think the best advice you can get is to fix the scene lighting and image formation/acquisition. Machine vision does not mean you have to accept whatever bad picture they're giving you. The input pictures I see here simply have too much going on to be good data. You have a highpassed version in your GitHub, and that comes close to what I would want, but you can see that on the right the noise just drowns out the signal. That can only be fixed at the image formation stage, i.e. before they throw it over the fence into your yard.

crackwitz ( 2020-11-12 06:57:47 -0500 )