# Is it possible to use neural nets for line detection?

Neural nets are known for their error-correction ability. Straight lines extracted from real images are often broken and contain small gaps. Using a large gap parameter in the probabilistic Hough transform has the side effect that the detected line continues beyond its true endpoints. Has anybody tried to solve this problem using an NN? I have good output from an edge detector, with lines as white dots on a black background (1/0). Maybe it is possible not only to correct gaps but to do full detection (slope, end coordinates)? For those interested in this approach, a hint: in his classic book "Vision", David Marr proposed using a combination of a Gaussian distribution with the Laplace operator for edge detection. The Gaussian provides blurring and the Laplacian the gradient calculation. He used a convolutional net with characteristic "Mexican hat" connections.

That is, each neuron supports itself and its nearest neighbours but suppresses the more remote ones.
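To make the "Mexican hat" interaction concrete, here is a minimal NumPy sketch of a sign-flipped Laplacian-of-Gaussian kernel: a positive (supporting) centre, a negative (suppressing) surround, and near-zero weights further out. The size and sigma are arbitrary illustrative choices, not the parameters used for the images below:

```python
import numpy as np

def mexican_hat_kernel(size=9, sigma=1.4):
    """Sign-flipped Laplacian of Gaussian: excitatory centre,
    inhibitory surround, near-zero far away."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = (1 - r2 / (2 * sigma ** 2)) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero net sum: flat regions give no response

k = mexican_hat_kernel()
# centre weight is the maximum; the surrounding ring is negative
```

Convolving an edge map with such a kernel is exactly the "each neuron supports its neighbours and suppresses the remote ones" rule, expressed as shared weights.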

This is what Marr's filter does to the Canny output from a Rubik's cube image. It looks like what is required. Now I need to downsample it. How do I do that correctly? Do I just take every second or third pixel, or are there more sophisticated methods? I used this kernel:
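On the downsampling question: taking every second pixel alone can drop a one-pixel-wide line entirely (aliasing); the usual remedy is to low-pass filter first and then decimate, which is what OpenCV's cv2.pyrDown does. A hedged pure-NumPy sketch with a simple binomial blur as the anti-alias filter:

```python
import numpy as np

def blur_then_decimate(img, factor=2):
    """Anti-alias with a separable [1 2 1]/4 binomial filter, then keep
    every `factor`-th pixel. Plain img[::factor, ::factor] can miss
    1-px-wide lines completely."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    out = np.apply_along_axis(np.convolve, 0, img.astype(float), k, 'same')
    out = np.apply_along_axis(np.convolve, 1, out, k, 'same')
    return out[::factor, ::factor]

img = np.zeros((8, 8))
img[1, :] = 1.0                   # 1-px line on an odd row
naive = img[::2, ::2]             # the line vanishes entirely
safe = blur_then_decimate(img)    # the line survives as a dimmer band
```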

Another set of parameters shows off the full power of the method. It restores true edges even where they are completely absent from the Canny output. Note that it draws 3 lines where there are only 2. Most importantly, it restores the central dot where 3 edges meet, which is absent from the source image as well.

Look how simple perceptron technology (see the answer) works for 3D corner detection.

There are 2 output neural patterns here because the angle range is 0–180°. The upper image corresponds to the lower half of the input screen.

One last thing remains: how to detect the central point where 3 lines meet? I think it's time to introduce more complicated nets combining several standard solutions. I suggest convolution + recursion (an RNN). The idea is to highlight all pixels of the same line by positive feedback; the brightness of their common endpoint then gets a triple increase, like this.
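A toy sketch of that feedback idea (my own illustration, not a tested design): repeatedly add each line pixel's neighbour support back onto it, restricting the feedback to existing line pixels. A junction fed by three or four line arms then brightens faster than a mid-line pixel, which in turn brightens faster than a line end:

```python
import numpy as np

def neighbour_support(a):
    """Sum of the four direct neighbours, zero-padded at the border."""
    p = np.pad(a, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]

def reinforce(img, steps=2, gain=0.25):
    """Positive feedback along existing line pixels only: activity
    grows fastest where several lines meet."""
    mask = img > 0
    out = img.astype(float)
    for _ in range(steps):
        out = out + gain * neighbour_support(out) * mask
    return out

img = np.zeros((7, 7))
img[3, :] = 1.0                  # horizontal line
img[:, 3] = 1.0                  # vertical line crossing it
act = reinforce(img)
# the junction (3, 3) ends up brighter than mid-line and end pixels
```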

Any ideas how to do it better?


I think the entire automotive industry is solving this issue with deep learning, and thus with neural networks. The question is whether OpenCV will have support for these networks, which I haven't seen around yet. However, there are tons of GitHub repos out there doing exactly this!

(2018-03-05 07:33:55 -0500)

That link to GitHub reads: "Monocular vehicle detection using SVM and Deep Learning classifiers". I asked about a different task, not line following, which is indeed very popular in, say, educational robotics. There is a rectangular object from which you extract edges. These edges will have defects, i.e. gaps, which need to be corrected.

(2018-03-05 08:44:26 -0500)

Oh, I completely misunderstood your problem then :D That might indeed be a different issue. Let me think it over tonight :D

(2018-03-05 08:57:18 -0500)

Also, nowadays deep learning has overshadowed everything. People forget that there are different architectures with different abilities. Deep learning is hierarchical concept formation from examples; this task is of a different class: signal processing. I think it may be implemented without any learning at all — one just needs to determine the microstructure of the net. This is similar to the various methods based on kernel processing.

(2018-03-05 09:22:43 -0500)

I agree, it's an interesting idea. I have no idea whether there are studies on line detection with DNNs.

As lines are very easy to describe analytically, classical methods like the Hough transform and RANSAC generally work well and fast, even in the presence of noise.
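For reference, the Hough voting scheme mentioned here fits in a few lines of NumPy (a toy version of OpenCV's cv2.HoughLines). Note how a gap in the line does not move the accumulator peak, it only lowers it:

```python
import numpy as np

def hough_accumulator(binary, n_theta=180):
    """Each white pixel votes for every (rho, theta) line through it;
    collinear pixels pile their votes into one accumulator cell."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, cols] += 1
    return acc, diag

img = np.zeros((20, 20), dtype=np.uint8)
img[5, 2:18] = 1                 # horizontal line y = 5 ...
img[5, 9:11] = 0                 # ... with a 2-px gap
acc, diag = hough_accumulator(img)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
# peak at rho = 5, theta near 90 degrees, despite the gap
```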

Another interesting method for line or shape detection in noisy images is the marked point process. Check the articles from the team of X. Descombes and J. Zerubia at INRIA.

(2018-03-05 09:28:20 -0500)

This seems to be some kind of Gedankenexperiment; once you add real-world constraints, even done on a napkin, it might look less feasible.

What would be the input? An HD image? Surely not. You'd have to downsample, and at that stage you'd already lose the gaps you're trying to correct.

And the output? DNNs are usually some kind of pyramid: large data at the bottom, small data at the top (the "prediction"). The larger you make the output, the more expensive it gets.

But again, +1 for starting this discussion!

(2018-03-05 13:22:52 -0500)

The solution to the performance problems is convolution. A CNN doesn't use all-to-all connections, so it runs much faster. In principle, existing kernel-based methods do the same, but I think NNs are slightly different. Standard programming is algorithmic and thus deterministic at its foundation; neural nets are essentially probabilistic, based on fuzzy logic, so errors may be corrected automatically in the course of the main computation.
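The point about convolution can be made concrete with a parameter count: a fully connected layer between two equally sized "retinas" versus one shared kernel reused at every position (the 256×256 and 3×3 sizes here are arbitrary, chosen just for illustration):

```python
# hypothetical 256x256 input layer feeding an equally sized output layer
H = W = 256
fully_connected_weights = (H * W) ** 2  # every output neuron sees every input
shared_kernel_weights = 3 * 3           # one 3x3 kernel shared across positions
print(fully_connected_weights, shared_kernel_weights)  # 4294967296 vs 9
```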

(2018-03-06 02:40:07 -0500)

To kbarni: both RANSAC and the other method you mentioned require heavy computation. RANSAC will not catch a line of 50 dots in an image containing hundreds of other dots. The second method involves such heavy math that even understanding its principle of operation requires substantial effort; it includes Student's t-test among other things. This is clearly for overnight rather than real-time computing. I agree that the Hough transform works fine, but it has a problem with lines that end near a cloud of dots. To determine exact endpoints, you need the probabilistic Hough transform, and when you increase the allowed gap in the line, it starts to overshoot into such clouds. The error in line length may be up to 100%. In fact, only a zero gap works without errors; other values produce defects.

(2018-03-06 06:12:14 -0500)

@ya_ocv_user, just curious: what kind of NN was used for the image above?

(2018-03-07 11:32:08 -0500)

This was a simple convolutional net with 2 layers. The first layer takes the gray image; the second layer has a matrix of the same size. The activity of each neuron is calculated as the weighted sum of its neighbours in the first layer. I posted the array of weights (the kernel) in the question.
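The layer described here — each neuron's activity is the weighted sum of its neighbourhood in the previous layer — is exactly a single 2-D correlation, which is what cv2.filter2D computes in OpenCV. A minimal NumPy equivalent, using made-up kernels since the original weight array is shown only as an image:

```python
import numpy as np

def conv_layer(img, kernel):
    """Each output 'neuron' = weighted sum of its neighbourhood in the
    input layer (zero padding at the border)."""
    kh, kw = kernel.shape
    p = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0                       # a single bright "neuron"
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
box = np.ones((3, 3))                 # uniform neighbour summation
```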

(2018-03-07 12:35:54 -0500)


My answer is yes, provided that the line is anchored, for example if one of its ends is at the center of the image. The solution is to use the NN for the reverse transformation. A net is a universal function approximator: you can memorize discrete input-output associations and then use them in the reverse direction. In this case, the direct transformation is rotation around the centre. Here, the simplest 2-layer perceptron was used. The input image is on the left. The central matrix represents the NN output, similarly to a Hough transform: the angle of rotation (clockwise) is on the X axis and the line length on the Y axis (downwards).

Of course, this method comes with many conditions. I am going to post a more detailed report later.
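A toy version of the memorized input→output association described above (my own sketch with made-up sizes and a 10° angle grid): store one template of the anchored line per angle cell; each cell's activation is the normalised dot product of the input with its template, and the winning cell reads out the angle:

```python
import numpy as np

def anchored_line(angle_deg, length=12, size=32):
    """Line starting at the image centre, drawn at the given angle."""
    img = np.zeros((size, size))
    c = size // 2
    t = np.linspace(0.0, length, 2 * length)
    a = np.deg2rad(angle_deg)
    xs = np.clip((c + t * np.cos(a)).astype(int), 0, size - 1)
    ys = np.clip((c + t * np.sin(a)).astype(int), 0, size - 1)
    img[ys, xs] = 1.0
    return img

# one output "neuron" per 10-degree angle cell; its weights are the template
templates = {a: anchored_line(a) for a in range(0, 180, 10)}

def detect_angle(img):
    # normalised dot product = perceptron activation of each angle cell
    scores = {a: (img * t).sum() / t.sum() for a, t in templates.items()}
    return max(scores, key=scores.get)
```

This is the same "remember discrete associations, run them backwards" trick in miniature: the forward transform (draw a line at angle a) defines the weights, and the readout inverts it.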



The problem with a classical MLP is that it's very position-dependent. That's why your line must begin at the center of the image.

I propose taking a simple DNN architecture (like LeNet with a larger input layer) in Keras and generating a large number of black images with a random white line, the label being the angle of the line. You could even draw several lines per image and activate multiple output neurons.
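The Keras model itself is omitted here, but the training-data side of this proposal is easy to sketch (image size and line placement are my own assumptions): black images with one random white line, labelled by the angle in [0, 180):

```python
import numpy as np

def make_sample(size=28, rng=None):
    """One synthetic training pair: (image with one random line, angle label)."""
    if rng is None:
        rng = np.random.default_rng()
    angle = float(rng.uniform(0.0, 180.0))
    # a random point inside the central region that the line passes through
    cx, cy = rng.uniform(size * 0.3, size * 0.7, size=2)
    a = np.deg2rad(angle)
    t = np.linspace(-size, size, 8 * size)
    xs, ys = cx + t * np.cos(a), cy + t * np.sin(a)
    keep = (xs >= 0) & (xs < size) & (ys >= 0) & (ys < size)
    img = np.zeros((size, size), dtype=np.float32)
    img[ys[keep].astype(int), xs[keep].astype(int)] = 1.0
    return img, angle

img, angle = make_sample(rng=np.random.default_rng(0))
```

The angle label could be quantised into bins so the net's output layer is a softmax over angle cells, which also allows multiple active outputs for multiple lines.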

Your network should then be able to detect the angle of the lines correctly (and it will be quite robust), but you won't get the position. Still, it's a good start.

I'm curious about the results of this method. Then, we could add deconvolution layers to get the endpoints of the lines...

(2018-04-03 09:38:35 -0500)

The sooner you understand that this approach is fruitless, the better. Everyone is literally zombified by DNNs. Seemingly, there was some influential group which promoted this particular architecture, and now everybody thinks it is a panacea. It is not, for a simple reason: humans don't learn from raw data. You go to school, read textbooks, etc. Industrial application of these primitive DNNs has already led to 3 fatal accidents, and more are to be expected. What I am trying to explain here is that NNs can not only learn, they can also be directly programmed. This is better because you know what the net does. IMO, the rejection of even the early perceptron was an error. Yes, it has limitations, but there are simple tasks too; the task and the tool just need to fit each other.

(2018-04-04 12:19:11 -0500)

If the task is more complicated, add to the existing solution rather than rejecting it outright. Another disadvantage is the use of ready-made toolkits: they are convenient, but they limit you to a particular framework.

(2018-04-04 12:21:40 -0500)

As I said in my first comment, there are better suited algorithms for this task - but the question was explicitly about NNs for line detection.

On the other hand, DNNs mimic the human visual system quite well; that's why they are so successful (though they still have their limitations). And yes, humans do learn a lot from raw data: babies learn to recognize their environment (objects, persons, sounds, etc.) without textbooks. And since kids "learn" lines quite quickly, it should be possible to solve this problem using DNNs.

But, as current line-detection algorithms (especially the Hough transform) are simple, fast and robust, line detection hasn't really been an application for DNNs (especially since they are much slower). I still find this an interesting problem.

(2018-04-05 04:19:12 -0500)

Glad to see your interest; the more opinions, the closer we get to the truth. I resorted to this method because Hough is unsustainable in tough conditions. Maybe it is possible to fine-tune it, but the available adjustments are not enough. I had a choice: dig into the source code or build an NN from scratch. The second option comes with a sort of guarantee: if humans can do it, then a model should too. The problem is that the visual system (or, better, all sensory analyzers together) is only 1/3 of all human computing gear. There are also the motor system (output) and the limbic system (sentiment and decision making). Now suppose you take only 1/3 of a standard computer. Even if the parts are well chosen, will it be workable? I don't feel confident with DNNs because nobody knows what they have learned. I prefer to find some efficient filter that does the job.

(2018-04-05 10:25:12 -0500)


## Stats

Asked: 2018-03-05 05:58:43 -0500

Seen: 1,921 times

Last updated: May 05 '18