Hi Community!
I'm attempting to improve a hand segmentation algorithm I'm working on, and as an experiment, I'd like to give higher priority to pixels that are near an edge. (My most typical failure case is caused by walls that are colored similarly to skin, but walls usually have far fewer edges than hands.)
Anyway, I was getting ready to build a custom algorithm for this, but I can't help thinking OpenCV probably has image processing functionality for it already. Here's what I want to do: given a binary image generated by Canny(), produce a matrix of the same size with floating-point values normalized to the range 0.0 to 1.0, where 1.0 represents an edge pixel, 0.0 represents a pixel some fairly large distance from any edge pixel, and intermediate values follow a curve that drops off more rapidly the further a pixel is from an edge.
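To make that concrete, here's a rough Python sketch of the output I'm describing. (cv2.distanceTransform plus a Gaussian-shaped falloff is just my first guess at expressing it; the decay constant `tau` is a placeholder, and whether this is actually the right OpenCV tool is part of my question.)

```python
import cv2
import numpy as np

def edge_priority(edges, tau=20.0):
    """Map a Canny edge image to [0.0, 1.0] priorities.

    edges: uint8 output of cv2.Canny() (255 = edge pixel).
    tau:   decay constant in pixels (placeholder value).
    """
    # distanceTransform measures distance to the nearest *zero*
    # pixel, so invert first: edges become 0, background 255.
    dist = cv2.distanceTransform(cv2.bitwise_not(edges), cv2.DIST_L2, 5)

    # Gaussian-shaped falloff: 1.0 on an edge, approaching 0.0
    # far away (a stand-in for whatever curve ends up fitting).
    return np.exp(-(dist / tau) ** 2)
```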
This sounds an awful lot like a Gaussian filter, but I have a couple of issues with that. First, I want non-zero values for pixels that are fairly far from any edge, so the kernel would have to be very large - and hence performance would be very poor. Second, running something like GaussianBlur would probably require further post-processing to normalize the results to the 0.0-1.0 range.
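For reference, the GaussianBlur version I'd be comparing against would look roughly like this (the file name, kernel size, and min-max normalization are just assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(img, 100, 200)

# A large kernel is needed to reach pixels far from any edge,
# which is exactly the performance problem.
blurred = cv2.GaussianBlur(edges.astype(np.float32), (63, 63), 0)

# ...and the result still needs rescaling to the 0.0-1.0 range.
normalized = cv2.normalize(blurred, None, 0.0, 1.0, cv2.NORM_MINMAX)
```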
Ideally, a custom implementation would scan through each pixel, iteratively spiraling outward until it finds an edge pixel. My guess is that such a solution would be much more efficient than a convolutional approach, since it can stop as soon as it finds a single edge pixel.
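Here's a naive Python sketch of that idea, just to show the shape of it (the real version would be C++, and `max_radius` is an arbitrary cutoff):

```python
import numpy as np

def nearest_edge_distance(edges, y, x, max_radius=50):
    """Grow a window around (y, x) until it contains an edge pixel.

    edges: binary edge image (nonzero = edge).
    Returns the radius of the first hit (Chebyshev distance),
    or max_radius if nothing is found within the cutoff.
    """
    h, w = edges.shape
    if edges[y, x]:
        return 0
    for r in range(1, max_radius):
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        # Naive: rescans the whole window each step instead of just
        # the new ring, but it stops at the first edge pixel found.
        if edges[y0:y1, x0:x1].any():
            return r
    return max_radius

# Tiny demo on a synthetic edge map:
edges = np.zeros((9, 9), dtype=np.uint8)
edges[4, 7] = 255
print(nearest_edge_distance(edges, 4, 2))  # -> 5

# Per-pixel priority would then be something like:
# priority = np.exp(-(nearest_edge_distance(edges, y, x) / tau) ** 2)
```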
Anyway, before I spend the effort implementing this in C++ (and then converting it to Python as well - long story): is there a better, more efficient approach to what I'm trying to do provided (or supported) by OpenCV?
Thanks!