Distances to surroundings in image
Using a rotating robot and Kinect depth data, I am able to create a black-and-white image of my robot's surroundings (black is free space, white is obstacles).
The robot is looking for a tag and, if it is not found, should move to another location and repeat the search.
I am a bit confused about where the robot should move next. I figured it would be best to head in a direction with no obstacles, or only distant ones, and not too close to a scan position that has already proven unsuccessful.
I know I could walk through every pixel along an expanding circle and eliminate unpromising directions. However, I am in a Python environment, and stepping through all the pixels in a loop will be slow and use lots of CPU cycles.
Are there any functions in OpenCV to rotate a beam around a fixed location (the position of my robot) and get the distances (e.g., for each degree) to the nearest obstacle (in my case, a white pixel) in reasonable time?
If I'm understanding your question correctly, you can use reduce to get the average of each column, i.e. the average distance in that direction. Then you only need to loop through a single row, which is much faster.
As far as defining what is a promising and non-promising direction, that's up to you.