Distances to surroundings in image

Using a rotating robot and Kinect depth data, I am able to create a black-and-white image of my robot's surroundings (black is free space, white is obstacles).
The robot is looking for a tag; if the tag is not found, it should move to another location and repeat the search.
I am unsure where the robot should move next. My idea: pick a direction with no obstacles (or only far-away ones) that is not too close to an already proven unsuccessful scan position.
I know I could walk through every pixel in an expanding circle and eliminate unpromising directions; however, I am in a Python environment, and stepping through all the pixels in a loop will be slow and use a lot of CPU cycles.
Are there any functions in OpenCV to rotate a beam around a fixed location (the position of my robot) and get the distance (e.g. for each degree) to the nearest obstacle (in my case, a white pixel) in reasonable time?
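(For context, one way to avoid a per-pixel Python loop without a dedicated OpenCV function: precompute the sample coordinates of all rays as NumPy arrays and use fancy indexing plus `argmax` to find the first white pixel along each ray. This is a sketch, not a built-in OpenCV call; the function name `ray_distances` and its parameters are made up for illustration.)

```python
import numpy as np

def ray_distances(img, cx, cy, max_r=None, n_angles=360):
    """Distance from (cx, cy) to the first white pixel along each ray.

    img: 2D uint8 array, 0 = free space, nonzero = obstacle.
    Returns an array of length n_angles, np.inf where no obstacle is hit.
    """
    h, w = img.shape
    if max_r is None:
        max_r = int(np.hypot(h, w))          # longest ray that can fit
    angles = np.deg2rad(np.arange(n_angles) * (360.0 / n_angles))
    radii = np.arange(1, max_r)              # shape (R,)
    # Sample points of every ray at once: shape (n_angles, R)
    xi = np.round(cx + np.outer(np.cos(angles), radii)).astype(int)
    yi = np.round(cy + np.outer(np.sin(angles), radii)).astype(int)
    inside = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    hit = np.zeros(xi.shape, dtype=bool)
    hit[inside] = img[yi[inside], xi[inside]] > 0
    first = hit.argmax(axis=1)               # index of first hit per angle
    dist = radii[first].astype(float)
    dist[~hit.any(axis=1)] = np.inf          # rays that never hit anything
    return dist
```

The inner work is all done in vectorized NumPy, so even 360 rays over a few hundred pixels each runs in milliseconds; the result can then be smoothed or thresholded to pick a promising direction.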

Revision 2: retagged; updated 2018-03-13 10:39:08 -0600 by berak.