
Chris Parker's profile - activity

2016-08-28 10:50:11 -0600 received badge  Famous Question (source)
2015-04-23 01:32:47 -0600 received badge  Notable Question (source)
2014-07-17 02:20:17 -0600 received badge  Nice Question (source)
2014-05-30 17:20:35 -0600 received badge  Popular Question (source)
2012-09-21 14:07:03 -0600 received badge  Supporter (source)
2012-09-19 17:58:37 -0600 received badge  Editor (source)
2012-09-19 13:07:00 -0600 asked a question Baby's first SLAM algorithm

I have a strong math background, but am a self-taught programmer, so there are gaps in my knowledge and abilities. I've recently had some really invigorating successes coding in OpenCV, which have bolstered my confidence. I want to specialize in mechatronics, and specifically in computer vision. Within that, I find SLAM (simultaneous localization and mapping) algorithms very interesting, and I want to gain experience implementing SLAM algorithms.

How do I do that? What is the smallest meaningful & useful approach that I can implement first?

I've been doing a lot of reading on the extant research and am fairly comfortable with the material (though I am not yet comfortable building something on my own). What is the best starting point for me? Should I implement FastSLAM? FastSLAM 2.0? Build my own 2-D bearings-only SLAM simulator for a virtual agent? Estimate a camera path from a prerecorded video? Some much simpler task?
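For what it's worth, the 2-D bearings-only simulator option is small enough to sketch. Below is a minimal, NumPy-only version (the function name, path, landmarks, and noise level are all my own arbitrary choices): a virtual agent follows a known path and receives noisy bearing measurements to fixed landmarks, which is exactly the raw input a bearings-only SLAM filter would consume.

```python
import numpy as np

def simulate_bearings(path, landmarks, noise_std=0.01, seed=0):
    """For each (x, y, heading) pose on the path, return the noisy
    bearing (radians, relative to heading) to every landmark."""
    rng = np.random.default_rng(seed)
    measurements = []
    for x, y, theta in path:
        dx = landmarks[:, 0] - x
        dy = landmarks[:, 1] - y
        bearings = np.arctan2(dy, dx) - theta
        bearings += rng.normal(0.0, noise_std, size=len(landmarks))
        # wrap each bearing back into (-pi, pi]
        measurements.append(np.arctan2(np.sin(bearings), np.cos(bearings)))
    return np.array(measurements)

# straight-line path along the x-axis, two fixed landmarks
path = [(t, 0.0, 0.0) for t in np.linspace(0.0, 5.0, 6)]
landmarks = np.array([[2.0, 3.0], [6.0, -1.0]])
z = simulate_bearings(path, landmarks)
print(z.shape)  # one bearing per pose per landmark
```

From here, the natural next stage is to feed these measurements into an estimator that does not get to see `path` or `landmarks` directly.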

I am looking for a roadmap: stages I can plan to build, each building on the last. For example, I once wanted to build my own CUDA-enabled genetic algorithm that could approximate a given image using a finite number of colored object primitives. My roadmap for that project looked like this:

Build a bare-bones, single population GA (find a float x such that e^x = 1024) in Python

Convert Python to C++

Use OpenCV's drawing functions to output a visual plot of the state of the population over time (I already had experience with OCV)

In a separate program, build a "Hello World" equivalent in CUDA

Port the evaluation function for the GA to CUDA

Alter the eval function to find integers a and b such that e^(a/b) = 1024

Rewrite the eval function to find integers x, y, r, g specifying the single circle that best approximates a grayscale input image

Alter it to find a series of circles.

Alter it to use the HSV color space instead of grayscale

Alter it to use triangles instead of circles

Alter it to use a set of image primitives (say, a binary image)

Expand the GA to use multiple populations

Experiment with further customization
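As a concrete illustration of the first stage of that roadmap, here is roughly what a bare-bones, single-population GA for the e^x = 1024 toy problem can look like in Python (truncation selection and Gaussian mutation are my own arbitrary choices, not the only options):

```python
import math
import random

def fitness(x):
    # negative absolute error: higher is better, 0 is perfect
    return -abs(math.exp(x) - 1024.0)

def run_ga(pop_size=50, generations=200, mut_sigma=0.5, seed=42):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = [x + rng.gauss(0.0, mut_sigma)  # Gaussian mutation
                    for x in survivors]
        pop = survivors + children                 # elitist: survivors kept
    return max(pop, key=fitness)

best = run_ga()
print(best)  # should approach ln(1024) ~= 6.93
```

Everything after stage one is then a matter of swapping out `fitness` and the genome representation while the selection/mutation loop stays the same.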

2012-09-08 07:48:36 -0600 commented question Outsider seeking advice on cuboid detection & robot localization

Unfortunately it'll be a few days before I have the time to post pictures, but I will.

I don't think that was blocked: it worked the day before, and my employer's network admin is exceedingly generous.

I only really care about the two questions in bold. That is why I segmented off the summary of my attempts so far and said that any questions asked in there were not the main thrust of this post. I assume the answers to the lesser questions will become apparent to me in time.

To reiterate: how would a knowledgeable person go about finding the orientation & distance of a distinctive colored block, given that most everything else is constant? Should I instead be trying to localize with fastSLAM, or some such? Also, am I better served by writing my code in C++?

2012-09-07 15:05:25 -0600 received badge  Student (source)
2012-09-07 11:23:01 -0600 asked a question Outsider seeking advice on cuboid detection & robot localization

I am on two inexperienced college robotics teams that need to use computer vision to solve similar types of problems. I am focusing on using a video stream for localization ("where is the robot relative to this object?").

The first (and seemingly simplest) task I am trying to accomplish is to, given an image which contains a single block (cuboid, aka rectangular prism, of known color and dimension) lying on an even floor, determine the block's distance from the robot and its orientation. The camera's height, pitch, FOV, etc are all presumed to be known and constant.

I am self-taught and thus lack the benefit of knowing how I should approach common problems. Thankfully I have a strong math background and can understand the computer vision theory that I have read so far. All the same, I would like some insight into how knowledgeable people would go about solving this problem. If there is a preferred text on computer vision, I would appreciate a link to it as well.

What follows is simply a representative summary of my attempts and undirected tinkering so far. Any questions mentioned in passing are not the primary purpose of this post:

I know enough to be able to generate a binary image of blobs that are sufficiently close to the target HSV color. And of course, I've experimented with blurring the image to varying degrees before doing any of this.

The block is slightly glossy, and thus the binary image has a corresponding hole. I know I can fix this with the morphological closing operator, but that seems to make the entire image a bit blocky. Also, vertical (never horizontal) stripes of white appear between sufficiently close blobs. What sort of kernel should I pass to MorphologyEx to prevent this?

I tried using contours to find the boundary of an object, but found that it uses far more points than the six that a human would use. It also seems to be noisy. I've yet to try a convex hull approach because the relevant documentation seems to be down at the moment.

GoodFeaturesToTrack frequently produces false positives/negatives in detecting the six or seven visible corners of the block, even under seemingly ideal conditions. As an alternative, I suppose I could run edge detection, then Hough lines, then pair lines together based on similarity of angle, and look for the outline of my block in triplets of line pairs... but I have the feeling that this is not the proper way to approach the problem, hence my asking for insight.

PS: I started using OCV 2.4 about 5 days ago, and started using Python at the encouragement of my team leaders. Should I bite the bullet and learn to use OCV in C++ instead of Python? I understand the OCV C++ code that I have seen. I am one of, if not the, most capable programmers on either team and have never used SWIG or Boost.Python ...