Minimum distance my setup can detect

asked 2019-07-24 12:42:01 -0600

I've been using two grayscale cameras to create a disparity map. The object I'm trying to measure the distance of is close to the cameras (less than a meter). The cameras I'm using are 54 mm apart and the focal length is 2.3 mm. What is the minimum distance this setup can detect?


Comments

This should be easy for you to determine empirically (by trying it). From a mathematical perspective, what are your settings for min disparity and number of disparities, and what stereo disparity algorithm and implementation are you using? From a practical perspective: What is your depth of field? How much contrast on the subject? What is your illumination? Is the subject moving relative to the baseline of the cameras? All of these things affect the minimum distance you can measure. That is why it is best measured empirically.

opalmirror ( 2019-07-24 18:47:41 -0600 )
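
For a rough answer to the mathematical side of that question: stereo range follows Z = f * B / d, so the minimum measurable distance corresponds to the largest disparity the matcher searches. A back-of-the-envelope sketch in Python, where the pixel pitch and disparity search range are assumptions to be replaced with the real sensor spec and matcher settings:

    # Back-of-the-envelope minimum range for a stereo pair: Z = f * B / d.
    # Baseline and focal length come from the question; the pixel pitch and
    # disparity search range are ASSUMPTIONS to replace with real values.
    baseline_mm = 54.0        # distance between the two cameras
    focal_mm = 2.3            # lens focal length
    pixel_pitch_mm = 0.003    # assumed 3 um sensor pixel pitch
    max_disparity_px = 128    # assumed numDisparities of the matcher

    focal_px = focal_mm / pixel_pitch_mm                  # ~767 px
    z_min_mm = focal_px * baseline_mm / max_disparity_px
    print(f"minimum measurable distance ~ {z_min_mm:.0f} mm")  # ~324 mm here

With these assumed values the floor is roughly 0.3 m; widening the disparity search lowers it, at the cost of computation.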

Thanks for your reply! I have not tried it yet, as I'd like to know whether it's possible before I do all the work to implement it. Any settings on the software side can be anything that helps get a better result. I was thinking of using StereoSGBM, but again, that can change. The area should be very well lit, and the object being detected is black, so there should be a good amount of contrast. Lastly, the subject is not stationary relative to the cameras. I'm only looking for a rough estimate of the minimum distance to see if it's worth trying to implement.

stephannash ( 2019-07-25 08:30:22 -0600 )
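
As a starting point for that StereoSGBM experiment, a minimal sketch might look like the following; the image file names are placeholders, the images are assumed to be already rectified, and the parameter values are illustrative rather than tuned:

    import cv2

    # Minimal StereoSGBM sketch. left.png/right.png are placeholder names
    # for an already-rectified grayscale pair; parameters are untuned.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    block = 5
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,       # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,     # smoothness penalties (docs' suggested form)
        P2=32 * block * block,
    )

    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype("float32") / 16.0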

Experimentation is ALWAYS worth doing. There is little alternative to learning by doing, because this is a complex field, and all solutions are compromises. Start cheap and work your way up depending on application needs.

  • StereoSGBM yields great results, but its computation time, processor requirements, and latency are an order of magnitude greater than StereoBM or the CUDA algorithms (see the timing sketch below).
  • Low-feature surfaces are prone to large depth errors without pattern/structured-light projectors.
  • Motion accuracy/tolerance typically requires time-syncing two global-shutter cameras (as opposed to rolling-shutter cameras).
  • Pulsed lighting (fluorescent) can cause large errors in ranging; active illumination is better.
  • A well-constrained application can mitigate some of the above.
opalmirror ( 2019-07-29 14:44:20 -0600 )
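
To put numbers on the first bullet for your own hardware, a quick timing sketch (same placeholder inputs and untuned settings as above):

    import time
    import cv2

    # Rough timing of StereoBM vs StereoSGBM on one rectified pair;
    # file names and settings are placeholders, as above.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    bm = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

    for name, matcher in (("StereoBM", bm), ("StereoSGBM", sgbm)):
        t0 = time.perf_counter()
        matcher.compute(left, right)
        print(f"{name}: {time.perf_counter() - t0:.3f} s")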

The StereoSGBM algorithm (and to a lesser degree, StereoBM) is optimized to find common visual elements of locally flat surfaces with occasional locally straight edges, i.e. the surfaces of polygonal solids. A black (no-contrast) object is a different problem: solving for the distance of the outline/edge of a featureless object, which the algorithm is not really designed to be good at. If you instead illuminate the surfaces brightly and project a high-contrast pattern onto them, you will help the algorithms do what they are designed to do well.

opalmirror ( 2019-07-29 14:54:26 -0600 )
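
One way to see this no-contrast problem in practice is to count how much of the disparity map is valid; a mostly invalid map over the black object that improves once a pattern is projected confirms the diagnosis. A sketch, again with placeholder inputs and untuned settings:

    import cv2
    import numpy as np

    # Fraction of valid disparities as a crude texture-quality check;
    # placeholder inputs and untuned settings, as in the sketches above.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = matcher.compute(left, right)   # CV_16S; invalid pixels are negative

    valid = np.count_nonzero(disp >= 0) / disp.size
    print(f"valid disparity coverage: {valid:.1%}")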