
Understanding capability of camera with two lenses: measuring speed & distance?

asked 2016-09-09 18:20:59 -0600 by Crashalot

Assume you have a camera with two lenses and you want to measure the distance a golf ball traveled, up to 1000 feet. Also assume the camera system offers an HFOV of 180 degrees.

1) What are the drawbacks to measuring distance with a single camera (with two lenses)?

2) What are the drawbacks to measuring speed with a camera system like this?

3) In particular, how does a camera system like this compare to a laser device for distance measurement and a radar device for speed?

Accuracy presumably depends in part on resolution, i.e., higher resolution yields more accurate results. So assume acceptable tolerances of +/- 10 feet for distance and +/- 5 mph for speed. If resolution is the only variable, how do you determine the minimum resolution required to achieve these tolerances?


Comments


What do you mean by two lenses?

Tetragramm ( 2016-09-09 18:28:14 -0600 )

Apologies for the poor terminology. My understanding was that if you wanted to measure distance with a camera, you needed a minimum of two cameras in a system (to create a stereo view like human vision), and "lens" was supposed to represent a camera. What's the right way to phrase this?

Crashalot ( 2016-09-09 18:35:47 -0600 )

Ah, yes. It's probably best to phrase it as "system with two cameras". Two lenses could mean a stereo camera, or doing depth from defocus or something like that.

Tetragramm ( 2016-09-09 18:53:01 -0600 )

OK updated question, thanks for helping! Do you know the answer by chance?

Crashalot ( 2016-09-09 19:01:09 -0600 )

Actually, isn't a stereo camera what I mean? I used "lenses" because that's how people described the iPhone 7+ camera system.

Crashalot ( 2016-09-09 19:01:58 -0600 )

1 answer


answered 2016-09-09 19:17:04 -0600 by Tetragramm

So, the answer is, it's complicated.

The big problem with the laser or radar device is pointing it at the ball. Laser beams are, obviously, very narrow, and you'd need to keep the beam on the ball to get position. That radar gun probably won't work on golf balls, so you'd need a better one, and its beam is probably narrow too.

For two cameras, there are three main factors driving accuracy, all assuming you can track the ball. If you can't reliably detect and track the ball in both cameras, you're not going to be able to do it at all.

  1. Are the cameras widely separated? If the cameras are looking at the target from 90 degrees apart, errors in one don't correlate with errors in the other. If they're close together looking in the same direction, a small angular error is amplified into a large range error.
  2. Camera position and angular direction. This comes in two flavors, precision and accuracy. Position mostly matters for accuracy: if your camera positions are off by 3 feet, your estimate will be off by 3 feet. For angular measurements, you need both. Accuracy is a constant bias: imagine drawing two lines that intersect; now, from the start of one, turn your straight edge a bit, and no matter where the lines now intersect, the result is wrong by some amount. Precision is a cone around where you think the line is, within which the object could be. This is where resolution matters.
  3. Algorithms. Some algorithms introduce bias of their own, or can remove noise. Kalman filtering a series of position measurements can reduce the noise and give you an estimate of velocity at the same time (a rough sketch follows this list).
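As a rough illustration of point 3, here is a minimal constant-velocity Kalman filter using OpenCV's cv2.KalmanFilter that smooths a stream of noisy 3D position fixes and produces a velocity estimate as a by-product. The frame rate, noise levels, and toy measurement data below are assumptions made for this sketch, not part of the original answer.

    import numpy as np
    import cv2

    # Constant-velocity Kalman filter over noisy 3D position fixes.
    # State: [x, y, z, vx, vy, vz]; measurement: [x, y, z]. Units are feet.
    dt = 1.0 / 60.0                                   # assumed 60 fps frame interval
    kf = cv2.KalmanFilter(6, 3)
    kf.transitionMatrix = np.eye(6, dtype=np.float32)
    kf.transitionMatrix[:3, 3:] = np.eye(3, dtype=np.float32) * dt
    kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
    kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 4.0   # ~2 ft std, assumed
    kf.errorCovPost = np.eye(6, dtype=np.float32)

    # Toy data: a ball moving at 150 ft/s in x and 80 ft/s in z, plus ~2 ft of noise.
    rng = np.random.default_rng(0)
    truth_v = np.array([150.0, 0.0, 80.0])
    fixes = [truth_v * (i * dt) + rng.normal(0.0, 2.0, 3) for i in range(120)]

    kf.statePost = np.float32([*fixes[0], 0, 0, 0]).reshape(6, 1)
    for xyz in fixes[1:]:
        kf.predict()
        state = kf.correct(np.float32(xyz).reshape(3, 1))

    velocity = state[3:].ravel()
    print("estimated velocity (ft/s):", velocity)
    print("estimated speed (mph):", np.linalg.norm(velocity) * 3600.0 / 5280.0)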

With the right choice of algorithm, you can mitigate 1 and 2 by adding more than two cameras. The noise and biases will (hopefully) cancel out as you add more cameras, giving you a more accurate result.

If you want to play with things, I have an OpenCV contrib module that I am (slowly) working on. Right now it calculates 3D position, or position and velocity, from a series of camera measurements. You input the time, the location of the object in the image, the camera matrix, the distortion matrix, the camera translation and rotation, and the size of the image. With a moving camera (or more than one stationary camera), it can give position or position and velocity.
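For a self-contained flavor of the underlying geometry (this uses stock OpenCV, not the contrib module's API), cv2.triangulatePoints can recover a 3D point from two calibrated views. A minimal sketch, assuming you already have each camera's projection matrix from calibration and the ball's undistorted pixel location in each image:

    import numpy as np
    import cv2

    def triangulate_ball(P1, P2, pt1, pt2):
        """P1, P2: 3x4 projection matrices K @ [R | t] from calibration.
        pt1, pt2: the ball's (x, y) pixel coordinates in each image, undistorted."""
        a = np.float64(pt1).reshape(2, 1)
        b = np.float64(pt2).reshape(2, 1)
        X = cv2.triangulatePoints(P1, P2, a, b)   # homogeneous 4x1 result
        return (X[:3] / X[3]).ravel()             # 3D point in the calibration's world frame

Running this on every frame gives the series of position fixes that a filter like the one sketched above can smooth into position and velocity.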


Comments

Cool, thanks! You're too kind. Will follow your GitHub repo. Tracking the ball just depends on resolution and your detection & tracking algorithms, right? Because you can assume some things to simplify the task, like the camera knowing where the ball is at the start of its flight (i.e., it doesn't need to pick a golf ball out of a scene full of other balls). What defines widely separated: feet, inches, or cm? The idea is to have both cameras in the same system, like the iPhone 7+. What constitutes an "off" camera position, or what do you mean by that?

Crashalot ( 2016-09-09 19:32:32 -0600 )

Right, tracking depends on those, yes.

Widely separated depends ultimately on your angular accuracy and precision, and your range. If your worst possible angle error is measured in micro-radians, you can have a small separation and still do OK. If it's, let's say, 1 degree and you're looking at something a few feet away, then inches are OK. If it's 1 degree and you're looking at something a couple hundred feet away, then you need feet of separation.

If you're using something like an iPhone 7, with lenses about an inch apart, you don't have to worry about angular or position accuracy too much; they're hard-mounted together. But the precision matters a lot. Assuming an FOV of 36 degrees and looking at a golf ball at 300 feet, an error of one pixel is over 250 feet off in depth.

Tetragramm ( 2016-09-09 20:00:09 -0600 )
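A quick sanity check of that figure. The numbers below are assumptions for illustration (1 inch baseline, 36-degree horizontal FOV, a 1920 px wide image, ball at 300 feet, 1 pixel of disparity error); the result lands in the same hundreds-of-feet ballpark:

    import math

    # Back-of-the-envelope stereo depth error; all parameters are assumed.
    baseline_ft = 1.0 / 12.0   # 1 inch between the two lenses
    fov_deg = 36.0             # horizontal field of view
    width_px = 1920            # horizontal resolution
    range_ft = 300.0           # distance to the ball

    focal_px = (width_px / 2.0) / math.tan(math.radians(fov_deg / 2.0))
    disparity_px = focal_px * baseline_ft / range_ft          # ideal disparity at 300 ft
    depth_per_px = range_ft ** 2 / (focal_px * baseline_ft)   # depth change per 1 px of disparity error

    print(f"ideal disparity at {range_ft:.0f} ft: {disparity_px:.2f} px")
    print(f"depth change per pixel of disparity error: ~{depth_per_px:.0f} ft")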

Hmmm, OK, so is the angular precision a byproduct of the camera or the algorithms? How can you minimize angular error? Does a higher FOV help (e.g., 180 degrees)? +/- 250 feet wouldn't help much for measuring the distance of a golf ball. So I can work out the math: in the example where you're off by 1 pixel, did you mean off by 1 degree (i.e., actual FOV is 35/37 degrees, not 36)? Trying to understand the equation so I can determine the specs needed for a tolerance of +/- 10 feet.

Crashalot ( 2016-09-09 20:13:37 -0600 )

Assume 2D to start. Take a right triangle: two cameras and the target, with one of the cameras at the 90-degree corner. Figure out all the distances and angles. Take your camera FOV divided by the number of pixels; if you're wrong by one pixel, you're wrong by that many degrees. Add and subtract that angle from one of the angles in the triangle, and see how the distance from camera to target changes. This is close to the best case. Solve backwards to get your necessary specs.

Tetragramm ( 2016-09-09 21:38:05 -0600 )
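A worked sketch of that triangle exercise, with purely illustrative numbers (1 inch baseline, 36-degree FOV, 1920 px wide image, ball at 300 feet). Camera A looks straight at the ball and sits at the right angle; camera B, one baseline away, measures the angle to the ball. It solves forward (how bad is one pixel of error) and backward (what angular precision a +/- 10 foot tolerance demands):

    import math

    baseline_ft = 1.0 / 12.0   # assumed separation between the cameras
    range_ft = 300.0           # assumed distance from camera A to the ball
    fov_deg = 36.0             # assumed horizontal FOV
    width_px = 1920            # assumed horizontal resolution

    theta = math.atan2(range_ft, baseline_ft)        # true angle measured at camera B
    per_pixel = math.radians(fov_deg) / width_px     # angle covered by one pixel

    # Forward: range estimate if camera B's angle is wrong by one pixel.
    range_est = baseline_ft * math.tan(theta - per_pixel)
    print(f"1 px of angle error: range estimate {range_est:.0f} ft instead of {range_ft:.0f} ft")

    # Backward: angular precision needed to keep the range within +/- 10 ft.
    needed = theta - math.atan2(range_ft - 10.0, baseline_ft)
    print(f"need the angle good to ~{needed * 1e6:.0f} micro-radians "
          f"(~{needed / per_pixel:.2f} px at this FOV and resolution)")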
