
Multi-camera calibration for tracking objects

asked 2017-09-24 07:37:56 -0500

fatben

Hello everyone,

I have some questions for more experienced OpenCV users about building a multi-camera tracking program. To quickly present the problem: I want to track multiple objects with multiple cameras. The result I want to achieve is more or less something like this:

Eventually I came to the conclusion that I want to use a Kalman filter for tracking. The issues I want to ask about are:

  1. Is there a way to calibrate multiple cameras based on a dataset of videos like those in the video link? Can it be done somehow automatically? I know you can calibrate a camera using a chessboard, but that doesn't apply here since there's no chessboard in the videos. There's also something like this: but I guess it has the very same disadvantage.
  2. What would be the most efficient way to approach tracking? Should I use a Kalman filter for each view and try to merge the individual results, or somehow reconstruct the objects in 3D and then apply the filter?

Any suggestions will be welcomed. Thanks.



So do you control the cameras? Could you carry a chessboard into the scene and calibrate them? Can you measure the scene? Like, literally go take a tape-measure and figure out how big things in the scene are?

Tetragramm ( 2017-09-24 17:36:50 -0500 )

I want to use downloaded videos, so the answer is no. That's what I meant by saying a chessboard isn't an option here: I can't easily put one in the scene.

fatben ( 2017-09-25 06:29:52 -0500 )

1 answer


answered 2017-09-25 19:59:31 -0500

Tetragramm

Without a known-size object or calibration target in the videos, you'll have to use a Structure from Motion (SfM) algorithm. SfM takes unknown cameras and an unknown scene and solves for the 3D locations of both.

Part of the output would be the location and orientation of each camera (the camera extrinsics), the camera matrix and distortion coefficients (the camera intrinsics), and the 3D locations of your calibration points within the space. From there, you can do your tracking as normal.
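To make the intrinsics/extrinsics split concrete, the pinhole projection that relates them can be sketched in a few lines of NumPy (all numbers here are illustrative, not from any real calibration):

```python
import numpy as np

# Intrinsics: focal lengths and principal point (illustrative values)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: rotation (identity here) and translation of the camera
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])  # scene origin is 5 units in front of the camera

def project(X, K, R, t):
    """Project a 3D world point X to pixel coordinates."""
    x_cam = R @ X + t       # world frame -> camera frame
    x_img = K @ x_cam       # camera frame -> homogeneous image coordinates
    return x_img[:2] / x_img[2]

print(project(np.array([0.0, 0.0, 0.0]), K, R, t))  # world origin -> principal point
```

SfM solves the inverse problem: given many pixel observations, it recovers K, R, t, and the 3D points simultaneously.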

You probably want to merge the results and Kalman filter the resulting 3D track. An alternative is to use a Kalman filter to do the actual merging, which gives you an already-filtered 3D result, but those filters are much more complicated, especially if you already have an algorithm to do the 3D triangulation.
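As a sketch of the first option, a constant-velocity Kalman filter run over the merged 3D track looks like this in plain NumPy (OpenCV's cv2.KalmanFilter implements the same predict/correct cycle; the noise values here are made-up placeholders you'd tune for your data):

```python
import numpy as np

dt = 1.0
# State: [x, y, z, vx, vy, vz]; constant-velocity transition model
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we only measure position

Q  = 1e-4 * np.eye(6)   # process noise covariance (assumed)
Rm = 1e-2 * np.eye(3)   # measurement noise covariance (assumed)

x = np.zeros(6)         # initial state estimate
P = np.eye(6)           # initial state covariance

def step(x, P, z):
    # Predict forward one frame
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the triangulated 3D measurement z
    S  = H @ P @ H.T + Rm
    Kg = P @ H.T @ np.linalg.inv(S)
    x  = x + Kg @ (z - H @ x)
    P  = (np.eye(6) - Kg @ H) @ P
    return x, P

# Feed a point moving along +x at 1 unit per frame
for i in range(20):
    x, P = step(x, P, np.array([float(i), 0.0, 0.0]))
```

After a handful of frames the state settles onto the true position and velocity, and the covariance P tells you how much to trust the track between detections.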

On that note, THIS is the mapping3d contrib module I'm working on, which contains an algorithm for getting the 3D location (or 3D location and velocity) of a point from multiple cameras. That much works, and I'm also looking for feedback and additional algorithms to include. So if you find something that fits and would help you, I might be able to add it.
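For reference, the standard least-squares (DLT) triangulation of a point seen by several calibrated cameras is short enough to sketch in NumPy; cv2.triangulatePoints does the same for the two-view case. The camera matrices and the point below are synthetic:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Least-squares 3D point from two or more views via the DLT.

    proj_mats: list of 3x4 projection matrices P = K [R|t]
    pixels:    list of (u, v) observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each observation contributes two linear constraints on X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space direction of A
    return X[:3] / X[3]     # dehomogenize

# Two synthetic cameras with the same intrinsics, offset by a 1-unit baseline
K  = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.array([[ 0.0], [0.0], [5.0]])])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [5.0]])])

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.5, -0.2, 1.0])
X_est  = triangulate([P1, P2], [proj(P1, X_true), proj(P2, X_true)])
```

With exact observations the recovered point matches the true one to numerical precision; with noisy detections you would feed X_est into the Kalman filter as the per-frame measurement.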

