Multi-camera calibration for tracking objects
Hello everyone,
I have some questions for more experienced OpenCV users about building a multi-camera tracking program. To quickly present the problem: I want to track multiple objects with multiple cameras. The result I want to achieve is more or less something like this: https://www.youtube.com/watch?v=7Dy9c...
Eventually I came to the conclusion that I want to use a Kalman filter for tracking. The issues I want to ask about are:
- Is there a way to calibrate multiple cameras based on a dataset of videos like the one linked above? Can it be done automatically somehow? I know you can calibrate a camera using a chessboard (http://docs.opencv.org/3.3.0/dc/d43/t..., sketched after this list for reference), but that doesn't apply here since there is no chessboard in the video. There is also something like this: http://docs.opencv.org/master/d2/d1c/..., but I guess it has the very same disadvantage.
- What would be the most efficient way to approach the tracking itself? Should I run a Kalman filter for each view and try to merge the individual results, or somehow reconstruct the objects in 3D first and then apply the filter? (A minimal per-view sketch is below the list.)
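For reference, the chessboard procedure from the first linked tutorial looks roughly like this. The board size, square size, and file names here are placeholders I made up:

```python
import numpy as np
import cv2

pattern_size = (9, 6)   # inner corners per row/column (placeholder)
square_size = 25.0      # mm, measured on the printed board (placeholder)

# 3D coordinates of the corners in the board's own frame (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
for fname in ["view_%02d.png" % i for i in range(20)]:   # placeholder names
    img = cv2.imread(fname)
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics + distortion for this one camera
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```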
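The per-view variant I have in mind would be something like this minimal sketch: a constant-velocity model where the state is [x, y, vx, vy] and the detector supplies [x, y] measurements. The noise covariances are guesses that would need tuning:

```python
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)   # 4 state dims, 2 measurement dims
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2   # placeholder values
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_step(detection_xy):
    """Predict, then correct with a detection if one is available."""
    prediction = kf.predict()
    if detection_xy is not None:
        measurement = np.array(detection_xy, np.float32).reshape(2, 1)
        kf.correct(measurement)
    return prediction[:2].ravel()   # smoothed (x, y) estimate
```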
Any suggestions are welcome. Thanks.
So do you control the cameras? Could you carry a chessboard into the scene and calibrate them? Can you measure the scene? Like, literally go take a tape measure and figure out how big things in the scene are? If you can, a few measured landmarks are enough to recover each camera's pose, roughly as sketched below.
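Here is a rough sketch of that idea using cv2.solvePnP. The 3D landmark coordinates, the clicked pixel positions, the frame size, and the focal-length guess are all placeholder assumptions, not values from your videos:

```python
import numpy as np
import cv2

# Measured 3D landmarks in metres (e.g. corners of a court), placeholder values
object_pts = np.array([[0.0,  0.0,  0],
                       [15.0, 0.0,  0],
                       [15.0, 28.0, 0],
                       [0.0,  28.0, 0]], np.float32)

# Corresponding pixel positions clicked by hand in one frame (placeholders)
image_pts = np.array([[102, 540],
                      [810, 505],
                      [1180, 690],
                      [240, 820]], np.float32)

w, h = 1280, 720            # frame size (placeholder)
f = 1.2 * w                 # rough focal-length guess if intrinsics are unknown
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]], np.float32)

# Recover this camera's rotation and translation relative to the scene
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print("camera pose:", rvec.ravel(), tvec.ravel())
```

Repeating this per camera against the same measured landmarks would give you all the poses in one common scene frame, which is what you need to merge tracks or triangulate in 3D.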
I want to use downloaded videos, so the answer is no. That's what I meant when I said the chessboard isn't an option here: I can't easily put one into the scene.