
OpenCV Motion Tracking in Broadcast

asked 2016-07-04 14:13:29 -0600

Hello, I am part of a development team on a new project for our television studio. We are trying to build an LVI (live video insertion) system: the goal is to insert the yellow first-down line for American football broadcasts. This technology is advanced, and is currently done in the following way: https://www.youtube.com/watch?v=1Oqm6...

We want to simplify this process greatly. We are looking to motion-track points on the field: things like the yardage markers, the numbers, and other features that stand out.

We are thinking of doing it in the following way (a rough sketch follows the list):

  1. Video is taken into the computer from a Decklink card (this is simply a card that can take input and output from an SDI video feed)
  2. The image is motion tracked
  3. A 3D model of the field is layered on top of the video source and kept in place by the tracker
  4. The graphics can be played inside this layer, out of the Decklink card and into the main switcher, for use on air.
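
To make the middle steps concrete, here is a rough, hypothetical sketch of the kind of OpenCV loop we are picturing. The Decklink capture is stubbed out with cv2.VideoCapture and first_down_layer.png is a made-up pre-rendered graphics file; the real I/O would go through the card's SDK.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # placeholder; the real feed comes from the Decklink SDK
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# pick distinctive points on the field (yard numbers, hash marks, logos, ...)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=20)

overlay = cv2.imread("first_down_layer.png")   # hypothetical pre-rendered graphics layer
H_total = np.eye(3)                            # accumulated first-frame-to-current-frame mapping

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # follow last frame's points with sparse optical flow
    # (a real system would also re-detect points as they get lost)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = new_pts[status.flatten() == 1]

    # estimate this frame's camera motion and accumulate it
    if len(good_new) >= 4:
        H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
        if H is not None:
            H_total = H @ H_total

    # warp the graphics so they stay locked to the field, then composite
    warped = cv2.warpPerspective(overlay, H_total,
                                 (frame.shape[1], frame.shape[0]))
    mask = warped.any(axis=2)
    frame[mask] = warped[mask]

    cv2.imshow("preview", frame)               # on air this would go back out the Decklink card
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```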

Can OpenCV do everything I am looking for here? If so, how could I use a 3D system to layer the graphics with OpenCV?

Please keep in mind that this is a live show, and everything must run in real time (60 fps).

Thank you,

Jack


1 answer


answered 2016-07-05 02:17:11 -0600

berak

updated 2016-07-05 02:20:25 -0600

most of it is probably doable, but you're way off the well-trodden path here; don't expect any ready-mades.

  1. opencv has support for usb/firewire cams, but i'm afraid you'll need the vendor's SDK for your grabber card (see the first sketch after this list for getting a grabbed frame into opencv).

  2. sure. once you're there, this might be the "easy part" of your endeavour (second sketch below).

  3. yes, you can use opengl combined with opencv (but again, no readymades, you'll need your own 3d engine, model loading, ways to supply & retrieve textures, etc.) - the third sketch below shows one way to get a camera pose for such an engine.

  4. you're absolutely on your own here.

  5. idk what other software you're already using - maybe it's far easier to incorporate opencv into e.g. an adobe plugin than doing all the "wiring" yourself.
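
For point 1, a minimal sketch, assuming the Decklink SDK hands you a raw BGRA frame buffer at 1080p (that layout is an assumption, check the SDK docs). Once the buffer is wrapped as a numpy array, the rest of OpenCV does not care where it came from.

```python
import numpy as np
import cv2

def sdk_frame_to_bgr(raw_bytes, width=1920, height=1080):
    """Wrap a raw BGRA buffer from the grabber SDK as an OpenCV-friendly image."""
    bgra = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 4)
    # drop the alpha channel so downstream code sees a plain BGR frame
    return cv2.cvtColor(bgra, cv2.COLOR_BGRA2BGR)
```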
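
For point 2, a sketch of the tracking step on its own, using sparse optical flow plus a RANSAC homography (the same calls as the sketch in the question). The one practical wrinkle added here: when too many points die off (pans, players covering the markers), re-detect features instead of letting the track starve. The threshold of 50 points is arbitrary.

```python
import cv2
import numpy as np

def track_step(prev_gray, gray, pts):
    """Return the frame-to-frame homography and the points to track next frame."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = new_pts[status.flatten() == 1]

    H = None
    if len(good_new) >= 4:
        H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)

    # reseed when the track thins out (arbitrary threshold)
    if len(good_new) < 50:
        reseeded = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=20)
        if reseeded is not None:
            good_new = reseeded

    return H, good_new.reshape(-1, 1, 2)
```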
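
For point 3: whichever 3d engine you end up with, it mostly needs a camera pose per frame. A hedged sketch using cv2.solvePnP with four known field landmarks - the landmark choice, pixel positions and intrinsics below are all made up for illustration, a real setup would calibrate the broadcast camera.

```python
import numpy as np
import cv2

# 3D positions of four field landmarks in yards (z = 0 on the ground plane)
field_points = np.array([[0,  0,    0],
                         [10, 0,    0],
                         [10, 53.3, 0],
                         [0,  53.3, 0]], dtype=np.float32)

# where the tracker currently sees those landmarks in the frame (pixels, made up)
image_points = np.array([[412, 615],
                         [961, 598],
                         [1190, 244],
                         [233, 270]], dtype=np.float32)

# rough intrinsics for a 1080p camera; replace with calibrated values
w, h = 1920, 1080
K = np.array([[1800, 0,    w / 2],
              [0,    1800, h / 2],
              [0,    0,    1]], dtype=np.float64)
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(field_points, image_points, K, dist)
if ok:
    # rvec/tvec define the virtual camera; hand these to the 3d engine so the
    # field model lines up with the live video
    R, _ = cv2.Rodrigues(rvec)
    print("rotation:\n", R, "\ntranslation:\n", tvec.ravel())
```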


Comments

There is an SDK for the Decklink configuration, and I don't see a problem with incorporating that. Could an engine like Blender, for instance, be able to incorporate this? It was suggested by a user on the Blackmagic Design Forum: "I would propose you are best doing your image analysis on a separate machine, and simply streaming vertex data across the network into a scene which is simulating 3D in a Flash 2D layer, or an HTML5 Canvas Scene using something like three.js. If you have a mixer you can then Key the Graphics on and off, rather than using CasparCG as an in-the-chain device.

This nicely separates the project into 2 parts, (1) the Graphics overlay layer, and (2) the image analysis process." Could this be an option? (A rough sketch of the streaming side follows these comments.)

jackreynolds ( 2016-07-05 09:28:57 -0600 )

In addition (sorry, character limit): this software is currently only used professionally, and it is a standalone system. We are limited on resources as far as computational power goes; however, we have enough for OpenCV. As for incorporating it into something like an Adobe program: 1. there isn't an Adobe program which can do the output we need, 2. there is no 3D engine in Adobe software, and 3. I have never seen anything from Adobe move quickly enough to be used in real time.

jackreynolds ( 2016-07-05 09:36:28 -0600 )
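
On the "stream vertex data across the network" suggestion quoted above, the analysis side could be as simple as pushing the tracked data over UDP as JSON for the graphics machine (three.js or otherwise) to consume. A rough sketch; the host, port and message shape are all made up.

```python
import json
import socket
import numpy as np

GRAPHICS_HOST = ("192.168.1.50", 9999)   # hypothetical graphics machine
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_tracking(frame_number, H):
    """Send the 3x3 field-to-frame homography for one frame as a JSON datagram."""
    msg = {"frame": frame_number, "homography": np.asarray(H).round(5).tolist()}
    sock.sendto(json.dumps(msg).encode("utf-8"), GRAPHICS_HOST)

# e.g. inside the capture loop on the analysis machine:
# send_tracking(n, H_total)
```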
