Hi, I use Python/OpenCV to read video frames from a live RTSP IP camera. Unlike reading from an offline MPEG file, the encoded video frames arrive in real time. The problem is that if the frame-capture call does not run fast enough (say I run it on a weak ARM board while the camera has a high FPS), the system misses the next video frame, which as far as I know is catastrophic, because it prevents proper decoding of the MPEG stream (in MPEG coding, encoded frames depend on other frames). In that case, the next frame-capture call returns an error. How can I overcome this problem in OpenCV?
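For reference, my capture loop is roughly the sketch below (the RTSP URL is a placeholder, and the per-frame processing is where the time is actually lost):

```python
import cv2

# Placeholder URL; the real camera address and stream path differ.
cap = cv2.VideoCapture("rtsp://192.168.1.10:554/stream")

while True:
    # If one iteration of this loop takes longer than a frame interval,
    # frames from the live stream are missed and read() starts failing.
    ok, frame = cap.read()
    if not ok:
        print("frame capture failed")
        break
    # ... slow per-frame processing happens here ...

cap.release()
```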
My first idea is that if OpenCV can give access to the low-level MPEG packets (without decoding them), I could buffer them inside my program without the fear of missing them, and then decode them slowly (assume an effectively infinite buffer). Is it possible to do this in OpenCV?
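To make the idea concrete, this is the kind of structure I have in mind: a fast producer that only grabs encoded packets into a queue, and a slow consumer that decodes them at its own pace. The sketch uses PyAV purely to illustrate the packet-level access, since I don't know whether OpenCV exposes anything like it; the URL is again a placeholder.

```python
import threading
import queue
import av  # PyAV, used here only to illustrate packet-level buffering

packet_queue = queue.Queue()  # unbounded, stands in for the "infinite buffer"

def producer():
    # Demuxing only copies encoded packets; no decoding happens here,
    # so this loop should be cheap enough to keep up with the camera.
    container = av.open("rtsp://192.168.1.10:554/stream",
                        options={"rtsp_transport": "tcp"})
    stream = container.streams.video[0]
    for packet in container.demux(stream):
        packet_queue.put(packet)

threading.Thread(target=producer, daemon=True).start()

while True:
    packet = packet_queue.get()
    # Decoding happens here, as slowly as the board allows.
    for frame in packet.decode():
        img = frame.to_ndarray(format="bgr24")  # usable as an OpenCV image
        # ... slow per-frame processing ...
```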
If OpenCV does not give access to the raw MPEG packets, is there any other way to do it, for example with FFmpeg in C++, and then pass the decoded frames to Python/OpenCV somehow?
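One way I imagine the "pass decoded frames to Python" part could look is the sketch below, which uses the ffmpeg command-line tool rather than its C++ libraries and pipes raw BGR frames into Python; the width, height, and URL are assumptions about my camera, not known values.

```python
import subprocess
import numpy as np
import cv2

WIDTH, HEIGHT = 1280, 720  # assumed stream geometry
URL = "rtsp://192.168.1.10:554/stream"  # placeholder

# ffmpeg handles the RTSP demuxing and MPEG decoding,
# writing raw BGR24 frames to its stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-rtsp_transport", "tcp", "-i", URL,
     "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdout=subprocess.PIPE,
)

frame_size = WIDTH * HEIGHT * 3
while True:
    raw = proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break  # stream ended or ffmpeg exited
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

proc.terminate()
```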