[Python] VideoCapture .read() is way too CPU intensive

So I'm trying to process a live 720p@30FPS video stream from a webcam on a 1.7GHz quad-core ARM CPU. Right now, the only thing holding me back is the .read() call itself. In my loop, I have nothing but "camera.read()" (not even assigned to any variables) and some time.time() counters to measure the framerate. Nothing else is happening, and yet this maxes out a CPU core. Best case, I get 21FPS.

This doesn't make sense to me - how is it so expensive to grab a frame from the camera and literally do nothing with it? Can anything be done about this?
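For reference, the loop in question boils down to something like this (a minimal sketch of the measurement, not the exact code; the device index and frame count are placeholders):

```python
import time

def benchmark(read, n_frames=100):
    """Call read() n_frames times and return the achieved frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        read()  # result deliberately discarded, as in the question
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# With a real camera (requires opencv-python; device index 0 is an assumption):
# import cv2
# camera = cv2.VideoCapture(0)
# print("FPS:", benchmark(camera.read))
# camera.release()
```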

EDIT: I've just realized that this problem doesn't appear to be specific to VideoCapture or read() - calling imread() on a 720p JPEG is just as slow. This leads me to believe the slowdown lies in the decoding step.

As an experiment, I converted the same JPEG image to BMP and PNG and ran the same imread() loop on each. The PNG dropped to around 14FPS, while the BMP ran in the mid-60s.

My webcam only supports MJPG at 720p@30FPS. Is there any way at all I can speed this up? I was thinking maybe PIL could help: assuming PIL can read data into Python faster than 30FPS, I could use it to read from the MJPG stream and then pass frames off to OpenCV. I assume I'd have to do this in two separate threads. Would this idea work, or is it not worth the time?

EDIT 2: I tried using PIL along with the multiprocessing library, but they only slow things down further.