OpenCV webcam stream slows down alongside caffe prediction

asked 2016-10-17 13:56:04 -0600

bfc_opencv

updated 2016-10-18 00:58:14 -0600

I'm attempting to use caffe and python to do real-time image classification. I'm using OpenCV to stream from my webcam in one process and, in a separate process, using caffe to perform image classification on the frames pulled from the webcam. The result of the classification is then passed back to the main process to caption the webcam stream.

The problem is that even though I have an NVIDIA GPU and am performing the caffe predictions on the GPU, the main process gets slowed down. Without any predictions running, my webcam stream runs at 30 fps; with the predictions, it runs at 15 fps at best.
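For reference, fps here just means frames grabbed divided by elapsed time. A minimal sketch of that measurement, with no classification running at all (the camera index 0 and the 100-frame sample are placeholders, not my exact code):

import time
import cv2

vc = cv2.VideoCapture(0)
start = time.time()
grabbed = 0
while grabbed < 100:
    rval, frame = vc.read()
    if not rval:
        break
    grabbed += 1
elapsed = time.time() - start
print("approx. fps: %.1f" % (grabbed / elapsed))
vc.release()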

Even when I run the two components as separate python programs (i.e. pull frames from the webcam in one script and run a separate script doing caffe predictions in an infinite loop), I still get a slowdown in OpenCV's ability to grab webcam frames. I've run the code in C++ with multithreading and experienced exactly the same result.

I've verified that caffe is indeed using the GPU when performing the predictions, and that neither my GPU nor its memory is maxing out. I've also verified that my CPU cores are not getting maxed out at any point during the program. I'm wondering if I'm doing something wrong, or if there is no way to keep these two processes truly separate. Any advice is appreciated. Here is my code for reference:

import cv2
import caffe
import multiprocessing

class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue
        #other initialization stuff

    def run(self):
        caffe.set_mode_gpu()
        caffe.set_device(0)
        #Load caffe net -- code omitted
        while True:
            image = self.task_queue.get()
            #crop image -- code omitted
            text = net.predict(image)
            self.result_queue.put(text)

        return

tasks = multiprocessing.Queue()
results = multiprocessing.Queue()
consumer = Consumer(tasks,results)
consumer.start()

#Creating window and starting video capture from camera
cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)
#Try to get the first frame
if vc.isOpened():
    rval, frame = vc.read()
    frame_copy = frame.copy()
else:
    rval = False
task_empty = True
while rval:
    if task_empty:
        tasks.put(frame_copy)
        task_empty = False
    if not results.empty():
        text = results.get()
        #Add text to frame (position, font and color here are placeholder values)
        cv2.putText(frame, text, (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        task_empty = True

    #Showing the frame with all the applied modifications
    cv2.imshow("preview", frame)

    #Getting next frame from camera
    rval, frame = vc.read()
    if rval:
        frame_copy[:] = frame
    #Getting keyboard input 
    key = cv2.waitKey(1)
    #exit on ESC
    if key == 27:
        break

I've tried testing the code skeleton by passing dummy text from the consumer process to the main one, and got no slowdown at all. I'm not sure why running the prediction itself makes OpenCV slow to grab webcam frames. Here's that test code below:

class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue
        #other initialization stuff

    def run(self):
        caffe.set_mode_gpu()
        caffe.set_device(0)
        #Load caffe net -- code omitted
        #Same loop as above, but pass back dummy text instead of calling net.predict
        while True:
            image = self.task_queue.get()
            self.result_queue.put("dummy text")

        return

Comments

If you read a group of frames and just do the predictions, how long does that take? I'll bet the GPU predictions are slower than the frame rate of the camera, so it's backing up.

Tetragramm ( 2016-10-17 18:10:18 -0600 )
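A rough sketch of the measurement suggested here: grab a batch of frames first, then time only the predictions. vc and net stand for the already-opened capture and the already-loaded caffe classifier from the question; the batch size of 30 is an arbitrary choice.

import time

frames = []
while len(frames) < 30:
    rval, frame = vc.read()
    if not rval:
        break
    frames.append(frame)

start = time.time()
for f in frames:
    net.predict(f)
per_frame = (time.time() - start) / len(frames)
print("seconds per prediction: %.3f" % per_frame)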

Not sure that would make sense in this context, since I'm trying to do object classification in real time. The GPU predictions are slower than the frame rate, but that's why I'm doing them in a separate python process that shouldn't really be using any CPU at all. I'm OK with lag between the caption on the stream and what the stream is showing, since the predictions are slower. But I don't want the webcam stream itself to lag, which is what seems to be happening. It happens even when I run the two programs separately.

bfc_opencv ( 2016-10-17 18:32:57 -0600 )

Oh, sorry. I misread the indentation.

Hmm. Is it a steady 15 fps, or does it stutter on the transfers? In other words, if you time the loop with read(), is it a consistent time, or does one iteration in however many take extra long?

Tetragramm ( 2016-10-17 19:43:47 -0600 )
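One way to check whether the capture is steady or stutters, per this suggestion: time each vc.read() call individually and look at the spread. Self-contained sketch; the camera index 0 and the 200-read sample are assumptions.

import time
import cv2

vc = cv2.VideoCapture(0)
durations = []
for _ in range(200):
    t0 = time.time()
    rval, frame = vc.read()
    if not rval:
        break
    durations.append(time.time() - t0)
vc.release()

print("min %.4f  max %.4f  mean %.4f" %
      (min(durations), max(durations), sum(durations) / len(durations)))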

It's not consistent. The fps varies between roughly 13 and 18; I'd say the average is around 15.

bfc_opencv ( 2016-10-17 22:05:59 -0600 )

Can you time and see if most of the effort comes from the get or the put? That will help you figure out which is the bottleneck.

Tetragramm ( 2016-10-17 23:07:09 -0600 )
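One way to estimate the cost of the put and the get on their own: push a frame-sized numpy array through a multiprocessing.Queue and read it back in the same process, which mostly measures the pickling/transfer overhead per frame. Self-contained sketch; the 640x480x3 frame shape is an assumption.

import multiprocessing
import time

import numpy as np

q = multiprocessing.Queue()
frame = np.zeros((480, 640, 3), dtype=np.uint8)

t0 = time.time()
q.put(frame)
item = q.get()
print("round trip through the queue: %.4f s" % (time.time() - t0))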

Well I passed dummy text from the consumer process back to the main process (in lieu of doing the actual neural net prediction) and experienced no slowdown. I've added that test code to the answer.

bfc_opencv ( 2016-10-18 00:56:59 -0600 )

I timed both the get and put calls, and both are very close to 0; I don't think that's the bottleneck. A lot of the time comes from waitKey(), but I know much of the rendering for imshow() happens there. Any idea what might be going on here? I've been stuck on this for months.

bfc_opencv ( 2016-10-19 00:44:48 -0600 )
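A small sketch for separating the display cost from the capture cost: time imshow plus waitKey for a single frame. Self-contained; the camera index 0 and the window name are assumptions.

import time
import cv2

vc = cv2.VideoCapture(0)
rval, frame = vc.read()
if rval:
    t0 = time.time()
    cv2.imshow("preview", frame)
    cv2.waitKey(1)
    print("imshow + waitKey: %.4f s" % (time.time() - t0))
vc.release()
cv2.destroyAllWindows()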

Do you have several CPU cores, or could blocking in one thread hold up the other?

Tetragramm ( 2016-10-19 07:32:24 -0600 )

I have 2 cores, 4 with hyperthreading (an i7-6500U CPU). I don't think they're holding each other up, since I'm using multiprocessing instead of multithreading. I also don't believe I'm running any blocking methods.

bfc_opencv ( 2016-10-19 12:19:34 -0600 )

I'm not sure it's the structure of the code itself, because when I ran one program that just streamed from the webcam and, in a separate window, another that ran a caffe prediction in a loop over and over, I still got a slowdown in the webcam stream.

bfc_opencv ( 2016-10-19 12:21:19 -0600 )