
waitKey(1) timing issues causing frame rate slow down - fix?

asked 2015-01-11 18:01:10 -0500 by Lamar Latrell

updated 2015-01-12 22:27:26 -0500

Hello,

I am using a camera that can run at 120 fps. I am accessing its frames via its SDK and then formatting them into OpenCV Mat data types for display.

At 120 fps I am hoping the while(1) loop will achieve a period of 8.3 ms, i.e. the camera fps will be the bottleneck. However, currently I am only achieving around a 13 ms loop (~70 fps).

Using a timer with microsecond resolution, I have timed the components of the while(1) loop and found that the bottleneck is actually the waitKey(1); line.

If I'm not mistaken, I should expect a 1 ms delay here? Instead I see around 13 ms spent on this line.

i.e. waitKey(1); is the bottleneck.

Also of note: if I try waitKey(200); I will see a 200 ms delay, but anything lower than around waitKey(20); will not give a delay that reflects the input parameter:

waitKey(1) gives 12 ms

waitKey(5), waitKey(10) and waitKey(15) all give a 12 ms delay

waitKey(20) gives 26 ms

waitKey(40) gives 42 ms

etc.

It would seem that everything else in the loop takes around 2 ms, which would leave me 6 ms or so for other OpenCV processing while still keeping up with 120 fps - this is my goal.

How can I either 'fix' or avoid waitKey()? Or perhaps at least understand where I am going wrong conceptually :)

The code follows; hopefully it's clear considering the camera's SDK:

#include "cameralibrary.h"
#include "supportcode.h"  

#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;
using namespace CameraLibrary;

int main(int argc, char** argv){ 

    CameraManager::X().WaitForInitialization();
    Camera *camera = CameraManager::X().GetCamera();
    camera->SetVideoType(Core::GrayscaleMode);  

    int cameraWidth  = camera->Width();
    int cameraHeight = camera->Height();

    Mat imageCV(cv::Size(cameraWidth, cameraHeight), CV_8UC1);
    const int BITSPERPIXEL = 8;

    Frame *frame;

    namedWindow("frame", WINDOW_AUTOSIZE);

    camera->Start();

    while(1){

        //maximum of 2 micro seconds:
        frame = camera->GetLatestFrame();   

        //maximum of 60 micro seconds:
        if(frame){                      
            frame->Rasterize(cameraWidth, cameraHeight, imageCV.step, BITSPERPIXEL, imageCV.data);
        }

        //maximum of 0.75 ms
        imshow("frame", imageCV); 

        // **PROBLEM HERE**  12 ms on average
        waitKey(1);                     

        //maximum of 2 micro seconds:
        frame->Release();                   

    }

    camera->Release();
    CameraManager::X().Shutdown();
    return 0;
}

Comments

Is it anything to do with the refresh rate of my screen? (60 Hz)

I don't need the image on screen to update at 120 fps - quarter rate/30 fps would be fine. But I do need the while(1) loop to run at 120 Hz, i.e. all 120 images per second must be accessible to the code.

I don't really fancy every 4th frame forcing a 13 ms period, so I'm still keen to find out how to get waitKey() to run as fast as its parameter indicates it should (?)

Lamar Latrell ( 2015-01-11 19:44:22 -0500 )

2 answers


answered 2015-01-12 22:46:20 -0500 by Lamar Latrell

updated 2015-01-14 22:48:49 -0500

The forum/browser software deleted a more detailed answer, but the following code is working with threads :)

I get 120 fps in fastLoop and 63 fps in viewLoop. Adding processing to fastLoop shows a gradual slowdown; once fastLoop drops to 63 fps they are both at 63 and the threads are then redundant.

It's pretty bare, but it worked straight away, which was surprising. Maybe someone can point out issues with it?

Quick and hopefully correct explanation: mtx.lock() ensures that only one thread at a time can execute the code between lock() and unlock(), so the two threads never read and write the Mat's memory concurrently. Note that it does not grant the fastLoop thread priority over the viewLoop thread - a mutex only enforces mutual exclusion; whichever thread locks first simply makes the other wait.

//#include something specific to your camera ...

#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <thread>  
#include <mutex>          

using namespace cv;
using namespace std;

mutex mtx;   // shared by both threads - a mutex local to one function would not protect anything

void fastLoopCode(Mat& frameCV) {

        //initialize your camera etc.

        Frame *frame;      //direct data from camera...

        while(1){
                frame = camera->GetLatestFrame();      

                if(frame){                                     

                        mtx.lock();
                        //whatever code required to turn the camera data into an openCV Mat
                        mtx.unlock();

                        frame->Release();
                }
        }
        camera->Release();
}

void viewLoopCode(Mat& frameCV) {

        namedWindow("frame", WINDOW_AUTOSIZE);

        Mat displayCopy;

        while(1){

                mtx.lock();                   // don't read while fastLoop is writing
                frameCV.copyTo(displayCopy);  // quick copy, then release the lock
                mtx.unlock();

                imshow("frame", displayCopy); // the slow drawing happens outside the lock
                waitKey(1);
        }
}

int main(int argc, char** argv){

        Mat frameCV(Size(CAMERA_WIDTH, CAMERA_HEIGHT), CV_8UC1);

        // std::thread copies its arguments by default; wrap the Mat in
        // std::ref so both threads share the same object:
        thread fastLoop (fastLoopCode, std::ref(frameCV));
        thread viewLoop (viewLoopCode, std::ref(frameCV));

        fastLoop.join();
        viewLoop.join();
}

Comments


Maybe it's not the done thing, accepting your own answer - but it is... the answer :) I need 20+ points or something, though??

Lamar Latrell ( 2015-01-13 20:25:58 -0500 )

You have it now :)

StevenPuttemans ( 2015-01-14 02:13:42 -0500 )

Thanks! Also, just edited out a lot of my camera specifics and made it slightly more general ...

Lamar Latrell ( 2015-01-14 22:33:05 -0500 )
answered 2015-01-12 03:34:22 -0500 by berak

updated 2015-01-12 10:25:40 -0500 by Doombot

waitKey() contains the whole message loop for the window.

When you call imshow(), only a Mat header is copied; the actual blitting/drawing gets triggered later, inside waitKey(). (That's why imshow() is so fast and waitKey() so slow in your measurements above.)

Also, when your OS is doing something heavy and stalls your main thread for a moment, it will show up in this place.

Calling it just 'waitKey()' might be a bit of a misnomer ...

Still, you do not have to draw the image on every iteration of the loop, and you don't have to call waitKey() each time:

int frameCounter = 0;
while(1){
    frame = camera->GetLatestFrame();
    if(frame){
        frame->Rasterize(cameraWidth, cameraHeight, imageCV.step, BITSPERPIXEL, imageCV.data);
    }
    if (++frameCounter % 10 == 0) {
        imshow("V120", imageCV);
        waitKey(1);
    }
    frame->Release();
}

Comments


Thanks, it's something I had considered. I alluded to this solution in my comment above, except it would have been %4. It would mean a hiccup in loop timing precision every n frames, which wouldn't be too bad for my application. Still, it doesn't sit well with me - what if it did affect my application... :)

I'm thinking a solution might be to thread the processing loop to run at 120 fps with no imshow/waitKey, and every n frames signal to main (or vice versa) that a frame is ready for display. Got to read more about mutexes, and/or the timing hit of copying a frame to another memory allocation for the display function... never done threads :)

Yes, waitKey is a misnomer (!) and not only that, but the input parameter isn't linearly mapped to the behaviour it ostensibly represents.

Lamar Latrell ( 2015-01-12 15:12:09 -0500 )

I ran out of space... Also wanted to say thanks for the detailed answer! It makes more sense now.

Although it's not the greatest of 'news', I guess, it means I can move on knowing there wasn't some hidden and perhaps more direct solution :)

Lamar Latrell ( 2015-01-12 15:15:42 -0500 )

^^ Yes, I know, you already considered the %something solution. That was more for the rest of the world (as this is a quite common question).

Again, if you find a nice way using multithreading, don't be shy to add another answer!

berak ( 2015-01-12 15:22:08 -0500 )

Let me add to this answer that the visualization functions in OpenCV are really only there for debugging purposes. If you need to get rid of the 1 ms of processing (which is actually a lot more, due to the redrawing calls), you should use a native library for visualization.

StevenPuttemans ( 2015-01-14 02:12:26 -0500 )

Well, using both DLIB and OpenGL gave similar draw periods - although I must admit I was pretty much in copy-and-paste mode for most of that 'development' period :) Maybe I was neglecting some part of the process that weighed down the loop...

Lamar Latrell ( 2015-01-14 22:31:59 -0500 )
Stats

Stats

Asked: 2015-01-11 18:01:10 -0500

Seen: 12,399 times

Last updated: Jan 14 '15