Ask Your Question

Validating test execution time for reading a frame on an Intel i7 CPU

asked 2015-11-06 00:18:45 -0600

zms

updated 2015-11-13 00:47:28 -0600

Hello team members, can anyone help validate the execution time for OpenCV to read a frame from a video? I'm comparing against someone else's data and found that mine is a bit different. I'm using an Intel i7-4790 @ 3.60GHz with 16GB of memory, and the code below to measure execution time. I'm trying to see how others do this, and whether my result and assumption are correct: does reading a frame alone take around 10ms?

Here is how I implement it in the program:

    double t = (double)getTickCount();
    bool bSuccess = cap.read(frame); // read a new frame from video

    if (!bSuccess) // if not success, break loop
        cout << "Cannot read the frame from video file" << endl;

    t = ((double)getTickCount() - t) / getTickFrequency();
    std::cout << "Times passed in seconds: " << t << std::endl;

and here is a snippet of the result; I think the average is ~10ms (is this right?)

    Times passed in seconds: 0.0117758
    Times passed in seconds: 0.00500268
    Times passed in seconds: 0.0114046
    Times passed in seconds: 0.0110537
    Times passed in seconds: 0.0152564
    Times passed in seconds: 0.0102511
    Times passed in seconds: 0.00492798
    Times passed in seconds: 0.0109479
    Times passed in seconds: 0.0115418
    Times passed in seconds: 0.0102865
    Times passed in seconds: 0.0124572
    Times passed in seconds: 0.00492086
    Times passed in seconds: 0.0155164
    Times passed in seconds: 0.0100909
    Times passed in seconds: 0.0152786
    Times passed in seconds: 0.0222282
    Times passed in seconds: 0.0128396
    Times passed in seconds: 0.0119007
    Times passed in seconds: 0.0145368
    Times passed in seconds: 0.010985
    Times passed in seconds: 0.0101251
    Times passed in seconds: 0.00457987
    Times passed in seconds: 0.0112319

Edited to add for verification. @pklab, sorry for the slow response; I needed to digest the code and try to implement it in mine. Since I have a video file, going by the glossary it should be reading from it as below. That means the total time in this case would be the total time for reading the frame, right?

    double clockResolution = cv::getTickFrequency(); // ticks per second
    std::cout << std::endl << "Clock resolution: "
              << 1000 * 1000 * 1000 / clockResolution << "ns" << std::endl;

    while (1)
    {
        start = cv::getTickCount();

        bool bSuccess = cap.read(frame); // read a new frame from video

        if (!bSuccess) // if not success, break loop
        {
            cout << "Cannot read the frame from video file" << endl;
            break;
        }

        stop = cv::getTickCount();
        double totalTime = (stop - start) / cv::getTickFrequency(); // seconds
    }


So that means 1 frame in 10ms, that is 100 frames in 1 second ... 100 FPS. What is your camera's FPS, 60? Your time stays under about 16ms, and 1000/16 ≈ 62.5 FPS, so I suppose it is 60.

thdrksdfthmn ( 2015-11-06 06:46:35 -0600 )

@thdrksdfthmn, I ran the code below to get the fps, and the result is ~30fps.

 double fps = cap.get(CV_CAP_PROP_FPS);
 cout << "Frame per seconds : " << fps << endl;

  Frame per seconds : 29.97

That seems correct for a camera at 30fps. What I'm trying to understand is how long OpenCV takes to grab each frame from a video. Is the code above correct for that? So it means ~10ms is needed to grab each frame from the video (ignoring the fps)?

And how does this time measurement compare with:

1) gettimeofday(), declared in <sys/time.h>, wrapped in a timer class:

    t.start();
    // do something
    t.pause();
    t.stop();

2) clock(), from <time.h>:

    clock_t start = clock();
    clock_t end = clock();

zms ( 2015-11-11 04:58:42 -0600 )

1 answer


answered 2015-11-06 06:36:04 -0600

pklab

updated 2015-11-13 05:44:55 -0600

First of all, the big differences in your timings might be due to cout, which introduces some caching and buffering effects around the grab.

After this: on platforms where a CPU tick counter is available, cv::getTickCount() combined with cv::getTickFrequency() can accurately measure the execution time of very small code fragments (in the range of hundreds of nanoseconds), but you have to be sure that the duration of the code under test is greater than your clock resolution. This is almost always true when the CPU tick feature is available.

On my Win7/i3, Clock resolution = 1/cv::getTickFrequency() ~= 400ns

To overcome clock resolution and get around some other optimizations (cache, registers, ...), a common approach is to average the time over N executions of the code under test: measure the duration of a for loop, then compute the average.

EDIT 2: Code refactored to be clearer. #define TEST has been removed, and the tests have been encapsulated in functions. I hope it's better now. Below are results using the new code (system: Intel i3 @ 2.53GHz, Win7/64, OpenCV 2.4.10).

    Clock resolution: 405.217ns
    Expected time: 20ms           Measured time: 19.9917ms
    INFO: try to set cam at: 25fps
    INFO: currently we should grab at: 25fps
    Measuring time needed to grab a frame from camera...
    Expected time: 40ms           Measured time: 49.7196ms~=20fps
    INFO frame size: [640 x 480]
    INFO: The file has been created at: 20fps
    Measuring time needed to read a frame from video file...
    Expected time: UNAVAILABLE    Measured time: 1.32921ms~=752fps
    INFO frame size: [640 x 480]

Here is the new code:

    // Defines a standard _PAUSE function
    #if __cplusplus >= 199711L  // if C++11
        #include <thread>
        #define _PAUSE(ms) (std::this_thread::sleep_for(std::chrono::milliseconds(ms)))
    #elif defined(_WIN32)       // if windows system
        #include <windows.h>
        #define _PAUSE(ms) (Sleep(ms))
    #else                       // assume this is Unix system
        #include <unistd.h>
        #define _PAUSE(ms) (usleep(1000 * ms))
    #endif

    // Tests the accuracy of our measurement system
    void TestMeasurementSys()
    {
        int64 start;
        double totalTime = 0, averageTime = 0;

        std::cout << "-----------------------" << std::endl;
        std::cout << "TEST MEASUREMENT SYSTEM" << std::endl;

        double clockResolution = cv::getTickFrequency();  // ticks per second
        std::cout << "\tClock resolution: "
            << 1000 * 1000 * 1000 / clockResolution << "ns" << std::endl;

        int testTimeMs = 20;
        int count = 0, maxCount = 100;
        for (count = 0; count < maxCount; count++)
        {
            start = cv::getTickCount();
            _PAUSE(testTimeMs);  // pause of known duration to measure
            totalTime += (cv::getTickCount() - start);
        }
        totalTime /= cv::getTickFrequency();              // seconds
        averageTime = totalTime / count;                  // seconds
        std::cout << "\tExpected time: " << testTimeMs << "ms"
            << "\tMeasured time: " << averageTime * 1000 << "ms" << std::endl;
    }

    /** \brief Measures time needed to get a frame from a cv::VideoCapture
    *   \param [in] cap valid and opened cv::VideoCapture instance
    *   \note If we are grabbing from a cam with high fps (>10) and the driver is
    *   working fine, some \b simple additional code inside the grab loop (like
    *   imshow and waitKey(1)) shouldn't introduce a delay, because \c cap>>frame
    *   will wait for the driver the time needed to run at the given fps, than a
    *   bit of time spent here doesn ...


@pklab, thanks for the detailed answer. Based on this result, can I say that the time needed to grab each frame from the video is ~10ms?

zms ( 2015-11-11 05:03:09 -0600 )

No! Check the code... it depends on #define TEST:

  • #define TEST 1: the function uses the available Sleep(testTimeMs) to test the measurement function... if it returns ~testTimeMs, the measurement is OK.
  • #define TEST 0: the function grabs frames and measures the time elapsed between two consecutive grabs (which includes the waiting time for a new frame).

When grabbing from a camera, the time elapsed between two consecutive grabs depends on the FPS. If you grab at 10FPS you should measure ~100ms.

pklab ( 2015-11-11 05:39:21 -0600 )

I'm sorry, I'm getting confused now. If the camera specification is 30 frames per second, I understand that in 1 second there are 30 frames, each taking 1/30s. Now, the OpenCV code is not running at 30 fps, right? The code grabs frames from the video (e.g. a .wmv) and reads them for processing. So from the result, am I right in saying that, on average, grabbing and reading a frame takes ~10ms, which is not related to the camera spec of 30fps? I'm sorry if I don't really get it.

zms gravatar imagezms ( 2015-11-11 09:53:18 -0600 )edit

Glossary: read time is the time needed to read a frame from a video file; grab time is the time needed to get a frame from a camera. The camera specification gives the maximum fps for your camera.

When you start grabbing from a cam, you can select the grab fps with cap.set(CV_CAP_PROP_FPS, wantedFps);, where wantedFps <= maximum fps. If the driver is working fine you will receive a frame every ~1/wantedFps seconds. Take the time needed to grab N frames and divide it by N: you should get a grab time of ~1/wantedFps per frame on average. Check it!

If you read frames from a video file, the camera doesn't matter and the fps stored in the file is just (read-only) information. You can read frames at the highest speed your hardware and codec complexity allow. My read time is ~48ms per frame on average.

pklab ( 2015-11-11 11:07:13 -0600 )

@pklab when reading from file, does grabbing time also depend on frame size?

LorenaGdL ( 2015-11-11 11:58:50 -0600 )

@LorenaGdL I think yes! Frame size influences decoding time and memory transfer time. Thanks for the detail, but I need to keep my long answer short :)

pklab ( 2015-11-11 13:42:07 -0600 )

@pklab, thanks for the detailed answer; it took me some time to digest all these new things. :) From your code, what I understand is that it both grabs from the camera and reads the video file. Correct me if I'm wrong. Since OpenCV uses the same code to read video files, can you please confirm whether my understanding of the code is right: in the edited version of the question, that code is only for reading a frame, right? And what I measured is the time needed for a single frame to be read?

zms ( 2015-11-13 00:54:37 -0600 )

Yes, VideoCapture accepts a device index or a filename. To be clearer I edited my code a second time... and yes, the given time is for a single frame! When everything is clear, please accept my answer for future reference. Thanks.

pklab ( 2015-11-13 05:50:06 -0600 )

@pklab, sorry, a bit late because I was trying to understand the code again and getting a better grasp of all of it. Just another question comparing your answer and the output: is the read time still ~48ms as in the comment?

This is what the comment said: "My read time is ~48ms per frame in average".

And this is the result from the test execution:

    TEST GRABBING FROM VIDEO FILE
    INFO: The file has been created at: 20fps
    Measuring time needed to read a frame from video file...
    Expected time: UNAVAILABLE    Measured time: 1.32921ms~=752fps
    INFO frame size: [640 x 480]

zms ( 2015-11-16 05:25:30 -0600 )

The "old" ~48ms comment was an example using the old code to read a video file with very large frames (3840x1024 RGB, XVID).

Again, the time needed to read a frame from file depends on the processor, hard disk, memory, frame size, encoding... it will change every time!

In the answer you can read "Below are results using the new code", followed by the output of the new code on my PC, then the new code itself... try it on your PC. The info you are looking for is under TEST GRABBING FROM VIDEO FILE / Measured time.

Sorry, my English is bad; I suggest following the code, which should be clearer. If you like it, accept the answer.

pklab ( 2015-11-16 09:25:55 -0600 )




Asked: 2015-11-06 00:18:45 -0600

Seen: 879 times

Last updated: Nov 13 '15