# Measuring cv::VideoCapture::read() execution time in milliseconds gives me incorrect fps

Good evening!

I am trying to measure the execution time (in milliseconds) of cv::VideoCapture::read over 1000 frames. I am using a USB camera, an ELP-USB130W01MT. I set the frame size to 640x480, which allows the camera to capture at 30 frames per second. The pixel format is YUYV and the capture goes through the V4L2 driver.

To measure the execution time, I am using the std::clock function (from the &lt;ctime&gt; header), which retrieves the CPU time consumed by the process, and taking the difference between two timestamps. Since the camera delivers 30 frames per second at 640x480, I would have expected an execution time equal to or greater than 33 ms per frame (1000 milliseconds / 30 frames ≈ 33 ms), but I am getting incorrect results: averaged over 1000 frames, the measured time is between 4 ms and 8 ms! I don't understand these results, so I took a look at the read source code; it essentially consists of a C select call that waits for the file descriptor to become ready, plus some V4L2 calls to retrieve the frame from the camera.

Here is the timing code I use:

double Clock::duration()
{
    // Difference of two std::clock_t timestamps, converted to seconds.
    return (this->_end - this->_start) / (double) CLOCKS_PER_SEC;
}


Note: the results seem to be the same when I change the exposure value anywhere from 48 to 350 (the only range I tested). Usually the FPS drops when the exposure time increases, so I don't really understand these results either.

Could someone explain to me what is going on? Am I doing something wrong? Does the issue come from OpenCV, the camera, or something else?