Test Execution Time Validation for Reading a Frame with OpenCV on an Intel i7 CPU
Hello team, can anyone help validate the execution time for OpenCV to read a frame from a video? I'm comparing against someone else's data and found that my numbers are a bit different. I'm using an Intel i7-4790 @ 3.60 GHz with 16 GB of memory, and the code below to measure the execution time. I just want to compare how others do this and check whether my result and assumption are correct: does reading a frame alone take around 10 ms?
Here is how I implemented it in my program:
double t = (double)cv::getTickCount();
bool bSuccess = cap.read(frame); // read a new frame from the video
if (!bSuccess) // if not successful, break the loop
{
    std::cout << "Cannot read the frame from video file" << std::endl;
    break;
}
t = ((double)cv::getTickCount() - t) / cv::getTickFrequency(); // elapsed time in seconds
std::cout << "Times passed in seconds: " << t << std::endl;
Here is a snippet of the result. I think the average is about 10 ms (is this right?):
Times passed in seconds: 0.0117758
Times passed in seconds: 0.00500268
Times passed in seconds: 0.0114046
Times passed in seconds: 0.0110537
Times passed in seconds: 0.0152564
Times passed in seconds: 0.0102511
Times passed in seconds: 0.00492798
Times passed in seconds: 0.0109479
Times passed in seconds: 0.0115418
Times passed in seconds: 0.0102865
Times passed in seconds: 0.0124572
Times passed in seconds: 0.00492086
Times passed in seconds: 0.0155164
Times passed in seconds: 0.0100909
Times passed in seconds: 0.0152786
Times passed in seconds: 0.0222282
Times passed in seconds: 0.0128396
Times passed in seconds: 0.0119007
Times passed in seconds: 0.0145368
Times passed in seconds: 0.010985
Times passed in seconds: 0.0101251
Times passed in seconds: 0.00457987
Times passed in seconds: 0.0112319
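For reference, these 23 samples sum to about 0.259 s, which averages to roughly 0.0113 s per frame, so the ~10 ms estimate is in the right ballpark (closer to 11 ms). Below is a minimal, self-contained sketch of the same measurement that accumulates the per-frame read time and prints the average at the end; the file name "video.avi" is only a placeholder, not from the original post.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("video.avi"); // placeholder path, replace with your video file
    if (!cap.isOpened())
    {
        std::cout << "Cannot open the video file" << std::endl;
        return -1;
    }

    cv::Mat frame;
    double totalTime = 0.0; // accumulated read time in seconds
    int frames = 0;

    while (true)
    {
        double t = (double)cv::getTickCount();
        if (!cap.read(frame))    // read a new frame; stop at end of file
            break;
        t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();

        totalTime += t;
        ++frames;
    }

    if (frames > 0)
        std::cout << "Average read time: " << (totalTime / frames) * 1000.0
                  << " ms over " << frames << " frames" << std::endl;
    return 0;
}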
Edited to add for verification. @pklab, sorry for the slow response; I needed some time to digest the code and implement it in mine. Since I have a video file, as I understand it the code should read from it as below. That means the total time in this case would be the total time for reading the frames, right?
while (1)
{
    // clockResolution comes from pklab's example
    std::cout << std::endl << "Clock resolution: " << 1000 * 1000 * 1000 / clockResolution << " ns" << std::endl;

    start = cv::getTickCount();
    bool bSuccess = cap.read(frame); // read a new frame from the video
    if (!bSuccess) // if not successful, break the loop
    {
        std::cout << "Cannot read the frame from video file" << std::endl;
        break;
    }
    stop = cv::getTickCount();
    double totalTime = (stop - start) / cv::getTickFrequency(); // seconds
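Summing totalTime over every iteration does give the total time spent in cap.read() for the whole file. A minimal sketch of that accumulation, assuming start, stop, cap and frame are declared as in pklab's example (the names grandTotal and nFrames are my own, added for illustration):

double grandTotal = 0.0; // total seconds spent in cap.read()
int nFrames = 0;

while (1)
{
    start = cv::getTickCount();
    bool bSuccess = cap.read(frame);
    if (!bSuccess)
        break;               // end of the video file
    stop = cv::getTickCount();

    grandTotal += (stop - start) / cv::getTickFrequency();
    ++nFrames;
}

if (nFrames > 0)
    std::cout << "Total read time: " << grandTotal << " s for " << nFrames
              << " frames (avg " << 1000.0 * grandTotal / nFrames << " ms/frame)" << std::endl;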
So that means 1 frame in 10 ms, that is 100 frames in 1 second ... 100 FPS. What is your camera's FPS, 60? Your times are around 10-16 ms, which corresponds to roughly 1000/16 = 62.5 FPS or more, so I suppose the camera is 60 FPS.
@thdrksdfthmn, I ran the code to get the FPS, and the result is 30 FPS.
That should be correct, since the camera is 30 FPS. What I'm trying to understand is how long OpenCV takes to grab each frame from a video. Is the code above correct for measuring that? In other words, does it mean that ~10 ms is needed to grab each frame from the video (ignoring the FPS)?
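For comparison, the video's nominal FPS can also be read directly from the capture and held against the measured read time. A short sketch, assuming cap is the VideoCapture from the snippets above (the property constant is cv::CAP_PROP_FPS in OpenCV 3+, CV_CAP_PROP_FPS in OpenCV 2.x):

double fps = cap.get(cv::CAP_PROP_FPS);     // nominal FPS stored in the video file
double framePeriodMs = 1000.0 / fps;        // time budget per frame for real-time playback
std::cout << "Video FPS: " << fps
          << ", frame period: " << framePeriodMs << " ms" << std::endl;
// If the measured cap.read() time (~10 ms) is below the frame period (~33 ms at 30 FPS),
// decoding can keep up with real-time playback of this file.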
And what is the difference between this time measurement and:
1) the gettimeofday() function declared in "sys/time.h" (used here through a timer object: t.start(); // do something; t.pause(); t.stop();)
2) the clock() function from <time.h>: clock_t start = clock(); clock_t end = clock();
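The three approaches measure slightly different things: cv::getTickCount() and gettimeofday() are wall-clock timers, while clock() reports CPU time consumed by the process, which can come out lower when cap.read() blocks on file or camera I/O. A minimal sketch, assuming a POSIX system and that cap and frame are already set up as above, timing one read with all three side by side:

#include <sys/time.h>   // gettimeofday()
#include <ctime>        // clock(), CLOCKS_PER_SEC
#include <opencv2/opencv.hpp>
#include <iostream>

void timeOneRead(cv::VideoCapture& cap, cv::Mat& frame)
{
    struct timeval tv0, tv1;
    gettimeofday(&tv0, NULL);                 // wall-clock, microsecond resolution
    clock_t c0 = clock();                     // CPU time used by this process
    int64 t0 = cv::getTickCount();            // wall-clock, tick resolution

    cap.read(frame);                          // the operation being measured

    int64 t1 = cv::getTickCount();
    clock_t c1 = clock();
    gettimeofday(&tv1, NULL);

    double wallTicks = (t1 - t0) / cv::getTickFrequency();
    double cpuSecs   = double(c1 - c0) / CLOCKS_PER_SEC;
    double wallGtod  = (tv1.tv_sec - tv0.tv_sec) + (tv1.tv_usec - tv0.tv_usec) / 1e6;

    std::cout << "getTickCount:  " << wallTicks << " s\n"
              << "clock():       " << cpuSecs   << " s (CPU time)\n"
              << "gettimeofday:  " << wallGtod  << " s" << std::endl;
}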