1 | initial version |
The shell time includes all the steps needed to start and close the program, such as allocating and freeing memory objects, loading and unloading OpenCV and other shared libraries, and loading the program itself.
But your big mistake is that your clock starts after cap.open("video.mp4") and stops before cap.release(), thus you are ignoring the time needed to load/unload the video itself and the related libraries, codecs and so on.
Please note that you could also measure performance with OpenCV, using cv::getTickCount() and cv::getTickFrequency():
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    using namespace cv;
    using namespace std;

    int main( int argc, char** argv )
    {
        // take the time as the 1st instruction
        double t = (double)cv::getTickCount();

        // declare the objects
        VideoCapture cap("video.mp4");
        Mat frame;
        vector<Mat> frames;

        // do something ...
        while (cap.read(frame))
        {
            // your loop
        }

        // destroy all the objects
        cap.release();
        frame.release();
        frames.clear();
        vector<Mat>(frames).swap(frames); // trick to free up the memory

        // take the time as the last instruction
        t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
        // you lose a (negligible) amount of time here
        std::cout << "Time: " << t << " seconds" << std::endl;
        return 0;
    }
BTW the above code can't account for the program initialization/termination overhead that you are seeing from the shell; only an external timer, like the shell's time command, can include that.
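As a side note, the same tick functions can bracket just the region you care about. Here is a minimal sketch (the "video.mp4" path is just a placeholder) that times only the decoding loop and reports the average cost per frame:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture cap("video.mp4");
        cv::Mat frame;
        int frames = 0;

        double t0 = (double)cv::getTickCount();
        while (cap.read(frame))
            frames++; // decoding only, add your processing here

        double ms = 1000.0 * ((double)cv::getTickCount() - t0) / cv::getTickFrequency();
        if (frames > 0)
            std::cout << "Decoded " << frames << " frames, "
                      << ms / frames << " ms/frame" << std::endl;
        return 0;
    }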
2 | No.2 Revision |
To start with, your clocks should be the 1st and the last instructions. We should also remember that std::clock() returns the approximate __processor time used by the process__ (check the doc). If the process doesn't use the processor, std::clock() doesn't advance; this is the case when you have multithreading, a lot of I/O, or sleeps. In addition, std::clock() shouldn't depend much on the overall system load.
Test it with a simple std::cout << "Press a Key"; std::cin.get(); and you will see that the processor time returned by std::clock() is always the same regardless of the time you take to press the key!
NOTE: THIS IS NOT TRUE ON VS 2013, see this. (I actually get the same behaviour with Win7 64bit + gcc version 5.1.0 tdm64-1.)
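As a minimal sketch of that test (assuming only standard C++11; the wall clock comes from std::chrono), you could compare the two clocks around the blocking read:

    #include <chrono>
    #include <ctime>
    #include <iostream>

    int main()
    {
        std::clock_t c0 = std::clock();
        auto w0 = std::chrono::high_resolution_clock::now();

        std::cout << "Press a Key";
        std::cin.get(); // the process waits here without using the processor

        std::clock_t c1 = std::clock();
        auto w1 = std::chrono::high_resolution_clock::now();

        // processor time barely moves, wall time grows with your reaction time
        std::cout << "Processor time (ms): "
                  << 1000.0 * (c1 - c0) / CLOCKS_PER_SEC << std::endl;
        std::cout << "Wall time (ms): "
                  << std::chrono::duration<double, std::milli>(w1 - w0).count() << std::endl;
        return 0;
    }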
Because it looks like you are using a *nix OS, in your example std::clock() is really measuring processor time and not elapsed time. If you want to measure the real time elapsed between 2 events, please refer to the so-called wall clock.
Please note that you can also measure the elapsed time with OpenCV, using cv::getTickCount() and cv::getTickFrequency(). The code below compares processor time, wall time, and elapsed time measured with OpenCV:
    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <ctime>
    #include <iostream>
    #include <vector>

    using namespace cv;
    using namespace std;

    int main( int argc, char** argv )
    {
        double procTime, wallTime, cvTime;
        clock_t clk0, clk1;
        std::chrono::high_resolution_clock::time_point chrono0, chrono1;
        double cv0, cv1;

        // take the time as the 1st instructions
        clk0 = std::clock();
        chrono0 = std::chrono::high_resolution_clock::now();
        cv0 = (double)cv::getTickCount();

        // declare the objects
        VideoCapture cap("video.mp4");
        Mat frame;
        vector<Mat> frames;

        // do something ...
        while (cap.read(frame))
        {
            // your loop
        }

        // destroy all the objects
        cap.release();
        frame.release();
        frames.clear();
        vector<Mat>(frames).swap(frames); // trick to free up the memory

        // take the time as the last instructions
        clk1 = std::clock();
        chrono1 = std::chrono::high_resolution_clock::now();
        cv1 = (double)cv::getTickCount();

        procTime = 1000.0 * (clk1 - clk0) / CLOCKS_PER_SEC;
        wallTime = std::chrono::duration<double, std::milli>(chrono1 - chrono0).count();
        cvTime   = 1000.0 * (cv1 - cv0) / cv::getTickFrequency();

        cout << endl << "Processor time (ms): " << procTime
             << endl << "Wall time (ms): " << wallTime
             << endl << "Wall time using OpenCV (ms): " << cvTime
             << endl;
        return 0;
    }
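Because cv::getTickCount() counts the ticks of a real-time counter, cvTime should closely match wallTime, while procTime can be noticeably lower whenever the loop spends its time waiting on I/O (e.g. reading the video from disk).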
BTW the above code can't account for the program initialization overhead that you are seeing from the shell.