I'm trying to do simple video playback using OpenCV on OS X 10.11.6, but the performance of imshow() is very poor. I get around 1 FPS out of the video, no matter what value I pass to waitKey().
The crazy thing is: I have Windows 7 running as a VMware virtual machine on this same computer, and if I compile and run exactly the same code on that virtualised Windows (running inside the very OS that performs badly), I get the full expected frame rate (30 FPS). So this can't be a question of the memory or processing power of my machine (i7, 2.5 GHz, 16 GB RAM).
This is the code I'm running:
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, const char *argv[]) {
    VideoCapture cap("/myvideo.mp4");
    if (!cap.isOpened())
        return -1;

    namedWindow("edges", 1);
    for (;;) {
        Mat frame;
        cap >> frame;           // grab the next frame
        if (frame.empty())      // stop cleanly at the end of the video
            break;
        imshow("edges", frame);
        if (waitKey(30) >= 0)   // quit on any key press
            break;
    }
    return 0;
}
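To try to narrow down where the time goes, here is a rough sketch of an instrumented version of the same loop (just my diagnostic attempt, using std::chrono) that times cap >> frame, imshow(), and waitKey() separately:

#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

using namespace cv;
using Clock = std::chrono::steady_clock;

// Milliseconds elapsed since 'start'.
static double ms_since(Clock::time_point start) {
    return std::chrono::duration<double, std::milli>(Clock::now() - start).count();
}

int main() {
    VideoCapture cap("/myvideo.mp4");
    if (!cap.isOpened())
        return -1;

    namedWindow("edges", 1);
    for (;;) {
        Mat frame;

        auto t0 = Clock::now();
        cap >> frame;                 // decode time
        if (frame.empty())
            break;
        double decode_ms = ms_since(t0);

        auto t1 = Clock::now();
        imshow("edges", frame);       // draw time
        double draw_ms = ms_since(t1);

        auto t2 = Clock::now();
        int key = waitKey(30);        // event-loop / wait time
        double wait_ms = ms_since(t2);

        std::cout << "decode " << decode_ms << " ms, "
                  << "imshow " << draw_ms << " ms, "
                  << "waitKey " << wait_ms << " ms\n";

        if (key >= 0)
            break;
    }
    return 0;
}

My expectation is that on the VM, waitKey() accounts for roughly the full 30 ms, while on OS X one of these three calls must be eating close to a second per frame.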
I've ported this code to Python (using the official OpenCV Python bindings) and the problem persisted. If I run the same Python code on the virtualised Windows, I get the expected frame rate (30 FPS).
I've read this could be related to VSYNC, but I could not find a way to disable it and test that theory. How can the same code run faster on a virtualised Windows guest than on the host OS?!
Help is greatly appreciated.
Thank you.
BurningFuses