Storing frames from large video [Memory issue]

asked 2016-10-01 06:59:40 -0600

Eldar

Hey guys,

I'm investigating how to process video frames in bulk. Once I have my frames, I intend to greyscale them and reconstruct the video; I'll add more to the process once this is achieved. Below is what I currently have:

cv::Mat currentFrame;
std::vector<cv::Mat> frames;
cv::VideoCapture capture("../Clips/myFilm.avi");

int totalFrameCount = static_cast<int>(capture.get(CV_CAP_PROP_FRAME_COUNT));

for (int i = 0; i < totalFrameCount; i++)
{
    capture >> currentFrame;
    frames.push_back(currentFrame.clone()); // clone() so each element owns its own pixel data
}

This works when the video is small, but with larger ones I run out of memory, and I don't know how to free memory effectively in this scenario. I could scale the images down before storing them, but that only postpones the problem when extremely high frame counts are involved.

Could anyone recommend alternative designs that can handle long video files?

Thanks in advance.




why do you need all frames in memory?

(can't you just convert the images on the fly, and save them to a new video?)

berak ( 2016-10-01 07:02:56 -0600 )

I'd like to manage memory more effectively and learn how this works. I'm sure there are optimised solutions: once I have the frames, I could distribute them across threads/processes, and perhaps CUDA warps, then recombine them once the work is done.

I am open to any ideas and will certainly keep the on-the-fly approach in mind.

Eldar ( 2016-10-01 07:16:53 -0600 )

i still think you're making an artificial problem of it.

berak ( 2016-10-01 11:27:06 -0600 )

There is not a single computer vision pipeline that stores all video data in memory unless it is explicitly needed. You are better off splitting the frames up and processing them efficiently in a multithreaded way (which OpenCV already handles internally for many functions when built with a parallel framework) than trying to do it the way you are. The concept you suggest is just plain wrong ...

StevenPuttemans ( 2016-10-04 06:37:31 -0600 )