OpenCV2 Python: memory leak when storing frames?
Hi all,
I'm doing some motion detection using the accumulated-averaging technique with opencv2 and Python. My script reads in a frame, compares it to the background, and if movement exceeds a threshold, stores the frame to a list. I'm finding that the memory used to store each image is disproportionately large: holding on to 100 images takes about 1 GB of memory. This makes little sense, since each image written to file is only about 120 KB.
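For context, the kind of loop described is roughly the following (a minimal sketch; the capture source, the 0.05 accumulation weight, and MOTION_THRESHOLD are placeholders, and cv2.accumulateWeighted is assumed as the running-average step):

```python
import cv2

cap = cv2.VideoCapture(0)          # placeholder source (camera or video file)
MOTION_THRESHOLD = 500_000         # placeholder: tune for your scene

stored_frames = []
avg = None                         # running background average (float32)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if avg is None:
        avg = gray.astype("float32")
        continue

    # update the accumulated average, then measure deviation from it
    cv2.accumulateWeighted(gray, avg, 0.05)
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))

    if diff.sum() > MOTION_THRESHOLD:
        # appending the raw BGR frame keeps a full uncompressed copy in memory
        stored_frames.append(frame.copy())

cap.release()
```

Each appended frame here is a full uncompressed numpy array, which is where the memory goes.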
What is the best way to store a list of images? Is there a numpy approach someone can suggest?
This seems to be an ongoing issue. It would appear that the memory from cap.grab() is not being released. My goal in using cap.grab() instead of cap.read() was to skip some frames.
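For reference, the usual skip-frames idiom looks roughly like this (the source path and SKIP value are placeholders); grab() only advances the stream, and retrieve() decodes just the frames that are kept:

```python
import cv2

cap = cv2.VideoCapture("input.avi")   # placeholder source
SKIP = 5                              # placeholder: keep every 5th frame

frame_idx = 0
while cap.grab():                     # grab() advances without decoding
    frame_idx += 1
    if frame_idx % SKIP:
        continue
    ret, frame = cap.retrieve()       # decode only the frames you keep
    if not ret:
        break
    # ... compare to background / store `frame` here ...

cap.release()
```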
"since written to file each image is about 120 kb." - that's compressed, right ?
the frames you store in the array are not.
100 * 640 * 480 * 3 = 92,160,000 bytes (~92 MB) of raw pixel data
// yeah, that's a lot. With HD frames, you easily get a gigabyte there.
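If you need to keep 100+ frames around, one way to get closer to the on-disk footprint is to hold encoded bytes instead of raw arrays, e.g. with cv2.imencode / cv2.imdecode (a sketch, not the only option; the JPEG quality value is arbitrary):

```python
import cv2

stored = []   # list of 1-D uint8 numpy buffers, one per frame

def store_compressed(frame, quality=90):
    # encode to JPEG in memory: roughly the ~120 KB you see on disk,
    # instead of width * height * 3 bytes of raw pixels
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if ok:
        stored.append(buf)

def load_frame(i):
    # decode back to a BGR image only when it is actually needed
    return cv2.imdecode(stored[i], cv2.IMREAD_COLOR)
```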