Extracting a vector of pixel values across multiple frames
Hi -- I am very new to OpenCV, so please forgive my naivety. I'd like to examine how pixel values, at particular pixel locations, change over multiple frames. As a result, I am interested in reading in a vector of pixels across multiple images, rather than all pixels of an image. I can do this by reading in entire frames and storing the pixel values of interest in a separate vector, but is there a more efficient and elegant approach using OpenCV?
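For concreteness, here is a minimal sketch of the brute-force approach I have in mind (the file name, the pixel locations, and the assumption of a colour 8-bit video are just placeholders of mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::VideoCapture cap("video.avi");               // hypothetical input file
        std::vector<cv::Point> locations;                // pixel locations of interest
        locations.push_back(cv::Point(10, 20));
        locations.push_back(cv::Point(42, 7));

        // One intensity time series per tracked location.
        std::vector<std::vector<uchar> > series(locations.size());

        cv::Mat frame, gray;
        while (cap.read(frame))
        {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);   // assuming colour frames
            for (size_t i = 0; i < locations.size(); ++i)
                series[i].push_back(gray.at<uchar>(locations[i]));
        }
        return 0;
    }

This still reads every full frame into memory; it only keeps the pixels of interest afterwards, which is exactly the inefficiency I am asking about.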
Eventually I will be performing statistical analysis (mean, variance, etc.) of pixel intensity values over time. It appears that OpenCV has some nice functions for computing statistics of pixels in space (i.e. within an image), such as cvMean_StdDev, but it is not clear to me whether OpenCV supports a similar capability across multiple frames. In other words, for N frames of video, with each frame of dimensions W x H, does OpenCV have a quick and easy way to return a single W x H image representing the mean, standard deviation, or variance of each pixel over time?
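To make the goal concrete, the following sketch (my own function and variable names, assuming the N single-channel frames are already in a std::vector<cv::Mat>) is roughly what I would assemble by hand from cv::accumulate and cv::accumulateSquare; I am asking whether something ready-made exists:

    #include <opencv2/opencv.hpp>
    #include <vector>

    void temporalStats(const std::vector<cv::Mat>& frames,
                       cv::Mat& meanImg, cv::Mat& varImg)
    {
        CV_Assert(!frames.empty());
        cv::Mat sum   = cv::Mat::zeros(frames[0].size(), CV_64FC1);
        cv::Mat sumSq = cv::Mat::zeros(frames[0].size(), CV_64FC1);

        for (size_t i = 0; i < frames.size(); ++i)
        {
            cv::accumulate(frames[i], sum);          // per-pixel running sum
            cv::accumulateSquare(frames[i], sumSq);  // per-pixel running sum of squares
        }

        double n = static_cast<double>(frames.size());
        meanImg = sum / n;                            // per-pixel temporal mean
        varImg  = sumSq / n - meanImg.mul(meanImg);   // variance as E[x^2] - E[x]^2
    }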
Thank you Michael and Kirill for your answers. Both approaches work well. One issue that wasn't mentioned was the possibility of saturation. I am starting with an array of matrices ( Mat *frameArray ) of type CV_16UC1, so it is necessary to perform the accumulation and division in floating-point math to avoid overflow. I implemented the following:
    Mat sumImage, tempDouble;
    frameArray[0].convertTo(sumImage, CV_64FC1, 0);   // initialize accumulation to zeros (scale by 0)
    for (int i = 0; i < numFrames; i++)
    {
        frameArray[i].convertTo(tempDouble, CV_64FC1);
        sumImage += tempDouble;
    }
This works, but it requires converting each frame to double precision, which seems slow and inefficient. Is there a way to recast the entire array pointer, or some other, more efficient approach?
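One possibility I am considering (a sketch assuming the same frameArray and numFrames as above) is to let cv::accumulate do the widening internally, since it accepts an 8U/16U/32F source and a 32F/64F accumulator; that would remove the per-frame convertTo and the temporary double matrix:

    #include <opencv2/opencv.hpp>

    // Sketch: cv::accumulate converts each 16-bit frame to the accumulator's
    // depth on the fly, so no explicit per-frame conversion is needed.
    cv::Mat temporalMean(const cv::Mat* frameArray, int numFrames)
    {
        cv::Mat sumImage = cv::Mat::zeros(frameArray[0].size(), CV_64FC1);
        for (int i = 0; i < numFrames; ++i)
            cv::accumulate(frameArray[i], sumImage);   // sumImage += frameArray[i], in double
        return sumImage / numFrames;                   // final division also in double precision
    }

Is this the intended way to do it, or is there something even cheaper?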
The question above is discussed here: http://answers.opencv.org/question/1082