Hi -- I am very new to OpenCV, so please forgive my naivety. I'd like to examine how pixel values, at particular pixel locations, change over multiple frames. As a result, I am interested in reading in a vector of pixels across multiple images, rather than all pixels of an image. I can do this by reading in entire frames and storing the pixel values of interest in a separate vector, but is there a more efficient and elegant approach using OpenCV?
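For concreteness, here is a minimal sketch of what I'm doing now, using the C++ interface (cv::VideoCapture, cv::Mat); the file name and pixel coordinates are just placeholders:

```cpp
// Sketch of my current approach: read whole frames and copy out only the
// pixels I care about into per-location time series.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("video.avi");           // placeholder input file

    // Pixel locations of interest (x, y) -- placeholders
    std::vector<cv::Point> locations = { cv::Point(10, 20), cv::Point(100, 50) };

    // One intensity time series per location
    std::vector<std::vector<uchar>> series(locations.size());

    cv::Mat frame, gray;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        for (size_t i = 0; i < locations.size(); ++i)
            series[i].push_back(gray.at<uchar>(locations[i]));  // sample this pixel
    }
    return 0;
}
```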
Eventually I will be performing statistical analysis (mean, variance, etc.) of pixel intensity values over time. It appears that OpenCV has some nice functions for computing statistics of pixels in space (i.e. within an image), such as cvMean_StdDev, but it is not clear to me whether OpenCV supports a similar capability across multiple frames. In other words, for N frames of video, with each frame of dimensions W x H, does OpenCV have a quick and easy way to return a single W x H image representing the mean, standard deviation, or variance of each pixel over time?
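This is roughly the result I'm after, computed by hand with running per-pixel sums; I'm hoping there is something more direct built in (the file name is again a placeholder):

```cpp
// Per-pixel temporal mean and variance, accumulated manually.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");           // placeholder input file

    cv::Mat frame, gray, f, sum, sumSq;
    int n = 0;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(f, CV_32F);
        if (sum.empty())
        {
            sum   = cv::Mat::zeros(f.size(), CV_32F);
            sumSq = cv::Mat::zeros(f.size(), CV_32F);
        }
        sum   += f;                  // running sum per pixel
        sumSq += f.mul(f);           // running sum of squares per pixel
        ++n;
    }

    // W x H images of the per-pixel statistics over time
    cv::Mat mean     = sum / n;
    cv::Mat variance = sumSq / n - mean.mul(mean);
    cv::Mat stddev;
    cv::sqrt(variance, stddev);
    return 0;
}
```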