Hello, I stumbled across an API that captures images directly and converts the camera feed (raw YUV data) to RGBA on the GPU. This is beneficial because my RPi doesn't have much processing power, so I want a live camera feed that can be displayed directly on screen while the CPU continuously processes the data in the background.
The API lacks documentation and it's unclear how to use it with OpenCV, but it is simple and efficient and just what I need, so it would be quite helpful to get it working. This is the code I've tried to convert the camera feed to a cv::Mat object, so far without success.
const void* frame_data; int frame_sz;
if(cam->BeginReadFrame(1, frame_data, frame_sz))
{
    //if doing argb conversion the frame data will be exactly the right size so just set directly
    textures[1].SetPixels(frame_data);

    //wrap the RGBA buffer in a cv::Mat while the buffer is still valid;
    //RGBA is 4 channels per pixel, so CV_8UC4 rather than CV_8UC1,
    //and frame_data is already a pointer, so no extra & is needed
    cv::Mat TempMat(MAIN_TEXTURE_HEIGHT, MAIN_TEXTURE_WIDTH, CV_8UC4, const_cast<void*>(frame_data));
    cv::imshow("Camera Feed", TempMat);
    cv::waitKey(1); //give HighGUI a chance to actually draw the window

    cam->EndReadFrame(1); //only release the buffer after the Mat is done with it
}
Of course, this is just a small segment of the code, but it uses a pointer (cam) to a class that provides these functions:
bool BeginReadFrame(int level, const void* &out_buffer, int& out_buffer_size);
void EndReadFrame(int level);
The BeginReadFrame function basically makes a few MMAL calls, reads from the camera output, and converts it into RGBA format, while EndReadFrame releases the data buffer back to the pool it came from. So the API is essentially a wrapper around the relatively complex MMAL library to make things a lot easier.
On the other hand, it doesn't show how to convert this frame data to an OpenCV image. I believe it is quite simple, but I'm not very experienced with either OpenCV or OpenGL.
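For context, here is a minimal standalone sketch of how I understand the wrapping should work, using a synthetic RGBA buffer in place of the real camera frame (the dimensions and the solid-red pixel data are made up for the test; in the real code the pointer would come from BeginReadFrame):

#include <opencv2/opencv.hpp>
#include <cstdint>
#include <cassert>

int main()
{
    //hypothetical frame dimensions standing in for MAIN_TEXTURE_WIDTH/HEIGHT
    const int width = 4, height = 2;

    //synthetic RGBA buffer: every pixel pure red (R=255, G=0, B=0, A=255)
    std::uint8_t buffer[height * width * 4];
    for (int i = 0; i < height * width; ++i) {
        buffer[i * 4 + 0] = 255; //R
        buffer[i * 4 + 1] = 0;   //G
        buffer[i * 4 + 2] = 0;   //B
        buffer[i * 4 + 3] = 255; //A
    }

    //wrap the raw buffer without copying: 4 channels per pixel, so CV_8UC4;
    //the Mat only references the buffer, so it must not outlive it
    cv::Mat rgba(height, width, CV_8UC4, buffer);

    //OpenCV's display and processing routines expect BGR, so convert (this copies)
    cv::Mat bgr;
    cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);

    //red in BGR channel order is (0, 0, 255)
    cv::Vec3b px = bgr.at<cv::Vec3b>(0, 0);
    assert(px[0] == 0 && px[1] == 0 && px[2] == 255);
    return 0;
}

Is this the right general approach? My understanding is that the cvtColor step also serves as the copy, so the result stays valid after the camera buffer is released.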
I encourage you to check out the API to get a better idea of what it does, and how it works.
Thank you very much in advance!