Converting an OpenCV frame to JPEG format in app

Currently, I am following https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/ and have the app up and running on a Raspberry Pi.

I have modified the code a bit so that it sends the name of the detected face to my WebSocket server. I can already send the name as a simple string in JSON without any issue. However, I would also like to send the image (or frame, if you prefer) of the detected face. No face segmentation is required; I just want the whole frame at the time of face detection.

I am thinking of converting the frame to JPEG first, then sending that data straight over the WebSocket. From looking around, I see that there is an imwrite function, but it saves to a file. Is it possible to convert a frame to JPEG data without saving it to a file? How can this be done? Or is there a better way to do this?
