# Getting MJPEG data from UVC webcams on Linux

Hi,

I'm working on a project that requires input from three webcams, all connected by USB, to be used with OpenCV. Separately, the code works for any one camera, but as soon as I want to use two cameras at the same time I get USB bandwidth issues. Using kernel module quirk modes helps somewhat, but not enough.

I've figured out that the problem does not arise when streaming from the three webcams using guvcview, because guvcview lets me opt for MJPEG compression of the video feed from the cameras. Indeed, if I instead select YUYV as the pixel format in guvcview, the problems appear as soon as I try to stream from two or more cameras.

Therefore I want to get MJPEG streams going from the cameras into OpenCV. One way to do this would be to use FIFOs and gstreamer, but I think that solution is rather bad. Is there a more OpenCV-esque way of doing this, by telling V4L2 to request an MJPEG stream and then decompressing it in-band?
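For reference, OpenCV's capture API does let you request a pixel format by setting the FOURCC on the `VideoCapture`; whether the request actually sticks depends on the camera, the driver, and the OpenCV build, so treat this as a sketch (the device index `0` and the `open_mjpeg` helper name are just examples):

```python
def fourcc(a, b, c, d):
    """Pack a FOURCC integer the same way cv2.VideoWriter_fourcc does
    (little-endian: first character in the lowest byte)."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)


def open_mjpeg(index=0, width=640, height=480):
    """Open camera `index` and ask V4L2 for an MJPEG stream.

    Read back CAP_PROP_FOURCC afterwards to verify the driver accepted
    the format; not every camera/backend combination honours it.
    """
    import cv2  # imported here so the FOURCC helper above works without OpenCV

    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FOURCC, fourcc("M", "J", "P", "G"))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap


# The packed MJPG FOURCC as V4L2 sees it:
MJPG = fourcc("M", "J", "P", "G")
```

With MJPEG negotiated at the driver level, OpenCV decodes the JPEG frames to BGR for you, so `cap.read()` behaves as usual while the USB bus only carries compressed data.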



Well hello, future internet person with the same problem as me.

I ran out of patience and fell back to a UNIX pipe solution. What I did is something like this:

```sh
gst-launch-1.0 v4l2src device=/dev/video1 ! 'image/jpeg,width=640,height=480,framerate=15/1' ! filesink buffer-size=0 location=/dev/stdout | ./camera_app
```

i.e. I open the camera with gstreamer, ask for an image/jpeg stream (which forces MJPEG), and send it to a filesink writing to stdout. I pipe that into my camera app, which calls VideoCapture.open on /dev/stdin. This is rather ugly and adds some latency, so I'd prefer not to do it this way, but it works for three USB cameras in parallel on a UDOO board (similar to Wandboard etc.).
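An MJPEG stream is just concatenated JPEG images, so one alternative on the receiving end (not what I did above) is to split the piped bytes into frames yourself and hand each one to cv2.imdecode, instead of pointing VideoCapture at /dev/stdin. A minimal splitter sketch, with the caveat that it relies on JPEG byte stuffing keeping the FF D9 end-of-image marker out of entropy-coded data and ignores embedded EXIF thumbnails:

```python
def split_mjpeg(buf):
    """Split an MJPEG byte buffer into complete JPEG frames.

    Returns (frames, leftover): each frame runs from the SOI marker
    (FF D8) through the matching EOI marker (FF D9); `leftover` is any
    trailing partial frame, to be prepended to the next read from the pipe.
    """
    frames = []
    while True:
        soi = buf.find(b"\xff\xd8")
        if soi < 0:
            return frames, b""        # no frame start seen yet
        eoi = buf.find(b"\xff\xd9", soi + 2)
        if eoi < 0:
            return frames, buf[soi:]  # frame still incomplete
        frames.append(buf[soi : eoi + 2])
        buf = buf[eoi + 2:]


# Example with two fake "JPEGs": one complete frame, one partial tail.
frames, rest = split_mjpeg(b"\xff\xd8AAAA\xff\xd9\xff\xd8BB")
```

Each returned frame can then be decoded to a BGR image with something like `cv2.imdecode(np.frombuffer(frame, np.uint8), cv2.IMREAD_COLOR)`.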

