Hey, if you're working in Python, you can use my vidgear library, which supports an FFmpeg backend through its WriteGear API for writing frames from OpenCV directly to the network. A complete example for writing OpenCV frames to an RTP stream follows:
# import required libraries
from vidgear.gears import WriteGear
import cv2

# open your source
stream = cv2.VideoCapture(0)

# define required FFmpeg parameters for your writer
output_params = {
    "-vcodec": "libx264",
    "-profile:v": "main",
    "-preset:v": "veryfast",
    "-g": 60,
    "-keyint_min": 60,
    "-sc_threshold": 0,
    "-b:v": "2500k",
    "-maxrate": "2500k",
    "-bufsize": "2500k",
    "-f": "rtp",  # use the RTP muxer (not "flv") for an rtp:// address
}

# define writer with the parameters above and an address such as `rtp://127.0.0.1:1234`
writer = WriteGear(output_filename="rtp://127.0.0.1:1234", logging=True, **output_params)

# loop over frames
while True:
    # read a frame from the stream
    (grabbed, frame) = stream.read()

    # bail out if no frame was grabbed
    if not grabbed:
        break

    # {do something with the frame here}

    # write the frame to the writer
    writer.write(frame)

    # show output window
    cv2.imshow("Output Frame", frame)

    # quit on 'q' key press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.release()

# safely close writer
writer.close()
Docs: https://abhitronix.github.io/vidgear
Maybe it helps if you explain a bit on which side of the process you need OpenCV?
There's always a server and a client side to the connection, and VLC is good at both, so you probably don't need to code both sides.
With a bit of luck, your OpenCV version already supports reading from streams by dispatching that job to FFmpeg (the "client" side). You could try whether opening the stream's URL with a VideoCapture gives you an (MJPG) stream to read from (in C++).
That probably works. My intention is to read from a webcam, process it with OpenCV and then send it to the network as an (HD) stream. I'm sure that VLC etc. can read that stream easily, but I don't know how to send the IplImages or the .avi TO the network.
I had a look at virtual devices in Linux, so I think that a VideoWriter in OpenCV could write the images to such a device, and finally VLC could read from the device and transfer it as a stream to the network. But how can I write into a virtual device so that VLC can read from it?
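One way to sketch that virtual-device idea, assuming the v4l2loopback module provides a device such as `/dev/video1` and ffmpeg is installed: pipe raw BGR frames from your program into an ffmpeg process that writes to the loopback device, which VLC can then open like a webcam. Device path, resolution and frame rate here are assumptions:

```python
import subprocess

def v4l2_sink_command(device="/dev/video1", width=1280, height=720, fps=30):
    # ffmpeg reads raw BGR frames from stdin and forwards them to the
    # loopback device; VLC can then open that device like any webcam
    return [
        "ffmpeg",
        "-f", "rawvideo",       # uncompressed frames arrive on stdin
        "-pix_fmt", "bgr24",    # OpenCV's default channel order
        "-s", "{}x{}".format(width, height),
        "-r", str(fps),
        "-i", "-",
        "-f", "v4l2",
        device,
    ]

# usage (untested sketch):
# proc = subprocess.Popen(v4l2_sink_command(), stdin=subprocess.PIPE)
# proc.stdin.write(frame.tobytes())  # frame: HxWx3 uint8 BGR ndarray
```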
My OpenCV is compiled with FFmpeg support.
To explain it in detail: I'm using an embedded board (a PandaBoard with Linux 12.04 Server) where OpenCV is used to process images, and I want to display the result on a desktop.
Ah, a PandaBoard. Shame I don't own one. Linux 12.04 Server is probably a stripped Ubuntu? Like, without a desktop and such?
If it comes with a webserver, you could try to build a CGI interface for that, similar to what PHP or Perl would be doing, only that your program would just write an HTTP header, then imencode() the image in memory, and then print it to stdout.
If you're a bit into networking and sockets, even writing your own webserver for that (same idea: imencode() and send out) might be an idea.
If you only need to send it from the Panda to your workstation, a simple serial transfer over USB might work, too (probably too slow for HD, though).
Networking is not really my business; I want to keep it as simple as possible. Does anyone have examples of how to do that? I already have an example of how to send an IplImage by splitting it and sending it via a socket, but that puts a heavy load on the processor. I'd rather use VLC with compression like x264.
<strike>netcat, even ;)</strike>
OK, ignore that. I have no real solution.
Netcat is really simple. Thanks!!!