Ask Your Question

How to send OpenCV video to ffmpeg

asked 2016-05-19 23:01:13 -0500

jxb

I'm trying to send a processed OpenCV Mat as video to ffmpeg. I'm encoding each frame, writing it to standard output, and then piping it to ffmpeg. Here is my code.


#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    std::vector<uchar> buff;

    if (!cap.isOpened()) {
        std::cout << "Video not accessible" << std::endl;
        return -1;
    }
    std::cout << "Video is accessible" << std::endl;

    while (true) {
        cap >> frame;

        // some processing

        // JPEG-encode the frame and write the bytes to stdout
        cv::imencode(".jpg", frame, buff);
        for (std::vector<uchar>::const_iterator i = buff.begin(); i != buff.end(); ++i)
            std::cout << *i;
    }
}
My input video resolution is 640x418, and I do not alter the video size.

After building, I use the following command to execute it:

./a.out | ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x418 -r 30 -i - -an -f mpegts udp://

and also this

./a.out | ffmpeg -i pipe:0 -f rawvideo -pix_fmt bgr24 -s 640x418 -r 30 -i - -an -f mpegts udp://

However, neither of these commands works.
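One thing worth noting: `-f rawvideo` tells ffmpeg to expect unencoded pixel bytes on the pipe, not JPEG data. A minimal sketch of the framing rawvideo assumes, given the `bgr24` and `640x418` flags above:

```python
# "-f rawvideo -pix_fmt bgr24 -s 640x418" makes ffmpeg read exactly
# width * height * 3 bytes from the pipe for every frame.
width, height = 640, 418
bytes_per_frame = width * height * 3  # bgr24: one byte per channel
print(bytes_per_frame)  # 802560
```

So for these command lines the program would need to write the raw Mat data (e.g. `std::cout.write((char*)frame.data, frame.total() * frame.elemSize());` in the C++ above) rather than `imencode` output.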

Kindly help.


2 answers


answered 2016-05-20 07:00:49 -0500

LBerger

I think your problem is not an OpenCV problem, but I hope this answer helps you.

In this program I capture the screen at 20 fps and send the images to ffmpeg. I don't use a pipe, but a socket. I use this command line:

ffmpeg -f rawvideo -pixel_format rgb24  -video_size 640x480 -i  "tcp://" -codec:v libx264 -pix_fmt yuv420p Video.mp4

to run ffmpeg, and then send data to port 2345 using a socket:

    sock->Write(b.GetData(), nb);

I don't encode the frames; they are sent as raw data.
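The approach above can be sketched in Python with stdlib sockets (hypothetical names; the original uses a C++ `sock->Write` call). A connected socket pair stands in for the `tcp://` connection ffmpeg listens on, and a tiny 64x48 frame is used so the demo fits in the socket buffer; the command above uses 640x480.

```python
import socket

# rawvideo framing: ffmpeg reads exactly width * height * 3 bytes per rgb24 frame
WIDTH, HEIGHT = 64, 48
FRAME_SIZE = WIDTH * HEIGHT * 3

# stand-in for one raw frame (a real program sends the Mat's pixel data)
frame_bytes = bytes(FRAME_SIZE)

# a connected socket pair stands in for ffmpeg's tcp:// input
sender, receiver = socket.socketpair()
sender.sendall(frame_bytes)   # the equivalent of sock->Write(b.GetData(), nb)
sender.close()

# the receiver accumulates bytes until one full frame has arrived
received = b""
while len(received) < FRAME_SIZE:
    chunk = receiver.recv(65536)
    if not chunk:
        break
    received += chunk
receiver.close()

print(len(received))  # 9216 = one complete 64x48 rgb24 frame
```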



Thanks. Now I am sending to a socket and reading from it with ffmpeg. However, I have one issue: the colors of the video are wrong. Any ideas?

jxb ( 2016-05-20 20:17:53 -0500 )

I found out the reason. I changed the pixel format from rgb24 to bgr24 and it worked.

jxb ( 2016-05-20 22:21:45 -0500 )

answered 2019-05-01 09:09:29 -0500

AbhiTronix

@jxb If you're planning to use Python, then you can use the VidGear Python library, which automates the process of pipelining OpenCV frames into FFmpeg on any platform. Here's a basic Python example:

# import libraries
from vidgear.gears import WriteGear
import cv2

# define (codec, CRF, preset) FFmpeg tweak parameters for writer
output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}

# open live webcam video stream on first index (i.e. 0) device
stream = cv2.VideoCapture(0)

# define writer with output filename 'Output.mp4'
writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)

# infinite loop
while True:

    # read frames
    (grabbed, frame) = stream.read()

    # check if frame is empty
    if not grabbed:
        # if True, break the infinite loop
        break

    # {do something with frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write modified frame to writer
    writer.write(gray)

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed, break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.release()

# safely close writer
writer.close()


You can check out VidGear Docs for more advanced applications and features.

Hope that helps!

Seen: 2,904 times

Last updated: May 01