Live video object detection
Hello.
I have a USB camera connected to a Linux machine, and I've configured this machine to record video with the following ffmpeg command:
ffmpeg -f v4l2 -framerate 30 -video_size 1280x720 -input_format mjpeg -i /dev/video0 -preset faster -pix_fmt yuv420p -b 5M -t 00:01:00 out.mp4
My USB camera supports up to 30 FPS in FHD. I've also connected it to a USB 3.0 port to allow a higher data rate.
The problem is that when I load this video into a Python program and run an NN-based object detection algorithm (SSD + MobileNetV2) frame by frame, moving objects in the frames appear blurrier than in the original video that I recorded. There can be many reasons for object detection failing on a video frame, but I presume it is the blurriness that causes the misses. (I also know that there is no perfect object detection algorithm which suits all cases.)
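For reference, my per-frame detection loop looks roughly like this (a simplified sketch; the model/config file names and the 0.5 confidence threshold are just placeholders for whatever you load):

import cv2

# Rough sketch of the per-frame detection loop.
# The model/config file names and the 0.5 threshold are placeholders.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",  # SSD+MobileNetV2 weights
                                    "ssd_mobilenet_v2.pbtxt")     # matching graph config
cap = cv2.VideoCapture("out.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # SSD expects a fixed-size input; 300x300 is the usual size for this model
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()          # shape: 1 x 1 x N x 7
    for det in detections[0, 0]:
        if float(det[2]) < 0.5:         # skip low-confidence detections
            continue
        x1, y1, x2, y2 = [int(v) for v in det[3:7] * [w, h, w, h]]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:            # Esc to stop early
        break

cap.release()
cv2.destroyAllWindows()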
So, could anyone give me pointers on what to adjust so that moving objects are better detected frame by frame? 1) a better-quality USB webcam? 2) changing the ffmpeg command line? 3) adjusting the OpenCV cap.read() side?
Thanks in advance
So, in the end, you're saying that a video played back through OpenCV's VideoCapture is more blurry than through ffmpeg?
Yes, if the target is moving, it is blurry. The target is approximately 5 m away from the camera.
Can you post an example image comparing the ffmpeg-recorded video frame vs. a frame acquired directly from the webcam?
Hmm, I don't think it's about ffmpeg-recorded frames vs. frames acquired directly from the webcam. But here is one of the scenes from the video. (Sorry, I had to crop 2/3 of the image for privacy reasons.) The resolution is set to 640x720.
https://imgur.com/MPkB0Xs
I am wondering if this is an issue of resolution. If I increase the resolution of the recorded video, I presume it would degrade the frame rate. Is there a way to increase the resolution and still maintain the frame rate at 30 FPS? (Yes, the camera is capable of recording at that rate.)
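For what it's worth, this is roughly how I'd check whether the camera actually delivers 1920x1080 at 30 FPS through OpenCV (the property names are standard OpenCV ones; whether the camera honours the MJPG/size/FPS requests is hardware- and driver-dependent):

import time
import cv2

# Sketch: request FHD MJPEG at 30 FPS and measure what the camera actually delivers.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)                         # /dev/video0
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))   # request MJPEG instead of raw YUYV
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

n, t0 = 0, time.time()
while n < 300:                                                  # grab ~10 s worth of frames at 30 FPS
    ok, frame = cap.read()
    if not ok:
        break
    n += 1

elapsed = time.time() - t0
print("reported size:", int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
      "x", int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
print("measured FPS: %.1f" % (n / elapsed))
cap.release()

If the measured FPS comes back well below 30 at the higher resolution, that would point to a capture format/bandwidth limitation rather than anything in the ffmpeg command.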