
How to use cudacodec + blobFromImage

asked 2019-12-30 05:29:26 -0600

VectorVP

updated 2019-12-30 07:19:52 -0600

The problem is that I can't pass a frame from cv2.cudacodec (a cv2.cuda_GpuMat) directly to cv2.dnn.blobFromImage. Here is part of my implementation (I skipped many parts of the code because they are not necessary):

...
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
ln = net.getLayerNames()
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

# the backend/target only need to be set once, not on every frame
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

vs2 = cv2.cudacodec.createVideoReader(stream_name)
while True:
    (grabbed, frame1) = vs2.nextFrame()  # frame1 is a cv2.cuda_GpuMat
    frame = cv2.cuda.resize(frame1, (416, 416))

    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    ...

Separately, cudacodec and blobFromImage (with VideoCapture) work fine. It also works if I do:

frame2 = frame_res.download()  # copy the GpuMat back to the host
frame = frame2[:, :, :3]       # drop the alpha channel (cudacodec decodes to BGRA)
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)

However, if I pass the frame to blobFromImage directly, without calling .download(), an error occurs:

Traceback (most recent call last):
  File "yolo_recog.py", line 559, in <module>
    main()
  File "yolo_recog.py", line 552, in main
    args.one_video
  File "yolo_recog.py", line 455, in testsys
    dir_to_images)
  File "yolo_recog.py", line 181, in video_processing
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
TypeError: Expected Ptr<cv::UMat> for argument 'image'

After that I changed blobFromImage to blobFromImages and got another error:

Traceback (most recent call last):
  File "yolo_recog.py", line 559, in <module>
    main()
  File "yolo_recog.py", line 552, in main
    args.one_video
  File "yolo_recog.py", line 455, in testsys
    dir_to_images)
  File "yolo_recog.py", line 181, in video_processing
    blob = cv2.dnn.blobFromImages(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
SystemError: <built-in function blobFromImages> returned NULL without setting an error

How can I pass a frame from cudacodec to blobFromImage without downloading it to the CPU?


Comments

IMHO, you could do everything that blobFromImage() does on your own (using the cv2.cuda functions).

But the next culprit would be net.setInput(), and it also looks like that won't work with a GpuMat so far.

berak (2019-12-30 07:28:52 -0600)

1 answer


answered 2019-12-30 06:49:53 -0600

The best person to ask is @Yashas. That said, for the reasons explained below, this will not currently work.

From a quick inspection of the source code, it doesn't look like any of the dnn functions are built to work with GpuMat. For example, blobFromImages() always retrieves a Mat from an InputArrayOfArrays and then performs the resizing, cropping, etc. on the host, and predict() only works with Mat. In C++ these functions may take a GpuMat as input (I need to check this, but in theory it should not be a problem due to the TAPI), but even if they do, they will download it to the host before performing any processing.

Secondly, because the dnn module is not in the CUDA namespace, Python bindings for processing GpuMat are not generated, meaning the function call falls through to the UMat overload, giving you the

TypeError: Expected Ptr<cv::UMat> for argument 'image'

you observed.


Comments

So there is no way to use cv2.dnn with the GpuMat type? Is cv2.DNN_BACKEND_CUDA the only way to use GPU computation (with the frame having to come from the CPU)?

VectorVP (2019-12-30 06:55:41 -0600)

There may be a way to pass a GpuMat, but all the pre-processing plus the prediction method use Mats, so this would just save you calling download() before passing it to the relevant function.

The CUDA DNN backend is brand new, whereas the DNN module, which as far as I can tell was built to use Mats, has been around for a while. This is why everything currently goes through the host. Hopefully this will change in the future, but it is not guaranteed, since Intel is now heavily involved in the development of OpenCV and the CUDA modules have been moved from the main repo to opencv_contrib. I would definitely recommend asking @Yashas, as he built the CUDA DNN backend.

cudawarped (2019-12-30 07:16:20 -0600)

@VectorVP. There are 3 things missing: VideoCapture, Read() and GpuMat.

supra56 (2019-12-30 08:38:10 -0600)

You cannot call (grabbed, frame1) = vs2.nextFrame() without vs2.Read().

supra56 (2019-12-30 08:58:49 -0600)

@supra56 you are confusing cv2.VideoCapture and cv2.cudacodec.createVideoReader; the latter has a nextFrame() method, not a Read() method. VectorVP's problem is not decoding the video. It is that the decoded frame is stored on the GPU and, due to the API, must be downloaded to the host and then uploaded to the GPU again to run inference, which is extremely inefficient. Depending on the GPU/CPU, though, it should still be quicker than running the inference on the CPU.

cudawarped (2019-12-30 09:13:03 -0600)

There is no GpuMat support yet because GpuMat does not support arbitrary dimensions. It's possible to allow single-image inference with GpuMat, but once GpuMat gains support for arbitrary dimensions, an API change would be required to support it the way cv::Mat is currently supported. The DNN module operates entirely on packed tensors: 1-D memory that is split into N dimensions artificially. GpuMat needs this kind of arbitrary-dimension capability to ensure that cv::Mat and cuda::GpuMat inputs/outputs have semantically identical APIs in the DNN module.

Yashas (2019-12-30 10:49:09 -0600)

@cudawarped, @Yashas Thank you for your responses, I appreciate it! @Yashas, is GpuMat support planned for the near future?

VectorVP (2019-12-30 13:47:26 -0600)

@VectorVP Yes, it is planned. It's just an hour's work once GpuMat gets support for arbitrary dimensions.

Yashas (2019-12-30 22:24:18 -0600)

@Yashas Is there also a plan to support CUDA streams for async inference on the GPU?

cudawarped (2020-01-03 02:13:10 -0600)

@cudawarped It's on my list.

Yashas (2020-01-03 22:42:37 -0600)

Stats

Asked: 2019-12-30 05:21:06 -0600

Seen: 3,066 times

Last updated: Dec 30 '19