OpenCV GPU Accelerated using Python

I am trying to run my Python code, which does image processing to find defects. It works fine on the CPU but takes a long time, so I want to move it to the GPU, and I was advised to use OpenCV's GPU-accelerated functionality. I have no clue how to start. I tried the following example, but it made no difference to the time taken to complete the task:

    import cv2
    import time

    t = time.time()

    # Read the image, then wrap it in a UMat so that OpenCV's transparent
    # API (T-API) can run supported operations through OpenCL
    img_original = cv2.imread("Red.jpg")
    imgUmat = cv2.UMat(img_original)

    # Mean-shift filtering and grayscale conversion on the UMat
    blur = cv2.pyrMeanShiftFiltering(imgUmat, 21, 49)
    gray_image = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)

    cv2.imshow('original', gray_image)
    print('Done in', time.time() - t)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

Output:

Done in 9.723002672195435
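As far as I understand, the UMat transparent API only gives a speedup when an OpenCL device is detected, and only for functions that actually have an OpenCL implementation (everything else silently falls back to the CPU). Here is the sanity check I put together from the cv2.ocl module to see whether OpenCL is even being used on my machine:

    import cv2

    # The T-API can only accelerate anything if an OpenCL device is present
    print('OpenCL available:', cv2.ocl.haveOpenCL())

    # Make sure OpenCL use is enabled before running UMat operations
    cv2.ocl.setUseOpenCL(True)
    print('OpenCL in use:', cv2.ocl.useOpenCL())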

It would be great if anyone could suggest where to start and explain how this even works.
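For completeness, the other route I have seen mentioned is the cv2.cuda module, which apparently requires OpenCV built from source with CUDA support (the standard pip wheels do not include it). This is only a sketch I assembled from the documentation, not something I have managed to run yet:

    import cv2

    # Returns 0 unless OpenCV was built with CUDA and a device is present
    print('CUDA devices:', cv2.cuda.getCudaEnabledDeviceCount())

    img = cv2.imread('Red.jpg')

    # Data must be explicitly uploaded to the GPU and downloaded back
    gpu_img = cv2.cuda_GpuMat()
    gpu_img.upload(img)
    gpu_gray = cv2.cuda.cvtColor(gpu_img, cv2.COLOR_BGR2GRAY)
    gray = gpu_gray.download()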