
How can I use CUDA in Python for fastNlMeansDenoisingColored?

asked 2020-09-12 08:58:53 -0600

heavyswat

Hello,

I have compiled OpenCV 4.5.0-pre with CUDA and I would like to convert my cv2 code to CUDA code. I couldn't find a Python CUDA tutorial here, but from searching the Q&A it seems I can use cv2.cuda for GPU acceleration.

However, when I tried to use "cv2.cuda.fastNlMeansDenoisingColored", this error came up.

AttributeError: module 'cv2.cuda' has no attribute 'fastNlMeansDenoisingColored'

The denoising documentation shows a CUDA implementation. How can I use it from Python? https://docs.opencv.org/master/d1/d79...


2 answers


answered 2020-09-15 05:18:02 -0600

cudawarped

updated 2020-09-16 04:01:10 -0600

The Python bindings for fastNlMeansDenoisingColored are not currently generated.

If you change CV_EXPORTS to CV_EXPORTS_W in the function's declaration and re-compile, the bindings will be generated, but I cannot guarantee they will work.
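
A quick way to confirm the binding exists after re-compiling (a minimal sketch; hasattr and help are standard Python, while the cv2.cuda.fastNlMeansDenoisingColored attribute is the one the rebuild is expected to add):

import cv2

# After rebuilding with CV_EXPORTS_W, the CUDA binding should be discoverable:
print(hasattr(cv2.cuda, 'fastNlMeansDenoisingColored'))  # expect True
help(cv2.cuda.fastNlMeansDenoisingColored)               # prints the argument order shown below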

To use the function, the arguments must be in the same order as described in the help; if not, the bindings for the correct type (Mat, UMat, GpuMat) will not be selected and you will get a confusing error message.

>>> help(cv2.cuda.fastNlMeansDenoisingColored)
Help on built-in function fastNlMeansDenoisingColored:

fastNlMeansDenoisingColored(...)
    fastNlMeansDenoisingColored(src, h_luminance, photo_render[, dst[, search_window[, block_size[, stream]]]]) -> dst
    .   @brief Modification of fastNlMeansDenoising function for colored images
    .
    .   @Param src Input 8-bit 3-channel image.
    .   @Param dst Output image with the same size and type as src .
    .   @Param h_luminance Parameter regulating filter strength. Big h value perfectly removes noise but
    .   also removes image details, smaller h value preserves details but also preserves some noise
    .   @Param photo_render float The same as h but for color components. For most images value equals 10 will be
    .   enough to remove colored noise and do not distort colors
    .   @Param search_window Size in pixels of the window that is used to compute weighted average for
    .   given pixel. Should be odd. Affect performance linearly: greater search_window - greater
    .   denoising time. Recommended value 21 pixels
    .   @Param block_size Size in pixels of the template patch that is used to compute weights. Should be
    .   odd. Recommended value 7 pixels
    .   @Param stream Stream for the asynchronous invocations.
    .
    .   The function converts image to CIELAB colorspace and then separately denoise L and AB components
    .   with given h parameters using FastNonLocalMeansDenoising::simpleMethod function.
    .
    .   @sa
    .      fastNlMeansDenoisingColored

Therefore both

import cv2
import numpy as np

src = cv2.cuda_GpuMat((np.random.random((1024, 1024, 3)) * 255).astype(np.uint8))
dst = cv2.cuda_GpuMat(src.size(), src.type())
cv2.cuda.fastNlMeansDenoisingColored(src, 2, 2, dst)

and

dst = cv2.cuda.fastNlMeansDenoisingColored(src, 2, 2)

should work.

As I said above, I have not tested that the results are correct, only the input and output functionality.
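
For completeness, a minimal round-trip sketch on random data (the sizes and parameter values here are arbitrary, and it assumes the re-compiled bindings above):

import cv2
import numpy as np

img = (np.random.random((1024, 1024, 3)) * 255).astype(np.uint8)  # stand-in for a real image

src = cv2.cuda_GpuMat()
src.upload(img)                                # copy host data to the GPU

# Positional order follows the help text: src, h_luminance, photo_render, ...
dst = cv2.cuda.fastNlMeansDenoisingColored(src, 2, 2)

result = dst.download()                        # copy the denoised result back to a NumPy array
print(result.shape, result.dtype)              # (1024, 1024, 3) uint8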


Comments

Hello,

Yes, the function works, but it seems the CPU and GPU results are different. However, you can adjust the parameters to get an optimal result!

Thank you

heavyswat (2020-09-21 15:09:18 -0600)
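
Regarding the difference mentioned in the comment above, a rough comparison sketch (the image path is a placeholder, and the two implementations are not expected to match exactly):

import cv2
import numpy as np

img = cv2.imread('image_path.jpg')

# CPU API: (src, dst, h, hColor, templateWindowSize, searchWindowSize)
cpu = cv2.fastNlMeansDenoisingColored(img, None, 2, 2, 7, 21)

# CUDA API: (src, h_luminance, photo_render[, dst[, search_window[, block_size]]])
gpu = cv2.cuda.fastNlMeansDenoisingColored(cv2.cuda_GpuMat(img), 2, 2).download()

# Mean absolute difference between the two results
print(np.abs(cpu.astype(np.int16) - gpu.astype(np.int16)).mean())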

answered 2020-09-15 13:55:51 -0600

heavyswat

updated 2020-09-15 14:04:33 -0600

Thank you for giving me some insight into my problem.

I have re-compiled with cudawarped's suggestion and now I can call the function via cv2.cuda.fastNlMeansDenoisingColored.

However,

src = cv2.cuda_GpuMat(cv2.imread('image_path.jpg'))
cv2.cuda.fastNlMeansDenoisingColored(src, None, 2, 2, 7, 21)

gives me the following error.

TypeError                                 Traceback (most recent call last)
<ipython-input-123-cd4f445e42f1> in <module>
----> 1 cv2.cuda.fastNlMeansDenoisingColored(src, None, 2, 2, 7, 21)

TypeError: Expected Ptr<cv::UMat> for argument 'src'

I could do

npTmp = np.random.random((1024, 1024)).astype(np.float32)
npMat1 = npMat2 = npMat3 = npDst = np.stack([npTmp, npTmp], axis=2)
cuMat1 = cuMat2 = cuMat3 = cuDst = cv2.cuda_GpuMat(npMat1)
%timeit cv2.cuda.gemm(cuMat1, cuMat2, 1, cuMat3, 1, cuDst, 1)

The second snippet works without any trouble. Also, the image exists at that path, because I can run the equivalent C++ code with the same image path.

Is there any other way I can get around the error?


Comments

I think you are passing the arguments in the wrong order; see my amended answer.

cudawarped (2020-09-16 04:01:44 -0600)
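
For reference, the failing call rewritten in the order the CUDA binding expects (the image path is the asker's placeholder, and this assumes the rebuilt binding accepts None for dst, as OpenCV output arguments normally do):

import cv2

src = cv2.cuda_GpuMat(cv2.imread('image_path.jpg'))

# CUDA order is (src, h_luminance, photo_render[, dst[, search_window[, block_size]]]),
# not the CPU order (src, dst, h, hColor, templateWindowSize, searchWindowSize).
dst = cv2.cuda.fastNlMeansDenoisingColored(src, 2, 2, None, 21, 7)
result = dst.download()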
