# OpenCV gpu::dft distorted image after inverse transform

I'm working on a GPU implementation of frequency-domain filtering of an image. My code works great on the CPU (I used something like this), but I have spent the whole day trying to make the same thing work on the GPU, without success. I want to apply a filter in the frequency domain, hence I need the full (complex) result of the forward transform. I have read that I need to pass two complex matrices (src and dst) to the forward dft to obtain the full spectrum (32FC2). However, I fail to obtain the same image after the inverse transform (the returned image is very distorted).

My code (with the closest result):

```cpp
gpu.img1 = gpu::GpuMat(imgHeight, imgWidth, CV_32FC2);
gpu.img2 = gpu::GpuMat(imgHeight, imgWidth, CV_32FC2);
gpu.img4 = gpu::GpuMat(imgHeight, imgWidth, CV_32FC1);
gpu.img5 = gpu::GpuMat(imgHeight, imgWidth, CV_8UC1);

Mat planes[] = {imageIn, Mat::zeros(imageIn.size(), CV_32FC1)};
merge(planes, 2, imageIn);

gpu::Stream stream;

gpu::dft(gpu.img1, gpu.img2, gpu.img1.size(), 0, stream);
gpu::dft(gpu.img2, gpu.img4, gpu.img1.size(), DFT_INVERSE | DFT_REAL_OUTPUT | DFT_SCALE, stream);
stream.enqueueConvert(gpu.img4, gpu.img5, CV_8U);

stream.waitForCompletion();
namedWindow("processed", 1); imshow("processed", imageOut); waitKey(1000);
```


Your help and suggestions are much appreciated.


Hmm, in contrast to the code from the link, you haven't set DFT_COMPLEX_OUTPUT | DFT_SCALE in your forward DFT. Could that be the problem?

( 2013-04-12 12:14:22 -0500 )


It took me several more hours, but I eventually solved the problem. There are two options:

1) real-to-complex (CV_32FC1 -> CV_32FC2) forward and complex-to-real (CV_32FC2 -> CV_32FC1) inverse

The forward transform produces a narrower spectrum matrix (newWidth = oldWidth/2 + 1, as explained in the documentation). It is not a CCS-packed matrix as in the case of the non-GPU dft; it is a plain complex matrix that exploits the fact that the spectrum of a real signal is symmetric. Any filter can therefore be applied here as well, with a speed-up from performing nearly half as many multiplications as in the second case. The flags to set are: forward -> 0, inverse -> DFT_INVERSE | DFT_REAL_OUTPUT | DFT_SCALE. This worked great for me. Remember to declare the GpuMats beforehand with the proper types (CV_32FC1 or CV_32FC2).

2) complex-to-complex (CV_32FC2 -> CV_32FC2) forward and complex-to-complex (CV_32FC2 -> CV_32FC2) inverse

The forward DFT produces a full-size spectrum (CV_32FC2). In this case the flags are: forward -> 0, inverse -> DFT_INVERSE. The result of the inverse transform is a complex matrix (CV_32FC2), so you need to split it and extract the desired result from channel zero. The data then needs to be scaled explicitly:

```cpp
double n, x;
minMaxIdx(imageAfterInverseDFT, &n, &x);
imageAfterInverseDFT.convertTo(imageAfterInverseDFT, CV_8U, 255.0 / x);
```


As simple as that! I have no idea why I didn't come across this earlier. I decided to post it anyway, as someone out there might have the same problem or need some guidance.


nice explanation!

( 2013-04-15 08:47:14 -0500 )
