# Fourier Transform

I used the DFT in OpenCV to get the real and complex image values, but some of the values I get for the complex image are negative. Is this usual? What is the reason I get negative values?



Yes, that is normal: the DFT produces a matrix of complex numbers, and both the real and imaginary parts can be negative or positive. Before you display the result or save it to disk as an image, call the normalize function to map the values into the [0..255] range (integer, CV_8U) or the [0..1] range (float, CV_32F).


Thanks for the clear answer. I have a follow-up question: if I define an empty kernel on the image, is there any difference between applying the DFT to the whole image and applying it only to that specific kernel? That is, what would the pixel values inside the kernel be if I apply the DFT to the whole image versus to the kernel alone? Put another way: I want the magnitude and orientation of some values inside the image. Should I apply the DFT to the whole image and then look up the corresponding values in the result, or should I compute the DFT only on those specific values (for example, by assuming they are defined inside a kernel)? I would appreciate it if you could clarify this for me.

(2014-02-17 03:02:19 -0500)

I do not understand what you mean by "empty kernel" and "what would the pixel values inside the kernel be". You have an image and a kernel of the same size (speaking in the frequency domain), and filtering with the DFT proceeds like this: you compute the DFT components of your image (forward DFT), multiply them with the kernel (element by element), and carry out the inverse DFT to get the result (from which you can take magnitude or orientation). The kernel values are not changed in that process. I have never seen an example that works on sub-regions of an image (as you said, "some values inside the image"), but with a spatial filter it is possible. You can see more details in this sample: http://breckon.eu/toby/teaching/dip/opencv/lecture_demos/c++/butterworth_lowpass.cpp

(2014-02-17 03:15:49 -0500)

Let me explain what I meant in more detail. Assume I am interested in the output (magnitude and orientation) of the pixel located at row 4, column 8. Is it meaningful to compute the DFT for just that specific pixel? In terms of the DFT calculation it seems possible, but from an image-processing viewpoint, I do not know. Do you think neighbouring pixels have any effect on the output at each pixel in the end?

(2014-02-17 03:29:31 -0500)

In the spatial domain, yes, it is possible, but I need orientation and magnitude as output, which is impossible in the spatial domain.

(2014-02-17 03:33:33 -0500)

It is not entirely clear to me, but I think you can also satisfy your requirement in the spatial domain: you can use gradient images to compute the magnitude and orientation components at a specific location (or sub-region) of the image without using the whole image.

(2014-02-17 03:41:42 -0500)

Wow, gradient images sound interesting. Do you have any good book or website about them?

(2014-02-17 06:24:53 -0500)

You mean I should compute the gradient in the X and Y directions and use the arctangent to get the orientation?

(2014-02-17 06:45:00 -0500)

Yes. Gradient images are computed from the neighbouring pixels of each image pixel, so you do not have to use the whole image. In OpenCV, you can use the Sobel filter or the filter2D function with your favourite kernel.

(2014-02-17 07:06:59 -0500)
