OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation (http://www.opencv.org), 2012-2018.

Help in understanding the theory to select optimum fringe frequency in three phase fringe structured light
http://answers.opencv.org/question/208657/help-in-understanding-the-theory-to-select-optimum-fringe-frequency-in-three-phase-fringe-structured-light/

Hi guys,
I'm hoping someone can help with the following.
I am trying to better understand the theory behind what makes the best fringe pattern for three-phase structured light. Where I am struggling is how to select the optimal fringe frequency. This is probably best explained with an example and an image of a pattern.
Let's say I have a fringe pattern with a fringe frequency of 16. See this example: ![C:\fakepath\Pattern_0.png](/upfiles/15498808003308475.png)
Now suppose I change the fringe frequency to 24. What effect does this have? The theory says the measurement will be more accurate, but I don't understand why.
Any help clearing up this issue would be greatly appreciated. (Please don't just point me to a paper on the subject; I have yet to come across one that simply explains why.)
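For intuition, the standard three-step phase recovery can be sketched in NumPy (a minimal illustration of my own, not from the question; the pattern generator and function names are assumptions). The usual accuracy argument is that camera noise produces a roughly constant phase error per fringe period, so with more periods across the image a fixed phase error corresponds to a smaller spatial fraction, and the unwrapped-phase (hence depth) error shrinks roughly as 1/frequency, at the cost of a harder unwrapping step:

```python
import numpy as np

def three_phase_patterns(width, height, freq):
    """Generate the three phase-shifted fringe images (shifts -120, 0, +120 degrees)
    for a given fringe frequency = number of fringe periods across the width."""
    x = np.arange(width) / float(width)
    carrier = 2 * np.pi * freq * x
    rows = np.ones((height, 1))                 # broadcast the 1-D fringe to all rows
    return [rows * (0.5 + 0.5 * np.cos(carrier + s))
            for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]

def wrapped_phase(i1, i2, i3):
    """Standard three-step recovery; result is the carrier phase wrapped to (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

With `freq=24` instead of 16, the same wrapped-phase noise spans 16/24 of the spatial extent it did per period, which is the usual accuracy argument in a nutshell.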
JTJT3D, Mon, 11 Feb 2019 04:38:41 -0600
http://answers.opencv.org/question/208657/

Signal in frequency domain with OpenCV dft
http://answers.opencv.org/question/201084/signal-in-frequency-domain-with-opencv-dft/

I am experimenting with `cv::dft`: a 1 Hz sine signal is generated and displayed in the frequency domain, but for some reason its maximum component is not at 1 Hz. My code is the following:
    const int FRAME_RATE = 20; //!< sampling rate in [Hz]
    const int WINDOW_SIZE = 256;
    double len = double(WINDOW_SIZE)/double(FRAME_RATE); // signal length in seconds
    double Fb = 1./len; // frequency bin in Hz

    // Constructing the frequency vector
    std::vector<double> f;
    double freq_step = 0;
    for (int i = 0; i < WINDOW_SIZE; ++i)
    {
        f.push_back(freq_step);
        freq_step += Fb;
    }

    // Create the time vector
    std::vector<double> t;
    double time_step = 0;
    for (int i = 0; i < WINDOW_SIZE; ++i)
    {
        t.push_back(time_step);
        time_step += 1./double(FRAME_RATE);
    }

    // Creating a sine signal with 1 Hz period
    std::vector<double> y;
    for (auto val : t)
    {
        y.push_back(sin(1*FRAME_RATE*val));
    }

    // Compute the DFT
    cv::Mat fd;
    cv::dft(y, fd, cv::DFT_REAL_OUTPUT);
    fd = cv::abs(fd);
If I plot the signal in the time and frequency domains, `plot(t, y); plot(f, fd)`, the result is the following:
[![enter image description here][1]][1]
The time-domain signal is good, but the frequency-domain signal has its maximum around 6 Hz instead of 1 Hz.
Where is my mistake?
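For comparison, here is the same experiment in NumPy (my own illustration, not the code above). Two things are worth checking in the original. First, `sin(1*FRAME_RATE*val)` is a sine with angular frequency 20 rad/s, i.e. about 3.18 Hz, not 1 Hz; the argument needs a 2π factor, `sin(2*pi*f*t)`. Second, `cv::dft` on real input produces CCS-packed output (interleaved real/imaginary parts) unless `DFT_COMPLEX_OUTPUT` is requested, so plotting it directly against `f` roughly doubles the apparent peak position, which would turn ~3.18 Hz into the observed ~6 Hz:

```python
import numpy as np

FRAME_RATE = 20      # sampling rate [Hz]
WINDOW_SIZE = 256

t = np.arange(WINDOW_SIZE) / float(FRAME_RATE)
y = np.sin(2 * np.pi * 1.0 * t)                  # a true 1 Hz sine: sin(2*pi*f*t)

spectrum = np.abs(np.fft.rfft(y))                # unpacked one-sided spectrum
freqs = np.fft.rfftfreq(WINDOW_SIZE, d=1.0 / FRAME_RATE)
peak_hz = freqs[np.argmax(spectrum)]             # lands on the bin nearest 1 Hz
```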
[1]: https://i.stack.imgur.com/SzanR.png

t3rb3d, Sat, 13 Oct 2018 08:01:16 -0500
http://answers.opencv.org/question/201084/

Fourier Transforms: Depict an image's variance?
http://answers.opencv.org/question/188987/fourier-transforms-depict-an-images-variance/

I'm trying to understand Fourier transforms. If I convert a spatial-domain image to the frequency domain using a Fourier transform:
- Does a Fourier-transformed image depict the variations (intensity/colour/brightness changes) across an image?
- In OpenCV, when I perform the FT I get a Mat/image back. Do the coordinates in that FT image correspond exactly to the original spatial image? For example, can the pixel at location `[10,100]` in the original image be found at location `[10,100]` in the FT image, and does that pixel value in the FT image represent the variance of that pixel?

sazr, Mon, 09 Apr 2018 08:48:16 -0500
http://answers.opencv.org/question/188987/

Fourier Spectrum
http://answers.opencv.org/question/133116/fourier-spectrum/

Hello, I'm new to OpenCV. I've done the Fourier transform of an image and got its spectrum.
I would like to remove the frequency components (from the spectrum) that lie outside a circle of diameter 100. I don't think my code does what I want; thank you in advance for your help.
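The described operation — keeping only components inside a circle of diameter 100 around the centered spectrum — can be written directly as a mask on the complex spectrum. A minimal NumPy sketch of my own (not the code below, which only smooths the magnitude display with `filter2D`):

```python
import numpy as np

def circular_lowpass(img, radius=50):
    """Zero all frequency components farther than `radius` from the spectrum
    center (an ideal low-pass; a circle of diameter 100 -> radius 50)."""
    f = np.fft.fftshift(np.fft.fft2(img))            # center the DC term
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    f[~mask] = 0                                     # discard everything outside the circle
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

The key point is that the mask must be applied to the complex spectrum before the inverse DFT; filtering the log-magnitude image only changes the display.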
Here's my code:
    #include "stdafx.h"
    #include "opencv2/core/core.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include <iostream>
    using namespace cv;
    using namespace std;

    int main(int argc, char ** argv)
    {
        const char* filename = argc >= 2 ? argv[1] : "lena.bmp";
        Mat I = imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
        if (I.empty())
            return -1;

        Mat padded; // expand input image to optimal size
        int m = getOptimalDFTSize(I.rows);
        int n = getOptimalDFTSize(I.cols);
        copyMakeBorder(I, padded, 0, m - I.rows, 0, n - I.cols, BORDER_CONSTANT, Scalar::all(0));
        Mat planes[] = { Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F) };
        Mat complexI;
        merge(planes, 2, complexI);  // add to the expanded image another plane with zeros
        dft(complexI, complexI);     // this way the result may fit in the source matrix
        split(complexI, planes);     // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
        magnitude(planes[0], planes[1], planes[0]); // planes[0] = magnitude
        Mat magI = planes[0];

        magI += Scalar::all(1);      // switch to logarithmic scale
        log(magI, magI);

        // Crop the spectrum if it has an odd number of rows or columns
        magI = magI(Rect(0, 0, magI.cols & -2, magI.rows & -2));
        int cx = magI.cols / 2;
        int cy = magI.rows / 2;
        Mat q0(magI, Rect(0, 0, cx, cy));   // Top-Left - create a ROI per quadrant
        Mat q1(magI, Rect(cx, 0, cx, cy));  // Top-Right
        Mat q2(magI, Rect(0, cy, cx, cy));  // Bottom-Left
        Mat q3(magI, Rect(cx, cy, cx, cy)); // Bottom-Right

        Mat tmp;                            // swap quadrants so DC ends up centered
        q0.copyTo(tmp);
        q3.copyTo(q0);
        tmp.copyTo(q3);
        q1.copyTo(tmp);
        q2.copyTo(q1);
        tmp.copyTo(q2);

        normalize(magI, magI, 0, 1, CV_MINMAX);
        imshow("Input Image", I);
        imshow("spectrum magnitude", magI);

        /*________________________________________________________________________________________________*/
        Mat dst;
        Mat kernel;
        Point anchor;
        double delta;
        int ddepth;
        int kernel_size;
        const char* window_name = "filter2D Demo";
        int c;

        /// Create window
        namedWindow(window_name, CV_WINDOW_AUTOSIZE);

        /// Initialize arguments for the filter
        anchor = Point(-1, -1);
        delta = 0;
        ddepth = -1;

        /// Loop - filters the image with a different kernel size every 0.5 seconds
        int ind = 0;
        while (true)
        {
            c = waitKey(500);
            /// Press 'ESC' to exit the program
            if ((char)c == 27)
            {
                break;
            }
            /// Update kernel size for a normalized box filter
            kernel_size = 10 + 10 * (ind % 10);
            ind++;
            if (kernel_size == 100) { break; }
            kernel = Mat::ones(kernel_size, kernel_size, CV_32F) / (float)(kernel_size * kernel_size);

            /// Apply filter
            filter2D(magI, dst, ddepth, kernel, anchor, delta, BORDER_DEFAULT);
            imshow(window_name, dst);

            /*_______________________________________________________________________________________*/
            // Calculating the inverse DFT
            Mat inverseTransform;
            dft(complexI, inverseTransform, DFT_INVERSE | DFT_REAL_OUTPUT);
            normalize(inverseTransform, inverseTransform, 0, 1, CV_MINMAX);
            imshow("Reconstructed", inverseTransform);
            waitKey();
            return 0;
        }
    }

EOEngineer, Thu, 09 Mar 2017 11:52:06 -0600
http://answers.opencv.org/question/133116/

How to implement a butterworth filter in OpenCV
http://answers.opencv.org/question/75582/how-to-implement-a-butterworth-filter-in-opencv/

I want to implement a function which takes an image and applies a band-pass Butterworth filter to it, but I cannot figure out how to compute the DFT of an image with OpenCV and apply a filter to it.
Some form of help would be appreciated here.
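For orientation, the whole pipeline can be sketched in NumPy (my own illustration under assumed names, not an authoritative OpenCV recipe): forward DFT, multiply by a Butterworth band-pass gain built from each coefficient's distance to the spectrum center (the product of a high-pass at `d_low` and a low-pass at `d_high`), then inverse DFT:

```python
import numpy as np

def butterworth_bandpass(img, d_low, d_high, order=2):
    """Band-pass an image in the frequency domain with a Butterworth mask:
    the product of a high-pass at d_low and a low-pass at d_high."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    d = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    d = np.maximum(d, 1e-9)                       # avoid division by zero at DC
    lowpass = 1.0 / (1.0 + (d / d_high) ** (2 * order))
    highpass = 1.0 / (1.0 + (d_low / d) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * lowpass * highpass)))
```

In OpenCV the same idea would mean building a two-channel gain image and applying it to the `dft` output with `mulSpectrums` before the inverse transform.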
My implementation so far:
    Mat bandpass(double d0, double n, int wy, int wx, int cx, int cy)
    {
        cv::Mat_<cv::Vec2f> pf(wy, wx);
        for (int y = 0; y < wy; ++y) {
            for (int x = 0; x < wx; ++x) {
                const double d = std::sqrt(double((x - cx) * (x - cx)) + double((y - cy) * (y - cy)));
                if (d == 0) { // avoid division by zero
                    pf(y, x)[0] = 0;
                } else {
                    pf(y, x)[0] = 1.0 / (1.0 + std::pow(d0 / d, 2.0 * n)); // Butterworth high-pass term
                }
                pf(y, x)[1] = pf(y, x)[0]; // same gain on the imaginary plane
            }
        }
        return pf;
    }

215, Fri, 06 Nov 2015 15:03:54 -0600
http://answers.opencv.org/question/75582/

Set power line frequency for camera
http://answers.opencv.org/question/74120/set-power-line-frequency-for-camera/

I use Python and OpenCV to build a stereo tracker using two cameras.
The question is how to set the power line frequency in OpenCV 3.0.0 (or any other version). I know that the camera I use (Microsoft LifeCam HD-3000) has this property; it can be set from v4l2 on Linux, and on Windows I can use Skype to set it once, but this is ugly.

madanh, Fri, 23 Oct 2015 11:20:09 -0500
http://answers.opencv.org/question/74120/

exclude moving objects in camera frame
http://answers.opencv.org/question/67160/exclude-moving-objects-in-camera-frame/

I have some LEDs which blink at half the frame rate of the camera, so ideally the LED is on in one frame and off in the next. I need to find the positions of the LEDs in the camera feed. I capture N frames in grayscale mode, compute the absolute difference between consecutive frames, threshold the resulting image, and look for contours. This algorithm works fine when nothing else is moving in the camera feed; if some object is moving, it fails. Can anyone suggest how to exclude moving objects in this algorithm? If there is a completely different and better way to detect the positions of the LEDs, please suggest that as well.
Here is my code to detect the LED positions in the camera frame (where framesToProcess is the N quickly captured frames and contourCenter holds the indexed Cartesian coordinates that are candidate LED points):
    void getContourCenters(vector<Mat> &framesToProcess, vector<pointI>& contourCenter)
    {
        size_t j = 0;
        for (int i = 1; i < framesToProcess.size(); i++)
        {
            Mat tempDifferenceImage, tempThresholdImage;
            vector< vector<Point> > contours;
            vector<Vec4i> hierarchy;
            Rect objectBoundingRectangle = Rect(0, 0, 0, 0);
            absdiff(framesToProcess[i - 1], framesToProcess[i], tempDifferenceImage);
            threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
            blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
            findContours(tempThresholdImage, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
            for (int k = 0; k < contours.size(); ++k)
            {
                objectBoundingRectangle = boundingRect(contours[k]);
                int xpos = objectBoundingRectangle.x + objectBoundingRectangle.width / 2;
                int ypos = objectBoundingRectangle.y + objectBoundingRectangle.height / 2;
                contourCenter.push_back(mp(xpos, ypos, j++));
            }
        }
    }
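One way to fold the blink frequency itself into the detection (an idea sketch, not the asker's code; the names and thresholds are illustrative): look at each pixel's intensity over the N frames and keep only pixels whose temporal spectrum peaks at the blink frequency, i.e. half the frame rate. Moving objects create broadband frame-to-frame changes rather than one strong tone, so they tend to be rejected:

```python
import numpy as np

def blink_mask(frames, fps, blink_hz, tol_hz=1.0, power_ratio=3.0):
    """frames: array of shape (N, H, W), grayscale.
    Keep pixels whose temporal spectrum peaks near blink_hz."""
    stack = frames.astype(np.float64) - frames.mean(axis=0)  # remove static background
    spectrum = np.abs(np.fft.rfft(stack, axis=0))            # per-pixel temporal spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = np.abs(freqs - blink_hz) <= tol_hz                # bins near the blink tone
    in_band = spectrum[band].max(axis=0)
    out_band = spectrum[~band].max(axis=0) + 1e-9
    return in_band > power_ratio * out_band
```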
Prem, Mon, 27 Jul 2015 20:35:48 -0500
http://answers.opencv.org/question/67160/

LED Blinking Frequency
http://answers.opencv.org/question/65545/led-blinking-frequency/

[C:\fakepath\frame12.jpg](/upfiles/14359978483977215.jpg)

I decided not to update my old post because it already has so many comments. Here is the program I wrote to detect blinking LEDs. It works so-so when the surroundings are a bit dark and doesn't work at all when it's bright. I have been given some suggestions to improve the efficiency, like pre-allocating, but I think I need to work on the logic as well. Kindly guide me: how can I detect the position of a blinking LED? The camera frame rate is 90 fps, the blinking frequency is 45 Hz, and there is more than one LED in the frame. Attached are two frames in a bright-light condition. Here is the logic:
1. Set up the camera parameters to make it 90 fps.
2. Quickly capture 30 frames and compute the difference and the threshold of the difference of the frames.
3. Find contour centers in the threshold image.
4. Organize the contours in an R*-tree and check the frequency of contour centers in a user-defined neighborhood.
5. If the count falls within the frequency and tolerance range, predict the point to be an LED light.
Kindly guide me to modify this code so that it works in bright-light conditions and the success rate of detecting LEDs is high. [C:\fakepath\frame11.jpg](/upfiles/14359976778402195.jpg)
As suggested, the question seems to be too long. I am trying to get the difference between two frames, threshold the difference, check for contours, and then check the frequency of the contour centers to detect the light. The following function accepts N images and does as explained. I need this to work in all light scenarios; it is working in a low-light environment only. Kindly guide me on how I can modify the code to make it work in any scenario.
    const static int SENSITIVITY_VALUE = 50;
    const static int BLUR_SIZE = 6;

    void getThresholdImage(vector<Mat> &framesToProcess, vector<Mat> &thresholdImages, vector<Mat> &differenceImages)
    {
        vector<Mat> grayImage;
        for (int i = 0; i < framesToProcess.size(); i++)
        {
            Mat tempMatImage, tempGrayImage;
            resize(framesToProcess[i], tempMatImage, Size(600, 800));
            cvtColor(tempMatImage, tempGrayImage, COLOR_BGR2GRAY);
            grayImage.push_back(tempGrayImage);
            if (i > 0)
            {
                Mat tempDifferenceImage, tempThresholdImage;
                absdiff(grayImage[i - 1], grayImage[i], tempDifferenceImage);
                imshow("difference Image", tempDifferenceImage);
                //erode(tempDifferenceImage, tempDifferenceImage, Mat(), Point(-1, -1), 2, BORDER_CONSTANT);
                differenceImages.push_back(tempDifferenceImage);
                threshold(tempDifferenceImage, tempThresholdImage, SENSITIVITY_VALUE, 255, THRESH_BINARY);
                imshow("before blur", tempThresholdImage);
                blur(tempThresholdImage, tempThresholdImage, Size(BLUR_SIZE, BLUR_SIZE));
                imshow("After BlurThreshold Image", tempThresholdImage);
                thresholdImages.push_back(tempThresholdImage);
            }
        }
    }
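Since the failure mode is bright scenes, one brightness-independent cue (a sketch of my own, not part of the program above) is the sign pattern of consecutive frame differences: an LED blinking at half the frame rate flips the sign of the frame-to-frame difference on every step, regardless of absolute intensity, while static pixels and smoothly moving edges tend not to:

```python
import numpy as np

def alternation_score(frames):
    """frames: (N, H, W).  Fraction of consecutive frame-to-frame differences
    whose sign flips; ~1.0 at pixels blinking at half the frame rate."""
    d = np.diff(frames.astype(np.float64), axis=0)   # (N-1, H, W)
    flips = np.signbit(d[1:]) != np.signbit(d[:-1])  # sign change between steps
    return flips.mean(axis=0)
```

Thresholding this score (e.g. at 0.8) could replace the fixed SENSITIVITY_VALUE, since it does not depend on how bright the scene is overall.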
Prem, Sat, 04 Jul 2015 03:15:58 -0500
http://answers.opencv.org/question/65545/

Infrared Led tracking with frequency
http://answers.opencv.org/question/64922/infrared-led-tracking-with-frequency/

Hello everybody,
First of all, please bear with me: I'm new to OpenCV, and I'm sorry for my English, as I'm a French student.
In my project, I have 3 circles of 7 LEDs which are moving. I have a camera above them, and each circle blinks at a different frequency (so 3 different frequencies). I would like to track each circle and then recognize its frequency.
I'm using Community Core Vision as software, so I already detect blobs. But I would like to ignore all the other noise around them and track only the LEDs. That means if something is moving next to the LEDs, it needs to be ignored.
I checked on the internet and read that it is easy to detect LEDs and ignore other objects by using a filter tuned to the frequency I want to detect.
Is that true? If so, do you have any advice for my project? What library should I use? Is there a program which already exists for this?
Thank you for your answers, and please feel free to comment.

Boyadjian, Fri, 26 Jun 2015 11:57:10 -0500
http://answers.opencv.org/question/64922/

Detect Multiple LEDs and their flashing frequency
http://answers.opencv.org/question/63358/detect-multiple-leds-and-their-flashing-frequency/

[C:\fakepath\code.png](/upfiles/14340061982184972.png) ![image description](/upfiles/14337422796199242.jpg) ![image description](/upfiles/14337421976894176.jpg)

Hi All,
I am new to OpenCV. I am trying to detect the position and frequency of multiple LEDs using OpenCV. Kindly guide me on how I can achieve this. I couldn't use the HSV conversion method because there may be other lights brighter than the LEDs as well. Here is the basic logic:
1. The LEDs flash at predefined rates. My camera has been set to 90 fps and the LEDs have frequencies of 90 Hz, 45 Hz, 30 Hz and 15 Hz (these frequencies and the camera frame rate are known parameters).
2. Now I need to find the location of these lights within the camera frame in any lighting condition, be it night, where the light is the brightest in the room, or sunlight, where it may not be the brightest object in the scene.
I would appreciate the help.

Prem, Sat, 06 Jun 2015 02:16:30 -0500
http://answers.opencv.org/question/63358/

spatial frequency corresponding to the image plane
http://answers.opencv.org/question/62599/spatial-frequency-corresponding-to-the-image-plane/

Hi everybody,
I am doing image filtering in the frequency domain, and I need to find the frequency of each image pixel. The only thing I know about the image is its size, for example (225, 225).
There is a function in Python, `np.fft.fftfreq`, to calculate frequencies, but two questions arise here:
1. Using this function, I get the frequency in the x and y directions separately, so two numbers, but I need one number for each pixel (a matrix the same size as the image, filled with the frequencies corresponding to the image pixels).
2. It starts with zero! But as I understand it, after shifting the FFT the DC component is in the middle, so why is zero at the beginning?
If anybody knows another way to calculate the frequencies, you are more than welcome. :)
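Both points can be handled together (a short sketch; the function name is mine): take `fftfreq` along each axis, broadcast the two axes into a radius — giving one spatial frequency per pixel — and `fftshift` the result so zero frequency sits in the middle, matching a shifted spectrum. `fftfreq` starts at zero simply because the unshifted DFT stores the DC term first; the shift is applied afterwards:

```python
import numpy as np

def radial_freq(h, w):
    """One spatial-frequency magnitude per pixel: combine the x and y
    frequency axes from fftfreq into a radius, then shift DC to the center."""
    fy = np.fft.fftfreq(h)[:, None]    # cycles/pixel along y, as a column
    fx = np.fft.fftfreq(w)[None, :]    # cycles/pixel along x, as a row
    f = np.sqrt(fx ** 2 + fy ** 2)     # broadcasts to shape (h, w)
    return np.fft.fftshift(f)          # match a shifted spectrum
```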
fery, Tue, 26 May 2015 03:05:06 -0500
http://answers.opencv.org/question/62599/

Image filtering in frequency domain python
http://answers.opencv.org/question/62324/image-filtering-in-frequency-domain-python/

Hi everybody,
I am new to programming and I would like to apply a filter to an image in the frequency domain. Actually, it's from a paper and I want to re-implement it. This is the formula: im_out = (1 + 5*((1 - e^-f)/f)) * im_in
Here is my code:
    im = cv2.imread('lena.jpg', 0)
    x = im.shape[0]
    y = im.shape[1]
    fft = np.fft.fft2(im)
    fshift = np.fft.fftshift(fft)
    freqx = np.fft.fftfreq(fft.shape[0])
    freqy = np.fft.fftfreq(fft.shape[1])
    zr = freqx == 0
    freqx[zr] = 0.000000001
    for i in xrange(x):
        for j in xrange(y):
            filter = 1 + (5*(1 - np.e**(-1*(np.sqrt((freqx)**2 + (freqy)**2))))/(np.sqrt((freqx)**2 + (freqy)**2)))
            fim = fshift * filter
    fishift = np.fft.ifftshift(fim)
    imback = np.fft.ifft2(fishift)
    imback = np.uint8(np.real(imback))
    plt.subplot(221)
    plt.imshow(im, cmap='gray')
    plt.subplot(222)
    plt.imshow(imback, cmap='gray')
    plt.show()
But I get the image back without any visible changes; it should act as a kind of low-pass filter.
Now I am wondering if it is correct to use np.fft.fftfreq to find the "spatial frequency in the image plane",
or whether I should use the distance from the center as "f".
In the paper they say "f" is the spatial frequency of the image plane.
Could anybody help me, please?
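For what it's worth, here is a vectorized version of the formula (my own sketch; whether the paper's "f" means cycles/pixel from `fftfreq` or plain distance from the spectrum center is exactly the open question, so this assumes the `fftfreq` reading). Note two issues in the code above: the double loop recomputes the identical whole-array `filter` on every iteration, and `(freqx)**2 + (freqy)**2` only lines up elementwise because the image is square; broadcasting a column against a row gives the intended per-pixel radius:

```python
import numpy as np

def paper_filter(im):
    """im_out = (1 + 5*(1 - exp(-f)) / f) * im_in applied in the frequency
    domain, with f the radial spatial frequency of each DFT coefficient."""
    fshift = np.fft.fftshift(np.fft.fft2(im))
    fy = np.fft.fftshift(np.fft.fftfreq(im.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(im.shape[1]))[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[f == 0] = 1e-9                 # the gain's limit at DC is 1 + 5 = 6
    gain = 1 + 5 * (1 - np.exp(-f)) / f
    return np.real(np.fft.ifft2(np.fft.ifftshift(fshift * gain)))
```

Because f is at most ~0.71 cycles/pixel here, the gain only varies between roughly 4.6 and 6, so the visible effect is mostly a brightness scaling — consistent with seeing "no visible changes" after conversion to uint8.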
fery, Thu, 21 May 2015 03:06:16 -0500
http://answers.opencv.org/question/62324/