OpenCV Q&A Forum — RSS feed (http://answers.opencv.org/questions/) — OpenCV answers. Copyright OpenCV Foundation, 2012-2018. Tue, 29 Sep 2020 02:47:58 -0500

unwrapPhaseMap() does not take ndarray in Python
http://answers.opencv.org/question/235820/unwrapphasemap-does-not-take-ndarray-in-python/

Hi there,
I'm doing some structured light and fringe analysis work and am trying to use the phase unwrapping function cv.phase_unwrapping_PhaseUnwrapping.unwrapPhaseMap in OpenCV (4.4.0) with Python ([doc here](https://docs.opencv.org/master/d8/d83/classcv_1_1phase__unwrapping_1_1PhaseUnwrapping.html#acad1a355e86402cb190956f9a9cbae99))
> unwrapPhaseMap()
> virtual void cv::phase_unwrapping::PhaseUnwrapping::unwrapPhaseMap( InputArray wrappedPhaseMap, OutputArray unwrappedPhaseMap, InputArray shadowMask = noArray() )   [pure virtual]
>
> Python:
> unwrappedPhaseMap = cv.phase_unwrapping_PhaseUnwrapping.unwrapPhaseMap( wrappedPhaseMap[, unwrappedPhaseMap[, shadowMask]] )
However, when I tried to call the function in Python, this TypeError occurred:
> Exception has occurred: TypeError
> descriptor 'unwrapPhaseMap' for 'cv2.phase_unwrapping_PhaseUnwrapping' objects doesn't apply to a 'numpy.ndarray' object
It looks like the function doesn't take an ndarray as input. I'm assuming it takes cv::Mat. But at some version (3.0?), OpenCV removed cv2.cv and the related fromarray() function that converted an ndarray to cv::Mat, so there seems to be no way to use cv::Mat directly in the current Python version of OpenCV. Does anyone know how to use the unwrapPhaseMap() function from Python, and is this possibly a legacy issue?
Many thanks!
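The "descriptor ... doesn't apply to ..." wording is what CPython raises when a built-in method is called through the class rather than on an instance. A pure-Python reproduction of the same error pattern, plus a hedged sketch of the likely fix (the cv2 lines are commented-out assumptions, requiring an opencv-contrib build):

```python
# Calling a built-in method through the class with the wrong receiver type
# reproduces the error pattern from the question:
try:
    str.upper(3.14)  # 'upper' belongs to str, not float
except TypeError as e:
    msg = str(e)
print(msg)

# cv.phase_unwrapping_PhaseUnwrapping is an abstract class, so unwrapPhaseMap
# has to be called on a created unwrapper instance, not on the class itself.
# Hypothetical sketch (names and parameters are assumptions, not verified):
#   params = cv.phase_unwrapping_HistogramPhaseUnwrapping_Params()
#   params.width, params.height = wrapped.shape[1], wrapped.shape[0]
#   unwrapper = cv.phase_unwrapping.HistogramPhaseUnwrapping_create(params)
#   unwrapped = unwrapper.unwrapPhaseMap(wrapped.astype(np.float32))
```
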
yaor42 — Tue, 29 Sep 2020 02:47:58 -0500 — http://answers.opencv.org/question/235820/

Fringe Phase unwrapping
http://answers.opencv.org/question/215718/fringe-phase-unwrapping/

Hi all,
I am just looking for some pseudocode logic for a structured light problem I am working on.
Let's suppose the following:
- I have calculated a Mat object with all the phase values from three fringe patterns, using the following formula at each xy location of the three fringe images, and as such the phases are wrapped:

        phase_value = atan(sqrt(3.0) * (intensity_phase_n120 - intensity_phase_p120) /
                           (2.0*(intensity_phase_0) - (intensity_phase_n120) - (intensity_phase_p120))) / M_PI
- Assume that each fringe pattern is like the attached image here ![C:\fakepath\Pattern_0.bmp](/upfiles/15631934992301392.bmp)
- Assume I know the correct multiple of 2 pi to apply to each of the locations in the phase Mat object so that I can add the appropriate 2 pi jump to unwrap
What does the pseudocode look like to properly unwrap all the phases given all of the above? I presume it's not just a simple case of adding 1 x 2 pi to the first fringe N number, 2 x 2 pi to the second one, etc. What else needs to be done, in pseudocode?
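Given the assumptions above (the correct multiple of 2 pi is known per pixel), the unwrapping step itself reduces to one addition per location. A minimal numpy sketch of that idea (illustrative names, not OpenCV API):

```python
import numpy as np

def unwrap_with_orders(wrapped, k):
    """wrapped: phase map in (-pi, pi]; k: known integer fringe order per pixel."""
    return wrapped + 2.0 * np.pi * k

# Example: a phase ramp that wraps twice across the field
true_phase = np.linspace(0.0, 4.0 * np.pi, 9)
wrapped = np.angle(np.exp(1j * true_phase))         # wrap into (-pi, pi]
k = np.round((true_phase - wrapped) / (2 * np.pi))  # the orders assumed known
recovered = unwrap_with_orders(wrapped, k)
print(np.allclose(recovered, true_phase))
```

The hard part in practice is computing k itself (that is what the quality-guided and histogram-based unwrappers do); once k is known, no further correction is needed.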
Thanks in advance
JT
Suppose I have a Mat object that is filled with phase values that are unwrapped.

JT3D — Mon, 15 Jul 2019 07:32:55 -0500 — http://answers.opencv.org/question/215718/

LNK2019: unresolved external symbol "public: __cdecl cv::structured_light::SinusoidalPattern::Params::Params
http://answers.opencv.org/question/144095/lnk2019-unresolved-external-symbol-public-__cdecl-cvstructured_lightsinusoidalpatternparamsparams/

Hi,
I want to create a sinusoidal pattern using OpenCV in Qt and am currently using the structured_light module. When I write this line:

**cv::structured_light::SinusoidalPattern::Params params;**

it gives the LNK2019 linker error above.
faisal ali — Wed, 26 Apr 2017 10:14:08 -0500 — http://answers.opencv.org/question/144095/

phasecorr.cpp function magSpectrums() DC component missed the square root operation
http://answers.opencv.org/question/111618/phasecorrcpp-function-magspectrums-dc-component-missed-the-square-root-operation/

There is a possible bug in the function magSpectrums(): the calculation of the magnitude of the DC component (the first element) appears to be wrong, as it has missed the square root operation.
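A quick numpy sanity check of what the DC magnitude should be. For a real input the DFT's DC term is purely real, so its magnitude is its absolute value, whereas squaring it (as the quoted code does) gives a different number (names here are illustrative, not the OpenCV internals):

```python
import numpy as np

img = np.random.default_rng(0).random((4, 6)).astype(np.float32)
F = np.fft.fft2(img)
dc = F[0, 0]
assert abs(dc.imag) < 1e-6           # DC term of a real signal has no imaginary part
correct_mag = abs(dc.real)           # the magnitude magSpectrums should store
buggy_value = dc.real * dc.real      # what the quoted code stores instead
print(correct_mag, buggy_value)
```
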
    if( !is_1d && cn == 1 )
    {
        for( k = 0; k < (cols % 2 ? 1 : 2); k++ )
        {
            if( k == 1 )
                dataSrc += cols - 1, dataDst += cols - 1;
            dataDst[0] = dataSrc[0]*dataSrc[0]; /// DC component
            if( rows % 2 == 0 )
                dataDst[(rows-1)*stepDst] = dataSrc[(rows-1)*stepSrc]*dataSrc[(rows-1)*stepSrc]; /// DC component
            for( j = 1; j <= rows - 2; j += 2 )
            {
                dataDst[j*stepDst] = (float)std::sqrt((double)dataSrc[j*stepSrc]*dataSrc[j*stepSrc] +
                                                      (double)dataSrc[(j+1)*stepSrc]*dataSrc[(j+1)*stepSrc]);
            }
            if( k == 1 )
                dataSrc -= cols - 1, dataDst -= cols - 1;
        }
    }

lowoodz — Wed, 09 Nov 2016 04:10:35 -0600 — http://answers.opencv.org/question/111618/

I want to know how to apply phase correlation with different size images.
http://answers.opencv.org/question/97751/i-want-to-know-how-to-apply-phase-correlation-with-difference-size-image/

![image description](/upfiles/14674178998158839.jpg)
I want to apply phase correlation to these pictures, whose sizes are totally different.
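Phase correlation assumes equal-sized inputs, so the smaller patch (e.g. one unit cell) is usually zero-padded to the larger image's size first. A minimal numpy sketch of the idea (illustrative, not cv::phaseCorrelate itself):

```python
import numpy as np

def phase_correlate(a, b):
    """Return the integer (row, col) shift that maps b onto a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    r = A * np.conj(B)
    r /= np.abs(r) + 1e-12                 # normalized cross-power spectrum
    corr = np.fft.ifft2(r).real
    return np.unravel_index(np.argmax(corr), corr.shape)  # correlation peak

big = np.zeros((32, 32)); big[10:14, 20:24] = 1.0
small = np.zeros_like(big); small[0:4, 0:4] = 1.0   # unit cell, zero-padded
shift = phase_correlate(big, small)
print(shift)
```

OpenCV's phaseCorrelate additionally supports a windowing function and returns a sub-pixel shift, but the padding idea is the same.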
Actually, my final goal is like this![image description](/upfiles/14674180291516273.png)(/upfiles/14674180167669987.png)
I want to apply phase correlation with the unit cell. Finally, I can find the linearity of this picture.

davidkim — Fri, 01 Jul 2016 19:08:50 -0500 — http://answers.opencv.org/question/97751/

rotation rate from image stream (please help)
http://answers.opencv.org/question/88956/rotation-rate-from-image-streamplease-help/

Hi,
I have little experience with OpenCV, but have always been intrigued by it. In the hope of avoiding any pitfalls, I would like the input of experienced users before I start learning/hacking.
My Problem...
Imagine a camera on a platform that can rotate in azimuth only. The initial conditions will have the camera pointed at a uniform background with a target object filling the majority of the field of view. The wall/background and object will not move. The camera will have random rotations applied by rotating the table. I am attempting to use image processing to determine the rate of rotation from frame to frame.
I assume if my background and object don't change contrast or shape rapidly (and range is constant), I don't need any sophisticated object tracking algorithm. I plan to perform a phase correlation between two sequential images to get the quantity of pixels shifted. If range and frame rate are constant, I should be able to calculate a rotational rate from this stream of pixel shifts. Eventually I would implement this algorithm with hardware acceleration(FPGA/Zynq) and use this derived rate as feedback to a control system, so the target wouldn't move too far from the center of the image in the final system.
My Questions...
Is phase correlation a good approach? Is there maybe a better way to get the pixel shift as I rotate the camera? Before going to hardware, I would like to prototype as much as possible on a PC. To perform this sort of task on a 640x512 image, would I be able to do this on the processor at 60-100 Hz, or will I need to incorporate a GPU to get to those speeds?
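The rate calculation itself is simple once the per-frame pixel shift is known; a back-of-envelope sketch where the field of view, resolution, and frame rate are all hypothetical numbers, not values from the question:

```python
def rotation_rate_deg_s(shift_px, fov_deg=40.0, width_px=640, fps=60.0):
    """Convert a per-frame horizontal pixel shift to an angular rate.

    Uses a small-angle approximation: each pixel subtends fov_deg / width_px
    degrees, which holds near the image center at constant range.
    """
    deg_per_px = fov_deg / width_px
    return shift_px * deg_per_px * fps

print(rotation_rate_deg_s(3.0))  # 3 px/frame -> 11.25 deg/s with these numbers
```
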
Any other suggestions are appreciated. Thank you.

pdm — Mon, 29 Feb 2016 18:16:17 -0600 — http://answers.opencv.org/question/88956/

opencv and FFT
http://answers.opencv.org/question/77044/opencv-and-fft/
    function fft_img = get_FFT()
    % Compute the FFT phase or Amplitude
    CtrlFigHdl = GetFigHdl('CtrlFig');
    ToggleButtonAmpHdl = findobj(CtrlFigHdl, 'Tag', 'ToggleButtonAmp');
    ToggleButtonPhaseHdl = findobj(CtrlFigHdl, 'Tag', 'ToggleButtonPhase');
    % detect if we want a Phase or Amplitude
    if get(ToggleButtonAmpHdl, 'Value')
        BufferType = 1; % FFT Amplitude type
    elseif get(ToggleButtonPhaseHdl, 'Value')
        BufferType = 2; % FFT Phase type
    else
        fft_img = [];
        return;
    end
    CtrlFigUsrDat = get(CtrlFigHdl, 'UserData');
    % Do we need to update the FFT?
    if (BufferType == CtrlFigUsrDat.Current.BufferType) && ~isempty(CtrlFigUsrDat.Current.Buffer)
        % No we don't
        fft_img = CtrlFigUsrDat.Current.Buffer;
    else
        % we need to compute a new fft
        % Notes: fft(uint16) is double; fft(single) is single; fft(double) is double
        clear CtrlFigUsrDat; % Avoid duplication of data
        CtrlFigUsrDat = ClearBuffer('main'); % makes room
        % Get the ROI limits
        [ROI_min_x, ROI_max_x, ROI_min_y, ROI_max_y] = getROIlimits();
        try
            h = [];
            % pre-allocation. We make it single as it's half the data needed and still good enough.
            % The FFT computation below is converted to single as it is done, plane by plane
            fft_img = zeros(ROI_max_y-ROI_min_y+1, ROI_max_x-ROI_min_x+1, CtrlFigUsrDat.Current.z_dim, 'single');
            sizeOfSingle = 4;
            h = waitbar(0, [num2str(size(fft_img,2)) ' * ' num2str(size(fft_img,1)) ' pixels * ', ...
                num2str(size(fft_img,3)) ' frames * single (32 bpp) = ' num2str(numel(fft_img)*sizeOfSingle/(1024*1024),'%.2f'), ' MB'], ...
                'Name', 'Computing FFT...', 'WindowStyle', 'modal');
            set(h, 'HandleVisibility', 'off'); % do not integrate this into the line above
            pause(0.1); % allow refresh
            if BufferType == 1 % Amplitude
                for i = ROI_min_x:ROI_max_x % along x
                    fft_img( : , i-ROI_min_x+1 , :) = abs(fft(single(get_RawData('ROI',i,':')),[],3));
                    if ishandle(h)
                        waitbar((i-ROI_min_x+1)/(ROI_max_x-ROI_min_x+1), h);
                    else
                        error('FFT aborted.');
                    end
                end
            else % Phase
                for i = ROI_min_x:ROI_max_x % along x
                    fft_img( : , i-ROI_min_x+1 , :) = -angle(fft(single(get_RawData('ROI',i,':')),[],3));
                    if ishandle(h)
                        waitbar((i-ROI_min_x+1)/(ROI_max_x-ROI_min_x+1), h);
                    else
                        error('FFT aborted.');
                    end
                end
            end
            % Update Buffer
            CtrlFigUsrDat.Current.BufferType = BufferType;
            CtrlFigUsrDat.Current.Buffer = fft_img;
            setF(CtrlFigHdl, 'UserData', CtrlFigUsrDat);
        catch FFTError
            messageBox(FFTError.message, 1, 'FFT Error');
            fft_img = []; % means it failed computing the FFT
        end
        if ishandle(h)
            delete(h)
        end
    end

rourou11 — Mon, 23 Nov 2015 04:10:56 -0600 — http://answers.opencv.org/question/77044/

Easy way to Convert Magnitude/Phase back to Real/Imag for DFT?
http://answers.opencv.org/question/64091/easy-way-to-convert-magnitudephase-back-to-realimag-for-dft/

I'm facing a problem in OpenCV4Android. I'm trying to do a phase recovery of an incoming image like the [Gerchberg-Saxton algorithm](http://en.wikipedia.org/wiki/Gerchberg%E2%80%93Saxton_algorithm) does.
I'm propagating the light field with a Fresnel propagator along the Z-axis. This works quite well. In MATLAB I have a complex datatype with phase/magnitude as well as real/imaginary parts, and I have no problem switching back and forth, but in OpenCV the exact same operation doesn't seem to have a proper result: converting back and forth doesn't give the same results.
I've written some code for the imag/real conversion like the one below (the real-part version simply uses a cosine instead of the sine):
    Mat toImag(Mat magMat, Mat phaseMat) {
        Mat resultMat = new Mat(magMat.size(), magMat.type());
        for (int row = 0; row < magMat.rows(); row++) {
            for (int col = 0; col < magMat.cols(); col++) {
                // Mat.get(row, col) returns a double[]; keep full precision --
                // truncating mag/phase to int would destroy the phase values
                double mag = magMat.get(row, col)[0];
                double phase = phaseMat.get(row, col)[0];
                double imag = mag * Math.sin(phase);
                resultMat.put(row, col, imag);
            }
        }
        return resultMat;
    }
The alternative was using the polarToCart function, but I'm not sure if I can use it to convert the Euler representation of a complex number to the component representation. Does anybody know how to solve this issue?
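For the conversion itself, the math is exactly real = mag*cos(phase), imag = mag*sin(phase), which is what polarToCart computes element-wise. A numpy round-trip check of the same math (illustrative sketch, not the OpenCV4Android API):

```python
import numpy as np

# magnitude/phase -> real/imag -> back, in float32 like a CV_32F Mat
rng = np.random.default_rng(1)
re = rng.standard_normal((4, 4)).astype(np.float32)
im = rng.standard_normal((4, 4)).astype(np.float32)
mag, ph = np.hypot(re, im), np.arctan2(im, re)    # cartToPolar equivalent
re2, im2 = mag * np.cos(ph), mag * np.sin(ph)     # polarToCart equivalent
print(np.allclose(re, re2, atol=1e-5), np.allclose(im, im2, atol=1e-5))
```

If the per-element loop and this round trip disagree, the likely culprit is precision loss (e.g. casting magnitude or phase to int) rather than the conversion formula.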
beniroquai — Sat, 13 Jun 2015 12:39:00 -0500 — http://answers.opencv.org/question/64091/

opencv idft output and calculation of phase
http://answers.opencv.org/question/56345/opencv-idft-output-and-calculation-of-phase/

I have been trying to calculate the phase information of a complex matrix in OpenCV. As I am new to OpenCV, I am sure I am failing to look for the correct answer. So, I have this program.
I am sure the matrix invDFT holds complex values.
So what is the easiest way of calculating the phase of the whole matrix? And how can I imshow the DFT output for this program? I have used phase() and I am not sure if it's correct.
Thanks. I already said I am new to OpenCV, so please pardon me if my questions are too basic. Thanks once again.
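For reference, a numpy sketch of taking the phase of a full complex DFT and scaling it into a displayable range; the OpenCV counterparts would be cv::dft, cv::phase, and cv::normalize. This is a sketch of the math only, not the poster's program:

```python
import numpy as np

img = np.random.default_rng(2).random((8, 8)).astype(np.float32)
F = np.fft.fft2(img)                              # full complex spectrum
ph = np.arctan2(F.imag, F.real)                   # phase in (-pi, pi]
# Map (-pi, pi] to 0..255 so imshow can display it as an 8-bit image:
ph_disp = ((ph + np.pi) / (2 * np.pi) * 255).astype(np.uint8)
print(ph_disp.dtype, ph_disp.min(), ph_disp.max())
```

Note the phase is meaningful for the forward DFT (complexI in the program); after idft with DFT_REAL_OUTPUT the imaginary plane is gone, so computing a phase from that output is not what one usually wants.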
    #include <opencv2/opencv.hpp>
    using namespace cv;

    int main()
    {
        // Read image from file
        // Make sure that the image is in grayscale
        Mat img = imread("input.bmp", 0);
        Mat mag, ph;
        Mat planes[] = { Mat_<float>(img), Mat::zeros(img.size(), CV_32F) };
        Mat complexI; // Complex plane to contain the DFT coefficients {[0]-Real, [1]-Imag}
        merge(planes, 2, complexI);
        dft(complexI, complexI); // Applying DFT
        // Reconstructing original image from the DFT coefficients
        Mat invDFT, invDFTcvt;
        idft(complexI, invDFT, DFT_SCALE | DFT_REAL_OUTPUT); // Applying IDFT
        Mat planesi[] = { Mat_<float>(invDFT), Mat::zeros(invDFT.size(), CV_32F) };
        split(invDFT, planesi);
        invDFT.convertTo(invDFTcvt, CV_8U);
        imshow("Output", invDFTcvt);
        phase(planesi[0], planesi[1], ph, false);
        namedWindow("phase image", CV_WINDOW_AUTOSIZE);
        imshow("phase image", ph);
        // show the image
        imshow("Original Image", img);
        // Wait until user presses some key
        waitKey(0);
        return 0;
    }

tahseen_kamal — Thu, 26 Feb 2015 22:20:13 -0600 — http://answers.opencv.org/question/56345/