OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018.

Sat, 26 Nov 2016 00:44:19 -0600
How to use the functions convertTo, applyColorMap, and imshow to get the result I want?
http://answers.opencv.org/question/114730/how-to-use-the-functions-of-convertto-applycolormapimshow-to-get-the-result-i-want/

Here is the code in Matlab and the result picture I get using Matlab. For some purposes, I need to convert the Matlab code into C++. I have already converted the data structures to C++, and now I need to handle the image display. I decided to use the OpenCV library to replace the image processing done in Matlab. I found that functions such as convertTo, applyColorMap, and imshow in OpenCV can replace the imagesc function in Matlab, so I imitated code I found online, but it doesn't work. There may be some grammar mistakes in my question; sorry about that.
Here is my code in C++
for (cutNumber = 1; cutNumber <= 9; cutNumber++)
{
    momentString = "dBT";
    DataSelect* BaseData_Select = select(theObj, cutNumber, momentString);
    int ncols = BaseData_Select->allLength / BaseData_Select->dataLength;
    // Corresponds to line 138 of auto.m in Matlab; the code below draws the B-scope display of dBZ
    Mat mydata(BaseData_Select->dataLength, ncols, CV_32F);
    for (int i = 0; i < BaseData_Select->dataLength; i++)
    {
        for (int j = 0; j < ncols; j++)
        {
            mydata.at<float>(i, j) = BaseData_Select->data[i * ncols + j];
        }
    }
    double Amin, Amax;
    cv::minMaxIdx(mydata, &Amin, &Amax); // Amin is -19, Amax is 64
    cv::Mat adjMap;
    double scale = 255.0 / (Amax - Amin);
    mydata.convertTo(adjMap, CV_8UC1, scale, -Amin * scale);
    cv::Mat resultMap;
    cv::applyColorMap(adjMap, resultMap, cv::COLORMAP_AUTUMN);
    cv::imshow("Out", resultMap);
    cv::waitKey(0); // without waitKey the window contents are never refreshed
    cv::imwrite("output.bmp", resultMap); // note: overwritten on every iteration of the loop
}
Here is the code dealing with images in Matlab.
figure(H_figure_ZDR);
subplot(3,3,Cut_Number);
imagesc(ZDR.Data);
colormap(radarcolor_CJJ(40,1)); % generate 40 colors
caxis([ -2 6]);
ylim([Sphere_Distance_Cell-Sphere_Distance_Cell_Extend Sphere_Distance_Cell+Sphere_Distance_Cell_Extend])
xlim([Sphere_Center_Ray-Sphere_Azimuth_Cell_Extend Sphere_Center_Ray+Sphere_Azimuth_Cell_Extend])
xlabel('径向数目'); % "Number of radials"
ylabel('距离库');   % "Range bin"
Here is the result picture from Matlab, which is what I want to reproduce with the OpenCV functions.
![image description](/upfiles/14801431839617214.png)
And my result picture in C++ is the following, which is clearly wrong. I am a new learner of OpenCV, but my time is limited. Could anyone help me solve this problem?
![image description](/upfiles/14801432917966361.png)
buyi1128
Sat, 26 Nov 2016 00:44:19 -0600
http://answers.opencv.org/question/114730/

Is stereoRectifyUncalibrated efficient?
http://answers.opencv.org/question/233/is-stereorectifyuncalibrated-efficient/

Hello everybody,
I'm using OpenCV 2.4 to rectify images with findFundamentalMat and stereoRectifyUncalibrated. Nearly two weeks ago I saw some Matlab code for rectification and became interested in comparing the results. At first I thought OpenCV's result would be better than Matlab's, but after several experiments I found that the Matlab code is better. Why?
I searched the internet and found that they use two different algorithms, based on two papers.
I think OpenCV uses the paper "Theory and Practice of Projective Rectification" by Richard I. Hartley, which you can find [here](http://users.cecs.anu.edu.au/~hartley/Papers/joint-epipolar/journal/joint3.pdf).
The Matlab code, on the other hand, is based on a paper by "[A. Fusiello, E. Trucco and A. Verri](http://profs.sci.univr.it/~fusiello/demo/rect/)" titled "Quasi-Euclidean Uncalibrated Epipolar Rectification", which you can find [here](http://profs.sci.univr.it/~fusiello/papers/icpr08.pdf),
and the Matlab source code is [here](http://profs.sci.univr.it/~fusiello/sw/RectifKitU.zip). If you look at the compRect.m file, you will notice that it uses a non-linear least squares method (the Levenberg-Marquardt algorithm) to find the extrinsic parameters (rotation matrix and focal length).
And my question:
why doesn't OpenCV use the second method, when its result is better? If anybody has already used the second method (the Matlab code), please share your experience.
Amin Abouee
Tue, 10 Jul 2012 11:50:35 -0500
http://answers.opencv.org/question/233/