OpenCV Q&A Forum - RSS feed (http://answers.opencv.org/questions/)
Copyright OpenCV foundation, 2012-2018. Fri, 13 Dec 2019 06:01:53 -0600

Mahalanobis Distance between 2 images (http://answers.opencv.org/question/223369/mahalanobis-distance-between-2-images/)

Hi,
I'm trying to compare the color between 2 images (a model and a ROI extracted with local features). I tried histogram comparison on the hue channel, with very bad results because of noise (the blue model scored as more similar to the orange ROI than the orange model did).
Now I would like to try the Mahalanobis distance, but after three days of forums, examples and documentation I still couldn't understand how to use calcCovarMatrix and Mahalanobis.
From what I have studied, the covariance matrix is computed across a whole image and the Mahalanobis distance is taken between two images, but instead I only found examples with vectors and numbers (I get that these tools can also be applied to numerical vectors, but OpenCV is supposed to be about Mat, no?). Moreover, there are plenty of posts about excessive RAM usage and errors in calcCovarMatrix, with zero comments below them.
So I'm writing my own post, and if I don't get answers I'll start writing my own calcCov and Mahalanobis and post them below.
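For reference, the usual recipe with these two functions is: reshape each image's pixels into an N x 3 matrix of samples (Mat::reshape), run calcCovarMatrix with COVAR_NORMAL | COVAR_ROWS on the model's samples, invert the result with invert(covar, icovar, DECOMP_SVD), and pass the two mean colours to Mahalanobis. The math behind those calls, sketched in plain C++ without OpenCV (helper names are mine, not the library's):

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Mean colour of a set of pixels (one RGB triple per entry).
Vec3 meanOf(const std::vector<Vec3>& samples) {
    Vec3 m{0.0, 0.0, 0.0};
    for (const Vec3& p : samples)
        for (int i = 0; i < 3; ++i) m[i] += p[i];
    for (int i = 0; i < 3; ++i) m[i] /= static_cast<double>(samples.size());
    return m;
}

// Covariance of the samples; mirrors what calcCovarMatrix computes with
// COVAR_NORMAL | COVAR_ROWS | COVAR_SCALE when each row is one pixel.
Mat3 covarianceOf(const std::vector<Vec3>& samples) {
    const Vec3 m = meanOf(samples);
    Mat3 c{};
    for (const Vec3& p : samples)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                c[i][j] += (p[i] - m[i]) * (p[j] - m[j]);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            c[i][j] /= static_cast<double>(samples.size());
    return c;
}

// 3x3 inverse via the adjugate; the tiny diagonal epsilon stands in for
// DECOMP_SVD's robustness when the samples are nearly degenerate.
Mat3 invert3(Mat3 a) {
    for (int i = 0; i < 3; ++i) a[i][i] += 1e-9;
    const double det =
        a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1]) -
        a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0]) +
        a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
    Mat3 inv;
    inv[0][0] = (a[1][1] * a[2][2] - a[1][2] * a[2][1]) / det;
    inv[0][1] = (a[0][2] * a[2][1] - a[0][1] * a[2][2]) / det;
    inv[0][2] = (a[0][1] * a[1][2] - a[0][2] * a[1][1]) / det;
    inv[1][0] = (a[1][2] * a[2][0] - a[1][0] * a[2][2]) / det;
    inv[1][1] = (a[0][0] * a[2][2] - a[0][2] * a[2][0]) / det;
    inv[1][2] = (a[0][2] * a[1][0] - a[0][0] * a[1][2]) / det;
    inv[2][0] = (a[1][0] * a[2][1] - a[1][1] * a[2][0]) / det;
    inv[2][1] = (a[0][1] * a[2][0] - a[0][0] * a[2][1]) / det;
    inv[2][2] = (a[0][0] * a[1][1] - a[0][1] * a[1][0]) / det;
    return inv;
}

// sqrt((v1-v2)^T * icovar * (v1-v2)), the quantity cv::Mahalanobis returns.
double mahalanobis(const Vec3& v1, const Vec3& v2, const Mat3& icovar) {
    double q = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            q += (v1[i] - v2[i]) * icovar[i][j] * (v1[j] - v2[j]);
    return std::sqrt(q);
}
```

Thresholding mahalanobis(meanOf(roiPixels), meanOf(modelPixels), invert3(covarianceOf(modelPixels))) gives a colour distance that discounts directions where the model is naturally noisy, which should be less sensitive to the noise that broke the hue comparison.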
Eventine, Fri, 13 Dec 2019 06:01:53 -0600 (http://answers.opencv.org/question/223369/)

OpenCV4Android - calcCovarMatrix for image averages (http://answers.opencv.org/question/92657/opencv4android-calccovarmatrix-for-image-averages/)

So I have two RGB averages that I want to get a Mahalanobis distance for. The Mahalanobis function requires an inverse covariance matrix. My question is: how do I create an inverse covariance matrix for averages?
The averages would just be two 1x3 vectors but those averages come from a single image that's 27x27.
The image looks similar to this:
[![enter image description here][1]][1]
One of the averages estimates the average RGB values inside the circle, and the other the average RGB values outside the circle. Is it as simple as creating two images, one with the inside of the circle and the background filled with my average circle values, and another with the background and the circle filled in with my average background values? From what I've read the inverse covariance matrix would need to be 3x3, so I don't think that can work...
Mat coloredImage = colorImage.clone();
Mat threshedImage = threshImage.clone();
Mat mask = new Mat(coloredImage.size(), CvType.CV_8UC1, Scalar.all(255));
// Point takes (x, y), i.e. (column, row).
Core.circle(mask, new Point(mask.cols() / 2, mask.rows() / 2), (int) mR, Scalar.all(0), -1, 8, 0);
Bitmap gridBitmap = Bitmap.createBitmap(mask.width(), mask.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mask, gridBitmap);

double externalRed = 0;
double externalGreen = 0;
double externalBlue = 0;
int externalCount = 0;
double internalRed = 0;
double internalGreen = 0;
double internalBlue = 0;
int internalCount = 0;

//FIXME: Revisit when I have a better understanding of masks in OpenCV - masks appear to only affect 1 channel
for (int column = 0; column < mask.cols(); column++) {
    for (int row = 0; row < mask.rows(); row++) {
        double[] maskValue = mask.get(row, column);
        double[] value = coloredImage.get(row, column);
        if (maskValue[0] == 255) {
            externalRed += value[0];
            externalGreen += value[1];
            externalBlue += value[2];
            externalCount++;
        } else {
            internalRed += value[0];
            internalGreen += value[1];
            internalBlue += value[2];
            internalCount++;
        }
    }
}

int externalAvgRed = (int) (externalRed / externalCount);
int externalAvgGreen = (int) (externalGreen / externalCount);
int externalAvgBlue = (int) (externalBlue / externalCount);
// Divide the internal sums here (the original divided the external sums by mistake).
int internalAvgRed = (int) (internalRed / internalCount);
int internalAvgGreen = (int) (internalGreen / internalCount);
int internalAvgBlue = (int) (internalBlue / internalCount);

Mat smallColoredImage = new Mat();
Mat mean = new Mat();
Mat covar = new Mat();
MatOfFloat invcovar = new MatOfFloat(3, 3);
Imgproc.resize(coloredImage, smallColoredImage, new Size(3, 3));
Core.calcCovarMatrix(smallColoredImage, covar, mean, Core.COVAR_NORMAL + Core.COVAR_ROWS, -1);
Core.invert(covar, invcovar, Core.DECOMP_SVD);

MatOfFloat mu0 = new MatOfFloat(3, 1);
MatOfFloat mu1 = new MatOfFloat(3, 1);
mu0.put(0, 0, externalAvgRed);
mu0.put(1, 0, externalAvgGreen);
mu0.put(2, 0, externalAvgBlue); // was externalAvgGreen: copy-paste slip
mu1.put(0, 0, internalAvgRed);
mu1.put(1, 0, internalAvgGreen);
mu1.put(2, 0, internalAvgBlue);
double d2 = Core.Mahalanobis(mu1, mu0, invcovar);
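One note on the snippet: resizing the image to 3x3 before calcCovarMatrix means the covariance describes only nine averaged pixels. Since each pixel is one 3-D colour sample, the full 27x27 patch already yields 729 samples while the covariance stays 3x3, which is exactly the shape the inverse needs to be. A plain C++ sketch of that accumulation (the helper name is mine, not an OpenCV API):

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// One-pass 3x3 covariance of per-pixel RGB samples: E[x x^T] - mu mu^T.
// Every pixel of the patch is one 3-D sample, so a 27x27 patch contributes
// 729 samples and the covariance is 3x3 regardless of the patch size.
Mat3 rgbCovariance(const std::vector<Vec3>& pixels) {
    Vec3 mu{0.0, 0.0, 0.0};
    Mat3 xxT{};
    for (const Vec3& p : pixels)
        for (int i = 0; i < 3; ++i) {
            mu[i] += p[i];
            for (int j = 0; j < 3; ++j) xxT[i][j] += p[i] * p[j];
        }
    const double n = static_cast<double>(pixels.size());
    for (int i = 0; i < 3; ++i) mu[i] /= n;
    Mat3 c;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            c[i][j] = xxT[i][j] / n - mu[i] * mu[j];
    return c;
}
```

Inverting this 3x3 (Core.invert with Core.DECOMP_SVD, as in the snippet) and handing it to Core.Mahalanobis with the two 1x3 averages should answer the original question without building any filled-in images.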
[1]: http://i.stack.imgur.com/gVaOS.png

Silberlicht, Wed, 13 Apr 2016 16:13:19 -0500 (http://answers.opencv.org/question/92657/)

Qn regarding Mahalanobis distance (http://answers.opencv.org/question/90746/qn-regarding-mahalanobis-distance/)

Hello,
I am writing a simple function to compute the Mahalanobis distance from the mean.
pow(input, 2, input);
divide(input, variance, out);
cout << sum(out)(0) << endl;
Based on what I read, the distance should be within 3 standard deviations, or in this case, sum(out)(0) should be lower than 9. However I am getting values like 20+. Could it be because my training data is not good?

Nbb, Wed, 23 Mar 2016 04:28:05 -0500 (http://answers.opencv.org/question/90746/)

Unable to get Mahalanobis distance (http://answers.opencv.org/question/70292/unable-to-get-mahalanobis-distance/)

I am trying to find the Mahalanobis distances between a test sample image and a few training images (AT&T database). I took http://answers.opencv.org/question/3494/adding-new-method-in-facerecognizer/ as a reference for my code. When I run the code I get the following error:
"OpenCV Error: Assertion failed (type == v2.type() && type == icovar.type() && sz == v2.size() && len == icovar.rows && len == icovar.cols) in Mahalanobis, file /home/opencv-2.4.9/modules/core/src/matmul.cpp, line 2244
"
Please find the code snippet for mahalanobis distance here http://pastebin.com/Mg8DbFQJ
Mat covar, invcovar, mean;
for (size_t sampleIdx = 0; sampleIdx < _projections.size(); sampleIdx++) {
    // Calculate the covariance matrix for this projection
    calcCovarMatrix(_projections[sampleIdx], covar, mean, CV_COVAR_SCRAMBLED | CV_COVAR_ROWS, CV_64F);
    // Calculate the inverse covariance matrix
    invert(covar, invcovar, DECOMP_SVD);
    double dist = Mahalanobis(_projections[sampleIdx], q, invcovar);
    // Add to the resulting distance array:
    if (distances.needed()) {
        distances.getMat().at<double>(sampleIdx) = dist;
    }
    if ((dist < minDist) && (dist < _threshold)) {
        minDist = dist;
        minClass = _labels.at<int>((int) sampleIdx);
    }
}
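The assertion is a size mismatch. With CV_COVAR_SCRAMBLED, calcCovarMatrix returns an nsamples x nsamples matrix, and each loop iteration here passes a single 1xN projection, so covar (and therefore invcovar) comes out 1x1 while the vectors have length N. Computing one NxN covariance over all projections stacked as rows (CV_COVAR_NORMAL | CV_COVAR_ROWS), once, outside the loop, satisfies the contract. A plain C++ sketch that makes the same checks explicit (the function name is mine):

```cpp
#include <cmath>
#include <stdexcept>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// N-dimensional Mahalanobis with the shape contract the OpenCV assertion
// spells out: v1 and v2 must have equal length len, and icovar must be
// len x len for that same len.
double mahalanobisChecked(const Vec& v1, const Vec& v2, const Mat& icovar) {
    const std::size_t len = v1.size();
    if (v2.size() != len || icovar.size() != len)
        throw std::invalid_argument("v1, v2 and icovar sizes must agree");
    for (const Vec& row : icovar)
        if (row.size() != len)
            throw std::invalid_argument("icovar must be square");
    double q = 0.0;
    for (std::size_t i = 0; i < len; ++i)
        for (std::size_t j = 0; j < len; ++j)
            q += (v1[i] - v2[i]) * icovar[i][j] * (v1[j] - v2[j]);
    return std::sqrt(q);
}
```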
lm35, Mon, 07 Sep 2015 04:15:05 -0500 (http://answers.opencv.org/question/70292/)

Normalized Euclidean distance between 2 points in an image (http://answers.opencv.org/question/67291/normalized-euclidean-distance-between-2-points-in-an-image/)

Hello forum,
When attempting to find the distance stated above, would it be better to use the Bhattacharyya distance or the Mahalanobis distance?
The Mahalanobis function requires the covariance matrix as an input. Based on Wikipedia, the matrix should be diagonal for the function to be equal to the normalized Euclidean distance. I am not sure how to get that diagonal covariance matrix from 2 points in an image.
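Two quick notes, as a sketch rather than a definitive answer: the Bhattacharyya distance compares distributions (in OpenCV, histograms via compareHist), so for two individual points the Mahalanobis route fits better; and the diagonal covariance cannot come from the two points themselves - the per-dimension variances have to be estimated from a larger sample, such as many pixels. In plain C++ (helper names are mine):

```cpp
#include <cmath>
#include <vector>

// Per-dimension variances estimated from a sample of points. The diagonal
// covariance matrix is just these variances on the diagonal; two points
// alone cannot define it, so the sample has to be larger.
std::vector<double> dimensionVariances(const std::vector<std::vector<double>>& pts) {
    const std::size_t dims = pts[0].size();
    std::vector<double> mean(dims, 0.0), var(dims, 0.0);
    for (const auto& p : pts)
        for (std::size_t i = 0; i < dims; ++i) mean[i] += p[i];
    for (std::size_t i = 0; i < dims; ++i)
        mean[i] /= static_cast<double>(pts.size());
    for (const auto& p : pts)
        for (std::size_t i = 0; i < dims; ++i)
            var[i] += (p[i] - mean[i]) * (p[i] - mean[i]);
    for (std::size_t i = 0; i < dims; ++i)
        var[i] /= static_cast<double>(pts.size());
    return var;
}

// Mahalanobis with that diagonal covariance, i.e. the normalized Euclidean
// distance: sqrt(sum_i (x_i - y_i)^2 / var_i). With unit variances it
// reduces to the plain Euclidean distance.
double normalizedEuclidean(const std::vector<double>& x,
                           const std::vector<double>& y,
                           const std::vector<double>& var) {
    double q = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        q += (x[i] - y[i]) * (x[i] - y[i]) / var[i];
    return std::sqrt(q);
}
```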
Nbb, Wed, 29 Jul 2015 02:04:39 -0500 (http://answers.opencv.org/question/67291/)

Blob extraction by colour segmentation (http://answers.opencv.org/question/29646/blob-extraction-by-colour-segmentation/)

Hello everyone,
I made an application which detects objects of a set colour in a webcam image. I implemented a simple segmentation using the Mahalanobis distance (the reference colour is "trained" from 10-20 images, obtaining a mean vector and a covariance matrix), and I obtained a binary image (a Mat in which pixel values are 255 or 0).
How can I perform blob analysis on this binary image? Which classes/methods/functions can I use?
I'd like, for example, to determine the size of the biggest blob and ignore smaller blobs.
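For the blob-analysis part: in OpenCV 2.x the standard tools are findContours followed by contourArea to rank blobs by size (and boundingRect or drawContours to keep only the biggest); OpenCV 3.0 added connectedComponentsWithStats, which returns per-blob areas directly. The underlying operation is plain connected-component labelling, sketched here without the library (the function name is mine):

```cpp
#include <queue>
#include <utility>
#include <vector>

// 4-connected blob labelling on a binary image (255 = foreground,
// 0 = background), returning the area of the biggest blob. On real Mats,
// cv::findContours + cv::contourArea (any OpenCV version) or
// cv::connectedComponentsWithStats (OpenCV >= 3.0) do the same job.
int biggestBlobArea(const std::vector<std::vector<int>>& img) {
    const int rows = static_cast<int>(img.size());
    const int cols = static_cast<int>(img[0].size());
    std::vector<std::vector<bool>> seen(rows, std::vector<bool>(cols, false));
    int best = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (img[r][c] != 255 || seen[r][c]) continue;
            int area = 0; // flood-fill one blob, counting its pixels
            std::queue<std::pair<int, int>> q;
            q.push({r, c});
            seen[r][c] = true;
            while (!q.empty()) {
                const int y = q.front().first, x = q.front().second;
                q.pop();
                ++area;
                const int dy[4] = {1, -1, 0, 0}, dx[4] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    const int ny = y + dy[k], nx = x + dx[k];
                    if (ny >= 0 && ny < rows && nx >= 0 && nx < cols &&
                        img[ny][nx] == 255 && !seen[ny][nx]) {
                        seen[ny][nx] = true;
                        q.push({ny, nx});
                    }
                }
            }
            if (area > best) best = area;
        }
    return best;
}
```

Ignoring smaller blobs is then a matter of discarding every component whose area falls below a threshold, or keeping only the one with the maximum area.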
Thanks in advance.

ilchiodo, Sat, 08 Mar 2014 10:09:43 -0600 (http://answers.opencv.org/question/29646/)

Weird problem with Mahalanobis distance function (http://answers.opencv.org/question/29354/weird-problem-with-mahalanobis-distance-function/)

Hello there!
I'm trying to do an image segmentation based on Mahalanobis distance, but the implementation in Visual Studio with OpenCV 2.4.8 throws a weird exception I don't know how to solve.
Here is the snippet:
// cam_frame is a Mat object, while pixel is a std::vector with 3 elements.
// mn is a 1x3 Mat holding the mean vector, and i_cvm (the inverse covariance) has been successfully computed.
for (int i = 0; i < cam_frame.rows; i++) {      // for every pixel,
    for (int j = 0; j < cam_frame.cols; j++) {
        pixel[0] = cam_frame.at<Vec3b>(i, j)[0]; // B component
        pixel[1] = cam_frame.at<Vec3b>(i, j)[1]; // G component
        pixel[2] = cam_frame.at<Vec3b>(i, j)[2]; // R component
        if (Mahalanobis(mn, pixel, i_cvm) < threshold)
            mask.at<unsigned char>(i, j) = 255;
        else
            mask.at<unsigned char>(i, j) = 0;
    }
}
The Mahalanobis function throws an exception regarding the size of the arguments, but I can't really figure out why.

ilchiodo, Sun, 02 Mar 2014 07:27:38 -0600 (http://answers.opencv.org/question/29354/)
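A likely cause, for the record: cv::Mahalanobis asserts that both vectors and icovar share one element type and consistent sizes, and a std::vector passed as an input array becomes an Nx1 column, while mn is a 1x3 row, so the size check can fail regardless of the values. Copying the Vec3b components into a container with the same shape and depth as mn before the call is the usual fix. A plain C++ sketch of that step (helper names are mine, not the OpenCV API):

```cpp
#include <array>
#include <cmath>
#include <cstdint>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Widen an 8-bit BGR pixel to the floating type of the mean vector, so
// that both arguments to the distance share one element type and length.
Vec3 widenPixel(std::uint8_t b, std::uint8_t g, std::uint8_t r) {
    return {static_cast<double>(b), static_cast<double>(g),
            static_cast<double>(r)};
}

// sqrt((v1-v2)^T * icovar * (v1-v2)) on the widened values.
double mahalanobis3(const Vec3& v1, const Vec3& v2, const Mat3& icovar) {
    double q = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            q += (v1[i] - v2[i]) * icovar[i][j] * (v1[j] - v2[j]);
    return std::sqrt(q);
}
```

In the loop above, building the pixel as a 1x3 double Mat (or a Matx13d) instead of a std::vector should make the assertion go away.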