# Mahalanobis Distance between 2 images

Hi,

I'm trying to compare the color between 2 images (a model and an ROI extracted with local features). I tried histogram comparison on the hue channel, with very bad results because of noise (a blue model scored as more similar to an orange ROI than the orange model did). Now I would like to try the Mahalanobis distance, but after 3 days of forums, examples and documentation I couldn't understand how to use calcCovarMatrix and Mahalanobis. As I understand it, the covariance matrix is computed across a whole image and the Mahalanobis distance is computed between 2 images, but I only found examples with vectors and plain numbers (I get that these tools can also be applied to numerical vectors, but OpenCV is supposed to be about Mat, no?). Moreover, there are plenty of posts about excessive RAM usage and errors in calcCovarMatrix, with zero comments below them.

So I'm writing my own post, and if I don't get answers I'll start writing my own calcCovarMatrix and Mahalanobis, and post them below.

Eventine



This is the function. It's not well written and not optimized, but it seems to work. Thanks to berak, who helped me better understand the OpenCV functions.

    #include <opencv2/core.hpp>
    #include <vector>
    #include <cmath>

    double MahalanobisDistance(const cv::Mat& model, const cv::Mat& scene) {
        std::vector<double> modelMean(3, 0.0), sceneMean(3, 0.0);
        std::vector<double> covar(9, 0.0), invCovar(9, 0.0);

        const int numberOfPixels = model.cols * model.rows;

        // mean: sum each channel over the model, then divide
        for (int x = 0; x < model.cols; x++)
            for (int y = 0; y < model.rows; y++)
                for (int c = 0; c < 3; c++)
                    modelMean[c] += model.at<cv::Vec3b>(y, x)[c];
        for (int c = 0; c < 3; c++)
            modelMean[c] /= numberOfPixels;

        // 3x3 covariance of the model's channels
        for (int x = 0; x < model.cols; x++)
            for (int y = 0; y < model.rows; y++)
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 3; j++)
                        covar[i * 3 + j] += (model.at<cv::Vec3b>(y, x)[i] - modelMean[i])
                                          * (model.at<cv::Vec3b>(y, x)[j] - modelMean[j]);
        for (int iter = 0; iter < 9; iter++)
            covar[iter] /= numberOfPixels;

        // invert covar by the adjugate method
        // determinant (rule of Sarrus); note: assumes detA != 0
        double detA = (covar[0] * covar[4] * covar[8]) + (covar[1] * covar[5] * covar[6]) + (covar[2] * covar[3] * covar[7])
                    - (covar[2] * covar[4] * covar[6]) - (covar[0] * covar[5] * covar[7]) - (covar[8] * covar[3] * covar[1]);

        // cofactors (signed determinants of the 2x2 minors)
        std::vector<double> minors(9);
        minors[0] =  (covar[4] * covar[8]) - (covar[5] * covar[7]);
        minors[1] = -((covar[3] * covar[8]) - (covar[5] * covar[6]));
        minors[2] =  (covar[3] * covar[7]) - (covar[4] * covar[6]);

        minors[3] = -((covar[1] * covar[8]) - (covar[2] * covar[7]));
        minors[4] =  (covar[0] * covar[8]) - (covar[2] * covar[6]);
        minors[5] = -((covar[0] * covar[7]) - (covar[1] * covar[6]));

        minors[6] =  (covar[1] * covar[5]) - (covar[4] * covar[2]);
        minors[7] = -((covar[0] * covar[5]) - (covar[2] * covar[3]));
        minors[8] =  (covar[0] * covar[4]) - (covar[3] * covar[1]);

        // transpose the cofactor matrix and divide by detA
        invCovar[0] = minors[0] / detA;
        invCovar[1] = minors[3] / detA;
        invCovar[2] = minors[6] / detA;

        invCovar[3] = minors[1] / detA;
        invCovar[4] = minors[4] / detA;
        invCovar[5] = minors[7] / detA;

        invCovar[6] = minors[2] / detA;
        invCovar[7] = minors[5] / detA;
        invCovar[8] = minors[8] / detA;

        // (sanity check, if needed: covar * invCovar should be close to the identity)

        // the original post was cut off here by the forum's "more" fold;
        // the rest reconstructs the usual closing steps: scene mean,
        // mean difference, and sqrt(d^T * invCovar * d)
        const int scenePixels = scene.cols * scene.rows;
        for (int x = 0; x < scene.cols; x++)
            for (int y = 0; y < scene.rows; y++)
                for (int c = 0; c < 3; c++)
                    sceneMean[c] += scene.at<cv::Vec3b>(y, x)[c];
        for (int c = 0; c < 3; c++)
            sceneMean[c] /= scenePixels;

        std::vector<double> diff(3);
        for (int c = 0; c < 3; c++)
            diff[c] = modelMean[c] - sceneMean[c];

        double dist = 0.0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                dist += diff[i] * invCovar[i * 3 + j] * diff[j];

        return std::sqrt(dist);
    }

To use the Mahalanobis distance, you need to build a global "model" first: the inverse covariance matrix of all your training samples. Then, using that, you can compare a test sample against each train sample to find the best match in your database. So:

    //// step 1, offline / once:
    // put all your processed train samples into a single Mat,
    // each feature flattened onto a row of its own
    Mat features;

    Mat _covar, _mean;
    calcCovarMatrix(features, _covar, _mean, CV_COVAR_NORMAL | CV_COVAR_ROWS, CV_32F);
    _covar /= (features.rows - 1);

    Mat icovar; // this is the one used later
    invert(_covar, icovar, DECOMP_SVD);

    //// step 2, predict on a test image
    Mat testFeature;
    for (all trainFeatures) {   // pseudocode: loop over your database
        Mat trainFeature = ...;
        double dist = Mahalanobis(testFeature, trainFeature, icovar);
    }


So far, so good. Be aware, however, that building a covariance matrix, and even more so inverting it, is extremely expensive; that's why some feature reduction like PCA is most often applied first (shorter features).


Thanks for the answer. I don't have training samples or features. I have only one model image, of which I have to compute the covariance matrix, and then compare it with other images through Mahalanobis.

( 2019-12-13 07:05:30 -0500 )

> with other images

Those are your "train" samples. (And no, you cannot use this with only 2 images.)

( 2019-12-13 07:15:08 -0500 )

Ok, then I suppose I have to write my own functions. I'll keep the post alive and paste the function here for others to find. Thank you again for your help.

( 2019-12-13 07:57:35 -0500 )
