OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright [OpenCV foundation](http://www.opencv.org), 2012-2018. Last build: Fri, 13 Dec 2019 06:01:53 -0600

## Mahalanobis Distance between 2 images
http://answers.opencv.org/question/223369/mahalanobis-distance-between-2-images/

Hi,
I'm trying to compare the color between 2 images (a model and a ROI extracted with local features). I tried histogram comparison on the hue channel, with very bad results because of noise (a BLUE model came out more similar to an orange ROI than the orange model did).
Now I would like to try the Mahalanobis distance, but after 3 days of forums, examples and documentation I still can't understand how to use calcCovarMatrix and Mahalanobis.
From what I have studied, the covariance matrix is computed across the whole image and the Mahalanobis distance is taken between 2 images, but instead I only found examples with vectors and numbers (I get that these tools can also be applied to numerical vectors, but OpenCV is supposed to be about Mat, no?). Moreover, there are plenty of posts about excessive RAM usage and errors in calcCovarMatrix, with zero comments below them.
So I'm writing my own post, and if I don't get answers I'll start writing my own calcCov and Mahalanobis and post them below.
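For anyone who reaches this point first: below is a minimal NumPy sketch of the math that `cv::calcCovarMatrix` and `cv::Mahalanobis` implement. The pixel samples, means and noise levels are invented for illustration; with real images you would reshape the pixels to an N x 3 matrix first.

```python
# Hedged sketch: Mahalanobis distance between the mean colors of a model
# and a ROI, mirroring calcCovarMatrix + Mahalanobis. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
model = rng.normal(loc=[200.0, 120.0, 40.0], scale=5.0, size=(400, 3))  # N x 3 BGR samples
roi = rng.normal(loc=[205.0, 118.0, 45.0], scale=5.0, size=(300, 3))

mu_model = model.mean(axis=0)
mu_roi = roi.mean(axis=0)

# Covariance of the model pixels (rows are samples), like
# calcCovarMatrix(..., COVAR_NORMAL | COVAR_ROWS | COVAR_SCALE).
cov = np.cov(model, rowvar=False)
icov = np.linalg.inv(cov)  # cv::Mahalanobis expects the *inverse* covariance

diff = mu_roi - mu_model
d = float(np.sqrt(diff @ icov @ diff))  # Mahalanobis distance between the means
```

The key point is that the covariance comes from the list of pixel samples, while the distance is taken between the two mean vectors.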
Eventine | Fri, 13 Dec 2019 06:01:53 -0600 | http://answers.opencv.org/question/223369/

## Covariance of estimated R and T from iterative solvePnP()
http://answers.opencv.org/question/219611/covariance-of-estimated-r-and-t-from-iterative-solvepnp/

Hi all,
I am using solvePnP with the iterative flag to estimate R and T (rvec, tvec more correctly) from a known correspondence of image corners and respective world points. I was wondering if it is possible to get the uncertainty (covariance) associated with this estimation?
I have been doing some research on this topic over the past few days and came across the function projectPoints() in OpenCV, which returns the Jacobian of the image points with respect to the intrinsic and extrinsic parameters. Can I just use this Jacobian to get an estimate of the covariance? Something like `covariance = (J.transpose() * J).inverse()`
Any help or explanation?
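For reference, the first-order approximation described above can be sketched in NumPy. The Jacobian and the pixel-noise variance `sigma2` here are invented placeholders; with cv::projectPoints you would first slice out the 6 columns corresponding to rvec and tvec.

```python
# Sketch of least-squares covariance propagation from a reprojection Jacobian:
#   Cov(rvec, tvec) ~= sigma^2 * (J^T J)^{-1}
# J is a made-up (2N x 6) Jacobian of image points wrt the 6 pose parameters.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                   # number of point correspondences
J = rng.normal(size=(2 * N, 6))          # d(image points) / d(rvec, tvec)
sigma2 = 0.5 ** 2                        # assumed pixel-noise variance (px^2)

cov_pose = sigma2 * np.linalg.inv(J.T @ J)  # 6x6 pose covariance estimate
```

This is the standard Gauss-Newton approximation; it assumes independent, identically distributed pixel noise of variance `sigma2` on the reprojection residuals.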
Thanks.

subodh | Fri, 11 Oct 2019 15:57:59 -0500 | http://answers.opencv.org/question/219611/

## Cross-covariance of 2 sample sets
http://answers.opencv.org/question/212302/cross-covariance-of-2-sample-sets/

``calcCovarMatrix`` works great for calculating the auto-covariance matrix of a given sample set.
Now, how does one calculate the cross-covariance of 2 sample sets?
Example: every row of a matrix is a sample. Samples are random vectors with possibly differing numbers of variables:

    Mat A = [[1, 2],
             [2, 3]]
    Mat B = [[3, 4, 5],
             [4, 5, 6]]
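Computing this cross-covariance block by hand takes only a few lines; here is a NumPy sketch (rows are samples), which also shows that the block matches the off-diagonal part of the covariance of the concatenated data:

```python
# Cross-covariance block of two sample sets (rows = samples, cols = variables).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 3.0]])          # 2 samples x 2 variables
B = np.array([[3.0, 4.0, 5.0],
              [4.0, 5.0, 6.0]])     # 2 samples x 3 variables

n = A.shape[0]
Ac = A - A.mean(axis=0)             # center each variable
Bc = B - B.mean(axis=0)
cross = Ac.T @ Bc / (n - 1)         # 2 x 3 cross-covariance block

# Sanity check: the same block sits inside the covariance of [A | B].
full = np.cov(np.hstack([A, B]), rowvar=False)
```

In OpenCV C++ one could do the same with `cv::reduce` for the means and a single matrix product for the centered data; there is no dedicated cross-covariance function as far as I know.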
In MATLAB one can simply do ``cov(A, B)``. How would one do it in C++ OpenCV?

Kibouo | Mon, 29 Apr 2019 13:37:23 -0500 | http://answers.opencv.org/question/212302/

## Covariance matrices for Kalman Filter
http://answers.opencv.org/question/191798/covariance-matrices-for-kalman-filter/

Hi,
While going through [this tutorial](https://docs.opencv.org/3.4/dc/d2c/tutorial_real_time_pose.html) on real-time pose estimation, I noticed that the Linear Kalman Filter implemented there uses values close to zero for the covariance matrices of the process noise, the measurement noise and the error covariance.
Can someone please explain:

1. Isn't the main diagonal of a covariance matrix supposed to be ***not close to zero***, since each diagonal entry is the variance of a single variable?
2. How exactly are the covariance values found? Are we supposed to take a bunch of measurements, feed them into some software (such as Excel) and then just use the result?
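On point 1, an illustration (not the tutorial's code): a near-zero diagonal entry just encodes a small variance, i.e. high confidence in that variable, as a scalar Kalman update shows. On point 2, the values are typically taken from a sensor datasheet or estimated as the sample variance of repeated stationary measurements, then hand-tuned.

```python
# Effect of the measurement-noise variance R in a 1-D Kalman update:
# small R -> trust the measurement, large R -> mostly ignore it.
import numpy as np

x, P = 0.0, 1.0          # prior state estimate and its error variance
z = 1.0                  # incoming measurement
R_small, R_big = 1e-4, 10.0

def kalman_update(x, P, z, R):
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P  # posterior mean and variance

x_trusting, _ = kalman_update(x, P, z, R_small)   # ends up near z
x_sceptical, _ = kalman_update(x, P, z, R_big)    # stays near the prior
```

So "close to zero" does not make the matrix degenerate; it is a modelling statement that the corresponding noise is believed to be tiny.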
Thanks!

malharjajoo | Fri, 18 May 2018 00:22:26 -0500 | http://answers.opencv.org/question/191798/

## How to compute the covariance of an inter-camera relative pose measurement?
http://answers.opencv.org/question/179919/how-to-compute-the-covariance-of-an-inter-camera-relative-pose-measurement/

If I'm doing pose estimation with a single camera using 3D-2D correspondences (e.g. the PnP algorithm), I have read that reprojecting the points can give me an estimate of the Jacobian (cv::projectPoints), which can then be used to compute an estimate of the covariance of the pose.
But if I have two cameras, and I am performing relative pose estimation between them using the fundamental/essential matrix (cv::findEssentialMat) and a subsequent decomposition of that matrix, how can I compute the covariance of the relative pose between the cameras?

saihv | Wed, 06 Dec 2017 20:23:32 -0600 | http://answers.opencv.org/question/179919/

## How to get the underlying corner detection uncertainty of findChessboardCorners
http://answers.opencv.org/question/179775/how-to-get-the-underlying-corner-detection-uncertainty-of-findchessboardcorners/

Is there a way to obtain the uncertainty (as a covariance matrix) for corner detector algorithms in OpenCV, such as the Harris corner detection used by `findChessboardCorners`? I need to take the chessboard corner detection uncertainty into account in my algorithm. I know the algorithm's covariance matrix for the corner position is given [here](https://en.m.wikipedia.org/wiki/Corner_detection#The_Harris_.26_Stephens_.2F_Plessey_.2F_Shi.E2.80.93Tomasi_corner_detection_algorithms) in terms of its precision matrix. However, can this be returned somehow by the `findChessboardCorners` function?
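As far as I know `findChessboardCorners` does not expose a per-corner covariance, but the precision matrix in the linked article suggests a common Förstner-style approximation: invert the local structure tensor around each refined corner. A NumPy sketch with synthetic gradients (the window size, noise variance and gradient samples are all made-up placeholders):

```python
# Approximate corner-position covariance from the local structure tensor:
#   Cov ~ sigma^2 * A^{-1},  A = sum over the window of [Ix, Iy]^T [Ix, Iy]
import numpy as np

rng = np.random.default_rng(2)
# Fake gradient samples (Ix, Iy) inside an 11x11 window around one corner;
# deliberately anisotropic: strong gradients along x, weak along y.
g = rng.normal(size=(121, 2)) * np.array([3.0, 1.0])

A = g.T @ g                       # 2x2 structure tensor
sigma2 = 1.0                      # assumed image-noise variance
cov_corner = sigma2 * np.linalg.inv(A)
```

The weak-gradient direction ends up with the larger variance, which matches the intuition that a corner is well localized only where the image has strong gradients in both directions.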
Thanks

opencvslave | Tue, 05 Dec 2017 08:14:25 -0600 | http://answers.opencv.org/question/179775/

## Key-Frame VO Key-Point Data Fusion
http://answers.opencv.org/question/114031/key-frame-vo-key-point-data-fusion/

I have a key-point based visual odometry routine which accepts an RGB-D frame as input. Successive images are tracked against each other and a cumulative rotation and translation is maintained. In this current form, significant drift occurs. I intend to transition this routine to key-frames, whereby new RGB-D frames are tracked against the most recent key-frame until enough displacement has occurred to necessitate a new key-frame. Key-frames should significantly reduce drift and are useful for further processing if so desired (m-frame bundle adjustment, etc.).
My question is pretty fundamental. Assume I have performed tracking (key-point matching and PnP) and have [R|t] from the current frame to the current key-frame. Now, given a key-point pair, one in the key-frame and one in the current frame, each with a 3D position and an uncertainty/covariance, how can I fuse the new data into the key-frame data? Of course, there are many papers that dance around this and take it for granted, but for someone new to this sort of thing, I am having trouble finding a source that offers a good explanation (it might even come from the radar tracking literature).
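One textbook answer, assuming the two estimates are independent Gaussians already expressed in the same (key-frame) coordinate system: fuse them by inverse-covariance (information) weighting, which is exactly a Kalman update with a direct measurement of the state. A sketch with invented numbers:

```python
# Information-form fusion of two Gaussian 3-D point estimates.
import numpy as np

# Hypothetical estimates: key-frame point and the current-frame point
# transformed into key-frame coordinates, each with a covariance.
p1 = np.array([1.00, 2.00, 5.00]); C1 = np.diag([0.04, 0.04, 0.09])
p2 = np.array([1.02, 1.98, 5.10]); C2 = np.diag([0.01, 0.01, 0.25])

I1, I2 = np.linalg.inv(C1), np.linalg.inv(C2)
C_fused = np.linalg.inv(I1 + I2)              # fused covariance (never larger)
p_fused = C_fused @ (I1 @ p1 + I2 @ p2)       # covariance-weighted mean
```

The fused covariance shrinks with every observation, which is exactly the drift-reduction effect one wants from key-frames; if the estimates are correlated (as consecutive VO frames often are), covariance intersection is the usual conservative alternative.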
Der Luftmensch | Sat, 19 Nov 2016 13:48:15 -0600 | http://answers.opencv.org/question/114031/

## Filter for covariance
http://answers.opencv.org/question/94701/filter-for-covariance/

OK, so this may seem a little odd, but bear with me.
I have an image, and I have a line (in point-slope form). What I need is a way to find the covariance above the line and the covariance below the line.
I have brute-forced this by copying all of the pixels above/below the line into a new cv::Mat one by one (while keeping track of the count and the color sum as I go), and then calculating the covariance manually.
While I am getting the results that I want, this is incredibly slow. Ideally I want to add GPU acceleration, but because cv::GpuMat doesn't allow direct element access I can't exclude the unwanted pixels from either the count or the color sum.
I believe the key is to use some kind of filter, but that concept is completely foreign to me and I am having trouble understanding what I need to do from the documentation.
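One mask-based sketch of the idea (NumPy rather than GPU code, with an invented image and line): select the pixels with a boolean mask computed from the line equation, so the means and covariance come from whole-array reductions instead of a per-pixel copy loop. The same reductions map well to GPU primitives.

```python
# Covariance of the pixels on one side of a line, without copying one by one.
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64
img = rng.integers(0, 256, size=(h, w, 3)).astype(np.float64)

# Line y = m*x + b in pixel coordinates (point-slope form solved for b)
m, b = 0.5, 10.0
ys, xs = np.mgrid[0:h, 0:w]
above = ys < m * xs + b            # boolean mask: True for pixels above the line

pix = img[above]                   # N x 3 array of the selected pixels
mu = pix.mean(axis=0)
cov_above = np.cov(pix, rowvar=False)   # 3x3 color covariance above the line
```

In OpenCV terms, the count and color sums per side can also be had from `countNonZero`/`sum` with a mask, which avoids element access entirely and so works on GpuMat too.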
Could someone please help me bring these cookies to a lower shelf.
Thanks
L.willoughby | Mon, 23 May 2016 09:59:03 -0500 | http://answers.opencv.org/question/94701/

## OpenCV4Android - calcCovarMatrix for image averages
http://answers.opencv.org/question/92657/opencv4android-calccovarmatrix-for-image-averages/

So I have two RGB averages that I want to get a Mahalanobis distance for. The Mahalanobis function requires an inverse covariance matrix. My question is: how do I create an inverse covariance matrix for averages?
The averages would just be two 1x3 vectors, but they come from a single image that's 27x27.
The image looks similar to this:
[![enter image description here][1]][1]
One of the averages estimates the average RGB values inside the circle and the other the average RGB values outside the circle. Is it as simple as creating two images, one of the inside of the circle with the background filled with my average circle values, and another of the background with the circle filled with my average background values? From what I've read the inverse covariance matrix would need to be 3x3, so I don't think that can work...
    Mat coloredImage = colorImage.clone();
    Mat threshedImage = threshImage.clone();
    Mat mask = new Mat(coloredImage.size(), CvType.CV_8UC1, Scalar.all(255));
    // Point takes (x, y) = (column, row), so use cols()/2 for x and rows()/2 for y
    Core.circle(mask, new Point(mask.cols() / 2, mask.rows() / 2), (int) mR, Scalar.all(0), -1, 8, 0);
    Bitmap gridBitmap = Bitmap.createBitmap(mask.width(), mask.height(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(mask, gridBitmap);

    double externalRed = 0, externalGreen = 0, externalBlue = 0;
    int externalCount = 0;
    double internalRed = 0, internalGreen = 0, internalBlue = 0;
    int internalCount = 0;

    // FIXME: Revisit when I have a better understanding of masks in OpenCV -
    // masks appear to only affect 1 channel
    for (int column = 0; column < mask.cols(); column++)
    {
        for (int row = 0; row < mask.rows(); row++)
        {
            double[] maskValue = mask.get(row, column);
            double[] value = coloredImage.get(row, column);
            if (maskValue[0] == 255)
            {
                externalRed += value[0];
                externalGreen += value[1];
                externalBlue += value[2];
                externalCount++;
            }
            else
            {
                internalRed += value[0];
                internalGreen += value[1];
                internalBlue += value[2];
                internalCount++;
            }
        }
    }

    int externalAvgRed = (int) (externalRed / externalCount);
    int externalAvgGreen = (int) (externalGreen / externalCount);
    int externalAvgBlue = (int) (externalBlue / externalCount);
    int internalAvgRed = (int) (internalRed / internalCount);
    int internalAvgGreen = (int) (internalGreen / internalCount); // was externalGreen (copy-paste bug)
    int internalAvgBlue = (int) (internalBlue / internalCount);   // was externalBlue (copy-paste bug)

    Mat smallColoredImage = new Mat();
    Mat mean = new Mat();
    Mat covar = new Mat();
    MatOfFloat invcovar = new MatOfFloat(3, 3);
    Imgproc.resize(coloredImage, smallColoredImage, new Size(3, 3));
    Core.calcCovarMatrix(smallColoredImage, covar, mean, Core.COVAR_NORMAL + Core.COVAR_ROWS, -1);
    Core.invert(covar, invcovar, Core.DECOMP_SVD);

    MatOfFloat mu0 = new MatOfFloat(3, 1);
    MatOfFloat mu1 = new MatOfFloat(3, 1);
    mu0.put(0, 0, externalAvgRed);
    mu0.put(1, 0, externalAvgGreen);
    mu0.put(2, 0, externalAvgBlue); // was externalAvgGreen (copy-paste bug)
    mu1.put(0, 0, internalAvgRed);
    mu1.put(1, 0, internalAvgGreen);
    mu1.put(2, 0, internalAvgBlue);
    double d2 = Core.Mahalanobis(mu1, mu0, invcovar);
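For comparison, here is what the Mahalanobis call conceptually needs, sketched in NumPy rather than OpenCV4Android (the random image and circle parameters are placeholders): a 3x3 covariance computed from the pixel colors treated as an N x 3 sample matrix, not from an image resized to 3x3. Note that, if I read the docs right, `cv::Mahalanobis` returns the square root of this quadratic form (the distance, not its square).

```python
# 3x3 inverse covariance from pixel colors as N x 3 samples, then the
# Mahalanobis distance between the inside- and outside-circle means.
import numpy as np

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(27, 27, 3)).astype(np.float64)

ys, xs = np.mgrid[0:27, 0:27]
inside = (ys - 13) ** 2 + (xs - 13) ** 2 <= 8 ** 2   # circle mask

samples = img.reshape(-1, 3)            # every pixel is one RGB sample
cov = np.cov(samples, rowvar=False)     # 3x3, like COVAR_NORMAL | COVAR_ROWS
icov = np.linalg.inv(cov)

mu_in = img[inside].mean(axis=0)
mu_out = img[~inside].mean(axis=0)
diff = mu_in - mu_out
d2 = float(diff @ icov @ diff)          # squared Mahalanobis distance
```

So the 3x3 covariance is obtainable from the one 27x27 image directly; there is no need to construct filled-in helper images.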
[1]: http://i.stack.imgur.com/gVaOS.png

Silberlicht | Wed, 13 Apr 2016 16:13:19 -0500 | http://answers.opencv.org/question/92657/

## Calculate covariance in OpenCV
http://answers.opencv.org/question/55764/calculate-covariance-in-opencv/

Dear forum followers, I have an issue which I don't know how to solve.
I have a Mat like:

    500.0 350.2
    500.5 355.8
    498.7 352.0
    ...

And I need to calculate the covariance. The result should be something like:

    0.8633 1.2167
    1.2167 8.1733
Of course, the function I need is **calcCovarMatrix**... BUT if I execute this code:

    cv::Mat a = (cv::Mat_<double>(3, 2) << 500.0, 350.2, 500.5, 355.8, 498.7, 352.0);
    cv::Mat mu, new_covs;
    cv::calcCovarMatrix(a, new_covs, mu, CV_COVAR_NORMAL | CV_COVAR_COLS);
The result is an incomprehensible 3x3 matrix...

    new_covs =
    [11220.02, 10838.03, 10987.83;
     10838.03, 10469.045, 10613.745;
     10987.83, 10613.745, 10760.445]
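The 3x3 output comes from `CV_COVAR_COLS`, which treats each *column* of the 3x2 Mat as a sample: 2 samples of 3 variables gives a 3x3 matrix. For MATLAB-style output the rows must be the samples (`CV_COVAR_NORMAL | CV_COVAR_ROWS | CV_COVAR_SCALE`), and since `COVAR_SCALE` divides by n rather than n-1, multiply by n/(n-1) to match `cov()` exactly. A NumPy check of the expected 2x2 result:

```python
# Rows-as-samples covariance with the unbiased (n-1) normalization,
# as MATLAB's cov() computes it.
import numpy as np

a = np.array([[500.0, 350.2],
              [500.5, 355.8],
              [498.7, 352.0]])

cov = np.cov(a, rowvar=False)            # 2x2, divides by n-1 like MATLAB
expected = np.array([[0.8633, 1.2167],
                     [1.2167, 8.1733]])
```

The check confirms the 2x2 values quoted above, so only the flag (and the n vs n-1 scaling) needs changing in the OpenCV call.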
Another question:
If I have a set of clusters, can I get the weights of those clusters the way Expectation Maximization does? How? (http://docs.opencv.org/modules/ml/doc/expectation_maximization.html#ml-expectation-maximization)
I hope you can help me with my problem!!
**Thank you in advance!**

RiSaMa | Fri, 20 Feb 2015 07:12:36 -0600 | http://answers.opencv.org/question/55764/