# OpenCV Q&A Forum

*RSS feed, http://answers.opencv.org/questions/ — copyright [OpenCV foundation](http://www.opencv.org), 2012-2018*

## coefficients clustering

Hi guys, my problem is not exactly OpenCV related but more statistical/image-processing related, so I hope there is no problem adding it to this forum; otherwise, mods, feel free to remove it. Let's say that I have some signals that I want to identify. I have a dataset of 16 images (let's keep it simple for now) describing 4 different use cases, each case described by 4 images (i.e. 4 use cases × 4 images per case = 16 images in total). To these 16 images I apply a kind of [non-negative matrix factorization](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) (a technique similar to PCA, SVD, etc., but constrained to non-negative values, which is easier to work with and, since the input signals in my problem are additive, makes more sense here than the other techniques). This yields two approximation matrices `W` and `H`, where `W`×`H` should give me approximately the original input images; briefly, `W` describes the basis and `H` the coefficients/weights. In my case I asked the algorithm to decompose my input dataset into 15 components, which gives me the following `H` matrix:
![image description](/upfiles/14599456802256415.png)
I then tried to find the correlation of these components with each other, since I want to go from 15 down to 4 (the number of input signals, if you remember, that I want to identify). So, I need to find, among the 15 components, the 4 strongest combinations that best describe my 4 signals. For that reason I ran a correlation method based on Pearson's linear correlation coefficient, which gives me the correlation of the input data as a measurement between 1 and -1: values close to 1 show high correlation, while values close to -1 show strong inverse correlation. The extracted correlation matrix is the following:
![image description](/upfiles/14599463751182638.png)
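For concreteness, the per-pair Pearson coefficient behind that matrix can be sketched in plain Python (a minimal sketch; the two vectors stand in for two columns of `H`):

```python
def pearson(x, y):
    # Pearson's linear correlation coefficient between two
    # equal-length vectors; result lies in [-1, 1].
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Applying this to every pair of the 15 components fills a symmetric 15×15 matrix like the one shown above.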
After removing the low correlation values, the above matrix can be transformed into:
![image description](/upfiles/14599465409780514.png)
The latter matrix can be translated into a more understandable visual output by sorting the correlation values row-wise while keeping the original indices. This leads to two new matrices, shown below:
![image description](/upfiles/145994692678416.png)
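The row-wise sort that keeps the original indices is essentially an argsort; a minimal sketch, with a made-up example row:

```python
def sort_row_with_indices(row):
    # Sort one row of the correlation matrix in descending order,
    # keeping the original (1-based) component indices alongside.
    order = sorted(range(len(row)), key=lambda j: row[j], reverse=True)
    values = [row[j] for j in order]    # sorted correlation values
    indices = [j + 1 for j in order]    # which component each value came from
    return values, indices
```

Doing this for each of the 15 rows produces the value matrix and the index matrix together.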
So, what these matrices tell me (reading row-wise) is that, for example, coefficient 3 is related to coefficients 12 and 8 (matrix C, third row) with correlation ratios 0.9736 and 0.9726 respectively (matrix B, third row), and so on. Having this, I then applied hierarchical clustering based on the [Jaccard similarity distance](https://en.wikipedia.org/wiki/Jaccard_index) and got the following clustering result:
![image description](/upfiles/14599477382993057.png)
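For reference, the Jaccard distance fed to the hierarchical clustering can be sketched like this (the index sets below are hypothetical examples):

```python
def jaccard_distance(a, b):
    # Jaccard distance = 1 - |A ∩ B| / |A ∪ B|, computed on the sets
    # of correlated-component indices for two coefficients.
    a, b = set(a), set(b)
    if not a | b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```

Identical sets give distance 0, disjoint sets give distance 1, so coefficients sharing many correlated partners end up merged early in the dendrogram.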
As you can see, it is obvious that my clusters are 8-12-3, 1-2-5-7-13, 6-14-9-11 and 4-10-15. However, there is a mistake here, because according to matrix C component 2 should also be included in cluster 6-14-9-11 (though I do not know if I am reading the dendrogram wrongly?). Verifying this by computing the residual error of the clustered components against the corresponding input images shows the same thing. So, from what I understand, my clustering approach is not that good and I need to improve it somehow.

My question is whether you have any other methods in mind that might be better for clustering my extracted components, given the type of data I have here, or any other methodology that might be helpful. If you have a look at matrix C you can see visually quite clearly which coefficients are correlated. However, there are cases where one coefficient also affects another group (coefficient 2 is such a case: it correlates with both cluster 1-5-7-13 and cluster 6-14-9-11), and I need to show this somehow and include it in my validation.
Moreover, bear in mind that the number of initial sources to identify is not known, so clustering methods where the number of clusters must be specified in advance (such as k-means) may not be appropriate for my case.
Thanks.

— theodore, Wed, 06 Apr 2016 (http://answers.opencv.org/question/92106/)

## Statistical and Central Moments

How to compute the statistical and central moments in OpenCV?
I need help.
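For what it's worth, `cv::moments` returns exactly these quantities (raw spatial moments `m00…m03` and central moments `mu20…mu03`); the underlying formulas can be sketched in plain Python (illustrative only, with `img[y][x]` holding grayscale intensities):

```python
def raw_moment(img, p, q):
    # Raw spatial moment m_pq = sum over pixels of x^p * y^q * I(x, y).
    return sum(x ** p * y ** q * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def central_moment(img, p, q):
    # Central moment mu_pq, taken about the intensity centroid,
    # which makes it invariant to translation.
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    return sum((x - cx) ** p * (y - cy) ** q * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))
```

In OpenCV itself you would just call `moments()` on an image or a contour and read the corresponding fields of the returned `Moments` struct.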
Thanks!

— diegomoreira, Wed, 12 Aug 2015 (http://answers.opencv.org/question/68468/)

## How do you classify true negatives?

I'm gathering results from my image detector algorithm. Basically, from a set of images (each 320 x 480), I run a 64 x 128 sliding window through them, at a number of predefined scales.
I understand that:
- True Positives = when my detected window overlaps (within a defined intersection size / centroid distance) with the ground truth (annotated bounding boxes)
- False Positives = when the algorithm gives me positive windows that lie outside of the ground truth.
- False Negatives = when the algorithm fails to give me a positive window even though the ground-truth annotation states that there is an object.
But what about **True Negatives**? Are these all the windows for which my classifier gives negative results? That seems odd, since I'm sliding a small window (64x128) 4 pixels at a time and use around 8 different scales in detection; counted that way, I'd have lots of true negatives per image.
Or should I prepare a set of pure negative images (no objects / humans at all), slide the window through them, and count an image with one or more positive detections as a False Positive, and an image with none as a True Negative?
Here's an example image (with green rects as the ground truth)
![Example image, not real result][1]
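For what it's worth, detections are usually matched to ground truth with an intersection-over-union test; a minimal sketch (assuming `(x, y, w, h)` boxes and a hypothetical 0.5 threshold):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two (x, y, w, h) boxes.
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def classify(detections, ground_truth, thresh=0.5):
    # Greedy matching: a detection overlapping an unmatched ground-truth
    # box with IoU >= thresh counts as TP, otherwise FP; unmatched
    # ground-truth boxes count as FN.
    matched = set()
    tp = fp = 0
    for det in detections:
        hit = None
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det, gt) >= thresh:
                hit = i
                break
        if hit is None:
            fp += 1
        else:
            matched.add(hit)
            tp += 1
    fn = len(ground_truth) - len(matched)
    return tp, fp, fn
```

Under this convention the non-matching sliding windows are simply never counted, which is why detector evaluations typically report precision/recall (from TP, FP, FN) rather than anything involving true negatives.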
[1]: http://i.stack.imgur.com/mOb2n.png

— sub_o, Mon, 29 Apr 2013 (http://answers.opencv.org/question/12620/)

## How to calculate HuMoments for a contour/image in OpenCV using C++

Hi,
I tried out the following tutorial, using OpenCV 2.4.2 with Visual Studio 2010 and C++: http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/moments/moments.html
There I can calculate **moments** for each and every contour of the image, like this:

    vector<Moments> mu( contours.size() );
    for( int i = 0; i < contours.size(); i++ )
    {
        mu[i] = moments( contours[i], false );
    }
Then I tried to calculate **Hu-Moments** in the same way. I tried various combinations, but didn't get it to work. My questions are:
1. Is it possible to calculate **Hu-Moments** for only **one contour**?
2. Or should I calculate **Hu-Moments** for the **whole image**?
Please, can anyone explain this with some examples?

— Heshan Sandeepa, Sun, 10 Mar 2013 (http://answers.opencv.org/question/8871/)
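In case it helps: `HuMoments` takes a single `Moments` object, so it can be computed per contour from the `mu[i]` values in the loop above (e.g. `double hu[7]; HuMoments(mu[i], hu);`). The Hu invariants themselves are combinations of normalized central moments; a plain-Python sketch of the first two (illustrative only, not OpenCV code, with `img[y][x]` holding intensities):

```python
def hu_first_two(img):
    # Normalized central moments eta_pq = mu_pq / m00^(1 + (p+q)/2),
    # then Hu's first two invariants:
    #   h1 = eta20 + eta02
    #   h2 = (eta20 - eta02)^2 + 4 * eta11^2
    def m(p, q):
        return sum(x ** p * y ** q * v
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    m00 = m(0, 0)
    cx, cy = m(1, 0) / m00, m(0, 1) / m00
    def mu(p, q):
        return sum((x - cx) ** p * (y - cy) ** q * v
                   for y, row in enumerate(img)
                   for x, v in enumerate(row))
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because they are built from normalized central moments, these values stay the same when the shape is translated, which is the point of using them as shape descriptors.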