2019-11-25 04:07:24 -0500 received badge ● Nice Answer (source)
2018-12-29 18:15:28 -0500 received badge ● Nice Answer (source)
2017-12-27 08:05:54 -0500 received badge ● Nice Answer (source)
2017-12-27 07:38:10 -0500 received badge ● Nice Answer (source)
2016-03-29 04:52:04 -0500 received badge ● Nice Answer (source)
2015-11-23 08:12:59 -0500 received badge ● Guru (source)
2015-11-23 08:12:57 -0500 received badge ● Great Answer (source)

2015-11-22 04:01:22 -0500 edited answer image data processing in cv::Mat

There are great tutorials in the OpenCV documentation. Just have a look at the available tutorials for the core module, "The Core Functionality". That is the material you want to read.

2014-11-08 01:25:53 -0500 received badge ● Good Answer (source)
2014-10-29 07:48:25 -0500 received badge ● Good Answer (source)
2014-07-31 02:49:21 -0500 received badge ● Guru (source)
2014-07-31 02:49:21 -0500 received badge ● Great Answer (source)

2014-05-04 12:50:08 -0500 edited answer How to reorder matrix columns according to sortIdx() result?

One year later, but I have come across a solution that works on 2D arrays. Instead of two vectors, you can make it an Nx2 array and sort the 2nd column based on the 1st column; no index list is needed. Furthermore, if there are duplicates in the 1st column, you can then sort the duplicates based on the 2nd column, e.g.:

[7,15],[7,10],[1,2],[3,4],[7,2] => [1,2],[3,4],[7,2],[7,10],[7,15]

This can even be extended to Nx3, as done in the example below. Most of the code is actually the cout that shows the results; the sort itself is a single line of qsort plus the comparison routine (which is the engine). I made a variation for int by changing the arr definition and the qsort element size, and it works like a charm. Since I am new to this forum, I am restricted in submitting code; I hope it comes out.
#include <cstdlib>   // qsort
#include <iostream>  // cout

using namespace std;

static int compfloat(const void* p1, const void* p2) {
    const float* arr1 = (const float*)p1;
    const float* arr2 = (const float*)p2;
    // return the sign of the difference, not the difference itself:
    // truncating a float difference like 0.5 to int would yield 0 ("equal")
    if (arr1[0] != arr2[0]) return (arr1[0] < arr2[0]) ? -1 : 1;
    // only compares the 2nd column if the 1st is the same - can remove
    if (arr1[1] != arr2[1]) return (arr1[1] < arr2[1]) ? -1 : 1;
    return 0;
}

#define arraysize 5

int main() {
    // example data
    float arr[arraysize][3] = {{5,10,1}, {2,2,1}, {1,5,2}, {5,4,3}, {5,20,4}};
    // the actual sort is this single line
    qsort(arr, arraysize, sizeof(arr[0]), compfloat);
    // print the sorted rows
    for (int i = 0; i < arraysize; i++)
        cout << arr[i][0] << " " << arr[i][1] << " " << arr[i][2] << endl;
    return 0;
}

save("/var/www/html/photos/saved.txt");

And the problems should be gone.

2013-09-03 23:25:09 -0500 received badge ● Good Answer (source)
2013-08-27 16:39:16 -0500 received badge ● Civic Duty (source)
2013-08-07 02:20:32 -0500 received badge ● Good Answer (source)

2013-07-21 09:06:49 -0500 commented question Recommended HD camera

I would be interested as well.

2013-06-19 13:49:50 -0500 commented answer OpenCV and face recognition

There's nothing I can add to this. Shervin, good to see you here! +1 :-)

2013-06-09 05:10:35 -0500 commented question Confused on the status of contrib FaceRecognizer and Java binding

I'll look into it next week and see what I can do.

2013-05-26 05:55:58 -0500 commented answer How to create a face recognition algorithm?

@berak From next week on I am going to have a better internet connection, and then I'll start adding new algorithms and merge your suggestion for uniform patterns. :)

2013-05-16 15:26:36 -0500 edited answer Extract a RotatedRect area

There's a great article by Felix Abecassis on rotating and deskewing images, which also shows how to extract the data in the RotatedRect. You basically only need cv::getRotationMatrix2D to get the rotation matrix for the affine transformation done with cv::warpAffine, and cv::getRectSubPix to crop the rotated image. The relevant lines in my application are:

// rect is the RotatedRect (I got it from a contour...)
RotatedRect rect;
// matrices we'll use
Mat M, rotated, cropped;
// get angle and size from the bounding box
float angle = rect.angle;
Size rect_size = rect.size;
// thanks to http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/
if (rect.angle < -45.) {
    angle += 90.0;
    swap(rect_size.width, rect_size.height);
}
// get the rotation matrix
M = getRotationMatrix2D(rect.center, angle, 1.0);
// perform the affine transformation
warpAffine(src, rotated, M, src.size(), INTER_CUBIC);
// crop the resulting image
getRectSubPix(rotated, rect_size, rect.center, cropped);

A simple trick to rotate only the content of the RotatedRect is to first get the ROI from the bounding box of the RotatedRect by using RotatedRect::boundingRect(), and then perform the same steps as above; see the OpenCV documentation on cv::RotatedRect.

2013-05-16 02:16:34 -0500 received badge ● Nice Answer (source)

2013-05-15 15:58:51 -0500 answered a question face recognition with opencv

If you are new to OpenCV (and probably to computer vision), then tackling such a problem is optimistic, I would say. The algorithms I have added are nowhere near suited for datasets of 100,000 images. If you are going to run the Eigenfaces or Fisherfaces algorithm, you won't be able to allocate that much memory. Algorithms like Local Binary Patterns don't need to allocate that much memory, but finding the best match is going to be very time consuming, as it's a Nearest Neighbor Search over the entire dataset. Coming up with a solution that scales is far from trivial. While I can't offer source code and algorithm implementations, I think there are interesting papers available.
Among them is one by the face.com team (a company quite successful in this area): Yaniv Taigman and Lior Wolf, "Leveraging Billions of Faces to Overcome Performance Barriers in Unconstrained Face Recognition" (available online on arxiv.org). As for similarity measures, I suggest looking into algorithms like One Shot Similarity Kernels, as I think they still provide state-of-the-art results. There's a great paper by Lior Wolf, Tal Hassner and Yaniv Taigman (face.com founder/CTO): "Effective Unconstrained Face Recognition by Combining Multiple Descriptors and Learned Background Statistics", IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 33, Issue 10, October 2011 (PDF available online). You can find some MATLAB code for One Shot Similarity Kernels on the project page.

So do I think such a project is feasible if you are working alone and don't have a (very strong) background in computer vision? I know that such a project requires a lot of tough problems to be solved in order to create a robust, efficient (and useful) face recognition system; in my opinion, way too many tough problems for one person.

2013-04-29 03:22:54 -0500 received badge ● Nice Answer (source)
2013-04-24 06:15:04 -0500 received badge ● Nice Answer (source)

2013-04-13 13:25:24 -0500 commented answer Mismatch of the Eigen Vector & Value

@StevePuttemans I really want to see you going through the implementation details of an eigenvalue solver. Most of the stuff is mathematical magic to me, and I don't think I would spot the difference in the implementations. I guess looking at the data I feed into a solver should be the very first thing to check. And I tend to rely on the fact that the algorithms implemented by BLAS (and associated projects) have been used by millions of mathematicians, who would have spotted errors. I don't think the OpenCV project has invented a new solver, but rather used the BLAS implementations.
2013-04-13 13:02:23 -0500 answered a question Mismatch of the Eigen Vector & Value

I thought I'd turn this into an answer instead of using comments, as I want to avoid confusion for people coming here from Google. If you don't provide any sample data, it is hard to say what's going on. First of all, an eigenvector has nothing like a fixed "sign"; that's the whole point of eigenvectors. Multiplying an eigenvector by -1 yields a perfectly valid eigenvector again. I could throw the math at you, but probably the picture on the Wikipedia page on Eigenvalues and Eigenvectors already helps you understand it. Let's recall the eigenvalue problem:

Av = λv

Now let us multiply by a constant c, and we end up with:

A(cv) = c(Av) = c(λv) = λ(cv)

So v is an eigenvector, and cv is an eigenvector as well.

The following doesn't apply to your situation, as your matrix is symmetric; I just write it for reference. The solvers implemented in OpenCV solve the eigenvalue problem for symmetric matrices, so the given matrix has to be symmetric, or else you aren't doing an eigenvalue decomposition (the same goes for a Singular Value Decomposition). I have used a solver for the general eigenvalue problem for my Fisherfaces implementation in OpenCV, but it isn't exposed in the OpenCV API. You can find a header-only implementation in one of my GitHub repositories.

Now to the eigenvalue accuracy you refer to. In the case of an ill-conditioned matrix, small rounding errors in the computation will lead to large errors in the result. I won't throw the math at you again; you can read all of this up under "condition number", where eigenvalue problems are explicitly mentioned. Here is my practical rule of thumb. It's very easy to calculate the condition number with MATLAB or GNU Octave by simply using the cond function.
If rcond (the reciprocal condition number) yields a number close to your machine eps, your matrix is most likely ill-conditioned, and solvers are going to calculate some totally random stuff for you. You can easily determine your machine eps by typing eps in GNU Octave; I guess it is similar in MATLAB. Just as a side note: for a given matrix X, log10(cond(X)) gives you the number of decimal digits that you may lose to roundoff errors. An IEEE double-precision number has about 16 decimal digits, so if your matrix has a condition number of 10^12, you should expect only about 4 digits of your result to be accurate.

2013-04-08 23:51:25 -0500 received badge ● Nice Answer (source)