OpenCV Q&A Forum - RSS feed (OpenCV answers)
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Feed date: Fri, 21 Jun 2019 18:16:51 -0500

MatrixXf eigen and Mat opencv
http://answers.opencv.org/question/35906/matrixxf-eigen-and-mat-opencv/

Hi,
What is the difference between using Eigen's MatrixXf and OpenCV's Mat?
I know that each one offers different classes and functionality.
But my main question is: if it is possible to use both structures in the same code, which of them is preferred for faster matrix computation?
Thanks in advance
Posted by Sarahsara on Sun, 29 Jun 2014 07:16:43 -0500

Mat to Eigen, eigen2cv and cv2eigen errors
http://answers.opencv.org/question/210869/mat-to-eigen-eigen2cv-and-cv2eigen-errors/

Dear guys,
I'm trying to solve an SVD using the Eigen library, since I'm trying to fix one of the biggest errors I've got so far in retrieving the fundamental matrix ([here](http://answers.opencv.org/question/209787/extracting-the-essential-matrix-from-the-fundamental-matrix/) is the link).
I'm using OpenCV 3.x and I can't compile OpenCV from source with Eigen and/or LAPACK support, so I downloaded the Eigen library and put it under `usr/local/include`. No problem so far: I can see Eigen functions and variables, so the linker and compiler are fine with that, I suppose.
So, since I'm interested in SVD and since I mostly use OpenCV for my stuff, I would like to write a function that directly computes the SVD for me. I've seen that there are a lot of possibilities to make `cv::Mat` work as `Eigen::Matrix` and vice versa. What I would like is a function like this:
int runEigenSVD(cv::InputArray _mat,
cv::OutputArray _U, cv::OutputArray _S, cv::OutputArray _V,
unsigned int QRPreconditioner,
unsigned int computationOptions)
I'm not even able to compile: with different approaches I always get different errors. I obtain the input matrix with `cv::Mat inputMat = _mat.getMat();`, so take `inputMat` as my input matrix throughout.
First try:
----------
//OpenCV-> Eigen: it works!
Eigen::Map<Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>
mat_Eigen(inputMat.ptr<double>(), inputMat.rows, inputMat.cols);
//Executing Eigen stuff, here SVD
Eigen::JacobiSVD<Eigen::MatrixXd, Eigen::FullPivHouseholderQRPreconditioner>
svd(mat_Eigen, Eigen::ComputeThinU | Eigen::ComputeThinV);
//Eigen -> OpenCV
int U_row = svd.matrixU().rows();
int U_cols = svd.matrixU().cols();
cv::Mat U_OpenCV(U_row, U_cols, CV_64FC1, svd.matrixU().data());
The errors I got are in the last line:
> invalid conversion from ‘const void*’ to ‘void*’ [-fpermissive]
> no matching function for call to ‘cv::Mat::Mat(int&, int&, int, const Scalar*)’
## Using eigen2cv and cv2eigen ##
I always get errors using both functions, even with simple code like this:
cv::Mat_<float> a = Mat_<float>::ones(2,2);
Eigen::Matrix<float,Eigen::Dynamic,Eigen::Dynamic> b;
cv::cv2eigen(a,b);
The error I get is:
> Invalid arguments ' Candidates are:
> void eigen2cv(const ? &, cv::Mat &)
> void eigen2cv(const ? &, cv::Matx<#0,int3 #1 0,int3 #2 0> &)'
Any suggestion? The documentation on this is basically nonexistent, and it's incredible that nobody else has done it so far!!
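For reference, the constructor error above comes from `svd.matrixU().data()` returning a `const double*`, which the `cv::Mat` constructor will not accept; the `cv2eigen`/`eigen2cv` converters copy the data and sidestep that. A sketch of the whole round trip (untested here; it assumes the input `Mat` is CV_64F, and `opencv2/core/eigen.hpp` is header-only, so it should work even when OpenCV itself was not built with Eigen support, as long as the Eigen headers are included first):

```cpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <opencv2/core/eigen.hpp>  // cv2eigen / eigen2cv; include Eigen headers before this

// Assumes inputMat is CV_64F. Returns U of the SVD as a cv::Mat.
cv::Mat svdU(const cv::Mat& inputMat) {
    // OpenCV -> Eigen (copies, so no storage-order worries)
    Eigen::MatrixXd mat_Eigen;
    cv::cv2eigen(inputMat, mat_Eigen);

    // SVD in Eigen
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        mat_Eigen, Eigen::ComputeThinU | Eigen::ComputeThinV);

    // Eigen -> OpenCV: eigen2cv also copies, so matrixU() being const is fine
    cv::Mat U_OpenCV;
    Eigen::MatrixXd U = svd.matrixU();
    cv::eigen2cv(U, U_OpenCV);
    return U_OpenCV;
}
```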
## EDIT ##

I'm able to map from Mat to Eigen, but I still find no way to map it back without the use of a for-loop.

Posted by HYPEREGO on Thu, 28 Mar 2019 10:31:25 -0500

Eigen compilation error
http://answers.opencv.org/question/214606/eigen-compilation-error/

I am trying to build OpenCV from source using CMake.
I have used master branch to install.
I installed Eigen via apt-get: `sudo apt-get install libeigen3-dev`.
CMake shows that Eigen 3.3.4 is installed.
But when I ran `make`, it showed the following error, saying that Eigen/Core cannot be found:
```
/opencv/modules/core/include/opencv2/core/private.hpp:66:12: fatal error: Eigen/Core: No such file or directory
#include <Eigen/Core>
compilation terminated.
```
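A likely cause (an assumption on my part, not stated above): Debian/Ubuntu's libeigen3-dev installs the headers under `/usr/include/eigen3/Eigen/...`, one directory deeper than the `<Eigen/Core>` path the compiler searches. A hedged sketch of the usual fix when configuring the build; verify the variable name against your OpenCV version's CMake cache:

```shell
# Re-run CMake pointing the build at the eigen3 subdirectory.
# EIGEN_INCLUDE_PATH is the hint variable used by OpenCV's CMake scripts of
# this era; check `cmake -LA | grep -i eigen` if it differs in your version.
cmake -D WITH_EIGEN=ON \
      -D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
      ..
```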
How can I add Eigen to the C++ library list?

Posted by dolgom on Fri, 21 Jun 2019 18:16:51 -0500

Behavior of PCA, Eigen, SVD (and other SVD)
http://answers.opencv.org/question/214258/behavior-of-pca-eigen-svd-and-other-svd/

Hi.
In a development I'm doing I need an SVD of a 2x2 matrix. This matrix is built from the gradients of the image inside a window (SUM = sum of the values over the window):

    SUM(dx*dx) , SUM(dx*dy) ;
    SUM(dx*dy) , SUM(dy*dy)

The thing is that I tried the OpenCV functions SVD and Eigen, and a specific, simple SVD method for 2x2 matrices (let's call it svd22).
I also tried PCA, with an Nx2 matrix holding the N gradients dx and dy inside the window. If I have read the OpenCV code correctly, PCA builds this same matrix, or something like it, by means of M * M', as it should.
All of them deliver the same eigenvectors (except for two entries with different sign), and one of them (PCA) delivers different eigenvalues:
e-values [2x1]
e-vectors [2x2]
The problem is:
in positions 0,1 and 1,0 of the eigenvector matrix, PCA and Eigen deliver eigenvectors with the opposite sign to SVD and the svd22 function. I mean, for example:
where PCA and Eigen give
1,2
-2,1
SVD and svd2x2 give
1,-2
2,1
The other difference is that the PCA eigenvalues are much smaller than those delivered by the other three functions.
Is this the behavior of PCA, Eigen and SVD?
What does that sign change mean?
Why does PCA deliver smaller e-values?

Posted by MikeSZ on Thu, 13 Jun 2019 11:43:06 -0500

OpenCv SolvePnp result, to Eigen Quaternion
http://answers.opencv.org/question/182862/opencv-solvepnp-result-to-eigen-quaternion/

I am running OpenCV solvePnP using 3d points that I am passing in manually, and 2d image points from the corner of an aruco marker.
My 4 x 2d and 4 x 3d points are as follows:
2d: [631.811, 359.303]
2d: [750.397, 364.894]
2d: [761.65, 448.106]
2d: [621.837, 440.352]
3d: [2.73788, -2.19532, 119.784]
3d: [-18.8274, -3.25251, 119.167]
3d: [-17.7176, -15.5556, 101.448]
3d: [3.84767, -14.4984, 102.065]
my camera matrix is:
[657.0030186215286, 0, 646.6970647943629;
0, 657.0030186215286, 347.1175117491306;
0, 0, 1]
When I run this, I get rvec and tvec:
rot: [3.542907682527767e-08;
9.321253763995684e-10;
3.141592612624269]
tran: [-2.599387317758439e-06;
7.428777431217353e-07;
-1.306287668720693e-06]
Already, this looks strange. Please see the attached image for an example of where the marker is in relation to the camera. Then I run the rvec and tvec through a function to get the camera pose in relation to the marker as an `Eigen::Quaterniond`:
    void GetCameraPoseEigen(cv::Vec3d tvecV, cv::Vec3d rvecV, Eigen::Vector3d &Translate, Eigen::Quaterniond &quats)
    {
        Mat R;
        Mat tvec, rvec;
        tvec = DoubleMatFromVec3b(tvecV);
        rvec = DoubleMatFromVec3b(rvecV);

        cv::Rodrigues(rvec, R); // R is 3x3
        R = R.t();              // rotation of inverse
        tvec = -R * tvec;       // translation of inverse

        Eigen::Matrix3d mat;
        cv2eigen(R, mat);

        Eigen::Quaterniond EigenQuat(mat);
        quats = EigenQuat;

        double x_t = tvec.at<double>(0, 0);
        double y_t = tvec.at<double>(1, 0);
        double z_t = tvec.at<double>(2, 0);

        Translate.x() = x_t * 10;
        Translate.y() = y_t * 10;
        Translate.z() = z_t * 10;
    }
The resulting quaternion is:
`1.12774e-08 2.96705e-10 1 -2.04828e-08`
Which is obviously wrong. But where is my mistake? Do my 2d - 3d correspondences look correct? Or is it my Eigen conversion?
Thank you.
![image description](/upfiles/15164543231226668.png)
EDIT:
I have adjusted the getCorners function based on the helpful comments below.
    std::vector<cv::Point3f> getCornersInWorld(double side, cv::Vec3d rvec, cv::Vec3d tvec) {
        // marker half size
        double half_side = side / 2;

        // compute rot_mat
        cv::Mat rot_mat;
        cv::Rodrigues(rvec, rot_mat);

        // transpose of rot_mat
        cv::Mat rot_mat_t = rot_mat.t();

        // points in marker space
        cv::Point3d cnr1(0, -half_side, half_side);
        cv::Point3d cnr2(0, half_side, half_side);
        cv::Point3d cnr3(0, half_side, -half_side);
        cv::Point3d cnr4(0, -half_side, -half_side);

        // to mat
        cv::Mat cnr1Mat(cnr1);
        cv::Mat cnr2Mat(cnr2);
        cv::Mat cnr3Mat(cnr3);
        cv::Mat cnr4Mat(cnr4);

        // rotate points
        cv::Mat pt_newMat1 = (-rot_mat_t * cnr1Mat);
        cv::Mat pt_newMat2 = (-rot_mat_t * cnr2Mat);
        cv::Mat pt_newMat3 = (-rot_mat_t * cnr3Mat);
        cv::Mat pt_newMat4 = (-rot_mat_t * cnr4Mat);

        // convert tvec to point
        cv::Point3d tvec_3d(tvec[0], tvec[1], tvec[2]);
        cv::Mat tvecMat(tvec_3d);

        // rotate tvec
        cv::Mat tvec_new = (-rot_mat_t * tvecMat);
        cv::Point3d p(tvec_new);

        // subtract tvec from all
        pt_newMat1 = pt_newMat1 - tvec_new;
        pt_newMat2 = pt_newMat2 - tvec_new;
        pt_newMat3 = pt_newMat3 - tvec_new;
        pt_newMat4 = pt_newMat4 - tvec_new;

        // back to points
        cv::Point3d p1(pt_newMat1);
        cv::Point3d p2(pt_newMat2);
        cv::Point3d p3(pt_newMat3);
        cv::Point3d p4(pt_newMat4);

        std::vector<cv::Point3f> retPnts;
        retPnts.push_back(p1);
        retPnts.push_back(p2);
        retPnts.push_back(p3);
        retPnts.push_back(p4);
        return retPnts;
    }
This gives me:
2d: [640.669, 489.541]
2d: [746.65, 479.212]
2d: [787.032, 547.899]
2d: [662.316, 561.966]
3d: [26.7971, 83.641, -78.1216]
3d: [28.0609, 91.2866, -57.9595]
3d: [31.636, 111.129, -65.7082]
3d: [30.3722, 103.484, -85.8702]
rot: [-0.4156437942693543;
1.047535551778523;
-1.063860719199172]
tran: [20.6613956945096;
-2.802643913661057;
239.6085645311116]
quat:
-0.187639 0.472901 -0.480271 0.71449
Which looks more sensible, but is still wrong. The resulting pose seems to be around 45 degrees off what I would expect, and on the wrong axis.

Posted by antithing on Sat, 20 Jan 2018 07:19:41 -0600

project points from known 3d points, strange issue
http://answers.opencv.org/question/181681/project-points-from-known-3d-points-strange-issue/

I have a set of known 3d world points that I am passing to `cv::projectPoints`.
I pass in the points along with the camera matrices and my camera position (converted from an Eigen Quaternion and Vector3f). The projected 2d points look good when the camera is rotated, but when it is translated they slide in the direction of camera movement. Weirdly, inverting the camera tvec matrix has no effect. I feel like I am missing something simple, but I cannot spot it!
I am doing the following:
    std::vector<cv::Point2d> projectedPoints(std::vector<cv::Point3d> objectPoints, cv::Mat rVec, cv::Mat tVec)
    {
        std::vector<cv::Point2d> imagePoints;
        cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
        return imagePoints;
    }
and passing the data as follows:
    Eigen::Quaterniond zQuatRaw(quat.w, quat.y, -quat.z, quat.x);      // camera rotation data
    Eigen::Vector3f zPosRaw(translation.ty, -translation.tz, translation.tx); // camera position data

    Eigen::Matrix3d R = zQuatRaw.toRotationMatrix();
    cv::Mat Rr(3, 3, cv::DataType<double>::type);
    cv::eigen2cv(R, Rr);

    cv::Mat rvecR(3, 1, cv::DataType<double>::type);
    cv::Rodrigues(Rr, rvecR);

    cv::Mat tVecc(3, 1, cv::DataType<double>::type);
    cv::eigen2cv(zPosRaw, tVecc);

    // repro points
    if (clickedPoints3d.size() > 0)
    {
        reprojectedUserPnts = projectedPoints(clickedPoints3d, rvecR, tVecc);
    }
Like I said above, camera rotation looks great, the projected points stick in place. Camera translation causes them to slide in the direction of camera movement.
The 3d point coordinates are in cm, as is the camera position.
What could be going wrong here?
Thank you.

Posted by antithing on Thu, 04 Jan 2018 16:05:57 -0600

Why is findHomography() so fast?
http://answers.opencv.org/question/98054/why-is-findhomography-so-fast/

Hey there people,
maybe this is a silly question, but I am quite interested in the answer and have not found it myself yet:
Why is findHomography() so fast? I mean, okay, with a good inlier/outlier ratio and RANSAC we will find a very good initial solution for a homography. But even with more than 300 features the final refinement is so fast: I get computation times of less than 10 ms total, while my estimates with Google's Ceres Solver are easily 10 times slower.
Is there smart logic behind the algorithm that reduces the computational cost of homography refinement, or is the chance high that my Ceres code is just bad?
Thanks in advance for any good hint!
Posted by laxn_pander on Thu, 07 Jul 2016 10:48:12 -0500

warpPerspective Transform Matrix type?
http://answers.opencv.org/question/94560/warpperspective-transform-matrix-type/

I am using OpenCV in C++. I am quite new to OpenCV, and I think my problem should be easy to solve... thank you for your help.
I don't understand what the data type of the transformation matrix in warpPerspective should be, or how it works.
I have a 3x3 matrix H (of type Eigen::MatrixXf), and I manually build a cv::Mat H2 equal to H, so that I can apply the transformation to an image, as follows:

    warpPerspective(src, newImg, H2, Size(5*H1, 5*W));

As I have not succeeded in building H2 correctly, I build and check it step by step, as follows:
    Mat H2(3, 3, CV_64F);
    for (int g = 0; g < 3; g++) for (int g1 = 0; g1 < 3; g1++) H2.at<float>(g, g1) = float(H(g, g1));

    cout << endl << "Matrix H2 (OpenCV) : " << endl;
    cout << H2 << endl;

    cout << endl << "Matrix H2 (OpenCV) Element by element: " << endl;
    for (int t = 0; t < 3; t++)
    {
        for (int s = 0; s < 3; s++)
        {
            cout << "Element " << t << " , " << s << endl;
            cout << H2.at<Vec3f>(t, s) << endl;
        }
        cout << endl;
    }

    cout << endl << "Matrix H (Eigen) : " << endl;
    cout << H << endl;
But when I print, what I get for H2 looks like a 3x3x3 tensor, with column 0 of H2 equal to H, as shown in the attached screenshot: [C:\fakepath\Capture du 2016-05-20 09:31:51.png](/upfiles/14637295517493309.png)
Thank you for your help
Posted by navonn on Fri, 20 May 2016 02:37:39 -0500

Eigen Sparse support?
http://answers.opencv.org/question/93715/eigen-sparse-support/

Hello,
What is the official Eigen version that OpenCV supports? Is Eigen/Sparse supported?
My [contribution](https://github.com/Itseez/opencv_contrib/pull/442) fails to build on Windows machines because the Eigen/Sparse header can't be found, while on Linux machines it builds fine.
Can someone either decide on an Eigen version or fix the build bot?
My contribution has been pending for a long time now :(
Thanks in advance,
Yuval
Posted by YuvalNirkin on Mon, 25 Apr 2016 14:41:41 -0500

error C1083 cannot open eigen/core
http://answers.opencv.org/question/92555/error-c1083-cannot-open-eigencore/

I downloaded some source code from the internet. There are no instructions for building it, except that it uses Microsoft Visual Studio and OpenCV 2.1. I found that the additional include files come from Eigen 2.0.17, so I installed it and included it in the project. However, I get the errors stated in the title. How should I use this Eigen thing in this project?

Posted by Aj-611 on Tue, 12 Apr 2016 08:02:12 -0500

Broken python bindings with createEigenFaceRecognizer
http://answers.opencv.org/question/253/broken-python-bindings-with-createeigenfacerecognizer/

When trying to use createEigenFaceRecognizer() and the subsequent train(images, lables) method, it throws an error:
    cv2.error: matrix.cpp:357: error: (-215) r == Range::all() || (0 <= r.start && r.start < r.end && r.end <= m.size[i]) in function Mat
I am using the CPP tutorial as my guide along with the ATT dataset.
Here is a copy of the script I am using to test this.
    import cv2, numpy, csv

    if __name__ == '__main__':
        inFile = csv.reader(open('att.csv', 'r'), delimiter=';')
        img_dict = {}
        images = []
        lables = []
        for row in inFile:
            image = cv2.imread(row[0])
            lable = int(row[1])
            images.append(image)
            lables.append(lable)
        test_mat = images[-1]
        test_lable = lables[-1]
        images = numpy.array(images[:-1])
        lables = numpy.array(lables[:-1])
        gallery = cv2.createEigenFaceRecognizer()
        gallery.train(images, lables)
I am currently building the opencv package from SVN and this is the build script I am using:
[https://aur.archlinux.org/packages/op/opencv-svn/PKGBUILD](https://aur.archlinux.org/packages/op/opencv-svn/PKGBUILD)
Any help would be greatly appreciated. Thank you.

Posted by btreecat on Wed, 11 Jul 2012 11:42:56 -0500

Querying and controlling OpenCV on used acceleration method
http://answers.opencv.org/question/86176/querying-and-controlling-opencv-on-used-accelaration-method/

I was doing [testing](https://www.youtube.com/playlist?list=PLDTDDd_unmOrGfE1Y5-sMcIs9k27eBaAy) of OpenCV background subtraction algorithms. After recompiling OpenCV (from GitHub, tagged as 3.1) with CUDA and Eigen added, I noticed that MOG2 became about 30% faster but KNN became 4 times slower. I also have OpenCL, IPP, TBB and SSE/AVX enabled. I usually use the one universal include file (opencv2/opencv.hpp).
Is there a way to find out what acceleration method OpenCV has used in its functions?
Or is there a way to control which acceleration method will be used?

Posted by Leonid Volnitsky on Mon, 01 Feb 2016 06:29:35 -0600

Eigenface average face in java
http://answers.opencv.org/question/62668/eigenface-average-face-in-java/

Hi, I'm doing a face recognition app with OpenCV and Java, and I use OpenCV's face recognizer createEigenfaceRecognizer().
Everything works fine, but my problem is that I want to access the eigenvectors and eigenvalues, and also the average image calculated by the recognizer from my training set, and I don't know how to access them. How can I do it? Thank you.

Posted by vildethe on Tue, 26 May 2015 19:52:24 -0500

Opencv builds on lab computer, but not laptop.
http://answers.opencv.org/question/30940/opencv-builds-on-lab-computer-but-not-laptop/
I have been struggling to install OpenCV 2.4.8 on my own laptop, but managed to install it on a computer in my school's computer lab using the same procedure:
http://docs.opencv.org/2.4/doc/tutorials/introduction/clojure_dev_intro/clojure_dev_intro.html
When I first ran make -j8 on my laptop, the terminal returned an error saying that it could not find a library called Eigen, which I then downloaded and added to that particular file's include path. Now I am having an issue where the make command finds the file but cannot resolve an identifier named 'numext'.
My professor thinks this error may appear on my computer but not on the lab computer because of a difference in CMake or Xcode versions. So far only the CMake versions differ:
                LAB              LAPTOP
    OS          macosx 10.8.5    macosx 10.8.5
    Java        1.6              1.8
    CMake       2.8-10           2.8-12
    Xcode       5.0.2 (5A3005)   5.0.2 (5A3005)
Any ideas on how I should go about troubleshooting this problem so that I can use OpenCV on my laptop?
Thanks

Posted by bool on Mon, 31 Mar 2014 13:20:40 -0500

How to get eigenvalues in opencv java?
http://answers.opencv.org/question/14584/how-to-get-eigenvalues-in-opencv-java/

Hi guys, I have this snippet of code:
    coeffs.<Double>at(0,0) = ((value1 * 1.0 / alphaMax) - 0.5) * 2 * 3 * Math.sqrt(pca.eigenvalues.<Double>at(0,0));
and I want to write it in Java. Based on what I have read, the above code in Java would become:
    // coeffs is CV_64F
    Mat pcaEigenvalue = new Mat();
    Mat pcaEigenvector = new Mat();
    Core.eigen(pca, true, pcaEigenvalue, pcaEigenvector);

    double coeffsBuff[] = new double[(int) (coeffs.total() * coeffs.channels())];
    double pcaEigenvalueBuff[] = new double[(int) (pcaEigenvalue.total() * pcaEigenvalue.channels())];

    coeffs.get(0, 0, coeffsBuff);
    coeffsBuff = ((value1 * 1.0 / alphaMax) - 0.5) * 2 * 3 * Math.sqrt(pcaEigenvalue.get(0, 0, pcaEigenvalueBuff));
I get an error because coeffsBuff requires a double[] value, while `((value1 * 1.0 / alphaMax) - 0.5) * 2 * 3 * Math.sqrt(pcaEigenvalue.get(0, 0, pcaEigenvalueBuff))` returns a double.
Can anybody tell me how to convert this code to Java?

Posted by orochi on Mon, 03 Jun 2013 11:08:13 -0500

Mismatch of the Eigen Vector & Value
http://answers.opencv.org/question/10841/mismatch-of-the-eigen-vector-value/

Hello,
I need to retrieve the eigenvectors of a matrix. The problem is that my results don't match my Matlab code.
I have a symmetric 100x100 matrix and am trying to obtain its eigenvalues and eigenvectors. I work with a double matrix (CV_64F) to have the best precision possible (I already tried float, and it fails more).
My eigenvalues seem good, but the vectors lose some accuracy for each value (e.g. values 1 to 25 exactly match Matlab, but the further you get towards 100, the more precision I lose; I can work with that).
But the bigger problem is with the eigenvectors.
The result is the same as for the eigenvalues, losing precision, but the problem is the sign.
If I take the first 25x25 results, I match Matlab exactly, but with randomly positive or negative values. So I get wrong information at the end.
Right now I'm using the cv::eigen function like this:
> cv::eigen(oResultMax,oMatValue,oMatVector);
I have already tried the SelfAdjointEigenSolver from the Eigen library:
[Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page)
[SelfAdjointSolver](http://eigen.tuxfamily.org/dox/classEigen_1_1SelfAdjointEigenSolver.html)
Does anyone have an idea, or can suggest something?
EDIT: (added images of results)
Here you can see the image of the eigenvalues. Compared to the Matlab result, the double results are almost the same up to the 27th and the float up to the 14th.
![image description](/upfiles/13660503247970353.png)
And in these images you have the eigenvectors in order: Matlab, OpenCV double and OpenCV float.
As you can see, double matches up to the 35th and float up to the 15th.
And the signs of the results differ between Matlab, double and float.
![image description](/upfiles/13660505732191637.png)
![image description](/upfiles/13660505816901852.png)
![image description](/upfiles/13660505887633373.png)
Posted by Alexandre Bizeau on Fri, 05 Apr 2013 13:33:00 -0500

Online Learning and Confidence with cv::FaceRecognizer
http://answers.opencv.org/question/1760/online-learning-and-confidence-with-cvfacerecognizer/

Platform: Ubuntu 12.04 LTS 64bit
OpenCV version: 2.4.2
I am trying to make a program that captures a face (say user 1) using a Haar cascade. I then crop the face area of the image and add it to a vector<Mat>. I then add different faces (say user 2) using the same method to the same vector, but with different labels (say 0 and 1) for the two cases.
After some kind of keyboard interrupt, I train the FaceRecognizer using train() with both vectors (Mat and int). From the next frame onwards I take the face area and try to predict the label.
The code compiles and runs fine, but the outputs are a little irritating. It always outputs the same label (on closer inspection, the label that I push_back for the very first frame) and the confidence is always 0.
What I am basically trying to achieve is a kind of online-learning FaceRecognizer. But since the predicted labels are not correct, I assume I have done something wrong. Is online learning even possible with PCA/LDA/LBPH? I have tried using the same model as well as saving and loading with another model. Below is my code. Am I doing something wrong? Any help will be much appreciated! Thanks
    using namespace cv;
    using namespace std;

    vector< Rect_<int> > faces;
    vector<cv::Mat> learnt_face;
    vector<int> learnt_label;
    bool pred = false;
    bool pos_ex = false;

    int main(int argc, char* argv[])
    {
        cv::CascadeClassifier haar_cascade;
        Ptr<cv::FaceRecognizer> model = cv::createEigenFaceRecognizer(0, 140.0);
        Ptr<cv::FaceRecognizer> model0 = cv::createEigenFaceRecognizer(0, 140.0);
        string fn_haar = string("haarcascade_frontalface_alt.xml");
        haar_cascade.load(fn_haar);
        VideoCapture cap(0);
        Mat img, gray_img, crop_face, crop_face_res;
        for (;;)
        {
            cap >> img;
            cv::cvtColor(img, gray_img, CV_RGB2GRAY);
            haar_cascade.detectMultiScale(gray_img, faces);
            Rect face_i = faces[0]; // Take only one face at a time, for debugging purposes
            // Crop and resize
            crop_face = gray_img(face_i);
            cv::resize(crop_face, crop_face_res, Size(100, 100), 1.0, 1.0, INTER_CUBIC);
            // If cropped, not predicting and learning positive images
            if ((!crop_face.empty()) && pred == false && pos_ex == true)
            {
                cout << "Learning" << endl;
                learnt_face.push_back(crop_face_res);
                learnt_label.push_back(0);
                if (learnt_face.size() >= 100)
                {
                    learnt_face.erase(learnt_face.begin());
                    learnt_label.erase(learnt_label.begin());
                }
                rectangle(img, face_i, CV_RGB(0, 255, 0), 1); // Green faces: label 0.
                model->train(learnt_face, learnt_label);
            }
            // If cropped, not predicting and learning negative images
            else if ((!crop_face.empty()) && pred == false && pos_ex == false)
            {
                cout << "Learning" << endl;
                learnt_face.push_back(crop_face_res);
                learnt_label.push_back(1);
                if (learnt_face.size() >= 100)
                {
                    learnt_face.erase(learnt_face.begin());
                    learnt_label.erase(learnt_label.begin());
                }
                rectangle(img, face_i, CV_RGB(255, 0, 0), 1); // Red faces: label 1.
                model->train(learnt_face, learnt_label);
            }
            // If cropped and predicting
            else if ((!crop_face.empty()) && pred == true)
            {
                //model->save("model.xml");
                //model0->load("model.xml");
                cout << "Predicting" << endl;
                int prediction = -1;
                double predicted_confidence = 0.0;
                cout << model->getDouble("threshold") << endl;
                model->predict(crop_face_res, prediction, predicted_confidence);
                rectangle(img, face_i, CV_RGB(0, 0, 255), 1);
                string box_text = format("Prediction = %d Confidence = %f", prediction, predicted_confidence);
                int pos_x = std::max(face_i.tl().x - 10, 0);
                int pos_y = std::max(face_i.tl().y - 10, 0);
                putText(img, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0, 255, 0), 2.0);
            }
            imshow("LEARN", img);
            char esc = cv::waitKey(33);
            if (esc == 27) break;
            if (esc == 48) pos_ex = !pos_ex;
            if (esc == 32) pred = !pred;
        }
        std::cout << learnt_face.size();
        cap.release();
    }
Posted by ranjanritesh on Fri, 24 Aug 2012 04:58:55 -0500

OpenCV 2.4.2 FaceRec_demo.cpp - Interpreting output of Predict function
http://answers.opencv.org/question/2141/opencv-242-facerec_democpp-interpreting-output-of-predict-function/

Hi All,
I'm running the OpenCV 2.4.2 sample code FaceRec_demo.cpp (using Eigenfaces) on Fedora Linux (the code is here: http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html).
I'm not able to interpret the predictedLabel and confidence values of the predict function.
I checked the output for various combinations of matching and non-matching input images. I have also gone through the OpenCV 2.4.2 documentation, but I am still not clear about how to interpret the output of the predict function.
The test results of the predict function are as follows:
1. For a matching input face -> predictedLabel = 0; confidence = 0
2. For a non-matching input face -> predictedLabel = 1; confidence = -1602920021
3. For a slightly matching face (meaning I have only 1 image in the face database matching this image): predictedLabel = 1; confidence = 1594149678.
Could you help me understand these values? I read in the documentation that the predictedLabel should be -1 for non-matching images, but I'm getting 1.
Please let me know what predictedLabel and confidence values I should get for matching, non-matching and slightly matching images.
Posted by seetaram.nt on Fri, 07 Sep 2012 04:09:53 -0500

Interpreting OpenCV FaceRecognition predicted confidence values
http://answers.opencv.org/question/1636/interpreting-opencv-facerecognition-predicted-confidence-values/

I am toying with the various FaceRecognition algorithms, and I'd like to better understand the confidence values so that I can have a sense of when to ignore a match or when I can rely on a match.
Using the ATT face database, I did test 1, where I trained on the 40 faces and then ran prediction on a known face (with an unseen image, of course). I then did a second test where I trained on 39 faces and ran prediction on an unknown face (which happens to be the same image as used in test 1).
The values I got were:

Eigenspace
- 1806 when face known
- 2618 when face unknown

Fisherface
- 372 known
- 841 unknown

LBPH
- 36 known
- 55 unknown

If I am interpreting the algorithms correctly, Eigenspace and Fisherface work in a high-dimensional space and try to find the closest neighbor for a given test image. This means the confidence value will change depending on the data set, and I cannot use a simple threshold. Is there any other information I can gather, such as the average distance between clusters, so that I can understand whether I should keep or ignore a prediction?
In regards to LBPH, is this confidence acting the same way?
Many thanks

Posted by lvicks on Mon, 20 Aug 2012 11:08:49 -0500

python FaceRecognizer questions
http://answers.opencv.org/question/1342/python-facerecognizer-questions/

I have read this:
* http://answers.opencv.org/question/936/python-face-recognition-with-opencv/#948
I still have a few questions if I may.
* If I understand correctly, facerec_demo.py just trains the recognizer? When I run it, I always get the same output, but I am at a loss to determine what input the code is using to recognize: I get predicted label = 0 and confidence = 0.00, the eigenfaces output to my folder just fine, and I get a test.png that matches s2/10.pgm from the ATT database. I'm thinking the 0 label and confidence indicate I'm doing something wrong. I read in your comments in the code that "you should always use unseen images for testing your model, but ... I am just using an image we have trained with."
* Is that the test.png image? If I were to build my own database, how would I pass the test image (what I want to recognize) in to the now-trained recognizer?
* Would a python cv2.model.save(filename) work as described on your FaceRecognizer wiki pages?
* Once I get these bits figured out, based on my reading of the other post listed above: if I build a database with, say, my pictures cropped and grayscaled, added as a new file to the ATT database, then get a webcam snapshot, normalize it, crop it, and grayscale it, could I then use (for example) KNN to compare the new picture to the database and find the closest match as a predicted output?

Posted by rkappler on Thu, 09 Aug 2012 15:10:54 -0500

createEigenFaceRecognizer no attribute error
http://answers.opencv.org/question/1284/createeigenfacerecognizer-no-attribute-error/

I'm running a newly installed copy of 2.4.2, with Python 2.7 in Ubuntu 12.04 on a 64-bit HP G62. Trying to run the facerec_demo.py file, I am getting:
    Traceback (most recent call last):
      File "facerec_demo.py", line 106, in <module>
        model = cv2.createEigenFaceRecognizer()
    AttributeError: 'module' object has no attribute 'createEigenFaceRecognizer'
Any ideas?
regards, Richard

Posted by rkappler on Wed, 08 Aug 2012 16:59:12 -0500