Again, making a PCA of a 1-row Mat does not make any sense, and even trying to reduce that to 0.9 retained variance leads to your error. You need at least as many rows in your PCA data as your desired feature size.

simple example:

Mat m(1,200,CV_32F);
PCA pca(m, Mat(), PCA::DATA_AS_ROW, 0); // maxComponents=0: retain all
Mat n = pca.project(m);
cout << n.size() << endl;

[1 x 1]

It would work like this:

Mat m(100,200,CV_32F);
PCA pca(m, Mat(), PCA::DATA_AS_ROW, 50); // keep the first 50 components
Mat n = pca.project(m);
cout << n.size() << endl;

[50 x 100]

So, either skip the PCA idea entirely and use only the desired 214 clusters in your BOW, or do the following (a rough sketch follows the list):

  • make a PCA from the whole SVM traindata set (a [1000 x nImages] Mat)
  • project the traindata set, so it becomes [214 x nImages]
  • train the SVM on that
  • for testing later, project any BOW feature using the same PCA (from [1000 x 1] to [214 x 1])
  • predict with the SVM
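
A minimal sketch of those steps, assuming the OpenCV 3.x ml::SVM API (traindata, labels, and feature are placeholders for your own data):

Mat traindata; // [1000 x nImages]: one 1000-dim BOW histogram per row, CV_32F
Mat labels;    // [1 x nImages]: one CV_32S class id per row

// build the PCA from the whole traindata set, keeping 214 components
PCA pca(traindata, Mat(), PCA::DATA_AS_ROW, 214);

// project the traindata, so it's [214 x nImages]
Mat reduced = pca.project(traindata);

// train the SVM on the reduced data
Ptr<ml::SVM> svm = ml::SVM::create();
svm->train(reduced, ml::ROW_SAMPLE, labels);

// later, project any [1000 x 1] BOW feature with the *same* pca, then predict
Mat feature; // the BOW descriptor of a test image
float result = svm->predict(pca.project(feature));

Note that this only works if nImages >= 214, for the same reason as above.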

But again, imho none of it makes much sense. Do some profiling, and you'll see that the major bottleneck is the feature detection and the BOW matching, not the SVM prediction (which is what you're trying to optimize here).
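
If you want to verify that yourself, a quick way to profile is cv::getTickCount() around the two stages (the commented placeholders stand for your own pipeline):

int64 t0 = getTickCount();
// ... feature detection + BOW matching for one image ...
int64 t1 = getTickCount();
// ... svm->predict(...) on the resulting descriptor ...
int64 t2 = getTickCount();
double f = getTickFrequency();
cout << "detect+bow: " << (t1 - t0) / f << " s, svm: " << (t2 - t1) / f << " s" << endl;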
